Compare commits

1.0.0-alph ... 1.0.0-alph (78 commits, SHA1 only):

ce4252eb1a, db708917b4, 8ddb45627d, 550c225b79, 0d46b550a8, 0693cca1a4,
0d9f9e381a, 6c7aa5a7ae, a27d935925, b4f87a4fee, ee5f94a2e2, 9c3cf554d3,
addbfa5487, 5eb461d7b7, 1ea45afcd7, dbd86f6aee, af693f7b3f, 3be5ee6445,
0acc8fe26a, ecf40eb86c, 48ce7055f8, 749f55d688, e5d17f5382, 982cc66c74,
74bf4909c8, 9c956b4445, 4c1fc9317e, a9d77a618f, 38cdc87e93, f5ff93b65e,
6ef6f188e5, ccad91a4a9, 63b79ae151, 9284f64e2a, b9bbae27de, 36e3efb5a5,
04d1c8724d, 4fb4b353f8, 564a02f344, 5b582a4234, 2e9792577f, 2066e0a03b,
a4d49a500f, a8fbced928, 99ca405279, 2e1d1018aa, c57b4be1c7, 238a016242,
2c0c7fafa3, ee4962fe31, 55895d0a10, 676897d389, 5205ff6695, 15cf3ce92b,
c0441b2412, 6267872ddb, 618779a89d, b3ec2325ed, 49a5643e76, 657395af8a,
4de62ed77e, 505f493729, be05b704b0, b33c2fa3cf, 98674c60d4, e39eb86967,
646070ae7a, 2525b66658, 58c5a633e2, aefd894fc2, 1e1d4646a2, b97845fffd,
84f5a4cb48, 2832f0e089, a3b5445824, 363e37c791, 1b0b041530, 7d5fc87002
.cursorrules (49 changes)
@@ -1,22 +1,39 @@

# RustFS Project Cursor Rules

## ⚠️ CRITICAL DEVELOPMENT RULES ⚠️
## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨

### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
### ⛔️ ABSOLUTE PROHIBITION: NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH ⛔️

- **This is the most important rule - NEVER modify code directly on main or master branch**
- **ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO EXCEPTIONS**
- **Always work on feature branches and use pull requests for all changes**
- **Any direct commits to master/main branch are strictly forbidden**
- **Pull requests are the ONLY way to merge code to main branch**
- Before starting any development, always:
  1. `git checkout main` (switch to main branch)
  2. `git pull` (get latest changes)
  3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
  4. Make your changes on the feature branch
  5. Commit and push to the feature branch
  6. **Create a pull request for review - THIS IS MANDATORY**
  7. **Wait for PR approval and merge through GitHub interface only**

**🔥 THIS IS THE MOST CRITICAL RULE - VIOLATION WILL RESULT IN IMMEDIATE REVERSAL 🔥**

- **🚫 ZERO DIRECT COMMITS TO MAIN/MASTER BRANCH - ABSOLUTELY FORBIDDEN**
- **🚫 ANY DIRECT COMMIT TO MAIN BRANCH MUST BE IMMEDIATELY REVERTED**
- **🚫 NO EXCEPTIONS FOR HOTFIXES, EMERGENCIES, OR URGENT CHANGES**
- **🚫 NO EXCEPTIONS FOR SMALL CHANGES, TYPOS, OR DOCUMENTATION UPDATES**
- **🚫 NO EXCEPTIONS FOR ANYONE - MAINTAINERS, CONTRIBUTORS, OR ADMINS**

### 📋 MANDATORY WORKFLOW - STRICTLY ENFORCED

**EVERY SINGLE CHANGE MUST FOLLOW THIS WORKFLOW:**

1. **Check current branch**: `git branch` (MUST NOT be on main/master)
2. **Switch to main**: `git checkout main`
3. **Pull latest**: `git pull origin main`
4. **Create feature branch**: `git checkout -b feat/your-feature-name`
5. **Make changes ONLY on feature branch**
6. **Test thoroughly before committing**
7. **Commit and push to feature branch**: `git push origin feat/your-feature-name`
8. **Create Pull Request**: Use `gh pr create` (MANDATORY)
9. **Wait for PR approval**: NO self-merging allowed
10. **Merge through GitHub interface**: ONLY after approval
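The ten steps above condense into the following shell session (an illustrative sketch; the branch name and commit message are placeholders):

```bash
# Verify you are not on main/master, then branch off an up-to-date main.
git branch
git checkout main
git pull origin main
git checkout -b feat/your-feature-name

# ...make and test your changes on the feature branch...

git add -A
git commit -m "feat: describe the change"
git push origin feat/your-feature-name

# The PR is mandatory; merging happens only through the GitHub interface.
gh pr create --fill
```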
### 🔒 ENFORCEMENT MECHANISMS

- **Branch protection rules**: Main branch is protected
- **Pre-commit hooks**: Will block direct commits to main
- **CI/CD checks**: All PRs must pass before merging
- **Code review requirement**: At least one approval needed
- **Automated reversal**: Direct commits to main will be automatically reverted
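The pre-commit hook mentioned above can be as small as this (a minimal sketch, assuming a plain `.git/hooks/pre-commit` shell hook; the hook the repository actually ships may differ):

```bash
#!/bin/sh
# Block commits made directly on main or master.
branch="$(git rev-parse --abbrev-ref HEAD)"
if [ "$branch" = "main" ] || [ "$branch" = "master" ]; then
  echo "ERROR: direct commits to '$branch' are forbidden; use a feature branch and a PR." >&2
  exit 1
fi
```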
## Project Overview

@@ -517,7 +534,7 @@ let results = join_all(futures).await;

### 3. Caching Strategy

- Use `lazy_static` or `OnceCell` for global caching
- Use `LazyLock` for global caching
- Implement LRU cache to avoid memory leaks

## Testing Guidelines
@@ -1,27 +0,0 @@

FROM ubuntu:22.04

ENV LANG C.UTF-8

RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list

RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y

# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
    && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3

# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
    && unzip Linux.flatc.binary.g++-13.zip \
    && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip

# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

COPY .docker/cargo.config.toml /root/.cargo/config.toml

WORKDIR /root/s3-rustfs

CMD [ "bash", "-c", "while true; do sleep 1; done" ]
@@ -1,32 +0,0 @@

FROM rockylinux:9.3 AS builder

ENV LANG C.UTF-8

RUN sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.ustc.edu.cn/rocky|g' \
    -i.bak \
    /etc/yum.repos.d/rocky-extras.repo \
    /etc/yum.repos.d/rocky.repo

RUN dnf makecache

RUN yum install wget git unzip gcc openssl-devel pkgconf-pkg-config -y

# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
    && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3

# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
    && unzip Linux.flatc.binary.g++-13.zip \
    && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc \
    && rm -rf Linux.flatc.binary.g++-13.zip

# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

COPY .docker/cargo.config.toml /root/.cargo/config.toml

WORKDIR /root/s3-rustfs
@@ -1,25 +0,0 @@

FROM ubuntu:22.04

ENV LANG C.UTF-8

RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list

RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y

# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
    && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3

# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
    && unzip Linux.flatc.binary.g++-13.zip \
    && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip

# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

COPY .docker/cargo.config.toml /root/.cargo/config.toml

WORKDIR /root/s3-rustfs
.docker/README.md (new file, 261 lines)

@@ -0,0 +1,261 @@
# RustFS Docker Images

This directory contains Docker configuration files and supporting infrastructure for building and running RustFS container images.

## 📁 Directory Structure

```
rustfs/
├── Dockerfile              # Production image (Alpine + pre-built binaries)
├── Dockerfile.source       # Development image (Debian + source build)
├── docker-buildx.sh        # Multi-architecture build script
├── Makefile                # Build automation with simplified commands
└── .docker/                # Supporting infrastructure
    ├── observability/      # Monitoring and observability configs
    ├── compose/            # Docker Compose configurations
    ├── mqtt/               # MQTT broker configs
    └── openobserve-otel/   # OpenObserve + OpenTelemetry configs
```

## 🎯 Image Variants

### Core Images

| Image | Base OS | Build Method | Size | Use Case |
|-------|---------|--------------|------|----------|
| `production` (default) | Alpine 3.18 | GitHub Releases | Smallest | Production deployment |
| `source` | Debian Bookworm | Source build | Medium | Custom builds with cross-compilation |
| `dev` | Debian Bookworm | Development tools | Large | Interactive development |

## 🚀 Usage Examples

### Quick Start (Production)

```bash
# Default production image (Alpine + GitHub Releases)
docker run -p 9000:9000 rustfs/rustfs:latest

# Specific version
docker run -p 9000:9000 rustfs/rustfs:1.2.3
```

### Complete Tag Strategy Examples

```bash
# Stable Releases
docker run rustfs/rustfs:1.2.3              # Main version (production)
docker run rustfs/rustfs:1.2.3-production   # Explicit production variant
docker run rustfs/rustfs:1.2.3-source       # Source build variant
docker run rustfs/rustfs:latest             # Latest stable

# Prerelease Versions
docker run rustfs/rustfs:1.3.0-alpha.2      # Specific alpha version
docker run rustfs/rustfs:alpha              # Latest alpha
docker run rustfs/rustfs:beta               # Latest beta
docker run rustfs/rustfs:rc                 # Latest release candidate

# Development Versions
docker run rustfs/rustfs:dev                # Latest main branch development
docker run rustfs/rustfs:dev-13e4a0b        # Specific commit
docker run rustfs/rustfs:dev-latest         # Latest development
docker run rustfs/rustfs:main-latest        # Main branch latest
```

### Development Environment

```bash
# Quick setup using Makefile (recommended)
make docker-dev-local    # Build development image locally
make dev-env-start       # Start development container

# Manual Docker commands
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs/rustfs:latest-dev

# Build from source locally
docker build -f Dockerfile.source -t rustfs:custom .

# Development with hot reload
docker-compose up rustfs-dev
```

## 🏗️ Build Arguments and Scripts

### Using Makefile Commands (Recommended)

The easiest way to build images is with the simplified Makefile commands:

```bash
# Development images (build from source)
make docker-dev-local                        # Build for local use (single arch)
make docker-dev                              # Build multi-arch (for CI/CD)
make docker-dev-push REGISTRY=xxx            # Build and push to registry

# Production images (using pre-built binaries)
make docker-buildx                           # Build multi-arch production images
make docker-buildx-push                      # Build and push production images
make docker-buildx-version VERSION=v1.0.0    # Build specific version

# Development environment
make dev-env-start      # Start development container
make dev-env-stop       # Stop development container
make dev-env-restart    # Restart development container

# Help
make help-docker        # Show all Docker-related commands
```

### Using docker-buildx.sh (Advanced)

For direct script usage and advanced scenarios:

```bash
# Build latest version for all architectures
./docker-buildx.sh

# Build and push to registry
./docker-buildx.sh --push

# Build specific version
./docker-buildx.sh --release v1.2.3

# Build and push specific version
./docker-buildx.sh --release v1.2.3 --push
```

### Manual Docker Builds

All images support dynamic version selection:

```bash
# Build production image with latest release
docker build --build-arg RELEASE="latest" -t rustfs:latest .

# Build from source with specific target
docker build -f Dockerfile.source \
    --build-arg TARGETPLATFORM="linux/amd64" \
    -t rustfs:source .

# Development build
docker build -f Dockerfile.source -t rustfs:dev .
```

## 🔧 Binary Download Sources

### Unified GitHub Releases

The production image downloads from GitHub Releases for reliability and transparency (see the sketch after this list):

- ✅ **production** → GitHub Releases API with automatic latest detection
- ✅ **Checksum verification** → SHA256SUMS validation when available
- ✅ **Multi-architecture** → Supports amd64 and arm64
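In shell terms, that download-and-verify flow looks roughly like this (an illustrative sketch; the asset name follows the `rustfs-<platform>-<arch>-latest.zip` convention used by the build workflow, and the GitHub repository path is assumed):

```bash
#!/bin/sh
# Fetch the latest release binary and verify it when SHA256SUMS is published.
ASSET="rustfs-linux-x86_64-latest.zip"   # assumed asset name
URL="https://github.com/rustfs/rustfs/releases/latest/download"   # assumed repo path

curl -fsSL -o "$ASSET" "$URL/$ASSET"
curl -fsSL -o SHA256SUMS "$URL/SHA256SUMS" || true   # checksums may be absent

# Verify only when SHA256SUMS is available, as described above.
if [ -s SHA256SUMS ]; then
  grep " $ASSET\$" SHA256SUMS | sha256sum -c -
fi

unzip -o "$ASSET"
```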
### Source Build

The source variant compiles from source code with advanced features; a shell approximation follows the list:

- 🔧 **Cross-compilation** → Supports multiple target platforms via `TARGETPLATFORM`
- ⚡ **Build caching** → sccache for faster compilation
- 🎯 **Optimized builds** → Release optimizations with LTO and symbol stripping
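Outside of Docker, the caching and optimization parts can be approximated with Cargo's environment overrides (a sketch assuming `sccache` is installed; the exact flags in `Dockerfile.source` may differ):

```bash
# sccache as the compiler wrapper, plus LTO and symbol stripping
# for the release profile, mirroring what Dockerfile.source aims for.
export RUSTC_WRAPPER=sccache
export CARGO_PROFILE_RELEASE_LTO=true
export CARGO_PROFILE_RELEASE_STRIP=symbols

cargo build --release -p rustfs --bins
```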
## 📋 Architecture Support

All variants support multi-architecture builds:

- **linux/amd64** (x86_64)
- **linux/arm64** (aarch64)

Architecture is automatically detected during build using Docker's `TARGETARCH` build argument.
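With `docker buildx`, a single invocation builds both platforms and Docker fills in `TARGETARCH` (and `TARGETPLATFORM`) for each one (an illustrative command; `docker-buildx.sh` wraps the same idea):

```bash
# One multi-arch build; Docker sets TARGETARCH=amd64/arm64 per platform.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t rustfs/rustfs:latest \
  .
```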
## 🔐 Security Features

- **Checksum Verification**: Production image verifies SHA256SUMS when available
- **Non-root User**: All images run as user `rustfs` (UID 1000)
- **Minimal Runtime**: Production image only includes necessary dependencies
- **Secure Defaults**: No hardcoded credentials or keys

## 🛠️ Development Workflow

### Quick Start with Makefile (Recommended)

```bash
# 1. Start development environment
make dev-env-start

# 2. Your development container is now running with:
#    - Port 9000 exposed for RustFS
#    - Port 9010 exposed for admin console
#    - Current directory mounted as /workspace

# 3. Stop when done
make dev-env-stop
```

### Manual Development Setup

```bash
# Build development image from source
make docker-dev-local

# Or use traditional Docker commands
docker build -f Dockerfile.source -t rustfs:dev .

# Run with development tools
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs:dev bash

# Or use docker-compose for complex setups
docker-compose up rustfs-dev
```

### Common Development Tasks

```bash
# Build and test locally
make build              # Build binary natively
make docker-dev-local   # Build development Docker image
make test               # Run tests
make fmt                # Format code
make clippy             # Run linter

# Get help
make help               # General help
make help-docker        # Docker-specific help
make help-build         # Build-specific help
```

## 🚀 CI/CD Integration

The project uses GitHub Actions for automated multi-architecture Docker builds:

### Automated Builds

- **Tags**: Automatic builds triggered on version tags (e.g., `v1.2.3`)
- **Main Branch**: Development builds with `dev-latest` and `main-latest` tags
- **Pull Requests**: Test builds without registry push

### Build Variants

Each build creates three image variants:

- `rustfs/rustfs:v1.2.3` (production - Alpine-based)
- `rustfs/rustfs:v1.2.3-source` (source build - Debian-based)
- `rustfs/rustfs:v1.2.3-dev` (development - Debian-based with tools)

### Manual Builds

Trigger custom builds via GitHub Actions:

```bash
# Use workflow_dispatch to build specific versions
# Available options: latest, main-latest, dev-latest, v1.2.3, dev-abc123
```
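With the GitHub CLI this becomes a one-liner (a sketch; the `build.yml` workflow and its `build_docker` boolean input appear later in this diff):

```bash
# Kick off a manual build; build_docker controls the follow-up image build.
gh workflow run build.yml -f build_docker=true
```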
## 📦 Supporting Infrastructure

The `.docker/` directory contains supporting configuration files:

- **observability/** - Prometheus, Grafana, OpenTelemetry configs
- **compose/** - Multi-service Docker Compose setups
- **mqtt/** - MQTT broker configurations
- **openobserve-otel/** - Log aggregation and tracing setup

See individual README files in each subdirectory for specific usage instructions.
@@ -1,19 +0,0 @@

# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[source.crates-io]
registry = "https://github.com/rust-lang/crates.io-index"

[net]
git-fetch-with-cli = true
.docker/compose/README.md (new file, 80 lines)

@@ -0,0 +1,80 @@

# Docker Compose Configurations

This directory contains specialized Docker Compose configurations for different use cases.

## 📁 Configuration Files

This directory contains specialized Docker Compose configurations and their associated Dockerfiles, keeping related files organized together.

### Main Configuration (Root Directory)

- **`../../docker-compose.yml`** - **Default Production Setup**
  - Complete production-ready configuration
  - Includes RustFS server + full observability stack
  - Supports multiple profiles: `dev`, `observability`, `cache`, `proxy`
  - Recommended for most users

### Specialized Configurations

- **`docker-compose.cluster.yaml`** - **Distributed Testing**
  - 4-node cluster setup for testing distributed storage
  - Uses locally compiled binaries
  - Simulates a multi-node environment
  - Ideal for development and cluster testing

- **`docker-compose.observability.yaml`** - **Observability Focus**
  - Specialized setup for testing observability features
  - Includes OpenTelemetry, Jaeger, Prometheus, Loki, Grafana
  - Uses `../../Dockerfile.source` for builds
  - Perfect for observability development

## 🚀 Usage Examples

### Production Setup

```bash
# Start the main service
docker-compose up -d

# Start with the development profile
docker-compose --profile dev up -d

# Start with full observability
docker-compose --profile observability up -d
```

### Cluster Testing

```bash
# From the project root, switch into the compose directory first
cd .docker/compose
docker-compose -f docker-compose.cluster.yaml up -d

# Or run directly from the project root
docker-compose -f .docker/compose/docker-compose.cluster.yaml up -d
```

### Observability Testing

```bash
# From the project root, switch into the compose directory first
cd .docker/compose
docker-compose -f docker-compose.observability.yaml up -d

# Or run directly from the project root
docker-compose -f .docker/compose/docker-compose.observability.yaml up -d
```

## 🔧 Configuration Overview

| Configuration | Nodes | Storage | Observability | Use Case |
|---------------|-------|---------|---------------|----------|
| **Main** | 1 | Volume mounts | Full stack | Production |
| **Cluster** | 4 | HTTP endpoints | Basic | Testing |
| **Observability** | 4 | Local data | Advanced | Development |

## 📝 Notes

- Always build the required binaries before starting cluster tests (see the build command below)
- The main configuration is sufficient for most use cases
- Specialized configurations are for specific testing scenarios
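The cluster compose file mounts `target/x86_64-unknown-linux-musl/release/rustfs` from the repository into each node, so that binary has to exist first (a sketch; assumes rustup manages the toolchain):

```bash
# Build the static musl binary that docker-compose.cluster.yaml mounts
# into each node at /app/rustfs.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl -p rustfs --bins
```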
@@ -14,70 +14,69 @@

services:
  node0:
    image: rustfs:v1              # Replace with your image name and tag
    image: rustfs/rustfs:latest   # Replace with your image name and tag
    container_name: node0
    hostname: node0
    environment:
      - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
      - RUSTFS_ACCESS_KEY=rustfsadmin
      - RUSTFS_SECRET_KEY=rustfsadmin
    platform: linux/amd64
    ports:
      - "9000:9000"   # Map host port 9000 to container port 9000
      - "8000:9001"   # Map host port 8000 to container port 9001
    volumes:
      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
      # - ./data/node0:/data   # Mount the current path to /root/data inside the container
      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
    command: "/app/rustfs"

  node1:
    image: rustfs:v1
    image: rustfs/rustfs:latest
    container_name: node1
    hostname: node1
    environment:
      - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
      - RUSTFS_ACCESS_KEY=rustfsadmin
      - RUSTFS_SECRET_KEY=rustfsadmin
    platform: linux/amd64
    ports:
      - "9001:9000"   # Map host port 9001 to container port 9000
    volumes:
      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
      # - ./data/node1:/data
      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
    command: "/app/rustfs"

  node2:
    image: rustfs:v1
    image: rustfs/rustfs:latest
    container_name: node2
    hostname: node2
    environment:
      - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
      - RUSTFS_ACCESS_KEY=rustfsadmin
      - RUSTFS_SECRET_KEY=rustfsadmin
    platform: linux/amd64
    ports:
      - "9002:9000"   # Map host port 9002 to container port 9000
    volumes:
      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
      # - ./data/node2:/data
      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
    command: "/app/rustfs"

  node3:
    image: rustfs:v1
    image: rustfs/rustfs:latest
    container_name: node3
    hostname: node3
    environment:
      - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
      - RUSTFS_ACCESS_KEY=rustfsadmin
      - RUSTFS_SECRET_KEY=rustfsadmin
    platform: linux/amd64
    ports:
      - "9003:9000"   # Map host port 9003 to container port 9000
    volumes:
      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
      # - ./data/node3:/data
      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
    command: "/app/rustfs"
@@ -14,11 +14,11 @@

services:
  otel-collector:
    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
    image: otel/opentelemetry-collector-contrib:0.129.1
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
      - ../../.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - 1888:1888
      - 8888:8888

@@ -30,7 +30,7 @@ services:
    networks:
      - rustfs-network
  jaeger:
    image: jaegertracing/jaeger:2.6.0
    image: jaegertracing/jaeger:2.8.0
    environment:
      - TZ=Asia/Shanghai
    ports:

@@ -40,11 +40,11 @@ services:
    networks:
      - rustfs-network
  prometheus:
    image: prom/prometheus:v3.4.1
    image: prom/prometheus:v3.4.2
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
      - ../../.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    networks:

@@ -54,16 +54,16 @@ services:
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
      - ../../.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - rustfs-network
  grafana:
    image: grafana/grafana:12.0.1
    image: grafana/grafana:12.0.2
    ports:
      - "3000:3000"   # Web UI
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - TZ=Asia/Shanghai

@@ -72,85 +72,69 @@ services:

  node1:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node1
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9001:9000"   # Map host port 9001 to container port 9000
      - "9101:9002"
    volumes:
      # - ./data:/root/data   # Mount the current path to /root/data inside the container
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node2:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node2
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9002:9000"   # Map host port 9002 to container port 9000
      - "9102:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node3:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node3
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9003:9000"   # Map host port 9003 to container port 9000
      - "9103:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node4:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node4
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9004:9000"   # Map host port 9004 to container port 9000
      - "9104:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network
@@ -13,24 +13,40 @@
# limitations under the License.

services:

  tempo:
    image: grafana/tempo:latest
    #user: root   # The container must be started as root to execute chown in the script
    #entrypoint: [ "/etc/tempo/entrypoint.sh" ]   # Specify a custom entrypoint
    command: [ "-config.file=/etc/tempo.yaml" ]   # Passed as arguments to the entrypoint script
    volumes:
      - ./tempo-entrypoint.sh:/etc/tempo/entrypoint.sh   # Mount the entrypoint script
      - ./tempo.yaml:/etc/tempo.yaml
      - ./tempo-data:/var/tempo
    ports:
      - "3200:3200"    # tempo
      - "24317:4317"   # otlp grpc
    networks:
      - otel-network

  otel-collector:
    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
    image: otel/opentelemetry-collector-contrib:0.129.1
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - 1888:1888
      - 8888:8888
      - 8889:8889
      - 13133:13133
      - 4317:4317
      - 4318:4318
      - 55679:55679
      - "1888:1888"
      - "8888:8888"
      - "8889:8889"
      - "13133:13133"
      - "4317:4317"
      - "4318:4318"
      - "55679:55679"
    networks:
      - otel-network
  jaeger:
    image: jaegertracing/jaeger:2.7.0
    image: jaegertracing/jaeger:2.8.0
    environment:
      - TZ=Asia/Shanghai
    ports:

@@ -40,7 +56,7 @@ services:
    networks:
      - otel-network
  prometheus:
    image: prom/prometheus:v3.4.1
    image: prom/prometheus:v3.4.2
    environment:
      - TZ=Asia/Shanghai
    volumes:

@@ -64,6 +80,8 @@ services:
    image: grafana/grafana:12.0.2
    ports:
      - "3000:3000"   # Web UI
    volumes:
      - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - TZ=Asia/Shanghai
.docker/observability/grafana-datasources.yaml (new file, 32 lines)

@@ -0,0 +1,32 @@

apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    access: proxy
    orgId: 1
    url: http://prometheus:9090
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    jsonData:
      httpMethod: GET
  - name: Tempo
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo:3200
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
    apiVersion: 1
    uid: tempo
    jsonData:
      httpMethod: GET
      serviceMap:
        datasourceUid: prometheus
      streamingEnabled:
        search: true
@@ -33,6 +33,10 @@ exporters:
    endpoint: "jaeger:4317"     # Jaeger's OTLP gRPC endpoint
    tls:
      insecure: true            # TLS disabled for development; configure certificates in production
  otlp/tempo:                   # OTLP exporter for trace data
    endpoint: "tempo:4317"      # Tempo's OTLP gRPC endpoint
    tls:
      insecure: true            # TLS disabled for development; configure certificates in production
  prometheus:                   # Prometheus exporter for metrics
    endpoint: "0.0.0.0:8889"    # Prometheus scrape endpoint
    namespace: "rustfs"         # metric name prefix

@@ -53,7 +57,7 @@ service:
    traces:
      receivers: [ otlp ]
      processors: [ memory_limiter,batch ]
      exporters: [ otlp/traces ]
      exporters: [ otlp/traces,otlp/tempo ]
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]

@@ -66,6 +70,12 @@ service:
    logs:
      level: "info"             # Collector log level
    metrics:
      address: "0.0.0.0:8888"   # where the Collector exposes its own metrics
      level: "detailed"         # can be basic, normal, detailed
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://otel-collector:4318
@@ -18,8 +18,11 @@ global:
scrape_configs:
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:8888']     # Scrape metrics from the Collector
      - targets: [ 'otel-collector:8888' ]   # Scrape metrics from the Collector
  - job_name: 'otel-metrics'
    static_configs:
      - targets: ['otel-collector:8889']     # application metrics
      - targets: [ 'otel-collector:8889' ]   # application metrics
  - job_name: 'tempo'
    static_configs:
      - targets: [ 'tempo:3200' ]
.docker/observability/tempo-data/.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
*
.docker/observability/tempo-entrypoint.sh (new executable file, 8 lines)

@@ -0,0 +1,8 @@

#!/bin/sh
# Run as root to fix directory permissions
chown -R 10001:10001 /var/tempo

# Use su-exec (a lightweight sudo/gosu alternative, common in Alpine images)
# to switch to user 10001 and exec the original command (CMD) passed to the script.
# "$@" is all arguments passed to this script, i.e. the command from docker-compose.
exec su-exec 10001:10001 /tempo "$@"
.docker/observability/tempo.yaml (new file, 55 lines)

@@ -0,0 +1,55 @@

stream_over_http_enabled: true
server:
  http_listen_port: 3200
  log_level: info

query_frontend:
  search:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
    metadata_slo:
      duration_slo: 5s
      throughput_bytes_slo: 1.073741824e+09
  trace_by_id:
    duration_slo: 5s

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "tempo:4317"

ingester:
  max_block_duration: 5m   # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h    # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces

storage:
  trace:
    backend: local           # backend configuration to use
    wal:
      path: /var/tempo/wal   # where to store the wal locally
    local:
      path: /var/tempo/blocks

overrides:
  defaults:
    metrics_generator:
      processors: [ service-graphs, span-metrics, local-blocks ]   # enables metrics generator
      generate_native_histograms: both
.github/actions/setup/action.yml (vendored, 15 changes)

@@ -60,15 +60,7 @@ runs:
          pkg-config \
          libssl-dev

    - name: Cache protoc binary
      id: cache-protoc
      uses: actions/cache@v4
      with:
        path: ~/.local/bin/protoc
        key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}

    - name: Install protoc
      if: steps.cache-protoc.outputs.cache-hit != 'true'
      uses: arduino/setup-protoc@v3
      with:
        version: "31.1"

@@ -94,6 +86,9 @@ runs:
      if: inputs.install-cross-tools == 'true'
      uses: taiki-e/install-action@cargo-zigbuild

    - name: Install cargo-nextest
      uses: taiki-e/install-action@cargo-nextest

    - name: Setup Rust cache
      uses: Swatinem/rust-cache@v2
      with:

@@ -101,7 +96,3 @@ runs:
        cache-on-failure: true
        shared-key: ${{ inputs.cache-shared-key }}
        save-if: ${{ inputs.cache-save-if }}
        # Cache workspace dependencies
        workspaces: |
          . -> target
          cli/rustfs-gui -> cli/rustfs-gui/target
640
.github/workflows/build.yml
vendored
@@ -12,11 +12,23 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# Build and Release Workflow
|
||||
#
|
||||
# This workflow builds RustFS binaries and automatically triggers Docker image builds.
|
||||
#
|
||||
# Flow:
|
||||
# 1. Build binaries for multiple platforms
|
||||
# 2. Upload binaries to OSS storage
|
||||
# 3. Trigger docker.yml to build and push images using the uploaded binaries
|
||||
#
|
||||
# Manual Parameters:
|
||||
# - build_docker: Build and push Docker images (default: true)
|
||||
|
||||
name: Build and Release
|
||||
|
||||
on:
|
||||
push:
|
||||
tags: ["*"]
|
||||
tags: ["*.*.*"]
|
||||
branches: [main]
|
||||
paths-ignore:
|
||||
- "**.md"
|
||||
@@ -52,10 +64,10 @@ on:
|
||||
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
force_build:
|
||||
description: "Force build even without changes"
|
||||
build_docker:
|
||||
description: "Build and push Docker images after binary build"
|
||||
required: false
|
||||
default: false
|
||||
default: true
|
||||
type: boolean
|
||||
|
||||
env:
|
||||
@@ -65,39 +77,78 @@ env:
|
||||
CARGO_INCREMENTAL: 0
|
||||
|
||||
jobs:
|
||||
# Second layer: Business logic level checks (handling build strategy)
|
||||
# Build strategy check - determine build type based on trigger
|
||||
build-check:
|
||||
name: Build Strategy Check
|
||||
runs-on: ubuntu-latest
|
||||
outputs:
|
||||
should_build: ${{ steps.check.outputs.should_build }}
|
||||
build_type: ${{ steps.check.outputs.build_type }}
|
||||
version: ${{ steps.check.outputs.version }}
|
||||
short_sha: ${{ steps.check.outputs.short_sha }}
|
||||
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Determine build strategy
|
||||
id: check
|
||||
run: |
|
||||
should_build=false
|
||||
build_type="none"
|
||||
version=""
|
||||
short_sha=""
|
||||
is_prerelease=false
|
||||
|
||||
# Business logic: when we need to build
|
||||
if [[ "${{ github.event_name }}" == "schedule" ]] || \
|
||||
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
|
||||
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
|
||||
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
|
||||
# Get short SHA for all builds
|
||||
short_sha=$(git rev-parse --short HEAD)
|
||||
|
||||
# Determine build type based on trigger
|
||||
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
|
||||
# Tag push - release or prerelease
|
||||
should_build=true
|
||||
tag_name="${GITHUB_REF#refs/tags/}"
|
||||
version="${tag_name}"
|
||||
|
||||
# Check if this is a prerelease
|
||||
if [[ "$tag_name" == *"alpha"* ]] || [[ "$tag_name" == *"beta"* ]] || [[ "$tag_name" == *"rc"* ]]; then
|
||||
build_type="prerelease"
|
||||
is_prerelease=true
|
||||
echo "🚀 Prerelease build detected: $tag_name"
|
||||
else
|
||||
build_type="release"
|
||||
echo "📦 Release build detected: $tag_name"
|
||||
fi
|
||||
elif [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
|
||||
# Main branch push - development build
|
||||
should_build=true
|
||||
build_type="development"
|
||||
fi
|
||||
|
||||
# Always build for tag pushes (version releases)
|
||||
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
|
||||
version="dev-${short_sha}"
|
||||
echo "🛠️ Development build detected"
|
||||
elif [[ "${{ github.event_name }}" == "schedule" ]] || \
|
||||
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
|
||||
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
|
||||
# Scheduled or manual build
|
||||
should_build=true
|
||||
build_type="release"
|
||||
echo "🏷️ Tag detected: forcing release build"
|
||||
build_type="development"
|
||||
version="dev-${short_sha}"
|
||||
echo "⚡ Manual/scheduled build detected"
|
||||
fi
|
||||
|
||||
echo "should_build=$should_build" >> $GITHUB_OUTPUT
|
||||
echo "build_type=$build_type" >> $GITHUB_OUTPUT
|
||||
echo "Build needed: $should_build (type: $build_type)"
|
||||
echo "version=$version" >> $GITHUB_OUTPUT
|
||||
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
|
||||
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "📊 Build Summary:"
|
||||
echo " - Should build: $should_build"
|
||||
echo " - Build type: $build_type"
|
||||
echo " - Version: $version"
|
||||
echo " - Short SHA: $short_sha"
|
||||
echo " - Is prerelease: $is_prerelease"
|
||||
|
||||
# Build RustFS binaries
|
||||
build-rustfs:
|
||||
@@ -168,6 +219,7 @@ jobs:
|
||||
echo "// Static assets not available" > ./rustfs/static/empty.txt
|
||||
fi
|
||||
else
|
||||
chmod +w ./rustfs/static/LICENSE || true
|
||||
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
|
||||
-o console.zip --retry 3 --retry-delay 5 --max-time 300
|
||||
if [[ $? -eq 0 ]]; then
|
||||
@@ -190,7 +242,7 @@ jobs:
|
||||
cargo install cross --git https://github.com/cross-rs/cross
|
||||
cross build --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
else
|
||||
# Use zigbuild for Linux ARM64
|
||||
# Use zigbuild for other cross-compilation
|
||||
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
fi
|
||||
else
|
||||
@@ -201,7 +253,38 @@ jobs:
|
||||
id: package
|
||||
shell: bash
|
||||
run: |
|
||||
PACKAGE_NAME="rustfs-${{ matrix.target }}"
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
|
||||
|
||||
# Extract platform and arch from target
|
||||
TARGET="${{ matrix.target }}"
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
|
||||
# Map target to architecture
|
||||
case "$TARGET" in
|
||||
*x86_64*)
|
||||
ARCH="x86_64"
|
||||
;;
|
||||
*aarch64*|*arm64*)
|
||||
ARCH="aarch64"
|
||||
;;
|
||||
*armv7*)
|
||||
ARCH="armv7"
|
||||
;;
|
||||
*)
|
||||
ARCH="unknown"
|
||||
;;
|
||||
esac
|
||||
|
||||
# Generate package name based on build type
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
# Development build: rustfs-${platform}-${arch}-dev-${short_sha}.zip
|
||||
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-dev-${SHORT_SHA}"
|
||||
else
|
||||
# Release/Prerelease build: rustfs-${platform}-${arch}-v${version}.zip
|
||||
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${VERSION}"
|
||||
fi
|
||||
|
||||
# Create zip packages for all platforms
|
||||
# Ensure zip is available
|
||||
@@ -214,9 +297,15 @@ jobs:
|
||||
cd target/${{ matrix.target }}/release
|
||||
zip "../../../${PACKAGE_NAME}.zip" rustfs
|
||||
cd ../../..
|
||||
|
||||
echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
|
||||
echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
|
||||
echo "Package created: ${PACKAGE_NAME}.zip"
|
||||
echo "build_type=${BUILD_TYPE}" >> $GITHUB_OUTPUT
|
||||
echo "version=${VERSION}" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "📦 Package created: ${PACKAGE_NAME}.zip"
|
||||
echo "🔧 Build type: ${BUILD_TYPE}"
|
||||
echo "📊 Version: ${VERSION}"
|
||||
|
||||
- name: Upload artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
@@ -226,13 +315,15 @@ jobs:
|
||||
retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
|
||||
|
||||
- name: Upload to Aliyun OSS
|
||||
if: needs.build-check.outputs.build_type == 'release' && env.OSS_ACCESS_KEY_ID != ''
|
||||
if: env.OSS_ACCESS_KEY_ID != '' && (needs.build-check.outputs.build_type == 'release' || needs.build-check.outputs.build_type == 'prerelease' || needs.build-check.outputs.build_type == 'development')
|
||||
env:
|
||||
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
|
||||
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
|
||||
OSS_REGION: cn-beijing
|
||||
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
|
||||
run: |
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
|
||||
# Install ossutil (platform-specific)
|
||||
OSSUTIL_VERSION="2.1.1"
|
||||
case "${{ matrix.platform }}" in
|
||||
@@ -270,221 +361,392 @@ jobs:
|
||||
;;
|
||||
esac
|
||||
|
||||
# Upload the package file directly to OSS
|
||||
echo "Uploading ${{ steps.package.outputs.package_file }} to OSS..."
|
||||
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" oss://rustfs-artifacts/artifacts/rustfs/ --force
|
||||
|
||||
# Create latest.json (only for the first Linux build to avoid duplication)
|
||||
if [[ "${{ matrix.target }}" == "x86_64-unknown-linux-musl" ]]; then
|
||||
VERSION="${GITHUB_REF#refs/tags/v}"
|
||||
echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
|
||||
$OSSUTIL_BIN cp latest.json oss://rustfs-version/latest.json --force
|
||||
# Determine upload path based on build type
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/dev/"
|
||||
echo "📤 Uploading development build to OSS dev directory"
|
||||
else
|
||||
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/release/"
|
||||
echo "📤 Uploading release build to OSS release directory"
|
||||
fi
|
||||
|
||||
# Release management
|
||||
release:
|
||||
name: GitHub Release
|
||||
# Upload the package file to OSS
|
||||
echo "Uploading ${{ steps.package.outputs.package_file }} to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" "$OSS_PATH" --force
|
||||
|
||||
# For release and prerelease builds, also create a latest version
|
||||
if [[ "$BUILD_TYPE" == "release" ]] || [[ "$BUILD_TYPE" == "prerelease" ]]; then
|
||||
# Extract platform and arch from package name
|
||||
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
|
||||
|
||||
# Create latest version filename
|
||||
# Convert from rustfs-linux-x86_64-v1.0.0 to rustfs-linux-x86_64-latest
|
||||
LATEST_FILE="${PACKAGE_NAME%-v*}-latest.zip"
|
||||
|
||||
# Copy the original file to latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$LATEST_FILE"
|
||||
|
||||
# Upload the latest version
|
||||
echo "Uploading latest version: $LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Latest version uploaded: $LATEST_FILE"
|
||||
fi
|
||||
|
||||
# For development builds, create dev-latest version
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
# Extract platform and arch from package name
|
||||
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
|
||||
|
||||
# Create dev-latest version filename
|
||||
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-dev-latest
|
||||
DEV_LATEST_FILE="${PACKAGE_NAME%-*}-latest.zip"
|
||||
|
||||
# Copy the original file to dev-latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$DEV_LATEST_FILE"
|
||||
|
||||
# Upload the dev-latest version
|
||||
echo "Uploading dev-latest version: $DEV_LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$DEV_LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Dev-latest version uploaded: $DEV_LATEST_FILE"
|
||||
|
||||
# For main branch builds, also create a main-latest version
|
||||
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
|
||||
# Create main-latest version filename
|
||||
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-main-latest
|
||||
MAIN_LATEST_FILE="${PACKAGE_NAME%-dev-*}-main-latest.zip"
|
||||
|
||||
# Copy the original file to main-latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$MAIN_LATEST_FILE"
|
||||
|
||||
# Upload the main-latest version
|
||||
echo "Uploading main-latest version: $MAIN_LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$MAIN_LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Main-latest version uploaded: $MAIN_LATEST_FILE"
|
||||
|
||||
# Also create a generic main-latest for Docker builds
|
||||
if [[ "${{ matrix.platform }}" == "linux" ]]; then
|
||||
DOCKER_MAIN_LATEST_FILE="rustfs-linux-${{ matrix.target == 'x86_64-unknown-linux-musl' && 'x86_64' || 'aarch64' }}-main-latest.zip"
|
||||
|
||||
cp "${{ steps.package.outputs.package_file }}" "$DOCKER_MAIN_LATEST_FILE"
|
||||
$OSSUTIL_BIN cp "$DOCKER_MAIN_LATEST_FILE" "$OSS_PATH" --force
|
||||
echo "✅ Docker main-latest version uploaded: $DOCKER_MAIN_LATEST_FILE"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "✅ Upload completed successfully"
|
||||
|
||||
# Build summary
|
||||
build-summary:
|
||||
name: Build Summary
|
||||
needs: [build-check, build-rustfs]
|
||||
if: always() && needs.build-check.outputs.build_type == 'release'
|
||||
if: always() && needs.build-check.outputs.should_build == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Build completion summary
|
||||
run: |
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
|
||||
echo "🎉 Build completed successfully!"
|
||||
echo "📦 Build type: $BUILD_TYPE"
|
||||
echo "🔢 Version: $VERSION"
|
||||
echo ""
|
||||
|
||||
# Check build status
|
||||
BUILD_STATUS="${{ needs.build-rustfs.result }}"
|
||||
|
||||
echo "📊 Build Results:"
|
||||
echo " 📦 All platforms: $BUILD_STATUS"
|
||||
echo ""
|
||||
|
||||
case "$BUILD_TYPE" in
|
||||
"development")
|
||||
echo "🛠️ Development build artifacts have been uploaded to OSS dev directory"
|
||||
echo "⚠️ This is a development build - not suitable for production use"
|
||||
;;
|
||||
"release")
|
||||
echo "🚀 Release build artifacts have been uploaded to OSS release directory"
|
||||
echo "✅ This build is ready for production use"
|
||||
echo "🏷️ GitHub Release will be created in this workflow"
|
||||
;;
|
||||
"prerelease")
|
||||
echo "🧪 Prerelease build artifacts have been uploaded to OSS release directory"
|
||||
echo "⚠️ This is a prerelease build - use with caution"
|
||||
echo "🏷️ GitHub Release will be created in this workflow"
|
||||
;;
|
||||
esac
|
||||
|
||||
echo ""
|
||||
echo "🐳 Docker Images:"
|
||||
if [[ "${{ github.event.inputs.build_docker }}" == "false" ]]; then
|
||||
echo "⏭️ Docker image build was skipped (binary only build)"
|
||||
elif [[ "$BUILD_STATUS" == "success" ]]; then
|
||||
echo "🔄 Docker images will be built and pushed automatically via workflow_run event"
|
||||
else
|
||||
echo "❌ Docker image build will be skipped due to build failure"
|
||||
fi
|
||||
|
||||
# Create GitHub Release (only for tag pushes)
|
||||
create-release:
|
||||
name: Create GitHub Release
|
||||
needs: [build-check, build-rustfs]
|
||||
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
outputs:
|
||||
release_id: ${{ steps.create.outputs.release_id }}
|
||||
release_url: ${{ steps.create.outputs.release_url }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Download all artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
path: ./release-artifacts
|
||||
|
||||
- name: Prepare release assets
|
||||
id: release_prep
|
||||
run: |
|
||||
VERSION="${GITHUB_REF#refs/tags/}"
|
||||
VERSION_CLEAN="${VERSION#v}"
|
||||
|
||||
echo "version=${VERSION}" >> $GITHUB_OUTPUT
|
||||
echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT
|
||||
|
||||
# Organize artifacts
|
||||
mkdir -p ./release-files
|
||||
|
||||
# Copy all artifacts (.zip files)
|
||||
find ./release-artifacts -name "*.zip" -exec cp {} ./release-files/ \;
|
||||
|
||||
# Generate checksums for all files
|
||||
cd ./release-files
|
||||
if ls *.zip >/dev/null 2>&1; then
|
||||
sha256sum *.zip >> SHA256SUMS
|
||||
sha512sum *.zip >> SHA512SUMS
|
||||
fi
|
||||
cd ..
|
||||
|
||||
# Display what we're releasing
|
||||
echo "=== Release Files ==="
|
||||
ls -la ./release-files/
|
||||
|
||||
- name: Create GitHub Release
|
||||
id: create
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
VERSION="${{ steps.release_prep.outputs.version }}"
|
||||
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
|
||||
TAG="${{ needs.build-check.outputs.version }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
|
||||
# Determine release type for title
|
||||
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
|
||||
if [[ "$TAG" == *"alpha"* ]]; then
|
||||
RELEASE_TYPE="alpha"
|
||||
elif [[ "$TAG" == *"beta"* ]]; then
|
||||
RELEASE_TYPE="beta"
|
||||
elif [[ "$TAG" == *"rc"* ]]; then
|
||||
RELEASE_TYPE="rc"
|
||||
else
|
||||
RELEASE_TYPE="prerelease"
|
||||
fi
|
||||
else
|
||||
RELEASE_TYPE="release"
|
||||
fi
|
||||
|
||||
# Check if release already exists
|
||||
if gh release view "$VERSION" >/dev/null 2>&1; then
|
||||
echo "Release $VERSION already exists, skipping creation"
|
||||
if gh release view "$TAG" >/dev/null 2>&1; then
|
||||
echo "Release $TAG already exists"
|
||||
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
|
||||
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
|
||||
else
|
||||
# Get release notes from tag message
|
||||
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
|
||||
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
|
||||
if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
|
||||
RELEASE_NOTES="Release ${VERSION_CLEAN}"
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
RELEASE_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
|
||||
else
|
||||
RELEASE_NOTES="Release ${VERSION}"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Determine if this is a prerelease
|
||||
# Create release title
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
TITLE="RustFS $VERSION (${RELEASE_TYPE})"
|
||||
else
|
||||
TITLE="RustFS $VERSION"
|
||||
fi
|
||||
|
||||
# Create the release
|
||||
PRERELEASE_FLAG=""
|
||||
if [[ "$VERSION" == *"alpha"* ]] || [[ "$VERSION" == *"beta"* ]] || [[ "$VERSION" == *"rc"* ]]; then
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
PRERELEASE_FLAG="--prerelease"
|
||||
fi
|
||||
|
||||
# Create the release only if it doesn't exist
|
||||
gh release create "$VERSION" \
|
||||
--title "RustFS $VERSION_CLEAN" \
|
||||
gh release create "$TAG" \
|
||||
--title "$TITLE" \
|
||||
--notes "$RELEASE_NOTES" \
|
||||
$PRERELEASE_FLAG
|
||||
$PRERELEASE_FLAG \
|
||||
--draft
|
||||
|
||||
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
|
||||
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
|
||||
fi
|
||||
|
||||
- name: Upload release assets
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
echo "release_id=$RELEASE_ID" >> $GITHUB_OUTPUT
|
||||
echo "release_url=$RELEASE_URL" >> $GITHUB_OUTPUT
|
||||
echo "Created release: $RELEASE_URL"
|
||||
|
||||
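The create step above is deliberately idempotent: it reuses an existing release and only creates a new draft when none exists, so re-running the workflow for the same tag is safe. A minimal local sketch of the same pattern, assuming an authenticated `gh` CLI and an already-pushed tag (the tag value here is hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail
TAG="v1.0.0-alpha.1"   # hypothetical tag, for illustration only

if gh release view "$TAG" >/dev/null 2>&1; then
    echo "Release $TAG already exists; reusing it"
else
    # Create as a draft first; a later step flips --draft=false once assets are uploaded.
    gh release create "$TAG" --title "RustFS $TAG" --notes "Release $TAG" --draft
fi
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
echo "Release URL: $RELEASE_URL"
```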
# Prepare and upload release assets
upload-release-assets:
name: Upload Release Assets
needs: [build-check, build-rustfs, create-release]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Download all build artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts
pattern: rustfs-*
merge-multiple: true

- name: Prepare release assets
id: prepare
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"

cd ./release-files
mkdir -p ./release-assets

# Upload all binary files
for file in *.zip; do
# Copy and verify artifacts
ASSETS_COUNT=0
for file in ./artifacts/*.zip; do
if [[ -f "$file" ]]; then
echo "Uploading $file..."
gh release upload "$VERSION" "$file" --clobber
cp "$file" ./release-assets/
ASSETS_COUNT=$((ASSETS_COUNT + 1))
fi
done

# Upload checksum files
if [[ -f "SHA256SUMS" ]]; then
echo "Uploading SHA256SUMS..."
gh release upload "$VERSION" "SHA256SUMS" --clobber
if [[ $ASSETS_COUNT -eq 0 ]]; then
echo "❌ No artifacts found!"
exit 1
fi

if [[ -f "SHA512SUMS" ]]; then
echo "Uploading SHA512SUMS..."
gh release upload "$VERSION" "SHA512SUMS" --clobber
cd ./release-assets

# Generate checksums
if ls *.zip >/dev/null 2>&1; then
sha256sum *.zip > SHA256SUMS
sha512sum *.zip > SHA512SUMS
fi

- name: Update release notes
# Create signature placeholder files
for file in *.zip; do
echo "# Signature for $file" > "${file}.asc"
echo "# GPG signature will be added in future versions" >> "${file}.asc"
done

echo "📦 Prepared assets:"
ls -la

echo "🔢 Asset count: $ASSETS_COUNT"

- name: Upload to GitHub Release
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
TAG="${{ needs.build-check.outputs.version }}"

# Check if release already has custom notes (not auto-generated)
EXISTING_NOTES=$(gh release view "$VERSION" --json body --jq '.body' 2>/dev/null || echo "")
cd ./release-assets

# Only update if release notes are empty or auto-generated
if [[ -z "$EXISTING_NOTES" ]] || [[ "$EXISTING_NOTES" == *"Release ${VERSION_CLEAN}"* ]]; then
echo "Updating release notes for $VERSION"

# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
ORIGINAL_NOTES="Release ${VERSION_CLEAN}"
# Upload all files
for file in *; do
if [[ -f "$file" ]]; then
echo "📤 Uploading $file..."
gh release upload "$TAG" "$file" --clobber
fi
done

# Create comprehensive release notes
cat > enhanced_notes.md << EOF
## RustFS ${VERSION_CLEAN}
echo "✅ All assets uploaded successfully"

${ORIGINAL_NOTES}

---

### 🚀 Quick Download

**Linux (Static Binaries - No Dependencies):**
\`\`\`bash
# x86_64 (Intel/AMD)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-unknown-linux-musl.zip
unzip rustfs-x86_64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/

# ARM64 (Graviton, Apple Silicon VMs)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-unknown-linux-musl.zip
unzip rustfs-aarch64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
\`\`\`

**macOS:**
\`\`\`bash
# Apple Silicon (M1/M2/M3)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-apple-darwin.zip
unzip rustfs-aarch64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/

# Intel
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-apple-darwin.zip
unzip rustfs-x86_64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
\`\`\`

### 📁 Available Downloads

| Platform | Architecture | File | Description |
|----------|-------------|------|-------------|
| Linux | x86_64 | \`rustfs-x86_64-unknown-linux-musl.zip\` | Static binary, no dependencies |
| Linux | ARM64 | \`rustfs-aarch64-unknown-linux-musl.zip\` | Static binary, no dependencies |
| macOS | Apple Silicon | \`rustfs-aarch64-apple-darwin.zip\` | Native binary, ZIP archive |
| macOS | Intel | \`rustfs-x86_64-apple-darwin.zip\` | Native binary, ZIP archive |

### 🔐 Verification

Download checksums and verify your download:
\`\`\`bash
# Download checksums
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/SHA256SUMS

# Verify (Linux)
sha256sum -c SHA256SUMS --ignore-missing

# Verify (macOS)
shasum -a 256 -c SHA256SUMS --ignore-missing
\`\`\`

### 🛠️ System Requirements

- **Linux**: Any distribution with glibc 2.17+ (CentOS 7+, Ubuntu 16.04+)
- **macOS**: 10.15+ (Catalina or later)
- **Windows**: Windows 10 version 1809 or later

### 📚 Documentation

- [Installation Guide](https://github.com/rustfs/rustfs#installation)
- [Quick Start](https://github.com/rustfs/rustfs#quick-start)
- [Configuration](https://github.com/rustfs/rustfs/blob/main/docs/)
- [API Documentation](https://docs.rs/rustfs)

### 🆘 Support

- 🐛 [Report Issues](https://github.com/rustfs/rustfs/issues)
- 💬 [Community Discussions](https://github.com/rustfs/rustfs/discussions)
- 📖 [Documentation](https://github.com/rustfs/rustfs/tree/main/docs)
EOF

# Update the release with enhanced notes
gh release edit "$VERSION" --notes-file enhanced_notes.md
else
echo "Release $VERSION already has custom notes, skipping update to preserve manual edits"
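Note that the `.asc` files attached above are placeholders, not real signatures. If GPG signing is wired in later, detached armored signatures could be produced and checked roughly like this (a sketch; the signing key identity is hypothetical):

```bash
# Sign each archive with a detached, ASCII-armored signature
for f in ./release-assets/*.zip; do
    gpg --armor --detach-sign --local-user "release@rustfs.com" "$f"
done

# Consumers verify a download against its signature
gpg --verify rustfs-x86_64-unknown-linux-musl.zip.asc rustfs-x86_64-unknown-linux-musl.zip
```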
# Update latest.json for stable releases only
update-latest-version:
name: Update Latest Version
needs: [build-check, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.is_prerelease == 'false'
runs-on: ubuntu-latest
steps:
- name: Update latest.json
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
run: |
if [[ -z "$OSS_ACCESS_KEY_ID" ]]; then
echo "⚠️ OSS credentials not available, skipping latest.json update"
exit 0
fi

VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"

# Install ossutil
OSSUTIL_VERSION="2.1.1"
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-amd64.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-amd64"

curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
chmod +x "${OSSUTIL_DIR}/ossutil"

# Create latest.json
cat > latest.json << EOF
{
"version": "${VERSION}",
"tag": "${TAG}",
"release_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"release_type": "stable",
"download_url": "https://github.com/${{ github.repository }}/releases/tag/${TAG}"
}
EOF

# Upload to OSS
./${OSSUTIL_DIR}/ossutil cp latest.json oss://rustfs-version/latest.json --force

echo "✅ Updated latest.json for stable release $VERSION"

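Once `latest.json` is uploaded, any client can poll it for update checks. A small consumer sketch, assuming the OSS bucket is exposed at a public HTTPS endpoint (the URL below is an assumption; only the JSON shape comes from the step above):

```bash
LATEST_JSON_URL="https://example-endpoint/latest.json"   # hypothetical endpoint
latest_version=$(curl -fsSL "$LATEST_JSON_URL" | jq -r '.version')
download_url=$(curl -fsSL "$LATEST_JSON_URL" | jq -r '.download_url')
echo "Latest stable release: $latest_version"
echo "Download page: $download_url"
```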
# Publish release (remove draft status)
publish-release:
name: Publish Release
needs: [build-check, create-release, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Update release notes and publish
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ needs.build-check.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"

# Determine release type
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
if [[ "$TAG" == *"alpha"* ]]; then
RELEASE_TYPE="alpha"
elif [[ "$TAG" == *"beta"* ]]; then
RELEASE_TYPE="beta"
elif [[ "$TAG" == *"rc"* ]]; then
RELEASE_TYPE="rc"
else
RELEASE_TYPE="prerelease"
fi
else
RELEASE_TYPE="release"
fi

# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
if [[ "$IS_PRERELEASE" == "true" ]]; then
ORIGINAL_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
else
ORIGINAL_NOTES="Release ${VERSION}"
fi
fi

# Publish the release (remove draft status)
gh release edit "$TAG" --draft=false

echo "🎉 Released $TAG successfully!"
echo "📄 Release URL: ${{ needs.create-release.outputs.release_url }}"

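Taken together, these jobs implement a two-phase publish: the release is created as a draft, assets and notes are attached, and only then is the draft flag cleared, so consumers never observe a half-uploaded release. Condensed, the sequence looks like this (a sketch, assuming an authenticated `gh` and an existing tag):

```bash
TAG="v1.0.0"   # hypothetical
gh release create "$TAG" --title "RustFS $TAG" --notes "Release $TAG" --draft
gh release upload "$TAG" ./release-assets/*.zip SHA256SUMS SHA512SUMS --clobber
gh release edit "$TAG" --draft=false   # go live only after all assets exist
```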
6
.github/workflows/ci.yml
vendored
@@ -81,7 +81,7 @@ jobs:
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["release", "push"]'
do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'

test-and-lint:
name: Test and Lint
@@ -102,7 +102,9 @@ jobs:
cache-save-if: ${{ github.ref == 'refs/heads/main' }}

- name: Run tests
run: cargo test --all --exclude e2e_test
run: |
cargo nextest run --all --exclude e2e_test
cargo test --all --doc

- name: Check code formatting
run: cargo fmt --all --check

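The same gate can be reproduced locally before opening a PR (assumes `cargo-nextest` is installed, e.g. via `cargo install cargo-nextest`; note that nextest does not run doctests, hence the separate `--doc` invocation):

```bash
cargo nextest run --all --exclude e2e_test   # unit and integration tests
cargo test --all --doc                       # doctests, which nextest skips
cargo fmt --all --check                      # formatting gate
```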
449
.github/workflows/docker.yml
vendored
@@ -12,42 +12,34 @@
# See the License for the specific language governing permissions and
# limitations under the License.

# Docker Images Workflow
#
# This workflow builds Docker images using pre-built binaries from the build workflow.
#
# Trigger Types:
# 1. workflow_run: Automatically triggered when "Build and Release" workflow completes
# 2. workflow_dispatch: Manual trigger for standalone Docker builds
#
# Key Features:
# - Only triggers when Linux builds (x86_64 + aarch64) are successful
# - Independent of macOS/Windows build status
# - Uses workflow_run event for precise control
# - Only builds Docker images for releases and prereleases (development builds are skipped)

name: Docker Images

# Permissions needed for workflow_run event and Docker registry access
permissions:
contents: read
packages: write

on:
push:
tags: ["*"]
# Automatically triggered when build workflow completes
workflow_run:
workflows: ["Build and Release"]
types: [completed]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
# Manual trigger with same parameters for consistency
workflow_dispatch:
inputs:
push_images:
@@ -55,156 +47,359 @@ on:
required: false
default: true
type: boolean
version:
description: "Version to build (latest for stable release, or specific version like v1.0.0, v1.0.0-alpha1)"
required: false
default: "latest"
type: string
force_rebuild:
description: "Force rebuild even if binary exists (useful for testing)"
required: false
default: false
type: boolean

env:
DOCKERHUB_USERNAME: rustfs
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}
DOCKER_PLATFORMS: linux/amd64,linux/arm64

jobs:
# Check if we should build
# Check if we should build Docker images
build-check:
name: Build Check
name: Docker Build Check
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
build_type: ${{ steps.check.outputs.build_type }}
version: ${{ steps.check.outputs.version }}
short_sha: ${{ steps.check.outputs.short_sha }}
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
create_latest: ${{ steps.check.outputs.create_latest }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Check build conditions
id: check
run: |
should_build=false
should_push=false
build_type="none"
version=""
short_sha=""
is_prerelease=false
create_latest=false

# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
if [[ "${{ github.event_name }}" == "workflow_run" ]]; then
# Triggered by build workflow completion
echo "🔗 Triggered by build workflow completion"

# Check if the triggering workflow was successful
# If the workflow succeeded, it means ALL builds (including Linux x86_64 and aarch64) succeeded
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "✅ Build workflow succeeded, all builds including Linux are successful"
should_build=true
should_push=true
else
echo "❌ Build workflow failed (conclusion: ${{ github.event.workflow_run.conclusion }}), skipping Docker build"
should_build=false
fi

# Extract version info from commit message or use commit SHA
# Use Git to generate consistent short SHA (ensures uniqueness like build.yml)
short_sha=$(git rev-parse --short "${{ github.event.workflow_run.head_sha }}")

# Determine build type based on event type and git refs
# Check if this is a tag push (release build)
if [[ "${{ github.event.workflow_run.event }}" == "push" ]]; then
# Get git refs to determine if this is a tag or branch push
git_ref="${{ github.event.workflow_run.head_branch }}"

# Check if this is a tag push by looking at the git ref
if git show-ref --tags | grep -q "${{ github.event.workflow_run.head_sha }}"; then
# This commit has tags, extract the tag name
tag_name=$(git tag --points-at "${{ github.event.workflow_run.head_sha }}" | head -n1)
if [[ -n "$tag_name" ]]; then
version="$tag_name"
# Remove 'v' prefix if present for consistent version format
if [[ "$version" == v* ]]; then
version="${version#v}"
fi

if [[ "$version" == *"alpha"* ]] || [[ "$version" == *"beta"* ]] || [[ "$version" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
echo "🧪 Building Docker image for prerelease: $version"
else
build_type="release"
create_latest=true
echo "🚀 Building Docker image for release: $version"
fi
else
# Regular branch push
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (branch push)"
fi
else
# Regular branch push
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (branch push)"
fi
else
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (non-push event)"
fi

echo "🔄 Build triggered by workflow_run:"
echo " 📋 Conclusion: ${{ github.event.workflow_run.conclusion }}"
echo " 🌿 Branch: ${{ github.event.workflow_run.head_branch }}"
echo " 📎 SHA: ${{ github.event.workflow_run.head_sha }}"
echo " 🎯 Event: ${{ github.event.workflow_run.event }}"

elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
# Manual trigger
input_version="${{ github.event.inputs.version }}"
version="${input_version}"
should_push="${{ github.event.inputs.push_images }}"
should_build=true
fi

# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
# Get short SHA
short_sha=$(git rev-parse --short HEAD)

echo "🎯 Manual Docker build triggered:"
echo " 📋 Requested version: $input_version"
echo " 🔧 Force rebuild: ${{ github.event.inputs.force_rebuild }}"
echo " 🚀 Push images: $should_push"

case "$input_version" in
"latest")
build_type="release"
create_latest=true
echo "🚀 Building with latest stable release version"
;;
# Prerelease versions (must match first, more specific)
v*alpha*|v*beta*|v*rc*|*alpha*|*beta*|*rc*)
build_type="prerelease"
is_prerelease=true
echo "🧪 Building with prerelease version: $input_version"
;;
# Release versions (match after prereleases, more general)
v[0-9]*|[0-9]*.*.*)
build_type="release"
create_latest=true
echo "📦 Building with specific release version: $input_version"
;;
*)
# Invalid version for Docker build
should_build=false
echo "❌ Invalid version for Docker build: $input_version"
echo "⚠️ Only release versions (latest, v1.0.0, 1.0.0) and prereleases (v1.0.0-alpha1, 1.0.0-beta2) are supported"
;;
esac
fi

echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "version=$version" >> $GITHUB_OUTPUT
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
echo "create_latest=$create_latest" >> $GITHUB_OUTPUT

echo "🐳 Docker Build Summary:"
echo " - Should build: $should_build"
echo " - Should push: $should_push"
echo " - Build type: $build_type"
echo " - Version: $version"
echo " - Short SHA: $short_sha"
echo " - Is prerelease: $is_prerelease"
echo " - Create latest: $create_latest"

|
||||
# Strategy: Build images using pre-built binaries from dl.rustfs.com
|
||||
# Supports both release and dev channel binaries based on build context
|
||||
# Only runs when should_build is true (which includes workflow success check)
|
||||
build-docker:
|
||||
name: Build Docker Images
|
||||
needs: build-check
|
||||
if: needs.build-check.outputs.should_build == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 60
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
variant:
|
||||
- name: production
|
||||
dockerfile: Dockerfile
|
||||
platforms: linux/amd64,linux/arm64
|
||||
- name: ubuntu
|
||||
dockerfile: .docker/Dockerfile.ubuntu22.04
|
||||
platforms: linux/amd64,linux/arm64
|
||||
- name: alpine
|
||||
dockerfile: .docker/Dockerfile.alpine
|
||||
platforms: linux/amd64,linux/arm64
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
- name: Login to Docker Hub
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
username: ${{ env.DOCKERHUB_USERNAME }}
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
# - name: Login to GitHub Container Registry
|
||||
# uses: docker/login-action@v3
|
||||
# with:
|
||||
# registry: ghcr.io
|
||||
# username: ${{ github.actor }}
|
||||
# password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Set up QEMU
|
||||
uses: docker/setup-qemu-action@v3
|
||||
|
||||
- name: Login to Docker Hub
|
||||
if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
username: ${{ secrets.DOCKERHUB_USERNAME }}
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
|
||||
- name: Login to GitHub Container Registry
|
||||
if: needs.build-check.outputs.should_push == 'true'
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
registry: ghcr.io
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata
|
||||
- name: Extract metadata and generate tags
|
||||
id: meta
|
||||
uses: docker/metadata-action@v5
|
||||
with:
|
||||
images: |
|
||||
${{ env.REGISTRY_DOCKERHUB }}
|
||||
${{ env.REGISTRY_GHCR }}
|
||||
tags: |
|
||||
type=ref,event=branch,suffix=-${{ matrix.variant.name }}
|
||||
type=ref,event=pr,suffix=-${{ matrix.variant.name }}
|
||||
type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
|
||||
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
|
||||
type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
|
||||
flavor: |
|
||||
latest=false
|
||||
run: |
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
|
||||
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
|
||||
|
||||
# Convert version format for Dockerfile compatibility
|
||||
case "$VERSION" in
|
||||
"latest")
|
||||
# For stable latest, use RELEASE=latest + release CHANNEL
|
||||
DOCKER_RELEASE="latest"
|
||||
DOCKER_CHANNEL="release"
|
||||
;;
|
||||
v*)
|
||||
# For versioned releases (v1.0.0), remove 'v' prefix for Dockerfile
|
||||
DOCKER_RELEASE="${VERSION#v}"
|
||||
DOCKER_CHANNEL="release"
|
||||
;;
|
||||
*)
|
||||
# For other versions, pass as-is
|
||||
DOCKER_RELEASE="${VERSION}"
|
||||
DOCKER_CHANNEL="release"
|
||||
;;
|
||||
esac
|
||||
|
||||
echo "docker_release=$DOCKER_RELEASE" >> $GITHUB_OUTPUT
|
||||
echo "docker_channel=$DOCKER_CHANNEL" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "🐳 Docker build parameters:"
|
||||
echo " - Original version: $VERSION"
|
||||
echo " - Docker RELEASE: $DOCKER_RELEASE"
|
||||
echo " - Docker CHANNEL: $DOCKER_CHANNEL"
|
||||
|
||||
# Generate tags based on build type
|
||||
# Only support release and prerelease builds (no development builds)
|
||||
TAGS="${{ env.REGISTRY_DOCKERHUB }}:${VERSION}"
|
||||
|
||||
# Add channel tags for prereleases and latest for stable
|
||||
if [[ "$CREATE_LATEST" == "true" ]]; then
|
||||
# Stable release
|
||||
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
|
||||
elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
|
||||
# Prerelease channel tags (alpha, beta, rc)
|
||||
if [[ "$VERSION" == *"alpha"* ]]; then
|
||||
CHANNEL="alpha"
|
||||
elif [[ "$VERSION" == *"beta"* ]]; then
|
||||
CHANNEL="beta"
|
||||
elif [[ "$VERSION" == *"rc"* ]]; then
|
||||
CHANNEL="rc"
|
||||
fi
|
||||
|
||||
if [[ -n "$CHANNEL" ]]; then
|
||||
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${CHANNEL}"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Output tags
|
||||
echo "tags=$TAGS" >> $GITHUB_OUTPUT
|
||||
|
||||
# Generate labels
|
||||
LABELS="org.opencontainers.image.title=RustFS"
|
||||
LABELS="$LABELS,org.opencontainers.image.description=RustFS distributed object storage system"
|
||||
LABELS="$LABELS,org.opencontainers.image.version=$VERSION"
|
||||
LABELS="$LABELS,org.opencontainers.image.revision=${{ github.sha }}"
|
||||
LABELS="$LABELS,org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}"
|
||||
LABELS="$LABELS,org.opencontainers.image.created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
|
||||
LABELS="$LABELS,org.opencontainers.image.build-type=$BUILD_TYPE"
|
||||
|
||||
echo "labels=$LABELS" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "🐳 Generated Docker tags:"
|
||||
echo "$TAGS" | tr ',' '\n' | sed 's/^/ - /'
|
||||
echo "📋 Build type: $BUILD_TYPE"
|
||||
echo "🔖 Version: $VERSION"
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v5
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
context: .
|
||||
file: ${{ matrix.variant.dockerfile }}
|
||||
platforms: ${{ matrix.variant.platforms }}
|
||||
file: Dockerfile
|
||||
platforms: ${{ env.DOCKER_PLATFORMS }}
|
||||
push: ${{ needs.build-check.outputs.should_push == 'true' }}
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
|
||||
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
|
||||
cache-from: |
|
||||
type=gha,scope=docker-binary
|
||||
cache-to: |
|
||||
type=gha,mode=max,scope=docker-binary
|
||||
build-args: |
|
||||
BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
|
||||
VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
|
||||
REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
|
||||
BUILDTIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
|
||||
VERSION=${{ needs.build-check.outputs.version }}
|
||||
BUILD_TYPE=${{ needs.build-check.outputs.build_type }}
|
||||
REVISION=${{ github.sha }}
|
||||
RELEASE=${{ steps.meta.outputs.docker_release }}
|
||||
CHANNEL=${{ steps.meta.outputs.docker_channel }}
|
||||
BUILDKIT_INLINE_CACHE=1
|
||||
# Enable advanced BuildKit features for better performance
|
||||
provenance: false
|
||||
sbom: false
|
||||
# Add retry mechanism by splitting the build process
|
||||
no-cache: false
|
||||
pull: true
|
||||
|
||||
# Create manifest for main production image
|
||||
create-manifest:
|
||||
name: Create Manifest
|
||||
# Note: Manifest creation is no longer needed as we only build one variant
|
||||
# Multi-arch manifests are automatically created by docker/build-push-action
|
||||
|
||||
# Docker build summary
|
||||
docker-summary:
|
||||
name: Docker Build Summary
|
||||
needs: [build-check, build-docker]
|
||||
if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
|
||||
if: always() && needs.build-check.outputs.should_build == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Login to Docker Hub
|
||||
if: secrets.DOCKERHUB_USERNAME != ''
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
username: ${{ secrets.DOCKERHUB_USERNAME }}
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
- name: Login to GitHub Container Registry
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
registry: ghcr.io
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Create and push manifest
|
||||
- name: Docker build completion summary
|
||||
run: |
|
||||
VERSION=${GITHUB_REF#refs/tags/}
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
|
||||
|
||||
# Create main image tag (without variant suffix)
|
||||
if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
|
||||
docker buildx imagetools create \
|
||||
-t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
|
||||
-t ${{ env.REGISTRY_DOCKERHUB }}:latest \
|
||||
${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
|
||||
fi
|
||||
echo "🐳 Docker build completed successfully!"
|
||||
echo "📦 Build type: $BUILD_TYPE"
|
||||
echo "🔢 Version: $VERSION"
|
||||
echo "🚀 Strategy: Images using pre-built binaries (release channel only)"
|
||||
echo ""
|
||||
|
||||
docker buildx imagetools create \
|
||||
-t ${{ env.REGISTRY_GHCR }}:${VERSION} \
|
||||
-t ${{ env.REGISTRY_GHCR }}:latest \
|
||||
${{ env.REGISTRY_GHCR }}:${VERSION}-production
|
||||
case "$BUILD_TYPE" in
|
||||
"release")
|
||||
echo "🚀 Release Docker image has been built with ${VERSION} tags"
|
||||
echo "✅ This image is ready for production use"
|
||||
if [[ "$CREATE_LATEST" == "true" ]]; then
|
||||
echo "🏷️ Latest tag has been created for stable release"
|
||||
fi
|
||||
;;
|
||||
"prerelease")
|
||||
echo "🧪 Prerelease Docker image has been built with ${VERSION} tags"
|
||||
echo "⚠️ This is a prerelease image - use with caution"
|
||||
echo "🚫 Latest tag NOT created for prerelease"
|
||||
;;
|
||||
*)
|
||||
echo "❌ Unexpected build type: $BUILD_TYPE"
|
||||
;;
|
||||
esac
|
||||
|
||||
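With this tagging scheme, users select a stability channel by tag. Whether a given tag exists depends on what has actually been published; the names below just follow the scheme above:

```bash
docker pull rustfs/rustfs:latest    # most recent stable release
docker pull rustfs/rustfs:alpha     # most recent alpha prerelease channel
docker pull rustfs/rustfs:1.0.0     # a specific pinned version (hypothetical)
```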
24
.github/workflows/issue-translator.yml
vendored
@@ -1,8 +1,22 @@
name: 'issue-translator'
on:
issue_comment:
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: "issue-translator"
on:
issue_comment:
types: [created]
issues:
issues:
types: [opened]

jobs:
@@ -14,5 +28,5 @@ jobs:
IS_MODIFY_TITLE: false
# not require, default false, . Decide whether to modify the issue title
# if true, the robot account @Issues-translate-bot must have modification permissions, invite @Issues-translate-bot to your project or use your custom bot.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
# not require. Customize the translation robot prefix message.

17
.github/workflows/performance.yml
vendored
@@ -18,10 +18,10 @@ on:
push:
branches: [main]
paths:
- '**/*.rs'
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/performance.yml'
- "**/*.rs"
- "**/Cargo.toml"
- "**/Cargo.lock"
- ".github/workflows/performance.yml"
workflow_dispatch:
inputs:
profile_duration:
@@ -73,12 +73,11 @@ jobs:
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV

- name: Download static files
- name: Verify console static assets
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip
# Console static assets are already embedded in the repository
echo "Console static assets size: $(du -sh rustfs/static/)"
echo "Console static assets are embedded via rust-embed, no external download needed"

- name: Build with profiling optimizations
run: |

1
.gitignore
vendored
@@ -19,3 +19,4 @@ deploy/certs/*
profile.json
.docker/openobserve-otel/data
*.zst
.secrets

712
Cargo.lock
generated
26
Cargo.toml
@@ -36,6 +36,7 @@ members = [
"crates/utils", # Utility functions and helpers
"crates/workers", # Worker thread pools and task scheduling
"crates/zip", # ZIP file handling and compression
"crates/ahm",
]
resolver = "2"

@@ -62,6 +63,7 @@ rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }

[workspace.dependencies]
rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
@@ -87,7 +89,7 @@ aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.4.0"
async-channel = "2.5.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
@@ -105,7 +107,7 @@ byteorder = "1.5.0"
cfg-if = "1.0.1"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.40", features = ["derive", "env"] }
clap = { version = "4.5.41", features = ["derive", "env"] }
const-str = { version = "0.6.2", features = ["std", "proc"] }
crc32fast = "1.4.2"
criterion = { version = "0.5", features = ["html_reports"] }
@@ -114,7 +116,7 @@ datafusion = "46.0.1"
derive_builder = "0.20.2"
dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
enumset = "1.1.7"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
@@ -128,7 +130,7 @@ hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hmac = "0.12.1"
hyper = "1.6.0"
hyper-util = { version = "0.1.14", features = [
hyper-util = { version = "0.1.15", features = [
"tokio",
"server-auto",
"server-graceful",
@@ -180,9 +182,9 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
quick-xml = "0.37.5"
quick-xml = "0.38.0"
rand = "0.9.1"
rdkafka = { version = "0.37.0", features = ["tokio"] }
rdkafka = { version = "0.38.0", features = ["tokio"] }
reed-solomon-simd = { version = "3.0.1" }
regex = { version = "1.11.1" }
reqwest = { version = "0.12.22", default-features = false, features = [
@@ -205,10 +207,10 @@ rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls = { version = "0.23.29" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { version = "0.12.0-minio-preview.1" }
s3s = { version = "0.12.0-minio-preview.2" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -220,10 +222,12 @@ siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
snafu = "0.8.6"
snap = "1.1.1"
socket2 = "0.5.10"
socket2 = "0.6.0"
strum = { version = "0.27.1", features = ["derive"] }
sysinfo = "0.35.2"
sysinfo = "0.36.0"
sysctl = "0.6.0"
tempfile = "3.20.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.12"
time = { version = "0.3.41", features = [
@@ -237,6 +241,7 @@ tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
tokio-test = "0.4.4"
tokio-util = { version = "0.7.15", features = ["io", "compat"] }
tonic = { version = "0.13.1", features = ["gzip"] }
tonic-build = { version = "0.13.1" }
@@ -261,6 +266,7 @@ winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.4.2"
zstd = "0.13.3"
anyhow = "1.0.98"

[profile.wasm-dev]
inherits = "dev"

133
Dockerfile
@@ -1,50 +1,121 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Multi-stage build for RustFS production image
FROM alpine:latest AS build

FROM alpine:3.18 AS builder
# Build arguments - use TARGETPLATFORM for consistency with Dockerfile.source
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG RELEASE=latest

RUN apk add -U --no-cache \
# Install dependencies for downloading and verifying binaries
RUN apk add --no-cache \
ca-certificates \
curl \
bash \
unzip
wget \
unzip \
jq

# Create build directory
WORKDIR /build

RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-x86_64-unknown-linux-musl.zip && \
unzip -o /tmp/rustfs.zip -d /tmp && \
mv /tmp/rustfs /rustfs && \
chmod +x /rustfs && \
rm -rf /tmp/*
# Map TARGETPLATFORM to architecture format used in builds
RUN case "${TARGETPLATFORM}" in \
"linux/amd64") ARCH="x86_64" ;; \
"linux/arm64") ARCH="aarch64" ;; \
*) echo "Unsupported platform: ${TARGETPLATFORM}" && exit 1 ;; \
esac && \
echo "ARCH=${ARCH}" > /build/arch.env

FROM alpine:3.18
# Download rustfs binary from dl.rustfs.com (release channel only)
RUN . /build/arch.env && \
BASE_URL="https://dl.rustfs.com/artifacts/rustfs/release" && \
PLATFORM="linux" && \
if [ "${RELEASE}" = "latest" ]; then \
# Download latest release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-latest.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading latest release build: ${PACKAGE_NAME}"; \
else \
# Download specific release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${RELEASE}.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading specific release version: ${PACKAGE_NAME}"; \
fi && \
echo "🔗 Download URL: ${DOWNLOAD_URL}" && \
curl -f -L "${DOWNLOAD_URL}" -o /build/rustfs.zip && \
if [ ! -f /build/rustfs.zip ] || [ ! -s /build/rustfs.zip ]; then \
echo "❌ Failed to download binary package"; \
echo "💡 Make sure the package ${PACKAGE_NAME} exists"; \
echo "🔗 Check: ${DOWNLOAD_URL}"; \
exit 1; \
fi && \
unzip /build/rustfs.zip -d /build && \
chmod +x /build/rustfs && \
rm /build/rustfs.zip && \
echo "✅ Successfully downloaded and extracted rustfs binary"

RUN apk add -U --no-cache \
# Runtime stage
FROM alpine:latest

# Set build arguments and labels
ARG RELEASE=latest
ARG BUILD_DATE
ARG VCS_REF

LABEL name="RustFS" \
vendor="RustFS Team" \
maintainer="RustFS Team <dev@rustfs.com>" \
version="${RELEASE}" \
release="${RELEASE}" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}" \
summary="RustFS is a high-performance distributed object storage system written in Rust, compatible with S3 API." \
description="RustFS is a high-performance distributed object storage software built using Rust. It supports erasure coding storage, multi-tenant management, observability, and other enterprise-level features." \
url="https://rustfs.com" \
license="Apache-2.0"

# Install runtime dependencies
RUN apk add --no-cache \
ca-certificates \
bash

COPY --from=builder /rustfs /usr/local/bin/rustfs
curl \
tzdata \
bash \
&& addgroup -g 1000 rustfs \
&& adduser -u 1000 -G rustfs -s /bin/sh -D rustfs

# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ADDRESS=":9001" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn

EXPOSE 9000 9001
# Set permissions for /usr/bin (similar to MinIO's approach)
RUN chmod -R 755 /usr/bin

RUN mkdir -p /data
VOLUME /data
# Copy CA certificates and binaries from build stage
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /build/rustfs /usr/bin/

CMD ["rustfs", "/data"]
# Set executable permissions
RUN chmod +x /usr/bin/rustfs

# Create data directory
RUN mkdir -p /data /config && chown -R rustfs:rustfs /data /config

# Switch to non-root user
USER rustfs

# Set working directory
WORKDIR /data

# Expose port
EXPOSE 9000


# Volume for data
VOLUME ["/data"]

# Set entrypoint
ENTRYPOINT ["/usr/bin/rustfs"]

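A quick local sanity check of this production image; `RELEASE` is the build-arg defined above, while the image tag and host data path are arbitrary:

```bash
# Build the image (downloads a release binary from dl.rustfs.com at build time)
docker build --build-arg RELEASE=latest -t rustfs:local .

# Run it with a host directory mounted as the data volume
docker run --rm -p 9000:9000 -v "$(pwd)/data:/data" rustfs:local
```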
@@ -1,21 +0,0 @@
FROM ubuntu:latest

# RUN apk add --no-cache <package-name>
# If rustfs has dependencies, add them here, for example:
# RUN apk add --no-cache openssl
# RUN apk add --no-cache bash # install Bash

WORKDIR /app

# Create directories matching RUSTFS_VOLUMES
RUN mkdir -p /root/data/target/volume/test1 /root/data/target/volume/test2 /root/data/target/volume/test3 /root/data/target/volume/test4

# COPY ./target/x86_64-unknown-linux-musl/release/rustfs /app/rustfs
COPY ./target/x86_64-unknown-linux-gnu/release/rustfs /app/rustfs

RUN chmod +x /app/rustfs

EXPOSE 9000
EXPOSE 9002

CMD ["/app/rustfs"]
@@ -1,10 +1,26 @@
# Multi-stage Dockerfile for RustFS
# Multi-stage Dockerfile for RustFS - LOCAL DEVELOPMENT ONLY
#
# ⚠️ IMPORTANT: This Dockerfile is for local development and testing only.
# ⚠️ It builds RustFS from source code and is NOT used in CI/CD pipelines.
# ⚠️ CI/CD pipeline uses pre-built binaries from Dockerfile instead.
#
# Usage for local development:
# docker build -f Dockerfile.source -t rustfs:dev-local .
# docker run --rm -p 9000:9000 rustfs:dev-local
#
# Supports cross-compilation for amd64 and arm64 architectures
ARG TARGETPLATFORM
ARG BUILDPLATFORM

# Build stage
FROM --platform=$BUILDPLATFORM rust:1.85-bookworm AS builder
FROM --platform=$BUILDPLATFORM rust:1.88-bookworm AS builder

# Re-declare build arguments after FROM (required for multi-stage builds)
ARG TARGETPLATFORM
ARG BUILDPLATFORM

# Debug: Print platform information
RUN echo "🐳 Build Info: BUILDPLATFORM=$BUILDPLATFORM, TARGETPLATFORM=$TARGETPLATFORM"

# Install required build dependencies
RUN apt-get update && apt-get install -y \
@@ -18,6 +34,8 @@ RUN apt-get update && apt-get install -y \
lld \
&& rm -rf /var/lib/apt/lists/*

# Note: sccache removed for simpler builds

# Install cross-compilation tools for ARM64
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
apt-get update && \
@@ -37,10 +55,13 @@ RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip

# Set up Rust targets based on platform
RUN case "$TARGETPLATFORM" in \
RUN set -e && \
PLATFORM="${TARGETPLATFORM:-linux/amd64}" && \
echo "🎯 Setting up Rust target for platform: $PLATFORM" && \
case "$PLATFORM" in \
"linux/amd64") rustup target add x86_64-unknown-linux-gnu ;; \
"linux/arm64") rustup target add aarch64-unknown-linux-gnu ;; \
*) echo "Unsupported platform: $TARGETPLATFORM" && exit 1 ;; \
*) echo "❌ Unsupported platform: $PLATFORM" && exit 1 ;; \
esac

# Set up environment for cross-compilation
@@ -50,37 +71,37 @@ ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++

WORKDIR /usr/src/rustfs

# Copy Cargo files for dependency caching
COPY Cargo.toml Cargo.lock ./
COPY */Cargo.toml ./*/

# Create dummy main.rs files for dependency compilation
RUN find . -name "Cargo.toml" -not -path "./Cargo.toml" | \
xargs -I {} dirname {} | \
xargs -I {} sh -c 'mkdir -p {}/src && echo "fn main() {}" > {}/src/main.rs'

# Build dependencies only (cache layer)
RUN case "$TARGETPLATFORM" in \
"linux/amd64") cargo build --release --target x86_64-unknown-linux-gnu ;; \
"linux/arm64") cargo build --release --target aarch64-unknown-linux-gnu ;; \
esac

# Copy source code
# Copy all source code
COPY . .

# Configure cargo for optimized builds
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
CARGO_INCREMENTAL=0 \
CARGO_PROFILE_RELEASE_DEBUG=false \
CARGO_PROFILE_RELEASE_SPLIT_DEBUGINFO=off \
CARGO_PROFILE_RELEASE_STRIP=symbols

# Generate protobuf code
RUN cargo run --bin gproto

# Build the actual application
# Build the actual application with optimizations
RUN case "$TARGETPLATFORM" in \
"linux/amd64") \
cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs && \
echo "🔨 Building for amd64..." && \
rustup target add x86_64-unknown-linux-gnu && \
cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/x86_64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
"linux/arm64") \
cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs && \
echo "🔨 Building for arm64..." && \
rustup target add aarch64-unknown-linux-gnu && \
cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/aarch64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
*) \
echo "❌ Unsupported platform: $TARGETPLATFORM" && exit 1 \
;; \
esac

# Runtime stage - Ubuntu minimal for better compatibility
@@ -111,11 +132,19 @@ RUN chmod +x /app/rustfs && chown rustfs:rustfs /app/rustfs
USER rustfs

# Expose ports
EXPOSE 9000 9001
EXPOSE 9000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:9000/health || exit 1
# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn


# Volume for data
VOLUME ["/data"]

# Set default command
CMD ["/app/rustfs"]
325
Makefile
@@ -5,7 +5,9 @@
|
||||
DOCKER_CLI ?= docker
|
||||
IMAGE_NAME ?= rustfs:v1.0.0
|
||||
CONTAINER_NAME ?= rustfs-dev
|
||||
DOCKERFILE_PATH = $(shell pwd)/.docker
|
||||
# Docker build configurations
|
||||
DOCKERFILE_PRODUCTION = Dockerfile
|
||||
DOCKERFILE_SOURCE = Dockerfile.source
|
||||
|
||||
# Code quality and formatting targets
|
||||
.PHONY: fmt
|
||||
@@ -31,7 +33,8 @@ check:
|
||||
.PHONY: test
|
||||
test:
|
||||
@echo "🧪 Running tests..."
|
||||
cargo test --all --exclude e2e_test
|
||||
cargo nextest run --all --exclude e2e_test
|
||||
cargo test --all --doc
|
||||
|
||||
.PHONY: pre-commit
|
||||
pre-commit: fmt clippy check test
|
||||
@@ -43,21 +46,6 @@ setup-hooks:
|
||||
chmod +x .git/hooks/pre-commit
|
||||
@echo "✅ Git hooks setup complete!"
|
||||
|
||||
.PHONY: init-devenv
|
||||
init-devenv:
|
||||
$(DOCKER_CLI) build -t $(IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.devenv .
|
||||
$(DOCKER_CLI) stop $(CONTAINER_NAME)
|
||||
$(DOCKER_CLI) rm $(CONTAINER_NAME)
|
||||
$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) -p 9010:9010 -p 9000:9000 -v $(shell pwd):/root/s3-rustfs -it $(IMAGE_NAME)
|
||||
|
||||
.PHONY: start
|
||||
start:
|
||||
$(DOCKER_CLI) start $(CONTAINER_NAME)
|
||||
|
||||
.PHONY: stop
|
||||
stop:
|
||||
$(DOCKER_CLI) stop $(CONTAINER_NAME)
|
||||
|
||||
.PHONY: e2e-server
|
||||
e2e-server:
|
||||
sh $(shell pwd)/scripts/run.sh
|
||||
@@ -66,86 +54,184 @@ e2e-server:
|
||||
probe-e2e:
|
||||
sh $(shell pwd)/scripts/probe.sh
|
||||
|
||||
# make BUILD_OS=ubuntu22.04 build
|
||||
# in target/ubuntu22.04/release/rustfs
|
||||
|
||||
# make BUILD_OS=rockylinux9.3 build
|
||||
# in target/rockylinux9.3/release/rustfs
|
||||
BUILD_OS ?= rockylinux9.3
|
||||
# Native build using build-rustfs.sh script
|
||||
.PHONY: build
|
||||
build: ROCKYLINUX_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
|
||||
build: ROCKYLINUX_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
|
||||
build: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
|
||||
build:
|
||||
$(DOCKER_CLI) build -t $(ROCKYLINUX_BUILD_IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.$(BUILD_OS) .
|
||||
$(DOCKER_CLI) run --rm --name $(ROCKYLINUX_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(ROCKYLINUX_BUILD_IMAGE_NAME) $(BUILD_CMD)
|
||||
@echo "🔨 Building RustFS using build-rustfs.sh script..."
|
||||
./build-rustfs.sh
|
||||
|
||||
.PHONY: build-dev
|
||||
build-dev:
|
||||
@echo "🔨 Building RustFS in development mode..."
|
||||
./build-rustfs.sh --dev
|
||||
|
||||
# Docker-based build (alternative approach)
|
||||
# Usage: make BUILD_OS=ubuntu22.04 build-docker
|
||||
# Output: target/ubuntu22.04/release/rustfs
|
||||
BUILD_OS ?= rockylinux9.3
|
||||
.PHONY: build-docker
|
||||
build-docker: SOURCE_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
|
||||
build-docker: SOURCE_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
|
||||
build-docker: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
|
||||
build-docker:
|
||||
@echo "🐳 Building RustFS using Docker ($(BUILD_OS))..."
|
||||
$(DOCKER_CLI) build -t $(SOURCE_BUILD_IMAGE_NAME) -f $(DOCKERFILE_SOURCE) .
|
||||
$(DOCKER_CLI) run --rm --name $(SOURCE_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(SOURCE_BUILD_IMAGE_NAME) $(BUILD_CMD)
|
||||
|
||||
.PHONY: build-musl
|
||||
build-musl:
|
||||
@echo "🔨 Building rustfs for x86_64-unknown-linux-musl..."
|
||||
cargo build --target x86_64-unknown-linux-musl --bin rustfs -r
|
||||
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
|
||||
./build-rustfs.sh --platform x86_64-unknown-linux-musl
|
||||
|
||||
.PHONY: build-gnu
|
||||
build-gnu:
|
||||
@echo "🔨 Building rustfs for x86_64-unknown-linux-gnu..."
|
||||
cargo build --target x86_64-unknown-linux-gnu --bin rustfs -r
|
||||
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
|
||||
./build-rustfs.sh --platform x86_64-unknown-linux-gnu
|
||||
|
||||
.PHONY: deploy-dev
|
||||
deploy-dev: build-musl
|
||||
@echo "🚀 Deploying to dev server: $${IP}"
|
||||
./scripts/dev_deploy.sh $${IP}
|
||||
|
||||
# Multi-architecture Docker build targets
|
||||
.PHONY: docker-build-multiarch
|
||||
docker-build-multiarch:
|
||||
@echo "🏗️ Building multi-architecture Docker images..."
|
||||
./scripts/build-docker-multiarch.sh
|
||||
# ========================================================================================
# Docker Multi-Architecture Builds (Primary Methods)
# ========================================================================================

.PHONY: docker-build-multiarch-push
docker-build-multiarch-push:
	@echo "🚀 Building and pushing multi-architecture Docker images..."
	./scripts/build-docker-multiarch.sh --push

# Production builds using docker-buildx.sh (for CI/CD and production)
.PHONY: docker-buildx
docker-buildx:
	@echo "🏗️ Building multi-architecture production Docker images with buildx..."
	./docker-buildx.sh

.PHONY: docker-build-multiarch-version
docker-build-multiarch-version:
.PHONY: docker-buildx-push
docker-buildx-push:
	@echo "🚀 Building and pushing multi-architecture production Docker images with buildx..."
	./docker-buildx.sh --push

.PHONY: docker-buildx-version
docker-buildx-version:
	@if [ -z "$(VERSION)" ]; then \
		echo "❌ Error: please specify a version, e.g.: make docker-build-multiarch-version VERSION=v1.0.0"; \
		echo "❌ Error: please specify a version, e.g.: make docker-buildx-version VERSION=v1.0.0"; \
		exit 1; \
	fi
	@echo "🏗️ Building multi-architecture Docker images (version: $(VERSION))..."
	./scripts/build-docker-multiarch.sh --version $(VERSION)
	@echo "🏗️ Building multi-architecture production Docker images (version: $(VERSION))..."
	./docker-buildx.sh --release $(VERSION)

.PHONY: docker-push-multiarch-version
docker-push-multiarch-version:
.PHONY: docker-buildx-push-version
docker-buildx-push-version:
	@if [ -z "$(VERSION)" ]; then \
		echo "❌ Error: please specify a version, e.g.: make docker-push-multiarch-version VERSION=v1.0.0"; \
		echo "❌ Error: please specify a version, e.g.: make docker-buildx-push-version VERSION=v1.0.0"; \
		exit 1; \
	fi
	@echo "🚀 Building and pushing multi-architecture Docker images (version: $(VERSION))..."
	./scripts/build-docker-multiarch.sh --version $(VERSION) --push
	@echo "🚀 Building and pushing multi-architecture production Docker images (version: $(VERSION))..."
	./docker-buildx.sh --release $(VERSION) --push

.PHONY: docker-build-ubuntu
docker-build-ubuntu:
	@echo "🏗️ Building multi-architecture Ubuntu Docker images..."
	./scripts/build-docker-multiarch.sh --type ubuntu
# Development/Source builds using direct buildx commands
.PHONY: docker-dev
docker-dev:
	@echo "🏗️ Building multi-architecture development Docker images with buildx..."
	@echo "💡 This builds from source code and is intended for local development and testing"
	@echo "⚠️ Multi-arch images cannot be loaded locally, use docker-dev-push to push to registry"
	$(DOCKER_CLI) buildx build \
		--platform linux/amd64,linux/arm64 \
		--file $(DOCKERFILE_SOURCE) \
		--tag rustfs:source-latest \
		--tag rustfs:dev-latest \
		.

.PHONY: docker-build-rockylinux
docker-build-rockylinux:
	@echo "🏗️ Building multi-architecture RockyLinux Docker images..."
	./scripts/build-docker-multiarch.sh --type rockylinux

.PHONY: docker-dev-local
docker-dev-local:
	@echo "🏗️ Building single-architecture development Docker image for local use..."
	@echo "💡 This builds from source code for the current platform and loads locally"
	$(DOCKER_CLI) buildx build \
		--file $(DOCKERFILE_SOURCE) \
		--tag rustfs:source-latest \
		--tag rustfs:dev-latest \
		--load \
		.

.PHONY: docker-build-devenv
docker-build-devenv:
	@echo "🏗️ Building multi-architecture development environment Docker images..."
	./scripts/build-docker-multiarch.sh --type devenv

.PHONY: docker-dev-push
docker-dev-push:
	@if [ -z "$(REGISTRY)" ]; then \
		echo "❌ Error: please specify an image registry, e.g.: make docker-dev-push REGISTRY=ghcr.io/username"; \
		exit 1; \
	fi
	@echo "🚀 Building and pushing multi-architecture development Docker images..."
	@echo "💡 Pushing to registry: $(REGISTRY)"
	$(DOCKER_CLI) buildx build \
		--platform linux/amd64,linux/arm64 \
		--file $(DOCKERFILE_SOURCE) \
		--tag $(REGISTRY)/rustfs:source-latest \
		--tag $(REGISTRY)/rustfs:dev-latest \
		--push \
		.

.PHONY: docker-build-all-types
docker-build-all-types:
	@echo "🏗️ Building all multi-architecture Docker image types..."
	./scripts/build-docker-multiarch.sh --type production
	./scripts/build-docker-multiarch.sh --type ubuntu
	./scripts/build-docker-multiarch.sh --type rockylinux
	./scripts/build-docker-multiarch.sh --type devenv

# Local production builds using direct buildx (alternative to docker-buildx.sh)
.PHONY: docker-buildx-production-local
docker-buildx-production-local:
	@echo "🏗️ Building single-architecture production Docker image locally..."
	@echo "💡 Alternative to docker-buildx.sh for local testing"
	$(DOCKER_CLI) buildx build \
		--file $(DOCKERFILE_PRODUCTION) \
		--tag rustfs:production-latest \
		--tag rustfs:latest \
		--load \
		--build-arg RELEASE=latest \
		.

# ========================================================================================
# Single Architecture Docker Builds (Traditional)
# ========================================================================================

.PHONY: docker-build-production
docker-build-production:
	@echo "🏗️ Building single-architecture production Docker image..."
	@echo "💡 Consider using 'make docker-buildx-production-local' for multi-arch support"
	$(DOCKER_CLI) build -f $(DOCKERFILE_PRODUCTION) -t rustfs:latest .

.PHONY: docker-build-source
docker-build-source:
	@echo "🏗️ Building single-architecture source Docker image..."
	@echo "💡 Consider using 'make docker-dev-local' for multi-arch support"
	$(DOCKER_CLI) build -f $(DOCKERFILE_SOURCE) -t rustfs:source .

# ========================================================================================
# Development Environment
# ========================================================================================

.PHONY: dev-env-start
dev-env-start:
	@echo "🚀 Starting development environment..."
	$(DOCKER_CLI) buildx build \
		--file $(DOCKERFILE_SOURCE) \
		--tag rustfs:dev \
		--load \
		.
	$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
	$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true
	$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) \
		-p 9010:9010 -p 9000:9000 \
		-v $(shell pwd):/workspace \
		-it rustfs:dev

.PHONY: dev-env-stop
dev-env-stop:
	@echo "🛑 Stopping development environment..."
	$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
	$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true

.PHONY: dev-env-restart
dev-env-restart: dev-env-stop dev-env-start

# ========================================================================================
# Build Utilities
# ========================================================================================

.PHONY: docker-inspect-multiarch
docker-inspect-multiarch:
@@ -159,41 +245,106 @@ docker-inspect-multiarch:

.PHONY: build-cross-all
build-cross-all:
	@echo "🔧 Building all target architectures..."
	@if ! command -v cross &> /dev/null; then \
		echo "📦 Installing cross..."; \
		cargo install cross; \
	fi
	@echo "💡 On macOS/Windows, use 'make docker-dev' for reliable multi-arch builds"
	@echo "🔨 Generating protobuf code..."
	cargo run --bin gproto || true
	@echo "🔨 Building x86_64-unknown-linux-musl..."
	cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
	./build-rustfs.sh --platform x86_64-unknown-linux-musl
	@echo "🔨 Building aarch64-unknown-linux-gnu..."
	cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
	./build-rustfs.sh --platform aarch64-unknown-linux-gnu
	@echo "✅ All architectures built successfully!"

# ========================================================================================
# Help and Documentation
# ========================================================================================

.PHONY: help-build
help-build:
	@echo "🔨 RustFS build help:"
	@echo ""
	@echo "🚀 Local builds (recommended):"
	@echo "  make build                               # Build the RustFS binary (console assets included by default)"
	@echo "  make build-dev                           # Development-mode build"
	@echo "  make build-musl                          # Build the musl variant"
	@echo "  make build-gnu                           # Build the GNU variant"
	@echo ""
	@echo "🐳 Docker builds:"
	@echo "  make build-docker                        # Build inside a Docker container"
	@echo "  make build-docker BUILD_OS=ubuntu22.04   # Select the build OS"
	@echo ""
	@echo "🏗️ Cross-architecture builds:"
	@echo "  make build-cross-all                     # Build binaries for all architectures"
	@echo ""
	@echo "🔧 Using the build-rustfs.sh script directly:"
	@echo "  ./build-rustfs.sh --help                 # Show script help"
	@echo "  ./build-rustfs.sh --no-console           # Build without console assets"
	@echo "  ./build-rustfs.sh --force-console-update # Force-update console assets"
	@echo "  ./build-rustfs.sh --dev                  # Development-mode build"
	@echo "  ./build-rustfs.sh --sign                 # Sign the binary"
	@echo "  ./build-rustfs.sh --platform x86_64-unknown-linux-musl # Target a specific platform"
	@echo "  ./build-rustfs.sh --skip-verification    # Skip binary verification"
	@echo ""
	@echo "💡 build-rustfs.sh offers more options, smart detection, and binary verification"

.PHONY: help-docker
help-docker:
	@echo "🐳 Docker multi-architecture build help:"
	@echo ""
	@echo "Basic builds:"
	@echo "  make docker-build-multiarch              # Build multi-arch images (no push)"
	@echo "  make docker-build-multiarch-push         # Build and push multi-arch images"
	@echo "🚀 Production image builds (docker-buildx.sh, recommended):"
	@echo "  make docker-buildx                       # Build production multi-arch images (no push)"
	@echo "  make docker-buildx-push                  # Build and push production multi-arch images"
	@echo "  make docker-buildx-version VERSION=v1.0.0       # Build a specific version"
	@echo "  make docker-buildx-push-version VERSION=v1.0.0  # Build and push a specific version"
	@echo ""
	@echo "Versioned builds:"
	@echo "  make docker-build-multiarch-version VERSION=v1.0.0 # Build a specific version"
	@echo "  make docker-push-multiarch-version VERSION=v1.0.0  # Build and push a specific version"
	@echo "🔧 Development/source image builds (local development and testing):"
	@echo "  make docker-dev                          # Build dev multi-arch images (cannot be loaded locally)"
	@echo "  make docker-dev-local                    # Build a dev single-arch image (loaded locally)"
	@echo "  make docker-dev-push REGISTRY=xxx        # Build and push dev images"
	@echo ""
	@echo "Image types:"
	@echo "  make docker-build-ubuntu                 # Build Ubuntu images"
	@echo "  make docker-build-rockylinux             # Build RockyLinux images"
	@echo "  make docker-build-devenv                 # Build dev-environment images"
	@echo "  make docker-build-all-types              # Build all image types"
	@echo "🏗️ Local production image builds (alternative):"
	@echo "  make docker-buildx-production-local      # Build a production single-arch image locally"
	@echo ""
	@echo "Utilities:"
	@echo "📦 Single-architecture builds (traditional):"
	@echo "  make docker-build-production             # Build a single-arch production image"
	@echo "  make docker-build-source                 # Build a single-arch source image"
	@echo ""
	@echo "🚀 Development environment management:"
	@echo "  make dev-env-start                       # Start the dev container environment"
	@echo "  make dev-env-stop                        # Stop the dev container environment"
	@echo "  make dev-env-restart                     # Restart the dev container environment"
	@echo ""
	@echo "🔧 Utilities:"
	@echo "  make build-cross-all                     # Build binaries for all architectures"
	@echo "  make docker-inspect-multiarch IMAGE=xxx  # Inspect an image's architecture support"
	@echo ""
	@echo "Environment variables (needed when pushing):"
	@echo "📋 Environment variables:"
	@echo "  REGISTRY             Image registry address (required for pushing)"
	@echo "  DOCKERHUB_USERNAME   Docker Hub username"
	@echo "  DOCKERHUB_TOKEN      Docker Hub access token"
	@echo "  GITHUB_TOKEN         GitHub access token"
	@echo ""
	@echo "💡 Recommendations:"
	@echo "  - Production: use the docker-buildx* targets (based on precompiled binaries)"
	@echo "  - Local development: use the docker-dev* targets (built from source)"
	@echo "  - Dev environment: use the dev-env-* targets to manage the dev container"

.PHONY: help
help:
	@echo "🦀 RustFS Makefile help:"
	@echo ""
	@echo "📋 Main command categories:"
	@echo "  make help-build                          # Show build-related help"
	@echo "  make help-docker                         # Show Docker-related help"
	@echo ""
	@echo "🔧 Code quality:"
	@echo "  make fmt                                 # Format code"
	@echo "  make clippy                              # Run clippy checks"
	@echo "  make test                                # Run tests"
	@echo "  make pre-commit                          # Run all pre-commit checks"
	@echo ""
	@echo "🚀 Quick start:"
	@echo "  make build                               # Build the RustFS binary"
	@echo "  make docker-dev-local                    # Build a dev Docker image (local)"
	@echo "  make dev-env-start                       # Start the dev environment"
	@echo ""
	@echo "💡 For more help, run 'make help-build' or 'make help-docker'"

69
README.md
@@ -1,14 +1,13 @@
[](https://rustfs.com)

<p align="center">RustFS is a high-performance distributed object storage software built using Rust</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
  <img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
  <img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
  <a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="Featured|HelloGitHub" /></a>
</p>

<p align="center">
@@ -19,20 +18,19 @@
</p>

<p align="center">
  English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
  <!-- Keep these links. Translations will automatically update with the README. -->
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
  <a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>

RustFS is high-performance distributed object storage software built with Rust, one of the most popular programming languages worldwide. Like MinIO, it offers simplicity, S3 compatibility, an open-source nature, and support for data lakes, AI, and big data. In addition, it has a friendlier, more user-friendly open-source license than other storage systems, being released under the Apache License. Because it is written in Rust, RustFS provides faster speed and safer distributed features for high-performance object storage.

> ⚠️ **RustFS is under rapid development. Do NOT use in production environments!**

## Features
@@ -74,7 +72,7 @@ Stress test server parameters

To get started with RustFS, follow these steps:

1. **One-click installation script (Option 1)**

   ```bash
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -83,13 +81,52 @@ To get started with RustFS, follow these steps:

2. **Docker Quick Start (Option 2)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   # Latest stable release
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest

   # Development version (main branch)
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:main-latest

   # Specific version
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:v1.0.0
   ```

3. **Build from Source (Option 3) - Advanced Users**

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are both `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.

For developers who want to build RustFS Docker images from source with multi-architecture support:

```bash
# Build multi-architecture images locally
./docker-buildx.sh --build-arg RELEASE=latest

# Build and push to registry
./docker-buildx.sh --push

# Build specific version
./docker-buildx.sh --release v1.0.0 --push

# Build for custom registry
./docker-buildx.sh --registry your-registry.com --namespace yourname --push
```

The `docker-buildx.sh` script supports:

- **Multi-architecture builds**: `linux/amd64`, `linux/arm64`
- **Automatic version detection**: Uses git tags or commit hashes
- **Registry flexibility**: Supports Docker Hub, GitHub Container Registry, etc.
- **Build optimization**: Includes caching and parallel builds

You can also use Make targets for convenience:

```bash
make docker-buildx                         # Build locally
make docker-buildx-push                    # Build and push
make docker-buildx-version VERSION=v1.0.0  # Build specific version
make help-docker                           # Show all Docker-related commands
```

4. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are both `rustfsadmin`.
5. **Create a Bucket**: Use the console to create a new bucket for your objects.
6. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
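
As a concrete illustration of the last step, here is a minimal sketch of interacting with a fresh instance through the S3 API using the AWS CLI. It assumes the default `rustfsadmin` / `rustfsadmin` credentials and a hypothetical bucket and file name:

```bash
export AWS_ACCESS_KEY_ID=rustfsadmin
export AWS_SECRET_ACCESS_KEY=rustfsadmin

# Create a bucket, upload a file, and list the bucket contents
aws --endpoint-url http://localhost:9000 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:9000 s3 cp ./hello.txt s3://demo-bucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://demo-bucket/
```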

## Documentation

@@ -122,7 +159,7 @@ If you have any questions or need assistance, you can:

RustFS is a community-driven project, and we appreciate all contributions. Check out the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page to see the amazing people who have helped make RustFS better.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
  <img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>

## License

10
README_ZH.md
@@ -7,6 +7,7 @@
  <a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
  <img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
  <img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
  <a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="Featured|HelloGitHub" /></a>
</p>

<p align="center">
@@ -61,7 +62,7 @@ RustFS 是一个使用 Rust(全球最受欢迎的编程语言之一)构建

要开始使用 RustFS,请按照以下步骤操作:

1. **一键脚本快速启动 (方案一)**

   ```bash
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -70,11 +71,10 @@ RustFS 是一个使用 Rust(全球最受欢迎的编程语言之一)构建

2. **Docker快速启动(方案二)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs
   ```

3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9001` 以访问 RustFS 控制台,默认的用户名和密码是 `rustfsadmin`。
3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9000` 以访问 RustFS 控制台,默认的用户名和密码是 `rustfsadmin`。
4. **创建存储桶**:使用控制台为您的对象创建新的存储桶。
5. **上传对象**:您可以直接通过控制台上传文件,或使用 S3 兼容的 API 与您的 RustFS 实例交互。

@@ -109,7 +109,7 @@ RustFS 是一个使用 Rust(全球最受欢迎的编程语言之一)构建
RustFS 是一个社区驱动的项目,我们感谢所有的贡献。查看[贡献者](https://github.com/rustfs/rustfs/graphs/contributors)页面,了解帮助 RustFS 变得更好的杰出人员。

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
  <img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>

## 许可证

564
build-rustfs.sh
Executable file
@@ -0,0 +1,564 @@
#!/bin/bash

# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Auto-detect current platform
detect_platform() {
    local arch=$(uname -m)
    local os=$(uname -s | tr '[:upper:]' '[:lower:]')

    case "$os" in
        "linux")
            case "$arch" in
                "x86_64")
                    echo "x86_64-unknown-linux-musl"
                    ;;
                "aarch64"|"arm64")
                    echo "aarch64-unknown-linux-musl"
                    ;;
                "armv7l")
                    echo "armv7-unknown-linux-musleabihf"
                    ;;
                *)
                    echo "unknown-platform"
                    ;;
            esac
            ;;
        "darwin")
            case "$arch" in
                "x86_64")
                    echo "x86_64-apple-darwin"
                    ;;
                "arm64"|"aarch64")
                    echo "aarch64-apple-darwin"
                    ;;
                *)
                    echo "unknown-platform"
                    ;;
            esac
            ;;
        *)
            echo "unknown-platform"
            ;;
    esac
}

# Cross-platform SHA256 checksum generation
generate_sha256() {
    local file="$1"
    local output_file="$2"
    local os=$(uname -s | tr '[:upper:]' '[:lower:]')

    case "$os" in
        "linux")
            if command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            elif command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found (sha256sum or shasum)"
                return 1
            fi
            ;;
        "darwin")
            if command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            elif command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found (shasum or sha256sum)"
                return 1
            fi
            ;;
        *)
            # Try common commands in order
            if command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            elif command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found"
                return 1
            fi
            ;;
    esac
}

# Default values
OUTPUT_DIR="target/release"
PLATFORM=$(detect_platform) # Auto-detect current platform
BINARY_NAME="rustfs"
BUILD_TYPE="release"
SIGN=false
WITH_CONSOLE=true
FORCE_CONSOLE_UPDATE=false
CONSOLE_VERSION="latest"
SKIP_VERIFICATION=false
CUSTOM_PLATFORM=""

# Print usage
usage() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Description:"
    echo "  Build RustFS binary for the current platform. Designed for CI/CD pipelines"
    echo "  where different runners build platform-specific binaries natively."
    echo "  Includes automatic verification to ensure the built binary is functional."
    echo ""
    echo "Options:"
    echo "  -o, --output-dir DIR        Output directory (default: target/release)"
    echo "  -b, --binary-name NAME      Binary name (default: rustfs)"
    echo "  -p, --platform TARGET       Target platform (default: auto-detect)"
    echo "  --dev                       Build in dev mode"
    echo "  --sign                      Sign binaries after build"
    echo "  --with-console              Download console static assets (default)"
    echo "  --no-console                Skip console static assets"
    echo "  --force-console-update      Force update console assets even if they exist"
    echo "  --console-version VERSION   Console version to download (default: latest)"
    echo "  --skip-verification         Skip binary verification after build"
    echo "  -h, --help                  Show this help message"
    echo ""
    echo "Examples:"
    echo "  $0                          # Build for current platform (includes console assets)"
    echo "  $0 --dev                    # Development build"
    echo "  $0 --sign                   # Build and sign binary (release CI)"
    echo "  $0 --no-console             # Build without console static assets"
    echo "  $0 --force-console-update   # Force update console assets"
    echo "  $0 --platform x86_64-unknown-linux-musl # Build for specific platform"
    echo "  $0 --skip-verification      # Skip binary verification (for cross-compilation)"
    echo ""
    echo "Detected platform: $(detect_platform)"
    echo "CI Usage: Run this script on each platform's runner to build native binaries"
}

# Print colored message
print_message() {
    local color=$1
    local message=$2
    echo -e "${color}${message}${NC}"
}

# Get version from git
get_version() {
    if git describe --abbrev=0 --tags >/dev/null 2>&1; then
        git describe --abbrev=0 --tags
    else
        git rev-parse --short HEAD
    fi
}

# Setup rust environment
setup_rust_environment() {
    print_message $BLUE "🔧 Setting up Rust environment..."

    # Install required target for current platform
    print_message $YELLOW "Installing target: $PLATFORM"
    rustup target add "$PLATFORM"

    # Set up environment variables for musl targets
    if [[ "$PLATFORM" == *"musl"* ]]; then
        print_message $YELLOW "Setting up environment for musl target..."
        export RUSTFLAGS="-C target-feature=-crt-static"

        # For cargo-zigbuild, set up additional environment variables
        if command -v cargo-zigbuild &> /dev/null; then
            print_message $YELLOW "Configuring cargo-zigbuild for musl target..."

            # Set environment variables for better musl support
            export CC_x86_64_unknown_linux_musl="zig cc -target x86_64-linux-musl"
            export CXX_x86_64_unknown_linux_musl="zig c++ -target x86_64-linux-musl"
            export AR_x86_64_unknown_linux_musl="zig ar"
            export CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target x86_64-linux-musl"

            export CC_aarch64_unknown_linux_musl="zig cc -target aarch64-linux-musl"
            export CXX_aarch64_unknown_linux_musl="zig c++ -target aarch64-linux-musl"
            export AR_aarch64_unknown_linux_musl="zig ar"
            export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target aarch64-linux-musl"

            # Set environment variables for zstd-sys to avoid target parsing issues
            export ZSTD_SYS_USE_PKG_CONFIG=1
            export PKG_CONFIG_ALLOW_CROSS=1
        fi
    fi

    # Install required tools
    if [ "$SIGN" = true ]; then
        if ! command -v minisign &> /dev/null; then
            print_message $YELLOW "Installing minisign for binary signing..."
            cargo install minisign
        fi
    fi
}

# Download console static assets
download_console_assets() {
    local static_dir="rustfs/static"
    local console_exists=false

    # Check if console assets already exist
    if [ -d "$static_dir" ] && [ -f "$static_dir/index.html" ]; then
        console_exists=true
        local static_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
        print_message $YELLOW "Console static assets already exist ($static_size)"
    fi

    # Determine if we need to download
    local should_download=false
    if [ "$WITH_CONSOLE" = true ]; then
        if [ "$console_exists" = false ]; then
            print_message $BLUE "🎨 Console assets not found, downloading..."
            should_download=true
        elif [ "$FORCE_CONSOLE_UPDATE" = true ]; then
            print_message $BLUE "🎨 Force updating console assets..."
            should_download=true
        else
            print_message $GREEN "✅ Console assets already available, skipping download"
        fi
    else
        if [ "$console_exists" = true ]; then
            print_message $GREEN "✅ Using existing console assets"
        else
            print_message $YELLOW "⚠️ Console assets not found. Use --with-console to download them."
        fi
    fi

    if [ "$should_download" = true ]; then
        print_message $BLUE "📥 Downloading console static assets..."

        # Create static directory
        mkdir -p "$static_dir"

        # Download from GitHub Releases (consistent with Docker build)
        local download_url
        if [ "$CONSOLE_VERSION" = "latest" ]; then
            print_message $YELLOW "Getting latest console release info..."
            # For now, use dl.rustfs.com as fallback until GitHub Releases includes console assets
            download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip"
        else
            download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-${CONSOLE_VERSION}.zip"
        fi

        print_message $YELLOW "Downloading from: $download_url"

        # Download with retries
        local temp_file="console-assets-temp.zip"
        local download_success=false

        for i in {1..3}; do
            if curl -L "$download_url" -o "$temp_file" --retry 3 --retry-delay 5 --max-time 300; then
                download_success=true
                break
            else
                print_message $YELLOW "Download attempt $i failed, retrying..."
                sleep 2
            fi
        done

        if [ "$download_success" = true ]; then
            # Verify the downloaded file
            if [ -f "$temp_file" ] && [ -s "$temp_file" ]; then
                print_message $BLUE "📦 Extracting console assets..."

                # Extract to static directory
                if unzip -o "$temp_file" -d "$static_dir"; then
                    rm "$temp_file"
                    local final_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
                    print_message $GREEN "✅ Console assets downloaded successfully ($final_size)"
                else
                    print_message $RED "❌ Failed to extract console assets"
                    rm -f "$temp_file"
                    return 1
                fi
            else
                print_message $RED "❌ Downloaded file is empty or invalid"
                rm -f "$temp_file"
                return 1
            fi
        else
            print_message $RED "❌ Failed to download console assets after 3 attempts"
            print_message $YELLOW "💡 Console assets are optional. Build will continue without them."
            rm -f "$temp_file"
        fi
    fi
}

# Verify binary functionality
verify_binary() {
    local binary_path="$1"

    # Check if binary exists
    if [ ! -f "$binary_path" ]; then
        print_message $RED "❌ Binary file not found: $binary_path"
        return 1
    fi

    # Check if binary is executable
    if [ ! -x "$binary_path" ]; then
        print_message $RED "❌ Binary is not executable: $binary_path"
        return 1
    fi

    # Check basic functionality - try to run help command
    print_message $YELLOW "  Testing --help command..."
    if ! "$binary_path" --help >/dev/null 2>&1; then
        print_message $RED "❌ Binary failed to run --help command"
        return 1
    fi

    # Check version command
    print_message $YELLOW "  Testing --version command..."
    if ! "$binary_path" --version >/dev/null 2>&1; then
        print_message $YELLOW "⚠️ Binary does not support --version command (this is optional)"
    fi

    # Try to get some basic info about the binary
    local file_info=$(file "$binary_path" 2>/dev/null || echo "unknown")
    print_message $YELLOW "  Binary info: $file_info"

    # Check if it's a valid ELF/Mach-O binary
    if command -v readelf >/dev/null 2>&1; then
        if readelf -h "$binary_path" >/dev/null 2>&1; then
            print_message $YELLOW "  ELF binary structure: valid"
        fi
    elif command -v otool >/dev/null 2>&1; then
        if otool -h "$binary_path" >/dev/null 2>&1; then
            print_message $YELLOW "  Mach-O binary structure: valid"
        fi
    fi

    return 0
}

# Build binary for current platform
build_binary() {
    local version=$(get_version)
    local output_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"

    print_message $BLUE "🏗️ Building for platform: $PLATFORM"
    print_message $YELLOW "  Version: $version"
    print_message $YELLOW "  Output: $output_file"

    # Create output directory
    mkdir -p "${OUTPUT_DIR}/${PLATFORM}"

    # Simple build logic matching the working version (4fb4b353)
    # Force rebuild by touching build.rs
    touch rustfs/build.rs

    # Determine build command based on platform and cross-compilation needs
    local build_cmd=""
    local current_platform=$(detect_platform)

    print_message $BLUE "📦 Using working version build logic..."

    # Check if we need cross-compilation
    if [ "$PLATFORM" != "$current_platform" ]; then
        # Cross-compilation needed
        if [[ "$PLATFORM" == *"apple-darwin"* ]]; then
            print_message $RED "❌ macOS cross-compilation not supported"
            print_message $YELLOW "💡 macOS targets must be built natively on macOS runners"
            return 1
        elif [[ "$PLATFORM" == *"windows"* ]]; then
            # Use cross for Windows ARM64
            if ! command -v cross &> /dev/null; then
                print_message $YELLOW "📦 Installing cross tool..."
                cargo install cross --git https://github.com/cross-rs/cross
            fi
            build_cmd="cross build"
        else
            # Use zigbuild for Linux ARM64 (matches working version)
            if ! command -v cargo-zigbuild &> /dev/null; then
                print_message $RED "❌ cargo-zigbuild not found. Please install it first."
                return 1
            fi
            build_cmd="cargo zigbuild"
        fi
    else
        # Native compilation
        build_cmd="cargo build"
    fi

    if [ "$BUILD_TYPE" = "release" ]; then
        build_cmd+=" --release"
    fi

    build_cmd+=" --target $PLATFORM"
    build_cmd+=" -p rustfs --bins"

    print_message $BLUE "📦 Executing: $build_cmd"

    # Execute build (this matches exactly what the working version does)
    if eval $build_cmd; then
        print_message $GREEN "✅ Successfully built for $PLATFORM"

        # Copy binary to output directory
        cp "target/${PLATFORM}/${BUILD_TYPE}/${BINARY_NAME}" "$output_file"

        # Generate checksums
        print_message $BLUE "🔐 Generating checksums..."
        (cd "${OUTPUT_DIR}/${PLATFORM}" && generate_sha256 "${BINARY_NAME}" "${BINARY_NAME}.sha256sum")

        # Verify binary functionality (if not skipped)
        if [ "$SKIP_VERIFICATION" = false ]; then
            print_message $BLUE "🔍 Verifying binary functionality..."
            if verify_binary "$output_file"; then
                print_message $GREEN "✅ Binary verification passed"
            else
                print_message $RED "❌ Binary verification failed"
                return 1
            fi
        else
            print_message $YELLOW "⚠️ Binary verification skipped by user request"
        fi

        # Sign binary if requested
        if [ "$SIGN" = true ]; then
            print_message $BLUE "✍️ Signing binary..."
            (cd "${OUTPUT_DIR}/${PLATFORM}" && minisign -S -m "${BINARY_NAME}" -s ~/.minisign/minisign.key)
        fi

        print_message $GREEN "✅ Build completed successfully"
    else
        print_message $RED "❌ Failed to build for $PLATFORM"
        return 1
    fi
}

# Main build function
build_rustfs() {
    local version=$(get_version)

    print_message $BLUE "🚀 Starting RustFS binary build process..."
    print_message $YELLOW "  Version: $version"
    print_message $YELLOW "  Platform: $PLATFORM"
    print_message $YELLOW "  Output Directory: $OUTPUT_DIR"
    print_message $YELLOW "  Build Type: $BUILD_TYPE"
    print_message $YELLOW "  Sign: $SIGN"
    print_message $YELLOW "  With Console: $WITH_CONSOLE"
    if [ "$WITH_CONSOLE" = true ]; then
        print_message $YELLOW "  Console Version: $CONSOLE_VERSION"
        print_message $YELLOW "  Force Console Update: $FORCE_CONSOLE_UPDATE"
    fi
    print_message $YELLOW "  Skip Verification: $SKIP_VERIFICATION"
    echo ""

    # Setup environment
    setup_rust_environment
    echo ""

    # Download console assets if requested
    download_console_assets
    echo ""

    # Build binary
    build_binary
    echo ""

    print_message $GREEN "🎉 Build process completed successfully!"

    # Show built binary
    local binary_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
    if [ -f "$binary_file" ]; then
        local size=$(ls -lh "$binary_file" | awk '{print $5}')
        print_message $BLUE "📋 Built binary: $binary_file ($size)"
    fi
}

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -o|--output-dir)
            OUTPUT_DIR="$2"
            shift 2
            ;;
        -b|--binary-name)
            BINARY_NAME="$2"
            shift 2
            ;;
        -p|--platform)
            CUSTOM_PLATFORM="$2"
            shift 2
            ;;
        --dev)
            BUILD_TYPE="debug"
            shift
            ;;
        --sign)
            SIGN=true
            shift
            ;;
        --with-console)
            WITH_CONSOLE=true
            shift
            ;;
        --no-console)
            WITH_CONSOLE=false
            shift
            ;;
        --force-console-update)
            FORCE_CONSOLE_UPDATE=true
            WITH_CONSOLE=true # Auto-enable download when forcing update
            shift
            ;;
        --console-version)
            CONSOLE_VERSION="$2"
            shift 2
            ;;
        --skip-verification)
            SKIP_VERIFICATION=true
            shift
            ;;
        -h|--help)
            usage
            exit 0
            ;;
        *)
            print_message $RED "❌ Unknown option: $1"
            usage
            exit 1
            ;;
    esac
done

# Main execution
main() {
    print_message $BLUE "🦀 RustFS Binary Build Script"
    echo ""

    # Check if we're in a Rust project
    if [ ! -f "Cargo.toml" ]; then
        print_message $RED "❌ No Cargo.toml found. Are you in a Rust project directory?"
        exit 1
    fi

    # Override platform if specified
    if [ -n "$CUSTOM_PLATFORM" ]; then
        PLATFORM="$CUSTOM_PLATFORM"
        print_message $YELLOW "🎯 Using specified platform: $PLATFORM"

        # Auto-enable skip verification for cross-compilation
        if [ "$PLATFORM" != "$(detect_platform)" ]; then
            SKIP_VERIFICATION=true
            print_message $YELLOW "⚠️ Cross-compilation detected, enabling --skip-verification"
        fi
    fi

    # Start build process
    build_rustfs
}

# Run main function
main

@@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

clear

# Get the current platform architecture
ARCH=$(uname -m)

# Set the target directory according to the architecture
if [ "$ARCH" == "x86_64" ]; then
    TARGET_DIR="target/x86_64"
elif [ "$ARCH" == "aarch64" ]; then
    TARGET_DIR="target/arm64"
else
    TARGET_DIR="target/unknown"
fi

# Set CARGO_TARGET_DIR and build the project
CARGO_TARGET_DIR=$TARGET_DIR RUSTFLAGS="-C link-arg=-fuse-ld=mold" cargo build --release --package rustfs

echo -e "\a"
echo -e "\a"
echo -e "\a"
@@ -26,7 +26,6 @@ dioxus = { workspace = true, features = ["router"] }
dirs = { workspace = true }
hex = { workspace = true }
keyring = { workspace = true }
lazy_static = { workspace = true }
rfd = { workspace = true }
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
rust-i18n = { workspace = true }

@@ -37,7 +37,9 @@ copyright = "Copyright 2025 rustfs.com"

icon = [
    "assets/icons/icon.icns",
    "assets/icons/icon.ico"
    "assets/icons/icon.ico",
    "assets/icons/icon.png",
    "assets/icons/rustfs-icon.png",
]
#[bundle.macos]
#provider_short_name = "RustFs"
Before Width: | Height: | Size: 23 KiB After Width: | Height: | Size: 23 KiB |
|
Before Width: | Height: | Size: 23 KiB After Width: | Height: | Size: 23 KiB |
BIN
cli/rustfs-gui/assets/icons/rustfs-icon.png
Normal file
| After Width: | Height: | Size: 23 KiB |
BIN
cli/rustfs-gui/assets/rustfs-icon.png
Normal file
| After Width: | Height: | Size: 23 KiB |
@@ -1,20 +1,15 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z"
fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z"
fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z"
fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z"
fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z"
fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>
| Before Width: | Height: | Size: 3.5 KiB After Width: | Height: | Size: 3.4 KiB |
| Before Width: | Height: | Size: 34 KiB |
@@ -1,15 +0,0 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>
| Before Width: | Height: | Size: 3.4 KiB |
@@ -14,12 +14,12 @@

use crate::utils::RustFSConfig;
use dioxus::logger::tracing::{debug, error, info};
use lazy_static::lazy_static;
use rust_embed::RustEmbed;
use sha2::{Digest, Sha256};
use std::error::Error;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::sync::LazyLock;
use std::time::Duration;
use tokio::fs;
use tokio::fs::File;
@@ -31,15 +31,13 @@ use tokio::sync::{Mutex, mpsc};
#[folder = "$CARGO_MANIFEST_DIR/embedded-rustfs/"]
struct Asset;

// Use `lazy_static` to cache the checksum of embedded resources
lazy_static! {
    static ref RUSTFS_HASH: Mutex<String> = {
        let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
        let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
        let hash = hex::encode(Sha256::digest(&rustfs_data.data));
        Mutex::new(hash)
    };
}
// Use `LazyLock` to cache the checksum of embedded resources
static RUSTFS_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| {
    let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
    let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
    let hash = hex::encode(Sha256::digest(&rustfs_data.data));
    Mutex::new(hash)
});

/// Service command
/// This enum represents the commands that can be sent to the service manager

41
crates/ahm/Cargo.toml
Normal file
@@ -0,0 +1,41 @@
[package]
name = "rustfs-ahm"
version.workspace = true
edition.workspace = true
authors = ["RustFS Team"]
license.workspace = true
description = "RustFS AHM (Automatic Health Management) Scanner"
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation = "https://docs.rs/rustfs-ahm/latest/rustfs_ahm/"
keywords = ["RustFS", "AHM", "health-management", "scanner", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]

[dependencies]
rustfs-ecstore = { workspace = true }
rustfs-common = { workspace = true }
rustfs-filemeta = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-utils = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
bytes = { workspace = true }
time = { workspace = true, features = ["serde"] }
uuid = { workspace = true, features = ["v4", "serde"] }
anyhow = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
url = { workspace = true }
rustfs-lock = { workspace = true }

lazy_static = { workspace = true }

[dev-dependencies]
rmp-serde = { workspace = true }
tokio-test = { workspace = true }
serde_json = { workspace = true }
45
crates/ahm/src/error.rs
Normal file
@@ -0,0 +1,45 @@
|
||||
// Copyright 2024 RustFS Team
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
use thiserror::Error;
|
||||
|
||||
#[derive(Debug, Error)]
|
||||
pub enum Error {
|
||||
#[error("I/O error: {0}")]
|
||||
Io(#[from] std::io::Error),
|
||||
|
||||
#[error("Storage error: {0}")]
|
||||
Storage(#[from] rustfs_ecstore::error::Error),
|
||||
|
||||
#[error("Configuration error: {0}")]
|
||||
Config(String),
|
||||
|
||||
#[error("Scanner error: {0}")]
|
||||
Scanner(String),
|
||||
|
||||
#[error("Metrics error: {0}")]
|
||||
Metrics(String),
|
||||
|
||||
#[error(transparent)]
|
||||
Other(#[from] anyhow::Error),
|
||||
}
|
||||
|
||||
pub type Result<T, E = Error> = std::result::Result<T, E>;
|
||||
|
||||
// Implement conversion from ahm::Error to std::io::Error for use in main.rs
|
||||
impl From<Error> for std::io::Error {
|
||||
fn from(err: Error) -> Self {
|
||||
std::io::Error::other(err)
|
||||
}
|
||||
}
|
||||
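The `From<Error> for std::io::Error` impl exists so callers whose return type is `io::Result` can propagate AHM errors with `?`. A hedged sketch of that call-site pattern (the `run` helper is hypothetical):

```rust
use std::io;

use rustfs_ahm::Error; // re-exported at the crate root (see lib.rs below)

// Hypothetical helper that fails with an AHM error.
fn run() -> Result<(), Error> {
    Err(Error::Config("example failure".to_string()))
}

fn main() -> io::Result<()> {
    // `?` converts ahm::Error into io::Error via the From impl above.
    run()?;
    Ok(())
}
```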
54
crates/ahm/src/lib.rs
Normal file
@@ -0,0 +1,54 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::sync::OnceLock;
use tokio_util::sync::CancellationToken;

pub mod error;
pub mod scanner;

pub use error::{Error, Result};
pub use scanner::{
    BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner, ScannerMetrics, load_data_usage_from_backend,
    store_data_usage_in_backend,
};

// Global cancellation token for AHM services (scanner and other background tasks)
static GLOBAL_AHM_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

/// Initialize the global AHM services cancellation token
pub fn init_ahm_services_cancel_token(cancel_token: CancellationToken) -> Result<()> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN
        .set(cancel_token)
        .map_err(|_| Error::Config("AHM services cancel token already initialized".to_string()))
}

/// Get the global AHM services cancellation token
pub fn get_ahm_services_cancel_token() -> Option<&'static CancellationToken> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get()
}

/// Create and initialize the global AHM services cancellation token
pub fn create_ahm_services_cancel_token() -> CancellationToken {
    let cancel_token = CancellationToken::new();
    init_ahm_services_cancel_token(cancel_token.clone()).expect("AHM services cancel token already initialized");
    cancel_token
}

/// Shutdown all AHM services gracefully
pub fn shutdown_ahm_services() {
    if let Some(cancel_token) = GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get() {
        cancel_token.cancel();
    }
}
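The intended lifecycle for the token above: create it once at startup, hand clones to background tasks, then cancel everything on shutdown. A minimal sketch using only the public functions defined in this file, assuming a tokio runtime:

```rust
use std::time::Duration;

use rustfs_ahm::{create_ahm_services_cancel_token, shutdown_ahm_services};

#[tokio::main]
async fn main() {
    // Create and register the global token once at startup.
    let token = create_ahm_services_cancel_token();

    // Background tasks select on their clone of the token.
    let worker = tokio::spawn({
        let token = token.clone();
        async move {
            tokio::select! {
                _ = token.cancelled() => println!("scanner stopping"),
                _ = tokio::time::sleep(Duration::from_secs(3600)) => {}
            }
        }
    });

    // One call cancels every AHM service that holds a clone.
    shutdown_ahm_services();
    worker.await.unwrap();
}
```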
1247
crates/ahm/src/scanner/data_scanner.rs
Normal file

671
crates/ahm/src/scanner/data_usage.rs
Normal file
@@ -0,0 +1,671 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{collections::HashMap, sync::Arc, time::SystemTime};

use rustfs_ecstore::{bucket::metadata_sys::get_replication_config, config::com::read_config, store::ECStore};
use rustfs_utils::path::SLASH_SEPARATOR;
use serde::{Deserialize, Serialize};
use tracing::{error, info, warn};

use crate::error::{Error, Result};

// Data usage storage constants
pub const DATA_USAGE_ROOT: &str = SLASH_SEPARATOR;
const DATA_USAGE_OBJ_NAME: &str = ".usage.json";
const DATA_USAGE_BLOOM_NAME: &str = ".bloomcycle.bin";
pub const DATA_USAGE_CACHE_NAME: &str = ".usage-cache.bin";

// Data usage storage paths
lazy_static::lazy_static! {
    pub static ref DATA_USAGE_BUCKET: String = format!("{}{}{}",
        rustfs_ecstore::disk::RUSTFS_META_BUCKET,
        SLASH_SEPARATOR,
        rustfs_ecstore::disk::BUCKET_META_PREFIX
    );
    pub static ref DATA_USAGE_OBJ_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_OBJ_NAME
    );
    pub static ref DATA_USAGE_BLOOM_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_BLOOM_NAME
    );
}

/// Bucket target usage info provides replication statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketTargetUsageInfo {
    pub replication_pending_size: u64,
    pub replication_failed_size: u64,
    pub replicated_size: u64,
    pub replica_size: u64,
    pub replication_pending_count: u64,
    pub replication_failed_count: u64,
    pub replicated_count: u64,
}

/// Bucket usage info provides bucket-level statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketUsageInfo {
    pub size: u64,
    // The following five fields suffixed with V1 are here for backward compatibility
    // Total size for objects that have not yet been replicated
    pub replication_pending_size_v1: u64,
    // Total size for objects that have witnessed one or more failures and will be retried
    pub replication_failed_size_v1: u64,
    // Total size for objects that have been replicated to the destination
    pub replicated_size_v1: u64,
    // Total number of objects pending replication
    pub replication_pending_count_v1: u64,
    // Total number of objects that failed replication
    pub replication_failed_count_v1: u64,

    pub objects_count: u64,
    pub object_size_histogram: HashMap<String, u64>,
    pub object_versions_histogram: HashMap<String, u64>,
    pub versions_count: u64,
    pub delete_markers_count: u64,
    pub replica_size: u64,
    pub replica_count: u64,
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,
}

/// DataUsageInfo represents data usage stats of the underlying storage
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct DataUsageInfo {
    /// Total capacity
    pub total_capacity: u64,
    /// Total used capacity
    pub total_used_capacity: u64,
    /// Total free capacity
    pub total_free_capacity: u64,

    /// LastUpdate is the timestamp of when the data usage info was last updated
    pub last_update: Option<SystemTime>,

    /// Objects total count across all buckets
    pub objects_total_count: u64,
    /// Versions total count across all buckets
    pub versions_total_count: u64,
    /// Delete markers total count across all buckets
    pub delete_markers_total_count: u64,
    /// Objects total size across all buckets
    pub objects_total_size: u64,
    /// Replication info across all buckets
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,

    /// Total number of buckets in this cluster
    pub buckets_count: u64,
    /// Buckets usage info provides the following information across all buckets
    pub buckets_usage: HashMap<String, BucketUsageInfo>,
    /// Deprecated, kept here for backward compatibility reasons
    pub bucket_sizes: HashMap<String, u64>,
}

/// Size summary for a single object or group of objects
#[derive(Debug, Default, Clone)]
pub struct SizeSummary {
    /// Total size
    pub total_size: usize,
    /// Number of versions
    pub versions: usize,
    /// Number of delete markers
    pub delete_markers: usize,
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Replica size
    pub replica_size: usize,
    /// Replica count
    pub replica_count: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
    /// Replication target stats
    pub repl_target_stats: HashMap<String, ReplTargetSizeSummary>,
}

/// Replication target size summary
#[derive(Debug, Default, Clone)]
pub struct ReplTargetSizeSummary {
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
}

impl DataUsageInfo {
    /// Create a new DataUsageInfo
    pub fn new() -> Self {
        Self::default()
    }

    /// Add object metadata to data usage statistics
    pub fn add_object(&mut self, object_path: &str, meta_object: &rustfs_filemeta::MetaObject) {
        // This method is kept for backward compatibility.
        // For accurate version counting, use add_object_from_file_meta instead.
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += meta_object.size as u64;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += 1; // Simplified: assume 1 version per object

            // Update size histogram
            let total_size = meta_object.size as u64;
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if total_size >= min_size && total_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram (simplified - count as single version)
            *bucket_usage
                .object_versions_histogram
                .entry("SINGLE_VERSION".to_string())
                .or_insert(0) += 1;
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: meta_object.size as u64,
                objects_count: 1,
                versions_count: 1,
                ..Default::default()
            };
            bucket_usage.object_size_histogram.insert("0-1KB".to_string(), 1);
            bucket_usage.object_versions_histogram.insert("SINGLE_VERSION".to_string(), 1);
            self.buckets_usage.insert(bucket_name, bucket_usage);
        }

        // Update global statistics
        self.objects_total_size += meta_object.size as u64;
        self.objects_total_count += 1;
        self.versions_total_count += 1;
    }

    /// Add object from FileMeta for accurate version counting
    pub fn add_object_from_file_meta(&mut self, object_path: &str, file_meta: &rustfs_filemeta::FileMeta) {
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Calculate accurate statistics from all versions
        let mut total_size = 0u64;
        let mut versions_count = 0u64;
        let mut delete_markers_count = 0u64;
        let mut latest_object_size = 0u64;

        // Process all versions to get accurate counts
        for version in &file_meta.versions {
            match rustfs_filemeta::FileMetaVersion::try_from(version.clone()) {
                Ok(ver) => {
                    if let Some(obj) = ver.object {
                        total_size += obj.size as u64;
                        versions_count += 1;
                        latest_object_size = obj.size as u64; // Keep track of latest object size
                    } else if ver.delete_marker.is_some() {
                        delete_markers_count += 1;
                    }
                }
                Err(_) => {
                    // Skip invalid versions
                    continue;
                }
            }
        }

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += total_size;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += versions_count;
            bucket_usage.delete_markers_count += delete_markers_count;

            // Update size histogram based on latest object size
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram based on actual version count
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    *bucket_usage
                        .object_versions_histogram
                        .entry(range_name.to_string())
                        .or_insert(0) += 1;
                    break;
                }
            }
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: total_size,
                objects_count: 1,
                versions_count,
                delete_markers_count,
                ..Default::default()
            };

            // Set size histogram
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    bucket_usage.object_size_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            // Set version histogram
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    bucket_usage.object_versions_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            self.buckets_usage.insert(bucket_name, bucket_usage);
            // Update buckets count when adding new bucket
            self.buckets_count = self.buckets_usage.len() as u64;
        }

        // Update global statistics
        self.objects_total_size += total_size;
        self.objects_total_count += 1;
        self.versions_total_count += versions_count;
        self.delete_markers_total_count += delete_markers_count;
    }

    /// Extract bucket name from object path
    fn extract_bucket_from_path(&self, object_path: &str) -> Result<String> {
        let parts: Vec<&str> = object_path.split('/').collect();
        if parts.is_empty() {
            return Err(Error::Scanner("Invalid object path: empty".to_string()));
        }
        Ok(parts[0].to_string())
    }

    /// Update capacity information
    pub fn update_capacity(&mut self, total: u64, used: u64, free: u64) {
        self.total_capacity = total;
        self.total_used_capacity = used;
        self.total_free_capacity = free;
        self.last_update = Some(SystemTime::now());
    }

    /// Add bucket usage info
    pub fn add_bucket_usage(&mut self, bucket: String, usage: BucketUsageInfo) {
        self.buckets_usage.insert(bucket.clone(), usage);
        self.buckets_count = self.buckets_usage.len() as u64;
        self.last_update = Some(SystemTime::now());
    }

    /// Get bucket usage info
    pub fn get_bucket_usage(&self, bucket: &str) -> Option<&BucketUsageInfo> {
        self.buckets_usage.get(bucket)
    }

    /// Calculate total statistics from all buckets
    pub fn calculate_totals(&mut self) {
        self.objects_total_count = 0;
        self.versions_total_count = 0;
        self.delete_markers_total_count = 0;
        self.objects_total_size = 0;

        for usage in self.buckets_usage.values() {
            self.objects_total_count += usage.objects_count;
            self.versions_total_count += usage.versions_count;
            self.delete_markers_total_count += usage.delete_markers_count;
            self.objects_total_size += usage.size;
        }
    }

    /// Merge another DataUsageInfo into this one
    pub fn merge(&mut self, other: &DataUsageInfo) {
        // Merge bucket usage
        for (bucket, usage) in &other.buckets_usage {
            if let Some(existing) = self.buckets_usage.get_mut(bucket) {
                existing.merge(usage);
            } else {
                self.buckets_usage.insert(bucket.clone(), usage.clone());
            }
        }

        // Recalculate totals
        self.calculate_totals();

        // Ensure buckets_count stays consistent with buckets_usage
        self.buckets_count = self.buckets_usage.len() as u64;

        // Update last update time
        if let Some(other_update) = other.last_update {
            if self.last_update.is_none() || other_update > self.last_update.unwrap() {
                self.last_update = Some(other_update);
            }
        }
    }
}
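Since `merge` folds per-bucket statistics together and then recomputes the totals, it can combine usage reports (for example, per-node or per-scan snapshots) into a single cluster view. A small sketch using only the APIs defined above:

```rust
use rustfs_ahm::{BucketUsageInfo, DataUsageInfo};

fn main() {
    // Two usage snapshots that both observed the "photos" bucket.
    let mut a = DataUsageInfo::new();
    a.add_bucket_usage(
        "photos".to_string(),
        BucketUsageInfo { size: 100, objects_count: 1, ..Default::default() },
    );

    let mut b = DataUsageInfo::new();
    b.add_bucket_usage(
        "photos".to_string(),
        BucketUsageInfo { size: 50, objects_count: 2, ..Default::default() },
    );

    a.merge(&b);

    // Bucket-level stats are summed; totals are recalculated.
    assert_eq!(a.buckets_count, 1);
    assert_eq!(a.get_bucket_usage("photos").unwrap().size, 150);
    assert_eq!(a.objects_total_count, 3);
}
```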
impl BucketUsageInfo {
    /// Create a new BucketUsageInfo
    pub fn new() -> Self {
        Self::default()
    }

    /// Add size summary to this bucket usage
    pub fn add_size_summary(&mut self, summary: &SizeSummary) {
        self.size += summary.total_size as u64;
        self.versions_count += summary.versions as u64;
        self.delete_markers_count += summary.delete_markers as u64;
        self.replica_size += summary.replica_size as u64;
        self.replica_count += summary.replica_count as u64;
    }

    /// Merge another BucketUsageInfo into this one
    pub fn merge(&mut self, other: &BucketUsageInfo) {
        self.size += other.size;
        self.objects_count += other.objects_count;
        self.versions_count += other.versions_count;
        self.delete_markers_count += other.delete_markers_count;
        self.replica_size += other.replica_size;
        self.replica_count += other.replica_count;

        // Merge histograms
        for (key, value) in &other.object_size_histogram {
            *self.object_size_histogram.entry(key.clone()).or_insert(0) += value;
        }

        for (key, value) in &other.object_versions_histogram {
            *self.object_versions_histogram.entry(key.clone()).or_insert(0) += value;
        }

        // Merge replication info
        for (target, info) in &other.replication_info {
            let entry = self.replication_info.entry(target.clone()).or_default();
            entry.replicated_size += info.replicated_size;
            entry.replica_size += info.replica_size;
            entry.replication_pending_size += info.replication_pending_size;
            entry.replication_failed_size += info.replication_failed_size;
            entry.replication_pending_count += info.replication_pending_count;
            entry.replication_failed_count += info.replication_failed_count;
            entry.replicated_count += info.replicated_count;
        }

        // Merge backward compatibility fields
        self.replication_pending_size_v1 += other.replication_pending_size_v1;
        self.replication_failed_size_v1 += other.replication_failed_size_v1;
        self.replicated_size_v1 += other.replicated_size_v1;
        self.replication_pending_count_v1 += other.replication_pending_count_v1;
        self.replication_failed_count_v1 += other.replication_failed_count_v1;
    }
}

impl SizeSummary {
    /// Create a new SizeSummary
    pub fn new() -> Self {
        Self::default()
    }

    /// Add another SizeSummary to this one
    pub fn add(&mut self, other: &SizeSummary) {
        self.total_size += other.total_size;
        self.versions += other.versions;
        self.delete_markers += other.delete_markers;
        self.replicated_size += other.replicated_size;
        self.replicated_count += other.replicated_count;
        self.pending_size += other.pending_size;
        self.failed_size += other.failed_size;
        self.replica_size += other.replica_size;
        self.replica_count += other.replica_count;
        self.pending_count += other.pending_count;
        self.failed_count += other.failed_count;

        // Merge replication target stats
        for (target, stats) in &other.repl_target_stats {
            let entry = self.repl_target_stats.entry(target.clone()).or_default();
            entry.replicated_size += stats.replicated_size;
            entry.replicated_count += stats.replicated_count;
            entry.pending_size += stats.pending_size;
            entry.failed_size += stats.failed_size;
            entry.pending_count += stats.pending_count;
            entry.failed_count += stats.failed_count;
        }
    }
}

/// Store data usage info to backend storage
pub async fn store_data_usage_in_backend(data_usage_info: DataUsageInfo, store: Arc<ECStore>) -> Result<()> {
    let data =
        serde_json::to_vec(&data_usage_info).map_err(|e| Error::Config(format!("Failed to serialize data usage info: {e}")))?;

    // Save to backend using the same mechanism as the original code
    rustfs_ecstore::config::com::save_config(store, &DATA_USAGE_OBJ_NAME_PATH, data)
        .await
        .map_err(Error::Storage)?;

    Ok(())
}

/// Load data usage info from backend storage
pub async fn load_data_usage_from_backend(store: Arc<ECStore>) -> Result<DataUsageInfo> {
    let buf = match read_config(store, &DATA_USAGE_OBJ_NAME_PATH).await {
        Ok(data) => data,
        Err(e) => {
            error!("Failed to read data usage info from backend: {}", e);
            if e == rustfs_ecstore::error::Error::ConfigNotFound {
                return Ok(DataUsageInfo::default());
            }
            return Err(Error::Storage(e));
        }
    };

    let mut data_usage_info: DataUsageInfo =
        serde_json::from_slice(&buf).map_err(|e| Error::Config(format!("Failed to deserialize data usage info: {e}")))?;

    warn!("Loaded data usage info from backend {:?}", &data_usage_info);

    // Handle backward compatibility like the original code
    if data_usage_info.buckets_usage.is_empty() {
        data_usage_info.buckets_usage = data_usage_info
            .bucket_sizes
            .iter()
            .map(|(bucket, &size)| {
                (
                    bucket.clone(),
                    BucketUsageInfo {
                        size,
                        ..Default::default()
                    },
                )
            })
            .collect();
    }

    if data_usage_info.bucket_sizes.is_empty() {
        data_usage_info.bucket_sizes = data_usage_info
            .buckets_usage
            .iter()
            .map(|(bucket, bui)| (bucket.clone(), bui.size))
            .collect();
    }

    for (bucket, bui) in &data_usage_info.buckets_usage {
        if bui.replicated_size_v1 > 0
            || bui.replication_failed_count_v1 > 0
            || bui.replication_failed_size_v1 > 0
            || bui.replication_pending_count_v1 > 0
        {
            if let Ok((cfg, _)) = get_replication_config(bucket).await {
                if !cfg.role.is_empty() {
                    data_usage_info.replication_info.insert(
                        cfg.role.clone(),
                        BucketTargetUsageInfo {
                            replication_failed_size: bui.replication_failed_size_v1,
                            replication_failed_count: bui.replication_failed_count_v1,
                            replicated_size: bui.replicated_size_v1,
                            replication_pending_count: bui.replication_pending_count_v1,
                            replication_pending_size: bui.replication_pending_size_v1,
                            ..Default::default()
                        },
                    );
                }
            }
        }
    }

    Ok(data_usage_info)
}

/// Example function showing how to use AHM data usage functionality.
/// This demonstrates the integration pattern for DataUsageInfoHandler.
pub async fn example_data_usage_integration() -> Result<()> {
    // Get the global storage instance
    let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
        return Err(Error::Config("Storage not initialized".to_string()));
    };

    // Load data usage from backend (this replaces the original load_data_usage_from_backend)
    let data_usage = load_data_usage_from_backend(store).await?;

    info!(
        "Loaded data usage info: {} buckets, {} total objects",
        data_usage.buckets_count, data_usage.objects_total_count
    );

    // Example: Store updated data usage back to backend.
    // This would typically be called by the scanner after collecting new statistics:
    // store_data_usage_in_backend(data_usage, store).await?;

    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_data_usage_info_creation() {
        let mut info = DataUsageInfo::new();
        info.update_capacity(1000, 500, 500);

        assert_eq!(info.total_capacity, 1000);
        assert_eq!(info.total_used_capacity, 500);
        assert_eq!(info.total_free_capacity, 500);
        assert!(info.last_update.is_some());
    }

    #[test]
    fn test_bucket_usage_info_merge() {
        let mut usage1 = BucketUsageInfo::new();
        usage1.size = 100;
        usage1.objects_count = 10;
        usage1.versions_count = 5;

        let mut usage2 = BucketUsageInfo::new();
        usage2.size = 200;
        usage2.objects_count = 20;
        usage2.versions_count = 10;

        usage1.merge(&usage2);

        assert_eq!(usage1.size, 300);
        assert_eq!(usage1.objects_count, 30);
        assert_eq!(usage1.versions_count, 15);
    }

    #[test]
    fn test_size_summary_add() {
        let mut summary1 = SizeSummary::new();
        summary1.total_size = 100;
        summary1.versions = 5;

        let mut summary2 = SizeSummary::new();
        summary2.total_size = 200;
        summary2.versions = 10;

        summary1.add(&summary2);

        assert_eq!(summary1.total_size, 300);
        assert_eq!(summary1.versions, 15);
    }
}
277
crates/ahm/src/scanner/histogram.rs
Normal file
@@ -0,0 +1,277 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::collections::HashMap;

/// Size interval for object size histogram
#[derive(Debug, Clone)]
pub struct SizeInterval {
    pub start: u64,
    pub end: u64,
    pub name: &'static str,
}

/// Version interval for object versions histogram
#[derive(Debug, Clone)]
pub struct VersionInterval {
    pub start: u64,
    pub end: u64,
    pub name: &'static str,
}

/// Object size histogram intervals
pub const OBJECTS_HISTOGRAM_INTERVALS: &[SizeInterval] = &[
    SizeInterval {
        start: 0,
        end: 1024 - 1,
        name: "LESS_THAN_1_KiB",
    },
    SizeInterval {
        start: 1024,
        end: 1024 * 1024 - 1,
        name: "1_KiB_TO_1_MiB",
    },
    SizeInterval {
        start: 1024 * 1024,
        end: 10 * 1024 * 1024 - 1,
        name: "1_MiB_TO_10_MiB",
    },
    SizeInterval {
        start: 10 * 1024 * 1024,
        end: 64 * 1024 * 1024 - 1,
        name: "10_MiB_TO_64_MiB",
    },
    SizeInterval {
        start: 64 * 1024 * 1024,
        end: 128 * 1024 * 1024 - 1,
        name: "64_MiB_TO_128_MiB",
    },
    SizeInterval {
        start: 128 * 1024 * 1024,
        end: 512 * 1024 * 1024 - 1,
        name: "128_MiB_TO_512_MiB",
    },
    SizeInterval {
        start: 512 * 1024 * 1024,
        end: u64::MAX,
        name: "MORE_THAN_512_MiB",
    },
];

/// Object version count histogram intervals
pub const OBJECTS_VERSION_COUNT_INTERVALS: &[VersionInterval] = &[
    VersionInterval {
        start: 1,
        end: 1,
        name: "1_VERSION",
    },
    VersionInterval {
        start: 2,
        end: 10,
        name: "2_TO_10_VERSIONS",
    },
    VersionInterval {
        start: 11,
        end: 100,
        name: "11_TO_100_VERSIONS",
    },
    VersionInterval {
        start: 101,
        end: 1000,
        name: "101_TO_1000_VERSIONS",
    },
    VersionInterval {
        start: 1001,
        end: u64::MAX,
        name: "MORE_THAN_1000_VERSIONS",
    },
];

/// Size histogram for object size distribution
#[derive(Debug, Clone, Default)]
pub struct SizeHistogram {
    counts: Vec<u64>,
}

/// Versions histogram for object version count distribution
#[derive(Debug, Clone, Default)]
pub struct VersionsHistogram {
    counts: Vec<u64>,
}

impl SizeHistogram {
    /// Create a new size histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_HISTOGRAM_INTERVALS.len()],
        }
    }

    /// Add a size to the histogram
    pub fn add(&mut self, size: u64) {
        for (idx, interval) in OBJECTS_HISTOGRAM_INTERVALS.iter().enumerate() {
            if size >= interval.start && size <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_HISTOGRAM_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &SizeHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

impl VersionsHistogram {
    /// Create a new versions histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_VERSION_COUNT_INTERVALS.len()],
        }
    }

    /// Add a version count to the histogram
    pub fn add(&mut self, versions: u64) {
        for (idx, interval) in OBJECTS_VERSION_COUNT_INTERVALS.iter().enumerate() {
            if versions >= interval.start && versions <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_VERSION_COUNT_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &VersionsHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_size_histogram() {
        let mut histogram = SizeHistogram::new();

        // Add some sizes
        histogram.add(512); // LESS_THAN_1_KiB
        histogram.add(1024); // 1_KiB_TO_1_MiB
        histogram.add(1024 * 1024); // 1_MiB_TO_10_MiB
        histogram.add(5 * 1024 * 1024); // 1_MiB_TO_10_MiB

        let map = histogram.to_map();

        assert_eq!(map.get("LESS_THAN_1_KiB"), Some(&1));
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&1));
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2));
        assert_eq!(map.get("10_MiB_TO_64_MiB"), Some(&0));
    }

    #[test]
    fn test_versions_histogram() {
        let mut histogram = VersionsHistogram::new();

        // Add some version counts
        histogram.add(1); // 1_VERSION
        histogram.add(5); // 2_TO_10_VERSIONS
        histogram.add(50); // 11_TO_100_VERSIONS
        histogram.add(500); // 101_TO_1000_VERSIONS

        let map = histogram.to_map();

        assert_eq!(map.get("1_VERSION"), Some(&1));
        assert_eq!(map.get("2_TO_10_VERSIONS"), Some(&1));
        assert_eq!(map.get("11_TO_100_VERSIONS"), Some(&1));
        assert_eq!(map.get("101_TO_1000_VERSIONS"), Some(&1));
    }

    #[test]
    fn test_histogram_merge() {
        let mut histogram1 = SizeHistogram::new();
        histogram1.add(1024);
        histogram1.add(1024 * 1024);

        let mut histogram2 = SizeHistogram::new();
        histogram2.add(1024);
        histogram2.add(5 * 1024 * 1024);

        histogram1.merge(&histogram2);

        let map = histogram1.to_map();
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
    }

    #[test]
    fn test_histogram_reset() {
        let mut histogram = SizeHistogram::new();
        histogram.add(1024);
        histogram.add(1024 * 1024);

        assert_eq!(histogram.total_count(), 2);

        histogram.reset();
        assert_eq!(histogram.total_count(), 0);
    }
}
284
crates/ahm/src/scanner/metrics.rs
Normal file
@@ -0,0 +1,284 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{
    collections::HashMap,
    sync::atomic::{AtomicU64, Ordering},
    time::{Duration, SystemTime},
};

use serde::{Deserialize, Serialize};
use tracing::info;

/// Scanner metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ScannerMetrics {
    /// Total objects scanned since server start
    pub objects_scanned: u64,
    /// Total object versions scanned since server start
    pub versions_scanned: u64,
    /// Total directories scanned since server start
    pub directories_scanned: u64,
    /// Total bucket scans started since server start
    pub bucket_scans_started: u64,
    /// Total bucket scans finished since server start
    pub bucket_scans_finished: u64,
    /// Total objects with health issues found
    pub objects_with_issues: u64,
    /// Total heal tasks queued
    pub heal_tasks_queued: u64,
    /// Total heal tasks completed
    pub heal_tasks_completed: u64,
    /// Total heal tasks failed
    pub heal_tasks_failed: u64,
    /// Last scan activity time
    pub last_activity: Option<SystemTime>,
    /// Current scan cycle
    pub current_cycle: u64,
    /// Total scan cycles completed
    pub total_cycles: u64,
    /// Current scan duration
    pub current_scan_duration: Option<Duration>,
    /// Average scan duration
    pub avg_scan_duration: Duration,
    /// Objects scanned per second
    pub objects_per_second: f64,
    /// Buckets scanned per second
    pub buckets_per_second: f64,
    /// Storage metrics by bucket
    pub bucket_metrics: HashMap<String, BucketMetrics>,
    /// Disk metrics
    pub disk_metrics: HashMap<String, DiskMetrics>,
}

/// Bucket-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct BucketMetrics {
    /// Bucket name
    pub bucket: String,
    /// Total objects in bucket
    pub total_objects: u64,
    /// Total size of objects in bucket (bytes)
    pub total_size: u64,
    /// Objects with health issues
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Scan duration
    pub scan_duration: Option<Duration>,
    /// Heal tasks queued for this bucket
    pub heal_tasks_queued: u64,
    /// Heal tasks completed for this bucket
    pub heal_tasks_completed: u64,
    /// Heal tasks failed for this bucket
    pub heal_tasks_failed: u64,
}

/// Disk-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct DiskMetrics {
    /// Disk path
    pub disk_path: String,
    /// Total disk space (bytes)
    pub total_space: u64,
    /// Used disk space (bytes)
    pub used_space: u64,
    /// Free disk space (bytes)
    pub free_space: u64,
    /// Objects scanned on this disk
    pub objects_scanned: u64,
    /// Objects with issues on this disk
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Whether disk is online
    pub is_online: bool,
    /// Whether disk is being scanned
    pub is_scanning: bool,
}

/// Thread-safe metrics collector
pub struct MetricsCollector {
    /// Atomic counters for real-time metrics
    objects_scanned: AtomicU64,
    versions_scanned: AtomicU64,
    directories_scanned: AtomicU64,
    bucket_scans_started: AtomicU64,
    bucket_scans_finished: AtomicU64,
    objects_with_issues: AtomicU64,
    heal_tasks_queued: AtomicU64,
    heal_tasks_completed: AtomicU64,
    heal_tasks_failed: AtomicU64,
    current_cycle: AtomicU64,
    total_cycles: AtomicU64,
}

impl MetricsCollector {
    /// Create a new metrics collector
    pub fn new() -> Self {
        Self {
            objects_scanned: AtomicU64::new(0),
            versions_scanned: AtomicU64::new(0),
            directories_scanned: AtomicU64::new(0),
            bucket_scans_started: AtomicU64::new(0),
            bucket_scans_finished: AtomicU64::new(0),
            objects_with_issues: AtomicU64::new(0),
            heal_tasks_queued: AtomicU64::new(0),
            heal_tasks_completed: AtomicU64::new(0),
            heal_tasks_failed: AtomicU64::new(0),
            current_cycle: AtomicU64::new(0),
            total_cycles: AtomicU64::new(0),
        }
    }

    /// Increment objects scanned count
    pub fn increment_objects_scanned(&self, count: u64) {
        self.objects_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment versions scanned count
    pub fn increment_versions_scanned(&self, count: u64) {
        self.versions_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment directories scanned count
    pub fn increment_directories_scanned(&self, count: u64) {
        self.directories_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans started count
    pub fn increment_bucket_scans_started(&self, count: u64) {
        self.bucket_scans_started.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans finished count
    pub fn increment_bucket_scans_finished(&self, count: u64) {
        self.bucket_scans_finished.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment objects with issues count
    pub fn increment_objects_with_issues(&self, count: u64) {
        self.objects_with_issues.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks queued count
    pub fn increment_heal_tasks_queued(&self, count: u64) {
        self.heal_tasks_queued.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks completed count
    pub fn increment_heal_tasks_completed(&self, count: u64) {
        self.heal_tasks_completed.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks failed count
    pub fn increment_heal_tasks_failed(&self, count: u64) {
        self.heal_tasks_failed.fetch_add(count, Ordering::Relaxed);
    }

    /// Set current cycle
    pub fn set_current_cycle(&self, cycle: u64) {
        self.current_cycle.store(cycle, Ordering::Relaxed);
    }

    /// Increment total cycles
    pub fn increment_total_cycles(&self) {
        self.total_cycles.fetch_add(1, Ordering::Relaxed);
    }

    /// Get current metrics snapshot
    pub fn get_metrics(&self) -> ScannerMetrics {
        ScannerMetrics {
            objects_scanned: self.objects_scanned.load(Ordering::Relaxed),
            versions_scanned: self.versions_scanned.load(Ordering::Relaxed),
            directories_scanned: self.directories_scanned.load(Ordering::Relaxed),
            bucket_scans_started: self.bucket_scans_started.load(Ordering::Relaxed),
            bucket_scans_finished: self.bucket_scans_finished.load(Ordering::Relaxed),
            objects_with_issues: self.objects_with_issues.load(Ordering::Relaxed),
            heal_tasks_queued: self.heal_tasks_queued.load(Ordering::Relaxed),
            heal_tasks_completed: self.heal_tasks_completed.load(Ordering::Relaxed),
            heal_tasks_failed: self.heal_tasks_failed.load(Ordering::Relaxed),
            last_activity: Some(SystemTime::now()),
            current_cycle: self.current_cycle.load(Ordering::Relaxed),
            total_cycles: self.total_cycles.load(Ordering::Relaxed),
            current_scan_duration: None,       // Will be set by scanner
            avg_scan_duration: Duration::ZERO, // Will be calculated
            objects_per_second: 0.0,           // Will be calculated
            buckets_per_second: 0.0,           // Will be calculated
            bucket_metrics: HashMap::new(),    // Will be populated by scanner
            disk_metrics: HashMap::new(),      // Will be populated by scanner
        }
    }

    /// Reset all metrics
    pub fn reset(&self) {
        self.objects_scanned.store(0, Ordering::Relaxed);
        self.versions_scanned.store(0, Ordering::Relaxed);
        self.directories_scanned.store(0, Ordering::Relaxed);
        self.bucket_scans_started.store(0, Ordering::Relaxed);
        self.bucket_scans_finished.store(0, Ordering::Relaxed);
        self.objects_with_issues.store(0, Ordering::Relaxed);
        self.heal_tasks_queued.store(0, Ordering::Relaxed);
        self.heal_tasks_completed.store(0, Ordering::Relaxed);
        self.heal_tasks_failed.store(0, Ordering::Relaxed);
        self.current_cycle.store(0, Ordering::Relaxed);
        self.total_cycles.store(0, Ordering::Relaxed);

        info!("Scanner metrics reset");
    }
}

impl Default for MetricsCollector {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_metrics_collector_creation() {
        let collector = MetricsCollector::new();
        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
        assert_eq!(metrics.versions_scanned, 0);
    }

    #[test]
    fn test_metrics_increment() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.increment_versions_scanned(5);
        collector.increment_objects_with_issues(2);

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 10);
        assert_eq!(metrics.versions_scanned, 5);
        assert_eq!(metrics.objects_with_issues, 2);
    }

    #[test]
    fn test_metrics_reset() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.reset();

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
    }
}
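Because every counter in `MetricsCollector` is an `AtomicU64` updated with `Ordering::Relaxed`, a single collector can be shared across concurrent scan tasks without a lock. A sketch of that usage (the `rustfs_ahm::scanner::metrics` import path is an assumption inferred from the `pub mod metrics` declaration in the scanner module below):

```rust
use std::sync::Arc;

use rustfs_ahm::scanner::metrics::MetricsCollector; // path assumed from the module layout

fn main() {
    let collector = Arc::new(MetricsCollector::new());

    // Four "scan tasks" bump the shared counter concurrently.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&collector);
            std::thread::spawn(move || c.increment_objects_scanned(100))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // Relaxed ordering is enough here: only the final tally matters.
    assert_eq!(collector.get_metrics().objects_scanned, 400);
}
```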
@@ -11,3 +11,15 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod data_scanner;
pub mod data_usage;
pub mod histogram;
pub mod metrics;

// Re-export main types for convenience
pub use data_scanner::Scanner;
pub use data_usage::{
    BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, load_data_usage_from_backend, store_data_usage_in_backend,
};
pub use metrics::ScannerMetrics;
@@ -28,6 +28,5 @@ categories = ["web-programming", "development-tools", "data-structures"]
workspace = true

[dependencies]
lazy_static.workspace = true
tokio.workspace = true
tonic = { workspace = true }
@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use std::collections::HashMap;
#![allow(non_upper_case_globals)] // FIXME

use std::collections::HashMap;
use std::sync::LazyLock;

use lazy_static::lazy_static;
use tokio::sync::RwLock;
use tonic::transport::Channel;

lazy_static! {
    pub static ref GLOBAL_Local_Node_Name: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Rustfs_Host: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Rustfs_Port: RwLock<String> = RwLock::new("9000".to_string());
    pub static ref GLOBAL_Rustfs_Addr: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Conn_Map: RwLock<HashMap<String, Channel>> = RwLock::new(HashMap::new());
}
pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));

pub async fn set_global_addr(addr: &str) {
    *GLOBAL_Rustfs_Addr.write().await = addr.to_string();
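This hunk swaps the `lazy_static!` macro for `std::sync::LazyLock` (stable since Rust 1.80), dropping a dependency while keeping the same lazily initialized globals. A self-contained sketch of the pattern with an async `RwLock` (a plain `u32` stands in for the real tonic `Channel` values):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

use tokio::sync::RwLock;

// Same shape as GLOBAL_Conn_Map above; u32 is a placeholder for tonic's Channel.
static CONN_MAP: LazyLock<RwLock<HashMap<String, u32>>> = LazyLock::new(|| RwLock::new(HashMap::new()));

#[tokio::main]
async fn main() {
    CONN_MAP.write().await.insert("node-1".to_string(), 1);
    // The map is initialized exactly once, on first access, with no macro involved.
    assert_eq!(CONN_MAP.read().await.len(), 1);
}
```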
@@ -109,8 +109,8 @@ winapi = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
criterion = { version = "0.5", features = ["html_reports"] }
temp-env = "0.3.6"
criterion = { workspace = true, features = ["html_reports"] }
temp-env = { workspace = true }

[build-dependencies]
shadow-rs = { workspace = true, features = ["build", "metadata"] }
@@ -43,15 +43,16 @@ pub async fn create_bitrot_reader(
) -> disk::error::Result<Option<BitrotReader<Box<dyn AsyncRead + Send + Sync + Unpin>>>> {
    // Calculate the total length to read, including the checksum overhead
    let length = length.div_ceil(shard_size) * checksum_algo.size() + length;

    let offset = offset.div_ceil(shard_size) * checksum_algo.size() + offset;
    if let Some(data) = inline_data {
        // Use inline data
        let rd = Cursor::new(data.to_vec());
        let mut rd = Cursor::new(data.to_vec());
        rd.set_position(offset as u64);
        let reader = BitrotReader::new(Box::new(rd) as Box<dyn AsyncRead + Send + Sync + Unpin>, shard_size, checksum_algo);
        Ok(Some(reader))
    } else if let Some(disk) = disk {
        // Read from disk
        match disk.read_file_stream(bucket, path, offset, length).await {
        match disk.read_file_stream(bucket, path, offset, length - offset).await {
            Ok(rd) => {
                let reader = BitrotReader::new(rd, shard_size, checksum_algo);
                Ok(Some(reader))
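The fix above accounts for bitrot checksums: on disk, each shard is prefixed by a checksum, so both the requested offset and length are inflated by one checksum per (whole or partial) shard, and the disk read then spans `length - offset` of the inflated range. A worked example of the arithmetic, with assumed sizes and under the reading that `length` is the absolute end of the read (which is what makes `length - offset` the span fetched):

```rust
// Assumed: 1 MiB shards, 32-byte checksums (the real values come from
// shard_size and checksum_algo.size() in the function above).
fn main() {
    let (shard_size, checksum_len) = (1024 * 1024usize, 32usize);
    let (offset, length) = (1024 * 1024usize, 3 * 1024 * 1024usize);

    // One checksum per shard ahead of each position.
    let disk_length = length.div_ceil(shard_size) * checksum_len + length; // 3 MiB + 96
    let disk_offset = offset.div_ceil(shard_size) * checksum_len + offset; // 1 MiB + 32

    // The corrected call reads the span between the two inflated positions:
    // two 1 MiB shards plus their two 32-byte checksums.
    assert_eq!(disk_length - disk_offset, 2 * 1024 * 1024 + 64);
}
```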
@@ -18,7 +18,7 @@ use futures::future::join_all;
use rustfs_filemeta::{MetaCacheEntries, MetaCacheEntry, MetacacheReader, is_io_eof};
use std::{future::Future, pin::Pin, sync::Arc};
use tokio::{spawn, sync::broadcast::Receiver as B_Receiver};
use tracing::error;
use tracing::{error, warn};

pub type AgreedFn = Box<dyn Fn(MetaCacheEntry) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + 'static>;
pub type PartialFn =

@@ -118,10 +118,14 @@ pub async fn list_path_raw(mut rx: B_Receiver<bool>, opts: ListPathRawOptions) -
            if let Some(disk) = d.clone() {
                disk
            } else {
                warn!("list_path_raw: fallback disk is none");
                break;
            }
        }
        None => break,
        None => {
            warn!("list_path_raw: fallback disk is none2");
            break;
        }
    };
    match disk
        .as_ref()
@@ -288,6 +288,12 @@ impl From<rmp_serde::encode::Error> for DiskError {
    }
}

impl From<rmp_serde::decode::Error> for DiskError {
    fn from(e: rmp_serde::decode::Error) -> Self {
        DiskError::other(e)
    }
}

impl From<rmp::encode::ValueWriteError> for DiskError {
    fn from(e: rmp::encode::ValueWriteError) -> Self {
        DiskError::other(e)
@@ -57,8 +57,8 @@ use bytes::Bytes;
use path_absolutize::Absolutize;
use rustfs_common::defer;
use rustfs_filemeta::{
    Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, Opts, RawFileInfo, UpdateFn, get_file_info,
    read_xl_meta_no_data,
    Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, ObjectPartInfo, Opts, RawFileInfo, UpdateFn,
    get_file_info, read_xl_meta_no_data,
};
use rustfs_utils::HashAlgorithm;
use rustfs_utils::os::get_info;
@@ -1312,6 +1312,67 @@ impl DiskAPI for LocalDisk {
        Ok(resp)
    }

    #[tracing::instrument(skip(self))]
    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
        let volume_dir = self.get_bucket_path(bucket)?;

        let mut ret = vec![ObjectPartInfo::default(); paths.len()];

        for (i, path_str) in paths.iter().enumerate() {
            let path = Path::new(path_str);
            let file_name = path.file_name().and_then(|v| v.to_str()).unwrap_or_default();
            let num = file_name
                .strip_prefix("part.")
                .and_then(|v| v.strip_suffix(".meta"))
                .and_then(|v| v.parse::<usize>().ok())
                .unwrap_or_default();

            if let Err(err) = access(
                volume_dir
                    .clone()
                    .join(path.parent().unwrap_or(Path::new("")).join(format!("part.{num}"))),
            )
            .await
            {
                ret[i] = ObjectPartInfo {
                    number: num,
                    error: Some(err.to_string()),
                    ..Default::default()
                };
                continue;
            }

            let data = match self
                .read_all_data(bucket, volume_dir.clone(), volume_dir.clone().join(path))
                .await
            {
                Ok(data) => data,
                Err(err) => {
                    ret[i] = ObjectPartInfo {
                        number: num,
                        error: Some(err.to_string()),
                        ..Default::default()
                    };
                    continue;
                }
            };

            match ObjectPartInfo::unmarshal(&data) {
                Ok(meta) => {
                    ret[i] = meta;
                }
                Err(err) => {
                    ret[i] = ObjectPartInfo {
                        number: num,
                        error: Some(err.to_string()),
                        ..Default::default()
                    };
                }
            };
        }

        Ok(ret)
    }
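In `read_parts`, the part number is recovered purely from the metadata file name, with `0` as the fallback for anything malformed. A standalone sketch of that parsing chain (the `part_number` helper is hypothetical; the real code inlines it above):

```rust
// "part.3.meta" -> 3; anything that doesn't match the pattern -> 0.
fn part_number(file_name: &str) -> usize {
    file_name
        .strip_prefix("part.")
        .and_then(|v| v.strip_suffix(".meta"))
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or_default()
}

fn main() {
    assert_eq!(part_number("part.3.meta"), 3);
    assert_eq!(part_number("part.meta"), 0); // no number between the markers
    assert_eq!(part_number("xl.meta"), 0); // wrong prefix
}
```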
    #[tracing::instrument(skip(self))]
    async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
        let volume_dir = self.get_bucket_path(volume)?;

@@ -1550,11 +1611,6 @@

    #[tracing::instrument(level = "debug", skip(self))]
    async fn read_file_stream(&self, volume: &str, path: &str, offset: usize, length: usize) -> Result<FileReader> {
        // warn!(
        //     "disk read_file_stream: volume: {}, path: {}, offset: {}, length: {}",
        //     volume, path, offset, length
        // );

        let volume_dir = self.get_bucket_path(volume)?;
        if !skip_access_checks(volume) {
            access(&volume_dir)
@@ -41,7 +41,7 @@ use endpoint::Endpoint;
use error::DiskError;
use error::{Error, Result};
use local::LocalDisk;
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_madmin::info_commands::DiskMetrics;
use serde::{Deserialize, Serialize};
use std::{fmt::Debug, path::PathBuf, sync::Arc};
@@ -331,6 +331,14 @@ impl DiskAPI for Disk {
        }
    }

    #[tracing::instrument(skip(self))]
    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
        match self {
            Disk::Local(local_disk) => local_disk.read_parts(bucket, paths).await,
            Disk::Remote(remote_disk) => remote_disk.read_parts(bucket, paths).await,
        }
    }

    #[tracing::instrument(skip(self))]
    async fn rename_part(&self, src_volume: &str, src_path: &str, dst_volume: &str, dst_path: &str, meta: Bytes) -> Result<()> {
        match self {

@@ -513,7 +521,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
    // CheckParts
    async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp>;
    // StatInfoFile
    // ReadParts
    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>>;
    async fn read_multiple(&self, req: ReadMultipleReq) -> Result<Vec<ReadMultipleResp>>;
    // CleanAbandonedData
    async fn write_all(&self, volume: &str, path: &str, data: Bytes) -> Result<()>;
@@ -30,6 +30,7 @@ use std::{
    time::SystemTime,
};
use tokio::sync::{OnceCell, RwLock};
use tokio_util::sync::CancellationToken;
use uuid::Uuid;

pub const DISK_ASSUME_UNKNOWN_SIZE: u64 = 1 << 30;
@@ -66,6 +67,9 @@ pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
    pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
}

// Global cancellation token for background services (data scanner and auto heal)
static GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

static GLOBAL_ACTIVE_CRED: OnceLock<Credentials> = OnceLock::new();

pub fn init_global_action_cred(ak: Option<String>, sk: Option<String>) {
@@ -192,3 +196,27 @@ pub fn set_global_region(region: String) {
pub fn get_global_region() -> Option<String> {
    GLOBAL_REGION.get().cloned()
}

/// Initialize the global background services cancellation token
pub fn init_background_services_cancel_token(cancel_token: CancellationToken) -> Result<(), CancellationToken> {
    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.set(cancel_token)
}

/// Get the global background services cancellation token
pub fn get_background_services_cancel_token() -> Option<&'static CancellationToken> {
    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get()
}

/// Create and initialize the global background services cancellation token
pub fn create_background_services_cancel_token() -> CancellationToken {
    let cancel_token = CancellationToken::new();
    init_background_services_cancel_token(cancel_token.clone()).expect("Background services cancel token already initialized");
    cancel_token
}

/// Shutdown all background services gracefully
pub fn shutdown_background_services() {
    if let Some(cancel_token) = GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get() {
        cancel_token.cancel();
    }
}
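The OnceLock-plus-CancellationToken wiring above is small enough to demonstrate in isolation. Below is a minimal, self-contained sketch of the same pattern, assuming only the `tokio` and `tokio-util` crates; the function names mirror the diff, but the worker task is hypothetical.

```rust
use std::sync::OnceLock;
use tokio_util::sync::CancellationToken;

// One process-wide token; OnceLock rejects a second initialization.
static CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

fn create_cancel_token() -> CancellationToken {
    let token = CancellationToken::new();
    CANCEL_TOKEN.set(token.clone()).expect("cancel token already initialized");
    token
}

fn shutdown() {
    // Safe to call at any time; cancelling twice is a no-op.
    if let Some(token) = CANCEL_TOKEN.get() {
        token.cancel();
    }
}

#[tokio::main]
async fn main() {
    let token = create_cancel_token();
    let worker = tokio::spawn(async move {
        token.cancelled().await;
        println!("worker observed shutdown");
    });
    shutdown();
    worker.await.unwrap();
}
```

Cloning the token is cheap: every background task holds its own clone, so a single `cancel()` in `shutdown_background_services` fans out to all listeners at once.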
@@ -24,6 +24,7 @@ use tokio::{
    },
    time::interval,
};
use tokio_util::sync::CancellationToken;
use tracing::{error, info};
use uuid::Uuid;

@@ -32,7 +33,7 @@ use super::{
    heal_ops::{HealSequence, new_bg_heal_sequence},
};
use crate::error::{Error, Result};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_MRFState, get_background_services_cancel_token};
use crate::heal::error::ERR_RETRY_HEALING;
use crate::heal::heal_commands::{HEAL_ITEM_BUCKET, HealScanMode};
use crate::heal::heal_ops::{BG_HEALING_UUID, HealSource};
@@ -54,6 +55,13 @@ use crate::{
pub static DEFAULT_MONITOR_NEW_DISK_INTERVAL: Duration = Duration::from_secs(10);

pub async fn init_auto_heal() {
    info!("Initializing auto heal background task");

    let Some(cancel_token) = get_background_services_cancel_token() else {
        error!("Background services cancel token not initialized");
        return;
    };

    init_background_healing().await;
    let v = env::var("_RUSTFS_AUTO_DRIVE_HEALING").unwrap_or("on".to_string());
    if v == "on" {
@@ -61,12 +69,16 @@ pub async fn init_auto_heal() {
        GLOBAL_BackgroundHealState
            .push_heal_local_disks(&get_local_disks_to_heal().await)
            .await;
        spawn(async {
            monitor_local_disks_and_heal().await;

        let cancel_clone = cancel_token.clone();
        spawn(async move {
            monitor_local_disks_and_heal(cancel_clone).await;
        });
    }
    spawn(async {
        GLOBAL_MRFState.heal_routine().await;

    let cancel_clone = cancel_token.clone();
    spawn(async move {
        GLOBAL_MRFState.heal_routine_with_cancel(cancel_clone).await;
    });
}
@@ -108,50 +120,66 @@ pub async fn get_local_disks_to_heal() -> Vec<Endpoint> {
    disks_to_heal
}

async fn monitor_local_disks_and_heal() {
async fn monitor_local_disks_and_heal(cancel_token: CancellationToken) {
    info!("Auto heal monitor started");
    let mut interval = interval(DEFAULT_MONITOR_NEW_DISK_INTERVAL);

    loop {
        interval.tick().await;
        let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
        if heal_disks.is_empty() {
            info!("heal local disks is empty");
            interval.reset();
            continue;
        }
        tokio::select! {
            _ = cancel_token.cancelled() => {
                info!("Auto heal monitor received shutdown signal, exiting gracefully");
                break;
            }
            _ = interval.tick() => {
                let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
                if heal_disks.is_empty() {
                    info!("heal local disks is empty");
                    interval.reset();
                    continue;
                }

        info!("heal local disks: {:?}", heal_disks);
                info!("heal local disks: {:?}", heal_disks);

        let store = new_object_layer_fn().expect("errServerNotInitialized");
        if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
            error!("heal local disk format error: {}", err);
            if err == Error::NoHealRequired {
            } else {
                info!("heal format err: {}", err.to_string());
                let store = new_object_layer_fn().expect("errServerNotInitialized");
                if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
                    error!("heal local disk format error: {}", err);
                    if err == Error::NoHealRequired {
                    } else {
                        info!("heal format err: {}", err.to_string());
                        interval.reset();
                        continue;
                    }
                }

                let mut futures = Vec::new();
                for disk in heal_disks.into_ref().iter() {
                    let disk_clone = disk.clone();
                    let cancel_clone = cancel_token.clone();
                    futures.push(async move {
                        let disk_for_cancel = disk_clone.clone();
                        tokio::select! {
                            _ = cancel_clone.cancelled() => {
                                info!("Disk healing task cancelled for disk: {}", disk_for_cancel);
                            }
                            _ = async {
                                GLOBAL_BackgroundHealState
                                    .set_disk_healing_status(disk_clone.clone(), true)
                                    .await;
                                if heal_fresh_disk(&disk_clone).await.is_err() {
                                    info!("heal_fresh_disk is err");
                                    GLOBAL_BackgroundHealState
                                        .set_disk_healing_status(disk_clone.clone(), false)
                                        .await;
                                }
                                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
                            } => {}
                        }
                    });
                }
                let _ = join_all(futures).await;
                interval.reset();
                continue;
            }
        }

        let mut futures = Vec::new();
        for disk in heal_disks.into_ref().iter() {
            let disk_clone = disk.clone();
            futures.push(async move {
                GLOBAL_BackgroundHealState
                    .set_disk_healing_status(disk_clone.clone(), true)
                    .await;
                if heal_fresh_disk(&disk_clone).await.is_err() {
                    info!("heal_fresh_disk is err");
                    GLOBAL_BackgroundHealState
                        .set_disk_healing_status(disk_clone.clone(), false)
                        .await;
                    return;
                }
                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
            });
        }
        let _ = join_all(futures).await;
        interval.reset();
    }
}
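The rewrite above is, at its core, a `tokio::select!` wrapper around the old tick-driven loop: the tick arm does the periodic work, the cancellation arm exits promptly. A stripped-down sketch of that loop shape (healing calls replaced with prints; only `tokio` and `tokio-util` assumed):

```rust
use std::time::Duration;
use tokio::time::interval;
use tokio_util::sync::CancellationToken;

async fn monitor(cancel: CancellationToken) {
    let mut ticker = interval(Duration::from_millis(50));
    loop {
        tokio::select! {
            // Shutdown wins the race: exit without waiting for the next tick.
            _ = cancel.cancelled() => {
                println!("monitor exiting gracefully");
                break;
            }
            _ = ticker.tick() => {
                println!("periodic work");
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let cancel = CancellationToken::new();
    let handle = tokio::spawn(monitor(cancel.clone()));
    tokio::time::sleep(Duration::from_millis(120)).await;
    cancel.cancel();
    handle.await.unwrap();
}
```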
@@ -20,14 +20,13 @@ use std::{
    path::{Path, PathBuf},
    pin::Pin,
    sync::{
        Arc, OnceLock,
        Arc,
        atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering},
    },
    time::{Duration, SystemTime},
};

use time::{self, OffsetDateTime};
use tokio_util::sync::CancellationToken;

use super::{
    data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},
@@ -51,7 +50,7 @@ use crate::{
        metadata_sys,
    },
    event_notification::{EventArgs, send_event},
    global::GLOBAL_LocalNodeName,
    global::{GLOBAL_LocalNodeName, get_background_services_cancel_token},
    store_api::{ObjectOptions, ObjectToDelete, StorageAPI},
};
use crate::{
@@ -128,8 +127,6 @@ lazy_static! {
    pub static ref globalHealConfig: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::default()));
}

static GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

struct DynamicSleeper {
    factor: f64,
    max_sleep: Duration,
@@ -198,21 +195,18 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> DynamicSleeper {
/// - Minimum sleep duration to avoid excessive CPU usage
/// - Proper error handling and logging
///
/// # Returns
/// A CancellationToken that can be used to gracefully shutdown the scanner
///
/// # Architecture
/// 1. Initialize with random seed for sleep intervals
/// 2. Run scanner cycles in a loop
/// 3. Use randomized sleep between cycles to avoid thundering herd
/// 4. Ensure minimum sleep duration to prevent CPU thrashing
pub async fn init_data_scanner() -> CancellationToken {
pub async fn init_data_scanner() {
    info!("Initializing data scanner background task");

    let cancel_token = CancellationToken::new();
    GLOBAL_SCANNER_CANCEL_TOKEN
        .set(cancel_token.clone())
        .expect("Scanner already initialized");
    let Some(cancel_token) = get_background_services_cancel_token() else {
        error!("Background services cancel token not initialized");
        return;
    };

    let cancel_clone = cancel_token.clone();
    tokio::spawn(async move {
@@ -256,8 +250,6 @@ pub async fn init_data_scanner() -> CancellationToken {

        info!("Data scanner background task stopped gracefully");
    });

    cancel_token
}
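The doc comment's "randomized sleep between cycles" idea can be shown in a few lines. This sketch assumes the `rand` crate; the jitter range and the one-second floor are illustrative, not the scanner's actual tuning (which lives in `DynamicSleeper`).

```rust
use rand::Rng;
use std::time::Duration;

// Jitter the base interval so many nodes don't scan in lockstep,
// and enforce a floor so a tiny base can't busy-loop the CPU.
fn next_cycle_sleep(base: Duration) -> Duration {
    let factor: f64 = rand::thread_rng().gen_range(1.0..2.0);
    base.mul_f64(factor).max(Duration::from_secs(1))
}

fn main() {
    let sleep = next_cycle_sleep(Duration::from_secs(60));
    assert!(sleep >= Duration::from_secs(60) && sleep < Duration::from_secs(120));
}
```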
/// Run a single data scanner cycle
@@ -282,7 +274,7 @@ async fn run_data_scanner_cycle() {
    };

    // Check for cancellation before starting expensive operations
    if let Some(token) = GLOBAL_SCANNER_CANCEL_TOKEN.get() {
    if let Some(token) = get_background_services_cancel_token() {
        if token.is_cancelled() {
            debug!("Scanner cancelled before starting cycle");
            return;
@@ -397,9 +389,8 @@ async fn execute_namespace_scan(
    cycle: u64,
    scan_mode: HealScanMode,
) -> Result<()> {
    let cancel_token = GLOBAL_SCANNER_CANCEL_TOKEN
        .get()
        .ok_or_else(|| Error::other("Scanner not initialized"))?;
    let cancel_token =
        get_background_services_cancel_token().ok_or_else(|| Error::other("Background services not initialized"))?;

    tokio::select! {
        result = store.ns_scanner(tx, cycle as usize, scan_mode) => {
@@ -25,7 +25,8 @@ use std::time::Duration;
use tokio::sync::RwLock;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio::time::sleep;
use tracing::error;
use tokio_util::sync::CancellationToken;
use tracing::{error, info};
use uuid::Uuid;

pub const MRF_OPS_QUEUE_SIZE: u64 = 100000;
@@ -87,56 +88,96 @@ impl MRFState {
        let _ = self.tx.send(op).await;
    }

    pub async fn heal_routine(&self) {
    /// Enhanced heal routine with cancellation support
    ///
    /// This method implements the same healing logic as the original heal_routine,
    /// but adds proper cancellation support via CancellationToken.
    /// The core logic remains identical to maintain compatibility.
    pub async fn heal_routine_with_cancel(&self, cancel_token: CancellationToken) {
        info!("MRF heal routine started with cancellation support");

        loop {
            // rx used only there,
            if let Some(op) = self.rx.write().await.recv().await {
                if op.bucket == RUSTFS_META_BUCKET {
                    for pattern in &*PATTERNS {
                        if pattern.is_match(&op.object) {
                            return;
            tokio::select! {
                _ = cancel_token.cancelled() => {
                    info!("MRF heal routine received shutdown signal, exiting gracefully");
                    break;
                }
                op_result = async {
                    let mut rx_guard = self.rx.write().await;
                    rx_guard.recv().await
                } => {
                    if let Some(op) = op_result {
                        // Special path filtering (original logic)
                        if op.bucket == RUSTFS_META_BUCKET {
                            for pattern in &*PATTERNS {
                                if pattern.is_match(&op.object) {
                                    continue; // Skip this operation, continue with next
                                }
                            }
                        }

                        let now = Utc::now();
                        if now.sub(op.queued).num_seconds() < 1 {
                            sleep(Duration::from_secs(1)).await;
                        }
                        // Network reconnection delay (original logic)
                        let now = Utc::now();
                        if now.sub(op.queued).num_seconds() < 1 {
                            tokio::select! {
                                _ = cancel_token.cancelled() => {
                                    info!("MRF heal routine cancelled during reconnection delay");
                                    break;
                                }
                                _ = sleep(Duration::from_secs(1)) => {}
                            }
                        }

                        let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
                        if op.object.is_empty() {
                            if let Err(err) = heal_bucket(&op.bucket).await {
                                error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
                            }
                        } else if op.versions.is_empty() {
                            if let Err(err) =
                                heal_object(&op.bucket, &op.object, &op.version_id.clone().unwrap_or_default(), scan_mode).await
                            {
                                error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
                            }
                        } else {
                            let vers = op.versions.len() / 16;
                            if vers > 0 {
                                for i in 0..vers {
                                    let start = i * 16;
                                    let end = start + 16;
                        // Core healing logic (original logic preserved)
                        let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };

                        if op.object.is_empty() {
                            // Heal bucket (original logic)
                            if let Err(err) = heal_bucket(&op.bucket).await {
                                error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
                            }
                        } else if op.versions.is_empty() {
                            // Heal single object (original logic)
                            if let Err(err) = heal_object(
                                &op.bucket,
                                &op.object,
                                &Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
                                scan_mode,
                            )
                            .await
                            {
                                &op.version_id.clone().unwrap_or_default(),
                                scan_mode
                            ).await {
                                error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
                            }
                        } else {
                            // Heal multiple versions (original logic)
                            let vers = op.versions.len() / 16;
                            if vers > 0 {
                                for i in 0..vers {
                                    // Check for cancellation before each version
                                    if cancel_token.is_cancelled() {
                                        info!("MRF heal routine cancelled during version processing");
                                        return;
                                    }

                                    let start = i * 16;
                                    let end = start + 16;
                                    if let Err(err) = heal_object(
                                        &op.bucket,
                                        &op.object,
                                        &Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
                                        scan_mode,
                                    ).await {
                                        error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
                                    }
                                }
                            }
                        }
                    } else {
                        info!("MRF heal routine channel closed, exiting");
                        break;
                    }
                }
            } else {
                return;
            }
        }

        info!("MRF heal routine stopped gracefully");
    }
}
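Because `heal_routine_with_cancel` races a channel `recv` against the token, the control flow is easiest to see in miniature. A hedged sketch with a stub queue item (`HealOp` here is illustrative, not the real MRF op type):

```rust
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;

#[derive(Debug)]
struct HealOp {
    bucket: String,
}

async fn drain(mut rx: mpsc::Receiver<HealOp>, cancel: CancellationToken) {
    loop {
        tokio::select! {
            _ = cancel.cancelled() => {
                println!("heal routine cancelled");
                break;
            }
            op = rx.recv() => {
                match op {
                    Some(op) => println!("healing bucket {}", op.bucket),
                    None => break, // all senders dropped: channel closed
                }
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(8);
    tx.send(HealOp { bucket: "demo".into() }).await.unwrap();
    drop(tx);
    drain(rx, CancellationToken::new()).await;
}
```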
@@ -22,8 +22,8 @@ use rustfs_protos::{
    proto_gen::node_service::{
        CheckPartsRequest, DeletePathsRequest, DeleteRequest, DeleteVersionRequest, DeleteVersionsRequest, DeleteVolumeRequest,
        DiskInfoRequest, ListDirRequest, ListVolumesRequest, MakeVolumeRequest, MakeVolumesRequest, NsScannerRequest,
        ReadAllRequest, ReadMultipleRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest, RenameFileRequest,
        StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
        ReadAllRequest, ReadMultipleRequest, ReadPartsRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest,
        RenameFileRequest, StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
    },
};

@@ -44,7 +44,7 @@ use crate::{
        heal_commands::{HealScanMode, HealingTracker},
    },
};
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_protos::proto_gen::node_service::RenamePartRequest;
use rustfs_rio::{HttpReader, HttpWriter};
use tokio::{
@@ -790,6 +790,27 @@ impl DiskAPI for RemoteDisk {
        Ok(check_parts_resp)
    }

    #[tracing::instrument(skip(self))]
    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
        let mut client = node_service_time_out_client(&self.addr)
            .await
            .map_err(|err| Error::other(format!("can not get client, err: {err}")))?;
        let request = Request::new(ReadPartsRequest {
            disk: self.endpoint.to_string(),
            bucket: bucket.to_string(),
            paths: paths.to_vec(),
        });

        let response = client.read_parts(request).await?.into_inner();
        if !response.success {
            return Err(response.error.unwrap_or_default().into());
        }

        let read_parts_resp = rmp_serde::from_slice::<Vec<ObjectPartInfo>>(&response.object_part_infos)?;

        Ok(read_parts_resp)
    }

    #[tracing::instrument(skip(self))]
    async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
        info!("check_parts");
@@ -404,7 +404,42 @@ impl Node for NodeService {
            }))
        }
    }
    async fn read_parts(&self, request: Request<ReadPartsRequest>) -> Result<Response<ReadPartsResponse>, Status> {
        let request = request.into_inner();
        if let Some(disk) = self.find_disk(&request.disk).await {
            match disk.read_parts(&request.bucket, &request.paths).await {
                Ok(data) => {
                    let data = match rmp_serde::to_vec(&data) {
                        Ok(data) => data,
                        Err(err) => {
                            return Ok(tonic::Response::new(ReadPartsResponse {
                                success: false,
                                object_part_infos: Bytes::new(),
                                error: Some(DiskError::other(format!("encode data failed: {err}")).into()),
                            }));
                        }
                    };
                    Ok(tonic::Response::new(ReadPartsResponse {
                        success: true,
                        object_part_infos: Bytes::copy_from_slice(&data),
                        error: None,
                    }))
                }

                Err(err) => Ok(tonic::Response::new(ReadPartsResponse {
                    success: false,
                    object_part_infos: Bytes::new(),
                    error: Some(err.into()),
                })),
            }
        } else {
            Ok(tonic::Response::new(ReadPartsResponse {
                success: false,
                object_part_infos: Bytes::new(),
                error: Some(DiskError::other("can not find disk".to_string()).into()),
            }))
        }
    }
    async fn check_parts(&self, request: Request<CheckPartsRequest>) -> Result<Response<CheckPartsResponse>, Status> {
        let request = request.into_inner();
        if let Some(disk) = self.find_disk(&request.disk).await {
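Note how the handler never converts an encode failure into a transport error; it answers with a `success: false` payload instead. That encode-then-respond shape, sketched standalone with a stand-in response type (`Resp` is not the generated `ReadPartsResponse`):

```rust
use bytes::Bytes;

#[derive(Debug)]
struct Resp {
    success: bool,
    payload: Bytes,
    error: Option<String>,
}

// MessagePack-encode the value; report failure in-band rather than as an Err.
fn encode_response<T: serde::Serialize>(value: &T) -> Resp {
    match rmp_serde::to_vec(value) {
        Ok(data) => Resp { success: true, payload: Bytes::from(data), error: None },
        Err(err) => Resp {
            success: false,
            payload: Bytes::new(),
            error: Some(format!("encode data failed: {err}")),
        },
    }
}

fn main() {
    let resp = encode_response(&vec![1u32, 2, 3]);
    assert!(resp.success && resp.error.is_none());
}
```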
@@ -24,13 +24,13 @@ use crate::disk::{
};
use crate::erasure_coding;
use crate::erasure_coding::bitrot_verify;
use crate::error::ObjectApiError;
use crate::error::{Error, Result};
use crate::error::{ObjectApiError, is_err_object_not_found};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_LocalNodeName, GLOBAL_TierConfigMgr};
use crate::heal::data_usage_cache::DataUsageCache;
use crate::heal::heal_ops::{HealEntryFn, HealSequence};
use crate::store_api::ObjectToDelete;
use crate::store_api::{ListPartsInfo, ObjectToDelete};
use crate::{
    bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts},
    cache_value::metacache_set::{ListPathRawOptions, list_path_raw},
@@ -119,6 +119,7 @@ use tracing::{debug, info, warn};
use uuid::Uuid;

pub const DEFAULT_READ_BUFFER_SIZE: usize = 1024 * 1024;
pub const MAX_PARTS_COUNT: usize = 10000;

#[derive(Debug, Clone)]
pub struct SetDisks {
@@ -316,6 +317,9 @@ impl SetDisks {
            .filter(|v| v.as_ref().is_some_and(|d| d.is_local()))
            .collect()
    }
    fn default_read_quorum(&self) -> usize {
        self.set_drive_count - self.default_parity_count
    }
    fn default_write_quorum(&self) -> usize {
        let mut data_count = self.set_drive_count - self.default_parity_count;
        if data_count == self.default_parity_count {
@@ -550,6 +554,183 @@ impl SetDisks {
        }
    }

    async fn read_parts(
        disks: &[Option<DiskStore>],
        bucket: &str,
        part_meta_paths: &[String],
        part_numbers: &[usize],
        read_quorum: usize,
    ) -> disk::error::Result<Vec<ObjectPartInfo>> {
        let mut futures = Vec::with_capacity(disks.len());
        for (i, disk) in disks.iter().enumerate() {
            futures.push(async move {
                if let Some(disk) = disk {
                    disk.read_parts(bucket, part_meta_paths).await
                } else {
                    Err(DiskError::DiskNotFound)
                }
            });
        }

        let mut errs = Vec::with_capacity(disks.len());
        let mut object_parts = Vec::with_capacity(disks.len());

        let results = join_all(futures).await;
        for result in results {
            match result {
                Ok(res) => {
                    errs.push(None);
                    object_parts.push(res);
                }
                Err(e) => {
                    errs.push(Some(e));
                    object_parts.push(vec![]);
                }
            }
        }

        if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
            return Err(err);
        }

        let mut ret = vec![ObjectPartInfo::default(); part_meta_paths.len()];

        for (part_idx, part_info) in part_meta_paths.iter().enumerate() {
            let mut part_meta_quorum = HashMap::new();
            let mut part_infos = Vec::new();
            for (j, parts) in object_parts.iter().enumerate() {
                if parts.len() != part_meta_paths.len() {
                    *part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
                    continue;
                }

                if !parts[part_idx].etag.is_empty() {
                    *part_meta_quorum.entry(parts[part_idx].etag.clone()).or_insert(0) += 1;
                    part_infos.push(parts[part_idx].clone());
                    continue;
                }

                *part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
            }

            let mut max_quorum = 0;
            let mut max_etag = None;
            let mut max_part_meta = None;
            for (etag, quorum) in part_meta_quorum.iter() {
                if quorum > &max_quorum {
                    max_quorum = *quorum;
                    max_etag = Some(etag);
                    max_part_meta = Some(etag);
                }
            }

            let mut found = None;
            for info in part_infos.iter() {
                if let Some(etag) = max_etag
                    && info.etag == *etag
                {
                    found = Some(info.clone());
                    break;
                }

                if let Some(part_meta) = max_part_meta
                    && info.etag.is_empty()
                    && part_meta.ends_with(format!("part.{0}.meta", info.number).as_str())
                {
                    found = Some(info.clone());
                    break;
                }
            }

            if let (Some(found), Some(max_etag)) = (found, max_etag)
                && !found.etag.is_empty()
                && part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum
            {
                ret[part_idx] = found;
            } else {
                ret[part_idx] = ObjectPartInfo {
                    number: part_numbers[part_idx],
                    error: Some(format!("part.{} not found", part_numbers[part_idx])),
                    ..Default::default()
                };
            }
        }

        Ok(ret)
    }

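The per-part loop above boils down to a quorum vote: tally each disk's etag for the part and accept the most-voted value only if it reaches `read_quorum`. A simplified, self-contained sketch of that vote (it deliberately omits the real helper's `part.N.meta` path fallback):

```rust
use std::collections::HashMap;

// `etags[d]` is the etag disk `d` reported for one part, or None on error.
fn quorum_etag(etags: &[Option<&str>], read_quorum: usize) -> Option<String> {
    let mut votes: HashMap<&str, usize> = HashMap::new();
    for etag in etags.iter().flatten() {
        *votes.entry(etag).or_insert(0) += 1;
    }
    votes
        .into_iter()
        .filter(|(_, n)| *n >= read_quorum)
        .max_by_key(|(_, n)| *n)
        .map(|(etag, _)| etag.to_string())
}

fn main() {
    let observed = [Some("e1"), Some("e1"), None, Some("e2")];
    assert_eq!(quorum_etag(&observed, 2), Some("e1".to_string()));
    assert_eq!(quorum_etag(&observed, 3), None); // not enough agreeing disks
}
```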
    async fn list_parts(disks: &[Option<DiskStore>], part_path: &str, read_quorum: usize) -> disk::error::Result<Vec<usize>> {
        let mut futures = Vec::with_capacity(disks.len());
        for (i, disk) in disks.iter().enumerate() {
            futures.push(async move {
                if let Some(disk) = disk {
                    disk.list_dir(RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_MULTIPART_BUCKET, part_path, -1)
                        .await
                } else {
                    Err(DiskError::DiskNotFound)
                }
            });
        }

        let mut errs = Vec::with_capacity(disks.len());
        let mut object_parts = Vec::with_capacity(disks.len());

        let results = join_all(futures).await;
        for result in results {
            match result {
                Ok(res) => {
                    errs.push(None);
                    object_parts.push(res);
                }
                Err(e) => {
                    errs.push(Some(e));
                    object_parts.push(vec![]);
                }
            }
        }

        if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
            return Err(err);
        }

        let mut part_quorum_map: HashMap<usize, usize> = HashMap::new();

        for drive_parts in object_parts {
            let mut parts_with_meta_count: HashMap<usize, usize> = HashMap::new();

            // part files can be either part.N or part.N.meta
            for part_path in drive_parts {
                if let Some(num_str) = part_path.strip_prefix("part.") {
                    if let Some(meta_idx) = num_str.find(".meta") {
                        if let Ok(part_num) = num_str[..meta_idx].parse::<usize>() {
                            *parts_with_meta_count.entry(part_num).or_insert(0) += 1;
                        }
                    } else if let Ok(part_num) = num_str.parse::<usize>() {
                        *parts_with_meta_count.entry(part_num).or_insert(0) += 1;
                    }
                }
            }

            // Include only part.N.meta files with corresponding part.N
            for (&part_num, &cnt) in &parts_with_meta_count {
                if cnt >= 2 {
                    *part_quorum_map.entry(part_num).or_insert(0) += 1;
                }
            }
        }

        let mut part_numbers = Vec::with_capacity(part_quorum_map.len());
        for (part_num, count) in part_quorum_map {
            if count >= read_quorum {
                part_numbers.push(part_num);
            }
        }

        part_numbers.sort();

        Ok(part_numbers)
    }

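`list_parts` keys everything off the `part.N` / `part.N.meta` naming convention, counting a part only when both files are present on a drive. The parsing rule, extracted into a tiny sketch (a close approximation of the loop above, not a verbatim copy):

```rust
// Returns (part number, is_meta) for "part.N" or "part.N.meta" names.
fn parse_part_number(name: &str) -> Option<(usize, bool)> {
    let rest = name.strip_prefix("part.")?;
    if let Some(num) = rest.strip_suffix(".meta") {
        return num.parse().ok().map(|n| (n, true));
    }
    rest.parse().ok().map(|n| (n, false))
}

fn main() {
    assert_eq!(parse_part_number("part.3"), Some((3, false)));
    assert_eq!(parse_part_number("part.3.meta"), Some((3, true)));
    assert_eq!(parse_part_number("xl.meta"), None);
}
```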
    #[tracing::instrument(skip(disks, meta))]
    async fn rename_part(
        disks: &[Option<DiskStore>],
@@ -1942,6 +2123,8 @@ impl SetDisks {

        let till_offset = erasure.shard_file_offset(part_offset, part_length, part_size);

        let read_offset = (part_offset / erasure.block_size) * erasure.shard_size();

        let mut readers = Vec::with_capacity(disks.len());
        let mut errors = Vec::with_capacity(disks.len());
        for (idx, disk_op) in disks.iter().enumerate() {
@@ -1950,7 +2133,7 @@ impl SetDisks {
                disk_op.as_ref(),
                bucket,
                &format!("{}/{}/part.{}", object, files[idx].data_dir.unwrap_or_default(), part_number),
                part_offset,
                read_offset,
                till_offset,
                erasure.shard_size(),
                HashAlgorithm::HighwayHash256,
@@ -4884,7 +5067,7 @@ impl StorageAPI for SetDisks {
    ) -> Result<PartInfo> {
        let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);

        let (mut fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
        let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;

        let write_quorum = fi.write_quorum(self.default_write_quorum());

@@ -5037,9 +5220,9 @@ impl StorageAPI for SetDisks {

        // debug!("put_object_part part_info {:?}", part_info);

        fi.parts = vec![part_info];
        // fi.parts = vec![part_info.clone()];

        let fi_buff = fi.marshal_msg()?;
        let part_info_buff = part_info.marshal_msg()?;

        drop(writers); // drop writers to close all files

@@ -5050,7 +5233,7 @@ impl StorageAPI for SetDisks {
            &tmp_part_path,
            RUSTFS_META_MULTIPART_BUCKET,
            &part_path,
            fi_buff.into(),
            part_info_buff.into(),
            write_quorum,
        )
        .await?;
@@ -5068,6 +5251,123 @@ impl StorageAPI for SetDisks {
        Ok(ret)
    }
    #[tracing::instrument(skip(self))]
    async fn list_object_parts(
        &self,
        bucket: &str,
        object: &str,
        upload_id: &str,
        part_number_marker: Option<usize>,
        mut max_parts: usize,
        opts: &ObjectOptions,
    ) -> Result<ListPartsInfo> {
        let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, false).await?;

        let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);

        if max_parts > MAX_PARTS_COUNT {
            max_parts = MAX_PARTS_COUNT;
        }

        let part_number_marker = part_number_marker.unwrap_or_default();

        let mut ret = ListPartsInfo {
            bucket: bucket.to_owned(),
            object: object.to_owned(),
            upload_id: upload_id.to_owned(),
            max_parts,
            part_number_marker,
            user_defined: fi.metadata.clone(),
            ..Default::default()
        };

        if max_parts == 0 {
            return Ok(ret);
        }

        let online_disks = self.get_disks_internal().await;

        let read_quorum = fi.read_quorum(self.default_read_quorum());

        let part_path = format!(
            "{}{}",
            path_join_buf(&[
                &upload_id_path,
                fi.data_dir.map(|v| v.to_string()).unwrap_or_default().as_str(),
            ]),
            SLASH_SEPARATOR
        );

        let mut part_numbers = match Self::list_parts(&online_disks, &part_path, read_quorum).await {
            Ok(parts) => parts,
            Err(err) => {
                if err == DiskError::FileNotFound {
                    return Ok(ret);
                }

                return Err(to_object_err(err.into(), vec![bucket, object]));
            }
        };

        if part_numbers.is_empty() {
            return Ok(ret);
        }
        let start_op = part_numbers.iter().find(|&&v| v != 0 && v == part_number_marker);
        if part_number_marker > 0 && start_op.is_none() {
            return Ok(ret);
        }

        if let Some(start) = start_op {
            if start + 1 > part_numbers.len() {
                return Ok(ret);
            }

            part_numbers = part_numbers[start + 1..].to_vec();
        }

        let mut parts = Vec::with_capacity(part_numbers.len());

        let part_meta_paths = part_numbers
            .iter()
            .map(|v| format!("{part_path}part.{v}.meta"))
            .collect::<Vec<String>>();

        let object_parts =
            Self::read_parts(&online_disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, read_quorum)
                .await
                .map_err(|e| to_object_err(e.into(), vec![bucket, object, upload_id]))?;

        let mut count = max_parts;

        for (i, part) in object_parts.iter().enumerate() {
            if let Some(err) = &part.error {
                warn!("list_object_parts part error: {:?}", &err);
            }

            parts.push(PartInfo {
                etag: Some(part.etag.clone()),
                part_num: part.number,
                last_mod: part.mod_time,
                size: part.size,
                actual_size: part.actual_size,
            });

            count -= 1;
            if count == 0 {
                break;
            }
        }

        ret.parts = parts;

        if object_parts.len() > ret.parts.len() {
            ret.is_truncated = true;
            ret.next_part_number_marker = ret.parts.last().map(|v| v.part_num).unwrap_or_default();
        }

        Ok(ret)
    }

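The marker and truncation bookkeeping above follows the usual S3 ListParts contract: skip everything up to and including the marker, emit at most `max_parts`, and report a next marker when entries remain. A compact sketch of just that windowing logic (simplified; the real method also reads per-part metadata):

```rust
// Returns (page, is_truncated, next_marker) over sorted part numbers.
fn page(parts: &[usize], marker: usize, max_parts: usize) -> (Vec<usize>, bool, usize) {
    let start = if marker == 0 {
        0
    } else {
        match parts.iter().position(|&p| p == marker) {
            Some(i) => i + 1,
            None => return (Vec::new(), false, 0), // unknown marker: empty page
        }
    };
    let window: Vec<usize> = parts[start..].iter().copied().take(max_parts).collect();
    let truncated = parts.len() - start > window.len();
    let next = if truncated { *window.last().unwrap_or(&0) } else { 0 };
    (window, truncated, next)
}

fn main() {
    let parts = [1, 2, 3, 4, 5];
    assert_eq!(page(&parts, 2, 2), (vec![3, 4], true, 4));
    assert_eq!(page(&parts, 4, 10), (vec![5], false, 0));
}
```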
    #[tracing::instrument(skip(self))]
    async fn list_multipart_uploads(
        &self,
@@ -5143,8 +5443,8 @@ impl StorageAPI for SetDisks {

        let splits: Vec<&str> = upload_id.split("x").collect();
        if splits.len() == 2 {
            if let Ok(unix) = splits[1].parse::<i64>() {
                OffsetDateTime::from_unix_timestamp(unix)?
            if let Ok(unix) = splits[1].parse::<i128>() {
                OffsetDateTime::from_unix_timestamp_nanos(unix)?
            } else {
                now
            }
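This change widens the embedded timestamp parse from seconds (`i64`) to nanoseconds (`i128`), matching upload IDs that now carry a nanosecond clock. A minimal sketch with the `time` crate; the `<uuid>x<nanos>` layout is inferred from the `split("x")` above, not confirmed elsewhere:

```rust
use time::OffsetDateTime;

fn parse_upload_time(upload_id: &str) -> Option<OffsetDateTime> {
    // The real code splits on 'x' and expects exactly two pieces.
    let (_, ts) = upload_id.split_once('x')?;
    let nanos: i128 = ts.parse().ok()?;
    OffsetDateTime::from_unix_timestamp_nanos(nanos).ok()
}

fn main() {
    assert!(parse_upload_time("some-uuid-x1700000000000000000").is_some());
    assert!(parse_upload_time("no-separator").is_none());
}
```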
@@ -5363,49 +5663,31 @@

        let part_path = format!("{}/{}/", upload_id_path, fi.data_dir.unwrap_or(Uuid::nil()));

        let files: Vec<String> = uploaded_parts.iter().map(|v| format!("part.{}.meta", v.part_num)).collect();
        let part_meta_paths = uploaded_parts
            .iter()
            .map(|v| format!("{part_path}part.{0}.meta", v.part_num))
            .collect::<Vec<String>>();

        // readMultipleFiles
        let part_numbers = uploaded_parts.iter().map(|v| v.part_num).collect::<Vec<usize>>();

        let req = ReadMultipleReq {
            bucket: RUSTFS_META_MULTIPART_BUCKET.to_string(),
            prefix: part_path,
            files,
            max_size: 1 << 20,
            metadata_only: true,
            abort404: true,
            max_results: 0,
        };
        let object_parts =
            Self::read_parts(&disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, write_quorum).await?;

        let part_files_resp = Self::read_multiple_files(&disks, req, write_quorum).await;

        if part_files_resp.len() != uploaded_parts.len() {
        if object_parts.len() != uploaded_parts.len() {
            return Err(Error::other("part result number err"));
        }

        for (i, res) in part_files_resp.iter().enumerate() {
            let part_id = uploaded_parts[i].part_num;
            if !res.error.is_empty() || !res.exists {
                error!("complete_multipart_upload part_id err {:?}, exists={}", res, res.exists);
                return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
        for (i, part) in object_parts.iter().enumerate() {
            if let Some(err) = &part.error {
                error!("complete_multipart_upload part error: {:?}", &err);
            }

            let part_fi = FileInfo::unmarshal(&res.data).map_err(|e| {
            if uploaded_parts[i].part_num != part.number {
                error!(
                    "complete_multipart_upload FileInfo::unmarshal err {:?}, part_id={}, bucket={}, object={}",
                    e, part_id, bucket, object
                    "complete_multipart_upload part_id err part_id != part_num {} != {}",
                    uploaded_parts[i].part_num, part.number
                );
                Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned())
            })?;
            let part = &part_fi.parts[0];
            let part_num = part.number;

            // debug!("complete part {} file info {:?}", part_num, &part_fi);
            // debug!("complete part {} object info {:?}", part_num, &part);

            if part_id != part_num {
                error!("complete_multipart_upload part_id err part_id != part_num {} != {}", part_id, part_num);
                return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
                return Err(Error::InvalidPart(uploaded_parts[i].part_num, bucket.to_owned(), object.to_owned()));
            }

            fi.add_object_part(
@@ -17,6 +17,7 @@ use std::{collections::HashMap, sync::Arc};

use crate::disk::error_reduce::count_errs;
use crate::error::{Error, Result};
use crate::store_api::ListPartsInfo;
use crate::{
    disk::{
        DiskAPI, DiskInfo, DiskOption, DiskStore,
@@ -619,6 +620,20 @@ impl StorageAPI for Sets {
        Ok((del_objects, del_errs))
    }

    async fn list_object_parts(
        &self,
        bucket: &str,
        object: &str,
        upload_id: &str,
        part_number_marker: Option<usize>,
        max_parts: usize,
        opts: &ObjectOptions,
    ) -> Result<ListPartsInfo> {
        self.get_disks_by_key(object)
            .list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
            .await
    }

    #[tracing::instrument(skip(self))]
    async fn list_multipart_uploads(
        &self,
@@ -38,7 +38,7 @@ use crate::new_object_layer_fn;
use crate::notification_sys::get_global_notification_sys;
use crate::pools::PoolMeta;
use crate::rebalance::RebalanceMeta;
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, MultipartInfo, ObjectIO};
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, ListPartsInfo, MultipartInfo, ObjectIO};
use crate::store_init::{check_disk_fatal_errs, ec_drives_no_config};
use crate::{
    bucket::{lifecycle::bucket_lifecycle_ops::TransitionState, metadata::BucketMetadata},
@@ -1810,6 +1810,47 @@ impl StorageAPI for ECStore {
        Ok((del_objects, del_errs))
    }

    #[tracing::instrument(skip(self))]
    async fn list_object_parts(
        &self,
        bucket: &str,
        object: &str,
        upload_id: &str,
        part_number_marker: Option<usize>,
        max_parts: usize,
        opts: &ObjectOptions,
    ) -> Result<ListPartsInfo> {
        check_list_parts_args(bucket, object, upload_id)?;

        // TODO: nslock

        if self.single_pool() {
            return self.pools[0]
                .list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
                .await;
        }

        for pool in self.pools.iter() {
            if self.is_suspended(pool.pool_idx).await {
                continue;
            }
            match pool
                .list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
                .await
            {
                Ok(res) => return Ok(res),
                Err(err) => {
                    if is_err_invalid_upload_id(&err) {
                        continue;
                    }
                    return Err(err);
                }
            };
        }

        Err(StorageError::InvalidUploadID(bucket.to_owned(), object.to_owned(), upload_id.to_owned()))
    }

    #[tracing::instrument(skip(self))]
    async fn list_multipart_uploads(
        &self,
@@ -201,7 +201,7 @@ impl GetObjectReader {
        }
    }

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct HTTPRangeSpec {
    pub is_suffix_length: bool,
    pub start: i64,
@@ -548,6 +548,7 @@ impl ObjectInfo {
                mod_time: part.mod_time,
                checksums: part.checksums.clone(),
                number: part.number,
                error: part.error.clone(),
            })
            .collect();

@@ -844,6 +845,48 @@ pub struct ListMultipartsInfo {
    // encoding_type: String, // Not supported yet.
}

/// ListPartsInfo - represents list of all parts.
#[derive(Debug, Clone, Default)]
pub struct ListPartsInfo {
    /// Name of the bucket.
    pub bucket: String,

    /// Name of the object.
    pub object: String,

    /// Upload ID identifying the multipart upload whose parts are being listed.
    pub upload_id: String,

    /// The class of storage used to store the object.
    pub storage_class: String,

    /// Part number after which listing begins.
    pub part_number_marker: usize,

    /// When a list is truncated, this element specifies the last part in the list,
    /// as well as the value to use for the part-number-marker request parameter
    /// in a subsequent request.
    pub next_part_number_marker: usize,

    /// Maximum number of parts that were allowed in the response.
    pub max_parts: usize,

    /// Indicates whether the returned list of parts is truncated.
    pub is_truncated: bool,

    /// List of all parts.
    pub parts: Vec<PartInfo>,

    /// Any metadata set during InitMultipartUpload, including encryption headers.
    pub user_defined: HashMap<String, String>,

    /// ChecksumAlgorithm if set
    pub checksum_algorithm: String,

    /// ChecksumType if set
    pub checksum_type: String,
}

#[derive(Debug, Default, Clone)]
pub struct ObjectToDelete {
    pub object_name: String,
@@ -923,10 +966,7 @@ pub trait StorageAPI: ObjectIO {
    ) -> Result<ListObjectVersionsInfo>;
    // Walk TODO:

    // GetObjectNInfo ObjectIO
    async fn get_object_info(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
    // PutObject ObjectIO
    // CopyObject
    async fn copy_object(
        &self,
        src_bucket: &str,
@@ -949,7 +989,6 @@ pub trait StorageAPI: ObjectIO {
    // TransitionObject TODO:
    // RestoreTransitionedObject TODO:

    // ListMultipartUploads
    async fn list_multipart_uploads(
        &self,
        bucket: &str,
@@ -960,7 +999,6 @@ pub trait StorageAPI: ObjectIO {
        max_uploads: usize,
    ) -> Result<ListMultipartsInfo>;
    async fn new_multipart_upload(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<MultipartUploadResult>;
    // CopyObjectPart
    async fn copy_object_part(
        &self,
        src_bucket: &str,
@@ -984,7 +1022,6 @@ pub trait StorageAPI: ObjectIO {
        data: &mut PutObjReader,
        opts: &ObjectOptions,
    ) -> Result<PartInfo>;
    // GetMultipartInfo
    async fn get_multipart_info(
        &self,
        bucket: &str,
@@ -992,7 +1029,15 @@ pub trait StorageAPI: ObjectIO {
        upload_id: &str,
        opts: &ObjectOptions,
    ) -> Result<MultipartInfo>;
    // ListObjectParts
    async fn list_object_parts(
        &self,
        bucket: &str,
        object: &str,
        upload_id: &str,
        part_number_marker: Option<usize>,
        max_parts: usize,
        opts: &ObjectOptions,
    ) -> Result<ListPartsInfo>;
    async fn abort_multipart_upload(&self, bucket: &str, object: &str, upload_id: &str, opts: &ObjectOptions) -> Result<()>;
    async fn complete_multipart_upload(
        self: Arc<Self>,
@@ -1002,13 +1047,10 @@ pub trait StorageAPI: ObjectIO {
        uploaded_parts: Vec<CompletePart>,
        opts: &ObjectOptions,
    ) -> Result<ObjectInfo>;
    // GetDisks
    async fn get_disks(&self, pool_idx: usize, set_idx: usize) -> Result<Vec<Option<DiskStore>>>;
    // SetDriveCounts
    fn set_drive_counts(&self) -> Vec<usize>;

    // Health TODO:
    // PutObjectMetadata
    async fn put_object_metadata(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
    // DecomTieredObject
    async fn get_object_tags(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<String>;

@@ -46,6 +46,20 @@ pub struct ObjectPartInfo {
    pub index: Option<Bytes>,
    // Checksums holds checksums of the part
    pub checksums: Option<HashMap<String, String>>,
    pub error: Option<String>,
}

impl ObjectPartInfo {
    pub fn marshal_msg(&self) -> Result<Vec<u8>> {
        let mut buf = Vec::new();
        self.serialize(&mut Serializer::new(&mut buf))?;
        Ok(buf)
    }

    pub fn unmarshal(buf: &[u8]) -> Result<Self> {
        let t: ObjectPartInfo = rmp_serde::from_slice(buf)?;
        Ok(t)
    }
}

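`marshal_msg` and `unmarshal` are a plain MessagePack round-trip via `rmp-serde`. The same round-trip on a stand-in type (`PartStub` is illustrative, not the real `ObjectPartInfo`):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
struct PartStub {
    number: usize,
    etag: String,
    error: Option<String>,
}

fn main() -> Result<(), rmp_serde::decode::Error> {
    let parts = vec![PartStub { number: 1, etag: "abc".into(), error: None }];
    let buf = rmp_serde::to_vec(&parts).expect("encode should not fail");
    let decoded: Vec<PartStub> = rmp_serde::from_slice(&buf)?;
    assert_eq!(parts, decoded);
    Ok(())
}
```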
#[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
@@ -287,6 +301,7 @@ impl FileInfo {
            actual_size,
            index,
            checksums: None,
            error: None,
        };

        for p in self.parts.iter_mut() {

@@ -45,7 +45,6 @@ base64-simd = { workspace = true }
jsonwebtoken = { workspace = true }
tracing.workspace = true
rustfs-madmin.workspace = true
lazy_static.workspace = true
rustfs-utils = { workspace = true, features = ["path"] }

[dev-dependencies]

@@ -13,6 +13,7 @@
// limitations under the License.

use crate::error::{Error, Result, is_err_config_not_found};
use crate::sys::get_claims_from_token_with_secret;
use crate::{
    cache::{Cache, CacheEntity},
    error::{Error as IamError, is_err_no_such_group, is_err_no_such_policy, is_err_no_such_user},
@@ -26,7 +27,7 @@ use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::{AccountStatus, AddOrUpdateUserReq, GroupDesc};
use rustfs_policy::{
    arn::ARN,
    auth::{self, Credentials, UserIdentity, get_claims_from_token_with_secret, is_secret_key_valid, jwt_sign},
    auth::{self, Credentials, UserIdentity, is_secret_key_valid, jwt_sign},
    format::Format,
    policy::{
        EMBEDDED_POLICY_TYPE, INHERITED_POLICY_TYPE, Policy, PolicyDoc, default::DEFAULT_POLICIES, iam_policy_claim_name_sa,
@@ -20,7 +20,6 @@ use crate::{
    manager::{extract_jwt_claims, get_default_policyes},
};
use futures::future::join_all;
use lazy_static::lazy_static;
use rustfs_ecstore::{
    config::{
        RUSTFS_CONFIG_PREFIX,
@@ -34,25 +33,28 @@ use rustfs_ecstore::{
use rustfs_policy::{auth::UserIdentity, policy::PolicyDoc};
use rustfs_utils::path::{SLASH_SEPARATOR, path_join_buf};
use serde::{Serialize, de::DeserializeOwned};
use std::sync::LazyLock;
use std::{collections::HashMap, sync::Arc};
use tokio::sync::broadcast::{self, Receiver as B_Receiver};
use tokio::sync::mpsc::{self, Sender};
use tracing::{info, warn};
use tracing::{debug, info, warn};

lazy_static! {
    pub static ref IAM_CONFIG_PREFIX: String = format!("{}/iam", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_USERS_PREFIX: String = format!("{}/iam/users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: String = format!("{}/iam/service-accounts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_GROUPS_PREFIX: String = format!("{}/iam/groups/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICIES_PREFIX: String = format!("{}/iam/policies/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_STS_PREFIX: String = format!("{}/iam/sts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_PREFIX: String = format!("{}/iam/policydb/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_USERS_PREFIX: String = format!("{}/iam/policydb/users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: String = format!("{}/iam/policydb/sts-users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: String =
        format!("{}/iam/policydb/service-accounts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: String = format!("{}/iam/policydb/groups/", RUSTFS_CONFIG_PREFIX);
}
pub static IAM_CONFIG_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam"));
pub static IAM_CONFIG_USERS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/users/"));
pub static IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/service-accounts/"));
pub static IAM_CONFIG_GROUPS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/groups/"));
pub static IAM_CONFIG_POLICIES_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policies/"));
pub static IAM_CONFIG_STS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/sts/"));
pub static IAM_CONFIG_POLICY_DB_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/"));
pub static IAM_CONFIG_POLICY_DB_USERS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/users/"));
pub static IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/sts-users/"));
pub static IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/service-accounts/"));
pub static IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/groups/"));

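This block is a mechanical `lazy_static!` to `std::sync::LazyLock` migration: `LazyLock` has been stable since Rust 1.80, so the macro crate can be dropped entirely. The pattern in miniature (the prefix value below is a placeholder):

```rust
use std::sync::LazyLock;

const CONFIG_PREFIX: &str = "config";

// Initialized once, on first dereference, with no external crate.
static IAM_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{CONFIG_PREFIX}/iam"));

fn main() {
    assert_eq!(&*IAM_PREFIX, "config/iam");
}
```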
const IAM_IDENTITY_FILE: &str = "identity.json";
const IAM_POLICY_FILE: &str = "policy.json";
@@ -370,7 +372,15 @@ impl Store for ObjectStore {
    async fn load_iam_config<Item: DeserializeOwned>(&self, path: impl AsRef<str> + Send) -> Result<Item> {
        let mut data = read_config(self.object_api.clone(), path.as_ref()).await?;

        data = Self::decrypt_data(&data)?;
        data = match Self::decrypt_data(&data) {
            Ok(v) => v,
            Err(err) => {
                debug!("decrypt_data failed: {}", err);
                // delete the config file when decrypt failed
                let _ = self.delete_iam_config(path.as_ref()).await;
                return Err(Error::ConfigNotFound);
            }
        };

        Ok(serde_json::from_slice(&data)?)
    }

@@ -23,6 +23,7 @@ use crate::store::GroupInfo;
use crate::store::MappedPolicy;
use crate::store::Store;
use crate::store::UserType;
use crate::utils::extract_claims;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::AddOrUpdateUserReq;
use rustfs_madmin::GroupDesc;
@@ -542,7 +543,7 @@ impl<T: Store> IamSys<T> {
            }
        };

        if policies.is_empty() {
        if !is_owner && policies.is_empty() {
            return false;
        }

@@ -732,3 +733,18 @@ pub struct UpdateServiceAccountOpts {
    pub expiration: Option<OffsetDateTime>,
    pub status: Option<String>,
}

pub fn get_claims_from_token_with_secret(token: &str, secret: &str) -> Result<HashMap<String, Value>> {
    let mut ms =
        extract_claims::<HashMap<String, Value>>(token, secret).map_err(|e| Error::other(format!("extract claims err {e}")))?;

    if let Some(session_policy) = ms.claims.get(SESSION_POLICY_NAME) {
        let policy_str = session_policy.as_str().unwrap_or_default();
        let policy = base64_decode(policy_str.as_bytes()).map_err(|e| Error::other(format!("base64 decode err {e}")))?;
        ms.claims.insert(
            SESSION_POLICY_NAME_EXTRACTED.to_string(),
            Value::String(String::from_utf8(policy).map_err(|e| Error::other(format!("utf8 decode err {e}")))?),
        );
    }
    Ok(ms.claims)
}

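The relocated `get_claims_from_token_with_secret` now expands an embedded session policy after extracting the claims. A hedged sketch of just that expansion step, using the `base64` crate's 0.21+ engine API; the claim names below stand in for `SESSION_POLICY_NAME` and `SESSION_POLICY_NAME_EXTRACTED`:

```rust
use base64::{Engine as _, engine::general_purpose::STANDARD};
use serde_json::Value;
use std::collections::HashMap;

fn expand_session_policy(claims: &mut HashMap<String, Value>) -> Result<(), String> {
    if let Some(policy) = claims.get("sessionPolicy") {
        let raw = policy.as_str().unwrap_or_default();
        // Decode the base64 policy document and stash the JSON text
        // under a derived key, mirroring the function above.
        let bytes = STANDARD.decode(raw).map_err(|e| format!("base64 decode err {e}"))?;
        let text = String::from_utf8(bytes).map_err(|e| format!("utf8 decode err {e}"))?;
        claims.insert("sessionPolicy-extracted".to_string(), Value::String(text));
    }
    Ok(())
}

fn main() {
    let mut claims = HashMap::new();
    let encoded = STANDARD.encode(br#"{"Version":"2012-10-17"}"#);
    claims.insert("sessionPolicy".to_string(), Value::String(encoded));
    expand_session_policy(&mut claims).unwrap();
    assert!(claims.contains_key("sessionPolicy-extracted"));
}
```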
@@ -30,7 +30,6 @@ workspace = true

[dependencies]
async-trait.workspace = true
lazy_static.workspace = true
rustfs-protos.workspace = true
rand.workspace = true
serde.workspace = true

@@ -14,12 +14,12 @@
// limitations under the License.

use async_trait::async_trait;
use lazy_static::lazy_static;
use local_locker::LocalLocker;
use lock_args::LockArgs;
use remote_client::RemoteClient;
use std::io::Result;
use std::sync::Arc;
use std::sync::LazyLock;
use tokio::sync::RwLock;

pub mod drwmutex;
@@ -29,9 +29,7 @@ pub mod lrwmutex;
pub mod namespace_lock;
pub mod remote_client;

lazy_static! {
    pub static ref GLOBAL_LOCAL_SERVER: Arc<RwLock<LocalLocker>> = Arc::new(RwLock::new(LocalLocker::new()));
}
pub static GLOBAL_LOCAL_SERVER: LazyLock<Arc<RwLock<LocalLocker>>> = LazyLock::new(|| Arc::new(RwLock::new(LocalLocker::new())));

type LockClient = dyn Locker;

@@ -16,8 +16,6 @@ use crate::error::Error as IamError;
use crate::error::{Error, Result};
use crate::policy::{INHERITED_POLICY_TYPE, Policy, Validator, iam_policy_claim_name_sa};
use crate::utils;
use crate::utils::extract_claims;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::HashMap;
@@ -253,12 +251,6 @@ pub fn create_new_credentials_with_metadata(
    })
}

pub fn get_claims_from_token_with_secret<T: DeserializeOwned>(token: &str, secret: &str) -> Result<T> {
    let ms = extract_claims::<T>(token, secret)?;
    // TODO SessionPolicyName
    Ok(ms.claims)
}

pub fn jwt_sign<T: Serialize>(claims: &T, token_secret: &str) -> Result<String> {
    let token = utils::generate_jwt(claims, token_secret)?;
    Ok(token)
@@ -1,15 +1 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod models;

@@ -1,17 +1,3 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// automatically generated by the FlatBuffers compiler, do not modify

// @generated

@@ -1,17 +1,4 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(clippy::all)]
pub mod proto_gen;

@@ -1,15 +1 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod node_service;

@@ -1,17 +1,3 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This file is @generated by prost-build.
/// --------------------------------------------------------------------
#[derive(Clone, PartialEq, ::prost::Message)]
@@ -184,6 +170,24 @@ pub struct VerifyFileResponse {
|
||||
pub error: ::core::option::Option<Error>,
|
||||
}
|
||||
#[derive(Clone, PartialEq, ::prost::Message)]
|
||||
pub struct ReadPartsRequest {
|
||||
#[prost(string, tag = "1")]
|
||||
pub disk: ::prost::alloc::string::String,
|
||||
#[prost(string, tag = "2")]
|
||||
pub bucket: ::prost::alloc::string::String,
|
||||
#[prost(string, repeated, tag = "3")]
|
||||
pub paths: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
|
||||
}
|
||||
#[derive(Clone, PartialEq, ::prost::Message)]
|
||||
pub struct ReadPartsResponse {
|
||||
#[prost(bool, tag = "1")]
|
||||
pub success: bool,
|
||||
#[prost(bytes = "bytes", tag = "2")]
|
||||
pub object_part_infos: ::prost::bytes::Bytes,
|
||||
#[prost(message, optional, tag = "3")]
|
||||
pub error: ::core::option::Option<Error>,
|
||||
}
|
||||
#[derive(Clone, PartialEq, ::prost::Message)]
|
||||
pub struct CheckPartsRequest {
|
||||
/// indicate which one in the disks
|
||||
#[prost(string, tag = "1")]
|
||||
@@ -1295,6 +1299,21 @@ pub mod node_service_client {
|
||||
.insert(GrpcMethod::new("node_service.NodeService", "VerifyFile"));
|
||||
self.inner.unary(req, path, codec).await
|
||||
}
|
||||
pub async fn read_parts(
|
||||
&mut self,
|
||||
request: impl tonic::IntoRequest<super::ReadPartsRequest>,
|
||||
) -> std::result::Result<tonic::Response<super::ReadPartsResponse>, tonic::Status> {
|
||||
self.inner
|
||||
.ready()
|
||||
.await
|
||||
.map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?;
|
||||
let codec = tonic::codec::ProstCodec::default();
|
||||
let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadParts");
|
||||
let mut req = request.into_request();
|
||||
req.extensions_mut()
|
||||
.insert(GrpcMethod::new("node_service.NodeService", "ReadParts"));
|
||||
self.inner.unary(req, path, codec).await
|
||||
}
|
||||
pub async fn check_parts(
|
||||
&mut self,
|
||||
request: impl tonic::IntoRequest<super::CheckPartsRequest>,
|
||||
@@ -2338,6 +2357,10 @@ pub mod node_service_server {
|
||||
&self,
|
||||
request: tonic::Request<super::VerifyFileRequest>,
|
||||
) -> std::result::Result<tonic::Response<super::VerifyFileResponse>, tonic::Status>;
|
||||
async fn read_parts(
|
||||
&self,
|
||||
request: tonic::Request<super::ReadPartsRequest>,
|
||||
) -> std::result::Result<tonic::Response<super::ReadPartsResponse>, tonic::Status>;
|
||||
async fn check_parts(
|
||||
&self,
|
||||
request: tonic::Request<super::CheckPartsRequest>,
|
||||
@@ -2972,6 +2995,34 @@ pub mod node_service_server {
|
||||
};
|
||||
Box::pin(fut)
|
||||
}
|
||||
"/node_service.NodeService/ReadParts" => {
|
||||
#[allow(non_camel_case_types)]
|
||||
struct ReadPartsSvc<T: NodeService>(pub Arc<T>);
|
||||
impl<T: NodeService> tonic::server::UnaryService<super::ReadPartsRequest> for ReadPartsSvc<T> {
|
||||
type Response = super::ReadPartsResponse;
|
||||
type Future = BoxFuture<tonic::Response<Self::Response>, tonic::Status>;
|
||||
fn call(&mut self, request: tonic::Request<super::ReadPartsRequest>) -> Self::Future {
|
||||
let inner = Arc::clone(&self.0);
|
||||
let fut = async move { <T as NodeService>::read_parts(&inner, request).await };
|
||||
Box::pin(fut)
|
||||
}
|
||||
}
|
||||
let accept_compression_encodings = self.accept_compression_encodings;
|
||||
let send_compression_encodings = self.send_compression_encodings;
|
||||
let max_decoding_message_size = self.max_decoding_message_size;
|
||||
let max_encoding_message_size = self.max_encoding_message_size;
|
||||
let inner = self.inner.clone();
|
||||
let fut = async move {
|
||||
let method = ReadPartsSvc(inner);
|
||||
let codec = tonic::codec::ProstCodec::default();
|
||||
let mut grpc = tonic::server::Grpc::new(codec)
|
||||
.apply_compression_config(accept_compression_encodings, send_compression_encodings)
|
||||
.apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
|
||||
let res = grpc.unary(method, req).await;
|
||||
Ok(res)
|
||||
};
|
||||
Box::pin(fut)
|
||||
}
|
||||
"/node_service.NodeService/CheckParts" => {
|
||||
#[allow(non_camel_case_types)]
|
||||
struct CheckPartsSvc<T: NodeService>(pub Arc<T>);
|
||||
|
||||
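The generated client gains a matching `read_parts` method. A minimal sketch of calling the new RPC through the tonic client, assuming the module layout implied by `pub mod proto_gen;` and `pub mod node_service;` above and the `rustfs-protos` crate name (the endpoint address, disk, bucket, and part paths are placeholders):

```rust
use rustfs_protos::proto_gen::node_service::{
    ReadPartsRequest, node_service_client::NodeServiceClient,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a node's gRPC endpoint (placeholder address; requires tonic's
    // transport feature, which the generated `connect` helper relies on).
    let mut client = NodeServiceClient::connect("http://127.0.0.1:9000").await?;
    let resp = client
        .read_parts(ReadPartsRequest {
            disk: "/data/rustfs/vol1".into(),
            bucket: "my-bucket".into(),
            paths: vec!["prefix/object/part.1".into()],
        })
        .await?
        .into_inner();
    // object_part_infos carries the serialized part metadata as raw bytes.
    println!("success={}, {} bytes of part info", resp.success, resp.object_part_infos.len());
    Ok(())
}
```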
@@ -45,7 +45,7 @@ fn main() -> Result<(), AnyError> {
    }

    // path of proto file
    let project_root_dir = env::current_dir()?.join("");
    let project_root_dir = env::current_dir()?.join("crates/protos/src");
    let proto_dir = project_root_dir.clone();
    let proto_files = &["node.proto"];
    let proto_out_dir = project_root_dir.join("generated").join("proto_gen");
@@ -268,7 +268,7 @@ fn protobuf_compiler_version() -> Result<Version, String> {
}

fn fmt() {
    let output = Command::new("cargo").arg("fmt").arg("-p").arg("protos").status();
    let output = Command::new("cargo").arg("fmt").arg("-p").arg("rustfs-protos").status();

    match output {
        Ok(status) => {

@@ -130,6 +130,18 @@ message VerifyFileResponse {
    optional Error error = 3;
}

message ReadPartsRequest {
    string disk = 1;
    string bucket = 2;
    repeated string paths = 3;
}

message ReadPartsResponse {
    bool success = 1;
    bytes object_part_infos = 2;
    optional Error error = 3;
}

message CheckPartsRequest {
    string disk = 1; // indicate which one in the disks
    string volume = 2;
@@ -768,6 +780,7 @@ service NodeService {
    rpc WriteAll(WriteAllRequest) returns (WriteAllResponse) {};
    rpc Delete(DeleteRequest) returns (DeleteResponse) {};
    rpc VerifyFile(VerifyFileRequest) returns (VerifyFileResponse) {};
    rpc ReadParts(ReadPartsRequest) returns (ReadPartsResponse) {};
    rpc CheckParts(CheckPartsRequest) returns (CheckPartsResponse) {};
    rpc RenamePart(RenamePartRequest) returns (RenamePartResponse) {};
    rpc RenameFile(RenameFileRequest) returns (RenameFileResponse) {};

@@ -45,5 +45,4 @@ serde_json.workspace = true
md-5 = { workspace = true }

[dev-dependencies]
#criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
tokio-test = "0.4"
tokio-test = { workspace = true }

@@ -32,7 +32,6 @@ async-trait.workspace = true
datafusion = { workspace = true }
derive_builder = { workspace = true }
futures = { workspace = true }
lazy_static = { workspace = true }
parking_lot = { workspace = true }
s3s.workspace = true
snafu = { workspace = true, features = ["backtrace"] }

@@ -33,7 +33,6 @@ use datafusion::{
    execution::{RecordBatchStream, SendableRecordBatchStream},
};
use futures::{Stream, StreamExt};
use lazy_static::lazy_static;
use rustfs_s3select_api::{
    QueryError, QueryResult,
    query::{
@@ -48,6 +47,7 @@ use rustfs_s3select_api::{
    },
};
use s3s::dto::{FileHeaderInfo, SelectObjectContentInput};
use std::sync::LazyLock;

use crate::{
    execution::factory::QueryExecutionFactoryRef,
@@ -55,11 +55,9 @@ use crate::{
    sql::logical::planner::DefaultLogicalPlanner,
};

lazy_static! {
    static ref IGNORE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::IGNORE);
    static ref NONE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::NONE);
    static ref USE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::USE);
}
static IGNORE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::IGNORE));
static NONE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::NONE));
static USE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::USE));

#[derive(Clone)]
pub struct SimpleQueryDispatcher {

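The same replacement recurs across the crates below; a minimal, self-contained sketch of the `std::sync::LazyLock` idiom (stable since Rust 1.80) that replaces the `lazy_static!` macro:

```rust
use std::sync::LazyLock;

// The closure runs exactly once, on first dereference, and is thread-safe;
// unlike lazy_static!, no macro or external crate is required.
static GREETING: LazyLock<String> = LazyLock::new(|| format!("hello, {}", "rustfs"));

fn main() {
    assert_eq!(GREETING.as_str(), "hello, rustfs");
}
```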
@@ -27,7 +27,6 @@ documentation = "https://docs.rs/rustfs-signer/latest/rustfs_signer/"

[dependencies]
tracing.workspace = true
lazy_static.workspace = true
bytes = { workspace = true }
http.workspace = true
time.workspace = true

@@ -13,8 +13,6 @@
// limitations under the License.

use http::{HeaderMap, HeaderValue, request};
use lazy_static::lazy_static;
use std::collections::HashMap;
use time::{OffsetDateTime, macros::format_description};

use super::request_signature_v4::{SERVICE_TYPE_S3, get_scope, get_signature, get_signing_key};
@@ -32,15 +30,13 @@ const _CRLF_LEN: i64 = 2;
const _TRAILER_KV_SEPARATOR: &str = ":";
const _TRAILER_SIGNATURE: &str = "x-amz-trailer-signature";

lazy_static! {
    static ref ignored_streaming_headers: HashMap<String, bool> = {
        let mut m = <HashMap<String, bool>>::new();
        m.insert("authorization".to_string(), true);
        m.insert("user-agent".to_string(), true);
        m.insert("content-type".to_string(), true);
        m
    };
}
// static ignored_streaming_headers: LazyLock<HashMap<String, bool>> = LazyLock::new(|| {
//     let mut m = <HashMap<String, bool>>::new();
//     m.insert("authorization".to_string(), true);
//     m.insert("user-agent".to_string(), true);
//     m.insert("content-type".to_string(), true);
//     m
// });

#[allow(dead_code)]
fn build_chunk_string_to_sign(t: OffsetDateTime, region: &str, previous_sig: &str, chunk_check_sum: &str) -> String {

@@ -16,9 +16,9 @@ use bytes::BytesMut;
use http::HeaderMap;
use http::Uri;
use http::request;
use lazy_static::lazy_static;
use std::collections::HashMap;
use std::fmt::Write;
use std::sync::LazyLock;
use time::{OffsetDateTime, macros::format_description};
use tracing::debug;

@@ -32,15 +32,14 @@ pub const SIGN_V4_ALGORITHM: &str = "AWS4-HMAC-SHA256";
pub const SERVICE_TYPE_S3: &str = "s3";
pub const SERVICE_TYPE_STS: &str = "sts";

lazy_static! {
    static ref v4_ignored_headers: HashMap<String, bool> = {
        let mut m = <HashMap<String, bool>>::new();
        m.insert("accept-encoding".to_string(), true);
        m.insert("authorization".to_string(), true);
        m.insert("user-agent".to_string(), true);
        m
    };
}
#[allow(non_upper_case_globals)] // FIXME
static v4_ignored_headers: LazyLock<HashMap<String, bool>> = LazyLock::new(|| {
    let mut m = <HashMap<String, bool>>::new();
    m.insert("accept-encoding".to_string(), true);
    m.insert("authorization".to_string(), true);
    m.insert("user-agent".to_string(), true);
    m
});

pub fn get_signing_key(secret: &str, loc: &str, t: OffsetDateTime, service_type: &str) -> [u8; 32] {
    let mut s = "AWS4".to_string();

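The `#[allow(non_upper_case_globals)] // FIXME` above could later be dropped by renaming the static to the conventional upper case; a hedged sketch (the rename and its call-site updates are hypothetical, not part of this change):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// Hypothetical rename: SCREAMING_SNAKE_CASE satisfies the lint directly,
// so the #[allow(non_upper_case_globals)] attribute is no longer needed.
static V4_IGNORED_HEADERS: LazyLock<HashMap<String, bool>> = LazyLock::new(|| {
    let mut m = HashMap::new();
    m.insert("accept-encoding".to_string(), true);
    m.insert("authorization".to_string(), true);
    m.insert("user-agent".to_string(), true);
    m
});
```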
@@ -30,7 +30,6 @@ blake3 = { workspace = true, optional = true }
crc32fast.workspace = true
hex-simd = { workspace = true, optional = true }
highway = { workspace = true, optional = true }
lazy_static = { workspace = true, optional = true }
local-ip-address = { workspace = true, optional = true }
md-5 = { workspace = true, optional = true }
netif = { workspace = true, optional = true }
@@ -77,12 +76,12 @@ workspace = true
default = ["ip"] # features that are enabled by default
ip = ["dep:local-ip-address"] # ip characteristics and their dependencies
tls = ["dep:rustls", "dep:rustls-pemfile", "dep:rustls-pki-types"] # tls characteristics and their dependencies
net = ["ip", "dep:url", "dep:netif", "dep:lazy_static", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hyper-util"] # empty network features
net = ["ip", "dep:url", "dep:netif", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hyper-util"] # empty network features
io = ["dep:tokio"]
path = []
notify = ["dep:hyper", "dep:s3s"] # file system notification features
compress = ["dep:flate2", "dep:brotli", "dep:snap", "dep:lz4", "dep:zstd"]
string = ["dep:regex", "dep:lazy_static", "dep:rand"]
string = ["dep:regex", "dep:rand"]
crypto = ["dep:base64-simd", "dep:hex-simd", "dep:hmac", "dep:hyper", "dep:sha1"]
hash = ["dep:highway", "dep:md-5", "dep:sha2", "dep:blake3", "dep:serde", "dep:siphasher", "dep:hex-simd", "dep:base64-simd"]
os = ["dep:nix", "dep:tempfile", "winapi"] # operating system utilities

@@ -17,8 +17,8 @@ use futures::pin_mut;
use futures::{Stream, StreamExt};
use hyper::client::conn::http2::Builder;
use hyper_util::rt::TokioExecutor;
use lazy_static::lazy_static;
use std::net::Ipv6Addr;
use std::sync::LazyLock;
use std::{
    collections::HashSet,
    fmt::Display,
@@ -27,9 +27,7 @@ use std::{
use transform_stream::AsyncTryStream;
use url::{Host, Url};

lazy_static! {
    static ref LOCAL_IPS: Vec<IpAddr> = must_get_local_ips().unwrap();
}
static LOCAL_IPS: LazyLock<Vec<IpAddr>> = LazyLock::new(|| must_get_local_ips().unwrap());

/// helper for validating if the provided arg is an ip address.
pub fn is_socket_addr(addr: &str) -> bool {
@@ -178,7 +176,7 @@ impl Display for XHost {
impl TryFrom<String> for XHost {
    type Error = std::io::Error;

    fn try_from(value: String) -> std::result::Result<Self, Self::Error> {
    fn try_from(value: String) -> Result<Self, Self::Error> {
        if let Some(addr) = value.to_socket_addrs()?.next() {
            Ok(Self {
                name: addr.ip().to_string(),
@@ -214,9 +212,9 @@ pub fn parse_and_resolve_address(addr_str: &str) -> std::io::Result<SocketAddr>
}

#[allow(dead_code)]
pub fn bytes_stream<S, E>(stream: S, content_length: usize) -> impl Stream<Item = std::result::Result<Bytes, E>> + Send + 'static
pub fn bytes_stream<S, E>(stream: S, content_length: usize) -> impl Stream<Item = Result<Bytes, E>> + Send + 'static
where
    S: Stream<Item = std::result::Result<Bytes, E>> + Send + 'static,
    S: Stream<Item = Result<Bytes, E>> + Send + 'static,
    E: Send + 'static,
{
    AsyncTryStream::<Bytes, E, _>::new(|mut y| async move {

@@ -28,7 +28,7 @@ pub fn get_info(p: impl AsRef<Path>) -> std::io::Result<DiskInfo> {
    let path_display = p.as_ref().display();
    let path_wide: Vec<WCHAR> = p
        .as_ref()
        .canonicalize()?
        .to_path_buf()
        .into_os_string()
        .encode_wide()
        .chain(std::iter::once(0)) // Null-terminate the string
@@ -83,12 +83,21 @@ pub fn get_info(p: impl AsRef<Path>) -> std::io::Result<DiskInfo> {
        used: total - free,
        files: lp_total_number_of_clusters as u64,
        ffree: lp_number_of_free_clusters as u64,
        fstype: get_fs_type(&path_wide)?,

        // TODO: This field is currently unused, and since this logic causes a
        // NotFound error during startup on Windows systems, it has been commented out here.
        //
        // The error occurs in GetVolumeInformationW, where the path parameter
        // is of type [WCHAR; MAX_PATH]. For a drive letter, there are excessive
        // trailing zeros, which cause the failure here.
        //
        // fstype: get_fs_type(&path_wide)?,
        ..Default::default()
    })
}

/// Returns leading volume name.
#[allow(dead_code)]
fn get_volume_name(v: &[WCHAR]) -> std::io::Result<LPCWSTR> {
    let volume_name_size: DWORD = MAX_PATH as _;
    let mut lp_volume_name_buffer: [WCHAR; MAX_PATH] = [0; MAX_PATH];
@@ -102,12 +111,14 @@ fn get_volume_name(v: &[WCHAR]) -> std::io::Result<LPCWSTR> {
    Ok(lp_volume_name_buffer.as_ptr())
}

#[allow(dead_code)]
fn utf16_to_string(v: &[WCHAR]) -> String {
    let len = v.iter().position(|&x| x == 0).unwrap_or(v.len());
    String::from_utf16_lossy(&v[..len])
}

/// Returns the filesystem type of the underlying mounted filesystem
#[allow(dead_code)]
fn get_fs_type(p: &[WCHAR]) -> std::io::Result<String> {
    let path = get_volume_name(p)?;

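The commented-out `fstype` lookup fails because the fixed-size `[WCHAR; MAX_PATH]` buffer handed to `GetVolumeInformationW` carries excess trailing zeros. One possible remedy, sketched here without the winapi calls (a hypothetical helper, not part of this change), is to truncate the wide string at its first NUL before passing it on:

```rust
/// Truncate a wide (UTF-16) buffer at its first NUL, keeping exactly one
/// terminator, so fixed-size [WCHAR; MAX_PATH] buffers round-trip cleanly.
fn trim_wide_nuls(v: &[u16]) -> Vec<u16> {
    let len = v.iter().position(|&c| c == 0).unwrap_or(v.len());
    let mut out = v[..len].to_vec();
    out.push(0);
    out
}

fn main() {
    // A drive letter padded with the excess zeros described above.
    let raw: [u16; 8] = [b'C' as u16, b':' as u16, b'\\' as u16, 0, 0, 0, 0, 0];
    assert_eq!(trim_wide_nuls(&raw), vec![b'C' as u16, b':' as u16, b'\\' as u16, 0]);
}
```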
@@ -12,10 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use lazy_static::*;
use rand::{Rng, RngCore};
use regex::Regex;
use std::io::{Error, Result};
use std::sync::LazyLock;

pub fn parse_bool(str: &str) -> Result<bool> {
    match str {
@@ -116,9 +116,7 @@ pub fn match_as_pattern_prefix(pattern: &str, text: &str) -> bool {
    text.len() <= pattern.len()
}

lazy_static! {
    static ref ELLIPSES_RE: Regex = Regex::new(r"(.*)(\{[0-9a-z]*\.\.\.[0-9a-z]*\})(.*)").unwrap();
}
static ELLIPSES_RE: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"(.*)(\{[0-9a-z]*\.\.\.[0-9a-z]*\})(.*)").unwrap());

/// Ellipses constants
const OPEN_BRACES: &str = "{";

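For reference, a small sketch of how `ELLIPSES_RE` splits an ellipses pattern into prefix, token, and suffix (the sample path is illustrative):

```rust
use regex::Regex;
use std::sync::LazyLock;

static ELLIPSES_RE: LazyLock<Regex> =
    LazyLock::new(|| Regex::new(r"(.*)(\{[0-9a-z]*\.\.\.[0-9a-z]*\})(.*)").unwrap());

fn main() {
    // "{1...4}" is the ellipses token marking a range to expand.
    let caps = ELLIPSES_RE.captures("/data/disk{1...4}").unwrap();
    assert_eq!(&caps[1], "/data/disk");
    assert_eq!(&caps[2], "{1...4}");
    assert_eq!(&caps[3], "");
}
```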
@@ -25,17 +25,16 @@ managing and monitoring the system.
|--certs
|   ├── rustfs_cert.pem  // Default|fallback certificate
|   ├── rustfs_key.pem   // Default|fallback private key
|   ├── example.com/     // certificate directory of specific domain names
|   ├── rustfs.com/      // certificate directory of specific domain names
|   │   ├── rustfs_cert.pem
|   │   └── rustfs_key.pem
|   ├── api.example.com/
|   ├── api.rustfs.com/
|   │   ├── rustfs_cert.pem
|   │   └── rustfs_key.pem
|   └── cdn.example.com/
|   └── cdn.rustfs.com/
|       ├── rustfs_cert.pem
|       └── rustfs_key.pem
|--config
|   |--rustfs.env            // env config
|   |--rustfs-zh.env         // env config in Chinese
|   |--event.example.toml    // event config
```

@@ -36,15 +36,11 @@ Environment=RUSTFS_SECRET_KEY=rustfsadmin
ExecStart=/usr/local/bin/rustfs \
    --address 0.0.0.0:9000 \
    --volumes /data/rustfs/vol1,/data/rustfs/vol2 \
    --obs-config /etc/rustfs/obs.yaml \
    --console-enable \
    --console-address 0.0.0.0:9001
    --console-enable
# Defines the start command: run /usr/local/bin/rustfs with the arguments:
# --address 0.0.0.0:9000: the service listens on port 9000 on all interfaces.
# --volumes: sets the storage volume paths to /data/rustfs/vol1 and /data/rustfs/vol2.
# --obs-config: sets the configuration file path to /etc/rustfs/obs.yaml.
# --console-enable: enables the console.
# --console-address 0.0.0.0:9001: the console listens on port 9001 on all interfaces.

# Defines the environment variable configuration passed to the service; recommended for its simplicity.
# For a rustfs example file, see: `../config/rustfs-zh.env`

@@ -83,7 +83,6 @@ sudo journalctl -u rustfs --since today
```bash
# Check service ports
ss -tunlp | grep 9000
ss -tunlp | grep 9001

# Test service availability
curl -I http://localhost:9000

@@ -83,7 +83,6 @@ sudo journalctl -u rustfs --since today
```bash
# Check service ports
ss -tunlp | grep 9000
ss -tunlp | grep 9001

# Test service availability
curl -I http://localhost:9000

@@ -22,9 +22,7 @@ Environment=RUSTFS_SECRET_KEY=rustfsadmin
ExecStart=/usr/local/bin/rustfs \
    --address 0.0.0.0:9000 \
    --volumes /data/rustfs/vol1,/data/rustfs/vol2 \
    --obs-config /etc/rustfs/obs.yaml \
    --console-enable \
    --console-address 0.0.0.0:9001
    --console-enable

# environment variable configuration (Option 2: Use environment variables)
# rustfs example file see: `../config/rustfs.env`

@@ -36,13 +36,13 @@ cd deploy/certs/
ls -la
├── rustfs_cert.pem  // Default|fallback certificate
├── rustfs_key.pem   // Default|fallback private key
├── example.com/     // certificate directory of specific domain names
├── rustfs.com/      // certificate directory of specific domain names
│   ├── rustfs_cert.pem
│   └── rustfs_key.pem
├── api.example.com/
├── api.rustfs.com/
│   ├── rustfs_cert.pem
│   └── rustfs_key.pem
└── cdn.example.com/
└── cdn.rustfs.com/
    ├── rustfs_cert.pem
    └── rustfs_key.pem
```