Compare commits
90 Commits
1.0.0-alph ... 1.0.0-alph
| Author | SHA1 | Date |
|---|---|---|
| | e5d17f5382 | |
| | 982cc66c74 | |
| | 74bf4909c8 | |
| | 9c956b4445 | |
| | 4c1fc9317e | |
| | a9d77a618f | |
| | 38cdc87e93 | |
| | f5ff93b65e | |
| | 6ef6f188e5 | |
| | ccad91a4a9 | |
| | 63b79ae151 | |
| | 9284f64e2a | |
| | b9bbae27de | |
| | 36e3efb5a5 | |
| | 04d1c8724d | |
| | 4fb4b353f8 | |
| | 564a02f344 | |
| | 5b582a4234 | |
| | 2e9792577f | |
| | 2066e0a03b | |
| | a4d49a500f | |
| | a8fbced928 | |
| | 99ca405279 | |
| | 2e1d1018aa | |
| | c57b4be1c7 | |
| | 238a016242 | |
| | 2c0c7fafa3 | |
| | ee4962fe31 | |
| | 55895d0a10 | |
| | 676897d389 | |
| | 5205ff6695 | |
| | 15cf3ce92b | |
| | c0441b2412 | |
| | 6267872ddb | |
| | 618779a89d | |
| | b3ec2325ed | |
| | 49a5643e76 | |
| | 657395af8a | |
| | 4de62ed77e | |
| | 505f493729 | |
| | be05b704b0 | |
| | b33c2fa3cf | |
| | 98674c60d4 | |
| | e39eb86967 | |
| | 646070ae7a | |
| | 2525b66658 | |
| | 58c5a633e2 | |
| | aefd894fc2 | |
| | 1e1d4646a2 | |
| | b97845fffd | |
| | 84f5a4cb48 | |
| | 2832f0e089 | |
| | a3b5445824 | |
| | 363e37c791 | |
| | 1b0b041530 | |
| | 7d5fc87002 | |
| | 13130e9dd4 | |
| | 1061ce11a3 | |
| | 9f9a74000d | |
| | d1863018df | |
| | 166080aac8 | |
| | 78b2487639 | |
| | 79f4e81fea | |
| | 28da78d544 | |
| | df2eb9bc6a | |
| | 7c20d92fe5 | |
| | b4c316c662 | |
| | 411b511937 | |
| | c902475443 | |
| | 00d8008a89 | |
| | 36acb5bce9 | |
| | e033b019f6 | |
| | 259b80777e | |
| | abdfad8521 | |
| | c498fbcb27 | |
| | 874d486b1e | |
| | 21516251b0 | |
| | a2f83b0d2d | |
| | aa65766312 | |
| | 660f004cfd | |
| | 6d2c420f54 | |
| | 5f0b9a5fa8 | |
| | 8378e308e0 | |
| | b9f54519fd | |
| | 4108a9649f | |
| | 6244e23451 | |
| | 713b322f99 | |
| | e1a5a195c3 | |
| | bc37417d6c | |
| | 3dbcaaa221 | |
.cursorrules (77 changed lines)

```diff
@@ -1,19 +1,39 @@
 # RustFS Project Cursor Rules

-## ⚠️ CRITICAL DEVELOPMENT RULES ⚠️
+## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨

-### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
+### ⛔️ ABSOLUTE PROHIBITION: NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH ⛔️

-- **This is the most important rule - NEVER modify code directly on main or master branch**
-- **Always work on feature branches and use pull requests for all changes**
-- **Any direct commits to master/main branch are strictly forbidden**
-- Before starting any development, always:
-  1. `git checkout main` (switch to main branch)
-  2. `git pull` (get latest changes)
-  3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
-  4. Make your changes on the feature branch
-  5. Commit and push to the feature branch
-  6. Create a pull request for review
-
-**🔥 THIS IS THE MOST CRITICAL RULE - VIOLATION WILL RESULT IN IMMEDIATE REVERSAL 🔥**
+- **🚫 ZERO DIRECT COMMITS TO MAIN/MASTER BRANCH - ABSOLUTELY FORBIDDEN**
+- **🚫 ANY DIRECT COMMIT TO MAIN BRANCH MUST BE IMMEDIATELY REVERTED**
+- **🚫 NO EXCEPTIONS FOR HOTFIXES, EMERGENCIES, OR URGENT CHANGES**
+- **🚫 NO EXCEPTIONS FOR SMALL CHANGES, TYPOS, OR DOCUMENTATION UPDATES**
+- **🚫 NO EXCEPTIONS FOR ANYONE - MAINTAINERS, CONTRIBUTORS, OR ADMINS**
+
+### 📋 MANDATORY WORKFLOW - STRICTLY ENFORCED
+
+**EVERY SINGLE CHANGE MUST FOLLOW THIS WORKFLOW:**
+
+1. **Check current branch**: `git branch` (MUST NOT be on main/master)
+2. **Switch to main**: `git checkout main`
+3. **Pull latest**: `git pull origin main`
+4. **Create feature branch**: `git checkout -b feat/your-feature-name`
+5. **Make changes ONLY on feature branch**
+6. **Test thoroughly before committing**
+7. **Commit and push to feature branch**: `git push origin feat/your-feature-name`
+8. **Create Pull Request**: Use `gh pr create` (MANDATORY)
+9. **Wait for PR approval**: NO self-merging allowed
+10. **Merge through GitHub interface**: ONLY after approval
+
+### 🔒 ENFORCEMENT MECHANISMS
+
+- **Branch protection rules**: Main branch is protected
+- **Pre-commit hooks**: Will block direct commits to main
+- **CI/CD checks**: All PRs must pass before merging
+- **Code review requirement**: At least one approval needed
+- **Automated reversal**: Direct commits to main will be automatically reverted

 ## Project Overview
```
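The mandatory branch-first workflow described above condenses to the shell sequence below. This is a minimal sketch run in a throwaway repository; the branch name and commit messages are placeholders, and the push/PR steps are shown as comments since they require a remote and the `gh` CLI.

```shell
# Demonstrate the branch-first workflow in a throwaway repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"

# Never commit on main: create a feature branch first.
git checkout -q -b feat/your-feature-name   # placeholder branch name

# ...edit and test on the feature branch, then commit...
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "feat: describe your change"

# In the real repo you would now push and open a PR (requires a remote and gh):
#   git push origin feat/your-feature-name
#   gh pr create --base main --title "feat: describe your change"
git branch --show-current
```

The final command prints the current branch, confirming work never happened on `main`.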
```diff
@@ -514,7 +534,7 @@ let results = join_all(futures).await;

 ### 3. Caching Strategy

-- Use `lazy_static` or `OnceCell` for global caching
+- Use `LazyLock` for global caching
 - Implement LRU cache to avoid memory leaks

 ## Testing Guidelines
```
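For context, the `LazyLock` pattern the new rule refers to looks like the sketch below. `std::sync::LazyLock` (stable since Rust 1.80) replaces the older `lazy_static!` / `OnceCell` idioms for lazily initialized globals; the cache shape here is illustrative, not RustFS's actual type.

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, RwLock};

// Global cache, initialized on first access instead of at program start.
static CONFIG_CACHE: LazyLock<RwLock<HashMap<String, String>>> =
    LazyLock::new(|| RwLock::new(HashMap::new()));

fn cache_put(key: &str, value: &str) {
    CONFIG_CACHE.write().unwrap().insert(key.to_string(), value.to_string());
}

fn cache_get(key: &str) -> Option<String> {
    CONFIG_CACHE.read().unwrap().get(key).cloned()
}

fn main() {
    cache_put("region", "us-east-1");
    assert_eq!(cache_get("region").as_deref(), Some("us-east-1"));
    assert_eq!(cache_get("missing"), None);
}
```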
```diff
@@ -817,6 +837,7 @@ These rules should serve as guiding principles when developing the RustFS projec

 - **🚨 CRITICAL: NEVER modify code directly on main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
 - **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
 - **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
 - **Always work on feature branches - NO EXCEPTIONS**
 - Always check the .cursorrules file before starting to ensure you understand the project guidelines
 - **MANDATORY workflow for ALL changes:**
@@ -826,13 +847,39 @@ These rules should serve as guiding principles when developing the RustFS projec
   4. Make your changes ONLY on the feature branch
   5. Test thoroughly before committing
   6. Commit and push to the feature branch
-  7. Create a pull request for code review
+  7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
+  8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
 - Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
 - **Double-check current branch before ANY commit: `git branch` to ensure you're NOT on main/master**
 - Ensure all changes are made on feature branches and merged through pull requests
+- **Pull Request Requirements:**
+  - All changes must be submitted via PR regardless of size or urgency
+  - PRs must include comprehensive description and testing information
+  - PRs must pass all CI/CD checks before merging
+  - PRs require at least one approval from code reviewers
+  - Even hotfixes and emergency changes must go through PR process
+- **Enforcement:**
+  - Main branch should be protected with branch protection rules
+  - Direct pushes to main should be blocked by repository settings
+  - Any accidental direct commits to main must be immediately reverted via PR

 #### Development Workflow

+## 🎯 **Core Development Principles**
+
+- **🔴 Every change must be precise - don't modify unless you're confident**
+  - Carefully analyze code logic and ensure complete understanding before making changes
+  - When uncertain, prefer asking users or consulting documentation over blind modifications
+  - Use small iterative steps, modify only necessary parts at a time
+  - Evaluate impact scope before changes to ensure no new issues are introduced
+
+- **🚀 GitHub PR creation prioritizes gh command usage**
+  - Prefer using `gh pr create` command to create Pull Requests
+  - Avoid having users manually create PRs through web interface
+  - Provide clear and professional PR titles and descriptions
+  - Using `gh` commands ensures better integration and automation
+
+## 📝 **Code Quality Requirements**
+
+- Use English for all code comments, documentation, and variable names
+- Write meaningful and descriptive names for variables, functions, and methods
+- Avoid meaningless test content like "debug 111" or placeholder values
```
```diff
@@ -1,27 +0,0 @@
-FROM ubuntu:22.04
-
-ENV LANG C.UTF-8
-
-RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
-
-RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
-
-# install protoc
-RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
-    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
-    && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
-    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
-
-# install flatc
-RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
-    && unzip Linux.flatc.binary.g++-13.zip \
-    && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
-
-# install rust
-RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
-
-COPY .docker/cargo.config.toml /root/.cargo/config.toml
-
-WORKDIR /root/s3-rustfs
-
-CMD [ "bash", "-c", "while true; do sleep 1; done" ]
```
```diff
@@ -1,32 +0,0 @@
-FROM rockylinux:9.3 AS builder
-
-ENV LANG C.UTF-8
-
-RUN sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.ustc.edu.cn/rocky|g' \
-    -i.bak \
-    /etc/yum.repos.d/rocky-extras.repo \
-    /etc/yum.repos.d/rocky.repo
-
-RUN dnf makecache
-
-RUN yum install wget git unzip gcc openssl-devel pkgconf-pkg-config -y
-
-# install protoc
-RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
-    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
-    && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
-    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
-
-# install flatc
-RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
-    && unzip Linux.flatc.binary.g++-13.zip \
-    && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc \
-    && rm -rf Linux.flatc.binary.g++-13.zip
-
-# install rust
-RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
-
-COPY .docker/cargo.config.toml /root/.cargo/config.toml
-
-WORKDIR /root/s3-rustfs
```
.docker/README.md (new file, 199 lines)

# RustFS Docker Images

This directory contains Docker configuration files and supporting infrastructure for building and running RustFS container images.

## 📁 Directory Structure

```
rustfs/
├── Dockerfile          # Production image (Alpine + GitHub Releases)
├── Dockerfile.source   # Source build (Debian + cross-compilation)
├── cargo.config.toml   # Rust cargo configuration
├── docker-buildx.sh    # Multi-architecture build script
└── .docker/            # Supporting infrastructure
    ├── observability/      # Monitoring and observability configs
    ├── compose/            # Docker Compose configurations
    ├── mqtt/               # MQTT broker configs
    └── openobserve-otel/   # OpenObserve + OpenTelemetry configs
```

## 🎯 Image Variants

### Core Images

| Image | Base OS | Build Method | Size | Use Case |
|-------|---------|--------------|------|----------|
| `production` (default) | Alpine 3.18 | GitHub Releases | Smallest | Production deployment |
| `source` | Debian Bookworm | Source build | Medium | Custom builds with cross-compilation |
| `dev` | Debian Bookworm | Development tools | Large | Interactive development |
## 🚀 Usage Examples

### Quick Start (Production)

```bash
# Default production image (Alpine + GitHub Releases)
docker run -p 9000:9000 rustfs/rustfs:latest

# Specific version
docker run -p 9000:9000 rustfs/rustfs:1.2.3
```

### Complete Tag Strategy Examples

```bash
# Stable Releases
docker run rustfs/rustfs:1.2.3              # Main version (production)
docker run rustfs/rustfs:1.2.3-production   # Explicit production variant
docker run rustfs/rustfs:1.2.3-source       # Source build variant
docker run rustfs/rustfs:latest             # Latest stable

# Prerelease Versions
docker run rustfs/rustfs:1.3.0-alpha.2      # Specific alpha version
docker run rustfs/rustfs:alpha              # Latest alpha
docker run rustfs/rustfs:beta               # Latest beta
docker run rustfs/rustfs:rc                 # Latest release candidate

# Development Versions
docker run rustfs/rustfs:dev                # Latest main branch development
docker run rustfs/rustfs:dev-13e4a0b        # Specific commit
docker run rustfs/rustfs:dev-latest         # Latest development
docker run rustfs/rustfs:main-latest        # Main branch latest
```
### Development Environment

```bash
# Start development container
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs/rustfs:latest-dev

# Build from source locally
docker build -f Dockerfile.source -t rustfs:custom .

# Development with hot reload
docker-compose up rustfs-dev
```
## 🏗️ Build Arguments and Scripts

### Using docker-buildx.sh (Recommended)

For multi-architecture builds, use the provided script:

```bash
# Build latest version for all architectures
./docker-buildx.sh

# Build and push to registry
./docker-buildx.sh --push

# Build specific version
./docker-buildx.sh --release v1.2.3

# Build and push specific version
./docker-buildx.sh --release v1.2.3 --push
```

### Manual Docker Builds

All images support dynamic version selection:

```bash
# Build production image with latest release
docker build --build-arg RELEASE="latest" -t rustfs:latest .

# Build from source with specific target
docker build -f Dockerfile.source \
  --build-arg TARGETPLATFORM="linux/amd64" \
  -t rustfs:source .

# Development build
docker build -f Dockerfile.source -t rustfs:dev .
```
## 🔧 Binary Download Sources

### Unified GitHub Releases

The production image downloads from GitHub Releases for reliability and transparency:

- ✅ **production** → GitHub Releases API with automatic latest detection
- ✅ **Checksum verification** → SHA256SUMS validation when available
- ✅ **Multi-architecture** → Supports amd64 and arm64

### Source Build

The source variant compiles from source code with advanced features:

- 🔧 **Cross-compilation** → Supports multiple target platforms via `TARGETPLATFORM`
- ⚡ **Build caching** → sccache for faster compilation
- 🎯 **Optimized builds** → Release optimizations with LTO and symbol stripping

## 📋 Architecture Support

All variants support multi-architecture builds:

- **linux/amd64** (x86_64)
- **linux/arm64** (aarch64)

Architecture is automatically detected during build using Docker's `TARGETARCH` build argument.
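The `TARGETARCH`-to-artifact mapping and the SHA256SUMS check can be sketched as below. The artifact naming (`rustfs-<triple>.zip`) is an assumption for illustration, not the documented release layout; check the actual release assets.

```shell
# Map Docker's TARGETARCH build argument to a Rust target triple.
TARGETARCH="${TARGETARCH:-amd64}"
case "$TARGETARCH" in
  amd64) ARCH_TRIPLE="x86_64-unknown-linux-musl" ;;
  arm64) ARCH_TRIPLE="aarch64-unknown-linux-musl" ;;
  *) echo "unsupported arch: $TARGETARCH" >&2; exit 1 ;;
esac
echo "artifact: rustfs-${ARCH_TRIPLE}.zip"   # hypothetical artifact name

# Checksum verification, when SHA256SUMS is published alongside the artifact:
#   grep "rustfs-${ARCH_TRIPLE}.zip" SHA256SUMS | sha256sum -c -
```

With no `TARGETARCH` in the environment, the default branch selects the amd64 triple.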
## 🔐 Security Features

- **Checksum Verification**: Production image verifies SHA256SUMS when available
- **Non-root User**: All images run as user `rustfs` (UID 1000)
- **Minimal Runtime**: Production image only includes necessary dependencies
- **Secure Defaults**: No hardcoded credentials or keys
## 🛠️ Development Workflow

For local development and testing:

```bash
# Quick development setup
docker-compose up rustfs-dev

# Custom source build
docker build -f Dockerfile.source -t rustfs:custom .

# Run with development tools
docker run -it -v $(pwd):/workspace rustfs:custom bash
```
## 🚀 CI/CD Integration

The project uses GitHub Actions for automated multi-architecture Docker builds:

### Automated Builds

- **Tags**: Automatic builds triggered on version tags (e.g., `v1.2.3`)
- **Main Branch**: Development builds with `dev-latest` and `main-latest` tags
- **Pull Requests**: Test builds without registry push

### Build Variants

Each build creates three image variants:

- `rustfs/rustfs:v1.2.3` (production - Alpine-based)
- `rustfs/rustfs:v1.2.3-source` (source build - Debian-based)
- `rustfs/rustfs:v1.2.3-dev` (development - Debian-based with tools)

### Manual Builds

Trigger custom builds via GitHub Actions:

```bash
# Use workflow_dispatch to build specific versions
# Available options: latest, main-latest, dev-latest, v1.2.3, dev-abc123
```

## 📦 Supporting Infrastructure

The `.docker/` directory contains supporting configuration files:

- **observability/** - Prometheus, Grafana, OpenTelemetry configs
- **compose/** - Multi-service Docker Compose setups
- **mqtt/** - MQTT broker configurations
- **openobserve-otel/** - Log aggregation and tracing setup

See individual README files in each subdirectory for specific usage instructions.
```diff
@@ -1,25 +1,40 @@
-FROM ubuntu:22.04
+FROM alpine:3.18

 ENV LANG C.UTF-8

-RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
-
-RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
+# Install base dependencies
+RUN apk add --no-cache \
+    wget \
+    git \
+    curl \
+    unzip \
+    gcc \
+    musl-dev \
+    pkgconfig \
+    openssl-dev \
+    dbus-dev \
+    wayland-dev \
+    webkit2gtk-4.1-dev \
+    build-base \
+    linux-headers

 # install protoc
-RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
-    && unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
+RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v30.2/protoc-30.2-linux-x86_64.zip \
+    && unzip protoc-30.2-linux-x86_64.zip -d protoc3 \
     && mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
-    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
+    && mv protoc3/include/* /usr/local/include/ && rm -rf protoc-30.2-linux-x86_64.zip protoc3

 # install flatc
-RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
+RUN wget https://github.com/google/flatbuffers/releases/download/v24.3.25/Linux.flatc.binary.g++-13.zip \
     && unzip Linux.flatc.binary.g++-13.zip \
     && mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip

 # install rust
 RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

+# Set PATH for rust
+ENV PATH="/root/.cargo/bin:${PATH}"
+
 COPY .docker/cargo.config.toml /root/.cargo/config.toml

-WORKDIR /root/s3-rustfs
+WORKDIR /root/rustfs
```
```diff
@@ -1,19 +0,0 @@
-# Copyright 2024 RustFS Team
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-[source.crates-io]
-registry = "https://github.com/rust-lang/crates.io-index"
-
-[net]
-git-fetch-with-cli = true
```
.docker/compose/README.md (new file, 80 lines)
# Docker Compose Configurations

This directory contains specialized Docker Compose configurations for different use cases.

## 📁 Configuration Files

This directory contains specialized Docker Compose configurations and their associated Dockerfiles, keeping related files organized together.

### Main Configuration (Root Directory)

- **`../../docker-compose.yml`** - **Default Production Setup**
  - Complete production-ready configuration
  - Includes RustFS server + full observability stack
  - Supports multiple profiles: `dev`, `observability`, `cache`, `proxy`
  - Recommended for most users

### Specialized Configurations

- **`docker-compose.cluster.yaml`** - **Distributed Testing**
  - 4-node cluster setup for testing distributed storage
  - Uses local compiled binaries
  - Simulates multi-node environment
  - Ideal for development and cluster testing

- **`docker-compose.observability.yaml`** - **Observability Focus**
  - Specialized setup for testing observability features
  - Includes OpenTelemetry, Jaeger, Prometheus, Loki, Grafana
  - Uses `../../Dockerfile.source` for builds
  - Perfect for observability development
## 🚀 Usage Examples

### Production Setup

```bash
# Start main service
docker-compose up -d

# Start with development profile
docker-compose --profile dev up -d

# Start with full observability
docker-compose --profile observability up -d
```

### Cluster Testing

```bash
# Build and start 4-node cluster
cd .docker/compose
docker-compose -f docker-compose.cluster.yaml up -d

# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.cluster.yaml up -d
```

### Observability Testing

```bash
# Start observability-focused environment
cd .docker/compose
docker-compose -f docker-compose.observability.yaml up -d

# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.observability.yaml up -d
```
## 🔧 Configuration Overview

| Configuration | Nodes | Storage | Observability | Use Case |
|---------------|-------|---------|---------------|----------|
| **Main** | 1 | Volume mounts | Full stack | Production |
| **Cluster** | 4 | HTTP endpoints | Basic | Testing |
| **Observability** | 4 | Local data | Advanced | Development |

## 📝 Notes

- Always ensure you have built the required binaries before starting cluster tests
- The main configuration is sufficient for most use cases
- Specialized configurations are for specific testing scenarios
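The first note above (build the binaries before cluster tests) can be enforced with a small guard script; the path below matches the bind mount used in the cluster compose file, and the suggested `cargo build` invocation is an assumption about how the musl binary is produced.

```shell
# The cluster compose file bind-mounts this musl release binary into each node,
# so it must exist on the host before `docker-compose ... up`.
BIN="target/x86_64-unknown-linux-musl/release/rustfs"
if [ -x "$BIN" ]; then
  echo "found $BIN"
else
  echo "missing $BIN - run: cargo build --release --target x86_64-unknown-linux-musl"
fi
```

Run it from the project root; a missing binary prints the build command instead of letting the containers start with an empty mount.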
```diff
@@ -14,70 +14,69 @@

 services:
   node0:
-    image: rustfs:v1 # 替换为你的镜像名称和标签
+    image: rustfs/rustfs:latest # Replace with your image name and label
     container_name: node0
     hostname: node0
     environment:
       - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
       - RUSTFS_ADDRESS=0.0.0.0:9000
       - RUSTFS_CONSOLE_ENABLE=true
       - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
       - RUSTFS_ACCESS_KEY=rustfsadmin
       - RUSTFS_SECRET_KEY=rustfsadmin
     platform: linux/amd64
     ports:
-      - "9000:9000" # 映射宿主机的 9001 端口到容器的 9000 端口
-      - "8000:9001" # 映射宿主机的 9001 端口到容器的 9000 端口
+      - "9000:9000" # Map port 9001 of the host to port 9000 of the container
     volumes:
-      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
-      # - ./data/node0:/data # Mount the current path to /root/data inside the container
+      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
     command: "/app/rustfs"

   node1:
-    image: rustfs:v1
+    image: rustfs/rustfs:latest
     container_name: node1
     hostname: node1
     environment:
       - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
       - RUSTFS_ADDRESS=0.0.0.0:9000
       - RUSTFS_CONSOLE_ENABLE=true
       - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
       - RUSTFS_ACCESS_KEY=rustfsadmin
       - RUSTFS_SECRET_KEY=rustfsadmin
     platform: linux/amd64
     ports:
-      - "9001:9000" # 映射宿主机的 9002 端口到容器的 9000 端口
+      - "9001:9000" # Map port 9002 of the host to port 9000 of the container
     volumes:
-      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
-      # - ./data/node1:/data
+      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
     command: "/app/rustfs"

   node2:
-    image: rustfs:v1
+    image: rustfs/rustfs:latest
     container_name: node2
     hostname: node2
     environment:
       - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
       - RUSTFS_ADDRESS=0.0.0.0:9000
       - RUSTFS_CONSOLE_ENABLE=true
       - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
       - RUSTFS_ACCESS_KEY=rustfsadmin
       - RUSTFS_SECRET_KEY=rustfsadmin
     platform: linux/amd64
     ports:
-      - "9002:9000" # 映射宿主机的 9003 端口到容器的 9000 端口
+      - "9002:9000" # Map port 9003 of the host to port 9000 of the container
     volumes:
-      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
-      # - ./data/node2:/data
+      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
     command: "/app/rustfs"

   node3:
-    image: rustfs:v1
+    image: rustfs/rustfs:latest
     container_name: node3
     hostname: node3
     environment:
       - RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
       - RUSTFS_ADDRESS=0.0.0.0:9000
       - RUSTFS_CONSOLE_ENABLE=true
       - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
       - RUSTFS_ACCESS_KEY=rustfsadmin
       - RUSTFS_SECRET_KEY=rustfsadmin
     platform: linux/amd64
     ports:
-      - "9003:9000" # 映射宿主机的 9004 端口到容器的 9000 端口
+      - "9003:9000" # Map port 9004 of the host to port 9000 of the container
     volumes:
-      - ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
-      # - ./data/node3:/data
+      - ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
     command: "/app/rustfs"
```
```diff
@@ -14,11 +14,11 @@

 services:
   otel-collector:
-    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
+    image: otel/opentelemetry-collector-contrib:0.129.1
     environment:
       - TZ=Asia/Shanghai
     volumes:
-      - ./.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
+      - ../../.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
     ports:
       - 1888:1888
       - 8888:8888
```
```diff
@@ -30,7 +30,7 @@ services:
     networks:
       - rustfs-network
   jaeger:
-    image: jaegertracing/jaeger:2.6.0
+    image: jaegertracing/jaeger:2.8.0
     environment:
       - TZ=Asia/Shanghai
     ports:
```
```diff
@@ -40,11 +40,11 @@ services:
     networks:
       - rustfs-network
   prometheus:
-    image: prom/prometheus:v3.4.1
+    image: prom/prometheus:v3.4.2
     environment:
       - TZ=Asia/Shanghai
     volumes:
-      - ./.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
+      - ../../.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
     ports:
       - "9090:9090"
     networks:
```
```diff
@@ -54,16 +54,16 @@ services:
     environment:
       - TZ=Asia/Shanghai
     volumes:
-      - ./.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
+      - ../../.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
     ports:
       - "3100:3100"
     command: -config.file=/etc/loki/local-config.yaml
     networks:
       - rustfs-network
   grafana:
-    image: grafana/grafana:12.0.1
+    image: grafana/grafana:12.0.2
     ports:
-      - "3000:3000" # Web UI
+      - "3000:3000"  # Web UI
     environment:
       - GF_SECURITY_ADMIN_PASSWORD=admin
       - TZ=Asia/Shanghai
```
@@ -72,85 +72,69 @@ services:
|
||||
|
||||
node1:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile.obs
|
||||
context: ../..
|
||||
dockerfile: Dockerfile.source
|
||||
container_name: node1
|
||||
environment:
|
||||
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
|
||||
- RUSTFS_ADDRESS=:9000
|
||||
- RUSTFS_CONSOLE_ENABLE=true
|
||||
- RUSTFS_CONSOLE_ADDRESS=:9002
|
||||
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
|
||||
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
|
||||
- RUSTFS_OBS_LOGGER_LEVEL=debug
|
||||
platform: linux/amd64
|
    ports:
      - "9001:9000" # Map port 9001 of the host to port 9000 of the container
      - "9101:9002"
    volumes:
      # - ./data:/root/data # Mount the current path to /root/data inside the container
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node2:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node2
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9002:9000" # Map port 9002 of the host to port 9000 of the container
      - "9102:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node3:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node3
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9003:9000" # Map port 9003 of the host to port 9000 of the container
      - "9103:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

  node4:
    build:
      context: .
      dockerfile: Dockerfile.obs
      context: ../..
      dockerfile: Dockerfile.source
    container_name: node4
    environment:
      - RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
      - RUSTFS_ADDRESS=:9000
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_CONSOLE_ADDRESS=:9002
      - RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
      - RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
      - RUSTFS_OBS_LOGGER_LEVEL=debug
    platform: linux/amd64
    ports:
      - "9004:9000" # Map port 9004 of the host to port 9000 of the container
      - "9104:9002"
    volumes:
      # - ./data:/root/data
      - ./.docker/observability/config:/etc/observability/config
    networks:
      - rustfs-network

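The `RUSTFS_VOLUMES` value above uses brace-ellipsis notation (`node{1...4}`, `test{1...4}`). Assuming it expands the way MinIO-style ellipsis syntax does — each pattern expands independently, giving the cartesian product of nodes and volume paths — the resulting endpoint list can be sketched as follows (the `expand_endpoints` helper is illustrative, not part of RustFS; check the RustFS documentation for the authoritative expansion rules):

```shell
#!/bin/sh
# Hypothetical expansion of:
#   http://node{1...4}:9000/root/data/target/volume/test{1...4}
# assuming MinIO-style ellipsis semantics (4 nodes x 4 volumes = 16 endpoints).
expand_endpoints() {
  for n in 1 2 3 4; do
    for t in 1 2 3 4; do
      echo "http://node${n}:9000/root/data/target/volume/test${t}"
    done
  done
}
expand_endpoints
```

Every container receives the same `RUSTFS_VOLUMES` value, so under this reading each node sees the full 16-drive topology.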
@@ -13,24 +13,40 @@
# limitations under the License.

services:

  tempo:
    image: grafana/tempo:latest
    #user: root # The container must be started with root to execute chown in the script
    #entrypoint: [ "/etc/tempo/entrypoint.sh" ] # Specify a custom entry point
    command: [ "-config.file=/etc/tempo.yaml" ] # This is passed as a parameter to the entry point script
    volumes:
      - ./tempo-entrypoint.sh:/etc/tempo/entrypoint.sh # Mount entry point script
      - ./tempo.yaml:/etc/tempo.yaml
      - ./tempo-data:/var/tempo
    ports:
      - "3200:3200" # tempo
      - "24317:4317" # otlp grpc
    networks:
      - otel-network

  otel-collector:
    image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
    image: otel/opentelemetry-collector-contrib:0.129.1
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - 1888:1888
      - 8888:8888
      - 8889:8889
      - 13133:13133
      - 4317:4317
      - 4318:4318
      - 55679:55679
      - "1888:1888"
      - "8888:8888"
      - "8889:8889"
      - "13133:13133"
      - "4317:4317"
      - "4318:4318"
      - "55679:55679"
    networks:
      - otel-network
  jaeger:
    image: jaegertracing/jaeger:2.7.0
    image: jaegertracing/jaeger:2.8.0
    environment:
      - TZ=Asia/Shanghai
    ports:
@@ -40,7 +56,7 @@ services:
    networks:
      - otel-network
  prometheus:
    image: prom/prometheus:v3.4.1
    image: prom/prometheus:v3.4.2
    environment:
      - TZ=Asia/Shanghai
    volumes:
@@ -64,6 +80,8 @@ services:
    image: grafana/grafana:12.0.2
    ports:
      - "3000:3000" # Web UI
    volumes:
      - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - TZ=Asia/Shanghai

32
.docker/observability/grafana-datasources.yaml
Normal file
@@ -0,0 +1,32 @@
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    access: proxy
    orgId: 1
    url: http://prometheus:9090
    basicAuth: false
    isDefault: false
    version: 1
    editable: false
    jsonData:
      httpMethod: GET
  - name: Tempo
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo:3200
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
    apiVersion: 1
    uid: tempo
    jsonData:
      httpMethod: GET
      serviceMap:
        datasourceUid: prometheus
      streamingEnabled:
        search: true
@@ -33,6 +33,10 @@ exporters:
      endpoint: "jaeger:4317" # Jaeger's OTLP gRPC endpoint
      tls:
        insecure: true # TLS disabled for development; configure certificates in production
  otlp/tempo: # OTLP exporter for trace data
    endpoint: "tempo:4317" # Tempo's OTLP gRPC endpoint
    tls:
      insecure: true # TLS disabled for development; configure certificates in production
  prometheus: # Prometheus exporter for metrics data
    endpoint: "0.0.0.0:8889" # Prometheus scrape endpoint
    namespace: "rustfs" # Metric name prefix
@@ -53,7 +57,7 @@ service:
    traces:
      receivers: [ otlp ]
      processors: [ memory_limiter,batch ]
      exporters: [ otlp/traces ]
      exporters: [ otlp/traces,otlp/tempo ]
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]
@@ -66,6 +70,12 @@ service:
    logs:
      level: "info" # Collector log level
    metrics:
      address: "0.0.0.0:8888" # Expose the Collector's own metrics
      level: "detailed" # Can be basic, normal, or detailed
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://otel-collector:4318


@@ -18,8 +18,11 @@ global:
scrape_configs:
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:8888'] # Scrape metrics from the Collector
      - targets: [ 'otel-collector:8888' ] # Scrape metrics from the Collector
  - job_name: 'otel-metrics'
    static_configs:
      - targets: ['otel-collector:8889'] # Application metrics
      - targets: [ 'otel-collector:8889' ] # Application metrics
  - job_name: 'tempo'
    static_configs:
      - targets: [ 'tempo:3200' ]

1
.docker/observability/tempo-data/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
*
8
.docker/observability/tempo-entrypoint.sh
Executable file
@@ -0,0 +1,8 @@
#!/bin/sh
# Run as root to fix directory permissions
chown -R 10001:10001 /var/tempo

# Use su-exec (a lightweight sudo/gosu alternative, commonly used in Alpine images)
# Switch to user 10001 and execute the original command (CMD) passed to the script
# "$@" represents all parameters passed to this script, i.e. command in docker-compose
exec su-exec 10001:10001 /tempo "$@"
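The script above follows the common "fix permissions as root, then drop privileges and exec" entrypoint pattern. A minimal sketch of the same argument-forwarding idea, with a stub standing in for `su-exec` (which only exists inside the Alpine image):

```shell
#!/bin/sh
# Stand-in for the real entrypoint: shows how "$@" forwards the
# docker-compose `command:` array to the final binary unchanged.
fake_entrypoint() {
  # Real script does: chown -R 10001:10001 /var/tempo
  #                   exec su-exec 10001:10001 /tempo "$@"
  echo "/tempo $*"
}
fake_entrypoint -config.file=/etc/tempo.yaml
```

Because the script ends in `exec`, the Tempo process replaces the shell and becomes PID 1's direct child, so container signals reach it directly.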
55
.docker/observability/tempo.yaml
Normal file
@@ -0,0 +1,55 @@
stream_over_http_enabled: true
server:
  http_listen_port: 3200
  log_level: info

query_frontend:
  search:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
  metadata_slo:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
  trace_by_id:
    duration_slo: 5s

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "tempo:4317"

ingester:
  max_block_duration: 5m # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces

storage:
  trace:
    backend: local # backend configuration to use
    wal:
      path: /var/tempo/wal # where to store the wal locally
    local:
      path: /var/tempo/blocks

overrides:
  defaults:
    metrics_generator:
      processors: [ service-graphs, span-metrics, local-blocks ] # enables metrics generator
      generate_native_histograms: both
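The `throughput_bytes_slo` value `1.073741824e+09` is simply 1 GiB expressed in bytes (2^30), a convention used throughout Tempo's example configs. A one-line sanity check:

```shell
# 1.073741824e+09 bytes == 2^30 bytes == 1 GiB
gib=$((1 << 30))
echo "$gib" # prints 1073741824
```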
16
.github/actions/setup/action.yml
vendored
@@ -60,18 +60,11 @@ runs:
            pkg-config \
            libssl-dev

    - name: Cache protoc binary
      id: cache-protoc
      uses: actions/cache@v4
      with:
        path: ~/.local/bin/protoc
        key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}

    - name: Install protoc
      if: steps.cache-protoc.outputs.cache-hit != 'true'
      uses: arduino/setup-protoc@v3
      with:
        version: "31.1"
        repo-token: ${{ inputs.github-token }}

    - name: Install flatc
      uses: Nugine/setup-flatc@v1
@@ -93,6 +86,9 @@ runs:
      if: inputs.install-cross-tools == 'true'
      uses: taiki-e/install-action@cargo-zigbuild

    - name: Install cargo-nextest
      uses: taiki-e/install-action@cargo-nextest

    - name: Setup Rust cache
      uses: Swatinem/rust-cache@v2
      with:
@@ -100,7 +96,3 @@
        cache-on-failure: true
        shared-key: ${{ inputs.cache-shared-key }}
        save-if: ${{ inputs.cache-save-if }}
        # Cache workspace dependencies
        workspaces: |
          . -> target
          cli/rustfs-gui -> cli/rustfs-gui/target

546
.github/workflows/build.yml
vendored
@@ -16,24 +16,38 @@ name: Build and Release
|
||||
|
||||
on:
|
||||
push:
|
||||
tags: ["*"]
|
||||
tags: ["*.*.*"]
|
||||
branches: [main]
|
||||
paths:
|
||||
- "rustfs/**"
|
||||
- "cli/**"
|
||||
- "crates/**"
|
||||
- "Cargo.toml"
|
||||
- "Cargo.lock"
|
||||
- ".github/workflows/build.yml"
|
||||
paths-ignore:
|
||||
- "**.md"
|
||||
- "**.txt"
|
||||
- ".github/**"
|
||||
- "docs/**"
|
||||
- "deploy/**"
|
||||
- "scripts/dev_*.sh"
|
||||
- "LICENSE*"
|
||||
- "README*"
|
||||
- "**/*.png"
|
||||
- "**/*.jpg"
|
||||
- "**/*.svg"
|
||||
- ".gitignore"
|
||||
- ".dockerignore"
|
||||
pull_request:
|
||||
branches: [main]
|
||||
paths:
|
||||
- "rustfs/**"
|
||||
- "cli/**"
|
||||
- "crates/**"
|
||||
- "Cargo.toml"
|
||||
- "Cargo.lock"
|
||||
- ".github/workflows/build.yml"
|
||||
paths-ignore:
|
||||
- "**.md"
|
||||
- "**.txt"
|
||||
- ".github/**"
|
||||
- "docs/**"
|
||||
- "deploy/**"
|
||||
- "scripts/dev_*.sh"
|
||||
- "LICENSE*"
|
||||
- "README*"
|
||||
- "**/*.png"
|
||||
- "**/*.jpg"
|
||||
- "**/*.svg"
|
||||
- ".gitignore"
|
||||
- ".dockerignore"
|
||||
schedule:
|
||||
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
|
||||
workflow_dispatch:
|
||||
@@ -51,60 +65,85 @@ env:
|
||||
CARGO_INCREMENTAL: 0
|
||||
|
||||
jobs:
|
||||
# First layer: GitHub Actions level optimization (handling duplicates and concurrency)
|
||||
skip-duplicate:
|
||||
name: Skip Duplicate Actions
|
||||
runs-on: ubuntu-latest
|
||||
outputs:
|
||||
should_skip: ${{ steps.skip_check.outputs.should_skip }}
|
||||
steps:
|
||||
- name: Skip duplicate actions
|
||||
id: skip_check
|
||||
uses: fkirc/skip-duplicate-actions@v5
|
||||
with:
|
||||
concurrent_skipping: "same_content_newer"
|
||||
cancel_others: true
|
||||
paths_ignore: '["*.md", "docs/**", "deploy/**", "scripts/dev_*.sh"]'
|
||||
|
||||
# Second layer: Business logic level checks (handling build strategy)
|
||||
# Build strategy check - determine build type based on trigger
|
||||
build-check:
|
||||
name: Build Strategy Check
|
||||
needs: skip-duplicate
|
||||
if: needs.skip-duplicate.outputs.should_skip != 'true'
|
||||
runs-on: ubuntu-latest
|
||||
outputs:
|
||||
should_build: ${{ steps.check.outputs.should_build }}
|
||||
build_type: ${{ steps.check.outputs.build_type }}
|
||||
version: ${{ steps.check.outputs.version }}
|
||||
short_sha: ${{ steps.check.outputs.short_sha }}
|
||||
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Determine build strategy
|
||||
id: check
|
||||
run: |
|
||||
should_build=false
|
||||
build_type="none"
|
||||
version=""
|
||||
short_sha=""
|
||||
is_prerelease=false
|
||||
|
||||
# Business logic: when we need to build
|
||||
if [[ "${{ github.event_name }}" == "schedule" ]] || \
|
||||
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
|
||||
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
|
||||
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
|
||||
# Get short SHA for all builds
|
||||
short_sha=$(git rev-parse --short HEAD)
|
||||
|
||||
# Determine build type based on trigger
|
||||
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
|
||||
# Tag push - release or prerelease
|
||||
should_build=true
|
||||
tag_name="${GITHUB_REF#refs/tags/}"
|
||||
version="${tag_name}"
|
||||
|
||||
# Check if this is a prerelease
|
||||
if [[ "$tag_name" == *"alpha"* ]] || [[ "$tag_name" == *"beta"* ]] || [[ "$tag_name" == *"rc"* ]]; then
|
||||
build_type="prerelease"
|
||||
is_prerelease=true
|
||||
echo "🚀 Prerelease build detected: $tag_name"
|
||||
else
|
||||
build_type="release"
|
||||
echo "📦 Release build detected: $tag_name"
|
||||
fi
|
||||
elif [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
|
||||
# Main branch push - development build
|
||||
should_build=true
|
||||
build_type="development"
|
||||
fi
|
||||
|
||||
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
|
||||
version="dev-${short_sha}"
|
||||
echo "🛠️ Development build detected"
|
||||
elif [[ "${{ github.event_name }}" == "schedule" ]] || \
|
||||
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
|
||||
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
|
||||
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
|
||||
# Scheduled or manual build
|
||||
should_build=true
|
||||
build_type="release"
|
||||
build_type="development"
|
||||
version="dev-${short_sha}"
|
||||
echo "⚡ Manual/scheduled build detected"
|
||||
fi
|
||||
|
||||
echo "should_build=$should_build" >> $GITHUB_OUTPUT
|
||||
echo "build_type=$build_type" >> $GITHUB_OUTPUT
|
||||
echo "Build needed: $should_build (type: $build_type)"
|
||||
echo "version=$version" >> $GITHUB_OUTPUT
|
||||
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
|
||||
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "📊 Build Summary:"
|
||||
echo " - Should build: $should_build"
|
||||
echo " - Build type: $build_type"
|
||||
echo " - Version: $version"
|
||||
echo " - Short SHA: $short_sha"
|
||||
echo " - Is prerelease: $is_prerelease"
|
||||
|
||||
# Build RustFS binaries
|
||||
build-rustfs:
|
||||
name: Build RustFS
|
||||
needs: [skip-duplicate, build-check]
|
||||
if: needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.should_build == 'true'
|
||||
needs: [build-check]
|
||||
if: needs.build-check.outputs.should_build == 'true'
|
||||
runs-on: ${{ matrix.os }}
|
||||
timeout-minutes: 60
|
||||
env:
|
||||
@@ -113,15 +152,33 @@ jobs:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
include:
|
||||
# Linux builds
|
||||
- os: ubuntu-latest
|
||||
target: x86_64-unknown-linux-musl
|
||||
cross: false
|
||||
platform: linux
|
||||
- os: ubuntu-latest
|
||||
target: aarch64-unknown-linux-musl
|
||||
cross: true
|
||||
platform: linux
|
||||
# macOS builds
|
||||
- os: macos-latest
|
||||
target: aarch64-apple-darwin
|
||||
cross: false
|
||||
platform: macos
|
||||
- os: macos-latest
|
||||
target: x86_64-apple-darwin
|
||||
cross: false
|
||||
platform: macos
|
||||
# # Windows builds (temporarily disabled)
|
||||
# - os: windows-latest
|
||||
# target: x86_64-pc-windows-msvc
|
||||
# cross: false
|
||||
# platform: windows
|
||||
# - os: windows-latest
|
||||
# target: aarch64-pc-windows-msvc
|
||||
# cross: true
|
||||
# platform: windows
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
@@ -141,225 +198,266 @@ jobs:
|
||||
- name: Download static console assets
|
||||
run: |
|
||||
mkdir -p ./rustfs/static
|
||||
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
|
||||
-o console.zip --retry 3 --retry-delay 5 --max-time 300
|
||||
unzip -o console.zip -d ./rustfs/static
|
||||
rm console.zip
|
||||
if [[ "${{ matrix.platform }}" == "windows" ]]; then
|
||||
curl.exe -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o console.zip --retry 3 --retry-delay 5 --max-time 300
|
||||
if [[ $? -eq 0 ]]; then
|
||||
unzip -o console.zip -d ./rustfs/static
|
||||
rm console.zip
|
||||
else
|
||||
echo "Warning: Failed to download console assets, continuing without them"
|
||||
echo "// Static assets not available" > ./rustfs/static/empty.txt
|
||||
fi
|
||||
else
|
||||
chmod +w ./rustfs/static/LICENSE || true
|
||||
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
|
||||
-o console.zip --retry 3 --retry-delay 5 --max-time 300
|
||||
if [[ $? -eq 0 ]]; then
|
||||
unzip -o console.zip -d ./rustfs/static
|
||||
rm console.zip
|
||||
else
|
||||
echo "Warning: Failed to download console assets, continuing without them"
|
||||
echo "// Static assets not available" > ./rustfs/static/empty.txt
|
||||
fi
|
||||
fi
|
||||
|
||||
- name: Build RustFS
|
||||
run: |
|
||||
# Force rebuild by touching build.rs
|
||||
touch rustfs/build.rs
|
||||
|
||||
if [[ "${{ matrix.cross }}" == "true" ]]; then
|
||||
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
if [[ "${{ matrix.platform }}" == "windows" ]]; then
|
||||
# Use cross for Windows ARM64
|
||||
cargo install cross --git https://github.com/cross-rs/cross
|
||||
cross build --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
else
|
||||
# Use zigbuild for Linux ARM64
|
||||
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
fi
|
||||
else
|
||||
cargo build --release --target ${{ matrix.target }} -p rustfs --bins
|
||||
fi
|
||||
|
||||
- name: Create release package
|
||||
id: package
|
||||
shell: bash
|
||||
run: |
|
||||
PACKAGE_NAME="rustfs-${{ matrix.target }}"
|
||||
mkdir -p "${PACKAGE_NAME}"/{bin,docs}
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
|
||||
|
||||
# Copy binary
|
||||
if [[ "${{ matrix.target }}" == *"windows"* ]]; then
|
||||
cp target/${{ matrix.target }}/release/rustfs.exe "${PACKAGE_NAME}/bin/"
|
||||
# Extract platform and arch from target
|
||||
TARGET="${{ matrix.target }}"
|
||||
PLATFORM="${{ matrix.platform }}"
|
||||
|
||||
# Map target to architecture
|
||||
case "$TARGET" in
|
||||
*x86_64*)
|
||||
ARCH="x86_64"
|
||||
;;
|
||||
*aarch64*|*arm64*)
|
||||
ARCH="aarch64"
|
||||
;;
|
||||
*armv7*)
|
||||
ARCH="armv7"
|
||||
;;
|
||||
*)
|
||||
ARCH="unknown"
|
||||
;;
|
||||
esac
|
||||
|
||||
# Generate package name based on build type
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
# Development build: rustfs-${platform}-${arch}-dev-${short_sha}.zip
|
||||
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-dev-${SHORT_SHA}"
|
||||
else
|
||||
cp target/${{ matrix.target }}/release/rustfs "${PACKAGE_NAME}/bin/"
|
||||
chmod +x "${PACKAGE_NAME}/bin/rustfs"
|
||||
# Release/Prerelease build: rustfs-${platform}-${arch}-v${version}.zip
|
||||
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${VERSION}"
|
||||
fi
|
||||
|
||||
# Copy documentation
|
||||
[ -f "LICENSE" ] && cp LICENSE "${PACKAGE_NAME}/docs/"
|
||||
[ -f "README.md" ] && cp README.md "${PACKAGE_NAME}/docs/"
|
||||
# Create zip packages for all platforms
|
||||
# Ensure zip is available
|
||||
if ! command -v zip &> /dev/null; then
|
||||
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then
|
||||
sudo apt-get update && sudo apt-get install -y zip
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create archive
|
||||
tar -czf "${PACKAGE_NAME}.tar.gz" "${PACKAGE_NAME}"
|
||||
cd target/${{ matrix.target }}/release
|
||||
zip "../../../${PACKAGE_NAME}.zip" rustfs
|
||||
cd ../../..
|
||||
|
||||
echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
|
||||
echo "Package created: ${PACKAGE_NAME}.tar.gz"
|
||||
echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
|
||||
echo "build_type=${BUILD_TYPE}" >> $GITHUB_OUTPUT
|
||||
echo "version=${VERSION}" >> $GITHUB_OUTPUT
|
||||
|
||||
echo "📦 Package created: ${PACKAGE_NAME}.zip"
|
||||
echo "🔧 Build type: ${BUILD_TYPE}"
|
||||
echo "📊 Version: ${VERSION}"
|
||||
|
||||
- name: Upload artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: ${{ steps.package.outputs.package_name }}
|
||||
path: ${{ steps.package.outputs.package_name }}.tar.gz
|
||||
path: ${{ steps.package.outputs.package_file }}
|
||||
retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
|
||||
|
||||
# Build GUI (only for releases)
|
||||
build-gui:
|
||||
name: Build GUI
|
||||
needs: [skip-duplicate, build-check, build-rustfs]
|
||||
if: needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release'
|
||||
runs-on: ${{ matrix.os }}
|
||||
timeout-minutes: 45
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
include:
|
||||
- os: ubuntu-latest
|
||||
target: x86_64-unknown-linux-musl
|
||||
platform: linux
|
||||
- os: macos-latest
|
||||
target: aarch64-apple-darwin
|
||||
platform: macos
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Rust environment
|
||||
uses: ./.github/actions/setup
|
||||
with:
|
||||
rust-version: stable
|
||||
target: ${{ matrix.target }}
|
||||
cache-shared-key: gui-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Download RustFS binary
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
name: rustfs-${{ matrix.target }}
|
||||
path: ./artifacts
|
||||
|
||||
- name: Prepare embedded binary
|
||||
- name: Upload to Aliyun OSS
|
||||
if: env.OSS_ACCESS_KEY_ID != '' && (needs.build-check.outputs.build_type == 'release' || needs.build-check.outputs.build_type == 'prerelease' || needs.build-check.outputs.build_type == 'development')
|
||||
env:
|
||||
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
|
||||
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
|
||||
OSS_REGION: cn-beijing
|
||||
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
|
||||
run: |
|
||||
mkdir -p ./cli/rustfs-gui/embedded-rustfs/
|
||||
tar -xzf ./artifacts/rustfs-${{ matrix.target }}.tar.gz -C ./artifacts/
|
||||
cp ./artifacts/rustfs-${{ matrix.target }}/bin/rustfs ./cli/rustfs-gui/embedded-rustfs/
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
|
||||
- name: Install Dioxus CLI
|
||||
uses: taiki-e/cache-cargo-install-action@v2
|
||||
with:
|
||||
tool: dioxus-cli
|
||||
|
||||
- name: Build GUI
|
||||
working-directory: ./cli/rustfs-gui
|
||||
run: |
|
||||
# Install ossutil (platform-specific)
|
||||
OSSUTIL_VERSION="2.1.1"
|
||||
case "${{ matrix.platform }}" in
|
||||
"linux")
|
||||
dx bundle --platform linux --package-types deb --package-types appimage --release
|
||||
linux)
|
||||
if [[ "$(uname -m)" == "arm64" ]]; then
|
||||
ARCH="arm64"
|
||||
else
|
||||
ARCH="amd64"
|
||||
fi
|
||||
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}.zip"
|
||||
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}"
|
||||
|
||||
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
|
||||
unzip "$OSSUTIL_ZIP"
|
||||
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
|
||||
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
|
||||
chmod +x /usr/local/bin/ossutil
|
||||
OSSUTIL_BIN=ossutil
|
||||
;;
|
||||
"macos")
|
||||
dx bundle --platform macos --package-types dmg --release
|
||||
macos)
|
||||
if [[ "$(uname -m)" == "arm64" ]]; then
|
||||
ARCH="arm64"
|
||||
else
|
||||
ARCH="amd64"
|
||||
fi
|
||||
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}.zip"
|
||||
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}"
|
||||
|
||||
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
|
||||
unzip "$OSSUTIL_ZIP"
|
||||
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
|
||||
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
|
||||
chmod +x /usr/local/bin/ossutil
|
||||
OSSUTIL_BIN=ossutil
|
||||
;;
|
||||
esac
|
||||
|
||||
- name: Package GUI
|
||||
id: gui_package
|
||||
run: |
|
||||
GUI_PACKAGE="rustfs-gui-${{ matrix.target }}"
|
||||
mkdir -p "${GUI_PACKAGE}"
|
||||
|
||||
# Copy GUI bundles
|
||||
if [[ -d "cli/rustfs-gui/dist/bundle" ]]; then
|
||||
cp -r cli/rustfs-gui/dist/bundle/* "${GUI_PACKAGE}/"
|
||||
# Determine upload path based on build type
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/dev/"
|
||||
echo "📤 Uploading development build to OSS dev directory"
|
||||
else
|
||||
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/release/"
|
||||
echo "📤 Uploading release build to OSS release directory"
|
||||
fi
|
||||
|
||||
tar -czf "${GUI_PACKAGE}.tar.gz" "${GUI_PACKAGE}"
|
||||
echo "gui_package=${GUI_PACKAGE}" >> $GITHUB_OUTPUT
|
||||
# Upload the package file to OSS
|
||||
echo "Uploading ${{ steps.package.outputs.package_file }} to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" "$OSS_PATH" --force
|
||||
|
||||
- name: Upload GUI artifacts
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: ${{ steps.gui_package.outputs.gui_package }}
|
||||
path: ${{ steps.gui_package.outputs.gui_package }}.tar.gz
|
||||
retention-days: 30
|
||||
# For release and prerelease builds, also create a latest version
|
||||
if [[ "$BUILD_TYPE" == "release" ]] || [[ "$BUILD_TYPE" == "prerelease" ]]; then
|
||||
# Extract platform and arch from package name
|
||||
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
|
||||
|
||||
# Release management
|
||||
release:
|
||||
name: GitHub Release
|
||||
needs: [skip-duplicate, build-check, build-rustfs, build-gui]
|
||||
if: always() && needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release'
|
||||
# Create latest version filename
|
||||
# Convert from rustfs-linux-x86_64-v1.0.0 to rustfs-linux-x86_64-latest
|
||||
LATEST_FILE="${PACKAGE_NAME%-v*}-latest.zip"
|
||||
|
||||
# Copy the original file to latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$LATEST_FILE"
|
||||
|
||||
# Upload the latest version
|
||||
echo "Uploading latest version: $LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Latest version uploaded: $LATEST_FILE"
|
||||
fi
|
||||
|
||||
# For development builds, create dev-latest version
|
||||
if [[ "$BUILD_TYPE" == "development" ]]; then
|
||||
# Extract platform and arch from package name
|
||||
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
|
||||
|
||||
# Create dev-latest version filename
|
||||
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-dev-latest
|
||||
DEV_LATEST_FILE="${PACKAGE_NAME%-*}-latest.zip"
|
||||
|
||||
# Copy the original file to dev-latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$DEV_LATEST_FILE"
|
||||
|
||||
# Upload the dev-latest version
|
||||
echo "Uploading dev-latest version: $DEV_LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$DEV_LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Dev-latest version uploaded: $DEV_LATEST_FILE"
|
||||
|
||||
# For main branch builds, also create a main-latest version
|
||||
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
|
||||
# Create main-latest version filename
|
||||
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-main-latest
|
||||
MAIN_LATEST_FILE="${PACKAGE_NAME%-dev-*}-main-latest.zip"
|
||||
|
||||
# Copy the original file to main-latest version
|
||||
cp "${{ steps.package.outputs.package_file }}" "$MAIN_LATEST_FILE"
|
||||
|
||||
# Upload the main-latest version
|
||||
echo "Uploading main-latest version: $MAIN_LATEST_FILE to $OSS_PATH..."
|
||||
$OSSUTIL_BIN cp "$MAIN_LATEST_FILE" "$OSS_PATH" --force
|
||||
|
||||
echo "✅ Main-latest version uploaded: $MAIN_LATEST_FILE"
|
||||
|
||||
# Also create a generic main-latest for Docker builds
|
||||
if [[ "${{ matrix.platform }}" == "linux" ]]; then
|
||||
DOCKER_MAIN_LATEST_FILE="rustfs-linux-${{ matrix.target == 'x86_64-unknown-linux-musl' && 'x86_64' || 'aarch64' }}-main-latest.zip"
|
||||
|
||||
cp "${{ steps.package.outputs.package_file }}" "$DOCKER_MAIN_LATEST_FILE"
|
||||
$OSSUTIL_BIN cp "$DOCKER_MAIN_LATEST_FILE" "$OSS_PATH" --force
|
||||
echo "✅ Docker main-latest version uploaded: $DOCKER_MAIN_LATEST_FILE"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "✅ Upload completed successfully"
|
||||
|
||||
# Build summary
|
||||
build-summary:
|
||||
name: Build Summary
|
||||
needs: [build-check, build-rustfs]
|
||||
if: always() && needs.build-check.outputs.should_build == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Download all artifacts
|
||||
uses: actions/download-artifact@v4
|
||||
with:
|
||||
path: ./release-artifacts
|
||||
|
||||
- name: Prepare release
|
||||
id: release_prep
|
||||
- name: Build completion summary
|
||||
run: |
|
||||
VERSION="${GITHUB_REF#refs/tags/}"
|
||||
VERSION_CLEAN="${VERSION#v}"
|
||||
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
|
||||
VERSION="${{ needs.build-check.outputs.version }}"
|
||||
|
||||
echo "version=${VERSION}" >> $GITHUB_OUTPUT
|
||||
echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT
|
||||
echo "🎉 Build completed successfully!"
|
||||
echo "📦 Build type: $BUILD_TYPE"
|
||||
echo "🔢 Version: $VERSION"
|
||||
echo ""
|
||||
|
||||
# Organize artifacts
|
||||
mkdir -p ./release-files
|
||||
find ./release-artifacts -name "*.tar.gz" -exec cp {} ./release-files/ \;
|
||||
|
||||
# Create release notes
|
||||
cat > release_notes.md << EOF
|
||||
## RustFS ${VERSION_CLEAN}
|
||||
|
||||
### 🚀 Downloads
|
||||
|
||||
**Linux:**
|
||||
- \`rustfs-x86_64-unknown-linux-musl.tar.gz\` - Linux x86_64 (static)
|
||||
- \`rustfs-aarch64-unknown-linux-musl.tar.gz\` - Linux ARM64 (static)
|
||||
|
||||
**macOS:**
|
||||
- \`rustfs-aarch64-apple-darwin.tar.gz\` - macOS Apple Silicon
|
||||
|
||||
**GUI Applications:**
|
||||
- \`rustfs-gui-*.tar.gz\` - GUI applications
|
||||
|
||||
### 📦 Installation
|
||||
|
||||
1. Download the appropriate binary for your platform
|
||||
2. Extract: \`tar -xzf rustfs-*.tar.gz\`
|
||||
3. Install: \`sudo cp rustfs-*/bin/rustfs /usr/local/bin/\`
|
||||

### 🔗 Mirror Downloads

- [OSS Mirror](https://rustfs-artifacts.oss-cn-beijing.aliyuncs.com/artifacts/rustfs/)
EOF

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          tag_name: ${{ steps.release_prep.outputs.version }}
          name: "RustFS ${{ steps.release_prep.outputs.version_clean }}"
          body_path: release_notes.md
          files: ./release-files/*.tar.gz
          draft: false
          prerelease: ${{ contains(steps.release_prep.outputs.version, 'alpha') || contains(steps.release_prep.outputs.version, 'beta') || contains(steps.release_prep.outputs.version, 'rc') }}

  # Upload to OSS (optional)
  upload-oss:
    name: Upload to OSS
    needs: [skip-duplicate, build-check, build-rustfs]
    if: always() && needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release' && needs.build-rustfs.result == 'success'
    runs-on: ubuntu-latest
    env:
      OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
      OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          path: ./artifacts

      - name: Upload to Aliyun OSS
        if: ${{ env.OSS_ACCESS_KEY_ID != '' }}
        run: |
          # Install ossutil
          curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-linux-amd64.zip
          unzip ossutil.zip
          sudo mv ossutil-*/ossutil /usr/local/bin/

          # Upload files
          find ./artifacts -name "*.tar.gz" -exec ossutil cp {} oss://rustfs-artifacts/artifacts/rustfs/ --force \;

          # Create latest.json
          VERSION="${GITHUB_REF#refs/tags/v}"
          echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
          ossutil cp latest.json oss://rustfs-version/latest.json --force

          case "$BUILD_TYPE" in
            "development")
              echo "🛠️ Development build artifacts have been uploaded to OSS dev directory"
              echo "⚠️ This is a development build - not suitable for production use"
              ;;
            "release")
              echo "🚀 Release build artifacts have been uploaded to OSS release directory"
              echo "✅ This build is ready for production use"
              echo "🏷️ GitHub Release will be created automatically by the release workflow"
              ;;
            "prerelease")
              echo "🧪 Prerelease build artifacts have been uploaded to OSS release directory"
              echo "⚠️ This is a prerelease build - use with caution"
              echo "🏷️ GitHub Release will be created automatically by the release workflow"
              ;;
          esac
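The one-line `latest.json` written in the step above can be read back with plain POSIX tools. A minimal sketch, assuming the exact JSON shape the workflow emits; the helper name is illustrative:

```shell
# Extract the "version" field from the one-line latest.json shape
# written by the upload step above.
latest_version() {
  sed -n 's/.*"version":"\([^"]*\)".*/\1/p' "$1"
}

# Recreate the file shape the workflow uploads, then read it back.
printf '{"version":"1.0.0-alpha.1","release_date":"2024-01-01T00:00:00Z"}' > /tmp/latest.json
latest_version /tmp/latest.json
```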
.github/workflows/ci.yml (6 changes)

@@ -80,6 +80,8 @@ jobs:
          concurrent_skipping: "same_content_newer"
          cancel_others: true
          paths_ignore: '["*.md", "docs/**", "deploy/**"]'
          # Never skip release events and tag pushes
          do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'

  test-and-lint:
    name: Test and Lint
@@ -100,7 +102,9 @@ jobs:
          cache-save-if: ${{ github.ref == 'refs/heads/main' }}

      - name: Run tests
        run: cargo test --all --exclude e2e_test
        run: |
          cargo nextest run --all --exclude e2e_test
          cargo test --all --doc

      - name: Check code formatting
        run: cargo fmt --all --check
.github/workflows/docker.yml (395 changes)

@@ -16,26 +16,38 @@ name: Docker Images

on:
  push:
    tags: ["*"]
    tags: ["*.*.*"]
    branches: [main]
    paths:
      - "rustfs/**"
      - "crates/**"
      - "Dockerfile*"
      - ".docker/**"
      - "Cargo.toml"
      - "Cargo.lock"
      - ".github/workflows/docker.yml"
    paths-ignore:
      - "**.md"
      - "**.txt"
      - ".github/**"
      - "docs/**"
      - "deploy/**"
      - "scripts/dev_*.sh"
      - "LICENSE*"
      - "README*"
      - "**/*.png"
      - "**/*.jpg"
      - "**/*.svg"
      - ".gitignore"
      - ".dockerignore"
  pull_request:
    branches: [main]
    paths:
      - "rustfs/**"
      - "crates/**"
      - "Dockerfile*"
      - ".docker/**"
      - "Cargo.toml"
      - "Cargo.lock"
      - ".github/workflows/docker.yml"
    paths-ignore:
      - "**.md"
      - "**.txt"
      - ".github/**"
      - "docs/**"
      - "deploy/**"
      - "scripts/dev_*.sh"
      - "LICENSE*"
      - "README*"
      - "**/*.png"
      - "**/*.jpg"
      - "**/*.svg"
      - ".gitignore"
      - ".dockerignore"
  workflow_dispatch:
    inputs:
      push_images:
@@ -43,6 +55,16 @@ on:
        required: false
        default: true
        type: boolean
      version:
        description: "Version to build (latest, main-latest, dev-latest, or specific version like v1.0.0 or dev-abc123)"
        required: false
        default: "main-latest"
        type: string
      force_rebuild:
        description: "Force rebuild even if binary exists (useful for testing)"
        required: false
        default: false
        type: boolean

env:
  CARGO_TERM_COLOR: always
@@ -50,19 +72,37 @@ env:
  REGISTRY_GHCR: ghcr.io/${{ github.repository }}

jobs:
  # Check if we should build
  # Docker build strategy check
  build-check:
    name: Build Check
    name: Docker Build Check
    runs-on: ubuntu-latest
    outputs:
      should_build: ${{ steps.check.outputs.should_build }}
      should_push: ${{ steps.check.outputs.should_push }}
      build_type: ${{ steps.check.outputs.build_type }}
      version: ${{ steps.check.outputs.version }}
      short_sha: ${{ steps.check.outputs.short_sha }}
      is_prerelease: ${{ steps.check.outputs.is_prerelease }}
      create_latest: ${{ steps.check.outputs.create_latest }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Check build conditions
        id: check
        run: |
          should_build=false
          should_push=false
          build_type="none"
          version=""
          short_sha=""
          is_prerelease=false
          create_latest=false

          # Get short SHA for all builds
          short_sha=$(git rev-parse --short HEAD)

          # Always build on workflow_dispatch or when changes detected
          if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
@@ -71,6 +111,80 @@ jobs:
            should_build=true
          fi

          # Determine build type and version
          if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] && [[ -n "${{ github.event.inputs.version }}" ]]; then
            # Manual trigger with version input
            input_version="${{ github.event.inputs.version }}"
            version="${input_version}"
            force_rebuild="${{ github.event.inputs.force_rebuild }}"

            echo "🎯 Manual Docker build triggered:"
            echo "  📋 Requested version: $input_version"
            echo "  🔧 Force rebuild: $force_rebuild"

            case "$input_version" in
              "latest")
                build_type="release"
                create_latest=true
                echo "🚀 Building with latest stable release version"
                ;;
              "main-latest")
                build_type="development"
                version="main-latest"
                echo "🛠️ Building with main branch latest development version"
                ;;
              "dev-latest")
                build_type="development"
                version="dev-latest"
                echo "🛠️ Building with development latest version"
                ;;
              v[0-9]*)
                build_type="release"
                create_latest=true
                echo "📦 Building with specific release version: $input_version"
                ;;
              v*alpha*|v*beta*|v*rc*)
                build_type="prerelease"
                is_prerelease=true
                echo "🧪 Building with prerelease version: $input_version"
                ;;
              dev-[a-f0-9]*)
                build_type="development"
                echo "🔧 Building with specific development version: $input_version"
                ;;
              *)
                build_type="development"
                echo "🔧 Building with custom version: $input_version"
                echo "⚠️ Warning: Custom version format may not follow standard patterns"
                ;;
            esac
          elif [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
            # Tag push - release or prerelease
            tag_name="${GITHUB_REF#refs/tags/}"
            version="${tag_name}"

            # Check if this is a prerelease
            if [[ "$tag_name" == *"alpha"* ]] || [[ "$tag_name" == *"beta"* ]] || [[ "$tag_name" == *"rc"* ]]; then
              build_type="prerelease"
              is_prerelease=true
              echo "🚀 Docker prerelease build detected: $tag_name"
            else
              build_type="release"
              create_latest=true
              echo "📦 Docker release build detected: $tag_name"
            fi
          elif [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            # Main branch push - development build
            build_type="development"
            version="dev-${short_sha}"
            echo "🛠️ Docker development build detected"
          else
            # Other branches - development build
            build_type="development"
            version="dev-${short_sha}"
            echo "🔧 Docker development build detected"
          fi

          # Push only on main branch, tags, or manual trigger
          if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
             [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
@@ -80,7 +194,20 @@ jobs:

          echo "should_build=$should_build" >> $GITHUB_OUTPUT
          echo "should_push=$should_push" >> $GITHUB_OUTPUT
          echo "Build: $should_build, Push: $should_push"
          echo "build_type=$build_type" >> $GITHUB_OUTPUT
          echo "version=$version" >> $GITHUB_OUTPUT
          echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
          echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
          echo "create_latest=$create_latest" >> $GITHUB_OUTPUT

          echo "🐳 Docker Build Summary:"
          echo "  - Should build: $should_build"
          echo "  - Should push: $should_push"
          echo "  - Build type: $build_type"
          echo "  - Version: $version"
          echo "  - Short SHA: $short_sha"
          echo "  - Is prerelease: $is_prerelease"
          echo "  - Create latest: $create_latest"

  # Build multi-arch Docker images
  build-docker:
@@ -96,55 +223,119 @@ jobs:
          - name: production
            dockerfile: Dockerfile
            platforms: linux/amd64,linux/arm64
          - name: ubuntu
            dockerfile: .docker/Dockerfile.ubuntu22.04
            platforms: linux/amd64,linux/arm64
          - name: alpine
            dockerfile: .docker/Dockerfile.alpine
            platforms: linux/amd64,linux/arm64
          #- name: source
          #  dockerfile: Dockerfile.source
          #  platforms: linux/amd64,linux/arm64
          #- name: dev
          #  dockerfile: Dockerfile.source
          #  platforms: linux/amd64,linux/arm64
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Login to Docker Hub
        if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
          scopes: repository:rustfs/rustfs:pull,push

      - name: Login to GitHub Container Registry
        if: needs.build-check.outputs.should_push == 'true'
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # - name: Login to GitHub Container Registry
      #   uses: docker/login-action@v3
      #   with:
      #     registry: ghcr.io
      #     username: ${{ github.actor }}
      #     password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Extract metadata and generate tags
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            ${{ env.REGISTRY_DOCKERHUB }}
            ${{ env.REGISTRY_GHCR }}
          tags: |
            type=ref,event=branch,suffix=-${{ matrix.variant.name }}
            type=ref,event=pr,suffix=-${{ matrix.variant.name }}
            type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
            type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
            type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
          flavor: |
            latest=false
        run: |
          BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
          VERSION="${{ needs.build-check.outputs.version }}"
          SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
          CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
          VARIANT="${{ matrix.variant.name }}"

          # Generate tags based on build type
          TAGS=""

          if [[ "$BUILD_TYPE" == "development" ]]; then
            # Development build: dev-${short_sha}-${variant} and dev-${variant}
            TAGS="${{ env.REGISTRY_DOCKERHUB }}:dev-${SHORT_SHA}-${VARIANT}"

            # Add rolling dev tag for each variant
            TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:dev-${VARIANT}"

            # Special handling for production variant
            if [[ "$VARIANT" == "production" ]]; then
              TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:dev-${SHORT_SHA}"
              TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:dev"
            fi
          else
            # Release/Prerelease build: ${version}-${variant}
            TAGS="${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-${VARIANT}"

            # Special handling for production variant - create main version tag
            if [[ "$VARIANT" == "production" ]]; then
              TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${VERSION}"
            fi

            # Add channel tags for prereleases and latest for stable
            if [[ "$CREATE_LATEST" == "true" ]]; then
              # Stable release
              if [[ "$VARIANT" == "production" ]]; then
                TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
              else
                TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest-${VARIANT}"
              fi
            elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
              # Prerelease channel tags (alpha, beta, rc)
              if [[ "$VERSION" == *"alpha"* ]]; then
                CHANNEL="alpha"
              elif [[ "$VERSION" == *"beta"* ]]; then
                CHANNEL="beta"
              elif [[ "$VERSION" == *"rc"* ]]; then
                CHANNEL="rc"
              fi

              if [[ -n "$CHANNEL" ]]; then
                if [[ "$VARIANT" == "production" ]]; then
                  TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${CHANNEL}"
                else
                  TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${CHANNEL}-${VARIANT}"
                fi
              fi
            fi
          fi

          # Output tags
          echo "tags=$TAGS" >> $GITHUB_OUTPUT

          # Generate labels
          LABELS="org.opencontainers.image.title=RustFS"
          LABELS="$LABELS,org.opencontainers.image.description=RustFS distributed object storage system"
          LABELS="$LABELS,org.opencontainers.image.version=$VERSION"
          LABELS="$LABELS,org.opencontainers.image.revision=${{ github.sha }}"
          LABELS="$LABELS,org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}"
          LABELS="$LABELS,org.opencontainers.image.created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
          LABELS="$LABELS,org.opencontainers.image.variant=$VARIANT"
          LABELS="$LABELS,org.opencontainers.image.build-type=$BUILD_TYPE"

          echo "labels=$LABELS" >> $GITHUB_OUTPUT

          echo "🐳 Generated Docker tags:"
          echo "$TAGS" | tr ',' '\n' | sed 's/^/  - /'
          echo "📋 Build type: $BUILD_TYPE"
          echo "🔖 Version: $VERSION"

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ matrix.variant.dockerfile }}
@@ -152,47 +343,89 @@ jobs:
          push: ${{ needs.build-check.outputs.should_push == 'true' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
          cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
          cache-from: |
            type=gha,scope=docker-${{ matrix.variant.name }}
          cache-to: |
            type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
          build-args: |
            BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
            VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
            REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
            BUILDTIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
            VERSION=${{ needs.build-check.outputs.version }}
            BUILD_TYPE=${{ needs.build-check.outputs.build_type }}
            REVISION=${{ github.sha }}
            BUILDKIT_INLINE_CACHE=1
          # Enable advanced BuildKit features for better performance
          provenance: false
          sbom: false
          # Add retry mechanism by splitting the build process
          no-cache: false
          pull: true

  # Create manifest for main production image
  # Create manifest for main production image (only for stable releases)
  create-manifest:
    name: Create Manifest
    needs: [build-check, build-docker]
    if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
    if: needs.build-check.outputs.should_push == 'true' && needs.build-check.outputs.create_latest == 'true' && needs.build-check.outputs.build_type == 'release'
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        if: secrets.DOCKERHUB_USERNAME != ''
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # - name: Login to GitHub Container Registry
      #   uses: docker/login-action@v3
      #   with:
      #     registry: ghcr.io
      #     username: ${{ github.actor }}
      #     password: ${{ secrets.GITHUB_TOKEN }}

      - name: Create and push manifest
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          VERSION="${{ needs.build-check.outputs.version }}"

          # Create main image tag (without variant suffix)
          if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
            docker buildx imagetools create \
              -t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
              -t ${{ env.REGISTRY_DOCKERHUB }}:latest \
              ${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
          fi
          echo "🐳 Creating manifest for stable release: $VERSION"

          docker buildx imagetools create \
            -t ${{ env.REGISTRY_GHCR }}:${VERSION} \
            -t ${{ env.REGISTRY_GHCR }}:latest \
            ${{ env.REGISTRY_GHCR }}:${VERSION}-production
          # Create main image tag (without variant suffix) for stable releases only
          # Note: The "production" variant already creates the main tags without suffix
          echo "Manifest creation is handled by the production variant build step"
          echo "Main tags ${VERSION} and latest are created directly by the production variant"

          echo "✅ Manifest created successfully for stable release"

  # Docker build summary
  docker-summary:
    name: Docker Build Summary
    needs: [build-check, build-docker]
    if: always() && needs.build-check.outputs.should_build == 'true'
    runs-on: ubuntu-latest
    steps:
      - name: Docker build completion summary
        run: |
          BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
          VERSION="${{ needs.build-check.outputs.version }}"
          CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"

          echo "🐳 Docker build completed successfully!"
          echo "📦 Build type: $BUILD_TYPE"
          echo "🔢 Version: $VERSION"
          echo ""

          case "$BUILD_TYPE" in
            "development")
              echo "🛠️ Development Docker images have been built with dev-${VERSION} tags"
              echo "⚠️ These are development images - not suitable for production use"
              ;;
            "release")
              echo "🚀 Release Docker images have been built with v${VERSION} tags"
              echo "✅ These images are ready for production use"
              if [[ "$CREATE_LATEST" == "true" ]]; then
                echo "🏷️ Latest tags have been created for stable release"
              fi
              ;;
            "prerelease")
              echo "🧪 Prerelease Docker images have been built with v${VERSION} tags"
              echo "⚠️ These are prerelease images - use with caution"
              echo "🚫 Latest tags NOT created for prerelease"
              ;;
          esac
.github/workflows/issue-translator.yml (24 changes)

@@ -1,8 +1,22 @@
name: 'issue-translator'
on:
  issue_comment:
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: "issue-translator"
on:
  issue_comment:
    types: [created]
  issues:
    types: [opened]

jobs:
@@ -14,5 +28,5 @@ jobs:
      IS_MODIFY_TITLE: false
      # not required, default false. Decide whether to modify the issue title.
      # If true, the robot account @Issues-translate-bot must have modification permissions; invite @Issues-translate-bot to your project or use your custom bot.
      CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
      # not required. Customize the translation robot's prefix message.
.github/workflows/performance.yml (17 changes)

@@ -18,10 +18,10 @@ on:
  push:
    branches: [main]
    paths:
      - "rustfs/**"
      - "crates/**"
      - "Cargo.toml"
      - "Cargo.lock"
      - "**/*.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"
      - ".github/workflows/performance.yml"
  workflow_dispatch:
    inputs:
      profile_duration:
@@ -73,12 +73,11 @@ jobs:
          echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
          echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV

      - name: Download static files
      - name: Verify console static assets
        run: |
          curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
            -o tempfile.zip --retry 3 --retry-delay 5
          unzip -o tempfile.zip -d ./rustfs/static
          rm tempfile.zip
          # Console static assets are already embedded in the repository
          echo "Console static assets size: $(du -sh rustfs/static/)"
          echo "Console static assets are embedded via rust-embed, no external download needed"

      - name: Build with profiling optimizations
        run: |
.github/workflows/release-notes-template.md (78 changes, new file)

@@ -0,0 +1,78 @@
## RustFS ${VERSION_CLEAN}

${ORIGINAL_NOTES}

---

### 🚀 Quick Download

**Linux (Static Binaries - No Dependencies):**

```bash
# x86_64 (Intel/AMD)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-unknown-linux-musl.zip
unzip rustfs-x86_64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/

# ARM64 (Graviton, Apple Silicon VMs)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-unknown-linux-musl.zip
unzip rustfs-aarch64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
```

**macOS:**

```bash
# Apple Silicon (M1/M2/M3)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-apple-darwin.zip
unzip rustfs-aarch64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/

# Intel
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-apple-darwin.zip
unzip rustfs-x86_64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
```

### 📁 Available Downloads

| Platform | Architecture | File | Description |
|----------|-------------|------|-------------|
| Linux | x86_64 | `rustfs-x86_64-unknown-linux-musl.zip` | Static binary, no dependencies |
| Linux | ARM64 | `rustfs-aarch64-unknown-linux-musl.zip` | Static binary, no dependencies |
| macOS | Apple Silicon | `rustfs-aarch64-apple-darwin.zip` | Native binary, ZIP archive |
| macOS | Intel | `rustfs-x86_64-apple-darwin.zip` | Native binary, ZIP archive |

### 🔐 Verification

Download checksums and verify your download:

```bash
# Download checksums
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/SHA256SUMS

# Verify (Linux)
sha256sum -c SHA256SUMS --ignore-missing

# Verify (macOS)
shasum -a 256 -c SHA256SUMS --ignore-missing
```

### 🛠️ System Requirements

- **Linux**: Any distribution with glibc 2.17+ (CentOS 7+, Ubuntu 16.04+)
- **macOS**: 10.15+ (Catalina or later)
- **Windows**: Windows 10 version 1809 or later

### 📚 Documentation

- [Installation Guide](https://github.com/rustfs/rustfs#installation)
- [Quick Start](https://github.com/rustfs/rustfs#quick-start)
- [Configuration](https://github.com/rustfs/rustfs/blob/main/docs/)
- [API Documentation](https://docs.rs/rustfs)

### 🆘 Support

- 🐛 [Report Issues](https://github.com/rustfs/rustfs/issues)
- 💬 [Community Discussions](https://github.com/rustfs/rustfs/discussions)
- 📖 [Documentation](https://github.com/rustfs/rustfs/tree/main/docs)
.github/workflows/release.yml (353 changes, new file)

@@ -0,0 +1,353 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: Release

on:
  push:
    tags: ["*.*.*"]
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag to create release for"
        required: true
        type: string

env:
  CARGO_TERM_COLOR: always

jobs:
  # Determine release type
  release-check:
    name: Release Type Check
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.check.outputs.tag }}
      version: ${{ steps.check.outputs.version }}
      is_prerelease: ${{ steps.check.outputs.is_prerelease }}
      release_type: ${{ steps.check.outputs.release_type }}
    steps:
      - name: Determine release type
        id: check
        run: |
          if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
            TAG="${{ github.event.inputs.tag }}"
          else
            TAG="${GITHUB_REF#refs/tags/}"
          fi

          VERSION="${TAG}"

          # Check if this is a prerelease
          IS_PRERELEASE=false
          RELEASE_TYPE="release"

          if [[ "$TAG" == *"alpha"* ]] || [[ "$TAG" == *"beta"* ]] || [[ "$TAG" == *"rc"* ]]; then
            IS_PRERELEASE=true
            if [[ "$TAG" == *"alpha"* ]]; then
              RELEASE_TYPE="alpha"
            elif [[ "$TAG" == *"beta"* ]]; then
              RELEASE_TYPE="beta"
            elif [[ "$TAG" == *"rc"* ]]; then
              RELEASE_TYPE="rc"
            fi
          fi

          echo "tag=$TAG" >> $GITHUB_OUTPUT
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "is_prerelease=$IS_PRERELEASE" >> $GITHUB_OUTPUT
          echo "release_type=$RELEASE_TYPE" >> $GITHUB_OUTPUT

          echo "📦 Release Type: $RELEASE_TYPE"
          echo "🏷️ Tag: $TAG"
          echo "🔢 Version: $VERSION"
          echo "🚀 Is Prerelease: $IS_PRERELEASE"
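The alpha/beta/rc classification in the `run` block above can be exercised locally as a standalone sketch. This mirrors the same substring checks and precedence (alpha, then beta, then rc); the function name is illustrative, not part of the workflow:

```shell
# Classify a tag the same way release-check does: any tag containing
# "alpha", "beta", or "rc" is a prerelease channel; otherwise "release".
release_type() {
  case "$1" in
    *alpha*) echo "alpha" ;;
    *beta*)  echo "beta" ;;
    *rc*)    echo "rc" ;;
    *)       echo "release" ;;
  esac
}
```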
|
||||
# Create GitHub Release
|
||||
create-release:
|
||||
name: Create GitHub Release
|
||||
needs: release-check
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: write
|
||||
outputs:
|
||||
release_id: ${{ steps.create.outputs.release_id }}
|
||||
release_url: ${{ steps.create.outputs.release_url }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Create GitHub Release
|
||||
id: create
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
TAG="${{ needs.release-check.outputs.tag }}"
|
||||
VERSION="${{ needs.release-check.outputs.version }}"
|
||||
IS_PRERELEASE="${{ needs.release-check.outputs.is_prerelease }}"
|
||||
RELEASE_TYPE="${{ needs.release-check.outputs.release_type }}"
|
||||
|
||||
# Check if release already exists
|
||||
if gh release view "$TAG" >/dev/null 2>&1; then
|
||||
echo "Release $TAG already exists"
|
||||
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
|
||||
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
|
||||
else
|
||||
# Get release notes from tag message
|
||||
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
|
||||
if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
RELEASE_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
|
||||
else
|
||||
RELEASE_NOTES="Release ${VERSION}"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create release title
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
TITLE="RustFS $VERSION (${RELEASE_TYPE})"
|
||||
else
|
||||
TITLE="RustFS $VERSION"
|
||||
fi
|
||||
|
||||
# Create the release
|
||||
PRERELEASE_FLAG=""
|
||||
if [[ "$IS_PRERELEASE" == "true" ]]; then
|
||||
PRERELEASE_FLAG="--prerelease"
|
||||
fi
|
||||
|
||||
gh release create "$TAG" \
|
||||
--title "$TITLE" \
|
||||
--notes "$RELEASE_NOTES" \
|
||||
$PRERELEASE_FLAG \
|
||||
--draft
|
||||
|
||||
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
|
||||
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
|
||||
fi
|
||||
|
||||
echo "release_id=$RELEASE_ID" >> $GITHUB_OUTPUT
|
||||
echo "release_url=$RELEASE_URL" >> $GITHUB_OUTPUT
|
||||
echo "Created release: $RELEASE_URL"
|
||||
|
||||
# Wait for build artifacts from build.yml
|
||||
wait-for-artifacts:
|
||||
name: Wait for Build Artifacts
|
||||
needs: release-check
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Wait for build workflow
|
||||
uses: lewagon/wait-on-check-action@v1.3.1
|
||||
with:
|
||||
ref: ${{ needs.release-check.outputs.tag }}
|
||||
check-name: "Build RustFS"
|
||||
repo-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
wait-interval: 30
|
||||
allowed-conclusions: success

  # Download and prepare release assets
  prepare-assets:
    name: Prepare Release Assets
    needs: [release-check, wait-for-artifacts]
    runs-on: ubuntu-latest
    outputs:
      assets_prepared: ${{ steps.prepare.outputs.assets_prepared }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Download artifacts from build workflow
        uses: actions/download-artifact@v4
        with:
          path: ./artifacts
          pattern: rustfs-*
          merge-multiple: true

      - name: Prepare release assets
        id: prepare
        run: |
          VERSION="${{ needs.release-check.outputs.version }}"
          TAG="${{ needs.release-check.outputs.tag }}"

          mkdir -p ./release-assets

          # Copy and verify artifacts
          ASSETS_COUNT=0
          for file in ./artifacts/rustfs-*.zip; do
            if [[ -f "$file" ]]; then
              cp "$file" ./release-assets/
              ASSETS_COUNT=$((ASSETS_COUNT + 1))
            fi
          done

          if [[ $ASSETS_COUNT -eq 0 ]]; then
            echo "❌ No artifacts found!"
            exit 1
          fi

          cd ./release-assets

          # Generate checksums
          if ls *.zip >/dev/null 2>&1; then
            sha256sum *.zip > SHA256SUMS
            sha512sum *.zip > SHA512SUMS
          fi

          # TODO: Add GPG signing for signatures
          # For now, create placeholder signature files
          for file in *.zip; do
            echo "# Signature for $file" > "${file}.asc"
            echo "# GPG signature will be added in future versions" >> "${file}.asc"
          done

          echo "assets_prepared=true" >> $GITHUB_OUTPUT

          echo "📦 Prepared assets:"
          ls -la
          echo "🔢 Asset count: $ASSETS_COUNT"

      - name: Upload prepared assets
        uses: actions/upload-artifact@v4
        with:
          name: release-assets-${{ needs.release-check.outputs.tag }}
          path: ./release-assets/
          retention-days: 30
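The `SHA256SUMS`/`SHA512SUMS` files generated by the prepare step let downstream consumers verify downloads with `sha256sum -c`. A minimal sketch of both sides of that contract (the asset file name here is hypothetical; real assets follow the `rustfs-*.zip` pattern):

```shell
# Sketch: generate a SHA256SUMS manifest the way the workflow does,
# then verify it the way a release consumer would.
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'binary-bytes' > rustfs-linux-x86_64-v0.0.5.zip   # stand-in asset
sha256sum rustfs-*.zip > SHA256SUMS                      # generation (workflow side)
sha256sum -c SHA256SUMS                                  # verification (consumer side)
```

`sha256sum -c` exits non-zero if any listed file is missing or altered, so it can gate an install script.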

  # Upload assets to GitHub Release
  upload-assets:
    name: Upload Release Assets
    needs: [release-check, create-release, prepare-assets]
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Download prepared assets
        uses: actions/download-artifact@v4
        with:
          name: release-assets-${{ needs.release-check.outputs.tag }}
          path: ./release-assets

      - name: Upload to GitHub Release
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          TAG="${{ needs.release-check.outputs.tag }}"

          cd ./release-assets

          # Upload all files
          for file in *; do
            if [[ -f "$file" ]]; then
              echo "📤 Uploading $file..."
              gh release upload "$TAG" "$file" --clobber
            fi
          done

          echo "✅ All assets uploaded successfully"

  # Update latest.json for stable releases only
  update-latest:
    name: Update Latest Version
    needs: [release-check, upload-assets]
    if: needs.release-check.outputs.is_prerelease == 'false'
    runs-on: ubuntu-latest
    steps:
      - name: Update latest.json
        env:
          OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
          OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
        run: |
          if [[ -z "$OSS_ACCESS_KEY_ID" ]]; then
            echo "⚠️ OSS credentials not available, skipping latest.json update"
            exit 0
          fi

          VERSION="${{ needs.release-check.outputs.version }}"
          TAG="${{ needs.release-check.outputs.tag }}"

          # Install ossutil
          OSSUTIL_VERSION="2.1.1"
          OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-amd64.zip"
          OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-amd64"

          curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
          unzip "$OSSUTIL_ZIP"
          chmod +x "${OSSUTIL_DIR}/ossutil"

          # Create latest.json
          cat > latest.json << EOF
          {
            "version": "${VERSION}",
            "tag": "${TAG}",
            "release_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
            "release_type": "stable",
            "download_url": "https://github.com/${{ github.repository }}/releases/tag/${TAG}"
          }
          EOF

          # Upload to OSS
          ./${OSSUTIL_DIR}/ossutil cp latest.json oss://rustfs-version/latest.json --force

          echo "✅ Updated latest.json for stable release $VERSION"

  # Publish release (remove draft status)
  publish-release:
    name: Publish Release
    needs: [release-check, create-release, upload-assets]
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Update release notes and publish
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          TAG="${{ needs.release-check.outputs.tag }}"
          VERSION="${{ needs.release-check.outputs.version }}"
          IS_PRERELEASE="${{ needs.release-check.outputs.is_prerelease }}"
          RELEASE_TYPE="${{ needs.release-check.outputs.release_type }}"

          # Get original release notes from tag
          ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
          if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
            if [[ "$IS_PRERELEASE" == "true" ]]; then
              ORIGINAL_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
            else
              ORIGINAL_NOTES="Release ${VERSION}"
            fi
          fi

          # Use release notes template if available
          if [[ -f ".github/workflows/release-notes-template.md" ]]; then
            # Substitute variables in template
            sed -e "s/\${VERSION}/$TAG/g" \
                -e "s/\${VERSION_CLEAN}/$VERSION/g" \
                -e "s/\${ORIGINAL_NOTES}/$(echo "$ORIGINAL_NOTES" | sed 's/[[\.*^$()+?{|]/\\&/g')/g" \
                .github/workflows/release-notes-template.md > enhanced_notes.md

            # Update release notes
            gh release edit "$TAG" --notes-file enhanced_notes.md
          fi

          # Publish the release (remove draft status)
          gh release edit "$TAG" --draft=false

          echo "🎉 Released $TAG successfully!"
          echo "📄 Release URL: ${{ needs.create-release.outputs.release_url }}"
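The publish step fills a release-notes template with `sed`, escaping regex metacharacters in the dynamic text before splicing it in. A minimal standalone sketch of the same substitution pattern (the template contents here are made up; the real template lives at `.github/workflows/release-notes-template.md`):

```shell
# Sketch: substitute a ${VERSION} placeholder in a notes template with sed,
# mirroring the publish-release step. Template text is hypothetical.
set -e
TAG="1.0.0-alpha.1"
cat > template.md << 'EOF'
# RustFS ${VERSION}
Thanks for trying this build.
EOF
# \$ keeps sed from treating $ as end-of-line; braces are literal in BRE.
sed -e "s/\${VERSION}/$TAG/g" template.md > notes.md
cat notes.md
```

Note the replacement side is raw: if `$TAG` could contain `/` or `&`, it would need the same kind of escaping the workflow applies to `ORIGINAL_NOTES`.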

.gitignore (vendored, 1 line changed)
@@ -19,3 +19,4 @@ deploy/certs/*
profile.json
.docker/openobserve-otel/data
*.zst
+.secrets

Cargo.lock (generated, 689 lines changed)
Cargo.toml (82 lines changed)
@@ -36,6 +36,7 @@ members = [
    "crates/utils",   # Utility functions and helpers
    "crates/workers", # Worker thread pools and task scheduling
    "crates/zip",     # ZIP file handling and compression
+   "crates/ahm",
]
resolver = "2"

@@ -44,7 +45,11 @@ edition = "2024"
license = "Apache-2.0"
repository = "https://github.com/rustfs/rustfs"
rust-version = "1.85"
-version = "0.0.3"
+version = "0.0.5"
homepage = "https://rustfs.com"
description = "RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. "
keywords = ["RustFS", "Minio", "object-storage", "filesystem", "s3"]
categories = ["web-programming", "development-tools", "filesystem", "network-programming"]

[workspace.lints.rust]
unsafe_code = "deny"

@@ -52,33 +57,39 @@ unsafe_code = "deny"
[workspace.lints.clippy]
all = "warn"

+[patch.crates-io]
+rustfs-utils = { path = "crates/utils" }
+rustfs-filemeta = { path = "crates/filemeta" }
+rustfs-rio = { path = "crates/rio" }

[workspace.dependencies]
-rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.3" }
-rustfs-appauth = { path = "crates/appauth", version = "0.0.3" }
-rustfs-common = { path = "crates/common", version = "0.0.3" }
-rustfs-crypto = { path = "crates/crypto", version = "0.0.3" }
-rustfs-ecstore = { path = "crates/ecstore", version = "0.0.3" }
-rustfs-iam = { path = "crates/iam", version = "0.0.3" }
-rustfs-lock = { path = "crates/lock", version = "0.0.3" }
-rustfs-madmin = { path = "crates/madmin", version = "0.0.3" }
-rustfs-policy = { path = "crates/policy", version = "0.0.3" }
-rustfs-protos = { path = "crates/protos", version = "0.0.3" }
-rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.3" }
-rustfs = { path = "./rustfs", version = "0.0.3" }
-rustfs-zip = { path = "./crates/zip", version = "0.0.3" }
-rustfs-config = { path = "./crates/config", version = "0.0.3" }
-rustfs-obs = { path = "crates/obs", version = "0.0.3" }
-rustfs-notify = { path = "crates/notify", version = "0.0.3" }
-rustfs-utils = { path = "crates/utils", version = "0.0.3" }
-rustfs-rio = { path = "crates/rio", version = "0.0.3" }
-rustfs-filemeta = { path = "crates/filemeta", version = "0.0.3" }
-rustfs-signer = { path = "crates/signer", version = "0.0.3" }
-rustfs-workers = { path = "crates/workers", version = "0.0.3" }
+rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
+rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
+rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
+rustfs-common = { path = "crates/common", version = "0.0.5" }
+rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
+rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
+rustfs-iam = { path = "crates/iam", version = "0.0.5" }
+rustfs-lock = { path = "crates/lock", version = "0.0.5" }
+rustfs-madmin = { path = "crates/madmin", version = "0.0.5" }
+rustfs-policy = { path = "crates/policy", version = "0.0.5" }
+rustfs-protos = { path = "crates/protos", version = "0.0.5" }
+rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.5" }
+rustfs = { path = "./rustfs", version = "0.0.5" }
+rustfs-zip = { path = "./crates/zip", version = "0.0.5" }
+rustfs-config = { path = "./crates/config", version = "0.0.5" }
+rustfs-obs = { path = "crates/obs", version = "0.0.5" }
+rustfs-notify = { path = "crates/notify", version = "0.0.5" }
+rustfs-utils = { path = "crates/utils", version = "0.0.5" }
+rustfs-rio = { path = "crates/rio", version = "0.0.5" }
+rustfs-filemeta = { path = "crates/filemeta", version = "0.0.5" }
+rustfs-signer = { path = "crates/signer", version = "0.0.5" }
+rustfs-workers = { path = "crates/workers", version = "0.0.5" }
aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
-async-channel = "2.4.0"
+async-channel = "2.5.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }

@@ -96,7 +107,7 @@ byteorder = "1.5.0"
cfg-if = "1.0.1"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
-clap = { version = "4.5.40", features = ["derive", "env"] }
+clap = { version = "4.5.41", features = ["derive", "env"] }
const-str = { version = "0.6.2", features = ["std", "proc"] }
crc32fast = "1.4.2"
criterion = { version = "0.5", features = ["html_reports"] }

@@ -105,7 +116,7 @@ datafusion = "46.0.1"
derive_builder = "0.20.2"
dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
-enumset = "1.1.6"
+enumset = "1.1.7"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }

@@ -119,7 +130,7 @@ hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hmac = "0.12.1"
hyper = "1.6.0"
-hyper-util = { version = "0.1.14", features = [
+hyper-util = { version = "0.1.15", features = [
    "tokio",
    "server-auto",
    "server-graceful",

@@ -171,10 +182,9 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
prost-build = "0.13.5"
-quick-xml = "0.37.5"
+quick-xml = "0.38.0"
rand = "0.9.1"
-rdkafka = { version = "0.37.0", features = ["tokio"] }
+rdkafka = { version = "0.38.0", features = ["tokio"] }
reed-solomon-simd = { version = "3.0.1" }
regex = { version = "1.11.1" }
reqwest = { version = "0.12.22", default-features = false, features = [

@@ -197,10 +207,10 @@ rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
-rustls = { version = "0.23.28" }
+rustls = { version = "0.23.29" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
-s3s = { version = "0.12.0-minio-preview.1" }
+s3s = { version = "0.12.0-minio-preview.2" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }

@@ -212,10 +222,12 @@ siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
snafu = "0.8.6"
snap = "1.1.1"
-socket2 = "0.5.10"
+socket2 = "0.6.0"
strum = { version = "0.27.1", features = ["derive"] }
-sysinfo = "0.35.2"
+sysinfo = "0.36.0"
sysctl = "0.6.0"
tempfile = "3.20.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.12"
time = { version = "0.3.41", features = [

@@ -225,10 +237,11 @@ time = { version = "0.3.41", features = [
    "macros",
    "serde",
] }
-tokio = { version = "1.46.0", features = ["fs", "rt-multi-thread"] }
+tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
tokio-test = "0.4.4"
tokio-util = { version = "0.7.15", features = ["io", "compat"] }
tonic = { version = "0.13.1", features = ["gzip"] }
tonic-build = { version = "0.13.1" }

@@ -253,6 +266,7 @@ winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.4.2"
zstd = "0.13.3"
+anyhow = "1.0.98"

[profile.wasm-dev]
inherits = "dev"
Dockerfile (142 lines changed)
@@ -1,49 +1,129 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Multi-stage build for RustFS production image
FROM alpine:latest AS build
FROM alpine:3.18 AS builder

# Build arguments
ARG TARGETARCH
ARG RELEASE=latest
ARG CHANNEL=release

RUN apk add -U --no-cache \
# Install dependencies for downloading and verifying binaries
RUN apk add --no-cache \
    ca-certificates \
    curl \
    bash \
    unzip
    wget \
    unzip \
    jq

RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-release-x86_64-unknown-linux-musl.latest.zip && \
    unzip /tmp/rustfs.zip -d /tmp && \
    mv /tmp/rustfs-release-x86_64-unknown-linux-musl/bin/rustfs /rustfs && \
    chmod +x /rustfs && \
    rm -rf /tmp/*
# Create build directory
WORKDIR /build

FROM alpine:3.18
# Map TARGETARCH to architecture format used in builds
RUN case "${TARGETARCH}" in \
    "amd64") ARCH="x86_64" ;; \
    "arm64") ARCH="aarch64" ;; \
    *) echo "Unsupported architecture: ${TARGETARCH}" && exit 1 ;; \
    esac && \
    echo "ARCH=${ARCH}" > /build/arch.env

RUN apk add -U --no-cache \
# Download rustfs binary from dl.rustfs.com
RUN . /build/arch.env && \
    BASE_URL="https://dl.rustfs.com/artifacts/rustfs" && \
    PLATFORM="linux" && \
    if [ "${RELEASE}" = "latest" ]; then \
        # Download latest version from specified channel \
        if [ "${CHANNEL}" = "dev" ]; then \
            PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-dev-latest.zip"; \
            DOWNLOAD_URL="${BASE_URL}/dev/${PACKAGE_NAME}"; \
            echo "📥 Downloading latest dev build: ${PACKAGE_NAME}"; \
        else \
            PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-latest.zip"; \
            DOWNLOAD_URL="${BASE_URL}/release/${PACKAGE_NAME}"; \
            echo "📥 Downloading latest release build: ${PACKAGE_NAME}"; \
        fi; \
    else \
        # Download specific version (always from release channel) \
        PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${RELEASE}.zip"; \
        DOWNLOAD_URL="${BASE_URL}/release/${PACKAGE_NAME}"; \
        echo "📥 Downloading specific version: ${PACKAGE_NAME}"; \
    fi && \
    echo "🔗 Download URL: ${DOWNLOAD_URL}" && \
    curl -f -L "${DOWNLOAD_URL}" -o /build/rustfs.zip && \
    if [ ! -f /build/rustfs.zip ] || [ ! -s /build/rustfs.zip ]; then \
        echo "❌ Failed to download binary package"; \
        echo "💡 Make sure the package ${PACKAGE_NAME} exists"; \
        echo "🔗 Check: ${DOWNLOAD_URL}"; \
        exit 1; \
    fi && \
    unzip /build/rustfs.zip -d /build && \
    chmod +x /build/rustfs && \
    rm /build/rustfs.zip && \
    echo "✅ Successfully downloaded and extracted rustfs binary"

# Runtime stage
FROM alpine:latest

# Set build arguments and labels
ARG RELEASE=latest
ARG CHANNEL=release
ARG BUILD_DATE
ARG VCS_REF

LABEL name="RustFS" \
    vendor="RustFS Team" \
    maintainer="RustFS Team <dev@rustfs.com>" \
    version="${RELEASE}" \
    release="${RELEASE}" \
    channel="${CHANNEL}" \
    build-date="${BUILD_DATE}" \
    vcs-ref="${VCS_REF}" \
    summary="RustFS is a high-performance distributed object storage system written in Rust, compatible with S3 API." \
    description="RustFS is a high-performance distributed object storage software built using Rust. It supports erasure coding storage, multi-tenant management, observability, and other enterprise-level features." \
    url="https://rustfs.com" \
    license="Apache-2.0"

# Install runtime dependencies
RUN apk add --no-cache \
    ca-certificates \
    bash

COPY --from=builder /rustfs /usr/local/bin/rustfs
    curl \
    tzdata \
    bash \
    && addgroup -g 1000 rustfs \
    && adduser -u 1000 -G rustfs -s /bin/sh -D rustfs

# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
    RUSTFS_SECRET_KEY=rustfsadmin \
    RUSTFS_ADDRESS=":9000" \
    RUSTFS_CONSOLE_ADDRESS=":9001" \
    RUSTFS_CONSOLE_ENABLE=true \
    RUSTFS_VOLUMES=/data \
    RUST_LOG=warn

EXPOSE 9000 9001
# Set permissions for /usr/bin (similar to MinIO's approach)
RUN chmod -R 755 /usr/bin

RUN mkdir -p /data
VOLUME /data
# Copy CA certificates and binaries from build stage
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /build/rustfs /usr/bin/

CMD ["rustfs", "/data"]
# Set executable permissions
RUN chmod +x /usr/bin/rustfs

# Create data directory
RUN mkdir -p /data /config && chown -R rustfs:rustfs /data /config

# Switch to non-root user
USER rustfs

# Set working directory
WORKDIR /data

# Expose port
EXPOSE 9000

# Volume for data
VOLUME ["/data"]

# Set entrypoint
ENTRYPOINT ["/usr/bin/rustfs"]
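The builder stage's `TARGETARCH` mapping is a plain shell `case`; extracted as a standalone function it can be exercised (and unit-tested) outside Docker:

```shell
# Sketch: the TARGETARCH -> ARCH mapping used by the Dockerfile's
# builder stage, wrapped in a reusable shell function.
map_arch() {
    case "$1" in
        "amd64") echo "x86_64" ;;
        "arm64") echo "aarch64" ;;
        *) echo "Unsupported architecture: $1" >&2; return 1 ;;
    esac
}

map_arch amd64   # prints x86_64
map_arch arm64   # prints aarch64
```

Docker's buildx sets `TARGETARCH` per platform (`amd64`, `arm64`), while the download URLs use Rust target-triple naming (`x86_64`, `aarch64`), which is why the translation step is needed at all.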

@@ -1,21 +0,0 @@
FROM ubuntu:latest

# RUN apk add --no-cache <package-name>
# If rustfs has dependencies, add them here, e.g.:
# RUN apk add --no-cache openssl
# RUN apk add --no-cache bash  # Install Bash

WORKDIR /app

# Create directories matching RUSTFS_VOLUMES
RUN mkdir -p /root/data/target/volume/test1 /root/data/target/volume/test2 /root/data/target/volume/test3 /root/data/target/volume/test4

# COPY ./target/x86_64-unknown-linux-musl/release/rustfs /app/rustfs
COPY ./target/x86_64-unknown-linux-gnu/release/rustfs /app/rustfs

RUN chmod +x /app/rustfs

EXPOSE 9000
EXPOSE 9002

CMD ["/app/rustfs"]

@@ -4,7 +4,7 @@ ARG TARGETPLATFORM
ARG BUILDPLATFORM

# Build stage
-FROM --platform=$BUILDPLATFORM rust:1.85-bookworm AS builder
+FROM --platform=$BUILDPLATFORM rust:1.88-bookworm AS builder

# Install required build dependencies
RUN apt-get update && apt-get install -y \

@@ -18,6 +18,18 @@ RUN apt-get update && apt-get install -y \
    lld \
    && rm -rf /var/lib/apt/lists/*

+# Install sccache for Rust compilation caching
+RUN wget https://github.com/mozilla/sccache/releases/download/v0.10.0/sccache-dist-v0.10.0-x86_64-unknown-linux-musl.tar.gz \
+    && tar -xzf sccache-dist-v0.10.0-x86_64-unknown-linux-musl.tar.gz \
+    && mv sccache-dist-v0.10.0-x86_64-unknown-linux-musl/sccache-dist /usr/local/bin/sccache \
+    && chmod +x /usr/local/bin/sccache \
+    && rm -rf sccache-dist-v0.10.0-x86_64-unknown-linux-musl.tar.gz sccache-dist-v0.10.0-x86_64-unknown-linux-musl
+
+# Set up sccache environment
+ENV RUSTC_WRAPPER=sccache \
+    SCCACHE_DIR=/tmp/sccache \
+    SCCACHE_CACHE_SIZE=2G

# Install cross-compilation tools for ARM64
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
    apt-get update && \

@@ -50,6 +62,9 @@ ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++

WORKDIR /usr/src/rustfs

+# Copy cargo configuration for optimized builds
+COPY Cargo.toml ./.cargo/config.toml

# Copy Cargo files for dependency caching
COPY Cargo.toml Cargo.lock ./
COPY */Cargo.toml ./*/

@@ -59,10 +74,19 @@ RUN find . -name "Cargo.toml" -not -path "./Cargo.toml" | \
    xargs -I {} dirname {} | \
    xargs -I {} sh -c 'mkdir -p {}/src && echo "fn main() {}" > {}/src/main.rs'

-# Build dependencies only (cache layer)
-RUN case "$TARGETPLATFORM" in \
-    "linux/amd64") cargo build --release --target x86_64-unknown-linux-gnu ;; \
-    "linux/arm64") cargo build --release --target aarch64-unknown-linux-gnu ;; \
+# Configure cargo for optimized builds
+ENV CARGO_NET_GIT_FETCH_WITH_CLI=true \
+    CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
+    CARGO_INCREMENTAL=0 \
+    CARGO_PROFILE_RELEASE_DEBUG=false \
+    CARGO_PROFILE_RELEASE_SPLIT_DEBUGINFO=off \
+    CARGO_PROFILE_RELEASE_STRIP=symbols
+
+# Build dependencies only (cache layer) with optimizations
+RUN sccache --start-server 2>/dev/null || true && \
+    case "$TARGETPLATFORM" in \
+    "linux/amd64") cargo build --release --target x86_64-unknown-linux-gnu -j $(nproc) ;; \
+    "linux/arm64") cargo build --release --target aarch64-unknown-linux-gnu -j $(nproc) ;; \
    esac

# Copy source code

@@ -71,17 +95,19 @@ COPY . .
# Generate protobuf code
RUN cargo run --bin gproto

-# Build the actual application
-RUN case "$TARGETPLATFORM" in \
+# Build the actual application with optimizations
+RUN sccache --start-server 2>/dev/null || true && \
+    case "$TARGETPLATFORM" in \
    "linux/amd64") \
-        cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs && \
+        cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
        cp target/x86_64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
        ;; \
    "linux/arm64") \
-        cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs && \
+        cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
        cp target/aarch64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
        ;; \
-    esac
+    esac && \
+    sccache --show-stats || true

# Runtime stage - Ubuntu minimal for better compatibility
FROM ubuntu:22.04

@@ -111,11 +137,19 @@ RUN chmod +x /app/rustfs && chown rustfs:rustfs /app/rustfs
USER rustfs

# Expose ports
-EXPOSE 9000 9001
+EXPOSE 9000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:9000/health || exit 1

+# Environment variables
+ENV RUSTFS_ACCESS_KEY=rustfsadmin \
+    RUSTFS_SECRET_KEY=rustfsadmin \
+    RUSTFS_ADDRESS=":9000" \
+    RUSTFS_CONSOLE_ENABLE=true \
+    RUSTFS_VOLUMES=/data \
+    RUST_LOG=warn

# Volume for data
VOLUME ["/data"]

# Set default command
CMD ["/app/rustfs"]
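The dependency-caching layer above stubs every workspace crate with a dummy `main.rs` so `cargo build` can compile all dependencies before the real sources are copied, keeping that expensive layer cacheable. The same find/xargs pipeline can be tried on any directory tree (paths below are made up for the demo):

```shell
# Sketch: the dummy-source trick from the Docker dependency-cache layer.
# Creates src/main.rs stubs under every directory holding a Cargo.toml,
# except the workspace root, exactly as the Dockerfile's pipeline does.
set -e
root=$(mktemp -d)
mkdir -p "$root/crates/utils" "$root/crates/zip"
touch "$root/Cargo.toml" "$root/crates/utils/Cargo.toml" "$root/crates/zip/Cargo.toml"
cd "$root"
find . -name "Cargo.toml" -not -path "./Cargo.toml" | \
    xargs -I {} dirname {} | \
    xargs -I {} sh -c 'mkdir -p {}/src && echo "fn main() {}" > {}/src/main.rs'
ls crates/utils/src/main.rs crates/zip/src/main.rs
```

Because only `Cargo.toml`/`Cargo.lock` are copied before this step, the layer is invalidated only when dependency manifests change, not on every source edit.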
Makefile (173 lines changed)
@@ -5,7 +5,9 @@
DOCKER_CLI ?= docker
IMAGE_NAME ?= rustfs:v1.0.0
CONTAINER_NAME ?= rustfs-dev
DOCKERFILE_PATH = $(shell pwd)/.docker
# Docker build configurations
DOCKERFILE_PRODUCTION = Dockerfile
DOCKERFILE_SOURCE = Dockerfile.source

# Code quality and formatting targets
.PHONY: fmt

@@ -31,7 +33,8 @@ check:
.PHONY: test
test:
	@echo "🧪 Running tests..."
-	cargo test --all --exclude e2e_test
+	cargo nextest run --all --exclude e2e_test
+	cargo test --all --doc

.PHONY: pre-commit
pre-commit: fmt clippy check test

@@ -45,7 +48,7 @@ setup-hooks:
.PHONY: init-devenv
init-devenv:
-	$(DOCKER_CLI) build -t $(IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.devenv .
+	$(DOCKER_CLI) build -t $(IMAGE_NAME) -f Dockerfile.source .
	$(DOCKER_CLI) stop $(CONTAINER_NAME)
	$(DOCKER_CLI) rm $(CONTAINER_NAME)
	$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) -p 9010:9010 -p 9000:9000 -v $(shell pwd):/root/s3-rustfs -it $(IMAGE_NAME)

@@ -66,86 +69,89 @@ e2e-server:
probe-e2e:
	sh $(shell pwd)/scripts/probe.sh

# make BUILD_OS=ubuntu22.04 build
# in target/ubuntu22.04/release/rustfs
# make BUILD_OS=rockylinux9.3 build
# in target/rockylinux9.3/release/rustfs
BUILD_OS ?= rockylinux9.3
# Native build using build-rustfs.sh script
.PHONY: build
build: ROCKYLINUX_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build: ROCKYLINUX_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build:
	$(DOCKER_CLI) build -t $(ROCKYLINUX_BUILD_IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.$(BUILD_OS) .
	$(DOCKER_CLI) run --rm --name $(ROCKYLINUX_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(ROCKYLINUX_BUILD_IMAGE_NAME) $(BUILD_CMD)
	@echo "🔨 Building RustFS using build-rustfs.sh script..."
	./build-rustfs.sh

.PHONY: build-dev
build-dev:
	@echo "🔨 Building RustFS in development mode..."
	./build-rustfs.sh --dev

# Docker-based build (alternative approach)
# Usage: make BUILD_OS=ubuntu22.04 build-docker
# Output: target/ubuntu22.04/release/rustfs
BUILD_OS ?= rockylinux9.3
.PHONY: build-docker
build-docker: SOURCE_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build-docker: SOURCE_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build-docker: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build-docker:
	@echo "🐳 Building RustFS using Docker ($(BUILD_OS))..."
	$(DOCKER_CLI) build -t $(SOURCE_BUILD_IMAGE_NAME) -f $(DOCKERFILE_SOURCE) .
	$(DOCKER_CLI) run --rm --name $(SOURCE_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(SOURCE_BUILD_IMAGE_NAME) $(BUILD_CMD)

.PHONY: build-musl
build-musl:
	@echo "🔨 Building rustfs for x86_64-unknown-linux-musl..."
	cargo build --target x86_64-unknown-linux-musl --bin rustfs -r
	@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-buildx' instead"
	./build-rustfs.sh --platform x86_64-unknown-linux-musl

.PHONY: build-gnu
build-gnu:
	@echo "🔨 Building rustfs for x86_64-unknown-linux-gnu..."
	cargo build --target x86_64-unknown-linux-gnu --bin rustfs -r
	@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-buildx' instead"
	./build-rustfs.sh --platform x86_64-unknown-linux-gnu

.PHONY: deploy-dev
deploy-dev: build-musl
	@echo "🚀 Deploying to dev server: $${IP}"
	./scripts/dev_deploy.sh $${IP}
# Multi-architecture Docker build targets
.PHONY: docker-build-multiarch
docker-build-multiarch:
	@echo "🏗️ Building multi-architecture Docker images..."
	./scripts/build-docker-multiarch.sh
# Multi-architecture Docker build targets (NEW: using docker-buildx.sh)
.PHONY: docker-buildx
docker-buildx:
	@echo "🏗️ Building multi-architecture Docker images with buildx..."
	./docker-buildx.sh

.PHONY: docker-build-multiarch-push
docker-build-multiarch-push:
	@echo "🚀 Building and pushing multi-architecture Docker images..."
	./scripts/build-docker-multiarch.sh --push
.PHONY: docker-buildx-push
docker-buildx-push:
	@echo "🚀 Building and pushing multi-architecture Docker images with buildx..."
	./docker-buildx.sh --push

.PHONY: docker-build-multiarch-version
docker-build-multiarch-version:
.PHONY: docker-buildx-version
docker-buildx-version:
	@if [ -z "$(VERSION)" ]; then \
		echo "❌ Error: please specify a version, e.g.: make docker-build-multiarch-version VERSION=v1.0.0"; \
		echo "❌ Error: please specify a version, e.g.: make docker-buildx-version VERSION=v1.0.0"; \
		exit 1; \
	fi
	@echo "🏗️ Building multi-architecture Docker images (version: $(VERSION))..."
	./scripts/build-docker-multiarch.sh --version $(VERSION)
	./docker-buildx.sh --release $(VERSION)

.PHONY: docker-push-multiarch-version
docker-push-multiarch-version:
.PHONY: docker-buildx-push-version
docker-buildx-push-version:
	@if [ -z "$(VERSION)" ]; then \
		echo "❌ Error: please specify a version, e.g.: make docker-push-multiarch-version VERSION=v1.0.0"; \
		echo "❌ Error: please specify a version, e.g.: make docker-buildx-push-version VERSION=v1.0.0"; \
		exit 1; \
	fi
	@echo "🚀 Building and pushing multi-architecture Docker images (version: $(VERSION))..."
	./scripts/build-docker-multiarch.sh --version $(VERSION) --push
	./docker-buildx.sh --release $(VERSION) --push

.PHONY: docker-build-ubuntu
docker-build-ubuntu:
	@echo "🏗️ Building multi-architecture Ubuntu Docker images..."
	./scripts/build-docker-multiarch.sh --type ubuntu

.PHONY: docker-build-rockylinux
docker-build-rockylinux:
	@echo "🏗️ Building multi-architecture RockyLinux Docker images..."
	./scripts/build-docker-multiarch.sh --type rockylinux

.PHONY: docker-build-devenv
docker-build-devenv:
	@echo "🏗️ Building multi-architecture development environment Docker images..."
	./scripts/build-docker-multiarch.sh --type devenv
.PHONY: docker-build-production
docker-build-production:
	@echo "🏗️ Building production Docker image..."
	$(DOCKER_CLI) build -f $(DOCKERFILE_PRODUCTION) -t rustfs:latest .

.PHONY: docker-build-all-types
docker-build-all-types:
	@echo "🏗️ Building all multi-architecture Docker image types..."
	./scripts/build-docker-multiarch.sh --type production
	./scripts/build-docker-multiarch.sh --type ubuntu
	./scripts/build-docker-multiarch.sh --type rockylinux
	./scripts/build-docker-multiarch.sh --type devenv
.PHONY: docker-build-source
docker-build-source:
	@echo "🏗️ Building source Docker image..."
	$(DOCKER_CLI) build -f $(DOCKERFILE_SOURCE) -t rustfs:source .

.PHONY: docker-inspect-multiarch
docker-inspect-multiarch:
@@ -159,41 +165,64 @@ docker-inspect-multiarch:
|
||||
.PHONY: build-cross-all
|
||||
build-cross-all:
|
||||
@echo "🔧 Building all target architectures..."
|
||||
@if ! command -v cross &> /dev/null; then \
|
||||
echo "📦 Installing cross..."; \
|
||||
cargo install cross; \
|
||||
fi
|
||||
@echo "💡 On macOS/Windows, use 'make docker-buildx' for reliable multi-arch builds"
|
||||
@echo "🔨 Generating protobuf code..."
|
||||
cargo run --bin gproto || true
|
||||
@echo "🔨 Building x86_64-unknown-linux-musl..."
|
||||
cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
|
||||
./build-rustfs.sh --platform x86_64-unknown-linux-musl
|
||||
@echo "🔨 Building aarch64-unknown-linux-gnu..."
|
||||
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
|
||||
./build-rustfs.sh --platform aarch64-unknown-linux-gnu
|
||||
@echo "✅ All architectures built successfully!"
|
||||
|
||||
.PHONY: help-build
|
||||
help-build:
|
||||
@echo "🔨 RustFS 构建帮助:"
|
||||
@echo ""
|
||||
@echo "🚀 本地构建 (推荐使用):"
|
||||
@echo " make build # 构建 RustFS 二进制文件 (默认包含 console)"
|
||||
@echo " make build-dev # 开发模式构建"
|
||||
@echo " make build-musl # 构建 musl 版本"
|
||||
@echo " make build-gnu # 构建 GNU 版本"
|
||||
@echo ""
|
||||
@echo "🐳 Docker 构建:"
|
||||
@echo " make build-docker # 使用 Docker 容器构建"
|
||||
@echo " make build-docker BUILD_OS=ubuntu22.04 # 指定构建系统"
|
||||
@echo ""
|
||||
@echo "🏗️ 跨架构构建:"
|
||||
@echo " make build-cross-all # 构建所有架构的二进制文件"
|
||||
@echo ""
|
||||
@echo "🔧 直接使用 build-rustfs.sh 脚本:"
|
||||
@echo " ./build-rustfs.sh --help # 查看脚本帮助"
|
||||
@echo " ./build-rustfs.sh --no-console # 构建时跳过 console 资源"
|
||||
@echo " ./build-rustfs.sh --force-console-update # 强制更新 console 资源"
|
||||
@echo " ./build-rustfs.sh --dev # 开发模式构建"
|
||||
@echo " ./build-rustfs.sh --sign # 签名二进制文件"
|
||||
@echo " ./build-rustfs.sh --platform x86_64-unknown-linux-musl # 指定目标平台"
|
||||
@echo " ./build-rustfs.sh --skip-verification # 跳过二进制验证"
|
||||
@echo ""
|
||||
@echo "💡 build-rustfs.sh 脚本提供了更多选项、智能检测和二进制验证功能"
|
||||
|
||||
.PHONY: help-docker
|
||||
help-docker:
|
||||
@echo "🐳 Docker 多架构构建帮助:"
|
||||
@echo ""
|
||||
@echo "基本构建:"
|
||||
@echo " make docker-build-multiarch # 构建多架构镜像(不推送)"
|
||||
@echo " make docker-build-multiarch-push # 构建并推送多架构镜像"
|
||||
@echo "🚀 推荐使用 (新的 docker-buildx 方式):"
|
||||
@echo " make docker-buildx # 构建多架构镜像(不推送)"
|
||||
@echo " make docker-buildx-push # 构建并推送多架构镜像"
|
||||
@echo " make docker-buildx-version VERSION=v1.0.0 # 构建指定版本"
|
||||
@echo " make docker-buildx-push-version VERSION=v1.0.0 # 构建并推送指定版本"
|
||||
@echo ""
|
||||
@echo "版本构建:"
|
||||
@echo " make docker-build-multiarch-version VERSION=v1.0.0 # 构建指定版本"
|
||||
@echo " make docker-push-multiarch-version VERSION=v1.0.0 # 构建并推送指定版本"
|
||||
@echo "🏗️ 单架构构建:"
|
||||
@echo " make docker-build-production # 构建生产环境镜像"
|
||||
@echo " make docker-build-source # 构建源码构建镜像"
|
||||
@echo ""
|
||||
@echo "镜像类型:"
|
||||
@echo " make docker-build-ubuntu # 构建 Ubuntu 镜像"
|
||||
@echo " make docker-build-rockylinux # 构建 RockyLinux 镜像"
|
||||
@echo " make docker-build-devenv # 构建开发环境镜像"
|
||||
@echo " make docker-build-all-types # 构建所有类型镜像"
|
||||
@echo ""
|
||||
@echo "辅助工具:"
|
||||
@echo "🔧 辅助工具:"
|
||||
@echo " make build-cross-all # 构建所有架构的二进制文件"
|
||||
@echo " make docker-inspect-multiarch IMAGE=xxx # 检查镜像的架构支持"
|
||||
@echo ""
|
||||
@echo "环境变量 (在推送时需要设置):"
|
||||
@echo "📋 环境变量 (在推送时需要设置):"
|
||||
@echo " DOCKERHUB_USERNAME Docker Hub 用户名"
|
||||
@echo " DOCKERHUB_TOKEN Docker Hub 访问令牌"
|
||||
@echo " GITHUB_TOKEN GitHub 访问令牌"
|
||||
@echo ""
|
||||
@echo "💡 更多详情请参考项目根目录的 docker-buildx.sh 脚本"
|
||||
|
||||
64 README.md
@@ -7,6 +7,7 @@
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="Featured|HelloGitHub" /></a>
</p>

<p align="center">
@@ -17,11 +18,21 @@
</p>

<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a>
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>

RustFS is a high-performance distributed object storage system built with Rust, one of the world's most popular programming languages. Like MinIO, it offers simplicity, S3 compatibility, an open-source codebase, and support for data lakes, AI, and big data. Compared with other storage systems, it also carries a friendlier, more user-friendly open-source license, being built under the Apache license. With Rust as its foundation, RustFS delivers faster speed and safer distributed features for high-performance object storage.

> ⚠️ **RustFS is under rapid development. Do NOT use it in production environments!**

## Features

- **High Performance**: Built with Rust, ensuring speed and efficiency.
@@ -61,7 +72,7 @@ Stress test server parameters

To get started with RustFS, follow these steps:

1. **One-click installation script (Option 1)**
1. **One-click installation script (Option 1)**

   ```bash
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
   ```

@@ -70,13 +81,52 @@ To get started with RustFS, follow these steps:
2. **Docker Quick Start (Option 2)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   # Latest stable release
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest

   # Development version (main branch)
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:main-latest

   # Specific version
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:v1.0.0
   ```

3. **Build from Source (Option 3) - Advanced Users**

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
For developers who want to build RustFS Docker images from source with multi-architecture support:

```bash
# Build multi-architecture images locally
./docker-buildx.sh --build-arg RELEASE=latest

# Build and push to registry
./docker-buildx.sh --push

# Build specific version
./docker-buildx.sh --release v1.0.0 --push

# Build for custom registry
./docker-buildx.sh --registry your-registry.com --namespace yourname --push
```

The `docker-buildx.sh` script supports:
- **Multi-architecture builds**: `linux/amd64`, `linux/arm64`
- **Automatic version detection**: Uses git tags or commit hashes
- **Registry flexibility**: Supports Docker Hub, GitHub Container Registry, etc.
- **Build optimization**: Includes caching and parallel builds

You can also use Make targets for convenience:

```bash
make docker-buildx                         # Build locally
make docker-buildx-push                    # Build and push
make docker-buildx-version VERSION=v1.0.0  # Build specific version
make help-docker                           # Show all Docker-related commands
```

4. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are `rustfsadmin`.
5. **Create a Bucket**: Use the console to create a new bucket for your objects.
6. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.

## Documentation

@@ -109,7 +159,7 @@ If you have any questions or need assistance, you can:
RustFS is a community-driven project, and we appreciate all contributions. Check out the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page to see the amazing people who have helped make RustFS better.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>

## License
10 README_ZH.md
@@ -7,6 +7,7 @@
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="Featured|HelloGitHub" /></a>
</p>

<p align="center">
@@ -61,7 +62,7 @@ RustFS is a distributed object storage system built with Rust, one of the world's most popular programming languages

To get started with RustFS, follow these steps:

1. **One-click script quick start (Option 1)**
1. **One-click script quick start (Option 1)**

   ```bash
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
   ```

@@ -70,11 +71,10 @@ RustFS is a distributed object storage system built with Rust, one of the world's most popular programming languages
2. **Docker Quick Start (Option 2)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs
   ```

3. **Access the Console**: Open a web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
3. **Access the Console**: Open a web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.

@@ -109,7 +109,7 @@ RustFS is a distributed object storage system built with Rust, one of the world's most popular programming languages
RustFS is a community-driven project, and we appreciate every contribution. Check out the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page to see the remarkable people who have helped make RustFS better.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>

## License
564 build-rustfs.sh (executable file)
@@ -0,0 +1,564 @@
#!/bin/bash

# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Auto-detect current platform
detect_platform() {
    local arch=$(uname -m)
    local os=$(uname -s | tr '[:upper:]' '[:lower:]')

    case "$os" in
        "linux")
            case "$arch" in
                "x86_64") echo "x86_64-unknown-linux-musl" ;;
                "aarch64"|"arm64") echo "aarch64-unknown-linux-musl" ;;
                "armv7l") echo "armv7-unknown-linux-musleabihf" ;;
                *) echo "unknown-platform" ;;
            esac
            ;;
        "darwin")
            case "$arch" in
                "x86_64") echo "x86_64-apple-darwin" ;;
                "arm64"|"aarch64") echo "aarch64-apple-darwin" ;;
                *) echo "unknown-platform" ;;
            esac
            ;;
        *)
            echo "unknown-platform"
            ;;
    esac
}
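The nested `case` above can be exercised independently of the build host by parameterizing the `uname` values; a small hedged sketch (the `map_target` helper name is illustrative, not part of the script):

```shell
# Map an (os, arch) pair to a Rust target triple, mirroring detect_platform
map_target() {
    case "$1-$2" in
        linux-x86_64)                echo "x86_64-unknown-linux-musl" ;;
        linux-aarch64|linux-arm64)   echo "aarch64-unknown-linux-musl" ;;
        linux-armv7l)                echo "armv7-unknown-linux-musleabihf" ;;
        darwin-x86_64)               echo "x86_64-apple-darwin" ;;
        darwin-arm64|darwin-aarch64) echo "aarch64-apple-darwin" ;;
        *)                           echo "unknown-platform" ;;
    esac
}

map_target linux x86_64   # → x86_64-unknown-linux-musl
map_target darwin arm64   # → aarch64-apple-darwin
```

Note the musl default on Linux: it yields statically linkable binaries that run on any distribution, which is why the script prefers it over the gnu triple.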

# Cross-platform SHA256 checksum generation
generate_sha256() {
    local file="$1"
    local output_file="$2"
    local os=$(uname -s | tr '[:upper:]' '[:lower:]')

    case "$os" in
        "linux")
            if command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            elif command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found (sha256sum or shasum)"
                return 1
            fi
            ;;
        "darwin")
            if command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            elif command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found (shasum or sha256sum)"
                return 1
            fi
            ;;
        *)
            # Try common commands in order
            if command -v sha256sum &> /dev/null; then
                sha256sum "$file" > "$output_file"
            elif command -v shasum &> /dev/null; then
                shasum -a 256 "$file" > "$output_file"
            else
                print_message $RED "❌ No SHA256 command found"
                return 1
            fi
            ;;
    esac
}

# Default values
OUTPUT_DIR="target/release"
PLATFORM=$(detect_platform) # Auto-detect current platform
BINARY_NAME="rustfs"
BUILD_TYPE="release"
SIGN=false
WITH_CONSOLE=true
FORCE_CONSOLE_UPDATE=false
CONSOLE_VERSION="latest"
SKIP_VERIFICATION=false
CUSTOM_PLATFORM=""

# Print usage
usage() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Description:"
    echo "  Build RustFS binary for the current platform. Designed for CI/CD pipelines"
    echo "  where different runners build platform-specific binaries natively."
    echo "  Includes automatic verification to ensure the built binary is functional."
    echo ""
    echo "Options:"
    echo "  -o, --output-dir DIR       Output directory (default: target/release)"
    echo "  -b, --binary-name NAME     Binary name (default: rustfs)"
    echo "  -p, --platform TARGET      Target platform (default: auto-detect)"
    echo "  --dev                      Build in dev mode"
    echo "  --sign                     Sign binaries after build"
    echo "  --with-console             Download console static assets (default)"
    echo "  --no-console               Skip console static assets"
    echo "  --force-console-update     Force update console assets even if they exist"
    echo "  --console-version VERSION  Console version to download (default: latest)"
    echo "  --skip-verification        Skip binary verification after build"
    echo "  -h, --help                 Show this help message"
    echo ""
    echo "Examples:"
    echo "  $0                            # Build for current platform (includes console assets)"
    echo "  $0 --dev                      # Development build"
    echo "  $0 --sign                     # Build and sign binary (release CI)"
    echo "  $0 --no-console               # Build without console static assets"
    echo "  $0 --force-console-update     # Force update console assets"
    echo "  $0 --platform x86_64-unknown-linux-musl # Build for specific platform"
    echo "  $0 --skip-verification        # Skip binary verification (for cross-compilation)"
    echo ""
    echo "Detected platform: $(detect_platform)"
    echo "CI Usage: Run this script on each platform's runner to build native binaries"
}

# Print colored message
print_message() {
    local color=$1
    local message=$2
    echo -e "${color}${message}${NC}"
}

# Get version from git
get_version() {
    if git describe --abbrev=0 --tags >/dev/null 2>&1; then
        git describe --abbrev=0 --tags
    else
        git rev-parse --short HEAD
    fi
}

# Setup rust environment
setup_rust_environment() {
    print_message $BLUE "🔧 Setting up Rust environment..."

    # Install required target for current platform
    print_message $YELLOW "Installing target: $PLATFORM"
    rustup target add "$PLATFORM"

    # Set up environment variables for musl targets
    if [[ "$PLATFORM" == *"musl"* ]]; then
        print_message $YELLOW "Setting up environment for musl target..."
        export RUSTFLAGS="-C target-feature=-crt-static"

        # For cargo-zigbuild, set up additional environment variables
        if command -v cargo-zigbuild &> /dev/null; then
            print_message $YELLOW "Configuring cargo-zigbuild for musl target..."

            # Set environment variables for better musl support
            export CC_x86_64_unknown_linux_musl="zig cc -target x86_64-linux-musl"
            export CXX_x86_64_unknown_linux_musl="zig c++ -target x86_64-linux-musl"
            export AR_x86_64_unknown_linux_musl="zig ar"
            export CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target x86_64-linux-musl"

            export CC_aarch64_unknown_linux_musl="zig cc -target aarch64-linux-musl"
            export CXX_aarch64_unknown_linux_musl="zig c++ -target aarch64-linux-musl"
            export AR_aarch64_unknown_linux_musl="zig ar"
            export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target aarch64-linux-musl"

            # Set environment variables for zstd-sys to avoid target parsing issues
            export ZSTD_SYS_USE_PKG_CONFIG=1
            export PKG_CONFIG_ALLOW_CROSS=1
        fi
    fi

    # Install required tools
    if [ "$SIGN" = true ]; then
        if ! command -v minisign &> /dev/null; then
            print_message $YELLOW "Installing minisign for binary signing..."
            cargo install minisign
        fi
    fi
}

# Download console static assets
download_console_assets() {
    local static_dir="rustfs/static"
    local console_exists=false

    # Check if console assets already exist
    if [ -d "$static_dir" ] && [ -f "$static_dir/index.html" ]; then
        console_exists=true
        local static_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
        print_message $YELLOW "Console static assets already exist ($static_size)"
    fi

    # Determine if we need to download
    local should_download=false
    if [ "$WITH_CONSOLE" = true ]; then
        if [ "$console_exists" = false ]; then
            print_message $BLUE "🎨 Console assets not found, downloading..."
            should_download=true
        elif [ "$FORCE_CONSOLE_UPDATE" = true ]; then
            print_message $BLUE "🎨 Force updating console assets..."
            should_download=true
        else
            print_message $GREEN "✅ Console assets already available, skipping download"
        fi
    else
        if [ "$console_exists" = true ]; then
            print_message $GREEN "✅ Using existing console assets"
        else
            print_message $YELLOW "⚠️ Console assets not found. Use --with-console to download them."
        fi
    fi

    if [ "$should_download" = true ]; then
        print_message $BLUE "📥 Downloading console static assets..."

        # Create static directory
        mkdir -p "$static_dir"

        # Download from GitHub Releases (consistent with Docker build)
        local download_url
        if [ "$CONSOLE_VERSION" = "latest" ]; then
            print_message $YELLOW "Getting latest console release info..."
            # For now, use dl.rustfs.com as fallback until GitHub Releases includes console assets
            download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip"
        else
            download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-${CONSOLE_VERSION}.zip"
        fi

        print_message $YELLOW "Downloading from: $download_url"

        # Download with retries
        local temp_file="console-assets-temp.zip"
        local download_success=false

        for i in {1..3}; do
            if curl -L "$download_url" -o "$temp_file" --retry 3 --retry-delay 5 --max-time 300; then
                download_success=true
                break
            else
                print_message $YELLOW "Download attempt $i failed, retrying..."
                sleep 2
            fi
        done

        if [ "$download_success" = true ]; then
            # Verify the downloaded file
            if [ -f "$temp_file" ] && [ -s "$temp_file" ]; then
                print_message $BLUE "📦 Extracting console assets..."

                # Extract to static directory
                if unzip -o "$temp_file" -d "$static_dir"; then
                    rm "$temp_file"
                    local final_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
                    print_message $GREEN "✅ Console assets downloaded successfully ($final_size)"
                else
                    print_message $RED "❌ Failed to extract console assets"
                    rm -f "$temp_file"
                    return 1
                fi
            else
                print_message $RED "❌ Downloaded file is empty or invalid"
                rm -f "$temp_file"
                return 1
            fi
        else
            print_message $RED "❌ Failed to download console assets after 3 attempts"
            print_message $YELLOW "💡 Console assets are optional. Build will continue without them."
            rm -f "$temp_file"
        fi
    fi
}
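The three-attempt curl loop above is an instance of a generic retry pattern; a minimal, self-contained sketch (the `retry` helper is illustrative, not part of the script, and the sleep between attempts is dropped for brevity):

```shell
# Retry a command up to N times, mirroring the script's download loop
retry() {
    local attempts=$1; shift
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        echo "attempt $i failed, retrying..." >&2
    done
    return 1
}

retry 3 true && echo "succeeded"
retry 2 false || echo "gave up after 2 attempts"
```

Keeping the attempt count as the first argument makes the helper reusable for both the checksum download and any future network step.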

# Verify binary functionality
verify_binary() {
    local binary_path="$1"

    # Check if binary exists
    if [ ! -f "$binary_path" ]; then
        print_message $RED "❌ Binary file not found: $binary_path"
        return 1
    fi

    # Check if binary is executable
    if [ ! -x "$binary_path" ]; then
        print_message $RED "❌ Binary is not executable: $binary_path"
        return 1
    fi

    # Check basic functionality - try to run help command
    print_message $YELLOW "  Testing --help command..."
    if ! "$binary_path" --help >/dev/null 2>&1; then
        print_message $RED "❌ Binary failed to run --help command"
        return 1
    fi

    # Check version command
    print_message $YELLOW "  Testing --version command..."
    if ! "$binary_path" --version >/dev/null 2>&1; then
        print_message $YELLOW "⚠️ Binary does not support --version command (this is optional)"
    fi

    # Try to get some basic info about the binary
    local file_info=$(file "$binary_path" 2>/dev/null || echo "unknown")
    print_message $YELLOW "  Binary info: $file_info"

    # Check if it's a valid ELF/Mach-O binary
    if command -v readelf >/dev/null 2>&1; then
        if readelf -h "$binary_path" >/dev/null 2>&1; then
            print_message $YELLOW "  ELF binary structure: valid"
        fi
    elif command -v otool >/dev/null 2>&1; then
        if otool -h "$binary_path" >/dev/null 2>&1; then
            print_message $YELLOW "  Mach-O binary structure: valid"
        fi
    fi

    return 0
}
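The core of the smoke test above reduces to three checks that apply to any executable; a hedged sketch (the `check_exe` name is illustrative, and it assumes the program exits 0 on `--help`, as GNU tools do):

```shell
# Minimal stand-in for verify_binary: path exists, is executable, answers --help
check_exe() {
    [ -f "$1" ] && [ -x "$1" ] && "$1" --help >/dev/null 2>&1
}

check_exe "$(command -v true)" && echo "binary looks functional"
check_exe /nonexistent || echo "missing binary is rejected"
```

Running `--help` is a cheap way to catch linking problems (e.g. a musl binary accidentally linked against glibc) before the artifact is shipped.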

# Build binary for current platform
build_binary() {
    local version=$(get_version)
    local output_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"

    print_message $BLUE "🏗️ Building for platform: $PLATFORM"
    print_message $YELLOW "  Version: $version"
    print_message $YELLOW "  Output: $output_file"

    # Create output directory
    mkdir -p "${OUTPUT_DIR}/${PLATFORM}"

    # Simple build logic matching the working version (4fb4b353)
    # Force rebuild by touching build.rs
    touch rustfs/build.rs

    # Determine build command based on platform and cross-compilation needs
    local build_cmd=""
    local current_platform=$(detect_platform)

    print_message $BLUE "📦 Using working version build logic..."

    # Check if we need cross-compilation
    if [ "$PLATFORM" != "$current_platform" ]; then
        # Cross-compilation needed
        if [[ "$PLATFORM" == *"apple-darwin"* ]]; then
            print_message $RED "❌ macOS cross-compilation not supported"
            print_message $YELLOW "💡 macOS targets must be built natively on macOS runners"
            return 1
        elif [[ "$PLATFORM" == *"windows"* ]]; then
            # Use cross for Windows ARM64
            if ! command -v cross &> /dev/null; then
                print_message $YELLOW "📦 Installing cross tool..."
                cargo install cross --git https://github.com/cross-rs/cross
            fi
            build_cmd="cross build"
        else
            # Use zigbuild for Linux ARM64 (matches working version)
            if ! command -v cargo-zigbuild &> /dev/null; then
                print_message $RED "❌ cargo-zigbuild not found. Please install it first."
                return 1
            fi
            build_cmd="cargo zigbuild"
        fi
    else
        # Native compilation
        build_cmd="cargo build"
    fi

    if [ "$BUILD_TYPE" = "release" ]; then
        build_cmd+=" --release"
    fi

    build_cmd+=" --target $PLATFORM"
    build_cmd+=" -p rustfs --bins"

    print_message $BLUE "📦 Executing: $build_cmd"

    # Execute build (this matches exactly what the working version does)
    if eval $build_cmd; then
        print_message $GREEN "✅ Successfully built for $PLATFORM"

        # Copy binary to output directory
        cp "target/${PLATFORM}/${BUILD_TYPE}/${BINARY_NAME}" "$output_file"

        # Generate checksums
        print_message $BLUE "🔐 Generating checksums..."
        (cd "${OUTPUT_DIR}/${PLATFORM}" && generate_sha256 "${BINARY_NAME}" "${BINARY_NAME}.sha256sum")

        # Verify binary functionality (if not skipped)
        if [ "$SKIP_VERIFICATION" = false ]; then
            print_message $BLUE "🔍 Verifying binary functionality..."
            if verify_binary "$output_file"; then
                print_message $GREEN "✅ Binary verification passed"
            else
                print_message $RED "❌ Binary verification failed"
                return 1
            fi
        else
            print_message $YELLOW "⚠️ Binary verification skipped by user request"
        fi

        # Sign binary if requested
        if [ "$SIGN" = true ]; then
            print_message $BLUE "✍️ Signing binary..."
            (cd "${OUTPUT_DIR}/${PLATFORM}" && minisign -S -m "${BINARY_NAME}" -s ~/.minisign/minisign.key)
        fi

        print_message $GREEN "✅ Build completed successfully"
    else
        print_message $RED "❌ Failed to build for $PLATFORM"
        return 1
    fi
}
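The `.sha256sum` file written next to the binary can be verified later with the `-c` mode of the same tools the script probes for; a small self-contained sketch using a throwaway file in a temp directory:

```shell
# Checksum a sample artifact the way the script does, then verify it
cd "$(mktemp -d)"
printf 'hello rustfs' > artifact.bin

if command -v sha256sum >/dev/null 2>&1; then
    sha256sum artifact.bin > artifact.bin.sha256sum
    sha256sum -c artifact.bin.sha256sum      # prints: artifact.bin: OK
else
    # macOS fallback, mirroring generate_sha256
    shasum -a 256 artifact.bin > artifact.bin.sha256sum
    shasum -a 256 -c artifact.bin.sha256sum
fi
```

Because the checksum is generated from inside the output directory, the recorded filename is relative, so consumers can verify the download with a single `-c` invocation from wherever they unpacked it.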

# Main build function
build_rustfs() {
    local version=$(get_version)

    print_message $BLUE "🚀 Starting RustFS binary build process..."
    print_message $YELLOW "  Version: $version"
    print_message $YELLOW "  Platform: $PLATFORM"
    print_message $YELLOW "  Output Directory: $OUTPUT_DIR"
    print_message $YELLOW "  Build Type: $BUILD_TYPE"
    print_message $YELLOW "  Sign: $SIGN"
    print_message $YELLOW "  With Console: $WITH_CONSOLE"
    if [ "$WITH_CONSOLE" = true ]; then
        print_message $YELLOW "  Console Version: $CONSOLE_VERSION"
        print_message $YELLOW "  Force Console Update: $FORCE_CONSOLE_UPDATE"
    fi
    print_message $YELLOW "  Skip Verification: $SKIP_VERIFICATION"
    echo ""

    # Setup environment
    setup_rust_environment
    echo ""

    # Download console assets if requested
    download_console_assets
    echo ""

    # Build binary
    build_binary
    echo ""

    print_message $GREEN "🎉 Build process completed successfully!"

    # Show built binary
    local binary_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
    if [ -f "$binary_file" ]; then
        local size=$(ls -lh "$binary_file" | awk '{print $5}')
        print_message $BLUE "📋 Built binary: $binary_file ($size)"
    fi
}

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -o|--output-dir)
            OUTPUT_DIR="$2"
            shift 2
            ;;
        -b|--binary-name)
            BINARY_NAME="$2"
            shift 2
            ;;
        -p|--platform)
            CUSTOM_PLATFORM="$2"
            shift 2
            ;;
        --dev)
            BUILD_TYPE="debug"
            shift
            ;;
        --sign)
            SIGN=true
            shift
            ;;
        --with-console)
            WITH_CONSOLE=true
            shift
            ;;
        --no-console)
            WITH_CONSOLE=false
            shift
            ;;
        --force-console-update)
            FORCE_CONSOLE_UPDATE=true
            WITH_CONSOLE=true # Auto-enable download when forcing update
            shift
            ;;
        --console-version)
            CONSOLE_VERSION="$2"
            shift 2
            ;;
        --skip-verification)
            SKIP_VERIFICATION=true
            shift
            ;;
        -h|--help)
            usage
            exit 0
            ;;
        *)
            print_message $RED "❌ Unknown option: $1"
            usage
            exit 1
            ;;
    esac
done

# Main execution
main() {
    print_message $BLUE "🦀 RustFS Binary Build Script"
    echo ""

    # Check if we're in a Rust project
    if [ ! -f "Cargo.toml" ]; then
        print_message $RED "❌ No Cargo.toml found. Are you in a Rust project directory?"
        exit 1
    fi

    # Override platform if specified
    if [ -n "$CUSTOM_PLATFORM" ]; then
        PLATFORM="$CUSTOM_PLATFORM"
        print_message $YELLOW "🎯 Using specified platform: $PLATFORM"

        # Auto-enable skip verification for cross-compilation
        if [ "$PLATFORM" != "$(detect_platform)" ]; then
            SKIP_VERIFICATION=true
            print_message $YELLOW "⚠️ Cross-compilation detected, enabling --skip-verification"
        fi
    fi

    # Start build process
    build_rustfs
}

# Run main function
main
@@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

clear

# Get the current platform architecture
ARCH=$(uname -m)

# Set the target directory according to the architecture
if [ "$ARCH" == "x86_64" ]; then
    TARGET_DIR="target/x86_64"
elif [ "$ARCH" == "aarch64" ]; then
    TARGET_DIR="target/arm64"
else
    TARGET_DIR="target/unknown"
fi

# Set CARGO_TARGET_DIR and build the project
CARGO_TARGET_DIR=$TARGET_DIR RUSTFLAGS="-C link-arg=-fuse-ld=mold" cargo build --release --package rustfs

echo -e "\a"
echo -e "\a"
echo -e "\a"
@@ -26,7 +26,6 @@ dioxus = { workspace = true, features = ["router"] }
dirs = { workspace = true }
hex = { workspace = true }
keyring = { workspace = true }
lazy_static = { workspace = true }
rfd = { workspace = true }
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
rust-i18n = { workspace = true }

@@ -37,7 +37,9 @@ copyright = "Copyright 2025 rustfs.com"

icon = [
    "assets/icons/icon.icns",
    "assets/icons/icon.ico"
    "assets/icons/icon.ico",
    "assets/icons/icon.png",
    "assets/icons/rustfs-icon.png",
]
#[bundle.macos]
#provider_short_name = "RustFs"
BIN  cli/rustfs-gui/assets/icon.png (new file, 23 KiB)
BIN  cli/rustfs-gui/assets/icons/icon.png (new file, 23 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_128x128.png (new file, 4.5 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_128x128@2x.png (new file, 9.9 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_16x16.png (new file, 498 B)
BIN  cli/rustfs-gui/assets/icons/icon_16x16@2x.png (new file, 969 B)
BIN  cli/rustfs-gui/assets/icons/icon_256x256.png (new file, 9.9 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_256x256@2x.png (new file, 23 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_32x32.png (new file, 969 B)
BIN  cli/rustfs-gui/assets/icons/icon_32x32@2x.png (new file, 2.0 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_512x512.png (new file, 23 KiB)
BIN  cli/rustfs-gui/assets/icons/icon_512x512@2x.png (new file, 47 KiB)
BIN  cli/rustfs-gui/assets/icons/rustfs-icon.png (new file, 23 KiB)
BIN  cli/rustfs-gui/assets/rustfs-icon.png (new file, 23 KiB)
@@ -1,20 +1,15 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z"
      fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z"
      fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z"
      fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z"
      fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z"
      fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 3.5 KiB | After: 3.4 KiB
@@ -14,12 +14,12 @@

use crate::utils::RustFSConfig;
use dioxus::logger::tracing::{debug, error, info};
use lazy_static::lazy_static;
use rust_embed::RustEmbed;
use sha2::{Digest, Sha256};
use std::error::Error;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::sync::LazyLock;
use std::time::Duration;
use tokio::fs;
use tokio::fs::File;
@@ -31,15 +31,13 @@ use tokio::sync::{Mutex, mpsc};
#[folder = "$CARGO_MANIFEST_DIR/embedded-rustfs/"]
struct Asset;

// Use `lazy_static` to cache the checksum of embedded resources
lazy_static! {
    static ref RUSTFS_HASH: Mutex<String> = {
        let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
        let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
        let hash = hex::encode(Sha256::digest(&rustfs_data.data));
        Mutex::new(hash)
    };
}
// Use `LazyLock` to cache the checksum of embedded resources
static RUSTFS_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| {
    let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
    let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
    let hash = hex::encode(Sha256::digest(&rustfs_data.data));
    Mutex::new(hash)
});

/// Service command
/// This enum represents the commands that can be sent to the service manager
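The hunk above replaces the `lazy_static!` macro with `std::sync::LazyLock`, which has been stable in the standard library since Rust 1.80 and needs no external dependency. A minimal std-only sketch of the same once-initialized pattern — `compute_checksum` is a hypothetical stand-in for hashing the embedded binary with `Sha256::digest`:

```rust
use std::sync::{LazyLock, Mutex};

// Hypothetical stand-in for the embedded-asset checksum computation.
fn compute_checksum() -> String {
    format!("{:08x}", "rustfs".bytes().map(|b| b as u32).sum::<u32>())
}

// The closure runs exactly once, on first access, like the lazy_static! block did.
static CACHED_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| Mutex::new(compute_checksum()));

fn main() {
    let first = CACHED_HASH.lock().unwrap().clone();
    let second = CACHED_HASH.lock().unwrap().clone();
    assert_eq!(first, second); // initialized once, then cached
    println!("cached checksum: {first}");
}
```

Beyond dropping a dependency, `LazyLock` statics are plain items, so the migration in the diff is mechanical: the macro body becomes the closure passed to `LazyLock::new`.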
crates/ahm/Cargo.toml (new file, 41 lines)
@@ -0,0 +1,41 @@
[package]
name = "rustfs-ahm"
version.workspace = true
edition.workspace = true
authors = ["RustFS Team"]
license.workspace = true
description = "RustFS AHM (Automatic Health Management) Scanner"
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation = "https://docs.rs/rustfs-ahm/latest/rustfs_ahm/"
keywords = ["RustFS", "AHM", "health-management", "scanner", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]

[dependencies]
rustfs-ecstore = { workspace = true }
rustfs-common = { workspace = true }
rustfs-filemeta = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-utils = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
bytes = { workspace = true }
time = { workspace = true, features = ["serde"] }
uuid = { workspace = true, features = ["v4", "serde"] }
anyhow = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
url = { workspace = true }
rustfs-lock = { workspace = true }

lazy_static = { workspace = true }

[dev-dependencies]
rmp-serde = { workspace = true }
tokio-test = { workspace = true }
serde_json = { workspace = true }
crates/ahm/src/error.rs (new file, 45 lines)
@@ -0,0 +1,45 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use thiserror::Error;

#[derive(Debug, Error)]
pub enum Error {
    #[error("I/O error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Storage error: {0}")]
    Storage(#[from] rustfs_ecstore::error::Error),

    #[error("Configuration error: {0}")]
    Config(String),

    #[error("Scanner error: {0}")]
    Scanner(String),

    #[error("Metrics error: {0}")]
    Metrics(String),

    #[error(transparent)]
    Other(#[from] anyhow::Error),
}

pub type Result<T, E = Error> = std::result::Result<T, E>;

// Implement conversion from ahm::Error to std::io::Error for use in main.rs
impl From<Error> for std::io::Error {
    fn from(err: Error) -> Self {
        std::io::Error::other(err)
    }
}
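The `From<Error> for std::io::Error` impl in the new file funnels the crate's error into an `io::Error` via `io::Error::other` (stable since Rust 1.74). A self-contained sketch of that bridging pattern, with a hypothetical `AhmError` standing in for the crate enum and the `Display`/`Error` impls written by hand that `thiserror` would otherwise derive:

```rust
use std::fmt;

// Hypothetical stand-in for the crate's Error enum.
#[derive(Debug)]
enum AhmError {
    Config(String),
    Scanner(String),
}

impl fmt::Display for AhmError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AhmError::Config(msg) => write!(f, "Configuration error: {msg}"),
            AhmError::Scanner(msg) => write!(f, "Scanner error: {msg}"),
        }
    }
}

impl std::error::Error for AhmError {}

// Same shape as the diff: any AhmError can now be returned where io::Error is expected.
impl From<AhmError> for std::io::Error {
    fn from(err: AhmError) -> Self {
        std::io::Error::other(err)
    }
}

fn main() {
    let io_err: std::io::Error = AhmError::Config("missing endpoint".into()).into();
    // io::Error::other wraps the source error with ErrorKind::Other.
    assert_eq!(io_err.kind(), std::io::ErrorKind::Other);
    println!("{io_err}");
}
```

The conversion lets `main.rs` return `std::io::Result<()>` while internal code keeps the richer crate error.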
crates/ahm/src/lib.rs (new file, 54 lines)
@@ -0,0 +1,54 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::sync::OnceLock;
use tokio_util::sync::CancellationToken;

pub mod error;
pub mod scanner;

pub use error::{Error, Result};
pub use scanner::{
    BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner, ScannerMetrics, load_data_usage_from_backend,
    store_data_usage_in_backend,
};

// Global cancellation token for AHM services (scanner and other background tasks)
static GLOBAL_AHM_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

/// Initialize the global AHM services cancellation token
pub fn init_ahm_services_cancel_token(cancel_token: CancellationToken) -> Result<()> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN
        .set(cancel_token)
        .map_err(|_| Error::Config("AHM services cancel token already initialized".to_string()))
}

/// Get the global AHM services cancellation token
pub fn get_ahm_services_cancel_token() -> Option<&'static CancellationToken> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get()
}

/// Create and initialize the global AHM services cancellation token
pub fn create_ahm_services_cancel_token() -> CancellationToken {
    let cancel_token = CancellationToken::new();
    init_ahm_services_cancel_token(cancel_token.clone()).expect("AHM services cancel token already initialized");
    cancel_token
}

/// Shutdown all AHM services gracefully
pub fn shutdown_ahm_services() {
    if let Some(cancel_token) = GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get() {
        cancel_token.cancel();
    }
}
crates/ahm/src/scanner/data_scanner.rs (new file, 1247 lines; diff not shown)
crates/ahm/src/scanner/data_usage.rs (new file, 671 lines)
@@ -0,0 +1,671 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{collections::HashMap, sync::Arc, time::SystemTime};

use rustfs_ecstore::{bucket::metadata_sys::get_replication_config, config::com::read_config, store::ECStore};
use rustfs_utils::path::SLASH_SEPARATOR;
use serde::{Deserialize, Serialize};
use tracing::{error, info, warn};

use crate::error::{Error, Result};

// Data usage storage constants
pub const DATA_USAGE_ROOT: &str = SLASH_SEPARATOR;
const DATA_USAGE_OBJ_NAME: &str = ".usage.json";
const DATA_USAGE_BLOOM_NAME: &str = ".bloomcycle.bin";
pub const DATA_USAGE_CACHE_NAME: &str = ".usage-cache.bin";

// Data usage storage paths
lazy_static::lazy_static! {
    pub static ref DATA_USAGE_BUCKET: String = format!("{}{}{}",
        rustfs_ecstore::disk::RUSTFS_META_BUCKET,
        SLASH_SEPARATOR,
        rustfs_ecstore::disk::BUCKET_META_PREFIX
    );
    pub static ref DATA_USAGE_OBJ_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_OBJ_NAME
    );
    pub static ref DATA_USAGE_BLOOM_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_BLOOM_NAME
    );
}

/// Bucket target usage info provides replication statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketTargetUsageInfo {
    pub replication_pending_size: u64,
    pub replication_failed_size: u64,
    pub replicated_size: u64,
    pub replica_size: u64,
    pub replication_pending_count: u64,
    pub replication_failed_count: u64,
    pub replicated_count: u64,
}

/// Bucket usage info provides bucket-level statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketUsageInfo {
    pub size: u64,
    // The following five fields suffixed with V1 are here for backward compatibility
    // Total size for objects that have not yet been replicated
    pub replication_pending_size_v1: u64,
    // Total size for objects that have witnessed one or more failures and will be retried
    pub replication_failed_size_v1: u64,
    // Total size for objects that have been replicated to the destination
    pub replicated_size_v1: u64,
    // Total number of objects pending replication
    pub replication_pending_count_v1: u64,
    // Total number of objects that failed replication
    pub replication_failed_count_v1: u64,

    pub objects_count: u64,
    pub object_size_histogram: HashMap<String, u64>,
    pub object_versions_histogram: HashMap<String, u64>,
    pub versions_count: u64,
    pub delete_markers_count: u64,
    pub replica_size: u64,
    pub replica_count: u64,
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,
}

/// DataUsageInfo represents data usage stats of the underlying storage
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct DataUsageInfo {
    /// Total capacity
    pub total_capacity: u64,
    /// Total used capacity
    pub total_used_capacity: u64,
    /// Total free capacity
    pub total_free_capacity: u64,

    /// LastUpdate is the timestamp of when the data usage info was last updated
    pub last_update: Option<SystemTime>,

    /// Objects total count across all buckets
    pub objects_total_count: u64,
    /// Versions total count across all buckets
    pub versions_total_count: u64,
    /// Delete markers total count across all buckets
    pub delete_markers_total_count: u64,
    /// Objects total size across all buckets
    pub objects_total_size: u64,
    /// Replication info across all buckets
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,

    /// Total number of buckets in this cluster
    pub buckets_count: u64,
    /// Buckets usage info provides the following information across all buckets
    pub buckets_usage: HashMap<String, BucketUsageInfo>,
    /// Deprecated, kept here for backward compatibility reasons
    pub bucket_sizes: HashMap<String, u64>,
}

/// Size summary for a single object or group of objects
#[derive(Debug, Default, Clone)]
pub struct SizeSummary {
    /// Total size
    pub total_size: usize,
    /// Number of versions
    pub versions: usize,
    /// Number of delete markers
    pub delete_markers: usize,
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Replica size
    pub replica_size: usize,
    /// Replica count
    pub replica_count: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
    /// Replication target stats
    pub repl_target_stats: HashMap<String, ReplTargetSizeSummary>,
}

/// Replication target size summary
#[derive(Debug, Default, Clone)]
pub struct ReplTargetSizeSummary {
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
}

impl DataUsageInfo {
    /// Create a new DataUsageInfo
    pub fn new() -> Self {
        Self::default()
    }

    /// Add object metadata to data usage statistics
    pub fn add_object(&mut self, object_path: &str, meta_object: &rustfs_filemeta::MetaObject) {
        // This method is kept for backward compatibility
        // For accurate version counting, use add_object_from_file_meta instead
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += meta_object.size as u64;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += 1; // Simplified: assume 1 version per object

            // Update size histogram
            let total_size = meta_object.size as u64;
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if total_size >= min_size && total_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram (simplified - count as single version)
            *bucket_usage
                .object_versions_histogram
                .entry("SINGLE_VERSION".to_string())
                .or_insert(0) += 1;
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: meta_object.size as u64,
                objects_count: 1,
                versions_count: 1,
                ..Default::default()
            };
            bucket_usage.object_size_histogram.insert("0-1KB".to_string(), 1);
            bucket_usage.object_versions_histogram.insert("SINGLE_VERSION".to_string(), 1);
            self.buckets_usage.insert(bucket_name, bucket_usage);
        }

        // Update global statistics
        self.objects_total_size += meta_object.size as u64;
        self.objects_total_count += 1;
        self.versions_total_count += 1;
    }

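`add_object` classifies each object into a half-open size range (`[min, max)`) before bumping the histogram. The classification can be extracted as a standalone helper to make the boundary behavior explicit — `size_bucket` is an illustrative name, not part of the crate:

```rust
// Same bucket boundaries as add_object's size_ranges table.
fn size_bucket(total_size: u64) -> &'static str {
    const KB: u64 = 1024;
    const MB: u64 = 1024 * KB;
    const GB: u64 = 1024 * MB;
    let ranges: [(&str, u64, u64); 6] = [
        ("0-1KB", 0, KB),
        ("1KB-1MB", KB, MB),
        ("1MB-10MB", MB, 10 * MB),
        ("10MB-100MB", 10 * MB, 100 * MB),
        ("100MB-1GB", 100 * MB, GB),
        ("1GB+", GB, u64::MAX),
    ];
    for (name, min, max) in ranges {
        if total_size >= min && total_size < max {
            return name;
        }
    }
    "1GB+" // only reachable for total_size == u64::MAX
}

fn main() {
    assert_eq!(size_bucket(512), "0-1KB");
    assert_eq!(size_bucket(1024), "1KB-1MB"); // boundaries are half-open: [min, max)
    assert_eq!(size_bucket(5 * 1024 * 1024), "1MB-10MB");
    assert_eq!(size_bucket(2 * 1024 * 1024 * 1024), "1GB+");
}
```

Note the subtle edge the original loop shares: an object of exactly `u64::MAX` bytes fails every `< max` test, so the table alone would not count it; the helper's fallback makes that visible.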
    /// Add object from FileMeta for accurate version counting
    pub fn add_object_from_file_meta(&mut self, object_path: &str, file_meta: &rustfs_filemeta::FileMeta) {
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Calculate accurate statistics from all versions
        let mut total_size = 0u64;
        let mut versions_count = 0u64;
        let mut delete_markers_count = 0u64;
        let mut latest_object_size = 0u64;

        // Process all versions to get accurate counts
        for version in &file_meta.versions {
            match rustfs_filemeta::FileMetaVersion::try_from(version.clone()) {
                Ok(ver) => {
                    if let Some(obj) = ver.object {
                        total_size += obj.size as u64;
                        versions_count += 1;
                        latest_object_size = obj.size as u64; // Keep track of latest object size
                    } else if ver.delete_marker.is_some() {
                        delete_markers_count += 1;
                    }
                }
                Err(_) => {
                    // Skip invalid versions
                    continue;
                }
            }
        }

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += total_size;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += versions_count;
            bucket_usage.delete_markers_count += delete_markers_count;

            // Update size histogram based on latest object size
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram based on actual version count
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    *bucket_usage
                        .object_versions_histogram
                        .entry(range_name.to_string())
                        .or_insert(0) += 1;
                    break;
                }
            }
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: total_size,
                objects_count: 1,
                versions_count,
                delete_markers_count,
                ..Default::default()
            };

            // Set size histogram
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    bucket_usage.object_size_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            // Set version histogram
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    bucket_usage.object_versions_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            self.buckets_usage.insert(bucket_name, bucket_usage);
            // Update buckets count when adding new bucket
            self.buckets_count = self.buckets_usage.len() as u64;
        }

        // Update global statistics
        self.objects_total_size += total_size;
        self.objects_total_count += 1;
        self.versions_total_count += versions_count;
        self.delete_markers_total_count += delete_markers_count;
    }

    /// Extract bucket name from object path
    fn extract_bucket_from_path(&self, object_path: &str) -> Result<String> {
        let parts: Vec<&str> = object_path.split('/').collect();
        if parts.is_empty() {
            return Err(Error::Scanner("Invalid object path: empty".to_string()));
        }
        Ok(parts[0].to_string())
    }

    /// Update capacity information
    pub fn update_capacity(&mut self, total: u64, used: u64, free: u64) {
        self.total_capacity = total;
        self.total_used_capacity = used;
        self.total_free_capacity = free;
        self.last_update = Some(SystemTime::now());
    }

    /// Add bucket usage info
    pub fn add_bucket_usage(&mut self, bucket: String, usage: BucketUsageInfo) {
        self.buckets_usage.insert(bucket.clone(), usage);
        self.buckets_count = self.buckets_usage.len() as u64;
        self.last_update = Some(SystemTime::now());
    }

    /// Get bucket usage info
    pub fn get_bucket_usage(&self, bucket: &str) -> Option<&BucketUsageInfo> {
        self.buckets_usage.get(bucket)
    }

    /// Calculate total statistics from all buckets
    pub fn calculate_totals(&mut self) {
        self.objects_total_count = 0;
        self.versions_total_count = 0;
        self.delete_markers_total_count = 0;
        self.objects_total_size = 0;

        for usage in self.buckets_usage.values() {
            self.objects_total_count += usage.objects_count;
            self.versions_total_count += usage.versions_count;
            self.delete_markers_total_count += usage.delete_markers_count;
            self.objects_total_size += usage.size;
        }
    }

    /// Merge another DataUsageInfo into this one
    pub fn merge(&mut self, other: &DataUsageInfo) {
        // Merge bucket usage
        for (bucket, usage) in &other.buckets_usage {
            if let Some(existing) = self.buckets_usage.get_mut(bucket) {
                existing.merge(usage);
            } else {
                self.buckets_usage.insert(bucket.clone(), usage.clone());
            }
        }

        // Recalculate totals
        self.calculate_totals();

        // Ensure buckets_count stays consistent with buckets_usage
        self.buckets_count = self.buckets_usage.len() as u64;

        // Update last update time
        if let Some(other_update) = other.last_update {
            if self.last_update.is_none() || other_update > self.last_update.unwrap() {
                self.last_update = Some(other_update);
            }
        }
    }
}

impl BucketUsageInfo {
|
||||
/// Create a new BucketUsageInfo
|
||||
pub fn new() -> Self {
|
||||
Self::default()
|
||||
}
|
||||
|
||||
/// Add size summary to this bucket usage
|
||||
pub fn add_size_summary(&mut self, summary: &SizeSummary) {
|
||||
self.size += summary.total_size as u64;
|
||||
self.versions_count += summary.versions as u64;
|
||||
self.delete_markers_count += summary.delete_markers as u64;
|
||||
self.replica_size += summary.replica_size as u64;
|
||||
self.replica_count += summary.replica_count as u64;
|
||||
}
|
||||
|
||||
/// Merge another BucketUsageInfo into this one
|
||||
pub fn merge(&mut self, other: &BucketUsageInfo) {
|
||||
self.size += other.size;
|
||||
self.objects_count += other.objects_count;
|
||||
self.versions_count += other.versions_count;
|
||||
self.delete_markers_count += other.delete_markers_count;
|
||||
self.replica_size += other.replica_size;
|
||||
self.replica_count += other.replica_count;
|
||||
|
||||
// Merge histograms
|
||||
for (key, value) in &other.object_size_histogram {
|
||||
*self.object_size_histogram.entry(key.clone()).or_insert(0) += value;
|
||||
}
|
||||
|
||||
for (key, value) in &other.object_versions_histogram {
|
||||
*self.object_versions_histogram.entry(key.clone()).or_insert(0) += value;
|
||||
}
|
||||
|
||||
// Merge replication info
|
||||
for (target, info) in &other.replication_info {
|
||||
let entry = self.replication_info.entry(target.clone()).or_default();
|
||||
entry.replicated_size += info.replicated_size;
|
||||
entry.replica_size += info.replica_size;
|
||||
entry.replication_pending_size += info.replication_pending_size;
|
||||
entry.replication_failed_size += info.replication_failed_size;
|
||||
entry.replication_pending_count += info.replication_pending_count;
|
||||
entry.replication_failed_count += info.replication_failed_count;
|
||||
entry.replicated_count += info.replicated_count;
|
||||
}
|
||||
|
||||
// Merge backward compatibility fields
|
||||
self.replication_pending_size_v1 += other.replication_pending_size_v1;
|
||||
self.replication_failed_size_v1 += other.replication_failed_size_v1;
|
||||
self.replicated_size_v1 += other.replicated_size_v1;
|
||||
self.replication_pending_count_v1 += other.replication_pending_count_v1;
|
||||
self.replication_failed_count_v1 += other.replication_failed_count_v1;
|
||||
}
|
||||
}
|
||||
|
||||
impl SizeSummary {
|
||||
/// Create a new SizeSummary
|
||||
pub fn new() -> Self {
|
||||
Self::default()
|
||||
}
|
||||
|
||||
/// Add another SizeSummary to this one
|
||||
pub fn add(&mut self, other: &SizeSummary) {
|
||||
self.total_size += other.total_size;
|
||||
self.versions += other.versions;
|
||||
self.delete_markers += other.delete_markers;
|
||||
self.replicated_size += other.replicated_size;
|
||||
self.replicated_count += other.replicated_count;
|
||||
self.pending_size += other.pending_size;
|
||||
self.failed_size += other.failed_size;
|
||||
self.replica_size += other.replica_size;
|
||||
self.replica_count += other.replica_count;
|
||||
self.pending_count += other.pending_count;
|
||||
self.failed_count += other.failed_count;
|
||||
|
||||
// Merge replication target stats
|
||||
for (target, stats) in &other.repl_target_stats {
|
||||
let entry = self.repl_target_stats.entry(target.clone()).or_default();
|
||||
entry.replicated_size += stats.replicated_size;
|
||||
entry.replicated_count += stats.replicated_count;
|
||||
entry.pending_size += stats.pending_size;
|
||||
entry.failed_size += stats.failed_size;
|
||||
entry.pending_count += stats.pending_count;
|
||||
entry.failed_count += stats.failed_count;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Store data usage info to backend storage
|
||||
pub async fn store_data_usage_in_backend(data_usage_info: DataUsageInfo, store: Arc<ECStore>) -> Result<()> {
|
||||
let data =
|
||||
serde_json::to_vec(&data_usage_info).map_err(|e| Error::Config(format!("Failed to serialize data usage info: {e}")))?;
|
||||
|
||||
// Save to backend using the same mechanism as original code
|
||||
rustfs_ecstore::config::com::save_config(store, &DATA_USAGE_OBJ_NAME_PATH, data)
|
||||
.await
|
||||
.map_err(Error::Storage)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Load data usage info from backend storage
|
||||
pub async fn load_data_usage_from_backend(store: Arc<ECStore>) -> Result<DataUsageInfo> {
|
||||
let buf = match read_config(store, &DATA_USAGE_OBJ_NAME_PATH).await {
|
||||
Ok(data) => data,
|
||||
Err(e) => {
|
||||
error!("Failed to read data usage info from backend: {}", e);
|
||||
if e == rustfs_ecstore::error::Error::ConfigNotFound {
|
||||
return Ok(DataUsageInfo::default());
|
||||
}
|
||||
return Err(Error::Storage(e));
|
||||
}
|
||||
};
|
||||
|
||||
let mut data_usage_info: DataUsageInfo =
|
||||
serde_json::from_slice(&buf).map_err(|e| Error::Config(format!("Failed to deserialize data usage info: {e}")))?;
|
||||
|
||||
warn!("Loaded data usage info from backend {:?}", &data_usage_info);
|
||||
|
||||
// Handle backward compatibility like original code
|
||||
if data_usage_info.buckets_usage.is_empty() {
|
||||
data_usage_info.buckets_usage = data_usage_info
|
||||
.bucket_sizes
|
||||
.iter()
|
||||
.map(|(bucket, &size)| {
|
||||
(
|
||||
bucket.clone(),
|
||||
BucketUsageInfo {
|
||||
size,
|
||||
..Default::default()
|
||||
},
|
||||
)
|
||||
})
|
||||
.collect();
|
||||
}
|
||||
|
||||
if data_usage_info.bucket_sizes.is_empty() {
|
||||
data_usage_info.bucket_sizes = data_usage_info
|
||||
.buckets_usage
|
||||
.iter()
|
||||
.map(|(bucket, bui)| (bucket.clone(), bui.size))
|
||||
.collect();
|
||||
}
|
||||
|
||||
for (bucket, bui) in &data_usage_info.buckets_usage {
|
||||
if bui.replicated_size_v1 > 0
|
||||
|| bui.replication_failed_count_v1 > 0
|
||||
|| bui.replication_failed_size_v1 > 0
|
||||
|| bui.replication_pending_count_v1 > 0
|
||||
{
|
||||
if let Ok((cfg, _)) = get_replication_config(bucket).await {
|
||||
if !cfg.role.is_empty() {
|
||||
data_usage_info.replication_info.insert(
|
||||
cfg.role.clone(),
|
||||
BucketTargetUsageInfo {
|
||||
replication_failed_size: bui.replication_failed_size_v1,
|
||||
replication_failed_count: bui.replication_failed_count_v1,
|
||||
replicated_size: bui.replicated_size_v1,
|
||||
replication_pending_count: bui.replication_pending_count_v1,
|
||||
replication_pending_size: bui.replication_pending_size_v1,
|
||||
..Default::default()
|
||||
},
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(data_usage_info)
|
||||
}
|
||||
|
||||
/// Example function showing how to use AHM data usage functionality
|
||||
/// This demonstrates the integration pattern for DataUsageInfoHandler
|
||||
pub async fn example_data_usage_integration() -> Result<()> {
|
||||
// Get the global storage instance
|
||||
let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
|
||||
return Err(Error::Config("Storage not initialized".to_string()));
|
||||
};
|
||||
|
||||
// Load data usage from backend (this replaces the original load_data_usage_from_backend)
|
||||
let data_usage = load_data_usage_from_backend(store).await?;
|
||||
|
||||
info!(
|
||||
"Loaded data usage info: {} buckets, {} total objects",
|
||||
data_usage.buckets_count, data_usage.objects_total_count
|
||||
);
|
||||
|
||||
// Example: Store updated data usage back to backend
|
||||
// This would typically be called by the scanner after collecting new statistics
|
||||
// store_data_usage_in_backend(data_usage, store).await?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_data_usage_info_creation() {
|
||||
let mut info = DataUsageInfo::new();
|
||||
info.update_capacity(1000, 500, 500);
|
||||
|
||||
assert_eq!(info.total_capacity, 1000);
|
||||
assert_eq!(info.total_used_capacity, 500);
|
||||
assert_eq!(info.total_free_capacity, 500);
|
||||
assert!(info.last_update.is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_bucket_usage_info_merge() {
|
||||
let mut usage1 = BucketUsageInfo::new();
|
||||
usage1.size = 100;
|
||||
usage1.objects_count = 10;
|
||||
usage1.versions_count = 5;
|
||||
|
||||
let mut usage2 = BucketUsageInfo::new();
|
||||
usage2.size = 200;
|
||||
usage2.objects_count = 20;
|
||||
usage2.versions_count = 10;
|
||||
|
||||
usage1.merge(&usage2);
|
||||
|
||||
assert_eq!(usage1.size, 300);
|
||||
assert_eq!(usage1.objects_count, 30);
|
||||
assert_eq!(usage1.versions_count, 15);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_size_summary_add() {
|
||||
let mut summary1 = SizeSummary::new();
|
||||
summary1.total_size = 100;
|
||||
summary1.versions = 5;
|
||||
|
||||
let mut summary2 = SizeSummary::new();
|
||||
summary2.total_size = 200;
|
||||
summary2.versions = 10;
|
||||
|
||||
summary1.add(&summary2);
|
||||
|
||||
assert_eq!(summary1.total_size, 300);
|
||||
assert_eq!(summary1.versions, 15);
|
||||
}
|
||||
}
|
||||
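The merge logic in `BucketUsageInfo::merge` and `SizeSummary::add` above repeatedly uses the `HashMap` entry API to fold per-target counters, creating missing targets on demand. A self-contained sketch of that accumulate-or-insert pattern (the struct and field names are illustrative, not the crate's types):

```rust
use std::collections::HashMap;

#[derive(Default, Debug)]
struct TargetStats {
    replicated_size: u64,
    failed_count: u64,
}

// Accumulate `other` into `into`, inserting a zeroed entry for any
// target not yet present — the same shape as the replication_info
// and repl_target_stats merges above.
fn merge_targets(into: &mut HashMap<String, TargetStats>, other: &HashMap<String, TargetStats>) {
    for (target, stats) in other {
        let entry = into.entry(target.clone()).or_default();
        entry.replicated_size += stats.replicated_size;
        entry.failed_count += stats.failed_count;
    }
}

fn main() {
    let mut a = HashMap::new();
    a.insert("site-b".to_string(), TargetStats { replicated_size: 100, failed_count: 1 });

    let mut b = HashMap::new();
    b.insert("site-b".to_string(), TargetStats { replicated_size: 50, failed_count: 0 });
    b.insert("site-c".to_string(), TargetStats { replicated_size: 10, failed_count: 2 });

    merge_targets(&mut a, &b);
    assert_eq!(a["site-b"].replicated_size, 150);
    assert_eq!(a["site-c"].failed_count, 2);
    println!("merged {} targets", a.len());
}
```

The `or_default()` call is what makes the merge total: a target seen only on the right-hand side still ends up in the result.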
crates/ahm/src/scanner/histogram.rs (new file, 277 lines)
@@ -0,0 +1,277 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::collections::HashMap;

/// Size interval for the object size histogram
#[derive(Debug, Clone)]
pub struct SizeInterval {
    pub start: u64,
    pub end: u64,
    pub name: &'static str,
}

/// Version interval for the object versions histogram
#[derive(Debug, Clone)]
pub struct VersionInterval {
    pub start: u64,
    pub end: u64,
    pub name: &'static str,
}

/// Object size histogram intervals
pub const OBJECTS_HISTOGRAM_INTERVALS: &[SizeInterval] = &[
    SizeInterval {
        start: 0,
        end: 1024 - 1,
        name: "LESS_THAN_1_KiB",
    },
    SizeInterval {
        start: 1024,
        end: 1024 * 1024 - 1,
        name: "1_KiB_TO_1_MiB",
    },
    SizeInterval {
        start: 1024 * 1024,
        end: 10 * 1024 * 1024 - 1,
        name: "1_MiB_TO_10_MiB",
    },
    SizeInterval {
        start: 10 * 1024 * 1024,
        end: 64 * 1024 * 1024 - 1,
        name: "10_MiB_TO_64_MiB",
    },
    SizeInterval {
        start: 64 * 1024 * 1024,
        end: 128 * 1024 * 1024 - 1,
        name: "64_MiB_TO_128_MiB",
    },
    SizeInterval {
        start: 128 * 1024 * 1024,
        end: 512 * 1024 * 1024 - 1,
        name: "128_MiB_TO_512_MiB",
    },
    SizeInterval {
        start: 512 * 1024 * 1024,
        end: u64::MAX,
        name: "MORE_THAN_512_MiB",
    },
];

/// Object version count histogram intervals
pub const OBJECTS_VERSION_COUNT_INTERVALS: &[VersionInterval] = &[
    VersionInterval {
        start: 1,
        end: 1,
        name: "1_VERSION",
    },
    VersionInterval {
        start: 2,
        end: 10,
        name: "2_TO_10_VERSIONS",
    },
    VersionInterval {
        start: 11,
        end: 100,
        name: "11_TO_100_VERSIONS",
    },
    VersionInterval {
        start: 101,
        end: 1000,
        name: "101_TO_1000_VERSIONS",
    },
    VersionInterval {
        start: 1001,
        end: u64::MAX,
        name: "MORE_THAN_1000_VERSIONS",
    },
];

/// Size histogram for object size distribution
#[derive(Debug, Clone, Default)]
pub struct SizeHistogram {
    counts: Vec<u64>,
}

/// Versions histogram for object version count distribution
#[derive(Debug, Clone, Default)]
pub struct VersionsHistogram {
    counts: Vec<u64>,
}

impl SizeHistogram {
    /// Create a new size histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_HISTOGRAM_INTERVALS.len()],
        }
    }

    /// Add a size to the histogram
    pub fn add(&mut self, size: u64) {
        for (idx, interval) in OBJECTS_HISTOGRAM_INTERVALS.iter().enumerate() {
            if size >= interval.start && size <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_HISTOGRAM_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &SizeHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

impl VersionsHistogram {
    /// Create a new versions histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_VERSION_COUNT_INTERVALS.len()],
        }
    }

    /// Add a version count to the histogram
    pub fn add(&mut self, versions: u64) {
        for (idx, interval) in OBJECTS_VERSION_COUNT_INTERVALS.iter().enumerate() {
            if versions >= interval.start && versions <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_VERSION_COUNT_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &VersionsHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_size_histogram() {
        let mut histogram = SizeHistogram::new();

        // Add some sizes
        histogram.add(512); // LESS_THAN_1_KiB
        histogram.add(1024); // 1_KiB_TO_1_MiB
        histogram.add(1024 * 1024); // 1_MiB_TO_10_MiB
        histogram.add(5 * 1024 * 1024); // 1_MiB_TO_10_MiB

        let map = histogram.to_map();

        assert_eq!(map.get("LESS_THAN_1_KiB"), Some(&1));
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&1));
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2));
        assert_eq!(map.get("10_MiB_TO_64_MiB"), Some(&0));
    }

    #[test]
    fn test_versions_histogram() {
        let mut histogram = VersionsHistogram::new();

        // Add some version counts
        histogram.add(1); // 1_VERSION
        histogram.add(5); // 2_TO_10_VERSIONS
        histogram.add(50); // 11_TO_100_VERSIONS
        histogram.add(500); // 101_TO_1000_VERSIONS

        let map = histogram.to_map();

        assert_eq!(map.get("1_VERSION"), Some(&1));
        assert_eq!(map.get("2_TO_10_VERSIONS"), Some(&1));
        assert_eq!(map.get("11_TO_100_VERSIONS"), Some(&1));
        assert_eq!(map.get("101_TO_1000_VERSIONS"), Some(&1));
    }

    #[test]
    fn test_histogram_merge() {
        let mut histogram1 = SizeHistogram::new();
        histogram1.add(1024);
        histogram1.add(1024 * 1024);

        let mut histogram2 = SizeHistogram::new();
        histogram2.add(1024);
        histogram2.add(5 * 1024 * 1024);

        histogram1.merge(&histogram2);

        let map = histogram1.to_map();
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
    }

    #[test]
    fn test_histogram_reset() {
        let mut histogram = SizeHistogram::new();
        histogram.add(1024);
        histogram.add(1024 * 1024);

        assert_eq!(histogram.total_count(), 2);

        histogram.reset();
        assert_eq!(histogram.total_count(), 0);
    }
}
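The interval scan in `SizeHistogram::add` is a first-matching-interval lookup over a contiguous table that covers all of `u64`. The same logic can be exercised in isolation; a minimal sketch (the constant mirrors the table above, trimmed to three buckets for brevity):

```rust
// A contiguous (start, end, name) table covering the full u64 range,
// modeled on OBJECTS_HISTOGRAM_INTERVALS but shortened for the sketch.
const INTERVALS: &[(u64, u64, &str)] = &[
    (0, 1024 - 1, "LESS_THAN_1_KiB"),
    (1024, 1024 * 1024 - 1, "1_KiB_TO_1_MiB"),
    (1024 * 1024, u64::MAX, "MORE_THAN_1_MiB"),
];

// Return the name of the first interval containing `size`. Because the
// intervals are contiguous and exhaustive, a match always exists.
fn size_bucket(size: u64) -> &'static str {
    INTERVALS
        .iter()
        .find(|(start, end, _)| size >= *start && size <= *end)
        .map(|(_, _, name)| *name)
        .expect("intervals cover the full u64 range")
}

fn main() {
    assert_eq!(size_bucket(512), "LESS_THAN_1_KiB");
    assert_eq!(size_bucket(1024), "1_KiB_TO_1_MiB");
    assert_eq!(size_bucket(5 * 1024 * 1024), "MORE_THAN_1_MiB");
    println!("ok");
}
```

Boundary values matter here: each interval's `end` is one less than the next interval's `start`, so no size can fall into two buckets.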
crates/ahm/src/scanner/metrics.rs (new file, 284 lines)
@@ -0,0 +1,284 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{
    collections::HashMap,
    sync::atomic::{AtomicU64, Ordering},
    time::{Duration, SystemTime},
};

use serde::{Deserialize, Serialize};
use tracing::info;

/// Scanner metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ScannerMetrics {
    /// Total objects scanned since server start
    pub objects_scanned: u64,
    /// Total object versions scanned since server start
    pub versions_scanned: u64,
    /// Total directories scanned since server start
    pub directories_scanned: u64,
    /// Total bucket scans started since server start
    pub bucket_scans_started: u64,
    /// Total bucket scans finished since server start
    pub bucket_scans_finished: u64,
    /// Total objects with health issues found
    pub objects_with_issues: u64,
    /// Total heal tasks queued
    pub heal_tasks_queued: u64,
    /// Total heal tasks completed
    pub heal_tasks_completed: u64,
    /// Total heal tasks failed
    pub heal_tasks_failed: u64,
    /// Last scan activity time
    pub last_activity: Option<SystemTime>,
    /// Current scan cycle
    pub current_cycle: u64,
    /// Total scan cycles completed
    pub total_cycles: u64,
    /// Current scan duration
    pub current_scan_duration: Option<Duration>,
    /// Average scan duration
    pub avg_scan_duration: Duration,
    /// Objects scanned per second
    pub objects_per_second: f64,
    /// Buckets scanned per second
    pub buckets_per_second: f64,
    /// Storage metrics by bucket
    pub bucket_metrics: HashMap<String, BucketMetrics>,
    /// Disk metrics
    pub disk_metrics: HashMap<String, DiskMetrics>,
}

/// Bucket-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct BucketMetrics {
    /// Bucket name
    pub bucket: String,
    /// Total objects in bucket
    pub total_objects: u64,
    /// Total size of objects in bucket (bytes)
    pub total_size: u64,
    /// Objects with health issues
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Scan duration
    pub scan_duration: Option<Duration>,
    /// Heal tasks queued for this bucket
    pub heal_tasks_queued: u64,
    /// Heal tasks completed for this bucket
    pub heal_tasks_completed: u64,
    /// Heal tasks failed for this bucket
    pub heal_tasks_failed: u64,
}

/// Disk-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct DiskMetrics {
    /// Disk path
    pub disk_path: String,
    /// Total disk space (bytes)
    pub total_space: u64,
    /// Used disk space (bytes)
    pub used_space: u64,
    /// Free disk space (bytes)
    pub free_space: u64,
    /// Objects scanned on this disk
    pub objects_scanned: u64,
    /// Objects with issues on this disk
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Whether disk is online
    pub is_online: bool,
    /// Whether disk is being scanned
    pub is_scanning: bool,
}

/// Thread-safe metrics collector
pub struct MetricsCollector {
    /// Atomic counters for real-time metrics
    objects_scanned: AtomicU64,
    versions_scanned: AtomicU64,
    directories_scanned: AtomicU64,
    bucket_scans_started: AtomicU64,
    bucket_scans_finished: AtomicU64,
    objects_with_issues: AtomicU64,
    heal_tasks_queued: AtomicU64,
    heal_tasks_completed: AtomicU64,
    heal_tasks_failed: AtomicU64,
    current_cycle: AtomicU64,
    total_cycles: AtomicU64,
}

impl MetricsCollector {
    /// Create a new metrics collector
    pub fn new() -> Self {
        Self {
            objects_scanned: AtomicU64::new(0),
            versions_scanned: AtomicU64::new(0),
            directories_scanned: AtomicU64::new(0),
            bucket_scans_started: AtomicU64::new(0),
            bucket_scans_finished: AtomicU64::new(0),
            objects_with_issues: AtomicU64::new(0),
            heal_tasks_queued: AtomicU64::new(0),
            heal_tasks_completed: AtomicU64::new(0),
            heal_tasks_failed: AtomicU64::new(0),
            current_cycle: AtomicU64::new(0),
            total_cycles: AtomicU64::new(0),
        }
    }

    /// Increment objects scanned count
    pub fn increment_objects_scanned(&self, count: u64) {
        self.objects_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment versions scanned count
    pub fn increment_versions_scanned(&self, count: u64) {
        self.versions_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment directories scanned count
    pub fn increment_directories_scanned(&self, count: u64) {
        self.directories_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans started count
    pub fn increment_bucket_scans_started(&self, count: u64) {
        self.bucket_scans_started.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans finished count
    pub fn increment_bucket_scans_finished(&self, count: u64) {
        self.bucket_scans_finished.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment objects with issues count
    pub fn increment_objects_with_issues(&self, count: u64) {
        self.objects_with_issues.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks queued count
    pub fn increment_heal_tasks_queued(&self, count: u64) {
        self.heal_tasks_queued.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks completed count
    pub fn increment_heal_tasks_completed(&self, count: u64) {
        self.heal_tasks_completed.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks failed count
    pub fn increment_heal_tasks_failed(&self, count: u64) {
        self.heal_tasks_failed.fetch_add(count, Ordering::Relaxed);
    }

    /// Set current cycle
    pub fn set_current_cycle(&self, cycle: u64) {
        self.current_cycle.store(cycle, Ordering::Relaxed);
    }

    /// Increment total cycles
    pub fn increment_total_cycles(&self) {
        self.total_cycles.fetch_add(1, Ordering::Relaxed);
    }

    /// Get current metrics snapshot
    pub fn get_metrics(&self) -> ScannerMetrics {
        ScannerMetrics {
            objects_scanned: self.objects_scanned.load(Ordering::Relaxed),
            versions_scanned: self.versions_scanned.load(Ordering::Relaxed),
            directories_scanned: self.directories_scanned.load(Ordering::Relaxed),
            bucket_scans_started: self.bucket_scans_started.load(Ordering::Relaxed),
            bucket_scans_finished: self.bucket_scans_finished.load(Ordering::Relaxed),
            objects_with_issues: self.objects_with_issues.load(Ordering::Relaxed),
            heal_tasks_queued: self.heal_tasks_queued.load(Ordering::Relaxed),
            heal_tasks_completed: self.heal_tasks_completed.load(Ordering::Relaxed),
            heal_tasks_failed: self.heal_tasks_failed.load(Ordering::Relaxed),
            last_activity: Some(SystemTime::now()),
            current_cycle: self.current_cycle.load(Ordering::Relaxed),
            total_cycles: self.total_cycles.load(Ordering::Relaxed),
            current_scan_duration: None,       // Will be set by scanner
            avg_scan_duration: Duration::ZERO, // Will be calculated
            objects_per_second: 0.0,           // Will be calculated
            buckets_per_second: 0.0,           // Will be calculated
            bucket_metrics: HashMap::new(),    // Will be populated by scanner
            disk_metrics: HashMap::new(),      // Will be populated by scanner
        }
    }

    /// Reset all metrics
    pub fn reset(&self) {
        self.objects_scanned.store(0, Ordering::Relaxed);
        self.versions_scanned.store(0, Ordering::Relaxed);
        self.directories_scanned.store(0, Ordering::Relaxed);
        self.bucket_scans_started.store(0, Ordering::Relaxed);
        self.bucket_scans_finished.store(0, Ordering::Relaxed);
        self.objects_with_issues.store(0, Ordering::Relaxed);
        self.heal_tasks_queued.store(0, Ordering::Relaxed);
        self.heal_tasks_completed.store(0, Ordering::Relaxed);
        self.heal_tasks_failed.store(0, Ordering::Relaxed);
        self.current_cycle.store(0, Ordering::Relaxed);
        self.total_cycles.store(0, Ordering::Relaxed);

        info!("Scanner metrics reset");
    }
}

impl Default for MetricsCollector {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_metrics_collector_creation() {
        let collector = MetricsCollector::new();
        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
        assert_eq!(metrics.versions_scanned, 0);
    }

    #[test]
    fn test_metrics_increment() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.increment_versions_scanned(5);
        collector.increment_objects_with_issues(2);

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 10);
        assert_eq!(metrics.versions_scanned, 5);
        assert_eq!(metrics.objects_with_issues, 2);
    }

    #[test]
    fn test_metrics_reset() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.reset();

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
    }
}
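The metrics collector uses `AtomicU64` with `Ordering::Relaxed` throughout, which is sufficient for independent statistics counters: each increment must be atomic, but no ordering between different counters is required. A self-contained sketch of the same pattern under concurrency (the helper name is illustrative):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

// Spawn `threads` workers that each perform `per_thread` relaxed
// increments on a shared counter, then return the final value.
// Relaxed ordering still guarantees that no increment is lost.
fn count_from_threads(threads: u64, per_thread: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(count_from_threads(4, 1000), 4000);
    println!("ok");
}
```

Stronger orderings (`Acquire`/`Release`) would only matter if readers needed to observe one counter's update as happening before another's, which a statistics snapshot does not require.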
@@ -11,3 +11,15 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod data_scanner;
pub mod data_usage;
pub mod histogram;
pub mod metrics;

// Re-export main types for convenience
pub use data_scanner::Scanner;
pub use data_usage::{
    BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, load_data_usage_from_backend, store_data_usage_in_backend,
};
pub use metrics::ScannerMetrics;
@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Application authentication and authorization for RustFS, providing secure access control and user management."
keywords = ["authentication", "authorization", "security", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]

[dependencies]
base64-simd = { workspace = true }

@@ -3,7 +3,7 @@
# RustFS AppAuth - Application Authentication

<p align="center">
  <strong>Secure application authentication and authorization for RustFS object storage</strong>
  <strong>Application-level authentication and authorization module for RustFS distributed object storage</strong>
</p>

<p align="center">
@@ -17,461 +17,21 @@
|
||||
|
||||
## 📖 Overview
|
||||
|
||||
**RustFS AppAuth** provides secure application authentication and authorization mechanisms for the [RustFS](https://rustfs.com) distributed object storage system. It implements modern cryptographic standards including RSA-based authentication, JWT tokens, and secure session management for application-level access control.
|
||||
|
||||
> **Note:** This is a security-critical submodule of RustFS that provides essential application authentication capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
|
||||
**RustFS AppAuth** provides application-level authentication and authorization capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
|
||||
|
||||
## ✨ Features
|
||||
|
||||
### 🔐 Authentication Methods
|
||||
|
||||
- **RSA Authentication**: Public-key cryptography for secure authentication
|
||||
- **JWT Tokens**: JSON Web Token support for stateless authentication
|
||||
- **API Keys**: Simple API key-based authentication
|
||||
- **Session Management**: Secure session handling and lifecycle management
|
||||
|
||||
### 🛡️ Security Features
|
||||
|
||||
- **Cryptographic Signing**: RSA digital signatures for request validation
|
||||
- **Token Encryption**: Encrypted token storage and transmission
|
||||
- **Key Rotation**: Automatic key rotation and management
|
||||
- **Audit Logging**: Comprehensive authentication event logging
|
||||
|
||||
### 🚀 Performance Features
|
||||
|
||||
- **Base64 Optimization**: High-performance base64 encoding/decoding
|
||||
- **Token Caching**: Efficient token validation caching
|
||||
- **Parallel Verification**: Concurrent authentication processing
|
||||
- **Hardware Acceleration**: Leverage CPU crypto extensions
|
||||
|
||||
### 🔧 Integration Features
|
||||
|
||||
- **S3 Compatibility**: AWS S3-compatible authentication
|
||||
- **Multi-Tenant**: Support for multiple application tenants
|
||||
- **Permission Mapping**: Fine-grained permission assignment
|
||||
- **External Integration**: LDAP, OAuth, and custom authentication providers
|
||||
|
||||
## 📦 Installation
|
||||
|
||||
Add this to your `Cargo.toml`:
|
||||
|
||||
```toml
|
||||
[dependencies]
|
||||
rustfs-appauth = "0.1.0"
|
||||
```
|
||||
|
||||
## 🔧 Usage
|
||||
|
||||
### Basic Authentication Setup
|
||||
|
||||
```rust
use rustfs_appauth::{AppAuthenticator, AuthConfig, AuthMethod};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure authentication
    let config = AuthConfig {
        auth_method: AuthMethod::RSA,
        key_size: 2048,
        token_expiry: Duration::from_secs(24 * 60 * 60), // 24 hours
        enable_caching: true,
        audit_logging: true,
    };

    // Initialize authenticator
    let authenticator = AppAuthenticator::new(config).await?;

    // Generate application credentials
    let app_credentials = authenticator.generate_app_credentials("my-app").await?;

    println!("App ID: {}", app_credentials.app_id);
    println!("Public Key: {}", app_credentials.public_key);

    Ok(())
}
```

### RSA-Based Authentication

```rust
use rustfs_appauth::{RSAAuthenticator, AuthRequest, AuthResponse};

async fn rsa_authentication_example() -> Result<(), Box<dyn std::error::Error>> {
    // Create RSA authenticator
    let rsa_auth = RSAAuthenticator::new(2048).await?;

    // Generate key pair for application
    let (private_key, public_key) = rsa_auth.generate_keypair().await?;

    // Register application
    let app_id = rsa_auth.register_application("my-storage-app", &public_key).await?;
    println!("Application registered with ID: {}", app_id);

    // Create authentication request
    let auth_request = AuthRequest {
        app_id: app_id.clone(),
        timestamp: chrono::Utc::now(),
        request_data: b"GET /bucket/object".to_vec(),
    };

    // Sign request with private key
    let signed_request = rsa_auth.sign_request(&auth_request, &private_key).await?;

    // Verify authentication
    let auth_response = rsa_auth.authenticate(&signed_request).await?;

    match auth_response {
        AuthResponse::Success { session_token, permissions } => {
            println!("Authentication successful!");
            println!("Session token: {}", session_token);
            println!("Permissions: {:?}", permissions);
        }
        AuthResponse::Failed { reason } => {
            println!("Authentication failed: {}", reason);
        }
    }

    Ok(())
}
```

### JWT Token Management

```rust
use rustfs_appauth::{JWTManager, TokenClaims, TokenRequest};

async fn jwt_management_example() -> Result<(), Box<dyn std::error::Error>> {
    // Create JWT manager
    let jwt_manager = JWTManager::new("your-secret-key").await?;

    // Create token claims
    let claims = TokenClaims {
        app_id: "my-app".to_string(),
        user_id: Some("user123".to_string()),
        permissions: vec![
            "read:bucket".to_string(),
            "write:bucket".to_string(),
        ],
        expires_at: chrono::Utc::now() + chrono::Duration::hours(24),
        issued_at: chrono::Utc::now(),
    };

    // Generate JWT token
    let token = jwt_manager.generate_token(&claims).await?;
    println!("Generated token: {}", token);

    // Validate token (keep the Result so we can match on it)
    let validation_result = jwt_manager.validate_token(&token).await;

    match validation_result {
        Ok(validated_claims) => {
            println!("Token valid for app: {}", validated_claims.app_id);
            println!("Permissions: {:?}", validated_claims.permissions);
        }
        Err(e) => {
            println!("Token validation failed: {}", e);
        }
    }

    // Refresh token
    let refreshed_token = jwt_manager.refresh_token(&token).await?;
    println!("Refreshed token: {}", refreshed_token);

    Ok(())
}
```

### API Key Authentication

```rust
use rustfs_appauth::{APIKeyManager, APIKeyConfig, KeyPermissions};

async fn api_key_authentication() -> Result<(), Box<dyn std::error::Error>> {
    let api_key_manager = APIKeyManager::new().await?;

    // Create API key configuration
    let key_config = APIKeyConfig {
        app_name: "storage-client".to_string(),
        permissions: KeyPermissions {
            read_buckets: vec!["public-*".to_string()],
            write_buckets: vec!["uploads".to_string()],
            admin_access: false,
        },
        expires_at: Some(chrono::Utc::now() + chrono::Duration::days(90)),
        rate_limit: Some(1000), // requests per hour
    };

    // Generate API key
    let api_key = api_key_manager.generate_key(&key_config).await?;
    println!("Generated API key: {}", api_key.key);
    println!("Key ID: {}", api_key.key_id);

    // Authenticate with API key
    let auth_result = api_key_manager.authenticate(&api_key.key).await?;

    if auth_result.is_valid {
        println!("API key authentication successful");
        println!("Rate limit remaining: {}", auth_result.rate_limit_remaining);
    }

    // List API keys for application
    let keys = api_key_manager.list_keys("storage-client").await?;
    for key in keys {
        println!("Key: {} - Status: {} - Expires: {:?}",
            key.key_id, key.status, key.expires_at);
    }

    // Revoke API key
    api_key_manager.revoke_key(&api_key.key_id).await?;
    println!("API key revoked successfully");

    Ok(())
}
```

### Session Management

```rust
use rustfs_appauth::{SessionManager, SessionConfig, SessionInfo};
use std::time::Duration;

async fn session_management_example() -> Result<(), Box<dyn std::error::Error>> {
    // Configure session management
    let session_config = SessionConfig {
        session_timeout: Duration::from_secs(8 * 60 * 60), // 8 hours
        max_sessions_per_app: 10,
        require_refresh: true,
        secure_cookies: true,
    };

    let session_manager = SessionManager::new(session_config).await?;

    // Create new session
    let session_info = SessionInfo {
        app_id: "web-app".to_string(),
        user_id: Some("user456".to_string()),
        ip_address: "192.168.1.100".to_string(),
        user_agent: "RustFS-Client/1.0".to_string(),
    };

    let session = session_manager.create_session(&session_info).await?;
    println!("Session created: {}", session.session_id);

    // Validate session
    let validation = session_manager.validate_session(&session.session_id).await?;

    if validation.is_valid {
        println!("Session is valid, expires at: {}", validation.expires_at);
    }

    // Refresh session
    session_manager.refresh_session(&session.session_id).await?;
    println!("Session refreshed");

    // Get active sessions
    let active_sessions = session_manager.get_active_sessions("web-app").await?;
    println!("Active sessions: {}", active_sessions.len());

    // Terminate session
    session_manager.terminate_session(&session.session_id).await?;
    println!("Session terminated");

    Ok(())
}
```

### Multi-Tenant Authentication

```rust
use rustfs_appauth::{
    AuthCredentials, AuthMethod, MultiTenantAuth, TenantAuthRequest, TenantConfig, TenantPermissions,
};

async fn multi_tenant_auth_example() -> Result<(), Box<dyn std::error::Error>> {
    let multi_tenant_auth = MultiTenantAuth::new().await?;

    // Create tenant configurations
    let tenant1_config = TenantConfig {
        tenant_id: "company-a".to_string(),
        name: "Company A".to_string(),
        permissions: TenantPermissions {
            max_buckets: 100,
            max_storage_gb: 1000,
            allowed_regions: vec!["us-east-1".to_string(), "us-west-2".to_string()],
        },
        auth_methods: vec![AuthMethod::RSA, AuthMethod::JWT],
    };

    let tenant2_config = TenantConfig {
        tenant_id: "company-b".to_string(),
        name: "Company B".to_string(),
        permissions: TenantPermissions {
            max_buckets: 50,
            max_storage_gb: 500,
            allowed_regions: vec!["eu-west-1".to_string()],
        },
        auth_methods: vec![AuthMethod::APIKey],
    };

    // Register tenants
    multi_tenant_auth.register_tenant(&tenant1_config).await?;
    multi_tenant_auth.register_tenant(&tenant2_config).await?;

    // Authenticate application for specific tenant
    let auth_request = TenantAuthRequest {
        tenant_id: "company-a".to_string(),
        app_id: "app-1".to_string(),
        credentials: AuthCredentials::RSA {
            signature: "signed-data".to_string(),
            public_key: "public-key-data".to_string(),
        },
    };

    let auth_result = multi_tenant_auth.authenticate(&auth_request).await?;

    if auth_result.is_authenticated {
        println!("Multi-tenant authentication successful");
        println!("Tenant: {}", auth_result.tenant_id);
        println!("Permissions: {:?}", auth_result.permissions);
    }

    Ok(())
}
```

### Authentication Middleware

```rust
use rustfs_appauth::{AuthMiddleware, AuthContext, MiddlewareConfig};
use axum::{Router, middleware, Extension};

async fn setup_auth_middleware() -> Result<Router, Box<dyn std::error::Error>> {
    // Configure authentication middleware
    let middleware_config = MiddlewareConfig {
        skip_paths: vec!["/health".to_string(), "/metrics".to_string()],
        require_auth: true,
        audit_requests: true,
    };

    let auth_middleware = AuthMiddleware::new(middleware_config).await?;

    // Create router with authentication middleware
    let app = Router::new()
        .route("/api/buckets", axum::routing::get(list_buckets))
        .route("/api/objects", axum::routing::post(upload_object))
        .layer(middleware::from_fn(auth_middleware.authenticate))
        .layer(Extension(auth_middleware));

    Ok(app)
}

async fn list_buckets(
    Extension(auth_context): Extension<AuthContext>,
) -> Result<String, Box<dyn std::error::Error>> {
    // Use authentication context
    println!("Authenticated app: {}", auth_context.app_id);
    println!("Permissions: {:?}", auth_context.permissions);

    // Your bucket listing logic here
    Ok("Bucket list".to_string())
}
```

## 🏗️ Architecture

### AppAuth Architecture

```
AppAuth Architecture:
┌─────────────────────────────────────────────────────────────┐
│                     Authentication API                      │
├─────────────────────────────────────────────────────────────┤
│  RSA Auth   │  JWT Tokens   │   API Keys   │   Sessions     │
├─────────────────────────────────────────────────────────────┤
│                  Cryptographic Operations                   │
├─────────────────────────────────────────────────────────────┤
│  Signing/      │  Token       │  Key        │  Session      │
│  Verification  │  Management  │  Management │  Storage      │
├─────────────────────────────────────────────────────────────┤
│                   Security Infrastructure                   │
└─────────────────────────────────────────────────────────────┘
```

### Authentication Methods

| Method | Security Level | Use Case | Performance |
|--------|----------------|----------|-------------|
| RSA | High | Enterprise applications | Medium |
| JWT | Medium-High | Web applications | High |
| API Key | Medium | Service-to-service | Very High |
| Session | Medium | Interactive applications | High |

## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Test RSA authentication
cargo test rsa_auth

# Test JWT tokens
cargo test jwt_tokens

# Test API key management
cargo test api_keys

# Test session management
cargo test sessions

# Integration tests
cargo test --test integration
```

## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: RSA cryptographic libraries
- **Security**: Secure key storage recommended

## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Signer](../signer) - Request signing
- [RustFS Crypto](../crypto) - Cryptographic operations
- JWT-based authentication with secure token management
- RBAC (Role-Based Access Control) for fine-grained permissions
- Multi-tenant application isolation and management
- OAuth 2.0 and OpenID Connect integration
- API key management and rotation
- Session management with configurable expiration

## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [AppAuth API Reference](https://docs.rustfs.com/appauth/)
- [Security Guide](https://docs.rustfs.com/security/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

---

<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>

<p align="center">
Made with 🔐 by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

@@ -23,14 +23,14 @@ use std::io::{Error, Result};

#[derive(Serialize, Deserialize, Debug, Default, Clone)]
pub struct Token {
    pub name: String,    // 应用 ID
    pub expired: u64,    // 到期时间 (UNIX 时间戳)
    pub name: String,    // Application ID
    pub expired: u64,    // Expiry time (UNIX timestamp)
}

// 公钥生成 Token
// [token] Token 对象
// [key] 公钥字符串
// 返回 base64 处理的加密字符串
/// Generates a token string encrypted with the public key
/// [token] Token object
/// [key] Public key string
/// Returns the base64-encoded encrypted string
pub fn gencode(token: &Token, key: &str) -> Result<String> {
    let data = serde_json::to_vec(token)?;
    let public_key = RsaPublicKey::from_public_key_pem(key).map_err(Error::other)?;
@@ -38,10 +38,10 @@ pub fn gencode(token: &Token, key: &str) -> Result<String> {
    Ok(base64_simd::URL_SAFE_NO_PAD.encode_to_string(&encrypted_data))
}

// 私钥解析 Token
// [token] base64 处理的加密字符串
// [key] 私钥字符串
// 返回 Token 对象
/// Parses a Token with the private key
/// [token] Base64-encoded encrypted string
/// [key] Private key string
/// Returns the Token object
pub fn parse(token: &str, key: &str) -> Result<Token> {
    let encrypted_data = base64_simd::URL_SAFE_NO_PAD
        .decode_to_vec(token.as_bytes())
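The `Token` above carries `expired` as a UNIX timestamp, so after `parse` succeeds a caller still has to reject stale tokens. A minimal, dependency-free sketch of that check (the `is_expired` helper is illustrative and not part of this module; serde derives are omitted):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Mirror of the `Token` payload shown in the diff (serde derives omitted).
pub struct Token {
    pub name: String,  // Application ID
    pub expired: u64,  // Expiry time (UNIX timestamp)
}

/// Hypothetical helper: true when the token's expiry lies at or before "now".
pub fn is_expired(token: &Token) -> bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before UNIX epoch")
        .as_secs();
    token.expired <= now
}

fn main() {
    let stale = Token { name: "my-app".to_string(), expired: 0 };
    let fresh = Token { name: "my-app".to_string(), expired: u64::MAX };
    assert!(is_expired(&stale));
    assert!(!is_expired(&fresh));
}
```

Keeping the comparison in seconds matches the `u64` UNIX-timestamp representation and avoids any timezone handling on the validation path.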
@@ -19,11 +19,14 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Common utilities and data structures for RustFS, providing shared functionality across the project."
keywords = ["common", "utilities", "data-structures", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "data-structures"]

[lints]
workspace = true

[dependencies]
lazy_static.workspace = true
tokio.workspace = true
tonic = { workspace = true }
@@ -3,7 +3,7 @@

# RustFS Common - Shared Components

<p align="center">
<strong>Common types, utilities, and shared components for RustFS distributed object storage</strong>
<strong>Shared components and common utilities module for RustFS distributed object storage</strong>
</p>

<p align="center">
@@ -17,279 +17,21 @@

## 📖 Overview

**RustFS Common** provides shared components, types, and utilities used across all RustFS modules. This foundational library ensures consistency, reduces code duplication, and provides essential building blocks for the [RustFS](https://rustfs.com) distributed object storage system.

> **Note:** This is a foundational submodule of RustFS that provides essential shared components for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Common** provides shared components and common utilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

### 🔧 Core Types

- **Common Data Structures**: Shared types and enums
- **Error Handling**: Unified error types and utilities
- **Result Types**: Consistent result handling patterns
- **Constants**: System-wide constants and defaults

### 🛠️ Utilities

- **Async Helpers**: Common async patterns and utilities
- **Serialization**: Shared serialization utilities
- **Logging**: Common logging and tracing setup
- **Metrics**: Shared metrics and observability

### 🌐 Network Components

- **gRPC Common**: Shared gRPC types and utilities
- **Protocol Helpers**: Common protocol implementations
- **Connection Management**: Shared connection utilities
- **Request/Response Types**: Common API types

## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
rustfs-common = "0.1.0"
```

## 🔧 Usage

### Basic Common Types

```rust
use rustfs_common::{Result, Error, ObjectInfo, BucketInfo};

fn main() -> Result<()> {
    // Use common result type
    let result = some_operation()?;

    // Use common object info
    let object = ObjectInfo {
        name: "example.txt".to_string(),
        size: 1024,
        etag: "d41d8cd98f00b204e9800998ecf8427e".to_string(),
        last_modified: chrono::Utc::now(),
        content_type: "text/plain".to_string(),
    };

    println!("Object: {} ({} bytes)", object.name, object.size);
    Ok(())
}
```

### Error Handling

```rust
use rustfs_common::{Error, ErrorKind, Result};

fn example_operation(some_condition: bool) -> Result<String> {
    // Different error types
    match some_condition {
        true => Ok("Success".to_string()),
        false => Err(Error::new(
            ErrorKind::InvalidInput,
            "Invalid operation parameters"
        )),
    }
}

fn handle_errors() {
    match example_operation(true) {
        Ok(value) => println!("Success: {}", value),
        Err(e) => {
            match e.kind() {
                ErrorKind::InvalidInput => println!("Input error: {}", e),
                ErrorKind::NotFound => println!("Not found: {}", e),
                ErrorKind::PermissionDenied => println!("Access denied: {}", e),
                _ => println!("Other error: {}", e),
            }
        }
    }
}
```

### Async Utilities

```rust
use rustfs_common::async_utils::{timeout_with_default, retry_with_backoff, spawn_task};
use std::time::Duration;

async fn async_operations() -> Result<()> {
    // Timeout with default value
    let result = timeout_with_default(
        Duration::from_secs(5),
        expensive_operation(),
        "default_value".to_string()
    ).await;

    // Retry with exponential backoff
    let result = retry_with_backoff(
        3, // max attempts
        Duration::from_millis(100), // initial delay
        || async { fallible_operation().await }
    ).await?;

    // Spawn background task
    spawn_task("background-worker", async {
        background_work().await;
    });

    Ok(())
}
```

### Metrics and Observability

```rust
use rustfs_common::metrics::{Counter, Histogram, Gauge, MetricsRegistry};

fn setup_metrics() -> Result<()> {
    let registry = MetricsRegistry::new();

    // Create metrics
    let requests_total = Counter::new("requests_total", "Total number of requests")?;
    let request_duration = Histogram::new(
        "request_duration_seconds",
        "Request duration in seconds"
    )?;
    let active_connections = Gauge::new(
        "active_connections",
        "Number of active connections"
    )?;

    // Register metrics
    registry.register(Box::new(requests_total))?;
    registry.register(Box::new(request_duration))?;
    registry.register(Box::new(active_connections))?;

    Ok(())
}
```

### gRPC Common Types

```rust
use rustfs_common::grpc::{GrpcResult, GrpcError, TonicStatus};
use tonic::{Request, Response, Status};

async fn grpc_service_example(
    request: Request<MyRequest>
) -> GrpcResult<MyResponse> {
    let req = request.into_inner();

    // Validate request
    if req.name.is_empty() {
        return Err(GrpcError::invalid_argument("Name cannot be empty"));
    }

    // Process request
    let response = MyResponse {
        result: format!("Processed: {}", req.name),
        status: "success".to_string(),
    };

    Ok(Response::new(response))
}

// Error conversion
impl From<Error> for Status {
    fn from(err: Error) -> Self {
        match err.kind() {
            ErrorKind::NotFound => Status::not_found(err.to_string()),
            ErrorKind::PermissionDenied => Status::permission_denied(err.to_string()),
            ErrorKind::InvalidInput => Status::invalid_argument(err.to_string()),
            _ => Status::internal(err.to_string()),
        }
    }
}
```

## 🏗️ Architecture

### Common Module Structure

```
Common Architecture:
┌─────────────────────────────────────────────────────────────┐
│                      Public API Layer                       │
├─────────────────────────────────────────────────────────────┤
│  Core Types   │   Error Types    │      Result Types        │
├─────────────────────────────────────────────────────────────┤
│  Async Utils  │     Metrics      │      gRPC Common         │
├─────────────────────────────────────────────────────────────┤
│  Constants    │  Serialization   │        Logging           │
├─────────────────────────────────────────────────────────────┤
│                     Foundation Types                        │
└─────────────────────────────────────────────────────────────┘
```

### Core Components

| Component | Purpose | Usage |
|-----------|---------|-------|
| Types | Common data structures | Shared across all modules |
| Errors | Unified error handling | Consistent error reporting |
| Async Utils | Async patterns | Common async operations |
| Metrics | Observability | Performance monitoring |
| gRPC | Protocol support | Service communication |

## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Test specific components
cargo test types
cargo test errors
cargo test async_utils
```

## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: Minimal, focused on essential functionality

## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Utils](../utils) - Utility functions
- [RustFS Config](../config) - Configuration management
- Shared data structures and type definitions
- Common error handling and result types
- Utility functions used across modules
- Configuration structures and validation
- Logging and tracing infrastructure
- Cross-platform compatibility helpers

## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [Common API Reference](https://docs.rustfs.com/common/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

---

<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>

<p align="center">
Made with 🔧 by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use std::collections::HashMap;
#![allow(non_upper_case_globals)] // FIXME

use std::collections::HashMap;
use std::sync::LazyLock;

use lazy_static::lazy_static;
use tokio::sync::RwLock;
use tonic::transport::Channel;

lazy_static! {
    pub static ref GLOBAL_Local_Node_Name: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Rustfs_Host: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Rustfs_Port: RwLock<String> = RwLock::new("9000".to_string());
    pub static ref GLOBAL_Rustfs_Addr: RwLock<String> = RwLock::new("".to_string());
    pub static ref GLOBAL_Conn_Map: RwLock<HashMap<String, Channel>> = RwLock::new(HashMap::new());
}
pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));

pub async fn set_global_addr(addr: &str) {
    *GLOBAL_Rustfs_Addr.write().await = addr.to_string();
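The hunk above migrates the globals from the `lazy_static!` macro to `std::sync::LazyLock`, which has been stable in the standard library since Rust 1.80 and needs no external dependency. A minimal sketch of the same pattern, using the synchronous `std::sync::RwLock` for brevity where the real code wraps tokio's async `RwLock` (names and the `u32` connection value here are placeholders):

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, RwLock};

// Same shape as GLOBAL_Rustfs_Port in the diff, minus the async lock.
static GLOBAL_PORT: LazyLock<RwLock<String>> =
    LazyLock::new(|| RwLock::new("9000".to_string()));

// Placeholder stand-in for GLOBAL_Conn_Map (Channel replaced by u32).
static GLOBAL_CONN_MAP: LazyLock<RwLock<HashMap<String, u32>>> =
    LazyLock::new(|| RwLock::new(HashMap::new()));

fn main() {
    // The closure runs lazily, on first access to the static.
    assert_eq!(*GLOBAL_PORT.read().unwrap(), "9000");

    // Mutation goes through the inner lock, just like the tokio version.
    *GLOBAL_PORT.write().unwrap() = "9001".to_string();
    assert_eq!(*GLOBAL_PORT.read().unwrap(), "9001");

    GLOBAL_CONN_MAP.write().unwrap().insert("node-1".to_string(), 1);
    assert_eq!(GLOBAL_CONN_MAP.read().unwrap().len(), 1);
}
```

Unlike `lazy_static!`, `LazyLock` statics are ordinary items, so the `#![allow(non_upper_case_globals)]` attribute in the diff exists only to keep the legacy mixed-case names compiling.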
@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Configuration management for RustFS, providing a centralized way to manage application settings and features."
keywords = ["configuration", "settings", "management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]

[dependencies]
const-str = { workspace = true, optional = true }
@@ -3,7 +3,7 @@
|
||||
# RustFS Config - Configuration Management
|
||||
|
||||
<p align="center">
|
||||
<strong>Centralized configuration management for RustFS distributed object storage</strong>
|
||||
<strong>Configuration management and validation module for RustFS distributed object storage</strong>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
@@ -17,388 +17,21 @@
|
||||
|
||||
## 📖 Overview

**RustFS Config** is the configuration management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides centralized configuration handling, environment-based configuration loading, validation, and runtime configuration updates for all RustFS components.

> **Note:** This is a foundational submodule of RustFS that provides essential configuration management capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

**RustFS Config** provides configuration management and validation capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

### ⚙️ Configuration Management

- **Multi-Format Support**: JSON, YAML, TOML configuration formats
- **Environment Variables**: Automatic environment variable override
- **Default Values**: Comprehensive default configuration
- **Validation**: Configuration validation and error reporting

### 🔧 Advanced Features

- **Hot Reload**: Runtime configuration updates without restart
- **Profile Support**: Environment-specific configuration profiles
- **Secret Management**: Secure handling of sensitive configuration
- **Configuration Merging**: Hierarchical configuration composition

### 🛠️ Developer Features

- **Type Safety**: Strongly typed configuration structures
- **Documentation**: Auto-generated configuration documentation
- **CLI Integration**: Command-line configuration override
- **Testing Support**: Configuration mocking for tests
## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
rustfs-config = "0.1.0"

# With specific features
rustfs-config = { version = "0.1.0", features = ["constants", "notify"] }
```

### Feature Flags

Available features:

- `constants` - Configuration constants and compile-time values
- `notify` - Configuration change notification support
- `observability` - Observability and metrics configuration
- `default` - Core configuration functionality
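The automatic environment variable override mentioned above implies a mapping from prefixed variable names to nested configuration keys (e.g. `RUSTFS_SERVER_PORT` to `server.port`). A minimal std-only sketch of that mapping follows; `env_var_to_key` is a hypothetical helper for illustration, not part of the `rustfs-config` API.

```rust
// Hypothetical sketch: map a prefixed env var name to a dotted config key.
// The real rustfs-config mapping rules may differ.
fn env_var_to_key(prefix: &str, var: &str) -> Option<String> {
    let stripped = var.strip_prefix(prefix)?.strip_prefix('_')?;
    Some(stripped.to_lowercase().replace('_', "."))
}

fn main() {
    assert_eq!(
        env_var_to_key("RUSTFS", "RUSTFS_SERVER_PORT").as_deref(),
        Some("server.port")
    );
    // Variables without the prefix are ignored.
    assert_eq!(env_var_to_key("RUSTFS", "OTHER_VAR"), None);
}
```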
## 🔧 Usage

### Basic Configuration Loading

```rust
use rustfs_config::{Config, ConfigBuilder, ConfigFormat};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load configuration from file
    let config = Config::from_file("config.yaml")?;

    // Load with environment overrides
    let config = ConfigBuilder::new()
        .add_file("config.yaml")
        .add_env_prefix("RUSTFS")
        .build()?;

    // Access configuration values
    println!("Server address: {}", config.server.address);
    println!("Storage path: {}", config.storage.path);

    Ok(())
}
```
### Environment-Based Configuration

```rust
use rustfs_config::{Config, Environment};

async fn load_environment_config() -> Result<(), Box<dyn std::error::Error>> {
    // Load configuration based on environment
    let env = Environment::detect()?;
    let config = Config::for_environment(env).await?;

    match env {
        Environment::Development => {
            println!("Using development configuration");
            println!("Debug mode: {}", config.debug.enabled);
        }
        Environment::Production => {
            println!("Using production configuration");
            println!("Log level: {}", config.logging.level);
        }
        Environment::Testing => {
            println!("Using test configuration");
            println!("Test database: {}", config.database.test_url);
        }
    }

    Ok(())
}
```
### Configuration Structure

```rust
use rustfs_config::{Config, SecurityConfig, LoggingConfig, MonitoringConfig, ErasureCodingConfig};
use serde::{Deserialize, Serialize};

// ServerConfig and StorageConfig are defined locally here for illustration.
#[derive(Debug, Deserialize, Serialize)]
pub struct ApplicationConfig {
    pub server: ServerConfig,
    pub storage: StorageConfig,
    pub security: SecurityConfig,
    pub logging: LoggingConfig,
    pub monitoring: MonitoringConfig,
}

#[derive(Debug, Deserialize, Serialize)]
pub struct ServerConfig {
    pub address: String,
    pub port: u16,
    pub workers: usize,
    pub timeout: std::time::Duration,
}

#[derive(Debug, Deserialize, Serialize)]
pub struct StorageConfig {
    pub path: String,
    pub max_size: u64,
    pub compression: bool,
    pub erasure_coding: ErasureCodingConfig,
}

fn load_typed_config() -> Result<ApplicationConfig, Box<dyn std::error::Error>> {
    let config: ApplicationConfig = Config::builder()
        .add_file("config.yaml")
        .add_env_prefix("RUSTFS")
        .set_default("server.port", 9000)?
        .set_default("server.workers", 4)?
        .build_typed()?;

    Ok(config)
}
```
### Configuration Validation

```rust
use rustfs_config::{Config, ValidationError, Validator};

#[derive(Debug)]
pub struct ConfigValidator;

impl Validator<ApplicationConfig> for ConfigValidator {
    fn validate(&self, config: &ApplicationConfig) -> Result<(), ValidationError> {
        // Validate server configuration
        if config.server.port < 1024 {
            return Err(ValidationError::new("server.port", "Port must be >= 1024"));
        }

        if config.server.workers == 0 {
            return Err(ValidationError::new("server.workers", "Workers must be > 0"));
        }

        // Validate storage configuration
        if !std::path::Path::new(&config.storage.path).exists() {
            return Err(ValidationError::new("storage.path", "Storage path does not exist"));
        }

        // Validate erasure coding parameters
        if config.storage.erasure_coding.data_drives + config.storage.erasure_coding.parity_drives > 16 {
            return Err(ValidationError::new("storage.erasure_coding", "Total drives cannot exceed 16"));
        }

        Ok(())
    }
}

fn validate_configuration() -> Result<(), Box<dyn std::error::Error>> {
    let config: ApplicationConfig = Config::load_with_validation(
        "config.yaml",
        ConfigValidator,
    )?;

    println!("Configuration is valid!");
    Ok(())
}
```
### Hot Configuration Reload

```rust
use rustfs_config::{ConfigWatcher, ConfigEvent};
use tokio::sync::mpsc;

async fn watch_configuration_changes() -> Result<(), Box<dyn std::error::Error>> {
    let (tx, mut rx) = mpsc::channel::<ConfigEvent>(100);

    // Start configuration watcher
    let watcher = ConfigWatcher::new("config.yaml", tx)?;
    watcher.start().await?;

    // Handle configuration changes
    while let Some(event) = rx.recv().await {
        match event {
            ConfigEvent::Changed(new_config) => {
                println!("Configuration changed, reloading...");
                // Apply new configuration
                apply_configuration(new_config).await?;
            }
            ConfigEvent::Error(err) => {
                eprintln!("Configuration error: {}", err);
            }
        }
    }

    Ok(())
}

async fn apply_configuration(config: ApplicationConfig) -> Result<(), Box<dyn std::error::Error>> {
    // Update server configuration
    // Update storage configuration
    // Update security settings
    // etc.
    Ok(())
}
```
### Configuration Profiles

```rust
use rustfs_config::{Config, Profile, ProfileManager};

fn load_profile_based_config() -> Result<(), Box<dyn std::error::Error>> {
    let profile_manager = ProfileManager::new("configs/")?;

    // Load specific profile
    let config = profile_manager.load_profile("production")?;

    // Load with fallback
    let config = profile_manager
        .load_profile("staging")
        .or_else(|_| profile_manager.load_profile("default"))?;

    // Merge multiple profiles
    let config = profile_manager
        .merge_profiles(&["base", "production", "regional"])?;

    Ok(())
}
```
## 🏗️ Architecture

### Configuration Architecture

```
Config Architecture:
┌─────────────────────────────────────────────────────────────┐
│                     Configuration API                       │
├─────────────────────────────────────────────────────────────┤
│   File Loader    │     Env Loader      │    CLI Parser      │
├─────────────────────────────────────────────────────────────┤
│                   Configuration Merger                      │
├─────────────────────────────────────────────────────────────┤
│   Validation     │     Watching        │    Hot Reload      │
├─────────────────────────────────────────────────────────────┤
│                 Type System Integration                     │
└─────────────────────────────────────────────────────────────┘
```

### Configuration Sources

| Source | Priority | Format | Example |
|--------|----------|--------|---------|
| Command Line | 1 (Highest) | Key-Value | `--server.port=8080` |
| Environment Variables | 2 | Key-Value | `RUSTFS_SERVER_PORT=8080` |
| Configuration File | 3 | JSON/YAML/TOML | `config.yaml` |
| Default Values | 4 (Lowest) | Code | Compile-time defaults |
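The precedence table above boils down to "first source that defines a key wins", scanning from highest to lowest priority. A std-only sketch of that resolution order (hypothetical `resolve` helper, not the actual rustfs-config internals):

```rust
use std::collections::HashMap;

// Sources are ordered highest priority first (CLI, env, file, defaults);
// the first one that defines the key wins.
fn resolve(key: &str, sources: &[HashMap<String, String>]) -> Option<String> {
    sources.iter().find_map(|s| s.get(key).cloned())
}

fn main() {
    let cli = HashMap::from([("server.port".to_string(), "8080".to_string())]);
    let env = HashMap::from([("server.port".to_string(), "9001".to_string())]);
    let file = HashMap::from([("server.address".to_string(), "0.0.0.0".to_string())]);

    let sources = [cli, env, file];
    // CLI beats env for server.port; file supplies server.address.
    assert_eq!(resolve("server.port", &sources).as_deref(), Some("8080"));
    assert_eq!(resolve("server.address", &sources).as_deref(), Some("0.0.0.0"));
}
```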
## 📋 Configuration Reference

### Server Configuration

```yaml
server:
  address: "0.0.0.0"
  port: 9000
  workers: 4
  timeout: "30s"
  tls:
    enabled: true
    cert_file: "/etc/ssl/server.crt"
    key_file: "/etc/ssl/server.key"
```
### Storage Configuration

```yaml
storage:
  path: "/var/lib/rustfs"
  max_size: "1TB"
  compression: true
  erasure_coding:
    data_drives: 8
    parity_drives: 4
    stripe_size: "1MB"
```
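Values like `max_size: "1TB"` and `stripe_size: "1MB"` are human-readable sizes that must be parsed into byte counts. A hedged sketch of such a parser (binary units assumed; `parse_size` is illustrative, not the actual rustfs-config implementation):

```rust
// Parse "1MB", "1TB", etc. into bytes, assuming binary (power-of-two) units.
// Plain numbers without a unit suffix are rejected in this sketch.
fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim();
    let idx = s.find(|c: char| !c.is_ascii_digit())?;
    let value: u64 = s[..idx].parse().ok()?;
    let mult = match &s[idx..] {
        "KB" => 1u64 << 10,
        "MB" => 1u64 << 20,
        "GB" => 1u64 << 30,
        "TB" => 1u64 << 40,
        _ => return None,
    };
    value.checked_mul(mult)
}

fn main() {
    assert_eq!(parse_size("1MB"), Some(1u64 << 20));
    assert_eq!(parse_size("1TB"), Some(1u64 << 40));
    assert_eq!(parse_size("12XB"), None);
}
```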
### Security Configuration

```yaml
security:
  auth:
    enabled: true
    method: "jwt"
    secret_key: "${JWT_SECRET}"
  encryption:
    algorithm: "AES-256-GCM"
    key_rotation_interval: "24h"
```
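The `secret_key: "${JWT_SECRET}"` entry above suggests environment-placeholder expansion so secrets never live in the config file itself. A minimal sketch of that expansion, assuming a simple whole-value `${VAR}` syntax (the real rustfs-config secret handling may differ):

```rust
use std::env;

// If the value is exactly "${NAME}", substitute the environment variable;
// otherwise pass the literal value through unchanged.
fn expand(value: &str) -> Option<String> {
    if let Some(name) = value.strip_prefix("${").and_then(|v| v.strip_suffix('}')) {
        env::var(name).ok()
    } else {
        Some(value.to_string())
    }
}

fn main() {
    env::set_var("JWT_SECRET", "s3cr3t");
    assert_eq!(expand("${JWT_SECRET}").as_deref(), Some("s3cr3t"));
    assert_eq!(expand("plain-value").as_deref(), Some("plain-value"));
}
```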
## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Test configuration loading
cargo test config_loading

# Test validation
cargo test validation

# Test hot reload
cargo test hot_reload
```
## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: Minimal external dependencies

## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Utils](../utils) - Utility functions
- [RustFS Common](../common) - Common types and utilities

- Multi-format configuration support (TOML, YAML, JSON, ENV)
- Environment variable integration and override
- Configuration validation and type safety
- Hot-reload capabilities for dynamic updates
- Default value management and fallbacks
- Secure credential handling and encryption
## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [Config API Reference](https://docs.rustfs.com/config/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

---

<p align="center">
  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
  All other trademarks are the property of their respective owners.
</p>

<p align="center">
  Made with ⚙️ by the RustFS Team
</p>

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Cryptography and security features for RustFS, providing encryption, hashing, and secure authentication mechanisms."
keywords = ["cryptography", "encryption", "hashing", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "cryptography"]
documentation = "https://docs.rs/rustfs-crypto/latest/rustfs_crypto/"

[lints]
workspace = true
@@ -1,9 +1,9 @@
[](https://rustfs.com)

# RustFS Crypto Module
# RustFS Crypto - Cryptographic Operations

<p align="center">
  <strong>High-performance cryptographic module for RustFS distributed object storage</strong>
  <strong>High-performance cryptographic operations module for RustFS distributed object storage</strong>
</p>

<p align="center">

@@ -17,313 +17,21 @@
## 📖 Overview

The **RustFS Crypto Module** is a core cryptographic component of the [RustFS](https://rustfs.com) distributed object storage system. This module provides secure, high-performance encryption and decryption capabilities, JWT token management, and cross-platform cryptographic operations designed specifically for enterprise-grade storage systems.

> **Note:** This is a submodule of RustFS and is designed to work seamlessly within the RustFS ecosystem. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

**RustFS Crypto** provides high-performance cryptographic operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

### 🔐 Encryption & Decryption

- **Multiple Algorithms**: Support for AES-GCM, ChaCha20Poly1305, and PBKDF2
- **Key Derivation**: Argon2id and PBKDF2 for secure key generation
- **Memory Safety**: Built with Rust's memory safety guarantees
- **Cross-Platform**: Optimized for x86_64, aarch64, s390x, and other architectures

### 🎫 JWT Management

- **Token Generation**: Secure JWT token creation with HS512 algorithm
- **Token Validation**: Robust JWT token verification and decoding
- **Claims Management**: Flexible claims handling with JSON support

### 🛡️ Security Features

- **FIPS Compliance**: Optional FIPS 140-2 compatible mode
- **Hardware Acceleration**: Automatic detection and utilization of CPU crypto extensions
- **Secure Random**: Cryptographically secure random number generation
- **Side-Channel Protection**: Resistant to timing attacks

### 🚀 Performance

- **Zero-Copy Operations**: Efficient memory usage with `Bytes` support
- **Async/Await**: Full async support for non-blocking operations
- **Hardware Optimization**: CPU-specific optimizations for better performance
## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
rustfs-crypto = "0.1.0"
```

### Feature Flags

```toml
[dependencies]
rustfs-crypto = { version = "0.1.0", features = ["crypto", "fips"] }
```

Available features:

- `crypto` (default): Enable all cryptographic functions
- `fips`: Enable FIPS 140-2 compliance mode
- `default`: Includes both `crypto` and `fips`
## 🔧 Usage

### Basic Encryption/Decryption

```rust
use rustfs_crypto::{encrypt_data, decrypt_data};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let password = b"my_secure_password";
    let data = b"sensitive information";

    // Encrypt data
    let encrypted = encrypt_data(password, data)?;
    println!("Encrypted {} bytes", encrypted.len());

    // Decrypt data
    let decrypted = decrypt_data(password, &encrypted)?;
    assert_eq!(data, decrypted.as_slice());
    println!("Successfully decrypted data");

    Ok(())
}
```
### JWT Token Management

```rust
use rustfs_crypto::{jwt_encode, jwt_decode};
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let secret = b"jwt_secret_key";
    let claims = json!({
        "sub": "user123",
        "exp": 1234567890,
        "iat": 1234567890
    });

    // Create JWT token
    let token = jwt_encode(secret, &claims)?;
    println!("Generated token: {}", token);

    // Verify and decode token
    let decoded = jwt_decode(&token, secret)?;
    println!("Decoded claims: {:?}", decoded.claims);

    Ok(())
}
```
### Advanced Usage with Custom Configuration

```rust
use rustfs_crypto::{encrypt_data, decrypt_data, Error};

#[cfg(feature = "crypto")]
fn secure_storage_example() -> Result<(), Error> {
    // Large data encryption
    let large_data = vec![0u8; 1024 * 1024]; // 1MB
    let password = b"complex_password_123!@#";

    // Encrypt with automatic algorithm selection
    let encrypted = encrypt_data(password, &large_data)?;

    // Decrypt and verify
    let decrypted = decrypt_data(password, &encrypted)?;
    assert_eq!(large_data.len(), decrypted.len());

    println!("Successfully processed {} bytes", large_data.len());
    Ok(())
}
```
## 🏗️ Architecture

### Supported Encryption Algorithms

| Algorithm | Key Derivation | Use Case | FIPS Compliant |
|-----------|----------------|----------|----------------|
| AES-GCM | Argon2id | General purpose, hardware accelerated | ✅ |
| ChaCha20Poly1305 | Argon2id | Software-only environments | ❌ |
| AES-GCM | PBKDF2 | FIPS compliance required | ✅ |
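The table implies a simple selection rule: FIPS mode restricts the choice to AES-GCM, and otherwise hardware AES support decides between AES-GCM and ChaCha20Poly1305. A hypothetical sketch of that dispatch logic (not the actual rustfs-crypto implementation):

```rust
// Illustrative cipher-selection rule derived from the table above.
#[derive(Debug, PartialEq)]
enum Cipher {
    AesGcm,
    ChaCha20Poly1305,
}

// FIPS mode forces AES-GCM; otherwise prefer AES-GCM only when the CPU
// has AES acceleration, falling back to ChaCha20Poly1305 in software.
fn pick_cipher(fips: bool, has_aes_accel: bool) -> Cipher {
    if fips || has_aes_accel {
        Cipher::AesGcm
    } else {
        Cipher::ChaCha20Poly1305
    }
}

fn main() {
    assert_eq!(pick_cipher(true, false), Cipher::AesGcm);
    assert_eq!(pick_cipher(false, true), Cipher::AesGcm);
    assert_eq!(pick_cipher(false, false), Cipher::ChaCha20Poly1305);
}
```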
### Cross-Platform Support

The module automatically detects and optimizes for:

- **x86/x86_64**: AES-NI and PCLMULQDQ instructions
- **aarch64**: ARM Crypto Extensions
- **s390x**: IBM Z Crypto Extensions
- **Other architectures**: Fallback to software implementations
## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Run tests with all features
cargo test --all-features

# Run benchmarks
cargo bench

# Test cross-platform compatibility
cargo test --target x86_64-unknown-linux-gnu
cargo test --target aarch64-unknown-linux-gnu
```
## 📊 Performance

The crypto module is designed for high-performance scenarios:

- **Encryption Speed**: Up to 2GB/s on modern hardware
- **Memory Usage**: Minimal heap allocation with zero-copy operations
- **CPU Utilization**: Automatic hardware acceleration detection
- **Scalability**: Thread-safe operations for concurrent access

## 🤝 Integration with RustFS

This module is specifically designed to integrate with other RustFS components:

- **Storage Layer**: Provides encryption for object storage
- **Authentication**: JWT tokens for API authentication
- **Configuration**: Secure configuration data encryption
- **Metadata**: Encrypted metadata storage
## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Architectures**: x86_64, aarch64, s390x, and more

## 🔒 Security Considerations

- All cryptographic operations use industry-standard algorithms
- Key derivation follows best practices (Argon2id, PBKDF2)
- Memory is securely cleared after use
- Timing attack resistance is built-in
- Hardware security modules (HSM) support planned

## 🐛 Known Issues

- Hardware acceleration detection may not work on all virtualized environments
- FIPS mode requires additional system-level configuration
- Some older CPU architectures may have reduced performance
## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS ECStore](../ecstore) - Erasure coding storage engine
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Policy](../policy) - Policy engine

- AES-GCM encryption with hardware acceleration
- RSA and ECDSA digital signature support
- Secure hash functions (SHA-256, BLAKE3)
- Key derivation and management utilities
- Stream ciphers for large data encryption
- Hardware security module integration
## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [API Reference](https://docs.rustfs.com/crypto/)
- [Security Guide](https://docs.rustfs.com/security/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:

- Code style and formatting requirements
- Testing procedures and coverage
- Security considerations for cryptographic code
- Pull request process and review guidelines
### Development Setup

```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs

# Navigate to crypto module
cd crates/crypto

# Install dependencies
cargo build

# Run tests
cargo test

# Format code
cargo fmt

# Run linter
cargo clippy
```
## 💬 Getting Help

- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Security**: Report security issues to <security@rustfs.com>

## 📞 Contact

- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)

## 👥 Contributors

This module is maintained by the RustFS team and community contributors. Special thanks to all who have contributed to making RustFS cryptography secure and efficient.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

```
Copyright 2024 RustFS Team

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---

<p align="center">
  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
  All other trademarks are the property of their respective owners.
</p>

<p align="center">
  Made with ❤️ by the RustFS Team
</p>

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,12 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Erasure coding storage backend for RustFS, providing efficient data storage and retrieval with redundancy."
keywords = ["erasure-coding", "storage", "rustfs", "Minio", "solomon"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-ecstore/latest/rustfs_ecstore/"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lints]
@@ -103,8 +109,8 @@ winapi = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
criterion = { version = "0.5", features = ["html_reports"] }
temp-env = "0.3.6"
criterion = { workspace = true, features = ["html_reports"] }
temp-env = { workspace = true }

[build-dependencies]
shadow-rs = { workspace = true, features = ["build", "metadata"] }
@@ -1,6 +1,6 @@
[](https://rustfs.com)

# RustFS ECStore - Erasure Coding Storage Engine
# RustFS ECStore - Erasure Coding Storage

<p align="center">
  <strong>High-performance erasure coding storage engine for RustFS distributed object storage</strong>

@@ -17,425 +17,24 @@

## 📖 Overview

**RustFS ECStore** is the core storage engine of the [RustFS](https://rustfs.com) distributed object storage system. It provides enterprise-grade erasure coding capabilities, data integrity protection, and high-performance object storage operations. This module serves as the foundation for RustFS's distributed storage architecture.

> **Note:** This is a core submodule of RustFS and provides the primary storage capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

**RustFS ECStore** provides erasure coding storage capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features

### 🔧 Erasure Coding Storage

- **Reed-Solomon Erasure Coding**: Advanced error correction with configurable redundancy
- **Data Durability**: Protection against disk failures and bit rot
- **Automatic Repair**: Self-healing capabilities for corrupted or missing data
- **Configurable Parity**: Flexible parity configurations (4+2, 8+4, 16+4, etc.)

### 💾 Storage Management

- **Multi-Disk Support**: Intelligent disk management and load balancing
- **Storage Classes**: Support for different storage tiers and policies
- **Bucket Management**: Advanced bucket operations and lifecycle management
- **Object Versioning**: Complete versioning support with metadata tracking

### 🚀 Performance & Scalability

- **High Throughput**: Optimized for large-scale data operations
- **Parallel Processing**: Concurrent read/write operations across multiple disks
- **Memory Efficient**: Smart caching and memory management
- **SIMD Optimization**: Hardware-accelerated erasure coding operations

### 🛡️ Data Integrity

- **Bitrot Detection**: Real-time data corruption detection
- **Checksum Verification**: Multiple checksum algorithms (MD5, SHA256, XXHash)
- **Healing System**: Automatic background healing and repair
- **Data Scrubbing**: Proactive data integrity scanning

### 🔄 Advanced Features

- **Compression**: Built-in compression support for space optimization
- **Replication**: Cross-region replication capabilities
- **Notification System**: Real-time event notifications
- **Metrics & Monitoring**: Comprehensive performance metrics
## 🏗️ Architecture

### Storage Layout

```
ECStore Architecture:
┌─────────────────────────────────────────────────────────────┐
│                     Storage API Layer                       │
├─────────────────────────────────────────────────────────────┤
│  Bucket Management │ Object Operations │ Metadata Mgmt      │
├─────────────────────────────────────────────────────────────┤
│                   Erasure Coding Engine                     │
├─────────────────────────────────────────────────────────────┤
│  Disk Management   │ Healing System    │ Cache              │
├─────────────────────────────────────────────────────────────┤
│                 Physical Storage Devices                    │
└─────────────────────────────────────────────────────────────┘
```

### Erasure Coding Schemes

| Configuration | Data Drives | Parity Drives | Fault Tolerance | Storage Efficiency |
|---------------|-------------|---------------|-----------------|--------------------|
| 4+2 | 4 | 2 | 2 disk failures | 66.7% |
| 8+4 | 8 | 4 | 4 disk failures | 66.7% |
| 16+4 | 16 | 4 | 4 disk failures | 80% |
| Custom | N | K | K disk failures | N/(N+K) |
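The storage-efficiency column above is just the ratio N / (N + K) of data drives to total drives, and fault tolerance equals the parity count K. A quick check of the listed configurations:

```rust
// Storage efficiency of an N+K Reed-Solomon layout: data / (data + parity).
fn efficiency(data: u32, parity: u32) -> f64 {
    data as f64 / (data + parity) as f64
}

fn main() {
    // 4+2 and 8+4 both give 66.7%; 16+4 gives 80%.
    assert!((efficiency(4, 2) - 2.0 / 3.0).abs() < 1e-9);
    assert!((efficiency(8, 4) - 2.0 / 3.0).abs() < 1e-9);
    assert!((efficiency(16, 4) - 0.8).abs() < 1e-9);
}
```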
## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
rustfs-ecstore = "0.1.0"
```
## 🔧 Usage

### Basic Storage Operations

```rust
use rustfs_ecstore::{StorageAPI, new_object_layer_fn};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize storage layer
    let storage = new_object_layer_fn("/path/to/storage").await?;

    // Create a bucket
    storage.make_bucket("my-bucket", None).await?;

    // Put an object
    let data = b"Hello, RustFS!";
    storage.put_object("my-bucket", "hello.txt", data.to_vec()).await?;

    // Get an object
    let retrieved = storage.get_object("my-bucket", "hello.txt", None).await?;
    println!("Retrieved: {}", String::from_utf8_lossy(&retrieved.data));

    Ok(())
}
```
### Advanced Configuration

```rust
use rustfs_ecstore::{StorageAPI, config::Config};

async fn setup_storage_with_config() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config {
        erasure_sets: vec![
            // 8+4 configuration for high durability
            ErasureSet::new(8, 4, vec![
                "/disk1", "/disk2", "/disk3", "/disk4",
                "/disk5", "/disk6", "/disk7", "/disk8",
                "/disk9", "/disk10", "/disk11", "/disk12",
            ]),
        ],
        healing_enabled: true,
        compression_enabled: true,
        ..Default::default()
    };

    let storage = new_object_layer_fn("/path/to/storage")
        .with_config(config)
        .await?;

    Ok(())
}
```
### Bucket Management

```rust
use rustfs_ecstore::{StorageAPI, bucket::BucketInfo};
use std::sync::Arc;

async fn bucket_operations(storage: Arc<dyn StorageAPI>) -> Result<(), Box<dyn std::error::Error>> {
    // Create bucket with specific configuration
    let bucket_info = BucketInfo {
        name: "enterprise-bucket".to_string(),
        versioning_enabled: true,
        lifecycle_config: Some(lifecycle_config()),
        ..Default::default()
    };

    storage.make_bucket_with_config(bucket_info).await?;

    // List buckets
    let buckets = storage.list_buckets().await?;
    for bucket in buckets {
        println!("Bucket: {}, Created: {}", bucket.name, bucket.created);
    }

    // Set bucket policy
    storage.set_bucket_policy("enterprise-bucket", policy_json).await?;

    Ok(())
}
```

### Healing and Maintenance

```rust
use rustfs_ecstore::{heal::HealingManager, StorageAPI};
use std::sync::Arc;

async fn healing_operations(storage: Arc<dyn StorageAPI>) -> Result<(), Box<dyn std::error::Error>> {
    // Check storage health
    let health = storage.storage_info().await?;
    println!("Storage Health: {:?}", health);

    // Trigger healing for specific bucket
    let healing_result = storage.heal_bucket("my-bucket").await?;
    println!("Healing completed: {:?}", healing_result);

    // Background healing status
    let healing_status = storage.healing_status().await?;
    println!("Background healing: {:?}", healing_status);

    Ok(())
}
```

## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Run tests with specific features
cargo test --features "compression,healing"

# Run benchmarks
cargo bench

# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark

# Run comparison benchmarks
cargo bench --bench comparison_benchmark
```

## 📊 Performance Benchmarks

ECStore is designed for high-performance storage operations:

### Throughput Performance

- **Sequential Write**: Up to 10GB/s on NVMe storage
- **Sequential Read**: Up to 12GB/s with parallel reads
- **Random I/O**: 100K+ IOPS for small objects
- **Erasure Coding**: 5GB/s encoding/decoding throughput

### Scalability Metrics

- **Storage Capacity**: Exabyte-scale deployments
- **Concurrent Operations**: 10,000+ concurrent requests
- **Disk Scaling**: Support for 1000+ disks per node
- **Fault Tolerance**: Up to 50% disk failure resilience

## 🔧 Configuration

### Storage Configuration

```toml
[storage]
# Erasure coding configuration
erasure_set_size = 12    # Total disks per set
data_drives = 8          # Data drives per set
parity_drives = 4        # Parity drives per set

# Performance tuning
read_quorum = 6          # Minimum disks for read
write_quorum = 7         # Minimum disks for write
parallel_reads = true    # Enable parallel reads
compression = true       # Enable compression

# Healing configuration
healing_enabled = true
healing_interval = "24h"
bitrot_check_interval = "168h"   # Weekly bitrot check
```

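The values in this file must respect a few structural invariants: the data and parity drives together fill the erasure set, and the write quorum is at least the read quorum and no larger than the set. A hedged sketch of such a check, using a hypothetical `ErasureConfig` struct that mirrors the TOML keys (not the crate's real config type):

```rust
/// Hypothetical mirror of the [storage] TOML keys, for validation only.
struct ErasureConfig {
    erasure_set_size: u32,
    data_drives: u32,
    parity_drives: u32,
    read_quorum: u32,
    write_quorum: u32,
}

/// Check the structural invariants the configuration values must satisfy.
fn validate(cfg: &ErasureConfig) -> Result<(), String> {
    if cfg.data_drives + cfg.parity_drives != cfg.erasure_set_size {
        return Err("data_drives + parity_drives must equal erasure_set_size".to_string());
    }
    if cfg.write_quorum < cfg.read_quorum {
        return Err("write_quorum must be at least read_quorum".to_string());
    }
    if cfg.write_quorum > cfg.erasure_set_size {
        return Err("write_quorum cannot exceed erasure_set_size".to_string());
    }
    Ok(())
}
```

The sample configuration above (12 = 8 + 4, read quorum 6, write quorum 7) passes these checks.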
### Advanced Features

```rust
use rustfs_ecstore::config::{ChecksumAlgorithm, StorageConfig};

let config = StorageConfig {
    // Enable advanced features
    bitrot_protection: true,
    automatic_healing: true,
    compression_level: 6,
    checksum_algorithm: ChecksumAlgorithm::XXHash64,

    // Performance tuning
    read_buffer_size: 1024 * 1024,        // 1MB read buffer
    write_buffer_size: 4 * 1024 * 1024,   // 4MB write buffer
    concurrent_operations: 1000,

    // Storage optimization
    small_object_threshold: 128 * 1024,         // 128KB
    large_object_threshold: 64 * 1024 * 1024,   // 64MB

    ..Default::default()
};
```

## 🤝 Integration with RustFS

ECStore integrates seamlessly with other RustFS components:

- **API Server**: Provides S3-compatible storage operations
- **IAM Module**: Handles authentication and authorization
- **Policy Engine**: Implements bucket policies and access controls
- **Notification System**: Publishes storage events
- **Monitoring**: Provides detailed metrics and health status

## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Storage**: Local disks, network storage, cloud storage
- **Memory**: Minimum 4GB RAM (8GB+ recommended)
- **Network**: High-speed network for distributed deployments

## 🚀 Performance Tuning

### Optimization Tips

1. **Disk Configuration**:
   - Use dedicated disks for each erasure set
   - Prefer NVMe over SATA for better performance
   - Ensure consistent disk sizes within erasure sets

2. **Memory Settings**:
   - Allocate sufficient memory for caching
   - Tune read/write buffer sizes based on workload
   - Enable memory-mapped files for large objects

3. **Network Optimization**:
   - Use high-speed network connections
   - Configure proper MTU sizes
   - Enable network compression for WAN scenarios

4. **CPU Optimization**:
   - Utilize SIMD instructions for erasure coding
   - Balance CPU cores across erasure sets
   - Enable hardware-accelerated checksums

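The SIMD advice in point 4 pairs with a simple buffer-sizing rule: keep block sizes at multiples of 64 bytes. A small helper that applies that rule mechanically (illustrative, not part of the crate's API):

```rust
/// Round a buffer size up to the next multiple of 64 bytes, matching the
/// SIMD-friendly alignment recommended for erasure coding blocks.
fn align_block_size(size: usize) -> usize {
    (size + 63) / 64 * 64
}
```

For instance, a requested 1000-byte block would be padded to 1024 bytes before encoding.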
## 🐛 Troubleshooting

### Common Issues

1. **Disk Failures**:
   - Check disk health using `storage_info()`
   - Trigger healing with `heal_bucket()`
   - Replace failed disks and re-add to cluster

2. **Performance Issues**:
   - Monitor disk I/O utilization
   - Check network bandwidth usage
   - Verify erasure coding configuration

3. **Data Integrity**:
   - Run bitrot detection scans
   - Verify checksums for critical data
   - Check healing system status

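At its core, the bitrot detection mentioned in issue 3 recomputes a checksum over each shard and compares it with the value recorded at write time. A self-contained sketch using FNV-1a as a stand-in hash (ECStore's real scans use the configured `checksum_algorithm`, e.g. XXHash64):

```rust
/// FNV-1a 64-bit hash; a stand-in for the configured checksum algorithm.
fn fnv1a_64(data: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325;
    for &byte in data {
        hash ^= byte as u64;
        hash = hash.wrapping_mul(0x0000_0100_0000_01b3);
    }
    hash
}

/// A shard is considered rotted when its recomputed checksum no longer
/// matches the checksum stored alongside it.
fn bitrot_detected(stored_checksum: u64, shard: &[u8]) -> bool {
    fnv1a_64(shard) != stored_checksum
}
```

When rot is detected, the missing shard is reconstructed from the remaining data and parity shards via the erasure decoder.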
## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Crypto](../crypto) - Cryptographic operations
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Policy](../policy) - Policy engine
- [RustFS FileMeta](../filemeta) - File metadata management

Key capabilities provided by this module:

- Reed-Solomon erasure coding implementation
- Configurable redundancy levels (N+K schemes)
- Automatic data healing and reconstruction
- Multi-drive support with intelligent placement
- Parallel encoding/decoding for performance
- Efficient disk space utilization

## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [Storage API Reference](https://docs.rustfs.com/ecstore/)
- [Erasure Coding Guide](https://docs.rustfs.com/erasure-coding/)
- [Performance Tuning](https://docs.rustfs.com/performance/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:

- Storage engine architecture and design patterns
- Erasure coding implementation guidelines
- Performance optimization techniques
- Testing procedures for storage operations
- Documentation standards for storage APIs

### Development Setup

```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs

# Navigate to ECStore module
cd crates/ecstore

# Install dependencies
cargo build

# Run tests
cargo test

# Run benchmarks
cargo bench

# Format code
cargo fmt

# Run linter
cargo clippy
```

## 💬 Getting Help

- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Storage Support**: <storage-support@rustfs.com>

## 📞 Contact

- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)

## 👥 Contributors

This module is maintained by the RustFS storage team and community contributors. Special thanks to all who have contributed to making RustFS storage reliable and efficient.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

Copyright 2024 RustFS Team

@@ -1,103 +1,19 @@
 # ECStore - Erasure Coding Storage

-ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD
-implementation for optimal performance.
+ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD implementation for optimal performance.

-## Reed-Solomon Implementation
+## Features

-### SIMD Backend (Only)
+- **Reed-Solomon Implementation**: High-performance SIMD-optimized erasure coding
+- **Cross-Platform Compatibility**: Support for x86_64, aarch64, and other architectures
+- **Performance Optimized**: SIMD instructions for maximum throughput
+- **Thread Safety**: Safe concurrent access with caching optimizations
+- **Scalable**: Excellent performance for high-throughput scenarios

-- **Performance**: Uses SIMD optimization for high-performance encoding/decoding
-- **Compatibility**: Works with any shard size through SIMD implementation
-- **Reliability**: High-performance SIMD implementation for large data processing
-- **Use case**: Optimized for maximum performance in large data processing scenarios
+## Documentation

-### Usage Example
+For complete documentation, examples, and usage information, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

-```rust
-use rustfs_ecstore::erasure_coding::Erasure;
+## License

-// Create erasure coding instance
-// 4 data shards, 2 parity shards, 1KB block size
-let erasure = Erasure::new(4, 2, 1024);
-
-// Encode data
-let data = b"hello world from rustfs erasure coding";
-let shards = erasure.encode_data(data)?;
-
-// Simulate loss of one shard
-let mut shards_opt: Vec<Option<Vec<u8>>> = shards
-    .iter()
-    .map(|b| Some(b.to_vec()))
-    .collect();
-shards_opt[2] = None; // Lose shard 2
-
-// Reconstruct missing data
-erasure.decode_data(&mut shards_opt)?;
-
-// Recover original data
-let mut recovered = Vec::new();
-for shard in shards_opt.iter().take(4) { // Only data shards
-    recovered.extend_from_slice(shard.as_ref().unwrap());
-}
-recovered.truncate(data.len());
-assert_eq!(&recovered, data);
-```
-
-## Performance Considerations
-
-### SIMD Implementation Benefits
-
-- **High Throughput**: Optimized for large block sizes (>= 1KB recommended)
-- **CPU Optimization**: Leverages modern CPU SIMD instructions
-- **Scalability**: Excellent performance for high-throughput scenarios
-
-### Implementation Details
-
-#### `reed-solomon-simd`
-
-- **Instance Caching**: Encoder/decoder instances are cached and reused for optimal performance
-- **Thread Safety**: Thread-safe with RwLock-based caching
-- **SIMD Optimization**: Leverages CPU SIMD instructions for maximum performance
-- **Reset Capability**: Cached instances are reset for different parameters, avoiding unnecessary allocations
-
-### Performance Tips
-
-1. **Batch Operations**: When possible, batch multiple small operations into larger blocks
-2. **Block Size Optimization**: Use block sizes that are multiples of 64 bytes for optimal SIMD performance
-3. **Memory Allocation**: Pre-allocate buffers when processing multiple blocks
-4. **Cache Warming**: Initial operations may be slower due to cache setup, subsequent operations benefit from caching
-
-## Cross-Platform Compatibility
-
-The SIMD implementation supports:
-
-- x86_64 with advanced SIMD instructions (AVX2, SSE)
-- aarch64 (ARM64) with NEON SIMD optimizations
-- Other architectures with fallback implementations
-
-The implementation automatically selects the best available SIMD instructions for the target platform, providing optimal
-performance across different architectures.
-
-## Testing and Benchmarking
-
-Run performance benchmarks:
-
-```bash
-# Run erasure coding benchmarks
-cargo bench --bench erasure_benchmark
-
-# Run comparison benchmarks
-cargo bench --bench comparison_benchmark
-
-# Generate benchmark reports
-./run_benchmarks.sh
-```
-
-## Error Handling
-
-All operations return `Result` types with comprehensive error information:
-
-- Encoding errors: Invalid parameters, insufficient memory
-- Decoding errors: Too many missing shards, corrupted data
-- Configuration errors: Invalid shard counts, unsupported parameters
+This project is licensed under the Apache License, Version 2.0.

@@ -43,15 +43,16 @@ pub async fn create_bitrot_reader(
 ) -> disk::error::Result<Option<BitrotReader<Box<dyn AsyncRead + Send + Sync + Unpin>>>> {
     // Calculate the total length to read, including the checksum overhead
     let length = length.div_ceil(shard_size) * checksum_algo.size() + length;
+    let offset = offset.div_ceil(shard_size) * checksum_algo.size() + offset;
     if let Some(data) = inline_data {
         // Use inline data
-        let rd = Cursor::new(data.to_vec());
+        let mut rd = Cursor::new(data.to_vec());
+        rd.set_position(offset as u64);
         let reader = BitrotReader::new(Box::new(rd) as Box<dyn AsyncRead + Send + Sync + Unpin>, shard_size, checksum_algo);
         Ok(Some(reader))
     } else if let Some(disk) = disk {
         // Read from disk
-        match disk.read_file_stream(bucket, path, offset, length).await {
+        match disk.read_file_stream(bucket, path, offset, length - offset).await {
             Ok(rd) => {
                 let reader = BitrotReader::new(rd, shard_size, checksum_algo);
                 Ok(Some(reader))

@@ -18,7 +18,7 @@ use futures::future::join_all;
 use rustfs_filemeta::{MetaCacheEntries, MetaCacheEntry, MetacacheReader, is_io_eof};
 use std::{future::Future, pin::Pin, sync::Arc};
 use tokio::{spawn, sync::broadcast::Receiver as B_Receiver};
-use tracing::error;
+use tracing::{error, warn};

 pub type AgreedFn = Box<dyn Fn(MetaCacheEntry) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + 'static>;
 pub type PartialFn =

@@ -118,10 +118,14 @@ pub async fn list_path_raw(mut rx: B_Receiver<bool>, opts: ListPathRawOptions) -
                 if let Some(disk) = d.clone() {
                     disk
                 } else {
+                    warn!("list_path_raw: fallback disk is none");
                     break;
                 }
             }
-            None => break,
+            None => {
+                warn!("list_path_raw: fallback disk is none2");
+                break;
+            }
         };
         match disk
             .as_ref()

@@ -288,6 +288,12 @@ impl From<rmp_serde::encode::Error> for DiskError {
     }
 }

+impl From<rmp_serde::decode::Error> for DiskError {
+    fn from(e: rmp_serde::decode::Error) -> Self {
+        DiskError::other(e)
+    }
+}
+
 impl From<rmp::encode::ValueWriteError> for DiskError {
     fn from(e: rmp::encode::ValueWriteError) -> Self {
         DiskError::other(e)

@@ -57,8 +57,8 @@ use bytes::Bytes;
 use path_absolutize::Absolutize;
 use rustfs_common::defer;
 use rustfs_filemeta::{
-    Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, Opts, RawFileInfo, UpdateFn, get_file_info,
-    read_xl_meta_no_data,
+    Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, ObjectPartInfo, Opts, RawFileInfo, UpdateFn,
+    get_file_info, read_xl_meta_no_data,
 };
 use rustfs_utils::HashAlgorithm;
 use rustfs_utils::os::get_info;

@@ -145,8 +145,8 @@ impl Debug for LocalDisk {
 impl LocalDisk {
     pub async fn new(ep: &Endpoint, cleanup: bool) -> Result<Self> {
         debug!("Creating local disk");
-        let root = match fs::canonicalize(ep.get_file_path()).await {
-            Ok(path) => path,
+        let root = match PathBuf::from(ep.get_file_path()).absolutize() {
+            Ok(path) => path.into_owned(),
             Err(e) => {
                 if e.kind() == ErrorKind::NotFound {
                     return Err(DiskError::VolumeNotFound);

@@ -1312,6 +1312,67 @@ impl DiskAPI for LocalDisk {
         Ok(resp)
     }

+    #[tracing::instrument(skip(self))]
+    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
+        let volume_dir = self.get_bucket_path(bucket)?;
+
+        let mut ret = vec![ObjectPartInfo::default(); paths.len()];
+
+        for (i, path_str) in paths.iter().enumerate() {
+            let path = Path::new(path_str);
+            let file_name = path.file_name().and_then(|v| v.to_str()).unwrap_or_default();
+            let num = file_name
+                .strip_prefix("part.")
+                .and_then(|v| v.strip_suffix(".meta"))
+                .and_then(|v| v.parse::<usize>().ok())
+                .unwrap_or_default();
+
+            if let Err(err) = access(
+                volume_dir
+                    .clone()
+                    .join(path.parent().unwrap_or(Path::new("")).join(format!("part.{num}"))),
+            )
+            .await
+            {
+                ret[i] = ObjectPartInfo {
+                    number: num,
+                    error: Some(err.to_string()),
+                    ..Default::default()
+                };
+                continue;
+            }
+
+            let data = match self
+                .read_all_data(bucket, volume_dir.clone(), volume_dir.clone().join(path))
+                .await
+            {
+                Ok(data) => data,
+                Err(err) => {
+                    ret[i] = ObjectPartInfo {
+                        number: num,
+                        error: Some(err.to_string()),
+                        ..Default::default()
+                    };
+                    continue;
+                }
+            };
+
+            match ObjectPartInfo::unmarshal(&data) {
+                Ok(meta) => {
+                    ret[i] = meta;
+                }
+                Err(err) => {
+                    ret[i] = ObjectPartInfo {
+                        number: num,
+                        error: Some(err.to_string()),
+                        ..Default::default()
+                    };
+                }
+            };
+        }
+
+        Ok(ret)
+    }
+
     #[tracing::instrument(skip(self))]
     async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
         let volume_dir = self.get_bucket_path(volume)?;

@@ -1550,11 +1611,6 @@ impl DiskAPI for LocalDisk {

     #[tracing::instrument(level = "debug", skip(self))]
     async fn read_file_stream(&self, volume: &str, path: &str, offset: usize, length: usize) -> Result<FileReader> {
-        // warn!(
-        //     "disk read_file_stream: volume: {}, path: {}, offset: {}, length: {}",
-        //     volume, path, offset, length
-        // );
-
         let volume_dir = self.get_bucket_path(volume)?;
         if !skip_access_checks(volume) {
             access(&volume_dir)

@@ -41,7 +41,7 @@ use endpoint::Endpoint;
 use error::DiskError;
 use error::{Error, Result};
 use local::LocalDisk;
-use rustfs_filemeta::{FileInfo, RawFileInfo};
+use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
 use rustfs_madmin::info_commands::DiskMetrics;
 use serde::{Deserialize, Serialize};
 use std::{fmt::Debug, path::PathBuf, sync::Arc};

@@ -331,6 +331,14 @@ impl DiskAPI for Disk {
         }
     }

+    #[tracing::instrument(skip(self))]
+    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
+        match self {
+            Disk::Local(local_disk) => local_disk.read_parts(bucket, paths).await,
+            Disk::Remote(remote_disk) => remote_disk.read_parts(bucket, paths).await,
+        }
+    }
+
     #[tracing::instrument(skip(self))]
     async fn rename_part(&self, src_volume: &str, src_path: &str, dst_volume: &str, dst_path: &str, meta: Bytes) -> Result<()> {
         match self {

@@ -513,7 +521,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
     // CheckParts
     async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp>;
-    // StatInfoFile
+    // ReadParts
+    async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>>;
     async fn read_multiple(&self, req: ReadMultipleReq) -> Result<Vec<ReadMultipleResp>>;
     // CleanAbandonedData
     async fn write_all(&self, volume: &str, path: &str, data: Bytes) -> Result<()>;

@@ -30,6 +30,7 @@ use std::{
     time::SystemTime,
 };
 use tokio::sync::{OnceCell, RwLock};
+use tokio_util::sync::CancellationToken;
 use uuid::Uuid;

 pub const DISK_ASSUME_UNKNOWN_SIZE: u64 = 1 << 30;

@@ -62,7 +63,12 @@ static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
     pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
     pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
     pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
-    pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();}
+    pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
+    pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
+}
+
+// Global cancellation token for background services (data scanner and auto heal)
+static GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

 static GLOBAL_ACTIVE_CRED: OnceLock<Credentials> = OnceLock::new();

@@ -182,3 +188,35 @@ pub async fn update_erasure_type(setup_type: SetupType) {
 // }

 type TypeLocalDiskSetDrives = Vec<Vec<Vec<Option<DiskStore>>>>;
+
+pub fn set_global_region(region: String) {
+    GLOBAL_REGION.set(region).unwrap();
+}
+
+pub fn get_global_region() -> Option<String> {
+    GLOBAL_REGION.get().cloned()
+}
+
+/// Initialize the global background services cancellation token
+pub fn init_background_services_cancel_token(cancel_token: CancellationToken) -> Result<(), CancellationToken> {
+    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.set(cancel_token)
+}
+
+/// Get the global background services cancellation token
+pub fn get_background_services_cancel_token() -> Option<&'static CancellationToken> {
+    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get()
+}
+
+/// Create and initialize the global background services cancellation token
+pub fn create_background_services_cancel_token() -> CancellationToken {
+    let cancel_token = CancellationToken::new();
+    init_background_services_cancel_token(cancel_token.clone()).expect("Background services cancel token already initialized");
+    cancel_token
+}
+
+/// Shutdown all background services gracefully
+pub fn shutdown_background_services() {
+    if let Some(cancel_token) = GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get() {
+        cancel_token.cancel();
+    }
+}

@@ -24,6 +24,7 @@ use tokio::{
     },
     time::interval,
 };
+use tokio_util::sync::CancellationToken;
 use tracing::{error, info};
 use uuid::Uuid;

@@ -32,7 +33,7 @@ use super::{
     heal_ops::{HealSequence, new_bg_heal_sequence},
 };
 use crate::error::{Error, Result};
-use crate::global::GLOBAL_MRFState;
+use crate::global::{GLOBAL_MRFState, get_background_services_cancel_token};
 use crate::heal::error::ERR_RETRY_HEALING;
 use crate::heal::heal_commands::{HEAL_ITEM_BUCKET, HealScanMode};
 use crate::heal::heal_ops::{BG_HEALING_UUID, HealSource};

@@ -54,6 +55,13 @@ use crate::{
 pub static DEFAULT_MONITOR_NEW_DISK_INTERVAL: Duration = Duration::from_secs(10);

 pub async fn init_auto_heal() {
+    info!("Initializing auto heal background task");
+
+    let Some(cancel_token) = get_background_services_cancel_token() else {
+        error!("Background services cancel token not initialized");
+        return;
+    };
+
     init_background_healing().await;
     let v = env::var("_RUSTFS_AUTO_DRIVE_HEALING").unwrap_or("on".to_string());
     if v == "on" {

@@ -61,12 +69,16 @@ pub async fn init_auto_heal() {
         GLOBAL_BackgroundHealState
             .push_heal_local_disks(&get_local_disks_to_heal().await)
             .await;
-        spawn(async {
-            monitor_local_disks_and_heal().await;
+
+        let cancel_clone = cancel_token.clone();
+        spawn(async move {
+            monitor_local_disks_and_heal(cancel_clone).await;
         });
     }
-    spawn(async {
-        GLOBAL_MRFState.heal_routine().await;
+
+    let cancel_clone = cancel_token.clone();
+    spawn(async move {
+        GLOBAL_MRFState.heal_routine_with_cancel(cancel_clone).await;
     });
 }

@@ -108,50 +120,66 @@ pub async fn get_local_disks_to_heal() -> Vec<Endpoint> {
     disks_to_heal
 }

-async fn monitor_local_disks_and_heal() {
+async fn monitor_local_disks_and_heal(cancel_token: CancellationToken) {
     info!("Auto heal monitor started");
     let mut interval = interval(DEFAULT_MONITOR_NEW_DISK_INTERVAL);

     loop {
-        interval.tick().await;
-        let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
-        if heal_disks.is_empty() {
-            info!("heal local disks is empty");
-            interval.reset();
-            continue;
-        }
-
-        info!("heal local disks: {:?}", heal_disks);
-
-        let store = new_object_layer_fn().expect("errServerNotInitialized");
-        if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
-            error!("heal local disk format error: {}", err);
-            if err == Error::NoHealRequired {
-            } else {
-                info!("heal format err: {}", err.to_string());
-                interval.reset();
-                continue;
-            }
-        }
-
-        let mut futures = Vec::new();
-        for disk in heal_disks.into_ref().iter() {
-            let disk_clone = disk.clone();
-            futures.push(async move {
-                GLOBAL_BackgroundHealState
-                    .set_disk_healing_status(disk_clone.clone(), true)
-                    .await;
-                if heal_fresh_disk(&disk_clone).await.is_err() {
-                    info!("heal_fresh_disk is err");
-                    GLOBAL_BackgroundHealState
-                        .set_disk_healing_status(disk_clone.clone(), false)
-                        .await;
-                    return;
-                }
-                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
-            });
-        }
-        let _ = join_all(futures).await;
-        interval.reset();
+        tokio::select! {
+            _ = cancel_token.cancelled() => {
+                info!("Auto heal monitor received shutdown signal, exiting gracefully");
+                break;
+            }
+            _ = interval.tick() => {
+                let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
+                if heal_disks.is_empty() {
+                    info!("heal local disks is empty");
+                    interval.reset();
+                    continue;
+                }
+
+                info!("heal local disks: {:?}", heal_disks);
+
+                let store = new_object_layer_fn().expect("errServerNotInitialized");
+                if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
+                    error!("heal local disk format error: {}", err);
+                    if err == Error::NoHealRequired {
+                    } else {
+                        info!("heal format err: {}", err.to_string());
+                        interval.reset();
+                        continue;
+                    }
+                }
+
+                let mut futures = Vec::new();
+                for disk in heal_disks.into_ref().iter() {
+                    let disk_clone = disk.clone();
+                    let cancel_clone = cancel_token.clone();
+                    futures.push(async move {
+                        let disk_for_cancel = disk_clone.clone();
+                        tokio::select! {
+                            _ = cancel_clone.cancelled() => {
+                                info!("Disk healing task cancelled for disk: {}", disk_for_cancel);
+                            }
+                            _ = async {
+                                GLOBAL_BackgroundHealState
+                                    .set_disk_healing_status(disk_clone.clone(), true)
+                                    .await;
+                                if heal_fresh_disk(&disk_clone).await.is_err() {
+                                    info!("heal_fresh_disk is err");
+                                    GLOBAL_BackgroundHealState
+                                        .set_disk_healing_status(disk_clone.clone(), false)
+                                        .await;
+                                }
+                                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
+                            } => {}
+                        }
+                    });
+                }
+                let _ = join_all(futures).await;
+                interval.reset();
+                continue;
+            }
+        }
     }
 }

@@ -20,14 +20,13 @@ use std::{
     path::{Path, PathBuf},
     pin::Pin,
     sync::{
-        Arc, OnceLock,
+        Arc,
         atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering},
     },
     time::{Duration, SystemTime},
 };

 use time::{self, OffsetDateTime};
-use tokio_util::sync::CancellationToken;

 use super::{
     data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},

@@ -51,7 +50,7 @@ use crate::{
         metadata_sys,
     },
     event_notification::{EventArgs, send_event},
-    global::GLOBAL_LocalNodeName,
+    global::{GLOBAL_LocalNodeName, get_background_services_cancel_token},
     store_api::{ObjectOptions, ObjectToDelete, StorageAPI},
 };
 use crate::{

@@ -128,8 +127,6 @@ lazy_static! {
     pub static ref globalHealConfig: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::default()));
 }

-static GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
-
 struct DynamicSleeper {
     factor: f64,
     max_sleep: Duration,

@@ -198,21 +195,18 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> Dyn
 /// - Minimum sleep duration to avoid excessive CPU usage
 /// - Proper error handling and logging
 ///
-/// # Returns
-/// A CancellationToken that can be used to gracefully shutdown the scanner
-///
 /// # Architecture
 /// 1. Initialize with random seed for sleep intervals
 /// 2. Run scanner cycles in a loop
 /// 3. Use randomized sleep between cycles to avoid thundering herd
 /// 4. Ensure minimum sleep duration to prevent CPU thrashing
-pub async fn init_data_scanner() -> CancellationToken {
+pub async fn init_data_scanner() {
     info!("Initializing data scanner background task");

-    let cancel_token = CancellationToken::new();
-    GLOBAL_SCANNER_CANCEL_TOKEN
-        .set(cancel_token.clone())
-        .expect("Scanner already initialized");
+    let Some(cancel_token) = get_background_services_cancel_token() else {
+        error!("Background services cancel token not initialized");
+        return;
+    };

     let cancel_clone = cancel_token.clone();
     tokio::spawn(async move {

@@ -256,8 +250,6 @@ pub async fn init_data_scanner() -> CancellationToken {
|
||||
|
||||
info!("Data scanner background task stopped gracefully");
|
||||
});
|
||||
|
||||
cancel_token
|
||||
}
|
||||
|
||||
/// Run a single data scanner cycle
|
||||
@@ -282,7 +274,7 @@ async fn run_data_scanner_cycle() {
|
||||
};
|
||||
|
||||
// Check for cancellation before starting expensive operations
|
||||
if let Some(token) = GLOBAL_SCANNER_CANCEL_TOKEN.get() {
|
||||
if let Some(token) = get_background_services_cancel_token() {
|
||||
if token.is_cancelled() {
|
||||
debug!("Scanner cancelled before starting cycle");
|
||||
return;
|
||||
@@ -397,9 +389,8 @@ async fn execute_namespace_scan(
|
||||
cycle: u64,
|
||||
scan_mode: HealScanMode,
|
||||
) -> Result<()> {
|
||||
let cancel_token = GLOBAL_SCANNER_CANCEL_TOKEN
|
||||
.get()
|
||||
.ok_or_else(|| Error::other("Scanner not initialized"))?;
|
||||
let cancel_token =
|
||||
get_background_services_cancel_token().ok_or_else(|| Error::other("Background services not initialized"))?;
|
||||
|
||||
tokio::select! {
|
||||
result = store.ns_scanner(tx, cycle as usize, scan_mode) => {
|
||||
|
||||
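The hunks above retire the module-local `GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken>` in favor of the shared `get_background_services_cancel_token()`. A minimal sketch of the `OnceLock` registry pattern being replaced, using a hypothetical `CancelFlag` built on `AtomicBool` instead of tokio_util's `CancellationToken` (not the project's actual types):

```rust
use std::sync::{
    OnceLock,
    atomic::{AtomicBool, Ordering},
};

/// Hypothetical stand-in for a cancellation token: a shared flag that can
/// be set once and observed from any task.
#[derive(Default)]
struct CancelFlag(AtomicBool);

impl CancelFlag {
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

/// Process-wide registry: the first caller installs the flag, later
/// callers observe the same instance (this mirrors why the old code
/// needed `.set(...).expect("Scanner already initialized")`).
static GLOBAL_CANCEL: OnceLock<CancelFlag> = OnceLock::new();

fn get_cancel_flag() -> &'static CancelFlag {
    GLOBAL_CANCEL.get_or_init(CancelFlag::default)
}
```

Centralizing this into one background-services token, as the diff does, avoids one `OnceLock` per subsystem and lets a single shutdown call cancel every background loop.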
@@ -25,7 +25,8 @@ use std::time::Duration;
use tokio::sync::RwLock;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio::time::sleep;
use tracing::error;
use tokio_util::sync::CancellationToken;
use tracing::{error, info};
use uuid::Uuid;

pub const MRF_OPS_QUEUE_SIZE: u64 = 100000;
@@ -87,56 +88,96 @@ impl MRFState {
let _ = self.tx.send(op).await;
}

pub async fn heal_routine(&self) {
/// Enhanced heal routine with cancellation support
///
/// This method implements the same healing logic as the original heal_routine,
/// but adds proper cancellation support via CancellationToken.
/// The core logic remains identical to maintain compatibility.
pub async fn heal_routine_with_cancel(&self, cancel_token: CancellationToken) {
info!("MRF heal routine started with cancellation support");

loop {
// rx used only there,
if let Some(op) = self.rx.write().await.recv().await {
if op.bucket == RUSTFS_META_BUCKET {
for pattern in &*PATTERNS {
if pattern.is_match(&op.object) {
return;
tokio::select! {
_ = cancel_token.cancelled() => {
info!("MRF heal routine received shutdown signal, exiting gracefully");
break;
}
op_result = async {
let mut rx_guard = self.rx.write().await;
rx_guard.recv().await
} => {
if let Some(op) = op_result {
// Special path filtering (original logic)
if op.bucket == RUSTFS_META_BUCKET {
for pattern in &*PATTERNS {
if pattern.is_match(&op.object) {
continue; // Skip this operation, continue with next
}
}
}
}
}

let now = Utc::now();
if now.sub(op.queued).num_seconds() < 1 {
sleep(Duration::from_secs(1)).await;
}
// Network reconnection delay (original logic)
let now = Utc::now();
if now.sub(op.queued).num_seconds() < 1 {
tokio::select! {
_ = cancel_token.cancelled() => {
info!("MRF heal routine cancelled during reconnection delay");
break;
}
_ = sleep(Duration::from_secs(1)) => {}
}
}

let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
if op.object.is_empty() {
if let Err(err) = heal_bucket(&op.bucket).await {
error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
}
} else if op.versions.is_empty() {
if let Err(err) =
heal_object(&op.bucket, &op.object, &op.version_id.clone().unwrap_or_default(), scan_mode).await
{
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
} else {
let vers = op.versions.len() / 16;
if vers > 0 {
for i in 0..vers {
let start = i * 16;
let end = start + 16;
// Core healing logic (original logic preserved)
let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };

if op.object.is_empty() {
// Heal bucket (original logic)
if let Err(err) = heal_bucket(&op.bucket).await {
error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
}
} else if op.versions.is_empty() {
// Heal single object (original logic)
if let Err(err) = heal_object(
&op.bucket,
&op.object,
&Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
scan_mode,
)
.await
{
&op.version_id.clone().unwrap_or_default(),
scan_mode
).await {
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
} else {
// Heal multiple versions (original logic)
let vers = op.versions.len() / 16;
if vers > 0 {
for i in 0..vers {
// Check for cancellation before each version
if cancel_token.is_cancelled() {
info!("MRF heal routine cancelled during version processing");
return;
}

let start = i * 16;
let end = start + 16;
if let Err(err) = heal_object(
&op.bucket,
&op.object,
&Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
scan_mode,
).await {
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
}
}
}
} else {
info!("MRF heal routine channel closed, exiting");
break;
}
}
} else {
return;
}
}

info!("MRF heal routine stopped gracefully");
}
}

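In the heal routine above, `op.versions` is a flat byte buffer holding 16-byte version ids: the loop computes `vers = op.versions.len() / 16` and slices `[start..end]` into `Uuid::from_slice`. A sketch of the same walk using `chunks_exact`, which makes the fixed stride explicit and silently drops a trailing partial id (a hypothetical helper, not the project's code):

```rust
/// Split a packed buffer of 16-byte version ids into fixed-size chunks,
/// mirroring the `vers = len / 16` + `[i*16..i*16+16]` arithmetic above.
fn version_id_chunks(versions: &[u8]) -> Vec<[u8; 16]> {
    versions
        .chunks_exact(16) // ignores any trailing remainder < 16 bytes
        .map(|c| {
            let mut id = [0u8; 16];
            id.copy_from_slice(c);
            id
        })
        .collect()
}
```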
@@ -22,8 +22,8 @@ use rustfs_protos::{
proto_gen::node_service::{
CheckPartsRequest, DeletePathsRequest, DeleteRequest, DeleteVersionRequest, DeleteVersionsRequest, DeleteVolumeRequest,
DiskInfoRequest, ListDirRequest, ListVolumesRequest, MakeVolumeRequest, MakeVolumesRequest, NsScannerRequest,
ReadAllRequest, ReadMultipleRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest, RenameFileRequest,
StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
ReadAllRequest, ReadMultipleRequest, ReadPartsRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest,
RenameFileRequest, StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
},
};

@@ -44,7 +44,7 @@ use crate::{
heal_commands::{HealScanMode, HealingTracker},
},
};
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_protos::proto_gen::node_service::RenamePartRequest;
use rustfs_rio::{HttpReader, HttpWriter};
use tokio::{
@@ -790,6 +790,27 @@ impl DiskAPI for RemoteDisk {
Ok(check_parts_resp)
}

#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
let mut client = node_service_time_out_client(&self.addr)
.await
.map_err(|err| Error::other(format!("can not get client, err: {err}")))?;
let request = Request::new(ReadPartsRequest {
disk: self.endpoint.to_string(),
bucket: bucket.to_string(),
paths: paths.to_vec(),
});

let response = client.read_parts(request).await?.into_inner();
if !response.success {
return Err(response.error.unwrap_or_default().into());
}

let read_parts_resp = rmp_serde::from_slice::<Vec<ObjectPartInfo>>(&response.object_part_infos)?;

Ok(read_parts_resp)
}

#[tracing::instrument(skip(self))]
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
info!("check_parts");

@@ -404,7 +404,42 @@ impl Node for NodeService {
}))
}
}
async fn read_parts(&self, request: Request<ReadPartsRequest>) -> Result<Response<ReadPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {
match disk.read_parts(&request.bucket, &request.paths).await {
Ok(data) => {
let data = match rmp_serde::to_vec(&data) {
Ok(data) => data,
Err(err) => {
return Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other(format!("encode data failed: {err}")).into()),
}));
}
};
Ok(tonic::Response::new(ReadPartsResponse {
success: true,
object_part_infos: Bytes::copy_from_slice(&data),
error: None,
}))
}

Err(err) => Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(err.into()),
})),
}
} else {
Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other("can not find disk".to_string()).into()),
}))
}
}
async fn check_parts(&self, request: Request<CheckPartsRequest>) -> Result<Response<CheckPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {

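The new `read_parts` RPC above reports failure in-band rather than through the transport: every reply carries a `success` flag, a msgpack-encoded payload, and an optional error. A minimal sketch of that envelope shape with plain `Vec<u8>` standing in for the encoded part infos (hypothetical names, not the generated protobuf types):

```rust
/// In-band result envelope: transport-level Ok, application-level
/// success/error, mirroring ReadPartsResponse above.
struct Envelope {
    success: bool,
    payload: Vec<u8>,
    error: Option<String>,
}

fn to_envelope(result: Result<Vec<u8>, String>) -> Envelope {
    match result {
        Ok(payload) => Envelope { success: true, payload, error: None },
        Err(e) => Envelope { success: false, payload: Vec::new(), error: Some(e) },
    }
}
```

The caller side (as in `RemoteDisk::read_parts`) then checks `success` first and only decodes the payload on the happy path.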
@@ -24,13 +24,13 @@ use crate::disk::{
};
use crate::erasure_coding;
use crate::erasure_coding::bitrot_verify;
use crate::error::ObjectApiError;
use crate::error::{Error, Result};
use crate::error::{ObjectApiError, is_err_object_not_found};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_LocalNodeName, GLOBAL_TierConfigMgr};
use crate::heal::data_usage_cache::DataUsageCache;
use crate::heal::heal_ops::{HealEntryFn, HealSequence};
use crate::store_api::ObjectToDelete;
use crate::store_api::{ListPartsInfo, ObjectToDelete};
use crate::{
bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts},
cache_value::metacache_set::{ListPathRawOptions, list_path_raw},
@@ -119,6 +119,7 @@ use tracing::{debug, info, warn};
use uuid::Uuid;

pub const DEFAULT_READ_BUFFER_SIZE: usize = 1024 * 1024;
pub const MAX_PARTS_COUNT: usize = 10000;

#[derive(Debug, Clone)]
pub struct SetDisks {
@@ -316,6 +317,9 @@ impl SetDisks {
.filter(|v| v.as_ref().is_some_and(|d| d.is_local()))
.collect()
}
fn default_read_quorum(&self) -> usize {
self.set_drive_count - self.default_parity_count
}
fn default_write_quorum(&self) -> usize {
let mut data_count = self.set_drive_count - self.default_parity_count;
if data_count == self.default_parity_count {
@@ -550,6 +554,183 @@ impl SetDisks {
}
}

async fn read_parts(
disks: &[Option<DiskStore>],
bucket: &str,
part_meta_paths: &[String],
part_numbers: &[usize],
read_quorum: usize,
) -> disk::error::Result<Vec<ObjectPartInfo>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.read_parts(bucket, part_meta_paths).await
} else {
Err(DiskError::DiskNotFound)
}
});
}

let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());

let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}

if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}

let mut ret = vec![ObjectPartInfo::default(); part_meta_paths.len()];

for (part_idx, part_info) in part_meta_paths.iter().enumerate() {
let mut part_meta_quorum = HashMap::new();
let mut part_infos = Vec::new();
for (j, parts) in object_parts.iter().enumerate() {
if parts.len() != part_meta_paths.len() {
*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
continue;
}

if !parts[part_idx].etag.is_empty() {
*part_meta_quorum.entry(parts[part_idx].etag.clone()).or_insert(0) += 1;
part_infos.push(parts[part_idx].clone());
continue;
}

*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
}

let mut max_quorum = 0;
let mut max_etag = None;
let mut max_part_meta = None;
for (etag, quorum) in part_meta_quorum.iter() {
if quorum > &max_quorum {
max_quorum = *quorum;
max_etag = Some(etag);
max_part_meta = Some(etag);
}
}

let mut found = None;
for info in part_infos.iter() {
if let Some(etag) = max_etag
&& info.etag == *etag
{
found = Some(info.clone());
break;
}

if let Some(part_meta) = max_part_meta
&& info.etag.is_empty()
&& part_meta.ends_with(format!("part.{0}.meta", info.number).as_str())
{
found = Some(info.clone());
break;
}
}

if let (Some(found), Some(max_etag)) = (found, max_etag)
&& !found.etag.is_empty()
&& part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum
{
ret[part_idx] = found;
} else {
ret[part_idx] = ObjectPartInfo {
number: part_numbers[part_idx],
error: Some(format!("part.{} not found", part_numbers[part_idx])),
..Default::default()
};
}
}

Ok(ret)
}

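Per part, `read_parts` above runs a quorum vote: count how many disks report each etag, take the most common one, and accept it only if its count reaches the read quorum. The vote in isolation, as a hedged sketch over plain strings (not the project's `ObjectPartInfo` machinery):

```rust
use std::collections::HashMap;

/// Return the most frequently observed etag if it was seen by at least
/// `quorum` disks; otherwise None (the part is treated as missing).
fn quorum_vote(observations: &[&str], quorum: usize) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for o in observations {
        *counts.entry(o).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .max_by_key(|&(_, n)| n)
        .filter(|&(_, n)| n >= quorum)
        .map(|(etag, _)| etag.to_string())
}
```

This is why a single corrupted drive cannot change the answer: its etag loses the vote as long as `quorum` healthy drives agree.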
async fn list_parts(disks: &[Option<DiskStore>], part_path: &str, read_quorum: usize) -> disk::error::Result<Vec<usize>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.list_dir(RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_MULTIPART_BUCKET, part_path, -1)
.await
} else {
Err(DiskError::DiskNotFound)
}
});
}

let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());

let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}

if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}

let mut part_quorum_map: HashMap<usize, usize> = HashMap::new();

for drive_parts in object_parts {
let mut parts_with_meta_count: HashMap<usize, usize> = HashMap::new();

// part files can be either part.N or part.N.meta
for part_path in drive_parts {
if let Some(num_str) = part_path.strip_prefix("part.") {
if let Some(meta_idx) = num_str.find(".meta") {
if let Ok(part_num) = num_str[..meta_idx].parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
} else if let Ok(part_num) = num_str.parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
}
}

// Include only part.N.meta files with corresponding part.N
for (&part_num, &cnt) in &parts_with_meta_count {
if cnt >= 2 {
*part_quorum_map.entry(part_num).or_insert(0) += 1;
}
}
}

let mut part_numbers = Vec::with_capacity(part_quorum_map.len());
for (part_num, count) in part_quorum_map {
if count >= read_quorum {
part_numbers.push(part_num);
}
}

part_numbers.sort();

Ok(part_numbers)
}

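`list_parts` above derives part numbers from directory entries named `part.N` or `part.N.meta`, and counts a part on a drive only when both files are present (`cnt >= 2`). The filename parsing on its own, as a small sketch (a hypothetical helper; the original uses `find(".meta")` rather than a strict suffix match):

```rust
/// Extract N from "part.N" or "part.N.meta"; anything else yields None.
fn parse_part_number(name: &str) -> Option<usize> {
    let rest = name.strip_prefix("part.")?;
    match rest.strip_suffix(".meta") {
        Some(num) => num.parse().ok(), // "part.N.meta"
        None => rest.parse().ok(),     // "part.N"
    }
}
```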
#[tracing::instrument(skip(disks, meta))]
async fn rename_part(
disks: &[Option<DiskStore>],
@@ -1942,6 +2123,8 @@ impl SetDisks {

let till_offset = erasure.shard_file_offset(part_offset, part_length, part_size);

let read_offset = (part_offset / erasure.block_size) * erasure.shard_size();

let mut readers = Vec::with_capacity(disks.len());
let mut errors = Vec::with_capacity(disks.len());
for (idx, disk_op) in disks.iter().enumerate() {
@@ -1950,7 +2133,7 @@ impl SetDisks {
disk_op.as_ref(),
bucket,
&format!("{}/{}/part.{}", object, files[idx].data_dir.unwrap_or_default(), part_number),
part_offset,
read_offset,
till_offset,
erasure.shard_size(),
HashAlgorithm::HighwayHash256,
@@ -4099,6 +4282,8 @@ impl ObjectIO for SetDisks {
}
}

drop(writers); // drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data

let (online_disks, _, op_old_dir) = Self::rename_data(
&shuffle_disks,
RUSTFS_META_TMP_BUCKET,
@@ -4882,7 +5067,7 @@ impl StorageAPI for SetDisks {
) -> Result<PartInfo> {
let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);

let (mut fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;

let write_quorum = fi.write_quorum(self.default_write_quorum());

@@ -5035,9 +5220,11 @@ impl StorageAPI for SetDisks {

// debug!("put_object_part part_info {:?}", part_info);

fi.parts = vec![part_info];
// fi.parts = vec![part_info.clone()];

let fi_buff = fi.marshal_msg()?;
let part_info_buff = part_info.marshal_msg()?;

drop(writers); // drop writers to close all files

let part_path = format!("{}/{}/{}", upload_id_path, fi.data_dir.unwrap_or_default(), part_suffix);
let _ = Self::rename_part(
@@ -5046,7 +5233,7 @@ impl StorageAPI for SetDisks {
&tmp_part_path,
RUSTFS_META_MULTIPART_BUCKET,
&part_path,
fi_buff.into(),
part_info_buff.into(),
write_quorum,
)
.await?;
@@ -5064,6 +5251,123 @@ impl StorageAPI for SetDisks {
Ok(ret)
}

#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
mut max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, false).await?;

let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);

if max_parts > MAX_PARTS_COUNT {
max_parts = MAX_PARTS_COUNT;
}

let part_number_marker = part_number_marker.unwrap_or_default();

let mut ret = ListPartsInfo {
bucket: bucket.to_owned(),
object: object.to_owned(),
upload_id: upload_id.to_owned(),
max_parts,
part_number_marker,
user_defined: fi.metadata.clone(),
..Default::default()
};

if max_parts == 0 {
return Ok(ret);
}

let online_disks = self.get_disks_internal().await;

let read_quorum = fi.read_quorum(self.default_read_quorum());

let part_path = format!(
"{}{}",
path_join_buf(&[
&upload_id_path,
fi.data_dir.map(|v| v.to_string()).unwrap_or_default().as_str(),
]),
SLASH_SEPARATOR
);

let mut part_numbers = match Self::list_parts(&online_disks, &part_path, read_quorum).await {
Ok(parts) => parts,
Err(err) => {
if err == DiskError::FileNotFound {
return Ok(ret);
}

return Err(to_object_err(err.into(), vec![bucket, object]));
}
};

if part_numbers.is_empty() {
return Ok(ret);
}
let start_op = part_numbers.iter().find(|&&v| v != 0 && v == part_number_marker);
if part_number_marker > 0 && start_op.is_none() {
return Ok(ret);
}

if let Some(start) = start_op {
if start + 1 > part_numbers.len() {
return Ok(ret);
}

part_numbers = part_numbers[start + 1..].to_vec();
}

let mut parts = Vec::with_capacity(part_numbers.len());

let part_meta_paths = part_numbers
.iter()
.map(|v| format!("{part_path}part.{v}.meta"))
.collect::<Vec<String>>();

let object_parts =
Self::read_parts(&online_disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, read_quorum)
.await
.map_err(|e| to_object_err(e.into(), vec![bucket, object, upload_id]))?;

let mut count = max_parts;

for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
warn!("list_object_parts part error: {:?}", &err);
}

parts.push(PartInfo {
etag: Some(part.etag.clone()),
part_num: part.number,
last_mod: part.mod_time,
size: part.size,
actual_size: part.actual_size,
});

count -= 1;
if count == 0 {
break;
}
}

ret.parts = parts;

if object_parts.len() > ret.parts.len() {
ret.is_truncated = true;
ret.next_part_number_marker = ret.parts.last().map(|v| v.part_num).unwrap_or_default();
}

Ok(ret)
}

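The pagination in `list_object_parts` above follows the usual S3 marker scheme: resume after `part_number_marker`, return at most `max_parts` entries, and report `is_truncated` plus a `next_part_number_marker` when more remain. The same slicing over plain sorted numbers, as a simplified sketch that resumes strictly after the marker (the original instead looks the marker up by exact match):

```rust
/// Returns (page, is_truncated, next_marker) for a sorted part-number list.
fn page(parts: &[usize], marker: usize, max_parts: usize) -> (Vec<usize>, bool, usize) {
    // First part strictly after the marker; len() means "nothing left".
    let start = parts.iter().position(|&p| p > marker).unwrap_or(parts.len());
    let window: Vec<usize> = parts[start..].iter().take(max_parts).copied().collect();
    let truncated = start + window.len() < parts.len();
    let next_marker = window.last().copied().unwrap_or_default();
    (window, truncated, next_marker)
}
```

A caller pages by feeding `next_marker` back in as `marker` until `is_truncated` is false.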
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
@@ -5139,8 +5443,8 @@ impl StorageAPI for SetDisks {

let splits: Vec<&str> = upload_id.split("x").collect();
if splits.len() == 2 {
if let Ok(unix) = splits[1].parse::<i64>() {
OffsetDateTime::from_unix_timestamp(unix)?
if let Ok(unix) = splits[1].parse::<i128>() {
OffsetDateTime::from_unix_timestamp_nanos(unix)?
} else {
now
}
@@ -5359,49 +5663,31 @@ impl StorageAPI for SetDisks {

let part_path = format!("{}/{}/", upload_id_path, fi.data_dir.unwrap_or(Uuid::nil()));

let files: Vec<String> = uploaded_parts.iter().map(|v| format!("part.{}.meta", v.part_num)).collect();
let part_meta_paths = uploaded_parts
.iter()
.map(|v| format!("{part_path}part.{0}.meta", v.part_num))
.collect::<Vec<String>>();

// readMultipleFiles
let part_numbers = uploaded_parts.iter().map(|v| v.part_num).collect::<Vec<usize>>();

let req = ReadMultipleReq {
bucket: RUSTFS_META_MULTIPART_BUCKET.to_string(),
prefix: part_path,
files,
max_size: 1 << 20,
metadata_only: true,
abort404: true,
max_results: 0,
};
let object_parts =
Self::read_parts(&disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, write_quorum).await?;

let part_files_resp = Self::read_multiple_files(&disks, req, write_quorum).await;

if part_files_resp.len() != uploaded_parts.len() {
if object_parts.len() != uploaded_parts.len() {
return Err(Error::other("part result number err"));
}

for (i, res) in part_files_resp.iter().enumerate() {
let part_id = uploaded_parts[i].part_num;
if !res.error.is_empty() || !res.exists {
error!("complete_multipart_upload part_id err {:?}, exists={}", res, res.exists);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
error!("complete_multipart_upload part error: {:?}", &err);
}

let part_fi = FileInfo::unmarshal(&res.data).map_err(|e| {
if uploaded_parts[i].part_num != part.number {
error!(
"complete_multipart_upload FileInfo::unmarshal err {:?}, part_id={}, bucket={}, object={}",
e, part_id, bucket, object
"complete_multipart_upload part_id err part_id != part_num {} != {}",
uploaded_parts[i].part_num, part.number
);
Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned())
})?;
let part = &part_fi.parts[0];
let part_num = part.number;

// debug!("complete part {} file info {:?}", part_num, &part_fi);
// debug!("complete part {} object info {:?}", part_num, &part);

if part_id != part_num {
error!("complete_multipart_upload part_id err part_id != part_num {} != {}", part_id, part_num);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
return Err(Error::InvalidPart(uploaded_parts[i].part_num, bucket.to_owned(), object.to_owned()));
}

fi.add_object_part(

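Note the timestamp fix in the hunk above: the value after the `"x"` separator in an upload id is a unix timestamp in *nanoseconds*, so the old `parse::<i64>()` + `from_unix_timestamp(unix)` read it as seconds and produced dates far in the future; the new code uses `parse::<i128>()` + `from_unix_timestamp_nanos(unix)`. The same idea with only std types, as a hedged sketch assuming a non-negative timestamp and a hypothetical helper name:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Interpret the suffix after the first 'x' as unix nanoseconds.
fn from_upload_id_suffix(upload_id: &str) -> Option<SystemTime> {
    let (_uuid, nanos) = upload_id.split_once('x')?;
    let nanos: u64 = nanos.parse().ok()?;
    // Nanoseconds, not seconds: 1_000_000_000 here means 1s past the epoch.
    Some(UNIX_EPOCH + Duration::from_nanos(nanos))
}
```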
@@ -17,6 +17,7 @@ use std::{collections::HashMap, sync::Arc};

use crate::disk::error_reduce::count_errs;
use crate::error::{Error, Result};
use crate::store_api::ListPartsInfo;
use crate::{
disk::{
DiskAPI, DiskInfo, DiskOption, DiskStore,
@@ -619,6 +620,20 @@ impl StorageAPI for Sets {
Ok((del_objects, del_errs))
}

async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
self.get_disks_by_key(object)
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
}

#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,

@@ -38,7 +38,7 @@ use crate::new_object_layer_fn;
use crate::notification_sys::get_global_notification_sys;
use crate::pools::PoolMeta;
use crate::rebalance::RebalanceMeta;
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, MultipartInfo, ObjectIO};
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, ListPartsInfo, MultipartInfo, ObjectIO};
use crate::store_init::{check_disk_fatal_errs, ec_drives_no_config};
use crate::{
bucket::{lifecycle::bucket_lifecycle_ops::TransitionState, metadata::BucketMetadata},
@@ -1810,6 +1810,47 @@ impl StorageAPI for ECStore {
Ok((del_objects, del_errs))
}

#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
check_list_parts_args(bucket, object, upload_id)?;

// TODO: nslock

if self.single_pool() {
return self.pools[0]
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await;
}

for pool in self.pools.iter() {
if self.is_suspended(pool.pool_idx).await {
continue;
}
match pool
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
{
Ok(res) => return Ok(res),
Err(err) => {
if is_err_invalid_upload_id(&err) {
continue;
}
return Err(err);
}
};
}

Err(StorageError::InvalidUploadID(bucket.to_owned(), object.to_owned(), upload_id.to_owned()))
}

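The `ECStore::list_object_parts` above fans out over pools: skip suspended pools, treat "invalid upload id" as a miss and try the next pool, propagate any other error, and fall through to `InvalidUploadID` if no pool claims the upload. The control flow abstracted over plain closures, as a sketch with string errors standing in for the real error types:

```rust
/// Try each pool in order; "invalid upload id" means "not this pool",
/// any other error is fatal, and exhausting the pools reproduces the
/// final InvalidUploadID fallthrough above.
fn first_pool_hit<T>(pools: &[Box<dyn Fn() -> Result<T, String>>]) -> Result<T, String> {
    for pool in pools {
        match pool() {
            Ok(v) => return Ok(v),
            Err(e) if e == "invalid upload id" => continue,
            Err(e) => return Err(e),
        }
    }
    Err("invalid upload id".to_string())
}
```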
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
@@ -2508,14 +2549,14 @@ fn check_object_name_for_length_and_slash(bucket: &str, object: &str) -> Result<

#[cfg(target_os = "windows")]
{
if object.contains('\\')
|| object.contains(':')
if object.contains(':')
|| object.contains('*')
|| object.contains('?')
|| object.contains('"')
|| object.contains('|')
|| object.contains('<')
|| object.contains('>')
// || object.contains('\\')
{
return Err(StorageError::ObjectNameInvalid(bucket.to_owned(), object.to_owned()));
}
@@ -2549,9 +2590,9 @@ fn check_bucket_and_object_names(bucket: &str, object: &str) -> Result<()> {
return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
}

if cfg!(target_os = "windows") && object.contains('\\') {
return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
}
// if cfg!(target_os = "windows") && object.contains('\\') {
//     return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
// }

Ok(())
}

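The Windows branch above rejects object names containing reserved characters, with the backslash check now commented out so keys may contain `\`. The same check condensed into a single scan over a blacklist, as a hypothetical helper:

```rust
/// Characters rejected in object names on Windows per the check above;
/// '\\' is intentionally absent, matching the commented-out clause.
const WINDOWS_RESERVED: &[char] = &[':', '*', '?', '"', '|', '<', '>'];

fn is_valid_windows_object_name(object: &str) -> bool {
    !object.chars().any(|c| WINDOWS_RESERVED.contains(&c))
}
```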
@@ -201,7 +201,7 @@ impl GetObjectReader {
}
}

#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct HTTPRangeSpec {
pub is_suffix_length: bool,
pub start: i64,
@@ -548,6 +548,7 @@ impl ObjectInfo {
mod_time: part.mod_time,
checksums: part.checksums.clone(),
number: part.number,
error: part.error.clone(),
})
.collect();

@@ -844,6 +845,48 @@ pub struct ListMultipartsInfo {
// encoding_type: String, // Not supported yet.
}

/// ListPartsInfo - represents list of all parts.
|
||||
#[derive(Debug, Clone, Default)]
|
||||
pub struct ListPartsInfo {
|
||||
/// Name of the bucket.
|
||||
pub bucket: String,
|
||||
|
||||
/// Name of the object.
|
||||
pub object: String,
|
||||
|
||||
/// Upload ID identifying the multipart upload whose parts are being listed.
|
||||
pub upload_id: String,
|
||||
|
||||
/// The class of storage used to store the object.
|
||||
pub storage_class: String,
|
||||
|
||||
/// Part number after which listing begins.
|
||||
pub part_number_marker: usize,
|
||||
|
||||
/// When a list is truncated, this element specifies the last part in the list,
|
||||
/// as well as the value to use for the part-number-marker request parameter
|
||||
/// in a subsequent request.
|
||||
pub next_part_number_marker: usize,
|
||||
|
||||
/// Maximum number of parts that were allowed in the response.
|
||||
pub max_parts: usize,
|
||||
|
||||
/// Indicates whether the returned list of parts is truncated.
|
||||
pub is_truncated: bool,
|
||||
|
||||
/// List of all parts.
|
||||
pub parts: Vec<PartInfo>,
|
||||
|
||||
/// Any metadata set during InitMultipartUpload, including encryption headers.
|
||||
pub user_defined: HashMap<String, String>,
|
||||
|
||||
/// ChecksumAlgorithm if set
|
||||
pub checksum_algorithm: String,
|
||||
|
||||
/// ChecksumType if set
|
||||
pub checksum_type: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Default, Clone)]
|
||||
pub struct ObjectToDelete {
|
||||
pub object_name: String,
|
||||
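The `part_number_marker` / `next_part_number_marker` / `is_truncated` fields added above follow the usual paginated-listing pattern. A hedged sketch of how a caller pages through parts with them, where `fetch_page` stands in for a real `list_object_parts` call:

```rust
// Simplified, synchronous illustration of marker-based pagination; not the
// RustFS API itself.
#[derive(Default)]
struct Page {
    parts: Vec<usize>, // part numbers only, for illustration
    is_truncated: bool,
    next_part_number_marker: usize,
}

fn fetch_page(all: &[usize], marker: usize, max_parts: usize) -> Page {
    let remaining: Vec<usize> = all.iter().copied().filter(|&n| n > marker).collect();
    let parts: Vec<usize> = remaining.iter().copied().take(max_parts).collect();
    let is_truncated = remaining.len() > parts.len();
    let next = parts.last().copied().unwrap_or(marker);
    Page { parts, is_truncated, next_part_number_marker: next }
}

fn list_all_parts(all: &[usize], max_parts: usize) -> Vec<usize> {
    let mut marker = 0;
    let mut out = Vec::new();
    loop {
        let page = fetch_page(all, marker, max_parts);
        out.extend(page.parts);
        if !page.is_truncated {
            return out;
        }
        // Resume the next request after the last part returned.
        marker = page.next_part_number_marker;
    }
}

fn main() {
    let all: Vec<usize> = (1..=7).collect();
    assert_eq!(list_all_parts(&all, 3), all); // three pages: 3 + 3 + 1
    println!("ok");
}
```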
@@ -923,10 +966,7 @@ pub trait StorageAPI: ObjectIO {
     ) -> Result<ListObjectVersionsInfo>;
     // Walk TODO:

-    // GetObjectNInfo ObjectIO
     async fn get_object_info(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
-    // PutObject ObjectIO
-    // CopyObject
     async fn copy_object(
         &self,
         src_bucket: &str,
@@ -949,7 +989,6 @@ pub trait StorageAPI: ObjectIO {
     // TransitionObject TODO:
     // RestoreTransitionedObject TODO:

-    // ListMultipartUploads
     async fn list_multipart_uploads(
         &self,
         bucket: &str,
@@ -960,7 +999,6 @@ pub trait StorageAPI: ObjectIO {
         max_uploads: usize,
     ) -> Result<ListMultipartsInfo>;
     async fn new_multipart_upload(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<MultipartUploadResult>;
-    // CopyObjectPart
     async fn copy_object_part(
         &self,
         src_bucket: &str,
@@ -984,7 +1022,6 @@ pub trait StorageAPI: ObjectIO {
         data: &mut PutObjReader,
         opts: &ObjectOptions,
     ) -> Result<PartInfo>;
-    // GetMultipartInfo
     async fn get_multipart_info(
         &self,
         bucket: &str,
@@ -992,7 +1029,15 @@ pub trait StorageAPI: ObjectIO {
         upload_id: &str,
         opts: &ObjectOptions,
     ) -> Result<MultipartInfo>;
-    // ListObjectParts
+    async fn list_object_parts(
+        &self,
+        bucket: &str,
+        object: &str,
+        upload_id: &str,
+        part_number_marker: Option<usize>,
+        max_parts: usize,
+        opts: &ObjectOptions,
+    ) -> Result<ListPartsInfo>;
     async fn abort_multipart_upload(&self, bucket: &str, object: &str, upload_id: &str, opts: &ObjectOptions) -> Result<()>;
     async fn complete_multipart_upload(
         self: Arc<Self>,
@@ -1002,13 +1047,10 @@ pub trait StorageAPI: ObjectIO {
         uploaded_parts: Vec<CompletePart>,
         opts: &ObjectOptions,
     ) -> Result<ObjectInfo>;
-    // GetDisks
     async fn get_disks(&self, pool_idx: usize, set_idx: usize) -> Result<Vec<Option<DiskStore>>>;
-    // SetDriveCounts
     fn set_drive_counts(&self) -> Vec<usize>;

     // Health TODO:
-    // PutObjectMetadata
     async fn put_object_metadata(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
     // DecomTieredObject
     async fn get_object_tags(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<String>;
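The trait above exposes the standard multipart call sequence: start an upload, upload parts (in any order), then complete. A hedged sketch of that flow against a simplified, synchronous stand-in trait (the real `StorageAPI` is async and takes bucket/options arguments omitted here):

```rust
// Illustrative stand-in for the multipart portion of the trait; not RustFS code.
trait Multipart {
    fn new_multipart_upload(&mut self, object: &str) -> String; // returns upload_id
    fn put_object_part(&mut self, upload_id: &str, part_number: usize, data: &[u8]);
    fn complete_multipart_upload(&mut self, upload_id: &str) -> usize; // total size
}

struct MemStore {
    parts: Vec<(usize, Vec<u8>)>,
}

impl Multipart for MemStore {
    fn new_multipart_upload(&mut self, _object: &str) -> String {
        "upload-1".to_string()
    }
    fn put_object_part(&mut self, _upload_id: &str, part_number: usize, data: &[u8]) {
        self.parts.push((part_number, data.to_vec()));
    }
    fn complete_multipart_upload(&mut self, _upload_id: &str) -> usize {
        // Parts are stitched together in part-number order, not upload order.
        self.parts.sort_by_key(|(n, _)| *n);
        self.parts.iter().map(|(_, d)| d.len()).sum()
    }
}

fn main() {
    let mut store = MemStore { parts: Vec::new() };
    let id = store.new_multipart_upload("big-object");
    store.put_object_part(&id, 2, b"world");
    store.put_object_part(&id, 1, b"hello ");
    assert_eq!(store.complete_multipart_upload(&id), 11);
    println!("ok");
}
```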
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "File metadata management for RustFS, providing efficient storage and retrieval of file metadata in a distributed system."
+keywords = ["file-metadata", "storage", "retrieval", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "filesystem"]
+documentation = "https://docs.rs/rustfs-filemeta/latest/rustfs_filemeta/"

 [dependencies]
 crc32fast = { workspace = true }
@@ -3,7 +3,7 @@
 # RustFS FileMeta - File Metadata Management

 <p align="center">
-  <strong>High-performance file metadata management for RustFS distributed object storage</strong>
+  <strong>Advanced file metadata management and indexing module for RustFS distributed object storage</strong>
 </p>

 <p align="center">
@@ -17,246 +17,21 @@

 ## 📖 Overview

-**RustFS FileMeta** is the metadata management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides efficient storage, retrieval, and management of file metadata, supporting features like versioning, tagging, and extended attributes with high performance and reliability.
-
-> **Note:** This is a core submodule of RustFS that provides essential metadata management capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
+**RustFS FileMeta** provides advanced file metadata management and indexing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
 ## ✨ Features

-### 📝 Metadata Management
-- **File Information**: Complete file metadata including size, timestamps, and checksums
-- **Object Versioning**: Version-aware metadata management
-- **Extended Attributes**: Custom metadata and tagging support
-- **Inline Metadata**: Optimized storage for small metadata
-
-### 🚀 Performance Features
-- **FlatBuffers Serialization**: Zero-copy metadata serialization
-- **Efficient Storage**: Optimized metadata storage layout
-- **Fast Lookups**: High-performance metadata queries
-- **Batch Operations**: Bulk metadata operations
-
-### 🔧 Advanced Capabilities
-- **Schema Evolution**: Forward and backward compatible metadata schemas
-- **Compression**: Metadata compression for space efficiency
-- **Validation**: Metadata integrity verification
-- **Migration**: Seamless metadata format migration
-
-## 📦 Installation
-
-Add this to your `Cargo.toml`:
-
-```toml
-[dependencies]
-rustfs-filemeta = "0.1.0"
-```
-## 🔧 Usage
-
-### Basic Metadata Operations
-
-```rust
-use rustfs_filemeta::{FileInfo, XLMeta};
-use std::collections::HashMap;
-
-fn main() -> Result<(), Box<dyn std::error::Error>> {
-    // Create file metadata
-    let mut file_info = FileInfo::new();
-    file_info.name = "example.txt".to_string();
-    file_info.size = 1024;
-    file_info.mod_time = chrono::Utc::now();
-
-    // Add custom metadata
-    let mut user_defined = HashMap::new();
-    user_defined.insert("author".to_string(), "john@example.com".to_string());
-    user_defined.insert("department".to_string(), "engineering".to_string());
-    file_info.user_defined = user_defined;
-
-    // Create XL metadata
-    let xl_meta = XLMeta::new(file_info);
-
-    // Serialize metadata
-    let serialized = xl_meta.serialize()?;
-
-    // Deserialize metadata
-    let deserialized = XLMeta::deserialize(&serialized)?;
-
-    println!("File: {}, Size: {}", deserialized.file_info.name, deserialized.file_info.size);
-
-    Ok(())
-}
-```
-
-### Advanced Metadata Management
-
-```rust
-use rustfs_filemeta::{XLMeta, FileInfo, VersionInfo};
-
-async fn advanced_metadata_example() -> Result<(), Box<dyn std::error::Error>> {
-    // Create versioned metadata
-    let mut xl_meta = XLMeta::new(FileInfo::default());
-
-    // Set version information
-    xl_meta.set_version_info(VersionInfo {
-        version_id: "v1.0.0".to_string(),
-        is_latest: true,
-        delete_marker: false,
-        restore_ongoing: false,
-    });
-
-    // Add checksums
-    xl_meta.add_checksum("md5", "d41d8cd98f00b204e9800998ecf8427e");
-    xl_meta.add_checksum("sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855");
-
-    // Set object tags
-    xl_meta.set_tags(vec![
-        ("Environment".to_string(), "Production".to_string()),
-        ("Owner".to_string(), "DataTeam".to_string()),
-    ]);
-
-    // Set retention information
-    xl_meta.set_retention_info(
-        chrono::Utc::now() + chrono::Duration::days(365),
-        "GOVERNANCE".to_string(),
-    );
-
-    // Validate metadata
-    xl_meta.validate()?;
-
-    Ok(())
-}
-```
-
-### Inline Metadata Operations
-
-```rust
-use rustfs_filemeta::{InlineMetadata, MetadataSize};
-
-fn inline_metadata_example() -> Result<(), Box<dyn std::error::Error>> {
-    // Create inline metadata for small files
-    let mut inline_meta = InlineMetadata::new();
-
-    // Set basic properties
-    inline_meta.set_content_type("text/plain");
-    inline_meta.set_content_encoding("gzip");
-    inline_meta.set_cache_control("max-age=3600");
-
-    // Add custom headers
-    inline_meta.add_header("x-custom-field", "custom-value");
-    inline_meta.add_header("x-app-version", "1.2.3");
-
-    // Check if metadata fits inline storage
-    if inline_meta.size() <= MetadataSize::INLINE_THRESHOLD {
-        println!("Metadata can be stored inline");
-    } else {
-        println!("Metadata requires separate storage");
-    }
-
-    // Serialize for storage
-    let bytes = inline_meta.to_bytes()?;
-
-    // Deserialize from storage
-    let restored = InlineMetadata::from_bytes(&bytes)?;
-
-    Ok(())
-}
-```
-## 🏗️ Architecture
-
-### Metadata Storage Layout
-
-```
-FileMeta Architecture:
-┌─────────────────────────────────────────────────────────────┐
-│                     Metadata API Layer                      │
-├─────────────────────────────────────────────────────────────┤
-│    XL Metadata    │   Inline Metadata   │   Version Info    │
-├─────────────────────────────────────────────────────────────┤
-│                 FlatBuffers Serialization                   │
-├─────────────────────────────────────────────────────────────┤
-│    Compression    │     Validation      │     Migration     │
-├─────────────────────────────────────────────────────────────┤
-│               Storage Backend Integration                   │
-└─────────────────────────────────────────────────────────────┘
-```
-
-### Metadata Types
-
-| Type | Use Case | Storage | Performance |
-|------|----------|---------|-------------|
-| XLMeta | Large objects with rich metadata | Separate file | High durability |
-| InlineMeta | Small objects with minimal metadata | Embedded | Fastest access |
-| VersionMeta | Object versioning information | Version-specific | Version-aware |
-## 🧪 Testing
-
-Run the test suite:
-
-```bash
-# Run all tests
-cargo test
-
-# Run serialization benchmarks
-cargo bench
-
-# Test metadata validation
-cargo test validation
-
-# Test schema migration
-cargo test migration
-```
-
-## 🚀 Performance
-
-FileMeta is optimized for high-performance metadata operations:
-
-- **Serialization**: Zero-copy FlatBuffers serialization
-- **Storage**: Compact binary format reduces I/O
-- **Caching**: Intelligent metadata caching
-- **Batch Operations**: Efficient bulk metadata processing
-
-## 📋 Requirements
-
-- **Rust**: 1.70.0 or later
-- **Platforms**: Linux, macOS, Windows
-- **Memory**: Minimal memory footprint
-- **Storage**: Compatible with RustFS storage backend
-
-## 🌍 Related Projects
-
-This module is part of the RustFS ecosystem:
-- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
-- [RustFS ECStore](../ecstore) - Erasure coding storage engine
-- [RustFS Utils](../utils) - Utility functions
-- [RustFS Proto](../protos) - Protocol definitions
+- High-performance metadata storage and retrieval
+- Advanced indexing with full-text search capabilities
+- File attribute management and custom metadata
+- Version tracking and history management
+- Distributed metadata replication
+- Real-time metadata synchronization

 ## 📚 Documentation

-For comprehensive documentation, visit:
-- [RustFS Documentation](https://docs.rustfs.com)
-- [FileMeta API Reference](https://docs.rustfs.com/filemeta/)
-
-## 🔗 Links
-
-- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
-- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
-- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
-
-## 🤝 Contributing
-
-We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

 ## 📄 License

-Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
-
----
-
-<p align="center">
-  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
-  All other trademarks are the property of their respective owners.
-</p>
-
-<p align="center">
-  Made with ❤️ by the RustFS Team
-</p>
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -46,6 +46,20 @@ pub struct ObjectPartInfo {
     pub index: Option<Bytes>,
     // Checksums holds checksums of the part
     pub checksums: Option<HashMap<String, String>>,
+    pub error: Option<String>,
 }

+impl ObjectPartInfo {
+    pub fn marshal_msg(&self) -> Result<Vec<u8>> {
+        let mut buf = Vec::new();
+        self.serialize(&mut Serializer::new(&mut buf))?;
+        Ok(buf)
+    }
+
+    pub fn unmarshal(buf: &[u8]) -> Result<Self> {
+        let t: ObjectPartInfo = rmp_serde::from_slice(buf)?;
+        Ok(t)
+    }
+}
+
 #[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
@@ -287,6 +301,7 @@ impl FileInfo {
             actual_size,
             index,
             checksums: None,
+            error: None,
         };

         for p in self.parts.iter_mut() {
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Identity and Access Management (IAM) for RustFS, providing user management, roles, and permissions."
+keywords = ["iam", "identity", "access-management", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "authentication"]
+documentation = "https://docs.rs/rustfs-iam/latest/rustfs_iam/"

 [lints]
 workspace = true
@@ -40,7 +45,6 @@ base64-simd = { workspace = true }
 jsonwebtoken = { workspace = true }
 tracing.workspace = true
 rustfs-madmin.workspace = true
 lazy_static.workspace = true
 rustfs-utils = { workspace = true, features = ["path"] }

 [dev-dependencies]
@@ -1,9 +1,9 @@
 [](https://rustfs.com)

-# RustFS IAM - Identity and Access Management
+# RustFS IAM - Identity & Access Management

 <p align="center">
-  <strong>Enterprise-grade identity and access management for RustFS distributed object storage</strong>
+  <strong>Identity and access management system for RustFS distributed object storage</strong>
 </p>

 <p align="center">
@@ -17,592 +17,21 @@

 ## 📖 Overview

-**RustFS IAM** is the identity and access management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides comprehensive authentication, authorization, and access control capabilities, ensuring secure and compliant access to storage resources.
-
-> **Note:** This is a core submodule of RustFS and provides essential security and access control features for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
+**RustFS IAM** provides identity and access management capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
 ## ✨ Features

-### 🔐 Authentication & Authorization
-
-- **Multi-Factor Authentication**: Support for various authentication methods
-- **Access Key Management**: Secure generation and management of access keys
-- **JWT Token Support**: Stateless authentication with JWT tokens
-- **Session Management**: Secure session handling and token refresh
-
-### 👥 User Management
-
-- **User Accounts**: Complete user lifecycle management
-- **Service Accounts**: Automated service authentication
-- **Temporary Accounts**: Time-limited access credentials
-- **Group Management**: Organize users into groups for easier management
-
-### 🛡️ Access Control
-
-- **Role-Based Access Control (RBAC)**: Flexible role and permission system
-- **Policy-Based Access Control**: Fine-grained access policies
-- **Resource-Level Permissions**: Granular control over storage resources
-- **API-Level Authorization**: Secure API access control
-
-### 🔑 Credential Management
-
-- **Secure Key Generation**: Cryptographically secure key generation
-- **Key Rotation**: Automatic and manual key rotation capabilities
-- **Credential Validation**: Real-time credential verification
-- **Secret Management**: Secure storage and retrieval of secrets
-
-### 🏢 Enterprise Features
-
-- **LDAP Integration**: Enterprise directory service integration
-- **SSO Support**: Single Sign-On capabilities
-- **Audit Logging**: Comprehensive access audit trails
-- **Compliance Features**: Meet regulatory compliance requirements
-
-## 🏗️ Architecture
-
-### IAM System Architecture
-
-```
-IAM Architecture:
-┌─────────────────────────────────────────────────────────────┐
-│                        IAM API Layer                        │
-├─────────────────────────────────────────────────────────────┤
-│  Authentication  │   Authorization   │   User Management    │
-├─────────────────────────────────────────────────────────────┤
-│                  Policy Engine Integration                  │
-├─────────────────────────────────────────────────────────────┤
-│  Credential Store  │   Cache Layer    │   Token Manager     │
-├─────────────────────────────────────────────────────────────┤
-│                Storage Backend Integration                  │
-└─────────────────────────────────────────────────────────────┘
-```
-
-### Security Model
-
-| Component | Description | Security Level |
-|-----------|-------------|----------------|
-| Access Keys | API authentication credentials | High |
-| JWT Tokens | Stateless authentication tokens | High |
-| Session Management | User session handling | Medium |
-| Policy Enforcement | Access control policies | Critical |
-| Audit Logging | Security event tracking | High |
-
-## 📦 Installation
-
-Add this to your `Cargo.toml`:
-
-```toml
-[dependencies]
-rustfs-iam = "0.1.0"
-```
-
-## 🔧 Usage
-
-### Basic IAM Setup
-
-```rust
-use rustfs_iam::{init_iam_sys, get};
-use rustfs_ecstore::ECStore;
-use std::sync::Arc;
-
-#[tokio::main]
-async fn main() -> Result<(), Box<dyn std::error::Error>> {
-    // Initialize with ECStore backend
-    let ecstore = Arc::new(ECStore::new("/path/to/storage").await?);
-
-    // Initialize IAM system
-    init_iam_sys(ecstore).await?;
-
-    // Get IAM system instance
-    let iam = get()?;
-
-    println!("IAM system initialized successfully");
-    Ok(())
-}
-```
-
-### User Management
-
-```rust
-use rustfs_iam::{get, manager::UserInfo};
-
-async fn user_management_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Create a new user
-    let user_info = UserInfo {
-        access_key: "AKIAIOSFODNN7EXAMPLE".to_string(),
-        secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY".to_string(),
-        status: "enabled".to_string(),
-        ..Default::default()
-    };
-
-    iam.create_user("john-doe", user_info).await?;
-
-    // List users
-    let users = iam.list_users().await?;
-    for user in users {
-        println!("User: {}, Status: {}", user.name, user.status);
-    }
-
-    // Update user status
-    iam.set_user_status("john-doe", "disabled").await?;
-
-    // Delete user
-    iam.delete_user("john-doe").await?;
-
-    Ok(())
-}
-```
-
-### Group Management
-
-```rust
-use rustfs_iam::{get, manager::GroupInfo};
-
-async fn group_management_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Create a group
-    let group_info = GroupInfo {
-        name: "developers".to_string(),
-        members: vec!["john-doe".to_string(), "jane-smith".to_string()],
-        policies: vec!["read-only-policy".to_string()],
-        ..Default::default()
-    };
-
-    iam.create_group(group_info).await?;
-
-    // Add user to group
-    iam.add_user_to_group("alice", "developers").await?;
-
-    // Remove user from group
-    iam.remove_user_from_group("alice", "developers").await?;
-
-    // List groups
-    let groups = iam.list_groups().await?;
-    for group in groups {
-        println!("Group: {}, Members: {}", group.name, group.members.len());
-    }
-
-    Ok(())
-}
-```
-
-### Policy Management
-
-```rust
-use rustfs_iam::{get, manager::PolicyDocument};
-
-async fn policy_management_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Create a policy
-    let policy_doc = PolicyDocument {
-        version: "2012-10-17".to_string(),
-        statement: vec![
-            Statement {
-                effect: "Allow".to_string(),
-                action: vec!["s3:GetObject".to_string()],
-                resource: vec!["arn:aws:s3:::my-bucket/*".to_string()],
-                ..Default::default()
-            }
-        ],
-        ..Default::default()
-    };
-
-    iam.create_policy("read-only-policy", policy_doc).await?;
-
-    // Attach policy to user
-    iam.attach_user_policy("john-doe", "read-only-policy").await?;
-
-    // Detach policy from user
-    iam.detach_user_policy("john-doe", "read-only-policy").await?;
-
-    // List policies
-    let policies = iam.list_policies().await?;
-    for policy in policies {
-        println!("Policy: {}", policy.name);
-    }
-
-    Ok(())
-}
```
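The policy example above attaches an "Allow" statement with an action (`s3:GetObject`) and a wildcard resource ARN (`arn:aws:s3:::my-bucket/*`). A hedged sketch of how such a statement could be evaluated, using exact action matching and trailing-`*` resource matching (illustrative names, not the RustFS policy engine):

```rust
// Minimal illustration of Allow-statement evaluation.
struct Statement {
    effect_allow: bool,
    actions: Vec<String>,
    resources: Vec<String>,
}

// Match an exact string, or a prefix when the pattern ends in '*'.
fn pattern_matches(pattern: &str, value: &str) -> bool {
    match pattern.strip_suffix('*') {
        Some(prefix) => value.starts_with(prefix),
        None => pattern == value,
    }
}

fn is_allowed(stmt: &Statement, action: &str, resource: &str) -> bool {
    stmt.effect_allow
        && stmt.actions.iter().any(|a| pattern_matches(a, action))
        && stmt.resources.iter().any(|r| pattern_matches(r, resource))
}

fn main() {
    let stmt = Statement {
        effect_allow: true,
        actions: vec!["s3:GetObject".to_string()],
        resources: vec!["arn:aws:s3:::my-bucket/*".to_string()],
    };
    assert!(is_allowed(&stmt, "s3:GetObject", "arn:aws:s3:::my-bucket/file.txt"));
    assert!(!is_allowed(&stmt, "s3:PutObject", "arn:aws:s3:::my-bucket/file.txt"));
    assert!(!is_allowed(&stmt, "s3:GetObject", "arn:aws:s3:::other-bucket/file.txt"));
    println!("ok");
}
```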
-### Service Account Management
-
-```rust
-use rustfs_iam::{get, manager::ServiceAccountInfo};
-
-async fn service_account_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Create service account
-    let service_account = ServiceAccountInfo {
-        name: "backup-service".to_string(),
-        description: "Automated backup service".to_string(),
-        policies: vec!["backup-policy".to_string()],
-        ..Default::default()
-    };
-
-    iam.create_service_account(service_account).await?;
-
-    // Generate credentials for service account
-    let credentials = iam.generate_service_account_credentials("backup-service").await?;
-    println!("Service Account Credentials: {:?}", credentials);
-
-    // Rotate service account credentials
-    iam.rotate_service_account_credentials("backup-service").await?;
-
-    Ok(())
-}
-```
-
-### Authentication and Authorization
-
-```rust
-use rustfs_iam::{get, auth::Credentials};
-
-async fn auth_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Authenticate user
-    let credentials = Credentials {
-        access_key: "AKIAIOSFODNN7EXAMPLE".to_string(),
-        secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY".to_string(),
-        session_token: None,
-    };
-
-    let auth_result = iam.authenticate(&credentials).await?;
-    println!("Authentication successful: {}", auth_result.user_name);
-
-    // Check authorization
-    let authorized = iam.is_authorized(
-        &auth_result.user_name,
-        "s3:GetObject",
-        "arn:aws:s3:::my-bucket/file.txt"
-    ).await?;
-
-    if authorized {
-        println!("User is authorized to access the resource");
-    } else {
-        println!("User is not authorized to access the resource");
-    }
-
-    Ok(())
-}
-```
-
-### Temporary Credentials
-
-```rust
-use rustfs_iam::{get, manager::TemporaryCredentials};
-use std::time::Duration;
-
-async fn temp_credentials_example() -> Result<(), Box<dyn std::error::Error>> {
-    let iam = get()?;
-
-    // Create temporary credentials
-    let temp_creds = iam.create_temporary_credentials(
-        "john-doe",
-        Duration::from_secs(3600), // 1 hour
-        Some("read-only-policy".to_string())
-    ).await?;
-
-    println!("Temporary Access Key: {}", temp_creds.access_key);
-    println!("Expires at: {}", temp_creds.expiration);
-
-    // Validate temporary credentials
-    let is_valid = iam.validate_temporary_credentials(&temp_creds.access_key).await?;
-    println!("Temporary credentials valid: {}", is_valid);
-
-    Ok(())
-}
-```
-
-## 🧪 Testing
-
-Run the test suite:
-
-```bash
-# Run all tests
-cargo test
-
-# Run tests with specific features
-cargo test --features "ldap,sso"
-
-# Run integration tests
-cargo test --test integration
-
-# Run authentication tests
-cargo test auth
-
-# Run authorization tests
-cargo test authz
-```
-
-## 🔒 Security Best Practices
-
-### Key Management
-
-- Rotate access keys regularly
-- Use strong, randomly generated keys
-- Store keys securely using environment variables or secret management systems
-- Implement key rotation policies
-
-### Access Control
-
-- Follow the principle of least privilege
-- Use groups for easier permission management
-- Regularly audit user permissions
-- Implement resource-based policies
-
-### Monitoring and Auditing
-
-- Enable comprehensive audit logging
-- Monitor failed authentication attempts
-- Set up alerts for suspicious activities
-- Regular security reviews
-
-## 📊 Performance Considerations
-
-### Caching Strategy
-
-- **User Cache**: Cache user information for faster lookups
-- **Policy Cache**: Cache policy documents to reduce latency
-- **Token Cache**: Cache JWT tokens for stateless authentication
-- **Permission Cache**: Cache authorization decisions
-
-### Scalability
-
-- **Distributed Cache**: Use distributed caching for multi-node deployments
-- **Database Optimization**: Optimize database queries for user/group lookups
-- **Connection Pooling**: Use connection pooling for database connections
-- **Async Operations**: Leverage async programming for better throughput
-
-## 🔧 Configuration
-
-### Basic Configuration
-
-```toml
-[iam]
-# Authentication settings
-jwt_secret = "your-jwt-secret-key"
-jwt_expiration = "24h"
-session_timeout = "30m"
-
-# Password policy
-min_password_length = 8
-require_special_chars = true
-require_numbers = true
-require_uppercase = true
-
-# Account lockout
-max_login_attempts = 5
-lockout_duration = "15m"
-
-# Audit settings
-audit_enabled = true
-audit_log_path = "/var/log/rustfs/iam-audit.log"
-```
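The password-policy keys in the TOML above (`min_password_length = 8` plus the three `require_*` flags) translate directly into a validation check. A hedged sketch mirroring those settings (this follows the config, not the actual RustFS code):

```rust
// Enforce the documented policy: length >= 8, at least one special character,
// one digit, and one uppercase letter.
fn password_ok(pw: &str) -> bool {
    pw.len() >= 8
        && pw.chars().any(|c| !c.is_alphanumeric()) // special character
        && pw.chars().any(|c| c.is_ascii_digit())   // number
        && pw.chars().any(|c| c.is_ascii_uppercase())
}

fn main() {
    assert!(password_ok("Sup3r#secret"));
    assert!(!password_ok("Ab1#"));           // too short
    assert!(!password_ok("alllowercase1#")); // no uppercase
    assert!(!password_ok("NoDigitsHere#"));  // no number
    println!("ok");
}
```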
### Advanced Configuration
|
||||
|
||||
```rust
|
||||
use rustfs_iam::config::IamConfig;
|
||||
|
||||
let config = IamConfig {
|
||||
// Authentication settings
|
||||
jwt_secret: "your-secure-jwt-secret".to_string(),
|
||||
jwt_expiration_hours: 24,
|
||||
session_timeout_minutes: 30,
|
||||
|
||||
// Security settings
|
||||
password_policy: PasswordPolicy {
|
||||
min_length: 8,
|
||||
require_special_chars: true,
|
||||
require_numbers: true,
|
||||
require_uppercase: true,
|
||||
max_age_days: 90,
|
||||
},
|
||||
|
||||
// Rate limiting
|
||||
rate_limit: RateLimit {
|
||||
max_requests_per_minute: 100,
|
||||
burst_size: 10,
|
||||
},
|
||||
|
||||
// Audit settings
|
||||
audit_enabled: true,
|
||||
audit_log_level: "info".to_string(),
|
||||
|
||||
..Default::default()
|
||||
};
|
||||
```

## 🤝 Integration with RustFS

IAM integrates seamlessly with other RustFS components:

- **ECStore**: Provides user and policy storage backend
- **Policy Engine**: Implements fine-grained access control
- **Crypto Module**: Handles secure key generation and JWT operations
- **API Server**: Provides authentication and authorization for S3 API
- **Admin Interface**: Manages users, groups, and policies

## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Database**: Compatible with RustFS storage backend
- **Memory**: Minimum 2GB RAM for caching
- **Network**: Secure connections for authentication

## 🐛 Troubleshooting

### Common Issues

1. **Authentication Failures**:
   - Check access key and secret key validity
   - Verify user account status (enabled/disabled)
   - Check for account lockout due to failed attempts

2. **Authorization Errors**:
   - Verify user has required permissions
   - Check policy attachments (user/group policies)
   - Validate resource ARN format

3. **Performance Issues**:
   - Monitor cache hit rates
   - Check database connection pool utilization
   - Verify JWT token size and complexity

### Debug Commands

```bash
# Check IAM system status
rustfs-cli iam status

# List all users
rustfs-cli iam list-users

# Validate user credentials
rustfs-cli iam validate-credentials --access-key <key>

# Test policy evaluation
rustfs-cli iam test-policy --user <user> --action <action> --resource <resource>
```

## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS ECStore](../ecstore) - Erasure coding storage engine
- [RustFS Policy](../policy) - Policy engine for access control
- [RustFS Crypto](../crypto) - Cryptographic operations
- [RustFS MadAdmin](../madmin) - Administrative interface
- User and group management with RBAC
- Service account and API key authentication
- Policy engine with fine-grained permissions
- LDAP/Active Directory integration
- Multi-factor authentication support
- Session management and token validation

## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [IAM API Reference](https://docs.rustfs.com/iam/)
- [Security Guide](https://docs.rustfs.com/security/)
- [Authentication Guide](https://docs.rustfs.com/auth/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:

- Security-first development practices
- IAM system architecture guidelines
- Authentication and authorization patterns
- Testing procedures for security features
- Documentation standards for security APIs

### Development Setup

```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs

# Navigate to IAM module
cd crates/iam

# Install dependencies
cargo build

# Run tests
cargo test

# Run security tests
cargo test security

# Format code
cargo fmt

# Run linter
cargo clippy
```

## 💬 Getting Help

- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Security**: Report security issues to <security@rustfs.com>

## 📞 Contact

- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)

## 👥 Contributors

This module is maintained by the RustFS security team and community contributors. Special thanks to all who have contributed to making RustFS secure and compliant.

<a href="https://github.com/rustfs/rustfs/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

```
Copyright 2024 RustFS Team

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

---

<p align="center">
  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
  All other trademarks are the property of their respective owners.
</p>

<p align="center">
  Made with 🔐 by the RustFS Security Team
</p>

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

@@ -13,6 +13,7 @@
// limitations under the License.

use crate::error::{Error, Result, is_err_config_not_found};
use crate::sys::get_claims_from_token_with_secret;
use crate::{
    cache::{Cache, CacheEntity},
    error::{Error as IamError, is_err_no_such_group, is_err_no_such_policy, is_err_no_such_user},
@@ -26,7 +27,7 @@ use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::{AccountStatus, AddOrUpdateUserReq, GroupDesc};
use rustfs_policy::{
    arn::ARN,
    auth::{self, Credentials, UserIdentity, get_claims_from_token_with_secret, is_secret_key_valid, jwt_sign},
    auth::{self, Credentials, UserIdentity, is_secret_key_valid, jwt_sign},
    format::Format,
    policy::{
        EMBEDDED_POLICY_TYPE, INHERITED_POLICY_TYPE, Policy, PolicyDoc, default::DEFAULT_POLICIES, iam_policy_claim_name_sa,

@@ -20,7 +20,6 @@ use crate::{
    manager::{extract_jwt_claims, get_default_policyes},
};
use futures::future::join_all;
use lazy_static::lazy_static;
use rustfs_ecstore::{
    config::{
        RUSTFS_CONFIG_PREFIX,
@@ -34,25 +33,28 @@ use rustfs_ecstore::{
use rustfs_policy::{auth::UserIdentity, policy::PolicyDoc};
use rustfs_utils::path::{SLASH_SEPARATOR, path_join_buf};
use serde::{Serialize, de::DeserializeOwned};
use std::sync::LazyLock;
use std::{collections::HashMap, sync::Arc};
use tokio::sync::broadcast::{self, Receiver as B_Receiver};
use tokio::sync::mpsc::{self, Sender};
use tracing::{info, warn};
use tracing::{debug, info, warn};

lazy_static! {
    pub static ref IAM_CONFIG_PREFIX: String = format!("{}/iam", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_USERS_PREFIX: String = format!("{}/iam/users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: String = format!("{}/iam/service-accounts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_GROUPS_PREFIX: String = format!("{}/iam/groups/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICIES_PREFIX: String = format!("{}/iam/policies/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_STS_PREFIX: String = format!("{}/iam/sts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_PREFIX: String = format!("{}/iam/policydb/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_USERS_PREFIX: String = format!("{}/iam/policydb/users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: String = format!("{}/iam/policydb/sts-users/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: String =
        format!("{}/iam/policydb/service-accounts/", RUSTFS_CONFIG_PREFIX);
    pub static ref IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: String = format!("{}/iam/policydb/groups/", RUSTFS_CONFIG_PREFIX);
}
pub static IAM_CONFIG_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam"));
pub static IAM_CONFIG_USERS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/users/"));
pub static IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/service-accounts/"));
pub static IAM_CONFIG_GROUPS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/groups/"));
pub static IAM_CONFIG_POLICIES_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policies/"));
pub static IAM_CONFIG_STS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/sts/"));
pub static IAM_CONFIG_POLICY_DB_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/"));
pub static IAM_CONFIG_POLICY_DB_USERS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/users/"));
pub static IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/sts-users/"));
pub static IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/service-accounts/"));
pub static IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: LazyLock<String> =
    LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/groups/"));

const IAM_IDENTITY_FILE: &str = "identity.json";
const IAM_POLICY_FILE: &str = "policy.json";
@@ -370,7 +372,15 @@ impl Store for ObjectStore {
    async fn load_iam_config<Item: DeserializeOwned>(&self, path: impl AsRef<str> + Send) -> Result<Item> {
        let mut data = read_config(self.object_api.clone(), path.as_ref()).await?;

        data = Self::decrypt_data(&data)?;
        data = match Self::decrypt_data(&data) {
            Ok(v) => v,
            Err(err) => {
                debug!("decrypt_data failed: {}", err);
                // delete the config file when decrypt failed
                let _ = self.delete_iam_config(path.as_ref()).await;
                return Err(Error::ConfigNotFound);
            }
        };

        Ok(serde_json::from_slice(&data)?)
    }
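The hunk above replaces a hard `?` propagation with a match that downgrades a decryption failure to "config not found" (deleting the stale file along the way). The control flow can be reduced to a std-only sketch; `decrypt`, `load_config`, and the error type here are hypothetical stand-ins, not the RustFS types:

```rust
// Std-only sketch of the fallback introduced above: a failed decrypt is
// treated as a missing config instead of propagating the raw error.
#[derive(Debug, PartialEq)]
enum ConfigError {
    NotFound,
}

// Stand-in for Self::decrypt_data: rejects input starting with 0xFF.
fn decrypt(data: &[u8]) -> Result<Vec<u8>, String> {
    if data.first() == Some(&0xFF) {
        Err("bad ciphertext".to_string())
    } else {
        Ok(data.to_vec())
    }
}

fn load_config(raw: &[u8]) -> Result<Vec<u8>, ConfigError> {
    match decrypt(raw) {
        Ok(v) => Ok(v),
        Err(_err) => {
            // the real code also logs the error and deletes the stale file here
            Err(ConfigError::NotFound)
        }
    }
}

fn main() {
    assert_eq!(load_config(b"plain").unwrap(), b"plain".to_vec());
    assert_eq!(load_config(&[0xFF, 0x00]), Err(ConfigError::NotFound));
    println!("decrypt fallback behaves as expected");
}
```

Mapping the error to `ConfigNotFound` lets callers fall back to defaults instead of failing hard on an undecryptable (e.g. re-keyed) config blob.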

@@ -23,6 +23,7 @@ use crate::store::GroupInfo;
use crate::store::MappedPolicy;
use crate::store::Store;
use crate::store::UserType;
use crate::utils::extract_claims;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::AddOrUpdateUserReq;
use rustfs_madmin::GroupDesc;
@@ -542,7 +543,7 @@ impl<T: Store> IamSys<T> {
            }
        };

        if policies.is_empty() {
        if !is_owner && policies.is_empty() {
            return false;
        }

@@ -732,3 +733,18 @@ pub struct UpdateServiceAccountOpts {
    pub expiration: Option<OffsetDateTime>,
    pub status: Option<String>,
}

pub fn get_claims_from_token_with_secret(token: &str, secret: &str) -> Result<HashMap<String, Value>> {
    let mut ms =
        extract_claims::<HashMap<String, Value>>(token, secret).map_err(|e| Error::other(format!("extract claims err {e}")))?;

    if let Some(session_policy) = ms.claims.get(SESSION_POLICY_NAME) {
        let policy_str = session_policy.as_str().unwrap_or_default();
        let policy = base64_decode(policy_str.as_bytes()).map_err(|e| Error::other(format!("base64 decode err {e}")))?;
        ms.claims.insert(
            SESSION_POLICY_NAME_EXTRACTED.to_string(),
            Value::String(String::from_utf8(policy).map_err(|e| Error::other(format!("utf8 decode err {e}")))?),
        );
    }
    Ok(ms.claims)
}

@@ -19,13 +19,17 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Distributed locking mechanism for RustFS, providing synchronization and coordination across distributed systems."
keywords = ["locking", "asynchronous", "distributed", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "asynchronous"]
documentation = "https://docs.rs/rustfs-lock/latest/rustfs_lock/"

[lints]
workspace = true

[dependencies]
async-trait.workspace = true
lazy_static.workspace = true
rustfs-protos.workspace = true
rand.workspace = true
serde.workspace = true

@@ -3,7 +3,7 @@

# RustFS Lock - Distributed Locking

<p align="center">
  <strong>Distributed locking and synchronization for RustFS object storage</strong>
  <strong>High-performance distributed locking system for RustFS object storage</strong>
</p>

<p align="center">
@@ -17,376 +17,21 @@

## 📖 Overview

**RustFS Lock** provides distributed locking and synchronization primitives for the [RustFS](https://rustfs.com) distributed object storage system. It ensures data consistency and prevents race conditions in multi-node environments through various locking mechanisms and coordination protocols.

> **Note:** This is a core submodule of RustFS that provides essential distributed locking capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Lock** provides distributed locking capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

### 🔒 Distributed Locking

- **Exclusive Locks**: Mutual exclusion across cluster nodes
- **Shared Locks**: Reader-writer lock semantics
- **Timeout Support**: Configurable lock timeouts and expiration
- **Deadlock Prevention**: Automatic deadlock detection and resolution

### 🔄 Synchronization Primitives

- **Distributed Mutex**: Cross-node mutual exclusion
- **Distributed Semaphore**: Resource counting across nodes
- **Distributed Barrier**: Coordination point for multiple nodes
- **Distributed Condition Variables**: Wait/notify across nodes

### 🛡️ Consistency Guarantees

- **Linearizable Operations**: Strong consistency guarantees
- **Fault Tolerance**: Automatic recovery from node failures
- **Network Partition Handling**: CAP theorem aware implementations
- **Consensus Integration**: Raft-based consensus for critical locks

### 🚀 Performance Features

- **Lock Coalescing**: Efficient batching of lock operations
- **Adaptive Timeouts**: Dynamic timeout adjustment
- **Lock Hierarchy**: Hierarchical locking for better scalability
- **Optimistic Locking**: Reduced contention through optimistic approaches

## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
rustfs-lock = "0.1.0"
```

## 🔧 Usage

### Basic Distributed Lock

```rust
use rustfs_lock::{DistributedLock, LockManager, LockOptions};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create lock manager
    let lock_manager = LockManager::new("cluster-endpoint").await?;

    // Acquire distributed lock
    let lock_options = LockOptions {
        timeout: Duration::from_secs(30),
        auto_renew: true,
        ..Default::default()
    };

    let lock = lock_manager.acquire_lock("resource-key", lock_options).await?;

    // Critical section
    {
        println!("Lock acquired, performing critical operations...");
        // Your critical code here
        tokio::time::sleep(Duration::from_secs(2)).await;
    }

    // Release lock
    lock.release().await?;
    println!("Lock released");

    Ok(())
}
```

### Distributed Mutex

```rust
use rustfs_lock::{DistributedMutex, LockManager};
use std::sync::Arc;
use std::time::Duration;

async fn distributed_mutex_example() -> Result<(), Box<dyn std::error::Error>> {
    let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);

    // Create distributed mutex
    let mutex = DistributedMutex::new(lock_manager.clone(), "shared-resource");

    // Spawn multiple tasks
    let mut handles = vec![];

    for i in 0..5 {
        let mutex = mutex.clone();
        let handle = tokio::spawn(async move {
            let _guard = mutex.lock().await.unwrap();
            println!("Task {} acquired mutex", i);

            // Simulate work
            tokio::time::sleep(Duration::from_secs(1)).await;

            println!("Task {} releasing mutex", i);
            // Guard is automatically released when dropped
        });

        handles.push(handle);
    }

    // Wait for all tasks to complete
    for handle in handles {
        handle.await?;
    }

    Ok(())
}
```

### Distributed Semaphore

```rust
use rustfs_lock::{DistributedSemaphore, LockManager};
use std::sync::Arc;
use std::time::Duration;

async fn distributed_semaphore_example() -> Result<(), Box<dyn std::error::Error>> {
    let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);

    // Create distributed semaphore with 3 permits
    let semaphore = DistributedSemaphore::new(
        lock_manager.clone(),
        "resource-pool",
        3
    );

    // Spawn multiple tasks
    let mut handles = vec![];

    for i in 0..10 {
        let semaphore = semaphore.clone();
        let handle = tokio::spawn(async move {
            let _permit = semaphore.acquire().await.unwrap();
            println!("Task {} acquired permit", i);

            // Simulate work
            tokio::time::sleep(Duration::from_secs(2)).await;

            println!("Task {} releasing permit", i);
            // Permit is automatically released when dropped
        });

        handles.push(handle);
    }

    // Wait for all tasks to complete
    for handle in handles {
        handle.await?;
    }

    Ok(())
}
```

### Distributed Barrier

```rust
use rustfs_lock::{DistributedBarrier, LockManager};
use std::sync::Arc;
use std::time::Duration;

async fn distributed_barrier_example() -> Result<(), Box<dyn std::error::Error>> {
    let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);

    // Create distributed barrier for 3 participants
    let barrier = DistributedBarrier::new(
        lock_manager.clone(),
        "sync-point",
        3
    );

    // Spawn multiple tasks
    let mut handles = vec![];

    for i in 0..3 {
        let barrier = barrier.clone();
        let handle = tokio::spawn(async move {
            println!("Task {} doing work...", i);

            // Simulate different work durations
            tokio::time::sleep(Duration::from_secs(i + 1)).await;

            println!("Task {} waiting at barrier", i);
            barrier.wait().await.unwrap();

            println!("Task {} passed barrier", i);
        });

        handles.push(handle);
    }

    // Wait for all tasks to complete
    for handle in handles {
        handle.await?;
    }

    Ok(())
}
```

### Lock with Automatic Renewal

```rust
use rustfs_lock::{DistributedLock, LockManager, LockOptions};
use std::time::Duration;

async fn auto_renewal_example() -> Result<(), Box<dyn std::error::Error>> {
    let lock_manager = LockManager::new("cluster-endpoint").await?;

    let lock_options = LockOptions {
        timeout: Duration::from_secs(10),
        auto_renew: true,
        renew_interval: Duration::from_secs(3),
        max_renewals: 5,
        ..Default::default()
    };

    let lock = lock_manager.acquire_lock("long-running-task", lock_options).await?;

    // Long-running operation
    for i in 0..20 {
        println!("Working on step {}", i);
        tokio::time::sleep(Duration::from_secs(2)).await;

        // Check if lock is still valid
        if !lock.is_valid().await? {
            println!("Lock lost, aborting operation");
            break;
        }
    }

    lock.release().await?;
    Ok(())
}
```

### Hierarchical Locking

```rust
use rustfs_lock::{LockManager, LockHierarchy, LockOptions};
use std::time::Duration;

async fn hierarchical_locking_example() -> Result<(), Box<dyn std::error::Error>> {
    let lock_manager = LockManager::new("cluster-endpoint").await?;

    // Create lock hierarchy
    let hierarchy = LockHierarchy::new(vec![
        "global-lock".to_string(),
        "bucket-lock".to_string(),
        "object-lock".to_string(),
    ]);

    // Acquire locks in hierarchy order
    let locks = lock_manager.acquire_hierarchical_locks(
        hierarchy,
        LockOptions::default()
    ).await?;

    // Critical section with hierarchical locks
    {
        println!("All hierarchical locks acquired");
        // Perform operations that require the full lock hierarchy
        tokio::time::sleep(Duration::from_secs(1)).await;
    }

    // Locks are automatically released in reverse order
    locks.release_all().await?;

    Ok(())
}
```

## 🏗️ Architecture

### Lock Architecture

```
Lock Architecture:
┌─────────────────────────────────────────────────────────────┐
│                       Lock API Layer                        │
├─────────────────────────────────────────────────────────────┤
│    Mutex    │  Semaphore  │   Barrier   │    Condition      │
├─────────────────────────────────────────────────────────────┤
│                        Lock Manager                         │
├─────────────────────────────────────────────────────────────┤
│  Consensus  │  Heartbeat  │   Timeout   │     Recovery      │
├─────────────────────────────────────────────────────────────┤
│                  Distributed Coordination                   │
└─────────────────────────────────────────────────────────────┘
```

### Lock Types

| Type | Use Case | Guarantees |
|------|----------|------------|
| Exclusive | Critical sections | Mutual exclusion |
| Shared | Reader-writer | Multiple readers |
| Semaphore | Resource pooling | Counting semaphore |
| Barrier | Synchronization | Coordination point |
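
The counting-semaphore semantics in the table can be illustrated locally with std primitives. This is a toy, single-process stand-in built on `Mutex` + `Condvar`; the real rustfs-lock semaphore coordinates permits across cluster nodes:

```rust
use std::sync::{Condvar, Mutex};

// Toy counting semaphore: `permits` is the number of resources still
// available; acquire blocks while it is zero, release returns one.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }

    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }

    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Semaphore::new(2);
    sem.acquire();
    sem.acquire(); // pool exhausted: a third acquire would block
    sem.release();
    sem.acquire(); // succeeds again after a release
    println!("semaphore permits accounted for");
}
```

The distributed version adds what this sketch omits: lease timeouts, node-failure recovery, and consensus on the permit count.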

## 🧪 Testing

Run the test suite:

```bash
# Run all tests
cargo test

# Test distributed locking
cargo test distributed_lock

# Test synchronization primitives
cargo test sync_primitives

# Test fault tolerance
cargo test fault_tolerance
```

## 📋 Requirements

- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Network**: Cluster connectivity required
- **Consensus**: Raft consensus for critical operations

## 🌍 Related Projects

This module is part of the RustFS ecosystem:

- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Common](../common) - Common types and utilities
- [RustFS Protos](../protos) - Protocol buffer definitions
- Distributed lock management across cluster nodes
- Read-write lock support with concurrent readers
- Lock timeout and automatic lease renewal
- Deadlock detection and prevention
- High-availability with leader election
- Performance-optimized locking algorithms

## 📚 Documentation

For comprehensive documentation, visit:

- [RustFS Documentation](https://docs.rustfs.com)
- [Lock API Reference](https://docs.rustfs.com/lock/)
- [Distributed Systems Guide](https://docs.rustfs.com/distributed/)

## 🔗 Links

- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.

---

<p align="center">
  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
  All other trademarks are the property of their respective owners.
</p>

<p align="center">
  Made with 🔒 by the RustFS Team
</p>

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

@@ -14,12 +14,12 @@
// limitations under the License.

use async_trait::async_trait;
use lazy_static::lazy_static;
use local_locker::LocalLocker;
use lock_args::LockArgs;
use remote_client::RemoteClient;
use std::io::Result;
use std::sync::Arc;
use std::sync::LazyLock;
use tokio::sync::RwLock;

pub mod drwmutex;
@@ -29,9 +29,7 @@ pub mod lrwmutex;
pub mod namespace_lock;
pub mod remote_client;

lazy_static! {
    pub static ref GLOBAL_LOCAL_SERVER: Arc<RwLock<LocalLocker>> = Arc::new(RwLock::new(LocalLocker::new()));
}
pub static GLOBAL_LOCAL_SERVER: LazyLock<Arc<RwLock<LocalLocker>>> = LazyLock::new(|| Arc::new(RwLock::new(LocalLocker::new())));
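
The `lazy_static!` → `std::sync::LazyLock` migration shown in these hunks needs no external crate on Rust 1.80+. A minimal std-only sketch of the same pattern (the `RUSTFS_CONFIG_PREFIX` const here is a local stand-in for the ecstore constant):

```rust
use std::sync::LazyLock;

// Stand-in for rustfs_ecstore's config prefix constant.
const RUSTFS_CONFIG_PREFIX: &str = "config";

// Same pattern as GLOBAL_LOCAL_SERVER and the IAM_CONFIG_* statics above:
// the closure runs once on first access, and the result is cached.
static IAM_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam"));

fn main() {
    // First deref builds the String; later derefs reuse the cached value.
    assert_eq!(&*IAM_PREFIX, "config/iam");
    println!("{}", *IAM_PREFIX);
}
```

Unlike `lazy_static!`, `LazyLock` is a plain generic type, so the static's declared type (`LazyLock<String>`, `LazyLock<Arc<RwLock<LocalLocker>>>`) is visible at the definition instead of being hidden behind a macro-generated wrapper.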

type LockClient = dyn Locker;