Mirror of https://github.com/rustfs/rustfs.git (synced 2026-01-17 01:30:33 +00:00)
feat: consolidate AI rules into unified AGENTS.md (#501)

- Merge all AI rules from .rules.md, .cursorrules, and CLAUDE.md into AGENTS.md
- Add competitor keyword prohibition rules (minio, ceph, swift, etc.)
- Simplify rules by removing overly detailed code examples
- Integrate new development principles as highest priority
- Remove old tool-specific rule files
- Fix clippy warnings for format string improvements
# GitHub Copilot Rules for RustFS Project

## Core Rules Reference

This project follows the comprehensive AI coding rules defined in `.rules.md`. Please refer to that file for the complete set of development guidelines, coding standards, and best practices.

## Copilot-Specific Configuration

When using GitHub Copilot for this project, ensure you:

1. **Review the unified rules**: Always check `.rules.md` for the latest project guidelines
2. **Follow branch protection**: Never attempt to commit directly to the main/master branch
3. **Use English**: All code comments, documentation, and variable names must be in English
4. **Clean code practices**: Only make modifications you're confident about
5. **Test thoroughly**: Ensure all changes pass formatting, linting, and testing requirements

## Quick Reference

### Critical Rules

- 🚫 **NEVER commit directly to main/master branch**
- ✅ **ALWAYS work on feature branches**
- 📝 **ALWAYS use English for code and documentation**
- 🧹 **ALWAYS clean up temporary files after use**
- 🎯 **ONLY make confident, necessary modifications**

### Pre-commit Checklist

```bash
# Before committing, always run:
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo check --all-targets
cargo test
```

### Branch Workflow

```bash
git checkout main
git pull origin main
git checkout -b feat/your-feature-name
# Make your changes
git add .
git commit -m "feat: your feature description"
git push origin feat/your-feature-name
gh pr create
```

## Important Notes

- This file serves as an entry point for GitHub Copilot
- All detailed rules and guidelines are maintained in `.rules.md`
- Updates to coding standards should be made in `.rules.md` to ensure consistency across all AI tools
- When in doubt, always refer to `.rules.md` for authoritative guidance

## See Also

- [.rules.md](./.rules.md) - Complete AI coding rules and guidelines
- [CONTRIBUTING.md](./CONTRIBUTING.md) - Contribution guidelines
- [README.md](./README.md) - Project overview and setup instructions
---
# RustFS Project Cursor Rules

## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨

### ⛔️ ABSOLUTE PROHIBITION: NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH ⛔️

**🔥 THIS IS THE MOST CRITICAL RULE - VIOLATION WILL RESULT IN IMMEDIATE REVERSAL 🔥**

- **🚫 ZERO DIRECT COMMITS TO MAIN/MASTER BRANCH - ABSOLUTELY FORBIDDEN**
- **🚫 ANY DIRECT COMMIT TO MAIN BRANCH MUST BE IMMEDIATELY REVERTED**
- **🚫 NO EXCEPTIONS FOR HOTFIXES, EMERGENCIES, OR URGENT CHANGES**
- **🚫 NO EXCEPTIONS FOR SMALL CHANGES, TYPOS, OR DOCUMENTATION UPDATES**
- **🚫 NO EXCEPTIONS FOR ANYONE - MAINTAINERS, CONTRIBUTORS, OR ADMINS**

### 📋 MANDATORY WORKFLOW - STRICTLY ENFORCED

**EVERY SINGLE CHANGE MUST FOLLOW THIS WORKFLOW:**

1. **Check current branch**: `git branch` (MUST NOT be on main/master)
2. **Switch to main**: `git checkout main`
3. **Pull latest**: `git pull origin main`
4. **Create feature branch**: `git checkout -b feat/your-feature-name`
5. **Make changes ONLY on feature branch**
6. **Test thoroughly before committing**
7. **Commit and push to feature branch**: `git push origin feat/your-feature-name`
8. **Create Pull Request**: Use `gh pr create` (MANDATORY)
9. **Wait for PR approval**: NO self-merging allowed
10. **Merge through GitHub interface**: ONLY after approval

### 🔒 ENFORCEMENT MECHANISMS

- **Branch protection rules**: Main branch is protected
- **Pre-commit hooks**: Will block direct commits to main
- **CI/CD checks**: All PRs must pass before merging
- **Code review requirement**: At least one approval needed
- **Automated reversal**: Direct commits to main will be automatically reverted
## Project Overview

RustFS is a high-performance distributed object storage system written in Rust, compatible with the S3 API. The project adopts a modular architecture, supporting erasure-coded storage, multi-tenant management, observability, and other enterprise-level features.

## Core Architecture Principles

### 1. Modular Design

- The project uses a Cargo workspace structure containing multiple independent crates
- Core modules: `rustfs` (main service), `ecstore` (erasure coding storage), `common` (shared components)
- Functional modules: `iam` (identity management), `madmin` (management interface), `crypto` (encryption), etc.
- Tool modules: `cli` (command line tool), `crates/*` (utility libraries)

### 2. Asynchronous Programming Pattern

- Use the `tokio` async runtime throughout
- Prioritize `async/await` syntax
- Use `async-trait` for async methods in traits
- Avoid blocking operations; use `spawn_blocking` when necessary

### 3. Error Handling Strategy

- **Use modular, type-safe error handling with `thiserror`**
- Each module should define its own error type using the `thiserror::Error` derive macro
- Support error chains and context information through `#[from]` and `#[source]` attributes
- Use `Result<T>` type aliases for consistency within each module
- Error conversion between modules should use explicit `From` implementations
- Follow the pattern: `pub type Result<T> = core::result::Result<T, Error>`
- Use `#[error("description")]` attributes for clear error messages
- Support error downcasting when needed through `other()` helper methods
- Implement `Clone` for errors when required by the domain logic
- **Current module error types:**
  - `ecstore::error::StorageError` - Storage layer errors
  - `ecstore::disk::error::DiskError` - Disk operation errors
  - `iam::error::Error` - Identity and access management errors
  - `policy::error::Error` - Policy-related errors
  - `crypto::error::Error` - Cryptographic operation errors
  - `filemeta::error::Error` - File metadata errors
  - `rustfs::error::ApiError` - API layer errors
  - Module-specific error types for specialized functionality
## Code Style Guidelines

### 1. Formatting Configuration

```toml
max_width = 130
fn_call_width = 90
single_line_let_else_max_width = 100
```

### 2. **🔧 MANDATORY Code Formatting Rules**

**CRITICAL**: All code must be properly formatted before committing. This project enforces strict formatting standards to maintain code consistency and readability.

#### Pre-commit Requirements (MANDATORY)

Before every commit, you **MUST**:

1. **Format your code**:

   ```bash
   cargo fmt --all
   ```

2. **Verify formatting**:

   ```bash
   cargo fmt --all --check
   ```

3. **Pass clippy checks**:

   ```bash
   cargo clippy --all-targets --all-features -- -D warnings
   ```

4. **Ensure compilation**:

   ```bash
   cargo check --all-targets
   ```
#### Quick Commands

Use these convenient Makefile targets for common tasks:

```bash
# Format all code
make fmt

# Check if code is properly formatted
make fmt-check

# Run clippy checks
make clippy

# Run compilation check
make check

# Run tests
make test

# Run all pre-commit checks (format + clippy + check + test)
make pre-commit

# Setup git hooks (one-time setup)
make setup-hooks
```

#### 🔒 Automated Pre-commit Hooks

This project includes a pre-commit hook that automatically runs before each commit to ensure:

- ✅ Code is properly formatted (`cargo fmt --all --check`)
- ✅ No clippy warnings (`cargo clippy --all-targets --all-features -- -D warnings`)
- ✅ Code compiles successfully (`cargo check --all-targets`)

**Setting Up Pre-commit Hooks** (MANDATORY for all developers):

Run this command once after cloning the repository:

```bash
make setup-hooks
```

Or manually:

```bash
chmod +x .git/hooks/pre-commit
```
#### 🚫 Commit Prevention

If your code doesn't meet the formatting requirements, the pre-commit hook will:

1. **Block the commit** and show clear error messages
2. **Provide exact commands** to fix the issues
3. **Guide you through** the resolution process

Example output when formatting fails:

```text
❌ Code formatting check failed!
💡 Please run 'cargo fmt --all' to format your code before committing.

🔧 Quick fix:
  cargo fmt --all
  git add .
  git commit
```

### 3. Naming Conventions

- Use `snake_case` for functions, variables, and modules
- Use `PascalCase` for types, traits, and enums
- Constants use `SCREAMING_SNAKE_CASE`
- Global variables take the `GLOBAL_` prefix, e.g., `GLOBAL_Endpoints`
- Use meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless names like `temp`, `data`, `foo`, `bar`, `test123`
- Choose names that clearly express purpose and intent
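A compact sketch of these conventions (all names here are invented for illustration):

```rust
// Constants use SCREAMING_SNAKE_CASE.
const MAX_RETRY_COUNT: u32 = 3;

// Types, traits, and enums use PascalCase.
struct BucketQuota {
    limit_bytes: u64,
}

// Functions and variables use snake_case, with descriptive names
// (not `temp` or `data`).
fn remaining_quota_bytes(quota: &BucketQuota, used_bytes: u64) -> u64 {
    quota.limit_bytes.saturating_sub(used_bytes)
}

fn main() {
    let quota = BucketQuota { limit_bytes: 1024 };
    let remaining = remaining_quota_bytes(&quota, 200);
    println!("{remaining} bytes of quota left"); // 824 bytes of quota left
    println!("retry limit: {MAX_RETRY_COUNT}");
}
```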
### 4. Type Declaration Guidelines

- **Prefer type inference over explicit type declarations** when the type is obvious from context
- Let the Rust compiler infer types whenever possible to reduce verbosity and improve maintainability
- Only specify types explicitly when:
  - The type cannot be inferred by the compiler
  - Explicit typing improves code clarity and readability
  - Required for API boundaries (function signatures, public struct fields)
  - Needed to resolve ambiguity between multiple possible types

**Good examples (prefer these):**

```rust
// Compiler can infer the type
let items = vec![1, 2, 3, 4];
let config = Config::default();
let result = process_data(&input);

// Iterator chains with clear context
let filtered: Vec<_> = items.iter().filter(|&&x| x > 2).collect();
```

**Avoid unnecessary explicit types:**

```rust
// Unnecessary - type is obvious
let items: Vec<i32> = vec![1, 2, 3, 4];
let config: Config = Config::default();
let result: ProcessResult = process_data(&input);
```

**When explicit types are beneficial:**

```rust
// API boundaries - always specify types
pub fn process_data(input: &[u8]) -> Result<ProcessResult, Error> { ... }

// Ambiguous cases - explicit type needed
let value: f64 = "3.14".parse().unwrap();

// Complex generic types - explicit for clarity
let cache: HashMap<String, Arc<Mutex<CacheEntry>>> = HashMap::new();
```
### 5. Documentation Comments

- Public APIs must have documentation comments
- Use `///` for documentation comments
- Document complex functions with `# Examples` and `# Parameters` sections
- Describe failure modes in an `# Errors` section
- Always use English for all comments and documentation
- Avoid meaningless comments like "debug 111" or placeholder text

### 6. Import Guidelines

- Standard library imports first
- Third-party crate imports in the middle
- Project-internal imports last
- Group `use` statements with blank lines between groups
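Applied to a hypothetical module, the grouping looks like this (the third-party and internal paths are illustrative, not actual RustFS modules):

```rust
// Standard library imports first
use std::collections::HashMap;
use std::sync::Arc;

// Third-party crate imports in the middle
use serde::Deserialize;
use tokio::sync::RwLock;

// Project-internal imports last
use crate::config::Config;
use crate::error::Result;
```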
## Asynchronous Programming Guidelines

### 1. Trait Definition

```rust
#[async_trait::async_trait]
pub trait StorageAPI: Send + Sync {
    async fn get_object(&self, bucket: &str, object: &str) -> Result<ObjectInfo>;
}
```

### 2. Error Handling

```rust
// Use the ? operator to propagate errors
async fn example_function() -> Result<()> {
    let data = read_file("path").await?;
    process_data(data).await?;
    Ok(())
}
```

### 3. Concurrency Control

- Use `Arc` and `Mutex`/`RwLock` for shared state management
- Prioritize async locks from `tokio::sync`
- Avoid holding locks for long periods
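A minimal sketch of the shared-state pattern, keeping the critical section short (shown with `std::sync` so it runs standalone; in async code the same shape applies with `tokio::sync::RwLock`, so waiters don't block the executor):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type SharedCounters = Arc<RwLock<HashMap<String, u64>>>;

fn bump(counters: &SharedCounters, key: &str) -> u64 {
    // Take the lock, update, and let the guard drop before any slow work.
    let mut guard = counters.write().unwrap();
    let value = guard.entry(key.to_string()).or_insert(0);
    *value += 1;
    *value
} // write guard released here

fn main() {
    let counters: SharedCounters = Arc::new(RwLock::new(HashMap::new()));
    bump(&counters, "requests");
    let n = bump(&counters, "requests");
    println!("{n}"); // 2
}
```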
## Logging and Tracing Guidelines

### 1. Tracing Usage

```rust
#[tracing::instrument(skip(self, data))]
async fn process_data(&self, data: &[u8]) -> Result<()> {
    info!("Processing {} bytes", data.len());
    // Implementation logic
    Ok(())
}
```

### 2. Log Levels

- `error!`: System errors requiring immediate attention
- `warn!`: Warning information that may affect functionality
- `info!`: Important business information
- `debug!`: Debug information for development use
- `trace!`: Detailed execution paths

### 3. Structured Logging

```rust
info!(
    counter.rustfs_api_requests_total = 1_u64,
    key_request_method = %request.method(),
    key_request_uri_path = %request.uri().path(),
    "API request processed"
);
```
## Error Handling Guidelines

### 1. Error Type Definition

```rust
// Use thiserror for module-specific error types
#[derive(thiserror::Error, Debug)]
pub enum MyError {
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Storage error: {0}")]
    Storage(#[from] ecstore::error::StorageError),

    #[error("Custom error: {message}")]
    Custom { message: String },

    #[error("File not found: {path}")]
    FileNotFound { path: String },

    #[error("Invalid configuration: {0}")]
    InvalidConfig(String),
}

// Provide a Result type alias for the module
pub type Result<T> = core::result::Result<T, MyError>;
```

### 2. Error Helper Methods

```rust
impl MyError {
    /// Create an error from any compatible error type
    pub fn other<E>(error: E) -> Self
    where
        E: Into<Box<dyn std::error::Error + Send + Sync>>,
    {
        MyError::Io(std::io::Error::other(error))
    }
}
```
### 3. Error Conversion Between Modules

```rust
// Convert between different module error types.
// Note: drop the `#[from]` attribute on the `Storage` variant when
// hand-writing this impl, or it will conflict with the derive-generated one.
impl From<ecstore::error::StorageError> for MyError {
    fn from(e: ecstore::error::StorageError) -> Self {
        match e {
            ecstore::error::StorageError::FileNotFound => {
                MyError::FileNotFound { path: "unknown".to_string() }
            }
            _ => MyError::Storage(e),
        }
    }
}

// Provide the reverse conversion when needed
impl From<MyError> for ecstore::error::StorageError {
    fn from(e: MyError) -> Self {
        match e {
            MyError::FileNotFound { .. } => ecstore::error::StorageError::FileNotFound,
            MyError::Storage(e) => e,
            _ => ecstore::error::StorageError::other(e),
        }
    }
}
```

### 4. Error Context and Propagation

```rust
// Use the ? operator for clean error propagation
async fn example_function() -> Result<()> {
    let data = read_file("path").await?;
    process_data(data).await?;
    Ok(())
}

// Add context to errors
fn process_with_context(path: &str) -> Result<()> {
    std::fs::read(path)
        .map_err(|e| MyError::Custom {
            message: format!("Failed to read {}: {}", path, e),
        })?;
    Ok(())
}
```
### 5. API Error Conversion (S3 Example)

```rust
// Convert storage errors to API-specific errors
use s3s::{S3Error, S3ErrorCode};

#[derive(Debug)]
pub struct ApiError {
    pub code: S3ErrorCode,
    pub message: String,
    pub source: Option<Box<dyn std::error::Error + Send + Sync>>,
}

impl From<ecstore::error::StorageError> for ApiError {
    fn from(err: ecstore::error::StorageError) -> Self {
        let code = match &err {
            ecstore::error::StorageError::BucketNotFound(_) => S3ErrorCode::NoSuchBucket,
            ecstore::error::StorageError::ObjectNotFound(_, _) => S3ErrorCode::NoSuchKey,
            ecstore::error::StorageError::BucketExists(_) => S3ErrorCode::BucketAlreadyExists,
            ecstore::error::StorageError::InvalidArgument(_, _, _) => S3ErrorCode::InvalidArgument,
            ecstore::error::StorageError::MethodNotAllowed => S3ErrorCode::MethodNotAllowed,
            ecstore::error::StorageError::StorageFull => S3ErrorCode::ServiceUnavailable,
            _ => S3ErrorCode::InternalError,
        };

        ApiError {
            code,
            message: err.to_string(),
            source: Some(Box::new(err)),
        }
    }
}

impl From<ApiError> for S3Error {
    fn from(err: ApiError) -> Self {
        let mut s3e = S3Error::with_message(err.code, err.message);
        if let Some(source) = err.source {
            s3e.set_source(source);
        }
        s3e
    }
}
```
### 6. Error Handling Best Practices

#### Pattern Matching and Error Classification

```rust
// Use pattern matching for specific error handling
async fn handle_storage_operation() -> Result<()> {
    match storage.get_object("bucket", "key").await {
        Ok(object) => process_object(object),
        Err(ecstore::error::StorageError::ObjectNotFound(bucket, key)) => {
            warn!("Object not found: {}/{}", bucket, key);
            create_default_object(bucket, key).await
        }
        Err(ecstore::error::StorageError::BucketNotFound(bucket)) => {
            error!("Bucket not found: {}", bucket);
            Err(MyError::Custom {
                message: format!("Bucket {} does not exist", bucket),
            })
        }
        Err(e) => {
            error!("Storage operation failed: {}", e);
            Err(MyError::Storage(e))
        }
    }
}
```

#### Error Aggregation and Reporting

```rust
// Collect and report multiple errors
pub fn validate_configuration(config: &Config) -> Result<()> {
    let mut errors = Vec::new();

    if config.bucket_name.is_empty() {
        errors.push("Bucket name cannot be empty");
    }

    if config.region.is_empty() {
        errors.push("Region must be specified");
    }

    if !errors.is_empty() {
        return Err(MyError::Custom {
            message: format!("Configuration validation failed: {}", errors.join(", ")),
        });
    }

    Ok(())
}
```

#### Contextual Error Information

```rust
// Add operation context to errors
#[tracing::instrument(skip(self))]
async fn upload_file(&self, bucket: &str, key: &str, data: Vec<u8>) -> Result<()> {
    self.storage
        .put_object(bucket, key, data)
        .await
        .map_err(|e| MyError::Custom {
            message: format!("Failed to upload {}/{}: {}", bucket, key, e),
        })
}
```
## Performance Optimization Guidelines

### 1. Memory Management

- Use `Bytes` instead of `Vec<u8>` for zero-copy operations
- Avoid unnecessary cloning; pass references instead
- Use `Arc` for sharing large objects
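The cheap-clone idea behind `Bytes` can be sketched with `Arc` from the standard library (`bytes::Bytes` adds zero-copy slicing on top of the same reference-counted sharing):

```rust
use std::sync::Arc;

fn main() {
    // The 1 KiB buffer is allocated once...
    let payload: Arc<[u8]> = vec![0u8; 1024].into();
    // ...and cloning only bumps a reference count; no data is copied.
    let shared = Arc::clone(&payload);
    println!("{} bytes, {} owners", shared.len(), Arc::strong_count(&payload));
}
```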
### 2. Concurrency Optimization

```rust
// Use join_all for concurrent operations
let futures = disks.iter().map(|disk| disk.operation());
let results = join_all(futures).await;
```
### 3. Caching Strategy

- Use `LazyLock` for global caches
- Implement an LRU cache to avoid unbounded memory growth
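A `LazyLock` global cache sketch (the name and contents are illustrative; a production cache should also bound its size, e.g. with an LRU policy):

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// Initialized on first access; shared process-wide.
static REGION_CACHE: LazyLock<Mutex<HashMap<String, String>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

fn main() {
    REGION_CACHE
        .lock()
        .unwrap()
        .insert("bucket-a".to_string(), "us-east-1".to_string());
    let region = REGION_CACHE.lock().unwrap().get("bucket-a").cloned();
    println!("{}", region.unwrap()); // us-east-1
}
```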
## Testing Guidelines

### 1. Unit Tests

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use test_case::test_case;

    #[tokio::test]
    async fn test_async_function() {
        let result = async_function().await;
        assert!(result.is_ok());
    }

    #[test_case("input1", "expected1")]
    #[test_case("input2", "expected2")]
    fn test_with_cases(input: &str, expected: &str) {
        assert_eq!(function(input), expected);
    }

    #[test]
    fn test_error_conversion() {
        use ecstore::error::StorageError;

        let storage_err = StorageError::BucketNotFound("test-bucket".to_string());
        let api_err: ApiError = storage_err.into();

        assert_eq!(api_err.code, S3ErrorCode::NoSuchBucket);
        assert!(api_err.message.contains("test-bucket"));
        assert!(api_err.source.is_some());
    }

    #[test]
    fn test_error_types() {
        let io_err = std::io::Error::new(std::io::ErrorKind::NotFound, "file not found");
        let my_err = MyError::Io(io_err);

        // Test error matching
        match my_err {
            MyError::Io(_) => {} // Expected
            _ => panic!("Unexpected error type"),
        }
    }

    #[test]
    fn test_error_context() {
        let result = process_with_context("nonexistent_file.txt");
        assert!(result.is_err());

        let err = result.unwrap_err();
        match err {
            MyError::Custom { message } => {
                assert!(message.contains("Failed to read"));
                assert!(message.contains("nonexistent_file.txt"));
            }
            _ => panic!("Expected Custom error"),
        }
    }
}
```
### 2. Integration Tests

- Use the `e2e_test` module for end-to-end testing
- Simulate real storage environments

### 3. Test Quality Standards

- Write meaningful test cases that verify actual functionality
- Avoid placeholder or debug content like "debug 111", "test test", etc.
- Use descriptive test names that clearly indicate what is being tested
- Each test should have a clear purpose and verify specific behavior
- Test data should be realistic and representative of actual use cases
## Cross-Platform Compatibility Guidelines

### 1. CPU Architecture Compatibility

- **Always consider multi-platform and CPU-architecture compatibility** when writing code
- Support major architectures: x86_64, aarch64 (ARM64), and other target platforms
- Use conditional compilation for architecture-specific code:

```rust
#[cfg(target_arch = "x86_64")]
fn optimized_x86_64_function() { /* x86_64-specific implementation */ }

#[cfg(target_arch = "aarch64")]
fn optimized_aarch64_function() { /* ARM64-specific implementation */ }

#[cfg(not(any(target_arch = "x86_64", target_arch = "aarch64")))]
fn generic_function() { /* Generic fallback implementation */ }
```

### 2. Platform-Specific Dependencies

- Use feature flags for platform-specific dependencies
- Provide fallback implementations for unsupported platforms
- Test on multiple architectures in the CI/CD pipeline
### 3. Endianness Considerations

- Use explicit byte-order conversion when dealing with binary data
- Prefer `to_le_bytes()` / `from_le_bytes()` for a consistent little-endian format
- Use the `byteorder` crate for complex binary format handling
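For example, pinning encoded metadata to little-endian makes the byte layout identical on every architecture:

```rust
fn main() {
    let object_size: u64 = 4096;
    // Encode with an explicit byte order...
    let encoded = object_size.to_le_bytes();
    // ...so decoding yields the same value on x86_64, aarch64, or big-endian hosts.
    let decoded = u64::from_le_bytes(encoded);
    assert_eq!(decoded, object_size);
    println!("{decoded}"); // 4096
}
```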
### 4. SIMD and Performance Optimizations

- Use portable SIMD libraries like `wide` or `packed_simd`
- Provide fallback implementations for non-SIMD architectures
- Use runtime feature detection when appropriate
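Runtime detection can be sketched with the standard library's `is_x86_feature_detected!` macro; non-x86 targets compile straight to the scalar fallback:

```rust
fn simd_backend() -> &'static str {
    #[cfg(target_arch = "x86_64")]
    {
        // Checked at runtime, so one binary serves CPUs with and without AVX2.
        if is_x86_feature_detected!("avx2") {
            return "avx2";
        }
    }
    "scalar"
}

fn main() {
    // Which path is selected depends on the host CPU.
    println!("selected backend: {}", simd_backend());
}
```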
## Security Guidelines

### 1. Memory Safety

- `unsafe` code is denied (`workspace.lints.rust.unsafe_code = "deny"`)
- Use `rustls` instead of `openssl`

### 2. Authentication and Authorization

```rust
// Use the IAM system for permission checks
let identity = iam.authenticate(&access_key, &secret_key).await?;
iam.authorize(&identity, &action, &resource).await?;
```
## Configuration Management Guidelines

### 1. Environment Variables

- Use the `RUSTFS_` prefix
- Support both configuration files and environment variables
- Provide reasonable default values
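Reading such a variable with a fallback default might look like this (the variable name and default value are illustrative):

```rust
use std::env;

fn listen_address() -> String {
    // Fall back to a sensible default when the variable is unset.
    env::var("RUSTFS_ADDRESS").unwrap_or_else(|_| "0.0.0.0:9000".to_string())
}

fn main() {
    println!("listening on {}", listen_address());
}
```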
### 2. Configuration Structure

```rust
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
    pub address: String,
    pub volumes: String,
    #[serde(default)]
    pub console_enable: bool,
}
```
## Dependency Management Guidelines

### 1. Workspace Dependencies

- Manage versions uniformly at the workspace level
- Use `workspace = true` to inherit configuration
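The inheritance can be sketched with two `Cargo.toml` fragments shown together (the crate chosen is illustrative):

```toml
# Root Cargo.toml: declare the version once for the whole workspace
[workspace.dependencies]
tokio = { version = "1", features = ["full"] }

# Member crate's Cargo.toml: inherit it with `workspace = true`
[dependencies]
tokio = { workspace = true }
```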
### 2. Feature Flags

```toml
[features]
default = ["file"]
gpu = ["dep:nvml-wrapper"]
kafka = ["dep:rdkafka"]
```
## Deployment and Operations Guidelines

### 1. Containerization

- Provide Dockerfile and docker-compose configurations
- Support multi-stage builds to optimize image size

### 2. Observability

- Integrate OpenTelemetry for distributed tracing
- Support Prometheus metrics collection
- Provide Grafana dashboards

### 3. Health Checks

```rust
// Implement a health check endpoint (the body is a sketch;
// HealthStatus is a project type)
async fn health_check() -> Result<HealthStatus> {
    // Check component status before reporting healthy
    let status = HealthStatus::default();
    Ok(status)
}
```
## Code Review Checklist

### 1. **Code Formatting and Quality (MANDATORY)**

- [ ] **Code is properly formatted** (`cargo fmt --all --check` passes)
- [ ] **All clippy warnings are resolved** (`cargo clippy --all-targets --all-features -- -D warnings` passes)
- [ ] **Code compiles successfully** (`cargo check --all-targets` passes)
- [ ] **Pre-commit hooks are working** and all checks pass
- [ ] **No formatting-related changes** mixed with functional changes (separate commits)

### 2. Functionality

- [ ] Are all error cases properly handled?
- [ ] Is there appropriate logging?
- [ ] Is there necessary test coverage?

### 3. Performance

- [ ] Are unnecessary memory allocations avoided?
- [ ] Are async operations used correctly?
- [ ] Are there potential deadlock risks?

### 4. Security

- [ ] Are input parameters properly validated?
- [ ] Are there appropriate permission checks?
- [ ] Is information leakage avoided?

### 5. Cross-Platform Compatibility

- [ ] Does the code work on different CPU architectures (x86_64, aarch64)?
- [ ] Are platform-specific features properly gated with conditional compilation?
- [ ] Is byte-order handling correct for binary data?
- [ ] Are there appropriate fallback implementations for unsupported platforms?

### 6. Code Commits and Documentation

- [ ] Does it comply with [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)?
- [ ] Is the commit title concise, in English, and under 72 characters?
- [ ] Is the PR description provided in copyable Markdown format?
## Common Patterns and Best Practices

### 1. Resource Management

```rust
// Use the RAII pattern for resource management
pub struct ResourceGuard {
    resource: Resource,
}

impl Drop for ResourceGuard {
    fn drop(&mut self) {
        // Clean up resources
    }
}
```

### 2. Dependency Injection

```rust
// Use the dependency injection pattern
pub struct Service {
    config: Arc<Config>,
    storage: Arc<dyn StorageAPI>,
}
```

### 3. Graceful Shutdown

```rust
// Implement graceful shutdown
async fn shutdown_gracefully(shutdown_rx: &mut Receiver<()>) {
    tokio::select! {
        _ = shutdown_rx.recv() => {
            info!("Received shutdown signal");
            // Perform cleanup operations
        }
        _ = tokio::time::sleep(SHUTDOWN_TIMEOUT) => {
            warn!("Shutdown timeout reached");
        }
    }
}
```
## Domain-Specific Guidelines

### 1. Storage Operations

- All storage operations must support erasure coding
- Implement read/write quorum mechanisms
- Support data integrity verification

### 2. Network Communication

- Use gRPC for internal service communication
- Support HTTP/HTTPS for the S3-compatible API
- Implement connection pooling and retry mechanisms

### 3. Metadata Management

- Use FlatBuffers for serialization
- Support version control and migration
- Implement metadata caching

These rules should serve as guiding principles when developing the RustFS project, ensuring code quality, performance, and maintainability.
### 4. Code Operations

#### Branch Management

- **🚨 CRITICAL: NEVER modify code directly on the main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
- **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
- **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
- **Always work on feature branches - NO EXCEPTIONS**
- Always check the `.cursorrules` file before starting to ensure you understand the project guidelines
- **MANDATORY workflow for ALL changes:**
  1. `git checkout main` (switch to the main branch)
  2. `git pull` (get the latest changes)
  3. `git checkout -b feat/your-feature-name` (create and switch to a feature branch)
  4. Make your changes ONLY on the feature branch
  5. Test thoroughly before committing
  6. Commit and push to the feature branch
  7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
  8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
- Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
- **Double-check the current branch before ANY commit: run `git branch` to ensure you're NOT on main/master**
- **Pull Request Requirements:**
  - All changes must be submitted via PR regardless of size or urgency
  - PRs must include a comprehensive description and testing information
  - PRs must pass all CI/CD checks before merging
  - PRs require at least one approval from code reviewers
  - Even hotfixes and emergency changes must go through the PR process
- **Enforcement:**
  - The main branch should be protected with branch protection rules
  - Direct pushes to main should be blocked by repository settings
  - Any accidental direct commits to main must be immediately reverted via PR
#### Development Workflow

## 🎯 **Core Development Principles**

- **🔴 Every change must be precise - don't modify unless you're confident**
  - Carefully analyze code logic and ensure complete understanding before making changes
  - When uncertain, prefer asking users or consulting documentation over blind modifications
  - Use small iterative steps, modify only necessary parts at a time
  - Evaluate impact scope before changes to ensure no new issues are introduced

- **🚀 GitHub PR creation prioritizes `gh` command usage**
  - Prefer the `gh pr create` command to create Pull Requests
  - Avoid having users manually create PRs through the web interface
  - Provide clear and professional PR titles and descriptions
  - Using `gh` commands ensures better integration and automation

## 📝 **Code Quality Requirements**

- Use English for all code comments, documentation, and variable names
- Write meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless test content like "debug 111" or placeholder values
- Before each change, carefully read the existing code to understand its structure and implementation; do not break existing logic or introduce new issues
- Ensure each change includes sufficient test cases to guarantee code correctness
- Do not arbitrarily modify numbers and constants in test cases; carefully analyze their meaning to keep the tests correct
- When writing or modifying tests, review existing test cases for descriptive naming and rigorous logic; if they fall short, improve them
- **Before committing any changes, run `cargo clippy --all-targets --all-features -- -D warnings` to ensure all code passes Clippy checks**
- After completing development, run `git add .` and then `git commit -m "feat: feature description"` or `git commit -m "fix: issue description"`, following [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)
- **Keep commit message titles concise and under 72 characters**; use the body for detailed explanations if needed
- After completing development, push the branch to the remote repository
- After each change, provide a brief summary of the modifications in the conversation (do not create summary files), following [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)
- Provide the change description needed for the PR in the conversation, following [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)
- **Always provide PR descriptions in English** after completing any changes, including:
  - A clear and concise title following the Conventional Commits format
  - A detailed description of what was changed and why
  - A list of key changes and improvements
  - Any breaking changes or migration notes if applicable
  - Testing information and verification steps
- **Provide PR descriptions in copyable markdown format**, enclosed in code blocks for easy one-click copying

## 🚫 AI Documentation Generation Restrictions

### Prohibited Summary Documents

- **Strictly forbid creating any form of AI-generated summary document**
- **Do not create documents full of emojis, heavily formatted tables, and typical AI-style prose**
- **Do not generate the following document types in the project:**
  - Benchmark summary documents (BENCHMARK*.md)
  - Implementation comparison documents (IMPLEMENTATION_COMPARISON*.md)
  - Performance analysis reports
  - Architecture summary documents
  - Feature comparison documents
  - Any document loaded with emojis and decorative formatting
- **If documentation is needed, create it only when the user explicitly requests it, and keep it concise and practical**
- **Documentation should focus on genuinely needed information and avoid excessive formatting and decorative content**
- **Any AI-generated summary documents that are discovered should be deleted immediately**

### Allowed Document Types

- README.md (project introduction, kept concise)
- Technical documentation (only when explicitly needed)
- User manuals (only when explicitly needed)
- API documentation (generated from code)
- Changelog (CHANGELOG.md)

# RustFS Project AI Agents Rules

## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨

- **Code review requirement**: At least one approval needed
- **Automated reversal**: Direct commits to main will be automatically reverted

## 🎯 Core Development Principles (HIGHEST PRIORITY)

### Philosophy

#### Core Beliefs

- **Incremental progress over big bangs** - Small changes that compile and pass tests
- **Learning from existing code** - Study and plan before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Be boring and obvious

#### Simplicity Means

- Single responsibility per function/class
- Avoid premature abstractions
- No clever tricks - choose the boring solution
- If you need to explain it, it's too complex

### Process

#### 1. Planning & Staging

Break complex work into 3-5 stages. Document in `IMPLEMENTATION_PLAN.md`:

```markdown
## Stage N: [Name]
**Goal**: [Specific deliverable]
**Success Criteria**: [Testable outcomes]
**Tests**: [Specific test cases]
**Status**: [Not Started|In Progress|Complete]
```

- Update status as you progress
- Remove file when all stages are done

#### 2. Implementation Flow

1. **Understand** - Study existing patterns in codebase
2. **Test** - Write test first (red)
3. **Implement** - Minimal code to pass (green)
4. **Refactor** - Clean up with tests passing
5. **Commit** - With clear message linking to plan

#### 3. When Stuck (After 3 Attempts)

**CRITICAL**: Maximum 3 attempts per issue, then STOP.

1. **Document what failed**:
   - What you tried
   - Specific error messages
   - Why you think it failed

2. **Research alternatives**:
   - Find 2-3 similar implementations
   - Note different approaches used

3. **Question fundamentals**:
   - Is this the right abstraction level?
   - Can this be split into smaller problems?
   - Is there a simpler approach entirely?

4. **Try different angle**:
   - Different library/framework feature?
   - Different architectural pattern?
   - Remove abstraction instead of adding?

### Technical Standards

#### Architecture Principles

- **Composition over inheritance** - Use dependency injection
- **Interfaces over singletons** - Enable testing and flexibility
- **Explicit over implicit** - Clear data flow and dependencies
- **Test-driven when possible** - Never disable tests, fix them

#### Code Quality

- **Every commit must**:
  - Compile successfully
  - Pass all existing tests
  - Include tests for new functionality
  - Follow project formatting/linting

- **Before committing**:
  - Run formatters/linters
  - Self-review changes
  - Ensure commit message explains "why"

#### Error Handling

- Fail fast with descriptive messages
- Include context for debugging
- Handle errors at appropriate level
- Never silently swallow exceptions

### Decision Framework

When multiple valid approaches exist, choose based on:

1. **Testability** - Can I easily test this?
2. **Readability** - Will someone understand this in 6 months?
3. **Consistency** - Does this match project patterns?
4. **Simplicity** - Is this the simplest solution that works?
5. **Reversibility** - How hard to change later?

### Project Integration

#### Learning the Codebase

- Find 3 similar features/components
- Identify common patterns and conventions
- Use same libraries/utilities when possible
- Follow existing test patterns

#### Tooling

- Use project's existing build system
- Use project's test framework
- Use project's formatter/linter settings
- Don't introduce new tools without strong justification

### Quality Gates

#### Definition of Done

- [ ] Tests written and passing
- [ ] Code follows project conventions
- [ ] No linter/formatter warnings
- [ ] Commit messages are clear
- [ ] Implementation matches plan
- [ ] No TODOs without issue numbers

#### Test Guidelines

- Test behavior, not implementation
- One assertion per test when possible
- Clear test names describing scenario
- Use existing test utilities/helpers
- Tests should be deterministic

### Important Reminders

**NEVER**:

- Use `--no-verify` to bypass commit hooks
- Disable tests instead of fixing them
- Commit code that doesn't compile
- Make assumptions - verify with existing code

**ALWAYS**:

- Commit working code incrementally
- Update plan documentation as you go
- Learn from existing implementations
- Stop after 3 failed attempts and reassess

## 🚫 Competitor Keywords Prohibition

### Strictly Forbidden Keywords

**CRITICAL**: The following competitor keywords are absolutely forbidden in any code, documentation, comments, or project files:

- **minio** (and any variations like MinIO, MINIO)
- **aws-s3** (when referring to competing implementations)
- **ceph** (and any variations like Ceph, CEPH)
- **swift** (OpenStack Swift)
- **glusterfs** (and any variations like GlusterFS, Gluster)
- **seaweedfs** (and any variations like SeaweedFS, Seaweed)
- **garage** (and any variations like Garage)
- **zenko** (and any variations like Zenko)
- **scality** (and any variations like Scality)

### Enforcement

- **Code Review**: All PRs will be checked for competitor keywords
- **Automated Scanning**: CI/CD pipeline will scan for forbidden keywords
- **Immediate Rejection**: Any PR containing competitor keywords will be immediately rejected
- **Documentation**: All documentation must use generic terms like "S3-compatible storage" instead of specific competitor names

### Acceptable Alternatives

Instead of competitor names, use these generic terms:

- "S3-compatible storage system"
- "Object storage solution"
- "Distributed storage platform"
- "Cloud storage service"
- "Storage backend"

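
The automated scan could look something like this sketch (a hypothetical helper — the actual CI check may well be a grep-based script; `swift` and `garage` are omitted from the list below because they collide with common English words and would need context-aware rules):

```rust
/// Keywords that are never acceptable in code or docs, regardless of context.
/// (Illustrative subset of the forbidden list above.)
const FORBIDDEN: &[&str] = &["minio", "ceph", "glusterfs", "seaweedfs", "zenko", "scality"];

/// Return the first forbidden keyword found in `text`, case-insensitively.
fn find_forbidden_keyword(text: &str) -> Option<&'static str> {
    let lower = text.to_lowercase();
    FORBIDDEN.iter().copied().find(|kw| lower.contains(*kw))
}
```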
## Project Overview

Before every commit, you **MUST**:

1. **Format your code**:

   ```bash
   cargo fmt --all
   ```

2. **Verify formatting**:

   ```bash
   cargo fmt --all --check
   ```

3. **Pass clippy checks**:

   ```bash
   cargo clippy --all-targets --all-features -- -D warnings
   ```

4. **Ensure compilation**:

   ```bash
   cargo check --all-targets
   ```

## Asynchronous Programming Guidelines

### 1. Trait Definition

```rust
#[async_trait::async_trait]
pub trait StorageAPI: Send + Sync {
    async fn get_object(&self, bucket: &str, object: &str) -> Result<ObjectInfo>;
}
```

### 2. Error Handling

```rust
// Use the ? operator to propagate errors
async fn example_function() -> Result<()> {
    let data = read_file("path").await?;
    process_data(data).await?;
    Ok(())
}
```

### 3. Concurrency Control

- Comprehensive use of the `tokio` async runtime
- Prioritize `async/await` syntax
- Use `async-trait` for async methods in traits
- Avoid blocking operations, use `spawn_blocking` when necessary
- Use `Arc` and `Mutex`/`RwLock` for shared state management
- Prioritize async locks from `tokio::sync`
- Avoid holding locks for long periods

## Logging and Tracing Guidelines

### 1. Tracing Usage

```rust
#[tracing::instrument(skip(self, data))]
async fn process_data(&self, data: &[u8]) -> Result<()> {
    info!("Processing {} bytes", data.len());
    // Implementation logic
    Ok(())
}
```

### 2. Log Levels

- `error!`: System errors requiring immediate attention
- `warn!`: Warning information that may affect functionality
- `info!`: Important business information
- `debug!`: Debug information for development use
- `trace!`: Detailed execution paths

### 3. Structured Logging

```rust
info!(
    counter.rustfs_api_requests_total = 1_u64,
    key_request_method = %request.method(),
    key_request_uri_path = %request.uri().path(),
    "API request processed"
);
```

- Use structured logging with key-value pairs for better observability

## Error Handling Guidelines

### 1. Error Type Definition

```rust
// Use thiserror for module-specific error types
#[derive(thiserror::Error, Debug)]
pub enum MyError {
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Storage error: {0}")]
    Storage(#[from] ecstore::error::StorageError),

    #[error("Custom error: {message}")]
    Custom { message: String },

    #[error("File not found: {path}")]
    FileNotFound { path: String },

    #[error("Invalid configuration: {0}")]
    InvalidConfig(String),
}

// Provide Result type alias for the module
pub type Result<T> = core::result::Result<T, MyError>;
```

### 2. Error Helper Methods

```rust
impl MyError {
    /// Create error from any compatible error type
    pub fn other<E>(error: E) -> Self
    where
        E: Into<Box<dyn std::error::Error + Send + Sync>>,
    {
        MyError::Io(std::io::Error::other(error))
    }
}
```

### 3. Error Context and Propagation

```rust
// Use the ? operator for clean error propagation
async fn example_function() -> Result<()> {
    let data = read_file("path").await?;
    process_data(data).await?;
    Ok(())
}

// Add context to errors
fn process_with_context(path: &str) -> Result<()> {
    std::fs::read(path).map_err(|e| MyError::Custom {
        message: format!("Failed to read {path}: {e}"),
    })?;
    Ok(())
}
```

- Use `thiserror` for module-specific error types
- Support error chains and context information through `#[from]` and `#[source]` attributes
- Use `Result<T>` type aliases for consistency within each module
- Error conversion between modules should use explicit `From` implementations
- Follow the pattern: `pub type Result<T> = core::result::Result<T, Error>`
- Use `#[error("description")]` attributes for clear error messages
- Support error downcasting when needed through `other()` helper methods
- Implement `Clone` for errors when required by the domain logic

## Performance Optimization Guidelines

### 1. Memory Management

- Use `Bytes` instead of `Vec<u8>` for zero-copy operations
- Avoid unnecessary cloning, use reference passing
- Use `Arc` for sharing large objects

### 2. Concurrency Optimization

```rust
// Use join_all for concurrent operations
let futures = disks.iter().map(|disk| disk.operation());
let results = join_all(futures).await;
```

### 3. Caching Strategy

- Use `LazyLock` for global caching
- Implement an LRU cache to avoid unbounded memory growth

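
A global cache behind `LazyLock` can be sketched like this (requires Rust 1.80+; the key/value types are placeholders, and a production cache would add LRU eviction to bound memory):

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

/// Global cache initialized lazily on first access.
static CACHE: LazyLock<Mutex<HashMap<String, String>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

/// Return the cached value for `key`, computing and storing it on a miss.
fn cache_get_or_insert(key: &str, compute: impl FnOnce() -> String) -> String {
    let mut map = CACHE.lock().expect("cache lock poisoned");
    map.entry(key.to_string()).or_insert_with(compute).clone()
}
```

Note that `compute` only runs on a cache miss; a second lookup for the same key returns the stored value.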
## Testing Guidelines

### 1. Unit Tests

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use test_case::test_case;

    #[tokio::test]
    async fn test_async_function() {
        let result = async_function().await;
        assert!(result.is_ok());
    }

    #[test_case("input1", "expected1")]
    #[test_case("input2", "expected2")]
    fn test_with_cases(input: &str, expected: &str) {
        assert_eq!(function(input), expected);
    }
}
```

### 2. Integration Tests

- Use `e2e_test` module for end-to-end testing
- Simulate real storage environments

### 3. Test Quality Standards

- Write meaningful test cases that verify actual functionality
- Avoid placeholder or debug content like "debug 111", "test test", etc.
- Use descriptive test names that clearly indicate what is being tested
- Each test should have a clear purpose and verify specific behavior
- Test data should be realistic and representative of actual use cases

## Cross-Platform Compatibility Guidelines

### 1. CPU Architecture Compatibility

- **Always consider multi-platform and different CPU architecture compatibility** when writing code
- Support major architectures: x86_64, aarch64 (ARM64), and other target platforms
- Use conditional compilation for architecture-specific code:

```rust
#[cfg(target_arch = "x86_64")]
fn optimized_x86_64_function() { /* x86_64 specific implementation */ }

#[cfg(target_arch = "aarch64")]
fn optimized_aarch64_function() { /* ARM64 specific implementation */ }

#[cfg(not(any(target_arch = "x86_64", target_arch = "aarch64")))]
fn generic_function() { /* Generic fallback implementation */ }
```

### 2. Platform-Specific Dependencies

- Use conditional compilation for architecture-specific code
- Use feature flags for platform-specific dependencies
- Provide fallback implementations for unsupported platforms
- Test on multiple architectures in CI/CD pipeline

### 3. Endianness Considerations

- Use explicit byte order conversion when dealing with binary data
- Prefer `to_le_bytes()`, `from_le_bytes()` for consistent little-endian format
- Use `byteorder` crate for complex binary format handling

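
For example, a length prefix written with `to_le_bytes()` decodes to the same value on any architecture (a minimal sketch; actual RustFS on-disk formats may differ):

```rust
/// Encode a length prefix in explicit little-endian byte order so the bytes
/// mean the same thing on x86_64 and aarch64 alike.
fn encode_len(len: u64) -> [u8; 8] {
    len.to_le_bytes()
}

/// Decode a little-endian length prefix back into a native integer.
fn decode_len(buf: [u8; 8]) -> u64 {
    u64::from_le_bytes(buf)
}
```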
### 4. SIMD and Performance Optimizations

- Use portable SIMD libraries like `wide` or `packed_simd`
- Provide fallback implementations for non-SIMD architectures
- Use runtime feature detection when appropriate

## Security Guidelines

### 1. Memory Safety

- Disable `unsafe` code (`workspace.lints.rust.unsafe_code = "deny"`)
- Use `rustls` instead of `openssl`

### 2. Authentication and Authorization

```rust
// Use IAM system for permission checks
let identity = iam.authenticate(&access_key, &secret_key).await?;
iam.authorize(&identity, &action, &resource).await?;
```

- Use IAM system for permission checks
- Validate input parameters properly
- Implement appropriate permission checks
- Avoid information leakage

## Configuration Management Guidelines

### 1. Environment Variables

- Use `RUSTFS_` prefix for environment variables
- Support both configuration files and environment variables
- Provide reasonable default values

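
Reading a prefixed variable with a fallback can be sketched as below (the `RUSTFS_` prefix follows the rule above, but the helper itself and the example variable name are hypothetical):

```rust
use std::env;

/// Read `RUSTFS_<name>` from the environment, falling back to `default`
/// when the variable is unset or not valid UTF-8.
fn env_or_default(name: &str, default: &str) -> String {
    env::var(format!("RUSTFS_{name}")).unwrap_or_else(|_| default.to_string())
}
```

For instance, `env_or_default("ADDRESS", "0.0.0.0:9000")` would consult a hypothetical `RUSTFS_ADDRESS` variable before falling back to the default bind address.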
### 2. Configuration Structure

```rust
#[derive(Debug, Deserialize, Clone)]
pub struct Config {
    pub address: String,
    pub volumes: String,
    #[serde(default)]
    pub console_enable: bool,
}
```

- Use `serde` for configuration serialization/deserialization

## Dependency Management Guidelines

### 1. Workspace Dependencies

- Manage versions uniformly at workspace level
- Use `workspace = true` to inherit configuration

### 2. Feature Flags

```toml
[features]
default = ["file"]
gpu = ["dep:nvml-wrapper"]
kafka = ["dep:rdkafka"]
```

- Use feature flags for optional dependencies
- Don't introduce new tools without strong justification

## Deployment and Operations Guidelines

### 1. Containerization

- Provide Dockerfile and docker-compose configuration
- Support multi-stage builds to optimize image size

### 2. Observability

- Integrate OpenTelemetry for distributed tracing
- Support Prometheus metrics collection
- Provide Grafana dashboards

### 3. Health Checks

```rust
// Implement a health check endpoint (HealthStatus is illustrative)
async fn health_check() -> Result<HealthStatus> {
    // Check the status of each component and aggregate the result
    Ok(HealthStatus::Healthy)
}
```

- Implement health check endpoints

## Code Review Checklist

- [ ] Commit titles should be concise and in English, avoid Chinese
- [ ] Is the PR description provided in copyable markdown format for easy copying?

### 7. Competitor Keywords Check

- [ ] No competitor keywords found in code, comments, or documentation
- [ ] All references use generic terms like "S3-compatible storage"
- [ ] No specific competitor product names mentioned

These rules should serve as guiding principles when developing the RustFS project, ensuring code quality, performance, and maintainability.

CLAUDE.md (deleted, 68 lines)

# Claude AI Rules for RustFS Project

## Core Rules Reference

This project follows the comprehensive AI coding rules defined in `.rules.md`. Please refer to that file for the complete set of development guidelines, coding standards, and best practices.

## Claude-Specific Configuration

When using Claude for this project, ensure you:

1. **Review the unified rules**: Always check `.rules.md` for the latest project guidelines
2. **Follow branch protection**: Never attempt to commit directly to main/master branch
3. **Use English**: All code comments, documentation, and variable names must be in English
4. **Clean code practices**: Only make modifications you're confident about
5. **Test thoroughly**: Ensure all changes pass formatting, linting, and testing requirements
6. **Clean up after yourself**: Remove any temporary scripts or test files created during the session

## Quick Reference

### Critical Rules

- 🚫 **NEVER commit directly to main/master branch**
- ✅ **ALWAYS work on feature branches**
- 📝 **ALWAYS use English for code and documentation**
- 🧹 **ALWAYS clean up temporary files after use**
- 🎯 **ONLY make confident, necessary modifications**

### Pre-commit Checklist

```bash
# Before committing, always run:
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo check --all-targets
cargo test
```

### Branch Workflow

```bash
git checkout main
git pull origin main
git checkout -b feat/your-feature-name
# Make your changes
git add .
git commit -m "feat: your feature description"
git push origin feat/your-feature-name
gh pr create
```

## Claude-Specific Best Practices

1. **Task Analysis**: Always thoroughly analyze the task before starting implementation
2. **Minimal Changes**: Make only the necessary changes to accomplish the task
3. **Clear Communication**: Provide clear explanations of changes and their rationale
4. **Error Prevention**: Verify code correctness before suggesting changes
5. **Documentation**: Ensure all code changes are properly documented in English

## Important Notes

- This file serves as an entry point for Claude AI
- All detailed rules and guidelines are maintained in `.rules.md`
- Updates to coding standards should be made in `.rules.md` to ensure consistency across all AI tools
- When in doubt, always refer to `.rules.md` for authoritative guidance
- Claude should prioritize code quality, safety, and maintainability over speed

## See Also

- [.rules.md](./.rules.md) - Complete AI coding rules and guidelines
- [CONTRIBUTING.md](./CONTRIBUTING.md) - Contribution guidelines
- [README.md](./README.md) - Project overview and setup instructions

@@ -91,9 +91,9 @@ impl CheckpointManager {
             }
         }

-        let checkpoint_file = data_dir.join(format!("scanner_checkpoint_{}.json", node_id));
-        let backup_file = data_dir.join(format!("scanner_checkpoint_{}.backup", node_id));
-        let temp_file = data_dir.join(format!("scanner_checkpoint_{}.tmp", node_id));
+        let checkpoint_file = data_dir.join(format!("scanner_checkpoint_{node_id}.json"));
+        let backup_file = data_dir.join(format!("scanner_checkpoint_{node_id}.backup"));
+        let temp_file = data_dir.join(format!("scanner_checkpoint_{node_id}.tmp"));

         Self {
             checkpoint_file,
@@ -116,21 +116,21 @@ impl CheckpointManager {
         let checkpoint_data = CheckpointData::new(progress.clone(), self.node_id.clone());

         let json_data = serde_json::to_string_pretty(&checkpoint_data)
-            .map_err(|e| Error::Serialization(format!("serialize checkpoint failed: {}", e)))?;
+            .map_err(|e| Error::Serialization(format!("serialize checkpoint failed: {e}")))?;

         tokio::fs::write(&self.temp_file, json_data)
             .await
-            .map_err(|e| Error::IO(format!("write temp checkpoint file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("write temp checkpoint file failed: {e}")))?;

         if self.checkpoint_file.exists() {
             tokio::fs::copy(&self.checkpoint_file, &self.backup_file)
                 .await
-                .map_err(|e| Error::IO(format!("backup checkpoint file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("backup checkpoint file failed: {e}")))?;
         }

         tokio::fs::rename(&self.temp_file, &self.checkpoint_file)
             .await
-            .map_err(|e| Error::IO(format!("replace checkpoint file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("replace checkpoint file failed: {e}")))?;

         *self.last_save.write().await = now;

@@ -183,17 +183,17 @@ impl CheckpointManager {
     /// load checkpoint from file
     async fn load_checkpoint_from_file(&self, file_path: &Path) -> Result<ScanProgress> {
         if !file_path.exists() {
-            return Err(Error::NotFound(format!("checkpoint file not exists: {:?}", file_path)));
+            return Err(Error::NotFound(format!("checkpoint file not exists: {file_path:?}")));
         }

         // read file content
         let content = tokio::fs::read_to_string(file_path)
             .await
-            .map_err(|e| Error::IO(format!("read checkpoint file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("read checkpoint file failed: {e}")))?;

         // deserialize
         let checkpoint_data: CheckpointData =
-            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize checkpoint failed: {}", e)))?;
+            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize checkpoint failed: {e}")))?;

         // validate checkpoint data
         self.validate_checkpoint(&checkpoint_data)?;
@@ -223,7 +223,7 @@ impl CheckpointManager {

         // checkpoint is too old (more than 24 hours), may be data expired
         if checkpoint_age > Duration::from_secs(24 * 3600) {
-            return Err(Error::InvalidCheckpoint(format!("checkpoint data is too old: {:?}", checkpoint_age)));
+            return Err(Error::InvalidCheckpoint(format!("checkpoint data is too old: {checkpoint_age:?}")));
         }

         // validate version compatibility
@@ -245,21 +245,21 @@ impl CheckpointManager {
         if self.checkpoint_file.exists() {
             tokio::fs::remove_file(&self.checkpoint_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete main checkpoint file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete main checkpoint file failed: {e}")))?;
         }

         // delete backup file
         if self.backup_file.exists() {
             tokio::fs::remove_file(&self.backup_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete backup checkpoint file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete backup checkpoint file failed: {e}")))?;
         }

         // delete temp file
         if self.temp_file.exists() {
             tokio::fs::remove_file(&self.temp_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete temp checkpoint file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete temp checkpoint file failed: {e}")))?;
         }

         info!("cleaned up all checkpoint files");
@@ -274,14 +274,14 @@ impl CheckpointManager {

         let metadata = tokio::fs::metadata(&self.checkpoint_file)
             .await
-            .map_err(|e| Error::IO(format!("get checkpoint file metadata failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("get checkpoint file metadata failed: {e}")))?;

         let content = tokio::fs::read_to_string(&self.checkpoint_file)
             .await
-            .map_err(|e| Error::IO(format!("read checkpoint file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("read checkpoint file failed: {e}")))?;

         let checkpoint_data: CheckpointData =
-            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize checkpoint failed: {}", e)))?;
+            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize checkpoint failed: {e}")))?;

         Ok(Some(CheckpointInfo {
             file_size: metadata.len(),
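The checkpoint save path above follows a write-temp, back-up, atomic-rename sequence. A minimal synchronous sketch of the same idea using only `std::fs` (the `save_atomically` helper and file names are illustrative, not part of the RustFS API, which uses `tokio::fs` and its own error types):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Persist `data` with the temp-write / backup / atomic-rename sequence.
fn save_atomically(dir: &Path, stem: &str, data: &str) -> io::Result<()> {
    let target = dir.join(format!("{stem}.json"));
    let backup = dir.join(format!("{stem}.backup"));
    let temp = dir.join(format!("{stem}.tmp"));

    // 1. Write the new state to a temp file so a crash never corrupts `target`.
    fs::write(&temp, data)?;

    // 2. Keep the previous state as a fallback for recovery.
    if target.exists() {
        fs::copy(&target, &backup)?;
    }

    // 3. Atomically replace the live file; rename is atomic on POSIX
    //    within a single filesystem.
    fs::rename(&temp, &target)?;
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir();
    save_atomically(&dir, "scanner_checkpoint_demo", "{\"v\":1}")?;
    save_atomically(&dir, "scanner_checkpoint_demo", "{\"v\":2}")?;
    // Live file holds the latest state, backup holds the previous one.
    assert_eq!(fs::read_to_string(dir.join("scanner_checkpoint_demo.json"))?, "{\"v\":2}");
    assert_eq!(fs::read_to_string(dir.join("scanner_checkpoint_demo.backup"))?, "{\"v\":1}");
    Ok(())
}
```

The design choice: readers only ever see a complete old file or a complete new file, never a partial write.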
@@ -2580,11 +2580,11 @@ mod tests {
         let temp_dir = std::path::PathBuf::from(test_base_dir);
         if temp_dir.exists() {
             if let Err(e) = fs::remove_dir_all(&temp_dir) {
-                panic!("Failed to remove test directory: {}", e);
+                panic!("Failed to remove test directory: {e}");
             }
         }
         if let Err(e) = fs::create_dir_all(&temp_dir) {
-            panic!("Failed to create test directory: {}", e);
+            panic!("Failed to create test directory: {e}");
         }

         // create 4 disk dirs
@@ -2597,7 +2597,7 @@ mod tests {

         for disk_path in &disk_paths {
             if let Err(e) = fs::create_dir_all(disk_path) {
-                panic!("Failed to create disk directory {:?}: {}", disk_path, e);
+                panic!("Failed to create disk directory {disk_path:?}: {e}");
             }
         }

@@ -2768,13 +2768,13 @@ mod tests {
         // Try to create bucket, handle case where it might already exist
         match ecstore.make_bucket(bucket, &Default::default()).await {
             Ok(_) => {
-                println!("Successfully created bucket: {}", bucket);
+                println!("Successfully created bucket: {bucket}");
             }
             Err(rustfs_ecstore::error::StorageError::BucketExists(_)) => {
-                println!("Bucket {} already exists, continuing with test", bucket);
+                println!("Bucket {bucket} already exists, continuing with test");
             }
             Err(e) => {
-                panic!("Failed to create bucket {}: {}", bucket, e);
+                panic!("Failed to create bucket {bucket}: {e}");
             }
         }

@@ -2833,14 +2833,13 @@ mod tests {
             if retry_count >= 3 {
                 // If we still can't load after 3 retries, log and skip this verification
                 println!(
-                    "Warning: Could not load persisted data after {} retries: {}. Skipping persistence verification.",
-                    retry_count, e
+                    "Warning: Could not load persisted data after {retry_count} retries: {e}. Skipping persistence verification."
                 );
                 println!("This is likely due to concurrent test execution and doesn't indicate a functional issue.");
                 // Just continue with the rest of the test
                 break DataUsageInfo::new(); // Use empty data to skip assertions
             }
-            println!("Retry {} loading persisted data after error: {}", retry_count, e);
+            println!("Retry {retry_count} loading persisted data after error: {e}");
             tokio::time::sleep(std::time::Duration::from_millis(200)).await;
         }
     }
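The bucket-creation hunk above tolerates `BucketExists` so repeated test setup stays idempotent. A self-contained sketch of that match shape against a toy in-memory store (the enum and helpers are illustrative; the real code also has a catch-all arm that panics on other errors):

```rust
// Toy in-memory "store": creating a duplicate bucket fails with BucketExists.
#[derive(Debug)]
enum StorageError {
    BucketExists(String),
}

fn make_bucket(existing: &mut Vec<String>, name: &str) -> Result<(), StorageError> {
    if existing.iter().any(|b| b == name) {
        return Err(StorageError::BucketExists(name.to_string()));
    }
    existing.push(name.to_string());
    Ok(())
}

// Treat "already exists" as success so setup can run more than once.
fn ensure_bucket(existing: &mut Vec<String>, bucket: &str) {
    match make_bucket(existing, bucket) {
        Ok(_) => println!("Successfully created bucket: {bucket}"),
        Err(StorageError::BucketExists(_)) => {
            println!("Bucket {bucket} already exists, continuing");
        }
    }
}

fn main() {
    let mut store = Vec::new();
    ensure_bucket(&mut store, "test-bucket");
    ensure_bucket(&mut store, "test-bucket"); // second call is tolerated, not a panic
    assert_eq!(store.len(), 1);
}
```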
@@ -121,9 +121,9 @@ impl LocalStatsManager {
             }
         }

-        let stats_file = data_dir.join(format!("scanner_stats_{}.json", node_id));
-        let backup_file = data_dir.join(format!("scanner_stats_{}.backup", node_id));
-        let temp_file = data_dir.join(format!("scanner_stats_{}.tmp", node_id));
+        let stats_file = data_dir.join(format!("scanner_stats_{node_id}.json"));
+        let backup_file = data_dir.join(format!("scanner_stats_{node_id}.backup"));
+        let temp_file = data_dir.join(format!("scanner_stats_{node_id}.tmp"));

         Self {
             node_id: node_id.to_string(),
@@ -172,10 +172,10 @@ impl LocalStatsManager {
     async fn load_stats_from_file(&self, file_path: &Path) -> Result<LocalScanStats> {
         let content = tokio::fs::read_to_string(file_path)
             .await
-            .map_err(|e| Error::IO(format!("read stats file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("read stats file failed: {e}")))?;

         let stats: LocalScanStats =
-            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize stats data failed: {}", e)))?;
+            serde_json::from_str(&content).map_err(|e| Error::Serialization(format!("deserialize stats data failed: {e}")))?;

         Ok(stats)
     }
@@ -194,24 +194,24 @@ impl LocalStatsManager {

         // serialize
         let json_data = serde_json::to_string_pretty(&stats)
-            .map_err(|e| Error::Serialization(format!("serialize stats data failed: {}", e)))?;
+            .map_err(|e| Error::Serialization(format!("serialize stats data failed: {e}")))?;

         // atomic write
         tokio::fs::write(&self.temp_file, json_data)
             .await
-            .map_err(|e| Error::IO(format!("write temp stats file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("write temp stats file failed: {e}")))?;

         // backup existing file
         if self.stats_file.exists() {
             tokio::fs::copy(&self.stats_file, &self.backup_file)
                 .await
-                .map_err(|e| Error::IO(format!("backup stats file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("backup stats file failed: {e}")))?;
         }

         // atomic replace
         tokio::fs::rename(&self.temp_file, &self.stats_file)
             .await
-            .map_err(|e| Error::IO(format!("replace stats file failed: {}", e)))?;
+            .map_err(|e| Error::IO(format!("replace stats file failed: {e}")))?;

         *self.last_save.write().await = now;

@@ -374,21 +374,21 @@ impl LocalStatsManager {
         if self.stats_file.exists() {
             tokio::fs::remove_file(&self.stats_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete stats file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete stats file failed: {e}")))?;
         }

         // delete backup file
         if self.backup_file.exists() {
             tokio::fs::remove_file(&self.backup_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete backup stats file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete backup stats file failed: {e}")))?;
         }

         // delete temp file
         if self.temp_file.exists() {
             tokio::fs::remove_file(&self.temp_file)
                 .await
-                .map_err(|e| Error::IO(format!("delete temp stats file failed: {}", e)))?;
+                .map_err(|e| Error::IO(format!("delete temp stats file failed: {e}")))?;
         }

         info!("cleanup all stats files");

@@ -1007,7 +1007,7 @@ impl NodeScanner {
         let object_size = self.estimate_object_size(&entry_path).await;

         let entry = ScanResultEntry {
-            object_path: format!("{}/{}", bucket_name, object_name),
+            object_path: format!("{bucket_name}/{object_name}"),
             bucket_name: bucket_name.to_string(),
             object_size,
             is_healthy: true, // assume most objects are healthy
@@ -1047,7 +1047,7 @@ impl NodeScanner {
         // simulate some scan results
         for i in 0..5 {
             let entry = ScanResultEntry {
-                object_path: format!("/fallback-bucket/object_{}", i),
+                object_path: format!("/fallback-bucket/object_{i}"),
                 bucket_name: "fallback-bucket".to_string(),
                 object_size: 1024 * (i + 1),
                 is_healthy: true,
@@ -203,7 +203,7 @@ impl NodeClient {
             .get(url)
             .send()
             .await
-            .map_err(|e| Error::Other(format!("HTTP request failed: {}", e)))?;
+            .map_err(|e| Error::Other(format!("HTTP request failed: {e}")))?;

         if !response.status().is_success() {
             return Err(Error::Other(format!("HTTP status error: {}", response.status())));
@@ -212,7 +212,7 @@ impl NodeClient {
         let summary = response
             .json::<StatsSummary>()
             .await
-            .map_err(|e| Error::Serialization(format!("deserialize stats data failed: {}", e)))?;
+            .map_err(|e| Error::Serialization(format!("deserialize stats data failed: {e}")))?;

         Ok(summary)
     }
@@ -24,7 +24,7 @@ async fn test_endpoint_index_settings() -> anyhow::Result<()> {
     let temp_dir = TempDir::new()?;

     // create test disk paths
-    let disk_paths: Vec<_> = (0..4).map(|i| temp_dir.path().join(format!("disk{}", i))).collect();
+    let disk_paths: Vec<_> = (0..4).map(|i| temp_dir.path().join(format!("disk{i}"))).collect();

     for path in &disk_paths {
         tokio::fs::create_dir_all(path).await?;
@@ -60,9 +60,9 @@ async fn test_endpoint_index_settings() -> anyhow::Result<()> {

     // validate all endpoint indexes are in valid range
     for (i, ep) in endpoints.iter().enumerate() {
-        assert_eq!(ep.pool_idx, 0, "Endpoint {} pool_idx should be 0", i);
-        assert_eq!(ep.set_idx, 0, "Endpoint {} set_idx should be 0", i);
-        assert_eq!(ep.disk_idx, i as i32, "Endpoint {} disk_idx should be {}", i, i);
+        assert_eq!(ep.pool_idx, 0, "Endpoint {i} pool_idx should be 0");
+        assert_eq!(ep.set_idx, 0, "Endpoint {i} set_idx should be 0");
+        assert_eq!(ep.disk_idx, i as i32, "Endpoint {i} disk_idx should be {i}");
         println!(
             "Endpoint {} indices are valid: pool={}, set={}, disk={}",
             i, ep.pool_idx, ep.set_idx, ep.disk_idx
@@ -270,15 +270,15 @@ async fn test_performance_impact_measurement() {
     };

     println!("Performance impact measurement:");
-    println!(" Baseline duration: {:?}", baseline_duration);
-    println!(" With scanner duration: {:?}", with_scanner_duration);
-    println!(" Overhead: {} ms", overhead_ms);
-    println!(" Impact percentage: {:.2}%", impact_percentage);
+    println!(" Baseline duration: {baseline_duration:?}");
+    println!(" With scanner duration: {with_scanner_duration:?}");
+    println!(" Overhead: {overhead_ms} ms");
+    println!(" Impact percentage: {impact_percentage:.2}%");
     println!(" Meets optimization goals: {}", benchmark.meets_optimization_goals());

     // Verify optimization target (business impact < 10%)
     // Note: In real environment this test may need longer time and real load
-    assert!(impact_percentage < 50.0, "Performance impact too high: {:.2}%", impact_percentage);
+    assert!(impact_percentage < 50.0, "Performance impact too high: {impact_percentage:.2}%");

     io_monitor.stop().await;
 }
@@ -308,7 +308,7 @@ async fn test_concurrent_scanner_operations() {
     tokio::spawn(async move {
         for _i in 0..5 {
             if let Err(e) = scanner.force_save_checkpoint().await {
-                eprintln!("Checkpoint save failed: {}", e);
+                eprintln!("Checkpoint save failed: {e}");
             }
             tokio::time::sleep(Duration::from_millis(100)).await;
         }
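The measurement above reports an overhead in milliseconds and an impact percentage derived from two wall-clock durations. One plausible form of that arithmetic in isolation (the sample durations and variable names are made up; the real computation happens inside the elided `benchmark` block):

```rust
use std::time::Duration;

fn main() {
    // Hypothetical measurements: baseline vs. run with the scanner active.
    let baseline_duration = Duration::from_millis(200);
    let with_scanner_duration = Duration::from_millis(230);

    // Overhead is the extra wall-clock time attributable to the scanner.
    let overhead_ms = with_scanner_duration
        .as_millis()
        .saturating_sub(baseline_duration.as_millis());

    // Impact is that overhead relative to the baseline, as a percentage.
    let impact_percentage = overhead_ms as f64 / baseline_duration.as_millis() as f64 * 100.0;

    println!(" Baseline duration: {baseline_duration:?}");
    println!(" Overhead: {overhead_ms} ms");
    println!(" Impact percentage: {impact_percentage:.2}%");
    assert!(impact_percentage < 50.0, "Performance impact too high: {impact_percentage:.2}%");
}
```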
@@ -319,14 +319,14 @@ async fn test_optimized_performance_characteristics() {

     // Create several test objects
     for i in 0..10 {
-        let object_name = format!("perf-object-{}", i);
+        let object_name = format!("perf-object-{i}");
         let test_data = vec![b'A' + (i % 26) as u8; 1024 * (i + 1)]; // Variable size objects
         let mut put_reader = PutObjReader::from_vec(test_data);
         let object_opts = rustfs_ecstore::store_api::ObjectOptions::default();
         ecstore
             .put_object(bucket_name, &object_name, &mut put_reader, &object_opts)
             .await
-            .unwrap_or_else(|_| panic!("Failed to create object {}", object_name));
+            .unwrap_or_else(|_| panic!("Failed to create object {object_name}"));
     }

     // Create optimized scanner
@@ -340,7 +340,7 @@ async fn test_optimized_performance_characteristics() {
     let scan_result = scanner.scan_cycle().await;
     let scan_duration = start_time.elapsed();

-    println!("Optimized scan completed in: {:?}", scan_duration);
+    println!("Optimized scan completed in: {scan_duration:?}");
     assert!(scan_result.is_ok(), "Performance scan should succeed");

     // Verify the scan was reasonably fast (should be faster than old concurrent scanner)
@@ -359,11 +359,11 @@ async fn test_optimized_performance_characteristics() {
     let _scan_result2 = scanner.scan_cycle().await;
     let scan_duration2 = start_time2.elapsed();

-    println!("Second optimized scan completed in: {:?}", scan_duration2);
+    println!("Second optimized scan completed in: {scan_duration2:?}");

     // Second scan should be similar or faster due to caching
     let performance_ratio = scan_duration2.as_millis() as f64 / scan_duration.as_millis() as f64;
-    println!("Performance ratio (second/first): {:.2}", performance_ratio);
+    println!("Performance ratio (second/first): {performance_ratio:.2}");

     // Clean up
     let _ = std::fs::remove_dir_all(std::path::Path::new(TEST_DIR_PERF));
@@ -404,7 +404,7 @@ async fn test_optimized_load_balancing_and_throttling() {
     ];

     for (expected_level, latency, qps, error_rate, connections) in load_scenarios {
-        println!("Testing load scenario: {:?}", expected_level);
+        println!("Testing load scenario: {expected_level:?}");

         // Update business metrics to simulate load
         node_scanner
@@ -416,7 +416,7 @@ async fn test_optimized_load_balancing_and_throttling() {

         // Get current load level
         let current_level = io_monitor.get_business_load_level().await;
-        println!("Detected load level: {:?}", current_level);
+        println!("Detected load level: {current_level:?}");

         // Get throttling decision
         let _current_metrics = io_monitor.get_current_metrics().await;
@@ -298,7 +298,7 @@ async fn test_scanner_performance_impact() {
     let throttle_stats = throttler.get_throttle_stats().await;

     println!("Performance test results:");
-    println!(" Load level: {:?}", load_level);
+    println!(" Load level: {load_level:?}");
     println!(" Throttle decisions: {}", throttle_stats.total_decisions);
     println!(" Average delay: {:?}", throttle_stats.average_delay);