diff --git a/README.md b/README.md
index b6e5ae37..82c33139 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@ English | 简体中文 |
français |
日本語 |
한국어 |
- Português |
+ Portuguese |
Русский
diff --git a/crates/audit/tests/performance_test.rs b/crates/audit/tests/performance_test.rs
index 6921d21f..32dc87e0 100644
--- a/crates/audit/tests/performance_test.rs
+++ b/crates/audit/tests/performance_test.rs
@@ -94,7 +94,7 @@ async fn test_audit_log_dispatch_performance() {
let start_result = system.start(config).await;
if start_result.is_err() {
println!("AuditSystem failed to start: {start_result:?}");
- return; // 或 assert!(false, "AuditSystem failed to start");
+ return; // Alternatively: panic!("AuditSystem failed to start");
}
use chrono::Utc;
diff --git a/crates/crypto/src/encdec/tests.rs b/crates/crypto/src/encdec/tests.rs
index 03b4ace0..79e2dcbb 100644
--- a/crates/crypto/src/encdec/tests.rs
+++ b/crates/crypto/src/encdec/tests.rs
@@ -226,7 +226,7 @@ fn test_password_variations() -> Result<(), crate::Error> {
b"12345".as_slice(), // Numeric
b"!@#$%^&*()".as_slice(), // Special characters
b"\x00\x01\x02\x03".as_slice(), // Binary password
- "密码测试".as_bytes(), // Unicode password
+ "пароль тест".as_bytes(), // Unicode password
&[0xFF; 64], // Long binary password
];
diff --git a/crates/e2e_test/src/kms/README.md b/crates/e2e_test/src/kms/README.md
index 5293de90..ef25bdaf 100644
--- a/crates/e2e_test/src/kms/README.md
+++ b/crates/e2e_test/src/kms/README.md
@@ -1,267 +1,253 @@
# KMS End-to-End Tests
-本目录包含 RustFS KMS (Key Management Service) 的端到端集成测试,用于验证完整的 KMS 功能流程。
+This directory contains the integration suites used to validate the full RustFS KMS (Key Management Service) workflow.
-## 📁 测试文件说明
+## 📁 Test Overview
### `kms_local_test.rs`
-本地KMS后端的端到端测试,包含:
-- 自动启动和配置本地KMS后端
-- 通过动态配置API配置KMS服务
-- 测试SSE-C(客户端提供密钥)加密流程
-- 验证S3兼容的对象加密/解密操作
-- 密钥生命周期管理测试
+End-to-end coverage for the local KMS backend:
+- Auto-start and configure the local backend
+- Configure KMS through the dynamic configuration API
+- Verify SSE-C (client-provided keys)
+- Exercise S3-compatible encryption/decryption
+- Validate key lifecycle management
### `kms_vault_test.rs`
-Vault KMS后端的端到端测试,包含:
-- 自动启动Vault开发服务器
-- 配置Vault transit engine和密钥
-- 通过动态配置API配置KMS服务
-- 测试完整的Vault KMS集成
-- 验证Token认证和加密操作
+End-to-end coverage for the Vault backend:
+- Launch a Vault dev server automatically
+- Configure the transit engine and encryption keys
+- Configure KMS via the dynamic configuration API
+- Run the full Vault integration flow
+- Validate token authentication and encryption operations
### `kms_comprehensive_test.rs`
-**完整的KMS功能测试套件**(当前因AWS SDK API兼容性问题暂时禁用),包含:
-- **Bucket加密配置**: SSE-S3和SSE-KMS默认加密设置
-- **完整的SSE加密模式测试**:
- - SSE-S3: S3管理的服务端加密
- - SSE-KMS: KMS管理的服务端加密
- - SSE-C: 客户端提供密钥的服务端加密
-- **对象操作测试**: 上传、下载、验证三种SSE模式
-- **分片上传测试**: 多部分上传支持所有SSE模式
-- **对象复制测试**: 不同SSE模式间的复制操作
-- **完整KMS API管理**:
- - 密钥生命周期管理(创建、列表、描述、删除、取消删除)
- - 直接加密/解密操作
- - 数据密钥生成和操作
- - KMS服务管理(启动、停止、状态查询)
+**Full KMS capability suite** (currently disabled because of AWS SDK compatibility issues):
+- **Bucket encryption configuration**: SSE-S3 and SSE-KMS defaults
+- **All SSE encryption modes**:
+ - SSE-S3 (S3-managed server-side encryption)
+ - SSE-KMS (KMS-managed server-side encryption)
+ - SSE-C (client-provided keys)
+- **Object operations**: upload, download, and validation for every SSE mode
+- **Multipart uploads**: cover each SSE mode
+- **Object replication**: cross-mode replication scenarios
+- **Complete KMS API management**:
+ - Key lifecycle (create, list, describe, delete, cancel delete)
+ - Direct encrypt/decrypt operations
+ - Data key generation and handling
+ - KMS service lifecycle (start, stop, status)
### `kms_integration_test.rs`
-综合性KMS集成测试,包含:
-- 多后端兼容性测试
-- KMS服务生命周期测试
-- 错误处理和恢复测试
-- **注意**: 当前因AWS SDK API兼容性问题暂时禁用
+Broad integration tests that exercise:
+- Multiple backends
+- KMS lifecycle management
+- Error handling and recovery
+- **Note**: currently disabled because of AWS SDK compatibility gaps
-## 🚀 如何运行测试
+## 🚀 Running Tests
-### 前提条件
+### Prerequisites
-1. **系统依赖**:
+1. **System dependencies**
```bash
# macOS
brew install vault awscurl
-
+
# Ubuntu/Debian
apt-get install vault
pip install awscurl
```
-2. **构建RustFS**:
+2. **Build RustFS**
```bash
- # 在项目根目录
+   # From the repository root
cargo build
```
-### 运行单个测试
+### Run individual suites
-#### 本地KMS测试
+#### Local backend
```bash
cd crates/e2e_test
cargo test test_local_kms_end_to_end -- --nocapture
```
-#### Vault KMS测试
+#### Vault backend
```bash
cd crates/e2e_test
cargo test test_vault_kms_end_to_end -- --nocapture
```
-#### 高可用性测试
+#### High availability
```bash
cd crates/e2e_test
cargo test test_vault_kms_high_availability -- --nocapture
```
-#### 完整功能测试(开发中)
+#### Comprehensive features (disabled)
```bash
cd crates/e2e_test
-# 注意:以下测试因AWS SDK API兼容性问题暂时禁用
+# Disabled due to AWS SDK compatibility gaps
# cargo test test_comprehensive_kms_functionality -- --nocapture
-# cargo test test_sse_modes_compatibility -- --nocapture
+# cargo test test_sse_modes_compatibility -- --nocapture
# cargo test test_kms_api_comprehensive -- --nocapture
```
-### 运行所有KMS测试
+### Run all KMS suites
```bash
cd crates/e2e_test
cargo test kms -- --nocapture
```
-### 串行运行(避免端口冲突)
+### Run serially (avoid port conflicts)
```bash
cd crates/e2e_test
cargo test kms -- --nocapture --test-threads=1
```
-## 🔧 测试配置
+## 🔧 Configuration
-### 环境变量
+### Environment variables
```bash
-# 可选:自定义端口(默认使用9050)
+# Optional: custom RustFS port (default 9050)
export RUSTFS_TEST_PORT=9050
-# 可选:自定义Vault端口(默认使用8200)
+# Optional: custom Vault port (default 8200)
export VAULT_TEST_PORT=8200
-# 可选:启用详细日志
+# Optional: enable verbose logging
export RUST_LOG=debug
```
-### 依赖的二进制文件路径
+### Required binaries
-测试会自动查找以下二进制文件:
-- `../../target/debug/rustfs` - RustFS服务器
-- `vault` - Vault (需要在PATH中)
-- `/Users/dandan/Library/Python/3.9/bin/awscurl` - AWS签名工具
+Tests look for:
+- `../../target/debug/rustfs` – RustFS server
+- `vault` – Vault CLI (must be on PATH)
+- `/Users/dandan/Library/Python/3.9/bin/awscurl` – AWS SigV4 helper (machine-specific path; adjust to your awscurl install)
-## 📋 测试流程说明
+## 📋 Test Flow
-### Local KMS测试流程
-1. **环境准备**:创建临时目录,设置KMS密钥存储路径
-2. **启动服务**:启动RustFS服务器,启用KMS功能
-3. **等待就绪**:检查端口监听和S3 API响应
-4. **配置KMS**:通过awscurl发送配置请求到admin API
-5. **启动KMS**:激活KMS服务
-6. **功能测试**:
- - 创建测试存储桶
- - 测试SSE-C加密(客户端提供密钥)
- - 验证对象加密/解密
-7. **清理**:终止进程,清理临时文件
+### Local backend
+1. **Prepare environment** – create temporary directories and key storage paths
+2. **Start RustFS** – launch the server with KMS enabled
+3. **Wait for readiness** – confirm the port listener and S3 API
+4. **Configure KMS** – send configuration via awscurl to the admin API
+5. **Start KMS** – activate the KMS service
+6. **Exercise functionality**
+ - Create a test bucket
+ - Run SSE-C encryption with client-provided keys
+ - Validate encryption/decryption behavior
+7. **Cleanup** – stop processes and remove temporary files
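Step 3's readiness wait can be reproduced outside the tests. A minimal sketch that polls the health endpoint mentioned in the troubleshooting section (port and path are the defaults described above; adjust if `RUSTFS_TEST_PORT` is overridden):

```bash
# Poll the readiness endpoint until it answers or the timeout (seconds) expires.
wait_for_ready() {
  url="$1"
  timeout="${2:-30}"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait_for_ready "http://127.0.0.1:9050/minio/health/ready" 30
```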
-### Vault KMS测试流程
-1. **启动Vault**:使用开发模式启动Vault服务器
-2. **配置Vault**:
- - 启用transit secrets engine
- - 创建加密密钥(rustfs-master-key)
-3. **启动RustFS**:启用KMS功能的RustFS服务器
-4. **配置KMS**:通过API配置Vault后端,包含:
- - Vault地址和Token认证
- - Transit engine配置
- - 密钥路径设置
-5. **功能测试**:完整的加密/解密流程测试
-6. **清理**:终止所有进程
+### Vault backend
+1. **Launch Vault** – start the dev-mode server
+2. **Configure Vault**
+ - Enable the transit secrets engine
+ - Create the `rustfs-master-key`
+3. **Start RustFS** – run the server with KMS enabled
+4. **Configure KMS** – point RustFS at Vault (address, token, transit config, key path)
+5. **Exercise functionality** – complete the encryption/decryption workflow
+6. **Cleanup** – stop all services
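Steps 1–2 amount to a short Vault CLI sequence. The sketch below prints the plan instead of executing it, so it can be reviewed before running against a live Vault; the exact flags the test passes are an assumption, but the commands are the standard Vault CLI ones:

```bash
# Print the Vault preparation steps the test performs automatically.
vault_setup_plan() {
  cat <<'EOF'
vault server -dev &
export VAULT_ADDR="http://127.0.0.1:8200"
vault secrets enable transit
vault write -f transit/keys/rustfs-master-key
EOF
}

vault_setup_plan
```

Pipe the output to `sh` (with Vault installed) to execute the sequence for real.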
-## 🛠️ 故障排除
+## 🛠️ Troubleshooting
-### 常见问题
+### Common issues
-**Q: 测试失败 "RustFS server failed to become ready"**
-```
-A: 检查端口是否被占用:
+**Q: `RustFS server failed to become ready`**
+```bash
lsof -i :9050
-kill -9 # 如果有进程占用端口
+kill -9 <PID> # Free the port if necessary
```
-**Q: Vault服务启动失败**
-```
-A: 确保Vault已安装且在PATH中:
+**Q: Vault fails to start**
+```bash
which vault
vault version
```
-**Q: awscurl认证失败**
-```
-A: 检查awscurl路径是否正确:
+**Q: awscurl authentication fails**
+```bash
ls /Users/dandan/Library/Python/3.9/bin/awscurl
-# 或安装到不同路径:
+# Or install elsewhere
pip install awscurl
-which awscurl # 然后更新测试中的路径
+which awscurl # Update the path in tests accordingly
```
-**Q: 测试超时**
-```
-A: 增加等待时间或检查日志:
+**Q: Tests time out**
+```bash
RUST_LOG=debug cargo test test_local_kms_end_to_end -- --nocapture
```
-### 调试技巧
+### Debug tips
-1. **查看详细日志**:
+1. **Enable verbose logs**
```bash
RUST_LOG=rustfs_kms=debug,rustfs=info cargo test -- --nocapture
```
-2. **保留临时文件**:
- 修改测试代码,注释掉清理部分,检查生成的配置文件
+2. **Keep temporary files** – comment out cleanup logic to inspect generated configs
-3. **单步调试**:
- 在测试中添加 `std::thread::sleep` 来暂停执行,手动检查服务状态
+3. **Pause execution** – add `std::thread::sleep` for manual inspection during tests
-4. **端口检查**:
+4. **Monitor ports**
```bash
- # 测试运行时检查端口状态
+   # Check port status while the tests are running
netstat -an | grep 9050
curl http://127.0.0.1:9050/minio/health/ready
```
-## 📊 测试覆盖范围
+## 📊 Coverage
-### 功能覆盖
-- ✅ KMS服务动态配置
-- ✅ 本地和Vault后端支持
-- ✅ AWS S3兼容加密接口
-- ✅ 密钥管理和生命周期
-- ✅ 错误处理和恢复
-- ✅ 高可用性场景
+### Functional
+- ✅ Dynamic KMS configuration
+- ✅ Local and Vault backends
+- ✅ AWS S3-compatible encryption APIs
+- ✅ Key lifecycle management
+- ✅ Error handling and recovery paths
+- ✅ High-availability behavior
-### 加密模式覆盖
-- ✅ SSE-C (Server-Side Encryption with Customer-Provided Keys)
-- ✅ SSE-S3 (Server-Side Encryption with S3-Managed Keys)
-- ✅ SSE-KMS (Server-Side Encryption with KMS-Managed Keys)
+### Encryption modes
+- ✅ SSE-C (customer-provided)
+- ✅ SSE-S3 (S3-managed)
+- ✅ SSE-KMS (KMS-managed)
-### S3操作覆盖
-- ✅ 对象上传/下载 (SSE-C模式)
-- 🚧 分片上传 (需要AWS SDK兼容性修复)
-- 🚧 对象复制 (需要AWS SDK兼容性修复)
-- 🚧 Bucket加密配置 (需要AWS SDK兼容性修复)
+### S3 operations
+- ✅ Object upload/download (SSE-C)
+- 🚧 Multipart uploads (pending AWS SDK fixes)
+- 🚧 Object replication (pending AWS SDK fixes)
+- 🚧 Bucket encryption defaults (pending AWS SDK fixes)
-### KMS API覆盖
-- ✅ 基础密钥管理 (创建、列表)
-- 🚧 完整密钥生命周期 (需要AWS SDK兼容性修复)
-- 🚧 直接加密/解密操作 (需要AWS SDK兼容性修复)
-- 🚧 数据密钥生成和解密 (需要AWS SDK兼容性修复)
-- ✅ KMS服务管理 (配置、启动、停止、状态)
+### KMS API
+- ✅ Basic key management (create/list)
+- 🚧 Full key lifecycle (pending AWS SDK fixes)
+- 🚧 Direct encrypt/decrypt (pending AWS SDK fixes)
+- 🚧 Data key operations (pending AWS SDK fixes)
+- ✅ Service lifecycle (configure/start/stop/status)
-### 认证方式覆盖
-- ✅ Vault Token认证
-- 🚧 Vault AppRole认证
+### Authentication
+- ✅ Vault token auth
+- 🚧 Vault AppRole auth
-## 🔄 持续集成
+## 🔄 CI Integration
-这些测试设计为可在CI/CD环境中运行:
+Designed to run inside CI/CD pipelines:
```yaml
-# GitHub Actions 示例
- name: Run KMS E2E Tests
run: |
- # 安装依赖
sudo apt-get update
sudo apt-get install -y vault
pip install awscurl
-
- # 构建并测试
+
cargo build
cd crates/e2e_test
cargo test kms -- --nocapture --test-threads=1
```
-## 📚 相关文档
+## 📚 References
-- [KMS 配置文档](../../../../docs/kms/README.md) - KMS功能完整文档
-- [动态配置API](../../../../docs/kms/http-api.md) - REST API接口说明
-- [故障排除指南](../../../../docs/kms/troubleshooting.md) - 常见问题解决
+- [KMS configuration guide](../../../../docs/kms/README.md)
+- [Dynamic configuration API](../../../../docs/kms/http-api.md)
+- [Troubleshooting](../../../../docs/kms/troubleshooting.md)
---
-*这些测试确保KMS功能的稳定性和可靠性,为生产环境部署提供信心。*
\ No newline at end of file
+*These suites ensure KMS stability and reliability, building confidence for production deployments.*
diff --git a/crates/e2e_test/src/kms/common.rs b/crates/e2e_test/src/kms/common.rs
index daeafb9c..390da7ad 100644
--- a/crates/e2e_test/src/kms/common.rs
+++ b/crates/e2e_test/src/kms/common.rs
@@ -547,9 +547,9 @@ pub async fn test_multipart_upload_with_config(
) -> Result<(), Box<dyn std::error::Error>> {
let total_size = config.total_size();
- info!("🧪 开始分片上传测试 - {:?}", config.encryption_type);
+ info!("🧪 Starting multipart upload test - {:?}", config.encryption_type);
info!(
- " 对象: {}, 分片: {}个, 每片: {}MB, 总计: {}MB",
+ " Object: {}, parts: {}, part size: {} MB, total: {} MB",
config.object_key,
config.total_parts,
config.part_size / (1024 * 1024),
@@ -589,7 +589,7 @@ pub async fn test_multipart_upload_with_config(
let create_multipart_output = create_request.send().await?;
let upload_id = create_multipart_output.upload_id().unwrap();
- info!("📋 创建分片上传,ID: {}", upload_id);
+ info!("📋 Created multipart upload, ID: {}", upload_id);
// Step 2: Upload parts
let mut completed_parts = Vec::new();
@@ -598,7 +598,7 @@ pub async fn test_multipart_upload_with_config(
let end = std::cmp::min(start + config.part_size, total_size);
let part_data = &test_data[start..end];
- info!("📤 上传分片 {} ({:.2}MB)", part_number, part_data.len() as f64 / (1024.0 * 1024.0));
+ info!("📤 Uploading part {} ({:.2} MB)", part_number, part_data.len() as f64 / (1024.0 * 1024.0));
let mut upload_request = s3_client
.upload_part()
@@ -625,7 +625,7 @@ pub async fn test_multipart_upload_with_config(
.build(),
);
- debug!("分片 {} 上传完成,ETag: {}", part_number, etag);
+ debug!("Part {} uploaded with ETag {}", part_number, etag);
}
// Step 3: Complete multipart upload
@@ -633,7 +633,7 @@ pub async fn test_multipart_upload_with_config(
.set_parts(Some(completed_parts))
.build();
- info!("🔗 完成分片上传");
+ info!("🔗 Completing multipart upload");
let complete_output = s3_client
.complete_multipart_upload()
.bucket(bucket)
@@ -643,10 +643,10 @@ pub async fn test_multipart_upload_with_config(
.send()
.await?;
- debug!("完成分片上传,ETag: {:?}", complete_output.e_tag());
+ debug!("Multipart upload finalized with ETag {:?}", complete_output.e_tag());
// Step 4: Download and verify
- info!("📥 下载文件并验证");
+ info!("📥 Downloading object for verification");
let mut get_request = s3_client.get_object().bucket(bucket).key(&config.object_key);
// Add encryption headers for SSE-C GET
@@ -680,7 +680,7 @@ pub async fn test_multipart_upload_with_config(
assert_eq!(downloaded_data.len(), total_size);
assert_eq!(&downloaded_data[..], &test_data[..]);
- info!("✅ 分片上传测试通过 - {:?}", config.encryption_type);
+ info!("✅ Multipart upload test passed - {:?}", config.encryption_type);
Ok(())
}
@@ -700,7 +700,7 @@ pub async fn test_all_multipart_encryption_types(
bucket: &str,
base_object_key: &str,
) -> Result<(), Box<dyn std::error::Error>> {
- info!("🧪 测试所有加密类型的分片上传");
+ info!("🧪 Testing multipart uploads for every encryption type");
let part_size = 5 * 1024 * 1024; // 5MB per part
let total_parts = 2;
@@ -718,7 +718,7 @@ pub async fn test_all_multipart_encryption_types(
test_multipart_upload_with_config(s3_client, bucket, &config).await?;
}
- info!("✅ 所有加密类型的分片上传测试通过");
+ info!("✅ Multipart uploads succeeded for every encryption type");
Ok(())
}
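The part-slicing loop in `test_multipart_upload_with_config` above clamps each part's `end` offset to `total_size`. The arithmetic can be sanity-checked standalone; the object size here is an assumed example, the part size matches the suite's 5 MB default:

```bash
# Walk the same start/end offsets the test computes for each UploadPart call.
total_size=$((12 * 1024 * 1024))  # assumed 12 MB object
part_size=$((5 * 1024 * 1024))    # 5 MB parts, as in the suite
part=1
start=0
while [ "$start" -lt "$total_size" ]; do
  end=$((start + part_size))
  if [ "$end" -gt "$total_size" ]; then end=$total_size; fi
  echo "part $part: bytes $start..$end"
  start=$end
  part=$((part + 1))
done
```

The final part is allowed to be short; every byte lands in exactly one part.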
diff --git a/crates/e2e_test/src/kms/multipart_encryption_test.rs b/crates/e2e_test/src/kms/multipart_encryption_test.rs
index d55c2bfe..b744f48a 100644
--- a/crates/e2e_test/src/kms/multipart_encryption_test.rs
+++ b/crates/e2e_test/src/kms/multipart_encryption_test.rs
@@ -201,7 +201,7 @@ async fn test_step3_multipart_upload_with_sse_s3() -> Result<(), Box<dyn std::error::Error>> {
 let test_data: Vec<u8> = (0..total_size).map(|i| ((i / 1000) % 256) as u8).collect();
info!(
@@ -275,43 +275,43 @@ async fn test_step3_multipart_upload_with_sse_s3() -> Result<(), Box<dyn std::error::Error>> {
 async fn test_step4_large_multipart_upload_with_encryption() -> Result<(), Box<dyn std::error::Error>> {
init_logging();
- info!("🧪 步骤 4:测试大文件分片上传加密");
+ info!("🧪 Step 4: test large-file multipart encryption");
let mut kms_env = LocalKMSTestEnvironment::new().await?;
let _default_key_id = kms_env.start_rustfs_for_local_kms().await?;
@@ -321,18 +321,18 @@ async fn test_step4_large_multipart_upload_with_encryption() -> Result<(), Box<dyn std::error::Error>> {
 let test_data: Vec<u8> = (0..total_size)
.map(|i| {
let part_num = i / part_size;
@@ -341,9 +341,9 @@ async fn test_step4_large_multipart_upload_with_encryption() -> Result<(), Box<dyn std::error::Error>> {
 async fn test_step5_all_encryption_types_multipart() -> Result<(), Box<dyn std::error::Error>> {
init_logging();
- info!("🧪 步骤 5:测试所有加密类型的分片上传");
+ info!("🧪 Step 5: test multipart uploads for every encryption mode");
let mut kms_env = LocalKMSTestEnvironment::new().await?;
let _default_key_id = kms_env.start_rustfs_for_local_kms().await?;
@@ -450,8 +450,8 @@ async fn test_step5_all_encryption_types_multipart() -> Result<(), Box<dyn std::error::Error>> {
 ) -> Result<(), Box<dyn std::error::Error>> {
- // 生成测试数据
+ // Generate test data
let test_data: Vec<u8> = (0..total_size).map(|i| ((i * 7) % 256) as u8).collect();
- // 准备 SSE-C 所需的密钥(如果需要)
+ // Prepare SSE-C keys when required
let (sse_c_key, sse_c_md5) = if matches!(encryption_type, EncryptionType::SSEC) {
let key = "01234567890123456789012345678901";
let key_b64 = base64::Engine::encode(&base64::engine::general_purpose::STANDARD, key);
@@ -510,9 +510,9 @@ async fn test_multipart_encryption_type(
(None, None)
};
- info!("📋 创建分片上传 - {:?}", encryption_type);
+ info!("📋 Creating multipart upload - {:?}", encryption_type);
- // 创建分片上传
+ // Create multipart upload
let mut create_request = s3_client.create_multipart_upload().bucket(bucket).key(object_key);
create_request = match encryption_type {
@@ -526,7 +526,7 @@ async fn test_multipart_encryption_type(
let create_multipart_output = create_request.send().await?;
let upload_id = create_multipart_output.upload_id().unwrap();
- // 上传分片
+ // Upload parts
let mut completed_parts = Vec::new();
for part_number in 1..=total_parts {
let start = (part_number - 1) * part_size;
@@ -541,7 +541,7 @@ async fn test_multipart_encryption_type(
.part_number(part_number as i32)
.body(aws_sdk_s3::primitives::ByteStream::from(part_data.to_vec()));
- // SSE-C 需要在每个 UploadPart 请求中包含密钥
+ // SSE-C requires the key on each UploadPart request
if matches!(encryption_type, EncryptionType::SSEC) {
upload_request = upload_request
.sse_customer_algorithm("AES256")
@@ -558,10 +558,10 @@ async fn test_multipart_encryption_type(
.build(),
);
- debug!("{:?} 分片 {} 上传完成", encryption_type, part_number);
+ debug!("{:?} part {} uploaded", encryption_type, part_number);
}
- // 完成分片上传
+ // Complete the multipart upload
let completed_multipart_upload = aws_sdk_s3::types::CompletedMultipartUpload::builder()
.set_parts(Some(completed_parts))
.build();
@@ -575,10 +575,10 @@ async fn test_multipart_encryption_type(
.send()
.await?;
- // 下载并验证
+ // Download and verify
let mut get_request = s3_client.get_object().bucket(bucket).key(object_key);
- // SSE-C 需要在 GET 请求中包含密钥
+ // SSE-C requires the key on GET requests
if matches!(encryption_type, EncryptionType::SSEC) {
get_request = get_request
.sse_customer_algorithm("AES256")
@@ -588,7 +588,7 @@ async fn test_multipart_encryption_type(
let get_response = get_request.send().await?;
- // 验证加密头
+ // Verify encryption headers
match encryption_type {
EncryptionType::SSEKMS => {
assert_eq!(
@@ -601,11 +601,11 @@ async fn test_multipart_encryption_type(
}
}
- // 验证数据完整性
+ // Verify data integrity
let downloaded_data = get_response.body.collect().await?.into_bytes();
assert_eq!(downloaded_data.len(), total_size);
assert_eq!(&downloaded_data[..], &test_data[..]);
- info!("✅ {:?} 分片上传测试通过", encryption_type);
+ info!("✅ {:?} multipart upload test passed", encryption_type);
Ok(())
}
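SSE-C requests in these tests carry the base64-encoded key and its MD5 digest on every call. The same header values can be produced with standard tools; a sketch using the suite's fixed 32-byte test key (the header names are the standard S3 SSE-C ones):

```bash
# Derive the SSE-C header values from the suite's fixed 32-byte key.
key="01234567890123456789012345678901"
key_b64=$(printf '%s' "$key" | base64)
key_md5=$(printf '%s' "$key" | openssl dgst -md5 -binary | base64)
echo "x-amz-server-side-encryption-customer-algorithm: AES256"
echo "x-amz-server-side-encryption-customer-key: $key_b64"
echo "x-amz-server-side-encryption-customer-key-MD5: $key_md5"
```

S3 rejects the request if the key's MD5 digest does not match the supplied `-MD5` header.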
diff --git a/crates/e2e_test/src/kms/test_runner.rs b/crates/e2e_test/src/kms/test_runner.rs
index 4d6591c4..efa144f6 100644
--- a/crates/e2e_test/src/kms/test_runner.rs
+++ b/crates/e2e_test/src/kms/test_runner.rs
@@ -346,7 +346,7 @@ impl KMSTestSuite {
/// Run the complete test suite
pub async fn run_test_suite(&self) -> Vec<TestResult> {
init_logging();
- info!("🚀 开始KMS统一测试套件");
+ info!("🚀 Starting unified KMS test suite");
let start_time = Instant::now();
let mut results = Vec::new();
@@ -359,17 +359,17 @@ impl KMSTestSuite {
.filter(|test| !self.config.include_critical_only || test.is_critical)
.collect();
- info!("📊 测试计划: {} 个测试将被执行", tests_to_run.len());
+ info!("📊 Test plan: {} test(s) scheduled", tests_to_run.len());
for (i, test) in tests_to_run.iter().enumerate() {
info!(" {}. {} ({})", i + 1, test.name, test.category.as_str());
}
// Execute tests
for (i, test_def) in tests_to_run.iter().enumerate() {
- info!("🧪 执行测试 {}/{}: {}", i + 1, tests_to_run.len(), test_def.name);
- info!(" 📝 描述: {}", test_def.description);
- info!(" 🏷️ 分类: {}", test_def.category.as_str());
- info!(" ⏱️ 预计时间: {:?}", test_def.estimated_duration);
+ info!("🧪 Running test {}/{}: {}", i + 1, tests_to_run.len(), test_def.name);
+ info!(" 📝 Description: {}", test_def.description);
+ info!(" 🏷️ Category: {}", test_def.category.as_str());
+ info!(" ⏱️ Estimated duration: {:?}", test_def.estimated_duration);
let test_start = Instant::now();
let result = self.run_single_test(test_def).await;
@@ -377,11 +377,11 @@ impl KMSTestSuite {
match result {
Ok(_) => {
- info!("✅ 测试通过: {} ({:.2}s)", test_def.name, test_duration.as_secs_f64());
+ info!("✅ Test passed: {} ({:.2}s)", test_def.name, test_duration.as_secs_f64());
results.push(TestResult::success(test_def.name.clone(), test_def.category.clone(), test_duration));
}
Err(e) => {
- error!("❌ 测试失败: {} ({:.2}s): {}", test_def.name, test_duration.as_secs_f64(), e);
+ error!("❌ Test failed: {} ({:.2}s): {}", test_def.name, test_duration.as_secs_f64(), e);
results.push(TestResult::failure(
test_def.name.clone(),
test_def.category.clone(),
@@ -393,7 +393,7 @@ impl KMSTestSuite {
// Add delay between tests to avoid resource conflicts
if i < tests_to_run.len() - 1 {
- debug!("⏸️ 等待2秒后执行下一个测试...");
+ debug!("⏸️ Waiting two seconds before the next test...");
sleep(Duration::from_secs(2)).await;
}
}
@@ -408,22 +408,22 @@ impl KMSTestSuite {
async fn run_single_test(&self, test_def: &TestDefinition) -> Result<(), Box<dyn std::error::Error>> {
// This is a placeholder for test dispatch logic
// In a real implementation, this would dispatch to actual test functions
- warn!("⚠️ 测试函数 '{}' 在统一运行器中尚未实现,跳过", test_def.name);
+ warn!("⚠️ Test '{}' is not implemented in the unified runner; skipping", test_def.name);
Ok(())
}
/// Print comprehensive test summary
fn print_test_summary(&self, results: &[TestResult], total_duration: Duration) {
- info!("📊 KMS测试套件总结");
- info!("⏱️ 总执行时间: {:.2}秒", total_duration.as_secs_f64());
- info!("📈 总测试数量: {}", results.len());
+ info!("📊 KMS test suite summary");
+ info!("⏱️ Total duration: {:.2} seconds", total_duration.as_secs_f64());
+ info!("📈 Total tests: {}", results.len());
let passed = results.iter().filter(|r| r.success).count();
let failed = results.iter().filter(|r| !r.success).count();
- info!("✅ 通过: {}", passed);
- info!("❌ 失败: {}", failed);
- info!("📊 成功率: {:.1}%", (passed as f64 / results.len() as f64) * 100.0);
+ info!("✅ Passed: {}", passed);
+ info!("❌ Failed: {}", failed);
+ info!("📊 Success rate: {:.1}%", (passed as f64 / results.len() as f64) * 100.0);
// Summary by category
let mut category_summary: std::collections::HashMap<String, (usize, usize)> = std::collections::HashMap::new();
@@ -435,7 +435,7 @@ impl KMSTestSuite {
}
}
- info!("📊 分类汇总:");
+ info!("📊 Category summary:");
for (category, (total, passed_count)) in category_summary {
info!(
" 🏷️ {}: {}/{} ({:.1}%)",
@@ -448,7 +448,7 @@ impl KMSTestSuite {
// List failed tests
if failed > 0 {
- warn!("❌ 失败的测试:");
+ warn!("❌ Failing tests:");
for result in results.iter().filter(|r| !r.success) {
warn!(
" - {}: {}",
@@ -479,7 +479,7 @@ async fn test_kms_critical_suite() -> Result<(), Box<dyn std::error::Error>> {
if ! command -v cargo &> /dev/null; then
- print_error "Cargo 未找到,请确保已安装 Rust"
+ print_error "Cargo not found; install Rust first"
exit 1
fi
- # 检查 criterion
+ # Check criterion support
if ! cargo --list | grep -q "bench"; then
- print_error "未找到基准测试支持,请确保使用的是支持基准测试的 Rust 版本"
+ print_error "Benchmark support missing; use a Rust toolchain with criterion support"
exit 1
fi
- print_success "系统要求检查通过"
+ print_success "System requirements satisfied"
}
-# 清理之前的测试结果
+# Remove previous benchmark artifacts
cleanup() {
- print_info "清理之前的测试结果..."
+ print_info "Cleaning previous benchmark artifacts..."
rm -rf target/criterion
- print_success "清理完成"
+ print_success "Cleanup complete"
}
-# 运行 SIMD 模式基准测试
+# Run SIMD-only benchmarks
run_simd_benchmark() {
- print_info "🎯 开始运行 SIMD 模式基准测试..."
+ print_info "🎯 Starting SIMD-only benchmark run..."
echo "================================================"
cargo bench --bench comparison_benchmark \
-- --save-baseline simd_baseline
- print_success "SIMD 模式基准测试完成"
+ print_success "SIMD-only benchmarks completed"
}
-# 运行完整的基准测试套件
+# Run the full benchmark suite
run_full_benchmark() {
- print_info "🚀 开始运行完整基准测试套件..."
+ print_info "🚀 Starting full benchmark suite..."
echo "================================================"
- # 运行详细的基准测试
+ # Execute detailed benchmarks
cargo bench --bench erasure_benchmark
- print_success "完整基准测试套件完成"
+ print_success "Full benchmark suite finished"
}
-# 运行性能测试
+# Run performance tests
run_performance_test() {
- print_info "📊 开始运行性能测试..."
+ print_info "📊 Starting performance tests..."
echo "================================================"
- print_info "步骤 1: 运行编码基准测试..."
+ print_info "Step 1: running encoding benchmarks..."
cargo bench --bench comparison_benchmark \
-- encode --save-baseline encode_baseline
- print_info "步骤 2: 运行解码基准测试..."
+ print_info "Step 2: running decoding benchmarks..."
cargo bench --bench comparison_benchmark \
-- decode --save-baseline decode_baseline
- print_success "性能测试完成"
+ print_success "Performance tests completed"
}
-# 运行大数据集测试
+# Run large dataset tests
run_large_data_test() {
- print_info "🗂️ 开始运行大数据集测试..."
+ print_info "🗂️ Starting large-dataset tests..."
echo "================================================"
cargo bench --bench erasure_benchmark \
-- large_data --save-baseline large_data_baseline
- print_success "大数据集测试完成"
+ print_success "Large-dataset tests completed"
}
-# 生成比较报告
+# Generate comparison report
generate_comparison_report() {
- print_info "📊 生成性能报告..."
+ print_info "📊 Generating performance report..."
if [ -d "target/criterion" ]; then
- print_info "基准测试结果已保存到 target/criterion/ 目录"
- print_info "你可以打开 target/criterion/report/index.html 查看详细报告"
+ print_info "Benchmark results saved under target/criterion/"
+ print_info "Open target/criterion/report/index.html for the HTML report"
- # 如果有 python 环境,可以启动简单的 HTTP 服务器查看报告
+ # If Python is available, start a simple HTTP server to browse the report
if command -v python3 &> /dev/null; then
- print_info "你可以运行以下命令启动本地服务器查看报告:"
+ print_info "Run the following command to serve the report locally:"
echo " cd target/criterion && python3 -m http.server 8080"
- echo " 然后在浏览器中访问 http://localhost:8080/report/index.html"
+ echo " Then open http://localhost:8080/report/index.html"
fi
else
- print_warning "未找到基准测试结果目录"
+ print_warning "Benchmark result directory not found"
fi
}
-# 快速测试模式
+# Quick test mode
run_quick_test() {
- print_info "🏃 运行快速性能测试..."
+ print_info "🏃 Running quick performance test..."
- print_info "测试 SIMD 编码性能..."
+ print_info "Testing SIMD encoding performance..."
cargo bench --bench comparison_benchmark \
-- encode --quick
- print_info "测试 SIMD 解码性能..."
+ print_info "Testing SIMD decoding performance..."
cargo bench --bench comparison_benchmark \
-- decode --quick
- print_success "快速测试完成"
+ print_success "Quick test complete"
}
-# 显示帮助信息
+# Display help
show_help() {
- echo "Reed-Solomon SIMD 性能基准测试脚本"
+ echo "Reed-Solomon SIMD performance benchmark script"
echo ""
- echo "实现模式:"
- echo " 🎯 SIMD 模式 - 高性能 SIMD 优化的 reed-solomon-simd 实现"
+ echo "Modes:"
+ echo " 🎯 simd High-performance reed-solomon-simd implementation"
echo ""
- echo "使用方法:"
+ echo "Usage:"
echo " $0 [command]"
echo ""
- echo "命令:"
- echo " quick 运行快速性能测试"
- echo " full 运行完整基准测试套件"
- echo " performance 运行详细的性能测试"
- echo " simd 运行 SIMD 模式测试"
- echo " large 运行大数据集测试"
- echo " clean 清理测试结果"
- echo " help 显示此帮助信息"
+ echo "Commands:"
+ echo " quick Run the quick performance test"
+ echo " full Run the full benchmark suite"
+ echo " performance Run detailed performance tests"
+ echo " simd Run the SIMD-only tests"
+ echo " large Run large-dataset tests"
+ echo " clean Remove previous results"
+ echo " help Show this help message"
echo ""
- echo "示例:"
- echo " $0 quick # 快速性能测试"
- echo " $0 performance # 详细性能测试"
- echo " $0 full # 完整测试套件"
- echo " $0 simd # SIMD 模式测试"
- echo " $0 large # 大数据集测试"
+ echo "Examples:"
+ echo " $0 quick # Quick performance test"
+ echo " $0 performance # Detailed performance test"
+ echo " $0 full # Full benchmark suite"
+ echo " $0 simd # SIMD-only benchmark"
+ echo " $0 large # Large-dataset benchmark"
echo ""
- echo "实现特性:"
- echo " - 使用 reed-solomon-simd 高性能 SIMD 实现"
- echo " - 支持编码器/解码器实例缓存"
- echo " - 优化的内存管理和线程安全"
- echo " - 跨平台 SIMD 指令支持"
+ echo "Features:"
+ echo " - Uses the high-performance reed-solomon-simd implementation"
+ echo " - Caches encoder/decoder instances"
+ echo " - Optimized memory management and thread safety"
+ echo " - Cross-platform SIMD instruction support"
}
-# 显示测试配置信息
+# Show benchmark configuration
show_test_info() {
- print_info "📋 测试配置信息:"
- echo " - 当前目录: $(pwd)"
- echo " - Rust 版本: $(rustc --version)"
- echo " - Cargo 版本: $(cargo --version)"
- echo " - CPU 架构: $(uname -m)"
- echo " - 操作系统: $(uname -s)"
+ print_info "📋 Benchmark configuration:"
+ echo " - Working directory: $(pwd)"
+ echo " - Rust version: $(rustc --version)"
+ echo " - Cargo version: $(cargo --version)"
+ echo " - CPU architecture: $(uname -m)"
+ echo " - Operating system: $(uname -s)"
- # 检查 CPU 特性
+ # Inspect CPU capabilities
if [ -f "/proc/cpuinfo" ]; then
- echo " - CPU 型号: $(grep 'model name' /proc/cpuinfo | head -1 | cut -d: -f2 | xargs)"
+ echo " - CPU model: $(grep 'model name' /proc/cpuinfo | head -1 | cut -d: -f2 | xargs)"
if grep -q "avx2" /proc/cpuinfo; then
- echo " - SIMD 支持: AVX2 ✅ (将使用高级 SIMD 优化)"
+ echo " - SIMD support: AVX2 ✅ (using advanced SIMD optimizations)"
elif grep -q "sse4" /proc/cpuinfo; then
- echo " - SIMD 支持: SSE4 ✅ (将使用 SIMD 优化)"
+ echo " - SIMD support: SSE4 ✅ (using SIMD optimizations)"
else
- echo " - SIMD 支持: 基础 SIMD 特性"
+ echo " - SIMD support: baseline features"
fi
fi
- echo " - 实现: reed-solomon-simd (高性能 SIMD 优化)"
- echo " - 特性: 实例缓存、线程安全、跨平台 SIMD"
+ echo " - Implementation: reed-solomon-simd (SIMD-optimized)"
+ echo " - Highlights: instance caching, thread safety, cross-platform SIMD"
echo ""
}
-# 主函数
+# Main entry point
main() {
- print_info "🧪 Reed-Solomon SIMD 实现性能基准测试"
+ print_info "🧪 Reed-Solomon SIMD benchmark suite"
echo "================================================"
check_requirements
@@ -252,15 +252,15 @@ main() {
show_help
;;
*)
- print_error "未知命令: $1"
+ print_error "Unknown command: $1"
echo ""
show_help
exit 1
;;
esac
- print_success "✨ 基准测试执行完成!"
+ print_success "✨ Benchmark run completed!"
}
-# 启动脚本
+# Launch script
main "$@"
\ No newline at end of file
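The AVX2/SSE4 branch in `show_test_info` above can be factored into a small helper so it can be exercised against arbitrary flag strings; the sample flags below are illustrative, not from a real machine:

```bash
# Classify a cpuinfo flags string into the script's SIMD tiers.
detect_simd() {
  case "$1" in
    *avx2*) echo "AVX2" ;;     # advanced SIMD optimizations
    *sse4*) echo "SSE4" ;;     # basic SIMD optimizations
    *)      echo "baseline" ;; # no recognized SIMD flags
  esac
}

detect_simd "fpu vme avx2 sse4_2"  # avx2 takes precedence over sse4
detect_simd "fpu vme sse4_1"
detect_simd "fpu vme"
```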
diff --git a/crates/ecstore/src/admin_server_info.rs b/crates/ecstore/src/admin_server_info.rs
index 4ee7d94c..8b9699c5 100644
--- a/crates/ecstore/src/admin_server_info.rs
+++ b/crates/ecstore/src/admin_server_info.rs
@@ -96,21 +96,21 @@ async fn is_server_resolvable(endpoint: &Endpoint) -> Result<()> {
let decoded_payload = flatbuffers::root::(finished_data);
assert!(decoded_payload.is_ok());
- // 创建客户端
+ // Create the client
let mut client = node_service_time_out_client(&addr)
.await
.map_err(|err| Error::other(err.to_string()))?;
- // 构造 PingRequest
+ // Build the PingRequest
let request = Request::new(PingRequest {
version: 1,
body: bytes::Bytes::copy_from_slice(finished_data),
});
- // 发送请求并获取响应
+ // Send the request and obtain the response
let response: PingResponse = client.ping(request).await?.into_inner();
- // 打印响应
+ // Print the response
let ping_response_body = flatbuffers::root::(&response.body);
if let Err(e) = ping_response_body {
eprintln!("{e}");
diff --git a/crates/ecstore/src/bucket/metadata.rs b/crates/ecstore/src/bucket/metadata.rs
index f388cd0c..87884300 100644
--- a/crates/ecstore/src/bucket/metadata.rs
+++ b/crates/ecstore/src/bucket/metadata.rs
@@ -428,8 +428,8 @@ where
let sec = t.unix_timestamp() - 62135596800;
let nsec = t.nanosecond();
buf[0] = 0xc7; // mext8
- buf[1] = 0x0c; // 长度
- buf[2] = 0x05; // 时间扩展类型
+ buf[1] = 0x0c; // Payload length: 12 bytes (u64 seconds + u32 nanoseconds)
+ buf[2] = 0x05; // Extension type 5 (msgp-compatible time encoding)
BigEndian::write_u64(&mut buf[3..], sec as u64);
BigEndian::write_u32(&mut buf[11..], nsec);
s.serialize_bytes(&buf)
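The constant 62135596800 subtracted above is the number of seconds from 0001-01-01 to the Unix epoch in the proleptic Gregorian calendar (Go's time package uses the same value); a quick arithmetic check:

```bash
# 1969 whole years separate 0001-01-01 from 1970-01-01; 477 of them add a leap day.
years=1969
leap_days=477   # floor(1969/4) - floor(1969/100) + floor(1969/400)
days=$((years * 365 + leap_days))
echo $((days * 86400))
```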
diff --git a/crates/ecstore/src/bucket/quota/mod.rs b/crates/ecstore/src/bucket/quota/mod.rs
index c2588d87..b9e778fd 100644
--- a/crates/ecstore/src/bucket/quota/mod.rs
+++ b/crates/ecstore/src/bucket/quota/mod.rs
@@ -16,16 +16,16 @@ use crate::error::Result;
use rmp_serde::Serializer as rmpSerializer;
use serde::{Deserialize, Serialize};
-// 定义 QuotaType 枚举类型
+// Define the QuotaType enum
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub enum QuotaType {
Hard,
}
-// 定义 BucketQuota 结构体
+// Define the BucketQuota structure
#[derive(Debug, Deserialize, Serialize, Default, Clone)]
pub struct BucketQuota {
- quota: Option, // 使用 Option 来表示可能不存在的字段
+ quota: Option, // Use Option to represent optional fields
size: u64,
diff --git a/crates/ecstore/src/bucket/replication/config.rs b/crates/ecstore/src/bucket/replication/config.rs
index 88b3a8ed..4a983498 100644
--- a/crates/ecstore/src/bucket/replication/config.rs
+++ b/crates/ecstore/src/bucket/replication/config.rs
@@ -46,7 +46,7 @@ pub trait ReplicationConfigurationExt {
}
impl ReplicationConfigurationExt for ReplicationConfiguration {
- /// 检查是否有现有对象复制规则
+ /// Check whether any existing-object replication rules are configured
fn has_existing_object_replication(&self, arn: &str) -> (bool, bool) {
let mut has_arn = false;
@@ -117,7 +117,7 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
rules
}
- /// 获取目标配置
+ /// Retrieve the destination configuration
fn get_destination(&self) -> Destination {
if !self.rules.is_empty() {
self.rules[0].destination.clone()
@@ -134,7 +134,7 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
}
}
- /// 判断对象是否应该被复制
+ /// Determine whether an object should be replicated
fn replicate(&self, obj: &ObjectOpts) -> bool {
let rules = self.filter_actionable_rules(obj);
@@ -164,16 +164,16 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
}
}
- // 常规对象/元数据复制
+ // Regular object/metadata replication
return rule.metadata_replicate(obj);
}
false
}
- /// 检查是否有活跃的规则
- /// 可选择性地提供前缀
- /// 如果recursive为true,函数还会在前缀下的任何级别有活跃规则时返回true
- /// 如果没有指定前缀,recursive实际上为true
+ /// Check whether any rule is active.
+ /// An optional prefix narrows the search.
+ /// When recursive is true, also return true if an active rule exists at any level under the prefix.
+ /// Without a prefix, recursive is effectively true.
fn has_active_rules(&self, prefix: &str, recursive: bool) -> bool {
if self.rules.is_empty() {
return false;
@@ -187,13 +187,13 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
if let Some(filter) = &rule.filter {
if let Some(filter_prefix) = &filter.prefix {
if !prefix.is_empty() && !filter_prefix.is_empty() {
- // 传入的前缀必须在规则前缀中
+ // The provided prefix must fall within the rule prefix
if !recursive && !prefix.starts_with(filter_prefix) {
continue;
}
}
- // 如果是递归的,我们可以跳过这个规则,如果它不匹配测试前缀或前缀下的级别不匹配
+ // In recursive mode, skip the rule when its prefix and the test prefix do not overlap in either direction
if recursive && !rule.prefix().starts_with(prefix) && !prefix.starts_with(rule.prefix()) {
continue;
}
@@ -204,7 +204,7 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
false
}
- /// 过滤目标ARN,返回配置中不同目标ARN的切片
+ /// Filter target ARNs and return a slice of the distinct values in the config
fn filter_target_arns(&self, obj: &ObjectOpts) -> Vec {
let mut arns = Vec::new();
let mut targets_map: HashSet = HashSet::new();
@@ -216,7 +216,7 @@ impl ReplicationConfigurationExt for ReplicationConfiguration {
}
if !self.role.is_empty() {
- arns.push(self.role.clone()); // 如果存在,使用传统的RoleArn
+ arns.push(self.role.clone()); // Use the legacy RoleArn when present
return arns;
}
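The `filter_target_arns` hunk above pairs a `Vec` (to keep insertion order) with a `HashSet` (to enforce uniqueness). That pattern can be shown on its own with plain strings standing in for the crate's rule types (`distinct_arns` is an illustrative name, not an API in the codebase):

```rust
use std::collections::HashSet;

// Collect distinct ARNs in first-seen order, mirroring the
// Vec-plus-HashSet pattern from filter_target_arns above.
fn distinct_arns<'a>(rules: impl IntoIterator<Item = &'a str>) -> Vec<String> {
    let mut arns = Vec::new();
    let mut seen: HashSet<&str> = HashSet::new();
    for arn in rules {
        // insert returns false when the value was already present
        if seen.insert(arn) {
            arns.push(arn.to_string());
        }
    }
    arns
}

fn main() {
    let out = distinct_arns(["arn:a", "arn:b", "arn:a"]);
    assert_eq!(out, ["arn:a", "arn:b"]);
}
```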
diff --git a/crates/ecstore/src/chunk_stream.rs b/crates/ecstore/src/chunk_stream.rs
index 5689d6af..41b3b2d9 100644
--- a/crates/ecstore/src/chunk_stream.rs
+++ b/crates/ecstore/src/chunk_stream.rs
@@ -39,13 +39,13 @@
// #[allow(clippy::shadow_same)] // necessary for `pin_mut!`
// Box::pin(async move {
// pin_mut!(body);
-// // 上一次没用完的数据
+// // Data left over from the previous call
// let mut prev_bytes = Bytes::new();
// let mut read_size = 0;
// loop {
// let data: Vec = {
-// // 读固定大小的数据
+// // Read a fixed-size chunk
// match Self::read_data(body.as_mut(), prev_bytes, chunk_size).await {
// None => break,
// Some(Err(e)) => return Err(e),
@@ -72,13 +72,13 @@
// if read_size + prev_bytes.len() >= content_length {
// // debug!(
-// // "读完了 read_size:{} + prev_bytes.len({}) == content_length {}",
+// // "Finished reading: read_size:{} + prev_bytes.len({}) == content_length {}",
// // read_size,
// // prev_bytes.len(),
// // content_length,
// // );
-// // 填充 0?
+// // Pad with zeros?
// if !need_padding {
// y.yield_ok(prev_bytes).await;
// break;
@@ -115,7 +115,7 @@
// {
// let mut bytes_buffer = Vec::new();
-// // 只执行一次
+// // Run only once
// let mut push_data_bytes = |mut bytes: Bytes| {
// // debug!("read from body {} split per {}, prev_bytes: {}", bytes.len(), data_size, prev_bytes.len());
@@ -127,11 +127,11 @@
// return Some(bytes);
// }
-// // 合并上一次数据
+// // Merge with the previous data
// if !prev_bytes.is_empty() {
// let need_size = data_size.wrapping_sub(prev_bytes.len());
// // debug!(
-// // " 上一次有剩余{},从这一次中取{},共:{}",
+// // "Previous leftover {}, take {} now, total: {}",
// // prev_bytes.len(),
// // need_size,
// // prev_bytes.len() + need_size
@@ -143,7 +143,7 @@
// combined.extend_from_slice(&data);
// // debug!(
-// // "取到的长度大于所需,取出需要的长度:{},与上一次合并得到:{},bytes 剩余:{}",
+// // "Fetched more bytes than needed: {}, merged result {}, remaining bytes {}",
// // need_size,
// // combined.len(),
// // bytes.len(),
@@ -156,7 +156,7 @@
// combined.extend_from_slice(&bytes);
// // debug!(
-// // "取到的长度小于所需,取出需要的长度:{},与上一次合并得到:{},bytes 剩余:{},直接返回",
+// // "Fetched fewer bytes than needed: {}, merged result {}, remaining bytes {}, return immediately",
// // need_size,
// // combined.len(),
// // bytes.len(),
@@ -166,29 +166,29 @@
// }
// }
-// // 取到的数据比需要的块大,从 bytes 中截取需要的块大小
+// // If the fetched data exceeds the chunk, slice the required size
// if data_size <= bytes.len() {
// let n = bytes.len() / data_size;
// for _ in 0..n {
// let data = bytes.split_to(data_size);
-// // println!("bytes_buffer.push: {},剩余:{}", data.len(), bytes.len());
+// // println!("bytes_buffer.push: {}, remaining: {}", data.len(), bytes.len());
// bytes_buffer.push(data);
// }
// Some(bytes)
// } else {
-// // 不够
+// // Insufficient data
// Some(bytes)
// }
// };
-// // 剩余数据
+// // Remaining data
// let remaining_bytes = 'outer: {
-// // // 如果上一次数据足够,跳出
+// // // Exit if the previous data was sufficient
// // if let Some(remaining_bytes) = push_data_bytes(prev_bytes) {
-// // println!("从剩下的取");
+// // println!("Consuming leftovers");
// // break 'outer remaining_bytes;
// // }
diff --git a/crates/ecstore/src/disk/error_reduce.rs b/crates/ecstore/src/disk/error_reduce.rs
index 956b57c4..d3264334 100644
--- a/crates/ecstore/src/disk/error_reduce.rs
+++ b/crates/ecstore/src/disk/error_reduce.rs
@@ -49,12 +49,12 @@ pub fn reduce_quorum_errs(errors: &[Option], ignored_errs: &[Error], quor
pub fn reduce_errs(errors: &[Option], ignored_errs: &[Error]) -> (usize, Option) {
let nil_error = Error::other("nil".to_string());
- // 首先统计 None 的数量(作为 nil 错误)
+ // First count the number of None values (treated as nil errors)
let nil_count = errors.iter().filter(|e| e.is_none()).count();
let err_counts = errors
.iter()
- .filter_map(|e| e.as_ref()) // 只处理 Some 的错误
+ .filter_map(|e| e.as_ref()) // Only process errors stored in Some
.fold(std::collections::HashMap::new(), |mut acc, e| {
if is_ignored_err(ignored_errs, e) {
return acc;
@@ -63,13 +63,13 @@ pub fn reduce_errs(errors: &[Option], ignored_errs: &[Error]) -> (usize,
acc
});
- // 找到最高频率的非 nil 错误
+ // Find the most frequent non-nil error
let (best_err, best_count) = err_counts
.into_iter()
.max_by(|(_, c1), (_, c2)| c1.cmp(c2))
.unwrap_or((nil_error.clone(), 0));
- // 比较 nil 错误和最高频率的非 nil 错误, 优先选择 nil 错误
+ // Compare the nil count against the most frequent non-nil error, preferring nil on ties
if nil_count > best_count || (nil_count == best_count && nil_count > 0) {
(nil_count, None)
} else {
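The reduction above has three steps: count `None` values as nil (success), tally the most frequent non-nil error, then let nil win on ties. A simplified stand-in over plain strings instead of the crate's `Error` type (`reduce` is an illustrative name for `reduce_errs`):

```rust
use std::collections::HashMap;

// Simplified stand-in for reduce_errs: returns the winning count and,
// when a non-nil error wins outright, the error itself.
fn reduce(errors: &[Option<String>]) -> (usize, Option<String>) {
    // First count the number of None values (treated as nil errors)
    let nil_count = errors.iter().filter(|e| e.is_none()).count();

    // Tally each distinct non-nil error
    let mut counts: HashMap<&String, usize> = HashMap::new();
    for e in errors.iter().flatten() {
        *counts.entry(e).or_default() += 1;
    }

    // Find the most frequent non-nil error (ties among them are arbitrary)
    let (best_err, best_count) = counts
        .into_iter()
        .max_by_key(|&(_, c)| c)
        .map(|(e, c)| (Some(e.clone()), c))
        .unwrap_or((None, 0));

    // Prefer nil on ties, matching the comparison in the diff
    if nil_count > best_count || (nil_count == best_count && nil_count > 0) {
        (nil_count, None)
    } else {
        (best_count, best_err)
    }
}

fn main() {
    let errs = vec![None, None, Some("EOF".to_string())];
    assert_eq!(reduce(&errs), (2, None)); // nil wins 2 to 1
}
```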
diff --git a/crates/ecstore/src/disk/local.rs b/crates/ecstore/src/disk/local.rs
index cccc1e26..ba96de5f 100644
--- a/crates/ecstore/src/disk/local.rs
+++ b/crates/ecstore/src/disk/local.rs
@@ -319,8 +319,8 @@ impl LocalDisk {
}
if cfg!(target_os = "windows") {
- // 在 Windows 上,卷名不应该包含保留字符。
- // 这个正则表达式匹配了不允许的字符。
+ // Windows volume names must not include reserved characters.
+ // This regular expression matches disallowed characters.
if volname.contains('|')
|| volname.contains('<')
|| volname.contains('>')
@@ -333,7 +333,7 @@ impl LocalDisk {
return false;
}
} else {
- // 对于非 Windows 系统,可能需要其他的验证逻辑。
+ // Non-Windows systems may require additional validation rules.
}
true
@@ -563,7 +563,7 @@ impl LocalDisk {
// return Ok(());
- // TODO: 异步通知 检测硬盘空间 清空回收站
+ // TODO: async notifications for disk space checks and trash cleanup
let trash_path = self.get_object_path(super::RUSTFS_META_TMP_DELETED_BUCKET, Uuid::new_v4().to_string().as_str())?;
// if let Some(parent) = trash_path.parent() {
@@ -846,13 +846,13 @@ impl LocalDisk {
}
}
- // 没有版本了,删除 xl.meta
+ // Remove xl.meta when no versions remain
if fm.versions.is_empty() {
self.delete_file(&volume_dir, &xlpath, true, false).await?;
return Ok(());
}
- // 更新 xl.meta
+ // Update xl.meta
let buf = fm.marshal_msg()?;
let volume_dir = self.get_bucket_path(volume)?;
@@ -1050,7 +1050,7 @@ impl LocalDisk {
let mut dir_objes = HashSet::new();
- // 第一层过滤
+ // First-level filtering
for item in entries.iter_mut() {
let entry = item.clone();
// check limit
@@ -1229,7 +1229,7 @@ fn is_root_path(path: impl AsRef) -> bool {
path.as_ref().components().count() == 1 && path.as_ref().has_root()
}
-// 过滤 std::io::ErrorKind::NotFound
+// Filter std::io::ErrorKind::NotFound
pub async fn read_file_exists(path: impl AsRef) -> Result<(Bytes, Option)> {
let p = path.as_ref();
let (data, meta) = match read_file_all(&p).await {
@@ -1920,11 +1920,11 @@ impl DiskAPI for LocalDisk {
}
}
- // xl.meta 路径
+ // xl.meta path
let src_file_path = src_volume_dir.join(Path::new(format!("{}/{}", &src_path, STORAGE_FORMAT_FILE).as_str()));
let dst_file_path = dst_volume_dir.join(Path::new(format!("{}/{}", &dst_path, STORAGE_FORMAT_FILE).as_str()));
- // data_dir 路径
+ // data_dir path
let has_data_dir_path = {
let has_data_dir = {
if !fi.is_remote() {
@@ -1952,7 +1952,7 @@ impl DiskAPI for LocalDisk {
check_path_length(src_file_path.to_string_lossy().to_string().as_str())?;
check_path_length(dst_file_path.to_string_lossy().to_string().as_str())?;
- // 读旧 xl.meta
+ // Read the previous xl.meta
let has_dst_buf = match super::fs::read_file(&dst_file_path).await {
Ok(res) => Some(res),
@@ -2437,7 +2437,7 @@ impl DiskAPI for LocalDisk {
async fn delete_volume(&self, volume: &str) -> Result<()> {
let p = self.get_bucket_path(volume)?;
- // TODO: 不能用递归删除,如果目录下面有文件,返回 errVolumeNotEmpty
+ // TODO: avoid recursive deletion; return errVolumeNotEmpty when files remain
if let Err(err) = fs::remove_dir_all(&p).await {
let e: DiskError = to_volume_error(err).into();
@@ -2591,7 +2591,7 @@ mod test {
assert!(object_path.to_string_lossy().contains("test-bucket"));
assert!(object_path.to_string_lossy().contains("test-object"));
- // 清理测试目录
+ // Clean up the test directory
let _ = fs::remove_dir_all(&test_dir).await;
}
@@ -2656,7 +2656,7 @@ mod test {
disk.delete_volume(vol).await.unwrap();
}
- // 清理测试目录
+ // Clean up the test directory
let _ = fs::remove_dir_all(&test_dir).await;
}
@@ -2680,7 +2680,7 @@ mod test {
assert!(!disk_info.fs_type.is_empty());
assert!(disk_info.total > 0);
- // 清理测试目录
+ // Clean up the test directory
let _ = fs::remove_dir_all(&test_dir).await;
}
diff --git a/crates/ecstore/src/disk/mod.rs b/crates/ecstore/src/disk/mod.rs
index 1e8af91c..3716f5eb 100644
--- a/crates/ecstore/src/disk/mod.rs
+++ b/crates/ecstore/src/disk/mod.rs
@@ -431,7 +431,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
async fn stat_volume(&self, volume: &str) -> Result;
async fn delete_volume(&self, volume: &str) -> Result<()>;
- // 并发边读边写 w <- MetaCacheEntry
+ // Concurrent read/write pipeline w <- MetaCacheEntry
async fn walk_dir(&self, opts: WalkDirOptions, wr: &mut W) -> Result<()>;
// Metadata operations
@@ -466,7 +466,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
) -> Result;
// File operations.
- // 读目录下的所有文件、目录
+ // List every file and directory under the given path
async fn list_dir(&self, origvolume: &str, volume: &str, dir_path: &str, count: i32) -> Result>;
async fn read_file(&self, volume: &str, path: &str) -> Result;
async fn read_file_stream(&self, volume: &str, path: &str, offset: usize, length: usize) -> Result;
@@ -1000,7 +1000,7 @@ mod tests {
// Note: is_online() might return false for local disks without proper initialization
// This is expected behavior for test environments
- // 清理测试目录
+ // Clean up the test directory
let _ = fs::remove_dir_all(&test_dir).await;
}
@@ -1031,7 +1031,7 @@ mod tests {
let location = disk.get_disk_location();
assert!(location.valid() || (!location.valid() && endpoint.pool_idx < 0));
- // 清理测试目录
+ // Clean up the test directory
let _ = fs::remove_dir_all(&test_dir).await;
}
}
diff --git a/crates/ecstore/src/disk/os.rs b/crates/ecstore/src/disk/os.rs
index 62670206..097cd61f 100644
--- a/crates/ecstore/src/disk/os.rs
+++ b/crates/ecstore/src/disk/os.rs
@@ -203,7 +203,7 @@ pub async fn os_mkdir_all(dir_path: impl AsRef, base_dir: impl AsRef
}
if let Some(parent) = dir_path.as_ref().parent() {
- // 不支持递归,直接 create_dir_all 了
+ // Without recursion support, fall back to create_dir_all
if let Err(e) = super::fs::make_dir_all(&parent).await {
if e.kind() == io::ErrorKind::AlreadyExists {
return Ok(());
diff --git a/crates/ecstore/src/erasure.rs b/crates/ecstore/src/erasure.rs
index 2ad3e270..2939fe13 100644
--- a/crates/ecstore/src/erasure.rs
+++ b/crates/ecstore/src/erasure.rs
@@ -297,24 +297,24 @@ impl Erasure {
pub fn encode_data(self: Arc, data: &[u8]) -> Result> {
let (shard_size, total_size) = self.need_size(data.len());
- // 生成一个新的 所需的所有分片数据长度
+ // Allocate a new buffer sized to hold every shard
let mut data_buffer = BytesMut::with_capacity(total_size);
- // 复制源数据
+ // Copy the source data
data_buffer.extend_from_slice(data);
data_buffer.resize(total_size, 0u8);
{
- // ec encode, 结果会写进 data_buffer
+ // Perform EC encoding; the results go into data_buffer
let data_slices: SmallVec<[&mut [u8]; 16]> = data_buffer.chunks_exact_mut(shard_size).collect();
- // parity 数量大于 0 才 ec
+ // Only perform EC encoding when parity shards are present
if self.parity_shards > 0 {
self.encoder.as_ref().unwrap().encode(data_slices).map_err(Error::other)?;
}
}
- // 零拷贝分片,所有 shard 引用 data_buffer
+ // Zero-copy shards: every shard references data_buffer
let mut data_buffer = data_buffer.freeze();
let mut shards = Vec::with_capacity(self.total_shard_count());
for _ in 0..self.total_shard_count() {
@@ -333,13 +333,13 @@ impl Erasure {
Ok(())
}
- // 每个分片长度,所需要的总长度
+ // The length per shard and the total required length
fn need_size(&self, data_size: usize) -> (usize, usize) {
let shard_size = self.shard_size(data_size);
(shard_size, shard_size * (self.total_shard_count()))
}
- // 算出每个分片大小
+ // Compute each shard size
pub fn shard_size(&self, data_size: usize) -> usize {
data_size.div_ceil(self.data_shards)
}
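The sizing arithmetic above is a ceiling division of the data length across the data shards, multiplied by the total shard count (data plus parity) for the buffer. A free-standing sketch with the shard counts passed explicitly rather than read from the crate's `Erasure` struct:

```rust
// Per-shard size is the data length divided evenly across data shards,
// rounded up; the total buffer must hold data plus parity shards.
fn need_size(data_size: usize, data_shards: usize, parity_shards: usize) -> (usize, usize) {
    let shard_size = data_size.div_ceil(data_shards); // ceiling division
    (shard_size, shard_size * (data_shards + parity_shards))
}

fn main() {
    // 10 bytes over 4 data shards -> 3-byte shards; 6 shards total -> 18 bytes
    assert_eq!(need_size(10, 4, 2), (3, 18));
}
```

The rounding up is why `encode_data` zero-pads the buffer with `resize` before encoding: the last data shard is rarely filled exactly.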
@@ -354,7 +354,7 @@ impl Erasure {
let last_shard_size = last_block_size.div_ceil(self.data_shards);
num_shards * self.shard_size(self.block_size) + last_shard_size
- // // 因为写入的时候 ec 需要补全,所以最后一个长度应该也是一样的
+ // When writing, EC pads the data so the last shard length should match
// if last_block_size != 0 {
// num_shards += 1
// }
@@ -447,12 +447,12 @@ pub trait ReadAt {
}
pub struct ShardReader {
- readers: Vec