todo: support x-minio-internal-

This commit is contained in:
weisd
2026-03-06 18:36:36 +08:00
parent e4a9bbd9f9
commit 79e4e00b03
10 changed files with 496 additions and 26 deletions

View File

@@ -0,0 +1,191 @@
# feat/xlmeta-compat Branch: MinIO Compatibility Difference Report
> Comparison baseline: `main` vs `feat/xlmeta-compat`
> Generated: 2025-03-06
## 1. Overview
The `feat/xlmeta-compat` branch modifies xl.meta, bucket metadata, erasure coding, checksums, and more to achieve storage-format compatibility with MinIO. The main goals are:
1. **xl.meta format**: align with MinIO's MessagePack encoding
2. **Cross-storage compatibility**: switch the Reed-Solomon implementation so MinIO data can be read
3. **Checksums**: make HighwayHash256S and Blake2b512 match MinIO
4. **Bucket metadata**: read and write the MinIO format, with migration support
5. **Lifecycle**: implement DelMarkerExpiration and rule validation
---
## 2. Commit History (11 commits)
| Commit | Description |
|------|------|
| `078035a6` | feat(filemeta): align xl.meta msgpack format |
| `36cd8ece` | fix xlmeta decode |
| `390eb482` | fix download compat |
| `3bb6cbf1` | to HighwayHash256S |
| `3d259480` | feat(ecstore): switch to reed-solomon-erasure for cross-storage compatibility |
| `809b4612` | feat(ecstore): enable simd-accel for reed-solomon-erasure |
| `57e98593` | refactor(ecstore): extract bucket metadata tests to separate file |
| `bbfef77e` | feat: add migration from legacy format and bucket metadata |
| `cdd787e6` | refactor: improve bucket metadata migration and related fixes |
| `5601af8e` | Merge origin/main into feat/xlmeta-compat |
| `6b4c3656` | feat(lifecycle): implement DelMarkerExpiration and rule validation |
---
## 3. Change Statistics
- **Files changed**: 48
- **Lines added**: ~2434
- **Lines deleted**: ~1298
- **Net increase**: ~1136
---
## 4. Core Module Changes
### 4.1 xl.meta (FileMeta) Format Compatibility
**Files involved**: `crates/filemeta/`
#### 4.1.1 MessagePack Codec Rewrite
- **`version.rs`**: replaced `rmp_serde` auto-serialization with hand-written `decode_from` / `encode_to`
  - Uses MinIO field names: `Type`, `V1Obj`, `V2Obj`, `DelObj`, `v`
  - Skips `V1Obj` (legacy)
  - Allows `V2Obj` / `DelObj` to be nil
  - `MetaObject` fields: `ID`, `DDir`, `EcAlgo`, `EcM`, `EcN`, `EcBSize`, `EcIndex`, `EcDist`, `CSumAlgo`, `PartNums`, `PartETags`, `PartSizes`, `PartASizes`, `PartIdx`, `Size`, `MTime`, `MetaSys`, `MetaUsr`
  - Empty arrays/maps are written as nil or omitted, matching MinIO
- **New `msgp_decode.rs`**:
  - `PrependByteReader`: pushes one pre-read byte back onto the stream
  - `read_nil_or_array_len` / `read_nil_or_map_len`
  - `skip_msgp_value`: skips unknown fields
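A helper like `read_nil_or_array_len` has to accept both encodings MinIO may emit for an empty collection: a nil marker or a zero-length array. The sketch below illustrates the idea over a raw byte slice; the name matches the report, but the signature and error handling are assumptions, not the real code in `msgp_decode.rs`:

```rust
/// Read either a MessagePack nil (treated as length 0) or an array length
/// marker from the front of `buf`. Returns (array_len, bytes_consumed).
fn read_nil_or_array_len(buf: &[u8]) -> Result<(usize, usize), String> {
    match buf.first().copied() {
        Some(0xc0) => Ok((0, 1)),                               // nil => empty
        Some(m @ 0x90..=0x9f) => Ok(((m & 0x0f) as usize, 1)),  // fixarray
        Some(0xdc) if buf.len() >= 3 => {
            // array16: big-endian u16 length
            Ok((u16::from_be_bytes([buf[1], buf[2]]) as usize, 3))
        }
        Some(0xdd) if buf.len() >= 5 => {
            // array32: big-endian u32 length
            Ok((u32::from_be_bytes([buf[1], buf[2], buf[3], buf[4]]) as usize, 5))
        }
        _ => Err("expected nil or array marker".into()),
    }
}
```

`read_nil_or_map_len` is the same idea with the fixmap/map16/map32 markers (0x80..0x8f, 0xde, 0xdf).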
#### 4.1.2 Version and Format
- **`XL_META_VERSION`**: 2 → 3
- **`codec.rs`**: the xl header now uses `write_uint` instead of `write_uint8`, matching MinIO's encoding
#### 4.1.3 Null Version Handling
- **New `data_key_for_version()`**: maps `None` / nil to `"null"`
- **`add_version`**: consistently uses `data_key_for_version` as the inline-data key
- **`shared_data_dir_count`**: uses `data_key_for_version(version_id)` instead of `version_id.to_string()`
- **Version matching**: `None` and `Some(nil)` are treated as equivalent
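The null-version mapping above can be sketched as follows. The real function operates on UUID version ids; this dependency-free sketch uses string ids, and the nil-UUID constant is an illustrative assumption:

```rust
// A missing or nil version id must key inline data as the literal "null"
// so it round-trips with MinIO's xl.meta layout.
const NIL_VERSION_ID: &str = "00000000-0000-0000-0000-000000000000";

fn data_key_for_version(version_id: Option<&str>) -> String {
    match version_id {
        // None and the nil UUID are treated as the same "null" version.
        None => "null".to_string(),
        Some(v) if v.is_empty() || v == NIL_VERSION_ID => "null".to_string(),
        Some(v) => v.to_string(),
    }
}
```

Using this one helper everywhere (in `add_version`, `shared_data_dir_count`, and version matching) is what makes `None` and `Some(nil)` interchangeable.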
#### 4.1.4 Version Count Limit
- The per-object version cap changed from 1000 to `DEFAULT_OBJECT_MAX_VERSIONS` (10000)
---
### 4.2 Checksum Algorithms (Checksum / Bitrot)
**Files involved**: `crates/utils/src/hash.rs`
| Change | main | feat/xlmeta-compat |
|--------|------|---------------------|
| **HighwayHash256 key** | fixed `[3,4,2,1]` | MinIO magic key (HH-256 of the first 100 digits of π as UTF-8) |
| **BLAKE2b512** | blake3 (32 bytes) | blake2 (64 bytes) |
| **Bitrot default** | HighwayHash256 | HighwayHash256S |
- Added `test_bitrot_selftest` and `test_highwayhash_compat` to compare results against MinIO
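The key change is just a different 32-byte constant fed to HighwayHash; turning those bytes into the `[u64; 4]` key the hasher wants is four little-endian reads. This mirrors the `highway_key_from_bytes` helper and the legacy key shown in the hash.rs diff below:

```rust
// Legacy main-branch key: the u64 words [3, 4, 2, 1] laid out little-endian.
const LEGACY_HIGHWAY_HASH256_KEY: [u8; 32] = [
    3, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0,
    2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
];

/// Convert a 32-byte key into HighwayHash's [u64; 4] key (little-endian words).
fn highway_key_from_bytes(bytes: &[u8; 32]) -> [u64; 4] {
    let mut key = [0u64; 4];
    for (i, chunk) in bytes.chunks_exact(8).enumerate() {
        key[i] = u64::from_le_bytes(chunk.try_into().unwrap());
    }
    key
}
```

The branch keeps both keys: the MinIO magic key for new writes, and the legacy key behind `HighwayHash256SLegacy` so main-branch data stays readable.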
---
### 4.3 Erasure Coding Implementation
**Files involved**: `crates/ecstore/src/erasure_coding/erasure.rs`
| Change | main | feat/xlmeta-compat |
|--------|------|---------------------|
| **Library** | `reed-solomon-simd` | `reed-solomon-erasure` (GF(2^8)) |
| **SIMD** | built in | `simd-accel` feature |
| **API** | custom encoder/decoder cache | direct `ReedSolomon::encode` / `reconstruct_data` |
- Goal: use the same GF(2^8) implementation as MinIO so data is readable across both stores
---
### 4.4 Bucket Metadata
**Files involved**: `crates/ecstore/src/bucket/`
#### 4.4.1 Serialization Format
- **`metadata.rs`**: dropped `rmp_serde` in favor of hand-written `decode_from` / `encode_to`
  - Field order matches MinIO's `BucketMetadata`
  - Timestamps use the MessagePack Ext8 format (`read_msgp_ext8_time` / `write_msgp_time`)
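The "Ext8 time" format here is, to my understanding, the layout Go's tinylib/msgp library (used by MinIO) writes for `time.Time`: an ext8 marker (0xc7) with a 12-byte payload of extension type 5, holding big-endian seconds (i64) followed by big-endian nanoseconds (u32). Treat that layout as an assumption to check against the real `read_msgp_ext8_time`:

```rust
/// Parse a tinylib/msgp-style ext8 timestamp: 0xc7, len=12, ext type 5,
/// then 8-byte BE seconds and 4-byte BE nanoseconds.
fn read_msgp_ext8_time(buf: &[u8]) -> Result<(i64, u32), String> {
    if buf.len() < 15 || buf[0] != 0xc7 || buf[1] != 12 || buf[2] != 5 {
        return Err("not a msgp ext8 time".into());
    }
    let secs = i64::from_be_bytes(buf[3..11].try_into().unwrap());
    let nanos = u32::from_be_bytes(buf[11..15].try_into().unwrap());
    Ok((secs, nanos))
}
```

`write_msgp_time` is the inverse: emit the three marker bytes, then the two big-endian integers.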
#### 4.4.2 New Modules
- **`msgp_decode.rs`**: MessagePack codec helpers (`skip_msgp_value`, `read_msgp_ext8_time`, `write_msgp_time`)
- **`migration.rs`**: migrates bucket metadata from `.minio.sys` to `.rustfs.sys`
- **`metadata_test.rs`**: bucket metadata unit tests
#### 4.4.3 Migration Flow
- `try_migrate_bucket_metadata` is called at startup
- Reads `.metadata.bin` from `MIGRATING_META_BUCKET` (`.minio.sys`) and writes it to `RUSTFS_META_BUCKET` (`.rustfs.sys`)
---
### 4.5 Storage Initialization and Format Migration
**Files involved**: `crates/ecstore/src/store_init.rs`
- **`try_migrate_format`**: on unformatted disks, attempts migration from MinIO's `format.json`
  - Reads `format.json` under `MIGRATING_META_BUCKET` and parses it as `FormatV3`
  - Validates the set count and drive count, then writes the RustFS format
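The validation step can be sketched like this. The struct is a simplified stand-in for `FormatV3`'s erasure section, not the real type:

```rust
/// Simplified stand-in for the erasure topology in a parsed format.json.
struct ErasureLayout {
    sets: Vec<Vec<String>>, // disk UUIDs, one inner Vec per erasure set
}

/// Refuse to adopt a MinIO format whose topology does not match the disks
/// this RustFS process was started with.
fn validate_layout(layout: &ErasureLayout, expected_sets: usize, expected_drives: usize) -> Result<(), String> {
    if layout.sets.len() != expected_sets {
        return Err(format!("set count mismatch: {} != {}", layout.sets.len(), expected_sets));
    }
    for (i, set) in layout.sets.iter().enumerate() {
        if set.len() != expected_drives {
            return Err(format!("set {i} drive count mismatch: {} != {}", set.len(), expected_drives));
        }
    }
    Ok(())
}
```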
---
### 4.6 Lifecycle
**Files involved**: `crates/ecstore/src/bucket/lifecycle/lifecycle.rs`
- **DelMarkerExpiration**: delete markers can expire after N days
- **Rule validation**:
  - DelMarkerExpiration cannot be combined with tag filters
  - Every rule must contain at least one action (Expiration / Transition / NoncurrentVersionExpiration / NoncurrentVersionTransition / DelMarkerExpiration)
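The two validation rules above can be sketched as follows; the struct shape is a flattened assumption, not the real lifecycle types:

```rust
/// Flattened stand-in for a lifecycle rule: booleans mark which actions
/// are present, and filter_tags models a tag-based rule filter.
#[derive(Default)]
struct Rule {
    expiration: bool,
    transition: bool,
    noncurrent_expiration: bool,
    noncurrent_transition: bool,
    del_marker_expiration_days: Option<u32>,
    filter_tags: Vec<(String, String)>,
}

fn validate(rule: &Rule) -> Result<(), String> {
    let has_action = rule.expiration
        || rule.transition
        || rule.noncurrent_expiration
        || rule.noncurrent_transition
        || rule.del_marker_expiration_days.is_some();
    if !has_action {
        return Err("rule has no action".into());
    }
    // DelMarkerExpiration targets delete markers, which carry no tags,
    // so a tag filter can never match it.
    if rule.del_marker_expiration_days.is_some() && !rule.filter_tags.is_empty() {
        return Err("DelMarkerExpiration cannot be combined with tag filters".into());
    }
    Ok(())
}
```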
---
### 4.7 Policy
**Files involved**: `crates/policy/src/policy/policy.rs`
- **deny_only behavior**: changed from "check only Deny statements; allow when nothing denies" to "deny unless explicitly allowed"
- Test renamed: `test_deny_only_checks_only_deny_statements` → `test_deny_only_security_fix`
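The behavioral change can be sketched with hypothetical types (the real evaluator works over full policy statements, not a bare effect list):

```rust
#[derive(PartialEq)]
enum Effect { Allow, Deny }

// Old deny_only: pass unless some matching statement explicitly denies.
// An empty match set was therefore implicitly allowed.
fn is_allowed_old(matching: &[Effect]) -> bool {
    !matching.iter().any(|e| *e == Effect::Deny)
}

// New behavior: additionally require at least one matching Allow,
// i.e. default-deny when no statement grants access.
fn is_allowed_new(matching: &[Effect]) -> bool {
    !matching.iter().any(|e| *e == Effect::Deny) && matching.iter().any(|e| *e == Effect::Allow)
}
```

The security-relevant difference is the empty case: a request matched by no statement used to slip through and is now rejected.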
---
## 5. Dependency Changes
| Package | main | feat/xlmeta-compat |
|----|------|---------------------|
| `reed-solomon-simd` | 3.1.0 | removed |
| `reed-solomon-erasure` | - | 6.0 (std, simd-accel) |
| `blake2` | - | 0.11.0-rc.5 |
| `serde_bytes` | - | 0.11 |
| `s3s` | s3s-project/s3s (rev) | weisd/s3s (feature/minio-compatibility) |
---
## 6. Other Changes
- **AGENTS.md**: several AGENTS.md files removed or adjusted (unrelated to compatibility)
- **bitrot tests**: default checksum changed from `HighwayHash256` to `HighwayHash256S`
- **scanner**: some definitions removed from `data_usage_define.rs`
- **user handler**: adjustments in `rustfs/src/admin/handlers/user.rs`
---
## 7. Compatibility Summary
1. **xl.meta**: field names, types, and nil handling match MinIO
2. **Erasure coding**: uses reed-solomon-erasure; data is mutually readable with MinIO
3. **Checksums**: the HighwayHash256S key and the Blake2b512 implementation match MinIO
4. **Bucket metadata**: MessagePack format is MinIO-compatible, with migration from `.minio.sys`
5. **Format**: supports migrating from MinIO's `format.json` to the RustFS format

View File

@@ -1498,6 +1498,12 @@ impl DiskAPI for LocalDisk {
let erasure = &fi.erasure;
for (i, part) in fi.parts.iter().enumerate() {
let checksum_info = erasure.get_checksum_info(part.number);
let checksum_algo =
if fi.uses_legacy_checksum && checksum_info.algorithm == rustfs_utils::HashAlgorithm::HighwayHash256S {
rustfs_utils::HashAlgorithm::HighwayHash256SLegacy
} else {
checksum_info.algorithm
};
let part_path = self.get_object_path(
volume,
path_join_buf(&[
@@ -1511,7 +1517,7 @@ impl DiskAPI for LocalDisk {
.bitrot_verify(
&part_path,
erasure.shard_file_size(part.size as i64) as usize,
checksum_info.algorithm,
checksum_algo,
&checksum_info.hash,
erasure.shard_size(),
)

View File

@@ -179,7 +179,7 @@ where
}
pub fn bitrot_shard_file_size(size: usize, shard_size: usize, algo: HashAlgorithm) -> usize {
if algo != HashAlgorithm::HighwayHash256S {
if algo != HashAlgorithm::HighwayHash256S && algo != HashAlgorithm::HighwayHash256SLegacy {
return size;
}
size.div_ceil(shard_size) * algo.size() + size

View File

@@ -3351,12 +3351,18 @@ async fn disks_with_all_parts(
if (meta.data.is_some() || meta.size == 0) && !meta.parts.is_empty() {
if let Some(data) = &meta.data {
let checksum_info = meta.erasure.get_checksum_info(meta.parts[0].number);
let checksum_algo =
if meta.uses_legacy_checksum && checksum_info.algorithm == rustfs_utils::HashAlgorithm::HighwayHash256S {
rustfs_utils::HashAlgorithm::HighwayHash256SLegacy
} else {
checksum_info.algorithm
};
let data_len = data.len();
let verify_err = bitrot_verify(
Box::new(Cursor::new(data.clone())),
data_len,
meta.erasure.shard_file_size(meta.size) as usize,
checksum_info.algorithm,
checksum_algo,
checksum_info.hash,
meta.erasure.shard_size(),
)

View File

@@ -347,7 +347,14 @@ impl SetDisks {
for (part_index, part) in latest_meta.parts.iter().enumerate() {
let till_offset = erasure.shard_file_offset(0, part.size, part.size);
let checksum_algo = erasure_info.get_checksum_info(part.number).algorithm;
let checksum_info = erasure_info.get_checksum_info(part.number);
let checksum_algo = if latest_meta.uses_legacy_checksum
&& checksum_info.algorithm == rustfs_utils::HashAlgorithm::HighwayHash256S
{
rustfs_utils::HashAlgorithm::HighwayHash256SLegacy
} else {
checksum_info.algorithm
};
let mut readers = Vec::with_capacity(latest_disks.len());
let mut writers = Vec::with_capacity(out_dated_disks.len());
// let mut errors = Vec::with_capacity(out_dated_disks.len());

View File

@@ -647,6 +647,14 @@ impl SetDisks {
"Streaming multipart part"
);
let checksum_info = fi.erasure.get_checksum_info(part_number);
let checksum_algo =
if fi.uses_legacy_checksum && checksum_info.algorithm == rustfs_utils::HashAlgorithm::HighwayHash256S {
rustfs_utils::HashAlgorithm::HighwayHash256SLegacy
} else {
checksum_info.algorithm
};
let mut readers = Vec::with_capacity(disks.len());
let mut errors = Vec::with_capacity(disks.len());
for (idx, disk_op) in disks.iter().enumerate() {
@@ -658,7 +666,7 @@ impl SetDisks {
read_offset,
till_offset,
erasure.shard_size(),
HashAlgorithm::HighwayHash256S,
checksum_algo.clone(),
)
.await
{

View File

@@ -0,0 +1,227 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Integration test: verify create_bitrot_reader with HighwayHash256SLegacy reads main-version objects.
//!
//! Supports both EC (part files) and inline objects. Reads xl.meta to compute params.
//! Uses create_bitrot_reader only (no ECStore/get_object_reader) to isolate bitrot compatibility.
//!
//! Run from workspace root:
//! cargo test -p rustfs-ecstore test_legacy_bitrot_read -- --nocapture
//!
//! Environment:
//! RUSTFS_LEGACY_TEST_ROOT - Test data root (default: workspace root)
//! RUSTFS_LEGACY_TEST_DISK - Disk name under root (default: "test1" for rustfs, "test" for minio)
//! RUSTFS_SKIP_LEGACY_TEST - Set to 1 to skip
//!
//! For MinIO data: RUSTFS_LEGACY_TEST_ROOT=/path/to/minio (disk "test" and .minio.sys auto-detected).
use rustfs_ecstore::bitrot::create_bitrot_reader;
use rustfs_ecstore::disk::endpoint::Endpoint;
use rustfs_ecstore::disk::{DiskOption, STORAGE_FORMAT_FILE, new_disk};
use rustfs_filemeta::{FileInfoOpts, get_file_info};
use rustfs_utils::HashAlgorithm;
use std::path::PathBuf;
use tokio::fs;
fn workspace_root() -> PathBuf {
let manifest = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".into());
PathBuf::from(&manifest)
.ancestors()
.nth(2)
.unwrap_or(std::path::Path::new("."))
.to_path_buf()
}
fn legacy_test_data_exists() -> bool {
if std::env::var("RUSTFS_SKIP_LEGACY_TEST").unwrap_or_default() == "1" {
return false;
}
let root = workspace_root();
let format_path = root.join("test1/.rustfs.sys/format.json");
let ktvzip_meta = root.join("test1/vvvv/ktvzip.tar.gz").join(STORAGE_FORMAT_FILE);
let path_traversal_meta = root.join("test1/vvvv/path_traversal.md").join(STORAGE_FORMAT_FILE);
format_path.exists() && (ktvzip_meta.exists() || path_traversal_meta.exists())
}
async fn run_legacy_bitrot_test_for_object(root: &PathBuf, disk_name: &str, bucket: &str, object: &str) -> bool {
let xl_meta_path = root.join(disk_name).join(bucket).join(object).join(STORAGE_FORMAT_FILE);
if !xl_meta_path.exists() {
eprintln!("xl_meta_path not found: {:?}", xl_meta_path);
return false;
}
let buf = match fs::read(&xl_meta_path).await {
Ok(b) => b,
Err(_) => {
eprintln!("Failed to read xl_meta_path: {:?}", xl_meta_path);
return false;
}
};
let fi = match get_file_info(
&buf,
bucket,
object,
"",
FileInfoOpts {
data: true, // need inline data for inline objects
include_free_versions: false,
},
) {
Ok(f) => f,
Err(_) => {
eprintln!("Failed to get file info: {:?}", xl_meta_path);
return false;
}
};
if fi.deleted || fi.parts.is_empty() {
eprintln!("File is deleted or has no parts: {:?}", xl_meta_path);
return false;
}
let shard_size = fi.erasure.shard_size();
let part_number = 1;
let checksum_info = fi.erasure.get_checksum_info(part_number);
let checksum_algo = if fi.uses_legacy_checksum && checksum_info.algorithm == HashAlgorithm::HighwayHash256S {
HashAlgorithm::HighwayHash256SLegacy
} else {
checksum_info.algorithm
};
eprintln!("fi.inline_data(): {:?}", fi.inline_data());
eprintln!("fi.is_remote(): {:?}", fi.metadata);
// Inline path: use fi.data
if fi.inline_data() {
let inline_bytes = match &fi.data {
Some(b) if !b.is_empty() => b.as_ref(),
_ => {
eprintln!("Inline data is empty: {:?}", xl_meta_path);
return false;
}
};
let read_length = fi.size as usize;
if read_length == 0 {
eprintln!("Read length is 0: {:?}", xl_meta_path);
return false;
}
let mut reader =
match create_bitrot_reader(Some(inline_bytes), None, bucket, "", 0, read_length, shard_size, checksum_algo.clone())
.await
{
Ok(Some(r)) => r,
_ => {
eprintln!("Failed to create bitrot reader for inline data: {:?}", xl_meta_path);
return false;
}
};
let mut buf = vec![0u8; shard_size];
match reader.read(&mut buf).await {
Ok(n) if n > 0 => {
eprintln!("Successfully read {} bytes (inline) via create_bitrot_reader with {:?}", n, checksum_algo);
return true;
}
_ => {
eprintln!("Failed to read inline data: {:?}", xl_meta_path);
return false;
}
}
}
// EC path: use disk + part file
let data_dir = match fi.data_dir {
Some(d) => d,
None => {
eprintln!("Data dir is empty: {:?}", xl_meta_path);
return false;
}
};
let path = format!("{object}/{data_dir}/part.{}", part_number);
let part_path = root.join(disk_name).join(bucket).join(&path);
if !part_path.exists() {
eprintln!("Part file not found: {:?}", part_path);
return false;
}
let disk_path = root.join(disk_name);
let path_str = disk_path.to_str().expect("path");
let mut ep = Endpoint::try_from(path_str).expect("endpoint");
ep.set_pool_index(0);
ep.set_set_index(0);
ep.set_disk_index(0);
let opt = DiskOption {
cleanup: false,
health_check: false,
};
let disk = match new_disk(&ep, &opt).await {
Ok(d) => d,
Err(_) => {
eprintln!("Failed to create disk: {:?}", disk_path);
return false;
}
};
let read_length = shard_size;
let mut reader =
match create_bitrot_reader(None, Some(&disk), bucket, &path, 0, read_length, shard_size, checksum_algo.clone()).await {
Ok(Some(r)) => r,
_ => {
eprintln!("Failed to create bitrot reader for EC part: {:?}", part_path);
return false;
}
};
let mut buf = vec![0u8; shard_size];
match reader.read(&mut buf).await {
Ok(n) if n > 0 => {
eprintln!(
"Successfully read {} bytes (EC part) via create_bitrot_reader with {:?}",
n, checksum_algo
);
true
}
_ => {
eprintln!("Failed to read EC part: {:?}", part_path);
return false;
}
}
}
#[tokio::test]
async fn test_legacy_bitrot_read() {
if !legacy_test_data_exists() {
eprintln!("Skipping legacy bitrot test: test1/vvvv xl.meta missing or RUSTFS_SKIP_LEGACY_TEST=1");
return;
}
// Honor RUSTFS_LEGACY_TEST_ROOT as documented above; default to the workspace root.
let root = std::env::var("RUSTFS_LEGACY_TEST_ROOT")
.map(PathBuf::from)
.unwrap_or_else(|_| workspace_root());
let disk_name = "test";
let bucket = "vvvv";
// Try both EC (part files) and inline objects
let ok = run_legacy_bitrot_test_for_object(&root, disk_name, bucket, "ktvzip.tar.gz").await;
eprintln!("ok: {:?}", ok);
assert!(ok, "create_bitrot_reader failed for ktvzip.tar.gz");
let ok = run_legacy_bitrot_test_for_object(&root, disk_name, bucket, "path_traversal.md").await;
assert!(ok, "create_bitrot_reader failed for path_traversal.md");
}

View File

@@ -412,8 +412,12 @@ impl FileInfo {
}
pub fn inline_data(&self) -> bool {
self.metadata
// check if the object is inline data,
(self
.metadata
.contains_key(format!("{RESERVED_METADATA_PREFIX_LOWER}inline-data").as_str())
|| self.metadata.contains_key("x-minio-internal-inline-data"))
&& !self.is_remote()
}

View File

@@ -343,23 +343,21 @@ impl TryFrom<&[u8]> for FileMetaVersion {
impl From<FileInfo> for FileMetaVersion {
fn from(value: FileInfo) -> Self {
{
if value.deleted {
FileMetaVersion {
version_type: VersionType::Delete,
delete_marker: Some(MetaDeleteMarker::from(value)),
object: None,
write_version: 0,
uses_legacy_checksum: value.uses_legacy_checksum,
}
} else {
FileMetaVersion {
version_type: VersionType::Object,
delete_marker: None,
object: Some(MetaObject::from(value)),
write_version: 0,
uses_legacy_checksum: value.uses_legacy_checksum,
}
if value.deleted {
FileMetaVersion {
version_type: VersionType::Delete,
delete_marker: Some(MetaDeleteMarker::from(value)),
object: None,
write_version: 0,
uses_legacy_checksum: false,
}
} else {
FileMetaVersion {
version_type: VersionType::Object,
delete_marker: None,
object: Some(MetaObject::from(value)),
write_version: 0,
uses_legacy_checksum: false,
}
}
}
@@ -1176,6 +1174,7 @@ impl MetaObject {
if k.starts_with(RESERVED_METADATA_PREFIX)
|| k.starts_with(RESERVED_METADATA_PREFIX_LOWER)
|| lower_k == VERSION_PURGE_STATUS_KEY.to_lowercase()
|| k.starts_with("x-minio-internal-")
{
metadata.insert(k.to_owned(), String::from_utf8(v.to_owned()).unwrap_or_default());
}
@@ -1292,8 +1291,10 @@ impl MetaObject {
}
pub fn inlinedata(&self) -> bool {
self.meta_sys
(self
.meta_sys
.contains_key(format!("{RESERVED_METADATA_PREFIX_LOWER}inline-data").as_str())
|| self.meta_sys.contains_key("x-minio-internal-inline-data"))
}
pub fn reset_inline_data(&mut self) {

View File

@@ -24,6 +24,11 @@ const MAGIC_HIGHWAY_HASH256_KEY: [u8; 32] = [
0x9f, 0x44, 0x14, 0x97, 0xe0, 0x9d, 0x13, 0x22, 0xde, 0x36, 0xa0,
];
/// Legacy HH-256 key (main branch): fixed [3,4,2,1] as u64 LE.
const LEGACY_HIGHWAY_HASH256_KEY: [u8; 32] = [
3, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
];
fn highway_key_from_bytes(bytes: &[u8; 32]) -> [u64; 4] {
let mut key = [0u64; 4];
for (i, chunk) in bytes.chunks_exact(8).enumerate() {
@@ -42,6 +47,8 @@ pub enum HashAlgorithm {
// HighwayHash256S represents the Streaming HighwayHash-256 hash function
#[default]
HighwayHash256S,
/// Legacy HighwayHash256S (main branch) with fixed key [3,4,2,1]
HighwayHash256SLegacy,
// BLAKE2b512 represents the BLAKE2b-512 hash function
BLAKE2b512,
/// MD5 (128-bit)
@@ -55,6 +62,7 @@ enum HashEncoded {
Sha256([u8; 32]),
HighwayHash256([u8; 32]),
HighwayHash256S([u8; 32]),
HighwayHash256SLegacy([u8; 32]),
Blake2b512([u8; 64]),
None,
}
@@ -67,6 +75,7 @@ impl AsRef<[u8]> for HashEncoded {
HashEncoded::Sha256(hash) => hash.as_ref(),
HashEncoded::HighwayHash256(hash) => hash.as_ref(),
HashEncoded::HighwayHash256S(hash) => hash.as_ref(),
HashEncoded::HighwayHash256SLegacy(hash) => hash.as_ref(),
HashEncoded::Blake2b512(hash) => hash.as_ref(),
HashEncoded::None => &[],
}
@@ -107,6 +116,12 @@ impl HashAlgorithm {
hasher.append(data);
HashEncoded::HighwayHash256S(u8x32_from_u64x4(hasher.finalize256()))
}
HashAlgorithm::HighwayHash256SLegacy => {
let key = Key(highway_key_from_bytes(&LEGACY_HIGHWAY_HASH256_KEY));
let mut hasher = HighwayHasher::new(key);
hasher.append(data);
HashEncoded::HighwayHash256SLegacy(u8x32_from_u64x4(hasher.finalize256()))
}
HashAlgorithm::BLAKE2b512 => {
let hash = Blake2b512::digest(data);
let mut out = [0u8; 64];
@@ -127,6 +142,7 @@ impl HashAlgorithm {
HashAlgorithm::SHA256 => 32,
HashAlgorithm::HighwayHash256 => 32,
HashAlgorithm::HighwayHash256S => 32,
HashAlgorithm::HighwayHash256SLegacy => 32,
HashAlgorithm::BLAKE2b512 => 64,
HashAlgorithm::Md5 => 16,
HashAlgorithm::None => 0,
@@ -248,7 +264,7 @@ mod tests {
#[test]
fn test_bitrot_selftest() {
let checksums: [(HashAlgorithm, &str); 4] = [
let checksums: [(HashAlgorithm, &str); 5] = [
(HashAlgorithm::SHA256, "a7677ff19e0182e4d52e3a3db727804abc82a5818749336369552e54b838b004"),
(
HashAlgorithm::BLAKE2b512,
@@ -262,12 +278,16 @@ mod tests {
HashAlgorithm::HighwayHash256S,
"39c0407ed3f01b18d22c85db4aeff11e060ca5f43131b0126731ca197cd42313",
),
(
HashAlgorithm::HighwayHash256SLegacy,
"a5592a831588836b0f61bff43da4bd957c376d9b6412a9ecbbd144a3ecf34649",
),
];
for (algo, expected_hex) in checksums {
let block_size = match algo {
HashAlgorithm::SHA256 => 64,
HashAlgorithm::BLAKE2b512 => 128,
HashAlgorithm::HighwayHash256 | HashAlgorithm::HighwayHash256S => 32,
HashAlgorithm::HighwayHash256 | HashAlgorithm::HighwayHash256S | HashAlgorithm::HighwayHash256SLegacy => 32,
_ => continue,
};
let mut msg = Vec::new();