Fix for NoSuchKey Error Response Regression (Issue #901)
Problem Statement
In RustFS version 1.0.69, a regression was introduced where attempting to download a non-existent or deleted object would return a networking error instead of the expected NoSuchKey S3 error:
Expected: Aws::S3::Errors::NoSuchKey
Actual: Seahorse::Client::NetworkingError: "http response body truncated, expected 119 bytes, received 0 bytes"
Root Cause Analysis
The issue was caused by the CompressionLayer middleware being applied to all HTTP responses, including S3 error responses. The sequence of events that led to the bug:
- Client requests a non-existent object via GetObject
- RustFS determines the object doesn't exist
- The s3s library generates a NoSuchKey error response (XML format, ~119 bytes)
- HTTP headers are written, including Content-Length: 119
- The CompressionLayer attempts to compress the error response body
- Due to compression buffering or encoding issues with small payloads, the body becomes empty (0 bytes)
- The client receives Content-Length: 119 but the actual body is 0 bytes
- The AWS SDK throws a "truncated body" networking error instead of parsing the S3 error
Solution
The fix implements an intelligent compression predicate (ShouldCompress) that excludes certain responses from compression:
Exclusion Criteria
- Error Responses (4xx and 5xx): Never compress error responses to ensure error details are preserved and transmitted accurately
- Small Responses (< 256 bytes): Skip compression for very small responses where compression overhead outweighs benefits
Implementation Details
// Imports and the struct definition are shown here for completeness; they
// assume tower-http's compression Predicate trait and the tracing crate.
use http::Response;
use tower_http::compression::predicate::Predicate;
use tracing::debug;

#[derive(Clone, Copy, Debug, Default)]
pub struct ShouldCompress;

impl Predicate for ShouldCompress {
    fn should_compress<B>(&self, response: &Response<B>) -> bool
    where
        B: http_body::Body,
    {
        let status = response.status();

        // Never compress error responses (4xx and 5xx status codes).
        if status.is_client_error() || status.is_server_error() {
            debug!("Skipping compression for error response: status={}", status.as_u16());
            return false;
        }

        // Check the Content-Length header to avoid compressing very small responses.
        if let Some(content_length) = response.headers().get(http::header::CONTENT_LENGTH) {
            if let Ok(length_str) = content_length.to_str() {
                if let Ok(length) = length_str.parse::<u64>() {
                    if length < 256 {
                        debug!("Skipping compression for small response: size={} bytes", length);
                        return false;
                    }
                }
            }
        }

        // Compress successful responses of sufficient size.
        true
    }
}
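On its own the predicate does nothing; it has to be attached to the compression middleware. A minimal wiring sketch, assuming tower-http's CompressionLayer is used and with inner_service standing in for the actual S3 handler (the real service construction in RustFS may differ):

use tower::ServiceBuilder;
use tower_http::compression::CompressionLayer;

// Only responses approved by ShouldCompress are compressed; everything else
// (error responses, tiny bodies) passes through untouched.
let service = ServiceBuilder::new()
    .layer(CompressionLayer::new().compress_when(ShouldCompress))
    .service(inner_service);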
Benefits
- Correctness: Error responses are now transmitted with accurate Content-Length headers
- Compatibility: AWS SDKs and other S3 clients correctly receive and parse error responses
- Performance: Small responses avoid unnecessary compression overhead
- Observability: Debug logging provides visibility into compression decisions
Testing
Comprehensive test coverage was added to prevent future regressions:
Test Cases
- test_get_deleted_object_returns_nosuchkey: Verifies that getting a deleted object returns NoSuchKey
- test_head_deleted_object_returns_nosuchkey: Verifies that HeadObject also returns NoSuchKey for deleted objects
- test_get_nonexistent_object_returns_nosuchkey: Tests objects that never existed
- test_multiple_gets_deleted_object: Ensures stability across multiple consecutive requests
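For illustration, here is a minimal sketch of the nonexistent-key case written against the aws-sdk-s3 Rust client. The endpoint, credentials, bucket name, and path-style setting are assumptions for a local RustFS instance (the bucket is assumed to already exist); the actual tests in the repository may be structured differently.

use aws_config::{BehaviorVersion, Region};
use aws_sdk_s3::{config::Credentials, Client};

#[tokio::test]
#[ignore] // requires a running RustFS server
async fn get_nonexistent_object_returns_nosuchkey_sketch() {
    // Placeholder endpoint and credentials for a local server.
    let creds = Credentials::new("rustfsadmin", "rustfsadmin", None, None, "static");
    let shared = aws_config::defaults(BehaviorVersion::latest())
        .endpoint_url("http://127.0.0.1:9000")
        .region(Region::new("us-east-1"))
        .credentials_provider(creds)
        .load()
        .await;
    let s3_config = aws_sdk_s3::config::Builder::from(&shared)
        .force_path_style(true)
        .build();
    let client = Client::from_conf(s3_config);

    // The bucket is assumed to exist; the key deliberately does not.
    let err = client
        .get_object()
        .bucket("test-bucket")
        .key("does-not-exist")
        .send()
        .await
        .expect_err("GetObject on a missing key must fail");

    // With the fix, the SDK parses the XML error body instead of failing
    // with a truncated-body transport error.
    assert!(err.into_service_error().is_no_such_key());
}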
Running Tests
# Run the specific test
cargo test --test get_deleted_object_test -- --ignored
# Or start RustFS server and run tests
./scripts/dev_rustfs.sh
cargo test --test get_deleted_object_test
Impact Assessment
Affected APIs
- GetObject
- HeadObject
- Any S3 API that returns 4xx/5xx error responses
Backward Compatibility
- No breaking changes: The fix only affects error response handling
- Improved compatibility: Better alignment with S3 specification and AWS SDK expectations
- No performance degradation: Small responses were already not compressed by default in most cases
Deployment Considerations
Verification Steps
- Deploy the fix to a staging environment
- Run the provided Ruby reproduction script to verify the fix
- Monitor error logs for any compression-related warnings
- Verify that large successful responses are still being compressed
Monitoring
Enable debug logging to observe compression decisions:
RUST_LOG=rustfs::server::http=debug
Look for log messages like:
Skipping compression for error response: status=404
Skipping compression for small response: size=119 bytes
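The RUST_LOG filter above takes effect only if the server initializes its logger from the environment. Assuming a tracing-subscriber based setup (an assumption about RustFS's logging stack), that initialization looks roughly like:

use tracing_subscriber::EnvFilter;

// Reads RUST_LOG, e.g. RUST_LOG=rustfs::server::http=debug, and applies it
// as the log filter for the process.
tracing_subscriber::fmt()
    .with_env_filter(EnvFilter::from_default_env())
    .init();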
Related Issues
- Issue #901: Regression in exception when downloading non-existent key in alpha 69
- Commit: 8618570383 (GetObject content-length fix)