Compare commits

..

17 Commits

Author SHA1 Message Date
zzhpro
c2d782bed1 feat: support conditional writes (#409)
* feat: support conditional writes

* refactor: avoid using unwrap

* fix: obtain lock before check in CompleteMultiPartUpload

* refactor: do not obtain a lock when getting object meta

* fix: avoid using unwrap and modifying incoming arguments

* test: add e2e tests for conditional writes

---------

Co-authored-by: guojidan <63799833+guojidan@users.noreply.github.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
2025-08-25 18:35:24 -07:00
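The server-side changes for #409 are not reproduced in this compare, but from a client's point of view the feature looks roughly like the sketch below — assuming the aws-sdk-s3 crate already pinned in this workspace and an endpoint that honors `If-None-Match` (`put_if_absent` is an illustrative name, not RustFS API):

```rust
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::Client;

/// Create `key` only if it does not already exist. `If-None-Match: *`
/// turns the PUT into a conditional write: the server answers
/// `412 Precondition Failed` when another writer got there first,
/// instead of silently overwriting the object.
async fn put_if_absent(
    client: &Client,
    bucket: &str,
    key: &str,
    body: Vec<u8>,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .put_object()
        .bucket(bucket)
        .key(key)
        .if_none_match("*") // the conditional-write precondition
        .body(ByteStream::from(body))
        .send()
        .await?;
    Ok(())
}
```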
likewu
e00f5be746 Fix/addtier (#454)
* fix retry

* fmt

* fix

* fix

* fix

---------

Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
2025-08-25 10:24:48 +08:00
shiro.lee
e23297f695 fix: add the default port number to the given server domains (#373) (#458) 2025-08-25 07:49:36 +08:00
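A minimal sketch of what "add the default port number to the given server domains" plausibly means in practice; `ensure_default_port` is a hypothetical helper, not the actual function from #373/#458:

```rust
/// Hypothetical helper in the spirit of #373/#458: ensure every configured
/// server domain carries an explicit port, so virtual-host-style requests
/// such as `bucket.example.com:9000` can match the domain list.
/// (Real code would also have to handle IPv6 literals like `[::1]`.)
fn ensure_default_port(domain: &str, default_port: u16) -> String {
    match domain.rsplit_once(':') {
        // Already ends in a numeric port: leave the domain untouched.
        Some((_, port)) if !port.is_empty() && port.bytes().all(|b| b.is_ascii_digit()) => {
            domain.to_string()
        }
        _ => format!("{domain}:{default_port}"),
    }
}
```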
0xdx2
d6840a6e04 feat: add support for range requests in upload_part_copy and implement parse_copy_source_range function (#453)
* feat: add support for range requests in upload_part_copy and implement parse_copy_source_range function

* style: format debug and error logging for improved readability

* feat: implement parse_copy_source_range function and improve error handling in range requests

* Update rustfs/src/storage/ecfs.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: correct return type in parse_copy_source_range function

* fix: remove unnecessary unwrap in parse_copy_source_range tests

* fix: simplify etag comparison in copy condition validation

---------

Co-authored-by: DamonXue <damonxue2@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
2025-08-24 10:54:48 +08:00
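For reference, `x-amz-copy-source-range` uses the same `bytes=first-last` grammar as an HTTP Range header, with both ends inclusive. A minimal sketch of the parsing step — the actual `parse_copy_source_range` in `rustfs/src/storage/ecfs.rs` may accept more forms and return richer errors:

```rust
/// Parse an `x-amz-copy-source-range` header of the form `bytes=first-last`
/// (both ends inclusive, per the S3 UploadPartCopy API) into an
/// `(offset, length)` pair.
fn parse_copy_source_range(range: &str) -> Result<(u64, u64), String> {
    let spec = range
        .strip_prefix("bytes=")
        .ok_or_else(|| format!("invalid range prefix: {range}"))?;
    let (first, last) = spec
        .split_once('-')
        .ok_or_else(|| format!("invalid range spec: {spec}"))?;
    let first: u64 = first.parse().map_err(|_| "invalid first byte".to_string())?;
    let last: u64 = last.parse().map_err(|_| "invalid last byte".to_string())?;
    if last < first {
        return Err(format!("range end {last} precedes start {first}"));
    }
    Ok((first, last - first + 1))
}
```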
houseme
3557a52dc4 Potential fix for code scanning alert no. 7: Workflow does not contain permissions (#457)
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-08-24 10:10:04 +08:00
houseme
fd2aab2bd9 fix: revert #443 #446 (#452)
* fix: revert #443 #446

* fix
2025-08-23 17:30:06 +08:00
houseme
f1c50fcb74 fix:Workflow does not contain permissions (#451) 2025-08-23 12:35:23 +08:00
houseme
bdcba3460e Potential fix for code scanning alert no. 13: Code injection (#447)
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
2025-08-23 10:05:00 +08:00
houseme
8857f31b07 Comment out error log for missing subscribers (#448) 2025-08-22 21:15:46 +08:00
loverustfs
5b85bf7a00 lock: dedicate unlock worker to thread runtime; robust fallback in Drop (#446)
* lock: dedicate unlock worker to thread runtime; robust fallback in Drop

* Update crates/lock/src/guard.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update crates/lock/src/guard.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update crates/lock/src/guard.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Refactor logging in UNLOCK_TX error handling

Removed redundant logging of lock_id in warning message.

---------

Co-authored-by: houseme <housemecn@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-22 16:51:56 +08:00
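A minimal sketch of the pattern #446 describes: lock guards hand their unlock off to a dedicated single-threaded runtime through a channel, so `Drop` never depends on the caller's async context. `release_lock` is a stand-in for the crate's real unlock RPC, and the fallback shown is only illustrative:

```rust
use std::sync::OnceLock;
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};

static UNLOCK_TX: OnceLock<UnboundedSender<String>> = OnceLock::new();

/// A dedicated single-threaded runtime whose only job is releasing locks,
/// so a guard's Drop never has to block on (or panic without) the
/// caller's async runtime.
fn unlock_worker() -> &'static UnboundedSender<String> {
    UNLOCK_TX.get_or_init(|| {
        let (tx, mut rx) = unbounded_channel::<String>();
        std::thread::spawn(move || {
            let rt = tokio::runtime::Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("build unlock runtime");
            rt.block_on(async move {
                while let Some(lock_id) = rx.recv().await {
                    release_lock(&lock_id).await;
                }
            });
        });
        tx
    })
}

struct LockGuard {
    lock_id: String,
}

impl Drop for LockGuard {
    fn drop(&mut self) {
        // Hand the unlock to the worker; if its channel is gone we can only
        // log and let the lock expire via its TTL.
        if unlock_worker().send(self.lock_id.clone()).is_err() {
            tracing::warn!("unlock worker unavailable; relying on lock TTL");
        }
    }
}

/// Stand-in for the crate's real async unlock call.
async fn release_lock(_lock_id: &str) {}
```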
loverustfs
46bd75c0f8 ahm(scanner): throttle scanning, skip recently-modified objects, and … (#443)
* ahm(scanner): throttle scanning, skip recently-modified objects, and gate missing-object heals to deep mode; adjust conservative defaults

Signed-off-by: loverustfs <hello@rustfs.com>

* ecstore: enable virtual-host AUTO heuristics and URL building; signer: fix SigV2 canonical resource for vhost; add unit tests

* ecstore: AUTO virtual-host style URL selection; signer: SigV2 canonical resource fixes for vhost; tests added.
ahm: fix clippy drop_non_drop; integration tests robust to existing bucket; ignore flaky lifecycle test.
Makefile: test target falls back to cargo test when nextest missing.
pre-commit: all checks green.

---------

Signed-off-by: loverustfs <hello@rustfs.com>
2025-08-22 16:03:29 +08:00
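The scanner gating in #443, sketched under assumed names and thresholds (the crate's actual defaults may differ):

```rust
use std::time::{Duration, SystemTime};

/// Sketch of the #443 gating: objects written inside a grace window are
/// skipped so the scanner does not race in-flight writes, and missing-object
/// heals fire only from the more expensive deep scan. The names and the
/// 60-second window are illustrative, not the crate's actual defaults.
fn should_heal_missing(modified: SystemTime, deep_scan: bool) -> bool {
    let recently_modified = modified
        .elapsed()
        .map(|age| age < Duration::from_secs(60))
        .unwrap_or(true); // clock skew: err on the side of skipping
    deep_scan && !recently_modified
}
```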
houseme
5fc5dd0fd9 Remove rustfs-gui module (#445)
This commit completely removes the rustfs-gui module from the project. The deletion includes:

- All source code files (*.rs) and associated resources
- GUI-specific dependencies from Cargo.toml
- Build scripts and configuration files specific to the GUI module
- Documentation and assets related to the graphical interface

The removal is performed because:
- The GUI component is no longer maintained
- Focus is shifting to core functionality and CLI interface
- Limited resources available for GUI development and maintenance

The core filesystem functionality remains available through the rustfs library and CLI interface.
2025-08-22 09:15:22 +08:00
houseme
adc07e5209 feat(targets): extract targets module into a standalone crate (#441)
* init audit logger module

* add audit webhook default config kvs

* feat: Add comprehensive tests for authentication module (#309)

* feat: add comprehensive tests for authentication module

- Add 33 unit tests covering all public functions in auth.rs
- Test IAMAuth struct creation and secret key validation
- Test check_claims_from_token with various credential types and scenarios
- Test session token extraction from headers and query parameters
- Test condition values generation for different user types
- Test query parameter parsing with edge cases
- Test Credentials helper methods (is_expired, is_temp, is_service_account)
- Ensure tests handle global state dependencies gracefully
- All tests pass successfully with 100% coverage of testable functions

* style: fix code formatting issues

* Add verification script for checking PR branch statuses and tests

Co-authored-by: anzhengchao <anzhengchao@gmail.com>

* fix: resolve clippy uninlined format args warning

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* feat: add basic tests for core storage module (#313)

* feat: add basic tests for core storage module

- Add 6 unit tests for FS struct and basic functionality
- Test FS creation, Debug and Clone trait implementations
- Test RUSTFS_OWNER constant definition and values
- Test S3 error code creation and handling
- Test compression format detection for common file types
- Include comprehensive documentation about integration test needs

Note: Full S3 API testing requires complex setup with storage backend,
global configuration, and network infrastructure - better suited for
integration tests rather than unit tests.

* style: fix code formatting issues

* fix: resolve clippy warnings in storage tests

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* feat: add tests for admin handlers module (#314)

* feat: add tests for admin handlers module

- Add 5 new unit tests for admin handler functionality
- Test AccountInfo struct creation, serialization and default values
- Test creation of all admin handler structs (13 handlers)
- Test HealOpts JSON serialization and deserialization
- Test HealOpts URL encoding/decoding with proper field types
- Maintain existing test while adding comprehensive coverage
- Include documentation about integration test requirements

All tests pass successfully with proper error handling for complex dependencies.

* style: fix code formatting issues

* fix: resolve clippy warnings in admin handlers tests

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* build(deps): bump the dependencies group with 3 updates (#326)

* perf: avoid transmitting parity shards when the object is good (#322)

* upgrade version

* Fix: fix data integrity check

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Fix: Separate Clippy's fix and check commands into two commands.

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: miss inline metadata (#345)

* Update dependabot.yml

* fix: Fixed an issue where the list_objects_v2 API did not return dire… (#352)

* fix: Fixed an issue where the list_objects_v2 API did not return directory names when they conflicted with file names in the same bucket (e.g., test/ vs. test.txt, aaa/ vs. aaa.csv) (#335)

* fix: adjusted the order of directory listings

* init

* fix

* fix

* feat: add docker usage for rustfs mcp (#365)

* feat: enhance metadata extraction with object name for MIME type detection

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Feature: lock support auto release

Signed-off-by: junxiang Mu <1948535941@qq.com>

* improve lock

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Fix: fix scanner detect

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Fix: clippy && fmt

Signed-off-by: junxiang Mu <1948535941@qq.com>

* refactor(ecstore): Optimize memory usage for object integrity verification

Change the object integrity verification from reading all data to streaming processing to avoid memory overflow caused by large objects.

Modify the TLS key log check to use environment variables directly instead of configuration constants.

Add memory limits for object data reading in the AHM module.

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Chore: reduce PR template checklist

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Chore: remove comment code (#376)

Signed-off-by: junxiang Mu <1948535941@qq.com>

* chore: upgrade actions/checkout from v4 to v5 (#381)

* chore: upgrade actions/checkout from v4 to v5

- Update GitHub Actions checkout action version
- Ensure compatibility with latest workflow features
- Maintain existing checkout behavior and configuration

* upgrade version

* fix

* add and improve code for notify

* feat: extend rustfs mcp with bucket creation and deletion (#416)

* feat: extend rustfs mcp with bucket creation and deletion

* update file to fix pipeline error

* change variable name to fix pipeline error

* fix(ecstore): add async-recursion to resolve nightly trait solver reg… (#415)

* fix(ecstore): add async-recursion to resolve nightly trait solver regression

The newest nightly compiler switched to the new trait solver, which
currently rejects async recursive functions that were previously accepted.
This causes the following compilation failures:

- `LocalDisk::delete_file()`
- `LocalDisk::scan_dir()`

Add `async-recursion` as a workspace dependency and annotate both functions with `#[async_recursion]` so that the crate compiles cleanly with the latest nightly and will continue to build once the new solver lands in stable (a minimal sketch of the pattern follows this commit entry).

Signed-off-by: reigadegr <2722688642@qq.com>

* fix: resolve duplicate bound error in scan_dir function

Replaced inline trait bounds with where clause to avoid duplication caused by macro expansion.

Signed-off-by: reigadegr <2722688642@qq.com>

---------

Signed-off-by: reigadegr <2722688642@qq.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>

* fix:make bucket exists (#428)

* feat: include user-defined metadata in S3 response (#431)

* fix: simplify Docker entrypoint following efficient user switching pattern (#421)

* fix: simplify Docker entrypoint following efficient user switching pattern

- Remove ALL file permission modifications (no chown at all)
- Use chroot --userspec or gosu to switch user context
- Extremely simple and fast implementation
- Zero filesystem modifications for permissions

Fixes #388

* Update entrypoint.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update entrypoint.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update entrypoint.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* wip

* wip

* wip

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* docs: update doc/docker-data-dir README.md (#432)

* add targets crates

* feat(targets): extract targets module into a standalone crate

- Move all target-related code (MQTT, Webhook, etc.) into a new `targets` crate
- Update imports and dependencies to reference the new crate
- Refactor interfaces to ensure compatibility with the new crate structure
- Adjust Cargo.toml and workspace configuration accordingly

* fix

* fix

---------

Signed-off-by: junxiang Mu <1948535941@qq.com>
Signed-off-by: reigadegr <2722688642@qq.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: zzhpro <56196563+zzhpro@users.noreply.github.com>
Co-authored-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: weisd <im@weisd.in>
Co-authored-by: shiro.lee <69624924+shiroleeee@users.noreply.github.com>
Co-authored-by: majinghe <42570491+majinghe@users.noreply.github.com>
Co-authored-by: guojidan <63799833+guojidan@users.noreply.github.com>
Co-authored-by: reigadegr <103645642+reigadegr@users.noreply.github.com>
Co-authored-by: 0xdx2 <xuedamon2@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-21 22:33:07 +08:00
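The async-recursion note quoted in the squashed commit above (#415) is easiest to see in code. A minimal sketch, not the actual `LocalDisk::scan_dir`:

```rust
use async_recursion::async_recursion;
use std::path::PathBuf;

/// Sketch of the shape fixed in #415. A plain recursive `async fn` yields an
/// infinitely-sized future; the `#[async_recursion]` attribute boxes the
/// recursive call so the function compiles under the new trait solver as
/// well as the old one.
#[async_recursion]
async fn scan_dir(path: PathBuf) -> std::io::Result<u64> {
    let mut files = 0;
    let mut entries = tokio::fs::read_dir(&path).await?;
    while let Some(entry) = entries.next_entry().await? {
        if entry.file_type().await?.is_dir() {
            files += scan_dir(entry.path()).await?; // the recursive await
        } else {
            files += 1;
        }
    }
    Ok(files)
}
```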
安正超
357cced49c Replace prints with logs and fix grammar (#437)
* refactor: replace print statements with proper logging and fix grammar

- Fix English grammar errors in existing log messages
- Add tracing imports where needed
- Improve log message clarity and consistency
- Follow project logging best practices using tracing crate

* fix: resolve clippy warnings and format code

- Fix unused import warnings by making test imports conditional with #[cfg(test)]
- Fix unused variable warning by prefixing with underscore
- Run cargo fmt to fix formatting issues
- Ensure all code passes clippy checks with -D warnings flag

* refactor: move tracing::debug import into test module

Move the tracing::debug import from file-level #[cfg(test)] into the test module itself for better code organization and consistency with other test modules

* Checkpoint before follow-up message

Co-authored-by: anzhengchao <anzhengchao@gmail.com>

* refactor: move tracing::debug import into test module in user_agent.rs

Complete the refactoring by moving the tracing::debug import from file-level #[cfg(test)] into the test module for consistency across all test files

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-08-21 05:49:40 +08:00
Csrayz
a104c33974 [FEAT] add error message (#435)
When the bucket is not found, return a helpful message
2025-08-20 23:49:59 +08:00
安正超
516e00f15f fix: Dockerfile with erroneous permission change. (#436)
* fix: dockerfile and permission error.

* fix: dockerfile and permission error.
2025-08-20 23:32:03 +08:00
安正超
a64c3c28b8 docs: update doc/docker-data-dir README.md (#432) 2025-08-20 00:00:20 +08:00
140 changed files with 4185 additions and 8458 deletions

View File

@@ -31,6 +31,9 @@ on:
- cron: '0 0 * * 0' # Weekly on Sunday at midnight UTC
workflow_dispatch:
permissions:
contents: read
env:
CARGO_TERM_COLOR: always

View File

@@ -70,6 +70,9 @@ on:
default: true
type: boolean
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1

View File

@@ -59,6 +59,9 @@ on:
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1

View File

@@ -58,6 +58,10 @@ on:
type: boolean
env:
CONCLUSION: ${{ github.event.workflow_run.conclusion }}
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
TRIGGERING_EVENT: ${{ github.event.workflow_run.event }}
DOCKERHUB_USERNAME: rustfs
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
@@ -102,27 +106,27 @@ jobs:
# Check if the triggering workflow was successful
# If the workflow succeeded, it means ALL builds (including Linux x86_64 and aarch64) succeeded
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
if [[ "$CONCLUSION" == "success" ]]; then
echo "✅ Build workflow succeeded, all builds including Linux are successful"
should_build=true
should_push=true
else
echo "❌ Build workflow failed (conclusion: ${{ github.event.workflow_run.conclusion }}), skipping Docker build"
echo "❌ Build workflow failed (conclusion: $CONCLUSION), skipping Docker build"
should_build=false
fi
# Extract version info from commit message or use commit SHA
# Use Git to generate consistent short SHA (ensures uniqueness like build.yml)
short_sha=$(git rev-parse --short "${{ github.event.workflow_run.head_sha }}")
short_sha=$(git rev-parse --short "$HEAD_SHA")
# Determine build type based on triggering workflow event and ref
triggering_event="${{ github.event.workflow_run.event }}"
head_branch="${{ github.event.workflow_run.head_branch }}"
triggering_event="$TRIGGERING_EVENT"
head_branch="$HEAD_BRANCH"
echo "🔍 Analyzing triggering workflow:"
echo " 📋 Event: $triggering_event"
echo " 🌿 Head branch: $head_branch"
echo " 📎 Head SHA: ${{ github.event.workflow_run.head_sha }}"
echo " 📎 Head SHA: $HEAD_SHA"
# Check if this was triggered by a tag push
if [[ "$triggering_event" == "push" ]]; then
@@ -174,10 +178,10 @@ jobs:
fi
echo "🔄 Build triggered by workflow_run:"
echo " 📋 Conclusion: ${{ github.event.workflow_run.conclusion }}"
echo " 🌿 Branch: ${{ github.event.workflow_run.head_branch }}"
echo " 📎 SHA: ${{ github.event.workflow_run.head_sha }}"
echo " 🎯 Event: ${{ github.event.workflow_run.event }}"
echo " 📋 Conclusion: $CONCLUSION"
echo " 🌿 Branch: $HEAD_BRANCH"
echo " 📎 SHA: $HEAD_SHA"
echo " 🎯 Event: $TRIGGERING_EVENT"
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
# Manual trigger

View File

@@ -15,9 +15,13 @@
name: "issue-translator"
on:
issue_comment:
types: [created]
types: [ created ]
issues:
types: [opened]
types: [ opened ]
permissions:
contents: read
issues: write
jobs:
build:

View File

@@ -30,6 +30,9 @@ on:
default: "120"
type: string
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1

Cargo.lock (generated, 4220 lines changed)

File diff suppressed because it is too large.

View File

@@ -15,8 +15,8 @@
[workspace]
members = [
"rustfs", # Core file system implementation
"cli/rustfs-gui", # Graphical user interface client
"crates/appauth", # Application authentication and authorization
"crates/audit-logger", # Audit logging system for file operations
"crates/common", # Shared utilities and data structures
"crates/config", # Configuration management
"crates/crypto", # Cryptography and security features
@@ -30,6 +30,7 @@ members = [
"crates/obs", # Observability utilities
"crates/protos", # Protocol buffer definitions
"crates/rio", # Rust I/O utilities and abstractions
"crates/targets", # Target-specific configurations and utilities
"crates/s3select-api", # S3 Select API interface
"crates/s3select-query", # S3 Select query engine
"crates/signer", # client signer
@@ -37,7 +38,7 @@ members = [
"crates/utils", # Utility functions and helpers
"crates/workers", # Worker thread pools and task scheduling
"crates/zip", # ZIP file handling and compression
"crates/ahm",
"crates/ahm", # Asynchronous Hash Map for concurrent data structures
"crates/mcp", # MCP server for S3 operations
]
resolver = "2"
@@ -59,15 +60,11 @@ unsafe_code = "deny"
[workspace.lints.clippy]
all = "warn"
[patch.crates-io]
rustfs-utils = { path = "crates/utils" }
rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }
[workspace.dependencies]
rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-audit-logger = { path = "crates/audit-logger", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
@@ -89,6 +86,7 @@ rustfs-signer = { path = "crates/signer", version = "0.0.5" }
rustfs-checksums = { path = "crates/checksums", version = "0.0.5" }
rustfs-workers = { path = "crates/workers", version = "0.0.5" }
rustfs-mcp = { path = "crates/mcp", version = "0.0.5" }
rustfs-targets = { path = "crates/targets", version = "0.0.5" }
aes-gcm = { version = "0.10.3", features = ["std"] }
anyhow = "1.0.99"
arc-swap = "1.7.1"
@@ -96,15 +94,15 @@ argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.5.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-trait = "0.1.89"
async-compression = { version = "0.4.19" }
atomic_enum = "0.3.0"
aws-config = { version = "1.8.4" }
aws-config = { version = "1.8.5" }
aws-sdk-s3 = "1.101.0"
axum = "0.8.4"
base64-simd = "0.8.0"
base64 = "0.22.1"
brotli = "8.0.1"
brotli = "8.0.2"
bytes = { version = "1.10.1", features = ["serde"] }
bytesize = "2.0.1"
byteorder = "1.5.0"
@@ -112,16 +110,14 @@ cfg-if = "1.0.1"
crc-fast = "1.4.0"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.44", features = ["derive", "env"] }
clap = { version = "4.5.45", features = ["derive", "env"] }
const-str = { version = "0.6.4", features = ["std", "proc"] }
crc32fast = "1.5.0"
criterion = { version = "0.7", features = ["html_reports"] }
dashmap = "6.1.0"
datafusion = "46.0.1"
derive_builder = "0.20.2"
dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.7"
enumset = "1.1.9"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
@@ -134,7 +130,7 @@ hex = "0.4.3"
hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hmac = "0.12.1"
hyper = "1.6.0"
hyper = "1.7.0"
hyper-util = { version = "0.1.16", features = [
"tokio",
"server-auto",
@@ -146,11 +142,6 @@ http-body = "1.0.1"
humantime = "2.2.0"
ipnetwork = { version = "0.21.1", features = ["serde"] }
jsonwebtoken = "9.3.1"
keyring = { version = "3.6.3", features = [
"apple-native",
"windows-native",
"sync-secret-service",
] }
lazy_static = "1.5.0"
libsystemd = { version = "0.7.2" }
local-ip-address = "0.6.5"
@@ -193,7 +184,7 @@ rand = "0.9.2"
rdkafka = { version = "0.38.0", features = ["tokio"] }
reed-solomon-simd = { version = "3.0.1" }
regex = { version = "1.11.1" }
reqwest = { version = "0.12.22", default-features = false, features = [
reqwest = { version = "0.12.23", default-features = false, features = [
"rustls-tls",
"charset",
"http2",
@@ -202,17 +193,12 @@ reqwest = { version = "0.12.22", default-features = false, features = [
"json",
"blocking",
] }
rfd = { version = "0.15.4", default-features = false, features = [
"xdg-portal",
"tokio",
] }
rmcp = { version = "0.5.0" }
rmp = "0.8.14"
rmp-serde = "1.3.0"
rsa = "0.9.8"
rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.31" }
rustls-pki-types = "1.12.0"
@@ -220,7 +206,7 @@ rustls-pemfile = "2.2.0"
s3s = { version = "0.12.0-minio-preview.3" }
schemars = "1.0.4"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.142", features = ["raw_value"] }
serde_json = { version = "1.0.143", features = ["raw_value"] }
serde_urlencoded = "0.7.1"
serial_test = "3.2.0"
sha1 = "0.10.6"
@@ -237,7 +223,7 @@ sysctl = "0.6.0"
tempfile = "3.20.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.14"
thiserror = "2.0.15"
time = { version = "0.3.41", features = [
"std",
"parsing",
@@ -257,7 +243,6 @@ tonic-prost-build = { version = "0.14.1" }
tower = { version = "0.5.2", features = ["timeout"] }
tower-http = { version = "0.6.6", features = ["cors"] }
tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-core = "0.1.34"
tracing-error = "0.2.1"
tracing-opentelemetry = "0.31.0"
@@ -278,7 +263,7 @@ zstd = "0.13.3"
[workspace.metadata.cargo-shear]
ignored = ["rustfs", "rust-i18n", "rustfs-mcp"]
ignored = ["rustfs", "rust-i18n", "rustfs-mcp", "rustfs-audit-logger", "tokio-test"]
[profile.wasm-dev]
inherits = "dev"

View File

@@ -1,6 +1,3 @@
# -------------------
# Build stage
# -------------------
FROM alpine:3.22 AS build
ARG TARGETARCH
@@ -9,9 +6,6 @@ ARG RELEASE=latest
RUN apk add --no-cache ca-certificates curl unzip
WORKDIR /build
# Download and extract release package matching current TARGETARCH
# - If RELEASE=latest: take first tag_name from /releases (may include pre-releases)
# - Otherwise use specified tag (e.g. v0.1.2)
RUN set -eux; \
case "$TARGETARCH" in \
amd64) ARCH_SUBSTR="x86_64-musl" ;; \
@@ -46,9 +40,6 @@ RUN set -eux; \
rm -rf rustfs.zip /build/.tmp || true
# -------------------
# Runtime stage
# -------------------
FROM alpine:3.22
ARG RELEASE=latest
@@ -67,22 +58,16 @@ LABEL name="RustFS" \
url="https://rustfs.com" \
license="Apache-2.0"
# Install only runtime requirements: certificates and coreutils (provides chroot --userspec)
RUN apk add --no-cache ca-certificates coreutils && \
addgroup -g 1000 rustfs && \
adduser -u 1000 -G rustfs -s /sbin/nologin -D rustfs
RUN apk add --no-cache ca-certificates coreutils
# Copy binary and entry script (ensure fixed entrypoint.sh exists in repository)
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /build/rustfs /usr/bin/rustfs
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /usr/bin/rustfs /entrypoint.sh && \
mkdir -p /data /logs && \
chown rustfs:rustfs /data /logs && \
chmod 0750 /data /logs
# Default environment (can be overridden in docker run/compose)
ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_ACCESS_KEY="rustfsadmin" \
RUSTFS_SECRET_KEY="rustfsadmin" \
@@ -90,14 +75,11 @@ ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs" \
RUSTFS_SINKS_FILE_PATH="/logs" \
RUSTFS_USERNAME="rustfs" \
RUSTFS_GROUPNAME="rustfs" \
RUSTFS_UID="1000" \
RUSTFS_GID="1000"
RUSTFS_SINKS_FILE_PATH="/logs"
EXPOSE 9000
VOLUME ["/data", "/logs"]
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/rustfs"]
CMD ["rustfs"]

View File

@@ -34,7 +34,12 @@ check:
.PHONY: test
test:
@echo "🧪 Running tests..."
cargo nextest run --all --exclude e2e_test
@if command -v cargo-nextest >/dev/null 2>&1; then \
cargo nextest run --all --exclude e2e_test; \
else \
echo " cargo-nextest not found; falling back to 'cargo test'"; \
cargo test --workspace --exclude e2e_test -- --nocapture; \
fi
cargo test --all --doc
.PHONY: pre-commit

View File

@@ -81,14 +81,14 @@ To get started with RustFS, follow these steps:
2. **Docker Quick Start (Option 2)**
```bash
# Latest stable release
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest
# create data and logs directories
mkdir -p data logs
# Development version (main branch)
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:main-latest
# using latest alpha version
docker run -d -p 9000:9000 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:alpha
# Specific version
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:v1.0.0
docker run -d -p 9000:9000 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0.alpha.45
```
3. **Build from Source (Option 3) - Advanced Users**

View File

@@ -1,46 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[package]
name = "rustfs-gui"
edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
[dependencies]
chrono = { workspace = true }
dioxus = { workspace = true, features = ["router"] }
dirs = { workspace = true }
hex = { workspace = true }
keyring = { workspace = true }
rfd = { workspace = true }
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
rust-i18n = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sha2 = { workspace = true }
tokio = { workspace = true, features = ["io-util", "net", "process", "sync"] }
tracing-subscriber = { workspace = true, features = ["fmt", "env-filter", "tracing-log", "time", "local-time", "json"] }
tracing-appender = { workspace = true }
[features]
default = ["desktop"]
web = ["dioxus/web"]
desktop = ["dioxus/desktop"]
mobile = ["dioxus/mobile"]
[lints]
workspace = true

View File

@@ -1,52 +0,0 @@
[application]
# App (Project) Name
name = "rustfs-gui"
# The static resource path
asset_dir = "public"
[web.app]
# HTML title tag content
title = "rustfs-gui"
# include `assets` in web platform
[web.resource]
# Additional CSS style files
style = []
# Additional JavaScript files
script = []
[web.resource.dev]
# Javascript code file
# serve: [dev-server] only
script = []
[bundle]
identifier = "com.rustfs.cli.gui"
publisher = "RustFsGUI"
category = "Utility"
copyright = "Copyright 2025 rustfs.com"
icon = [
"assets/icons/icon.icns",
"assets/icons/icon.ico",
"assets/icons/icon.png",
"assets/icons/rustfs-icon.png",
]
#[bundle.macos]
#provider_short_name = "RustFs"
[bundle.windows]
tsp = true
icon_path = "assets/icons/icon.ico"
allow_downgrades = true
[bundle.windows.webview_install_mode]
[bundle.windows.webview_install_mode.EmbedBootstrapper]
silent = true

View File

@@ -1,34 +0,0 @@
## Rustfs GUI
### Tailwind
1. Install npm: https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
2. Install the Tailwind CSS CLI: https://tailwindcss.com/docs/installation
3. Run the following command in the root of the project to start the Tailwind CSS compiler:
```bash
npx tailwindcss -i ./input.css -o ./assets/tailwind.css --watch
```
### Dioxus CLI
#### Install the stable version (recommended)
```shell
cargo install dioxus-cli
```
### Serving Your App
Run the following command in the root of your project to start developing with the default platform:
```bash
dx serve
```
To run for a different platform, use the `--platform platform` flag. E.g.
```bash
dx serve --platform desktop
```

Binary files not shown (16 deleted rustfs-gui image assets, ranging from 498 B to 80 KiB).

View File

@@ -1,48 +0,0 @@
/**
* Copyright 2024 RustFS Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
window.switchTab = function (tabId) {
// Hide everything
document.querySelectorAll('.tab-content').forEach(content => {
content.classList.add('hidden');
});
// Reset all label styles
document.querySelectorAll('.tab-btn').forEach(btn => {
btn.classList.remove('border-b-2', 'border-black');
btn.classList.add('text-gray-500');
});
// Displays the selected content
const activeContent = document.getElementById(tabId);
if (activeContent) {
activeContent.classList.remove('hidden');
}
// Updates the selected label style
const activeBtn = document.querySelector(`[data-tab="${tabId}"]`);
if (activeBtn) {
activeBtn.classList.add('border-b-2', 'border-black');
activeBtn.classList.remove('text-gray-500');
}
};
window.togglePassword = function (button) {
const input = button.parentElement.querySelector('input[type="password"], input[type="text"]');
if (input) {
input.type = input.type === 'password' ? 'text' : 'password';
}
};

Binary files not shown (two more deleted rustfs-gui image assets, 23 KiB and 34 KiB).

View File

@@ -1,15 +0,0 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>


View File

@@ -1,33 +0,0 @@
/**
* Copyright 2024 RustFS Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#navbar {
display: flex;
flex-direction: row;
}
#navbar a {
color: #ffffff;
margin-right: 20px;
text-decoration: none;
transition: color 0.2s ease;
}
#navbar a:hover {
cursor: pointer;
color: #ffffff;
/ / #91a4d2;
}

View File

@@ -1,972 +0,0 @@
/**
* Copyright 2024 RustFS Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
*, ::before, ::after {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
--tw-contain-size: ;
--tw-contain-layout: ;
--tw-contain-paint: ;
--tw-contain-style: ;
}
::backdrop {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
--tw-contain-size: ;
--tw-contain-layout: ;
--tw-contain-paint: ;
--tw-contain-style: ;
}
/*
! tailwindcss v3.4.17 | MIT License | https://tailwindcss.com
*/
/*
1. Prevent padding and border from affecting element width. (https://github.com/mozdevs/cssremedy/issues/4)
2. Allow adding a border to an element by just adding a border-width. (https://github.com/tailwindcss/tailwindcss/pull/116)
*/
*,
::before,
::after {
box-sizing: border-box;
/* 1 */
border-width: 0;
/* 2 */
border-style: solid;
/* 2 */
border-color: #e5e7eb;
/* 2 */
}
::before,
::after {
--tw-content: '';
}
/*
1. Use a consistent sensible line-height in all browsers.
2. Prevent adjustments of font size after orientation changes in iOS.
3. Use a more readable tab size.
4. Use the user's configured `sans` font-family by default.
5. Use the user's configured `sans` font-feature-settings by default.
6. Use the user's configured `sans` font-variation-settings by default.
7. Disable tap highlights on iOS
*/
html,
:host {
line-height: 1.5;
/* 1 */
-webkit-text-size-adjust: 100%;
/* 2 */
-moz-tab-size: 4;
/* 3 */
-o-tab-size: 4;
tab-size: 4;
/* 3 */
font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
/* 4 */
font-feature-settings: normal;
/* 5 */
font-variation-settings: normal;
/* 6 */
-webkit-tap-highlight-color: transparent;
/* 7 */
}
/*
1. Remove the margin in all browsers.
2. Inherit line-height from `html` so users can set them as a class directly on the `html` element.
*/
body {
margin: 0;
/* 1 */
line-height: inherit;
/* 2 */
}
/*
1. Add the correct height in Firefox.
2. Correct the inheritance of border color in Firefox. (https://bugzilla.mozilla.org/show_bug.cgi?id=190655)
3. Ensure horizontal rules are visible by default.
*/
hr {
height: 0;
/* 1 */
color: inherit;
/* 2 */
border-top-width: 1px;
/* 3 */
}
/*
Add the correct text decoration in Chrome, Edge, and Safari.
*/
abbr:where([title]) {
-webkit-text-decoration: underline dotted;
text-decoration: underline dotted;
}
/*
Remove the default font size and weight for headings.
*/
h1,
h2,
h3,
h4,
h5,
h6 {
font-size: inherit;
font-weight: inherit;
}
/*
Reset links to optimize for opt-in styling instead of opt-out.
*/
a {
color: inherit;
text-decoration: inherit;
}
/*
Add the correct font weight in Edge and Safari.
*/
b,
strong {
font-weight: bolder;
}
/*
1. Use the user's configured `mono` font-family by default.
2. Use the user's configured `mono` font-feature-settings by default.
3. Use the user's configured `mono` font-variation-settings by default.
4. Correct the odd `em` font sizing in all browsers.
*/
code,
kbd,
samp,
pre {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
/* 1 */
font-feature-settings: normal;
/* 2 */
font-variation-settings: normal;
/* 3 */
font-size: 1em;
/* 4 */
}
/*
Add the correct font size in all browsers.
*/
small {
font-size: 80%;
}
/*
Prevent `sub` and `sup` elements from affecting the line height in all browsers.
*/
sub,
sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sub {
bottom: -0.25em;
}
sup {
top: -0.5em;
}
/*
1. Remove text indentation from table contents in Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=999088, https://bugs.webkit.org/show_bug.cgi?id=201297)
2. Correct table border color inheritance in all Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=935729, https://bugs.webkit.org/show_bug.cgi?id=195016)
3. Remove gaps between table borders by default.
*/
table {
text-indent: 0;
/* 1 */
border-color: inherit;
/* 2 */
border-collapse: collapse;
/* 3 */
}
/*
1. Change the font styles in all browsers.
2. Remove the margin in Firefox and Safari.
3. Remove default padding in all browsers.
*/
button,
input,
optgroup,
select,
textarea {
font-family: inherit;
/* 1 */
font-feature-settings: inherit;
/* 1 */
font-variation-settings: inherit;
/* 1 */
font-size: 100%;
/* 1 */
font-weight: inherit;
/* 1 */
line-height: inherit;
/* 1 */
letter-spacing: inherit;
/* 1 */
color: inherit;
/* 1 */
margin: 0;
/* 2 */
padding: 0;
/* 3 */
}
/*
Remove the inheritance of text transform in Edge and Firefox.
*/
button,
select {
text-transform: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Remove default button styles.
*/
button,
input:where([type='button']),
input:where([type='reset']),
input:where([type='submit']) {
-webkit-appearance: button;
/* 1 */
background-color: transparent;
/* 2 */
background-image: none;
/* 2 */
}
/*
Use the modern Firefox focus style for all focusable elements.
*/
:-moz-focusring {
outline: auto;
}
/*
Remove the additional `:invalid` styles in Firefox. (https://github.com/mozilla/gecko-dev/blob/2f9eacd9d3d995c937b4251a5557d95d494c9be1/layout/style/res/forms.css#L728-L737)
*/
:-moz-ui-invalid {
box-shadow: none;
}
/*
Add the correct vertical alignment in Chrome and Firefox.
*/
progress {
vertical-align: baseline;
}
/*
Correct the cursor style of increment and decrement buttons in Safari.
*/
::-webkit-inner-spin-button,
::-webkit-outer-spin-button {
height: auto;
}
/*
1. Correct the odd appearance in Chrome and Safari.
2. Correct the outline style in Safari.
*/
[type='search'] {
-webkit-appearance: textfield;
/* 1 */
outline-offset: -2px;
/* 2 */
}
/*
Remove the inner padding in Chrome and Safari on macOS.
*/
::-webkit-search-decoration {
-webkit-appearance: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Change font properties to `inherit` in Safari.
*/
::-webkit-file-upload-button {
-webkit-appearance: button;
/* 1 */
font: inherit;
/* 2 */
}
/*
Add the correct display in Chrome and Safari.
*/
summary {
display: list-item;
}
/*
Removes the default spacing and border for appropriate elements.
*/
blockquote,
dl,
dd,
h1,
h2,
h3,
h4,
h5,
h6,
hr,
figure,
p,
pre {
margin: 0;
}
fieldset {
margin: 0;
padding: 0;
}
legend {
padding: 0;
}
ol,
ul,
menu {
list-style: none;
margin: 0;
padding: 0;
}
/*
Reset default styling for dialogs.
*/
dialog {
padding: 0;
}
/*
Prevent resizing textareas horizontally by default.
*/
textarea {
resize: vertical;
}
/*
1. Reset the default placeholder opacity in Firefox. (https://github.com/tailwindlabs/tailwindcss/issues/3300)
2. Set the default placeholder color to the user's configured gray 400 color.
*/
input::-moz-placeholder, textarea::-moz-placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
input::placeholder,
textarea::placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
/*
Set the default cursor for buttons.
*/
button,
[role="button"] {
cursor: pointer;
}
/*
Make sure disabled buttons don't get the pointer cursor.
*/
:disabled {
cursor: default;
}
/*
1. Make replaced elements `display: block` by default. (https://github.com/mozdevs/cssremedy/issues/14)
2. Add `vertical-align: middle` to align replaced elements more sensibly by default. (https://github.com/jensimmons/cssremedy/issues/14#issuecomment-634934210)
This can trigger a poorly considered lint error in some tools but is included by design.
*/
img,
svg,
video,
canvas,
audio,
iframe,
embed,
object {
display: block;
/* 1 */
vertical-align: middle;
/* 2 */
}
/*
Constrain images and videos to the parent width and preserve their intrinsic aspect ratio. (https://github.com/mozdevs/cssremedy/issues/14)
*/
img,
video {
max-width: 100%;
height: auto;
}
/* Make elements with the HTML hidden attribute stay hidden by default */
[hidden]:where(:not([hidden="until-found"])) {
display: none;
}
.static {
position: static;
}
.absolute {
position: absolute;
}
.relative {
position: relative;
}
.right-2 {
right: 0.5rem;
}
.right-6 {
right: 1.5rem;
}
.top-1\/2 {
top: 50%;
}
.top-4 {
top: 1rem;
}
.z-10 {
z-index: 10;
}
.mb-2 {
margin-bottom: 0.5rem;
}
.mb-4 {
margin-bottom: 1rem;
}
.mb-6 {
margin-bottom: 1.5rem;
}
.mb-8 {
margin-bottom: 2rem;
}
.ml-2 {
margin-left: 0.5rem;
}
.flex {
display: flex;
}
.hidden {
display: none;
}
.h-16 {
height: 4rem;
}
.h-24 {
height: 6rem;
}
.h-4 {
height: 1rem;
}
.h-5 {
height: 1.25rem;
}
.h-6 {
height: 1.5rem;
}
.min-h-screen {
min-height: 100vh;
}
.w-16 {
width: 4rem;
}
.w-20 {
width: 5rem;
}
.w-24 {
width: 6rem;
}
.w-4 {
width: 1rem;
}
.w-48 {
width: 12rem;
}
.w-5 {
width: 1.25rem;
}
.w-6 {
width: 1.5rem;
}
.w-full {
width: 100%;
}
.flex-1 {
flex: 1 1 0%;
}
.-translate-y-1\/2 {
--tw-translate-y: -50%;
transform: translate(var(--tw-translate-x), var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));
}
.transform {
transform: translate(var(--tw-translate-x), var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));
}
@keyframes spin {
to {
transform: rotate(360deg);
}
}
.animate-spin {
animation: spin 1s linear infinite;
}
.flex-col {
flex-direction: column;
}
.items-center {
align-items: center;
}
.justify-center {
justify-content: center;
}
.space-x-2 > :not([hidden]) ~ :not([hidden]) {
--tw-space-x-reverse: 0;
margin-right: calc(0.5rem * var(--tw-space-x-reverse));
margin-left: calc(0.5rem * calc(1 - var(--tw-space-x-reverse)));
}
.space-x-4 > :not([hidden]) ~ :not([hidden]) {
--tw-space-x-reverse: 0;
margin-right: calc(1rem * var(--tw-space-x-reverse));
margin-left: calc(1rem * calc(1 - var(--tw-space-x-reverse)));
}
.space-x-8 > :not([hidden]) ~ :not([hidden]) {
--tw-space-x-reverse: 0;
margin-right: calc(2rem * var(--tw-space-x-reverse));
margin-left: calc(2rem * calc(1 - var(--tw-space-x-reverse)));
}
.space-y-4 > :not([hidden]) ~ :not([hidden]) {
--tw-space-y-reverse: 0;
margin-top: calc(1rem * calc(1 - var(--tw-space-y-reverse)));
margin-bottom: calc(1rem * var(--tw-space-y-reverse));
}
.space-y-6 > :not([hidden]) ~ :not([hidden]) {
--tw-space-y-reverse: 0;
margin-top: calc(1.5rem * calc(1 - var(--tw-space-y-reverse)));
margin-bottom: calc(1.5rem * var(--tw-space-y-reverse));
}
.rounded {
border-radius: 0.25rem;
}
.rounded-full {
border-radius: 9999px;
}
.rounded-lg {
border-radius: 0.5rem;
}
.rounded-md {
border-radius: 0.375rem;
}
.border {
border-width: 1px;
}
.border-b {
border-bottom-width: 1px;
}
.border-b-2 {
border-bottom-width: 2px;
}
.border-black {
--tw-border-opacity: 1;
border-color: rgb(0 0 0 / var(--tw-border-opacity, 1));
}
.border-gray-200 {
--tw-border-opacity: 1;
border-color: rgb(229 231 235 / var(--tw-border-opacity, 1));
}
.bg-\[\#111827\] {
--tw-bg-opacity: 1;
background-color: rgb(17 24 39 / var(--tw-bg-opacity, 1));
}
.bg-gray-100 {
--tw-bg-opacity: 1;
background-color: rgb(243 244 246 / var(--tw-bg-opacity, 1));
}
.bg-gray-900 {
--tw-bg-opacity: 1;
background-color: rgb(17 24 39 / var(--tw-bg-opacity, 1));
}
.bg-red-500 {
--tw-bg-opacity: 1;
background-color: rgb(239 68 68 / var(--tw-bg-opacity, 1));
}
.bg-white {
--tw-bg-opacity: 1;
background-color: rgb(255 255 255 / var(--tw-bg-opacity, 1));
}
.p-2 {
padding: 0.5rem;
}
.p-4 {
padding: 1rem;
}
.p-8 {
padding: 2rem;
}
.px-1 {
padding-left: 0.25rem;
padding-right: 0.25rem;
}
.px-3 {
padding-left: 0.75rem;
padding-right: 0.75rem;
}
.px-4 {
padding-left: 1rem;
padding-right: 1rem;
}
.py-0\.5 {
padding-top: 0.125rem;
padding-bottom: 0.125rem;
}
.py-2 {
padding-top: 0.5rem;
padding-bottom: 0.5rem;
}
.py-4 {
padding-top: 1rem;
padding-bottom: 1rem;
}
.py-6 {
padding-top: 1.5rem;
padding-bottom: 1.5rem;
}
.pr-10 {
padding-right: 2.5rem;
}
.text-2xl {
font-size: 1.5rem;
line-height: 2rem;
}
.text-base {
font-size: 1rem;
line-height: 1.5rem;
}
.text-sm {
font-size: 0.875rem;
line-height: 1.25rem;
}
.font-medium {
font-weight: 500;
}
.font-semibold {
font-weight: 600;
}
.text-blue-500 {
--tw-text-opacity: 1;
color: rgb(59 130 246 / var(--tw-text-opacity, 1));
}
.text-blue-600 {
--tw-text-opacity: 1;
color: rgb(37 99 235 / var(--tw-text-opacity, 1));
}
.text-gray-400 {
--tw-text-opacity: 1;
color: rgb(156 163 175 / var(--tw-text-opacity, 1));
}
.text-gray-500 {
--tw-text-opacity: 1;
color: rgb(107 114 128 / var(--tw-text-opacity, 1));
}
.text-gray-600 {
--tw-text-opacity: 1;
color: rgb(75 85 99 / var(--tw-text-opacity, 1));
}
.text-white {
--tw-text-opacity: 1;
color: rgb(255 255 255 / var(--tw-text-opacity, 1));
}
.opacity-25 {
opacity: 0.25;
}
.opacity-75 {
opacity: 0.75;
}
.filter {
filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow);
}
.hover\:bg-\[\#1f2937\]:hover {
--tw-bg-opacity: 1;
background-color: rgb(31 41 55 / var(--tw-bg-opacity, 1));
}
.hover\:bg-gray-100:hover {
--tw-bg-opacity: 1;
background-color: rgb(243 244 246 / var(--tw-bg-opacity, 1));
}
.hover\:bg-red-600:hover {
--tw-bg-opacity: 1;
background-color: rgb(220 38 38 / var(--tw-bg-opacity, 1));
}
.hover\:text-gray-700:hover {
--tw-text-opacity: 1;
color: rgb(55 65 81 / var(--tw-text-opacity, 1));
}
.hover\:text-gray-900:hover {
--tw-text-opacity: 1;
color: rgb(17 24 39 / var(--tw-text-opacity, 1));
}
.focus\:outline-none:focus {
outline: 2px solid transparent;
outline-offset: 2px;
}
.focus\:ring-2:focus {
--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);
box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
}
.focus\:ring-blue-500:focus {
--tw-ring-opacity: 1;
--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity, 1));
}

View File

@@ -1 +0,0 @@
rustfs bin path, do not delete

View File

@@ -1,19 +0,0 @@
/**
* Copyright 2024 RustFS Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
@tailwind base;
@tailwind components;
@tailwind utilities;

View File

@@ -1,330 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::components::navbar::LoadingSpinner;
use crate::route::Route;
use crate::utils::{RustFSConfig, ServiceManager};
use chrono::Datelike;
use dioxus::logger::tracing::debug;
use dioxus::prelude::*;
use std::time::Duration;
const HEADER_LOGO: Asset = asset!("/assets/rustfs-logo.svg");
const TAILWIND_CSS: Asset = asset!("/assets/tailwind.css");
/// Define the state of the service
#[derive(PartialEq, Debug, Clone)]
enum ServiceState {
Start,
Stop,
}
/// Define the Home component
/// The Home component is the main component of the application
/// It is responsible for starting and stopping the service
/// It also displays the service status and provides a button to toggle the service
/// The Home component also displays the footer of the application
/// The footer contains links to the official site, documentation, GitHub, and license
/// The footer also displays the version of the application
/// The Home component also contains a button to change the theme of the application
/// The Home component also contains a button to go to the settings page
#[component]
pub fn Home() -> Element {
#[allow(clippy::redundant_closure)]
let service = use_signal(|| ServiceManager::new());
let conf = RustFSConfig::load().unwrap_or_else(|e| {
ServiceManager::show_error(&format!("load config failed: {e}"));
RustFSConfig::default()
});
debug!("loaded configurations: {:?}", conf);
let config = use_signal(|| conf.clone());
use dioxus_router::prelude::Link;
use document::{Meta, Stylesheet, Title};
let mut service_state = use_signal(|| ServiceState::Start);
// Create a periodic check on the effect of the service status
use_effect(move || {
spawn(async move {
loop {
if let Some(pid) = ServiceManager::check_service_status().await {
debug!("service_running true pid: {:?}", pid);
service_state.set(ServiceState::Stop);
} else {
debug!("service_running true pid: 0");
service_state.set(ServiceState::Start);
}
tokio::time::sleep(Duration::from_secs(2)).await;
}
});
});
debug!("project start service_state: {:?}", service_state.read());
// Use 'use_signal' to manage service status
let mut loading = use_signal(|| false);
let mut start_service = move |_| {
let service = service;
let config = config.read().clone();
let mut service_state = service_state;
// set the loading status
loading.set(true);
debug!("stop loading_state: {:?}", loading.read());
spawn(async move {
match service.read().start(config).await {
Ok(result) => {
if result.success {
let duration = result.end_time - result.start_time;
debug!("The service starts successfully and takes a long time:{}ms", duration.num_milliseconds());
service_state.set(ServiceState::Stop);
} else {
ServiceManager::show_error(&result.message);
service_state.set(ServiceState::Start);
}
}
Err(e) => {
ServiceManager::show_error(&format!("start service failed: {e}"));
}
}
// Only set loading to false when it's actually done
loading.set(false);
debug!("start loading_state: {:?}", loading.read());
});
};
let mut stop_service = move |_| {
let service = service;
let mut service_state = service_state;
// set the loading status
loading.set(true);
spawn(async move {
match service.read().stop().await {
Ok(result) => {
if result.success {
let duration = result.end_time - result.start_time;
debug!("The service stops successfully and takes a long time:{}ms", duration.num_milliseconds());
service_state.set(ServiceState::Start);
} else {
ServiceManager::show_error(&result.message);
}
}
Err(e) => {
ServiceManager::show_error(&format!("stop service failed: {e}"));
}
}
debug!("service_state: {:?}", service_state.read());
// Only set loading to false when it's actually done
loading.set(false);
debug!("stop loading_state: {:?}", loading.read());
});
};
// Toggle the state when the button is clicked
let toggle_service = {
let mut service_state = service_state;
debug!("toggle_service service_state: {:?}", service_state.read());
move |_| {
if service_state.read().eq(&ServiceState::Stop) {
// ServiceState::Stop means the service is running, so stop it
stop_service(());
service_state.set(ServiceState::Start);
} else {
start_service(());
service_state.set(ServiceState::Stop);
}
}
};
// Define dynamic styles based on state
let button_class = if service_state.read().eq(&ServiceState::Start) {
"bg-[#111827] hover:bg-[#1f2937] text-white px-4 py-2 rounded-md flex items-center space-x-2"
} else {
"bg-red-500 hover:bg-red-600 text-white px-4 py-2 rounded-md flex items-center space-x-2"
};
rsx! {
// The Stylesheet component inserts a style link into the head of the document
Stylesheet {href: TAILWIND_CSS,}
Title { "RustFS APP" }
Meta {
name: "description",
content: "RustFS is developed in the popular and secure Rust language and is compatible with the S3 protocol. It is suitable for all scenarios, including AI/ML, massive data storage, big data, internet, industrial, and secure storage. Nearly free to use, it follows the Apache 2.0 license and supports domestic security devices and systems.",
}
div { class: "min-h-screen flex flex-col items-center bg-white",
div { class: "absolute top-4 right-6 flex space-x-2",
// change theme
button { class: "p-2 hover:bg-gray-100 rounded-lg", ChangeThemeButton {} }
// setting button
Link {
class: "p-2 hover:bg-gray-100 rounded-lg",
to: Route::SettingViews {},
SettingButton {}
}
}
main { class: "flex-1 flex flex-col items-center justify-center space-y-6 p-4",
div { class: "w-24 h-24 bg-gray-900 rounded-full flex items-center justify-center",
img { alt: "Logo", class: "w-16 h-16", src: HEADER_LOGO }
}
div { class: "text-gray-600",
"Service is running on "
span { class: "text-blue-600", " 127.0.0.1:9000 " }
}
LoadingSpinner {
loading: loading.read().to_owned(),
text: "processing...",
}
button { class: button_class, onclick: toggle_service,
svg {
class: "h-4 w-4",
fill: "none",
stroke: "currentColor",
view_box: "0 0 24 24",
xmlns: "http://www.w3.org/2000/svg",
if service_state.read().eq(&ServiceState::Start) {
path {
d: "M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z",
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
}
path {
d: "M21 12a9 9 0 11-18 0 9 9 0 0118 0z",
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
}
} else {
path {
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
d: "M21 12a9 9 0 11-18 0 9 9 0 0118 0z",
}
path {
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
d: "M9 10h6v4H9z",
}
}
}
span { id: "serviceStatus",
if service_state.read().eq(&ServiceState::Start) {
"Start service"
} else {
"Stop service"
}
}
}
}
Footer { version: "v1.0.0".to_string() }
}
}
}
#[component]
pub fn Footer(version: String) -> Element {
let now = chrono::Local::now();
let year = now.naive_local().year();
rsx! {
footer { class: "w-full py-6 flex flex-col items-center space-y-4 mb-6",
nav { class: "flex space-x-4 text-gray-600",
a { class: "hover:text-gray-900", href: "https://rustfs.com", "Official Site" }
a {
class: "hover:text-gray-900",
href: "https://rustfs.com/docs",
"Documentation"
}
a {
class: "hover:text-gray-900",
href: "https://github.com/rustfs/rustfs",
"GitHub"
}
a {
class: "hover:text-gray-900",
href: "https://rustfs.com/docs/license/",
"License"
}
a { class: "hover:text-gray-900", href: "#", "Sponsors" }
}
div { class: "text-gray-500 text-sm", " © rustfs.com {year}, All rights reserved." }
div { class: "text-gray-400 text-sm mb-8", " version {version} " }
}
}
}
#[component]
pub fn GoBackButtons() -> Element {
rsx! {
button {
class: "p-2 hover:bg-gray-100 rounded-lg",
"onclick": "window.history.back()",
"Back to the Past"
}
}
}
#[component]
pub fn GoForwardButtons() -> Element {
rsx! {
button {
class: "p-2 hover:bg-gray-100 rounded-lg",
"onclick": "window.history.forward()",
"Back to the Future"
}
}
}
#[component]
pub fn ChangeThemeButton() -> Element {
rsx! {
svg {
class: "h-6 w-6 text-gray-600",
fill: "none",
stroke: "currentColor",
view_box: "0 0 24 24",
xmlns: "http://www.w3.org/2000/svg",
path {
d: "M9 3v2m6-2v2M9 19v2m6-2v2M5 9H3m2 6H3m18-6h-2m2 6h-2M7 19h10a2 2 0 002-2V7a2 2 0 00-2-2H7a2 2 0 00-2 2v10a2 2 0 002 2zM9 9h6v6H9V9z",
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
}
}
}
}
#[component]
pub fn SettingButton() -> Element {
rsx! {
svg {
class: "h-6 w-6 text-gray-600",
fill: "none",
stroke: "currentColor",
view_box: "0 0 24 24",
xmlns: "http://www.w3.org/2000/svg",
path {
d: "M10.325 4.317c.426-1.756 2.924-1.756 3.35 0a1.724 1.724 0 002.573 1.066c1.543-.94 3.31.826 2.37 2.37a1.724 1.724 0 001.065 2.572c1.756.426 1.756 2.924 0 3.35a1.724 1.724 0 00-1.066 2.573c.94 1.543-.826 3.31-2.37 2.37a1.724 1.724 0 00-2.572 1.065c-.426 1.756-2.924 1.756-3.35 0a1.724 1.724 0 00-2.573-1.066c-1.543.94-3.31-.826-2.37-2.37a1.724 1.724 0 00-1.065-2.572c-1.756-.426-1.756-2.924 0-3.35a1.724 1.724 0 001.066-2.573c-.94-1.543.826-3.31 2.37-2.37.996.608 2.296.07 2.572-1.065z",
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
}
path {
d: "M15 12a3 3 0 11-6 0 3 3 0 016 0z",
stroke_linecap: "round",
stroke_linejoin: "round",
stroke_width: "2",
}
}
}
}
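As a minimal sketch, the two-second status-polling pattern used in Home above can be isolated into a hypothetical component (not part of the deleted file; it assumes the same ServiceManager::check_service_status API shown later in this diff):

use crate::utils::ServiceManager;
use dioxus::prelude::*;
use std::time::Duration;

#[component]
fn StatusPoller() -> Element {
    // Poll every two seconds and mirror the result into a signal.
    let mut running = use_signal(|| false);
    use_effect(move || {
        spawn(async move {
            loop {
                // check_service_status() returns Some(pid) while the service is up.
                running.set(ServiceManager::check_service_status().await.is_some());
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        });
    });
    rsx! {
        span { if *running.read() { "running" } else { "stopped" } }
    }
}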

View File

@@ -1,20 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod home;
pub use home::Home;
mod navbar;
pub use navbar::Navbar;
mod setting;
pub use setting::Setting;

View File

@@ -1,74 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::route::Route;
use dioxus::logger::tracing::debug;
use dioxus::prelude::*;
const NAVBAR_CSS: Asset = asset!("/assets/styling/navbar.css");
#[component]
pub fn Navbar() -> Element {
rsx! {
document::Link { rel: "stylesheet", href: NAVBAR_CSS }
div { id: "navbar", class: "hidden", style: "display: none;",
Link { to: Route::HomeViews {}, "Home" }
Link { to: Route::SettingViews {}, "Setting" }
}
Outlet::<Route> {}
}
}
#[derive(Props, PartialEq, Debug, Clone)]
pub struct LoadingSpinnerProps {
#[props(default = true)]
loading: bool,
#[props(default = "Processing...")]
text: &'static str,
}
#[component]
pub fn LoadingSpinner(props: LoadingSpinnerProps) -> Element {
debug!("loading: {}", props.loading);
if !props.loading {
debug!("LoadingSpinner false loading: {}", props.loading);
return rsx! {};
}
rsx! {
div { class: "flex items-center justify-center z-10",
svg {
class: "animate-spin h-5 w-5 text-blue-500",
xmlns: "http://www.w3.org/2000/svg",
fill: "none",
view_box: "0 0 24 24",
circle {
class: "opacity-25",
cx: "12",
cy: "12",
r: "10",
stroke: "currentColor",
stroke_width: "4",
}
path {
class: "opacity-75",
fill: "currentColor",
d: "M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z",
}
}
span { class: "ml-2 text-gray-600", "{props.text}" }
}
}
}
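A minimal usage sketch for LoadingSpinner (the SaveForm parent and its busy signal are hypothetical, not part of the deleted file):

use dioxus::prelude::*;

#[component]
fn SaveForm() -> Element {
    // Toggle the spinner around an async task.
    let mut busy = use_signal(|| false);
    rsx! {
        button {
            onclick: move |_| {
                busy.set(true);
                spawn(async move {
                    // ... perform the async work here ...
                    busy.set(false);
                });
            },
            "Save"
        }
        // `text` falls back to the "Processing..." default when omitted.
        LoadingSpinner { loading: busy.read().to_owned(), text: "Saving..." }
    }
}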

View File

@@ -1,216 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::components::navbar::LoadingSpinner;
use dioxus::logger::tracing::{debug, error};
use dioxus::prelude::*;
const SETTINGS_JS: Asset = asset!("/assets/js/sts.js");
const TAILWIND_CSS: Asset = asset!("/assets/tailwind.css");
#[component]
pub fn Setting() -> Element {
use crate::utils::{RustFSConfig, ServiceManager};
use document::{Meta, Script, Stylesheet, Title};
#[allow(clippy::redundant_closure)]
let service = use_signal(|| ServiceManager::new());
let conf = RustFSConfig::load().unwrap_or_else(|e| {
error!("load config error: {}", e);
RustFSConfig::default_config()
});
debug!("conf address: {:?}", conf.clone().address);
let config = use_signal(|| conf.clone());
let address_state = use_signal(|| conf.address.to_string());
let mut host_state = use_signal(|| conf.host.to_string());
let mut port_state = use_signal(|| conf.port.to_string());
let mut access_key_state = use_signal(|| conf.access_key.to_string());
let mut secret_key_state = use_signal(|| conf.secret_key.to_string());
let mut volume_name_state = use_signal(|| conf.volume_name.to_string());
let loading = use_signal(|| false);
let save_and_restart = {
let host_state = host_state;
let port_state = port_state;
let access_key_state = access_key_state;
let secret_key_state = secret_key_state;
let volume_name_state = volume_name_state;
let mut loading = loading;
debug!("save_and_restart access_key:{}", access_key_state.read());
move |_| {
// set the loading status
loading.set(true);
let mut config = config;
config.write().address = format!("{}:{}", host_state.read(), port_state.read());
config.write().host = host_state.read().to_string();
config.write().port = port_state.read().to_string();
config.write().access_key = access_key_state.read().to_string();
config.write().secret_key = secret_key_state.read().to_string();
config.write().volume_name = volume_name_state.read().to_string();
// restart service
let service = service;
let config = config.read().clone();
spawn(async move {
if let Err(e) = service.read().restart(config).await {
ServiceManager::show_error(&format!("Failed to send restart command: {e}"));
}
// reset the status when you're done
loading.set(false);
});
}
};
rsx! {
Title { "Settings - RustFS App" }
Meta { name: "description", content: "Settings - RustFS App." }
// The Stylesheet component inserts a style link into the head of the document
Stylesheet { href: TAILWIND_CSS }
Script { src: SETTINGS_JS }
div { class: "bg-white p-8",
h1 { class: "text-2xl font-semibold mb-6", "Settings" }
div { class: "border-b border-gray-200 mb-6",
nav { class: "flex space-x-8",
button {
class: "tab-btn px-1 py-4 text-sm font-medium border-b-2 border-black",
"data-tab": "service",
"onclick": "switchTab('service')",
"Service "
}
button {
class: "tab-btn px-1 py-4 text-sm font-medium text-gray-500 hover:text-gray-700",
"data-tab": "user",
"onclick": "switchTab('user')",
"User "
}
button {
class: "tab-btn px-1 py-4 text-sm font-medium text-gray-500 hover:text-gray-700 hidden",
"data-tab": "logs",
"onclick": "switchTab('logs')",
"Logs "
}
}
}
div { id: "tabContent",
div { class: "tab-content", id: "service",
div { class: "mb-8",
h2 { class: "text-base font-medium mb-2", "Service address" }
p { class: "text-gray-600 mb-4",
" The service address is the IP address and port number of the service. the default address is "
code { class: "bg-gray-100 px-1 py-0.5 rounded", {address_state} }
". "
}
div { class: "flex space-x-2",
input {
class: "border rounded px-3 py-2 w-48 focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "text",
value: host_state,
oninput: move |evt| host_state.set(evt.value().clone()),
}
span { class: "flex items-center", ":" }
input {
class: "border rounded px-3 py-2 w-20 focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "text",
value: port_state,
oninput: move |evt| port_state.set(evt.value().clone()),
}
}
}
div { class: "mb-8",
h2 { class: "text-base font-medium mb-2", "Storage path" }
p { class: "text-gray-600 mb-4",
"Update the storage path of the service. the default path is {volume_name_state}."
}
input {
class: "border rounded px-3 py-2 w-full focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "text",
value: volume_name_state,
oninput: move |evt| volume_name_state.set(evt.value().clone()),
}
}
}
div { class: "tab-content hidden", id: "user",
div { class: "mb-8",
h2 { class: "text-base font-medium mb-2", "User" }
p { class: "text-gray-600 mb-4",
"The user is the owner of the service. the default user is "
code { class: "bg-gray-100 px-1 py-0.5 rounded", {access_key_state} }
}
input {
class: "border rounded px-3 py-2 w-full focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "text",
value: access_key_state,
oninput: move |evt| access_key_state.set(evt.value().clone()),
}
}
div { class: "mb-8",
h2 { class: "text-base font-medium mb-2", "Password" }
p { class: "text-gray-600 mb-4",
"The password is the password of the user. the default password is "
code { class: "bg-gray-100 px-1 py-0.5 rounded", {secret_key_state} }
}
div { class: "relative",
input {
class: "border rounded px-3 py-2 w-full pr-10 focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "password",
value: secret_key_state,
oninput: move |evt| secret_key_state.set(evt.value().clone()),
}
button {
class: "absolute right-2 top-1/2 transform -translate-y-1/2 text-gray-500 hover:text-gray-700",
"onclick": "togglePassword(this)",
svg {
class: "h-5 w-5",
fill: "currentColor",
view_box: "0 0 20 20",
xmlns: "http://www.w3.org/2000/svg",
path { d: "M10 12a2 2 0 100-4 2 2 0 000 4z" }
path {
clip_rule: "evenodd",
d: "M.458 10C1.732 5.943 5.522 3 10 3s8.268 2.943 9.542 7c-1.274 4.057-5.064 7-9.542 7S1.732 14.057.458 10zM14 10a4 4 0 11-8 0 4 4 0 018 0z",
fill_rule: "evenodd",
}
}
}
}
}
}
div { class: "tab-content hidden", id: "logs",
div { class: "mb-8",
h2 { class: "text-base font-medium mb-2", "Logs storage path" }
p { class: "text-gray-600 mb-4",
"The logs storage path is the path where the logs are stored. the default path is /var/log/rustfs. "
}
input {
class: "border rounded px-3 py-2 w-full focus:outline-none focus:ring-2 focus:ring-blue-500",
r#type: "text",
value: "/var/logs/rustfs",
}
}
}
}
div { class: "flex space-x-4",
button {
class: "bg-[#111827] text-white px-4 py-2 rounded hover:bg-[#1f2937]",
onclick: save_and_restart,
" Save and restart "
}
GoBackButton { "Back" }
}
LoadingSpinner {
loading: loading.read().to_owned(),
text: "Service processing...",
}
}
}
}
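The settings form above repeats one two-way-binding pattern per field: a signal seeds the input's value and oninput writes edits back. A minimal sketch of that single pattern (the HostField component is hypothetical):

use dioxus::prelude::*;

#[component]
fn HostField() -> Element {
    // One signal per editable field; the input both echoes and updates it.
    let mut host = use_signal(|| "127.0.0.1".to_string());
    rsx! {
        input {
            r#type: "text",
            value: host,
            oninput: move |evt| host.set(evt.value().clone()),
        }
    }
}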

View File

@@ -1,23 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod components;
mod route;
mod utils;
mod views;
fn main() {
let _worker_guard = utils::init_logger();
dioxus::launch(views::App);
}

View File

@@ -1,17 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod router;
pub use router::Route;

View File

@@ -1,564 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use keyring::Entry;
use serde::{Deserialize, Serialize};
use std::error::Error;
/// Configuration for the RustFS service
///
/// # Fields
/// * `address` - The address of the RustFS service
/// * `host` - The host of the RustFS service
/// * `port` - The port of the RustFS service
/// * `access_key` - The access key of the RustFS service
/// * `secret_key` - The secret key of the RustFS service
/// * `domain_name` - The domain name of the RustFS service
/// * `volume_name` - The volume name of the RustFS service
/// * `console_address` - The console address of the RustFS service
///
/// # Example
/// ```
/// let config = RustFSConfig {
/// address: "127.0.0.1:9000".to_string(),
/// host: "127.0.0.1".to_string(),
/// port: "9000".to_string(),
/// access_key: "rustfsadmin".to_string(),
/// secret_key: "rustfsadmin".to_string(),
/// domain_name: "demo.rustfs.com".to_string(),
/// volume_name: "data".to_string(),
/// console_address: "127.0.0.1:9001".to_string(),
/// };
/// println!("{:?}", config);
/// assert_eq!(config.address, "127.0.0.1:9000");
/// ```
#[derive(Debug, Clone, Default, Deserialize, Serialize, Ord, PartialOrd, Eq, PartialEq)]
pub struct RustFSConfig {
pub address: String,
pub host: String,
pub port: String,
pub access_key: String,
pub secret_key: String,
pub domain_name: String,
pub volume_name: String,
pub console_address: String,
}
impl RustFSConfig {
/// The keyring service name
const SERVICE_NAME: &'static str = "rustfs-service";
/// The keyring entry key
const SERVICE_KEY: &'static str = "rustfs_key";
/// default domain name
const DEFAULT_DOMAIN_NAME_VALUE: &'static str = "demo.rustfs.com";
/// default address value
const DEFAULT_ADDRESS_VALUE: &'static str = "127.0.0.1:9000";
/// default port value
const DEFAULT_PORT_VALUE: &'static str = "9000";
/// default host value
const DEFAULT_HOST_VALUE: &'static str = "127.0.0.1";
/// default access key value
const DEFAULT_ACCESS_KEY_VALUE: &'static str = "rustfsadmin";
/// default secret key value
const DEFAULT_SECRET_KEY_VALUE: &'static str = "rustfsadmin";
/// default console address value
const DEFAULT_CONSOLE_ADDRESS_VALUE: &'static str = "127.0.0.1:9001";
/// get the default volume_name
///
/// # Returns
/// * The default volume name
///
/// # Example
/// ```
/// let volume_name = RustFSConfig::default_volume_name();
/// ```
pub fn default_volume_name() -> String {
dirs::home_dir()
.map(|home| home.join("rustfs").join("data"))
.and_then(|path| path.to_str().map(String::from))
.unwrap_or_else(|| "data".to_string())
}
/// create a default configuration
///
/// # Returns
/// * The default configuration
///
/// # Example
/// ```
/// let config = RustFSConfig::default_config();
/// println!("{:?}", config);
/// assert_eq!(config.address, "127.0.0.1:9000");
/// ```
pub fn default_config() -> Self {
Self {
address: Self::DEFAULT_ADDRESS_VALUE.to_string(),
host: Self::DEFAULT_HOST_VALUE.to_string(),
port: Self::DEFAULT_PORT_VALUE.to_string(),
access_key: Self::DEFAULT_ACCESS_KEY_VALUE.to_string(),
secret_key: Self::DEFAULT_SECRET_KEY_VALUE.to_string(),
domain_name: Self::DEFAULT_DOMAIN_NAME_VALUE.to_string(),
volume_name: Self::default_volume_name(),
console_address: Self::DEFAULT_CONSOLE_ADDRESS_VALUE.to_string(),
}
}
/// Load the configuration from the keyring
///
/// # Errors
/// * If the configuration cannot be loaded from the keyring
/// * If the configuration cannot be deserialized
/// * If the address cannot be extracted from the configuration
///
/// # Example
/// ```
/// let config = RustFSConfig::load().unwrap();
/// println!("{:?}", config);
/// assert_eq!(config.address, "127.0.0.1:9000");
/// ```
pub fn load() -> Result<Self, Box<dyn Error>> {
let mut config = Self::default_config();
// Try to get the configuration of the storage from the keyring
let entry = Entry::new(Self::SERVICE_NAME, Self::SERVICE_KEY)?;
if let Ok(stored_json) = entry.get_password() {
if let Ok(stored_config) = serde_json::from_str::<RustFSConfig>(&stored_json) {
// Update fields that are non-empty and differ from the defaults
if !stored_config.address.is_empty() && stored_config.address != Self::DEFAULT_ADDRESS_VALUE {
config.address = stored_config.address;
let (host, port) = Self::extract_host_port(config.address.as_str())
.ok_or_else(|| format!("Unable to extract host and port from address '{}'", config.address))?;
config.host = host.to_string();
config.port = port.to_string();
}
if !stored_config.access_key.is_empty() && stored_config.access_key != Self::DEFAULT_ACCESS_KEY_VALUE {
config.access_key = stored_config.access_key;
}
if !stored_config.secret_key.is_empty() && stored_config.secret_key != Self::DEFAULT_SECRET_KEY_VALUE {
config.secret_key = stored_config.secret_key;
}
if !stored_config.domain_name.is_empty() && stored_config.domain_name != Self::DEFAULT_DOMAIN_NAME_VALUE {
config.domain_name = stored_config.domain_name;
}
// The stored volume_name is updated only if it is not empty and different from the default
if !stored_config.volume_name.is_empty() && stored_config.volume_name != Self::default_volume_name() {
config.volume_name = stored_config.volume_name;
}
if !stored_config.console_address.is_empty()
&& stored_config.console_address != Self::DEFAULT_CONSOLE_ADDRESS_VALUE
{
config.console_address = stored_config.console_address;
}
}
}
Ok(config)
}
/// Auxiliary method: Extract the host and port from the address string
/// # Arguments
/// * `address` - The address string
///
/// # Returns
/// * `Some((host, port))` - The host and port
/// * `None` - If the address is not in the form 'host:port' or the port is not a valid u16
///
/// # Example
/// ```
/// let (host, port) = RustFSConfig::extract_host_port("127.0.0.1:9000").unwrap();
/// assert_eq!(host, "127.0.0.1");
/// assert_eq!(port, 9000);
/// ```
pub fn extract_host_port(address: &str) -> Option<(&str, u16)> {
let parts: Vec<&str> = address.split(':').collect();
if parts.len() == 2 {
if let Ok(port) = parts[1].parse::<u16>() {
return Some((parts[0], port));
}
}
None
}
/// save the configuration to keyring
///
/// # Errors
/// * If the configuration cannot be serialized
/// * If the configuration cannot be saved to the keyring
///
/// # Example
/// ```
/// let config = RustFSConfig::default_config();
/// config.save().unwrap();
/// ```
pub fn save(&self) -> Result<(), Box<dyn Error>> {
let entry = Entry::new(Self::SERVICE_NAME, Self::SERVICE_KEY)?;
let json = serde_json::to_string(self)?;
entry.set_password(&json)?;
Ok(())
}
/// Clear the stored configuration from the system keyring
///
/// # Returns
/// `Ok(())` if the configuration was successfully cleared, or an error if the operation failed.
///
/// # Example
/// ```
/// RustFSConfig::clear().unwrap();
/// ```
#[allow(dead_code)]
pub fn clear() -> Result<(), Box<dyn Error>> {
let entry = Entry::new(Self::SERVICE_NAME, Self::SERVICE_KEY)?;
entry.delete_credential()?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_rustfs_config_default() {
let config = RustFSConfig::default();
assert!(config.address.is_empty());
assert!(config.host.is_empty());
assert!(config.port.is_empty());
assert!(config.access_key.is_empty());
assert!(config.secret_key.is_empty());
assert!(config.domain_name.is_empty());
assert!(config.volume_name.is_empty());
assert!(config.console_address.is_empty());
}
#[test]
fn test_rustfs_config_creation() {
let config = RustFSConfig {
address: "192.168.1.100:9000".to_string(),
host: "192.168.1.100".to_string(),
port: "9000".to_string(),
access_key: "testuser".to_string(),
secret_key: "testpass".to_string(),
domain_name: "test.rustfs.com".to_string(),
volume_name: "/data/rustfs".to_string(),
console_address: "192.168.1.100:9001".to_string(),
};
assert_eq!(config.address, "192.168.1.100:9000");
assert_eq!(config.host, "192.168.1.100");
assert_eq!(config.port, "9000");
assert_eq!(config.access_key, "testuser");
assert_eq!(config.secret_key, "testpass");
assert_eq!(config.domain_name, "test.rustfs.com");
assert_eq!(config.volume_name, "/data/rustfs");
assert_eq!(config.console_address, "192.168.1.100:9001");
}
#[test]
fn test_default_volume_name() {
let volume_name = RustFSConfig::default_volume_name();
assert!(!volume_name.is_empty());
// Should either be the home directory path or fallback to "data"
assert!(volume_name.contains("rustfs") || volume_name == "data");
}
#[test]
fn test_default_config() {
let config = RustFSConfig::default_config();
assert_eq!(config.address, RustFSConfig::DEFAULT_ADDRESS_VALUE);
assert_eq!(config.host, RustFSConfig::DEFAULT_HOST_VALUE);
assert_eq!(config.port, RustFSConfig::DEFAULT_PORT_VALUE);
assert_eq!(config.access_key, RustFSConfig::DEFAULT_ACCESS_KEY_VALUE);
assert_eq!(config.secret_key, RustFSConfig::DEFAULT_SECRET_KEY_VALUE);
assert_eq!(config.domain_name, RustFSConfig::DEFAULT_DOMAIN_NAME_VALUE);
assert_eq!(config.console_address, RustFSConfig::DEFAULT_CONSOLE_ADDRESS_VALUE);
assert!(!config.volume_name.is_empty());
}
#[test]
fn test_extract_host_port_valid() {
let test_cases = vec![
("127.0.0.1:9000", Some(("127.0.0.1", 9000))),
("localhost:8080", Some(("localhost", 8080))),
("192.168.1.100:3000", Some(("192.168.1.100", 3000))),
("0.0.0.0:80", Some(("0.0.0.0", 80))),
("example.com:443", Some(("example.com", 443))),
];
for (input, expected) in test_cases {
let result = RustFSConfig::extract_host_port(input);
assert_eq!(result, expected, "Failed for input: {input}");
}
}
#[test]
fn test_extract_host_port_invalid() {
let invalid_cases = vec![
"127.0.0.1", // Missing port
"127.0.0.1:", // Empty port
"127.0.0.1:abc", // Invalid port
"127.0.0.1:99999", // Port out of range
"", // Empty string
"127.0.0.1:9000:extra", // Too many parts
"invalid", // No colon
];
for input in invalid_cases {
let result = RustFSConfig::extract_host_port(input);
assert_eq!(result, None, "Should be None for input: {input}");
}
// Special case: empty host but valid port should still work
let result = RustFSConfig::extract_host_port(":9000");
assert_eq!(result, Some(("", 9000)));
}
#[test]
fn test_extract_host_port_edge_cases() {
// Test edge cases for port numbers
assert_eq!(RustFSConfig::extract_host_port("host:0"), Some(("host", 0)));
assert_eq!(RustFSConfig::extract_host_port("host:65535"), Some(("host", 65535)));
assert_eq!(RustFSConfig::extract_host_port("host:65536"), None); // Out of range
}
#[test]
fn test_serialization() {
let config = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "admin".to_string(),
secret_key: "password".to_string(),
domain_name: "test.com".to_string(),
volume_name: "/data".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
let json = serde_json::to_string(&config).unwrap();
assert!(json.contains("127.0.0.1:9000"));
assert!(json.contains("admin"));
assert!(json.contains("test.com"));
}
#[test]
fn test_deserialization() {
let json = r#"{
"address": "192.168.1.100:9000",
"host": "192.168.1.100",
"port": "9000",
"access_key": "testuser",
"secret_key": "testpass",
"domain_name": "example.com",
"volume_name": "/opt/data",
"console_address": "192.168.1.100:9001"
}"#;
let config: RustFSConfig = serde_json::from_str(json).unwrap();
assert_eq!(config.address, "192.168.1.100:9000");
assert_eq!(config.host, "192.168.1.100");
assert_eq!(config.port, "9000");
assert_eq!(config.access_key, "testuser");
assert_eq!(config.secret_key, "testpass");
assert_eq!(config.domain_name, "example.com");
assert_eq!(config.volume_name, "/opt/data");
assert_eq!(config.console_address, "192.168.1.100:9001");
}
#[test]
fn test_serialization_deserialization_roundtrip() {
let original_config = RustFSConfig {
address: "10.0.0.1:8080".to_string(),
host: "10.0.0.1".to_string(),
port: "8080".to_string(),
access_key: "roundtrip_user".to_string(),
secret_key: "roundtrip_pass".to_string(),
domain_name: "roundtrip.test".to_string(),
volume_name: "/tmp/roundtrip".to_string(),
console_address: "10.0.0.1:8081".to_string(),
};
let json = serde_json::to_string(&original_config).unwrap();
let deserialized_config: RustFSConfig = serde_json::from_str(&json).unwrap();
assert_eq!(original_config, deserialized_config);
}
#[test]
fn test_config_ordering() {
let config1 = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "admin".to_string(),
secret_key: "password".to_string(),
domain_name: "test.com".to_string(),
volume_name: "/data".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
let config2 = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "admin".to_string(),
secret_key: "password".to_string(),
domain_name: "test.com".to_string(),
volume_name: "/data".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
let config3 = RustFSConfig {
address: "127.0.0.1:9001".to_string(), // Different port
host: "127.0.0.1".to_string(),
port: "9001".to_string(),
access_key: "admin".to_string(),
secret_key: "password".to_string(),
domain_name: "test.com".to_string(),
volume_name: "/data".to_string(),
console_address: "127.0.0.1:9002".to_string(),
};
assert_eq!(config1, config2);
assert_ne!(config1, config3);
assert!(config1 < config3); // Lexicographic ordering
}
#[test]
fn test_clone() {
let original = RustFSConfig::default_config();
let cloned = original.clone();
assert_eq!(original, cloned);
assert_eq!(original.address, cloned.address);
assert_eq!(original.access_key, cloned.access_key);
}
#[test]
fn test_debug_format() {
let config = RustFSConfig::default_config();
let debug_str = format!("{config:?}");
assert!(debug_str.contains("RustFSConfig"));
assert!(debug_str.contains("address"));
assert!(debug_str.contains("127.0.0.1:9000"));
}
#[test]
fn test_constants() {
assert_eq!(RustFSConfig::SERVICE_NAME, "rustfs-service");
assert_eq!(RustFSConfig::SERVICE_KEY, "rustfs_key");
assert_eq!(RustFSConfig::DEFAULT_DOMAIN_NAME_VALUE, "demo.rustfs.com");
assert_eq!(RustFSConfig::DEFAULT_ADDRESS_VALUE, "127.0.0.1:9000");
assert_eq!(RustFSConfig::DEFAULT_PORT_VALUE, "9000");
assert_eq!(RustFSConfig::DEFAULT_HOST_VALUE, "127.0.0.1");
assert_eq!(RustFSConfig::DEFAULT_ACCESS_KEY_VALUE, "rustfsadmin");
assert_eq!(RustFSConfig::DEFAULT_SECRET_KEY_VALUE, "rustfsadmin");
assert_eq!(RustFSConfig::DEFAULT_CONSOLE_ADDRESS_VALUE, "127.0.0.1:9001");
}
#[test]
fn test_empty_strings() {
let config = RustFSConfig {
address: "".to_string(),
host: "".to_string(),
port: "".to_string(),
access_key: "".to_string(),
secret_key: "".to_string(),
domain_name: "".to_string(),
volume_name: "".to_string(),
console_address: "".to_string(),
};
assert!(config.address.is_empty());
assert!(config.host.is_empty());
assert!(config.port.is_empty());
assert!(config.access_key.is_empty());
assert!(config.secret_key.is_empty());
assert!(config.domain_name.is_empty());
assert!(config.volume_name.is_empty());
assert!(config.console_address.is_empty());
}
#[test]
fn test_very_long_strings() {
let long_string = "a".repeat(1000);
let config = RustFSConfig {
address: format!("{long_string}:9000"),
host: long_string.clone(),
port: "9000".to_string(),
access_key: long_string.clone(),
secret_key: long_string.clone(),
domain_name: format!("{long_string}.com"),
volume_name: format!("/data/{long_string}"),
console_address: format!("{long_string}:9001"),
};
assert_eq!(config.host.len(), 1000);
assert_eq!(config.access_key.len(), 1000);
assert_eq!(config.secret_key.len(), 1000);
}
#[test]
fn test_special_characters() {
let config = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "user@domain.com".to_string(),
secret_key: "p@ssw0rd!#$%".to_string(),
domain_name: "test-domain.example.com".to_string(),
volume_name: "/data/rust-fs/storage".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
assert!(config.access_key.contains("@"));
assert!(config.secret_key.contains("!#$%"));
assert!(config.domain_name.contains("-"));
assert!(config.volume_name.contains("/"));
}
#[test]
fn test_unicode_strings() {
let config = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "username".to_string(),
secret_key: "password123".to_string(),
domain_name: "test.com".to_string(),
volume_name: "/data/storage".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
assert_eq!(config.access_key, "username");
assert_eq!(config.secret_key, "password123");
assert_eq!(config.domain_name, "test.com");
assert_eq!(config.volume_name, "/data/storage");
}
#[test]
fn test_memory_efficiency() {
// Test that the structure doesn't use excessive memory
assert!(std::mem::size_of::<RustFSConfig>() < 1000);
}
// Note: Keyring-related tests (load, save, clear) are not included here
// because they require actual keyring access and would be integration tests
// rather than unit tests. They should be tested separately in an integration
// test environment where keyring access can be properly mocked or controlled.
}
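For orientation, a minimal sketch of the keyring-backed roundtrip exposed by RustFSConfig above (the roundtrip function is hypothetical, and real keyring access needs a desktop credential store, which is why the unit tests above skip it):

use std::error::Error;

fn roundtrip() -> Result<(), Box<dyn Error>> {
    // load() starts from default_config() and overlays stored, non-default values.
    let mut config = RustFSConfig::load()?;
    // extract_host_port() splits "host:port" and validates the port as a u16.
    if let Some((host, port)) = RustFSConfig::extract_host_port(&config.address) {
        config.host = host.to_string();
        config.port = port.to_string();
    }
    // save() serializes to JSON and stores it under the "rustfs-service" entry.
    config.save()?;
    Ok(())
}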

View File

@@ -1,899 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::utils::RustFSConfig;
use dioxus::logger::tracing::{debug, error, info};
use rust_embed::RustEmbed;
use sha2::{Digest, Sha256};
use std::error::Error;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::sync::LazyLock;
use std::time::Duration;
use tokio::fs;
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio::sync::{Mutex, mpsc};
#[derive(RustEmbed)]
#[folder = "$CARGO_MANIFEST_DIR/embedded-rustfs/"]
struct Asset;
// Use `LazyLock` to cache the checksum of embedded resources
static RUSTFS_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| {
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let rustfs_data = Asset::get(rustfs_file).expect("RustFS binary not embedded");
let hash = hex::encode(Sha256::digest(&rustfs_data.data));
Mutex::new(hash)
});
/// Service command
/// This enum represents the commands that can be sent to the service manager
/// to start, stop, or restart the service
/// The `Start` and `Restart` variants carry the service configuration
///
/// # Example
/// ```
/// let config = RustFSConfig {
/// address: "127.0.0.1:9000".to_string(),
/// host: "127.0.0.1".to_string(),
/// port: "9000".to_string(),
/// access_key: "rustfsadmin".to_string(),
/// secret_key: "rustfsadmin".to_string(),
/// domain_name: "demo.rustfs.com".to_string(),
/// volume_name: "data".to_string(),
/// console_address: "127.0.0.1:9001".to_string(),
/// };
///
/// let command = ServiceCommand::Start(config);
/// ```
pub enum ServiceCommand {
Start(RustFSConfig),
Stop,
Restart(RustFSConfig),
}
/// Service operation result
/// This struct represents the result of a service operation
/// It records whether the operation succeeded, the start and end times, and a message.
///
/// # Example
/// ```
/// use chrono::Local;
///
/// let result = ServiceOperationResult {
/// success: true,
/// start_time: chrono::Local::now(),
/// end_time: chrono::Local::now(),
/// message: "Service started successfully".to_string(),
/// };
///
/// println!("{:?}", result);
/// assert_eq!(result.success, true);
/// ```
#[derive(Debug)]
pub struct ServiceOperationResult {
pub success: bool,
pub start_time: chrono::DateTime<chrono::Local>,
pub end_time: chrono::DateTime<chrono::Local>,
pub message: String,
}
/// Service manager
/// This struct represents a service manager that can be used to start, stop, or restart a service
/// It contains a command sender that can be used to send commands to the service manager
///
/// # Example
/// ```
/// let service_manager = ServiceManager::new();
/// println!("{:?}", service_manager);
/// ```
#[derive(Debug, Clone)]
pub struct ServiceManager {
command_tx: mpsc::Sender<ServiceCommand>,
// process: Arc<Mutex<Option<Child>>>,
// pid: Arc<Mutex<Option<u32>>>, // Add PID storage
// current_config: Arc<Mutex<Option<RustFSConfig>>>, // Add configuration storage
}
impl ServiceManager {
/// check if the service is running and return a pid
/// This function is platform dependent
/// On Unix systems, it uses the `ps` command to check for the service
/// On Windows systems, it uses the `wmic` command to check for the service
///
/// # Example
/// ```
/// let pid = check_service_status().await;
/// println!("{:?}", pid);
/// ```
pub async fn check_service_status() -> Option<u32> {
#[cfg(unix)]
{
// use the ps command on a unix system
if let Ok(output) = StdCommand::new("ps").arg("-ef").output() {
let output_str = String::from_utf8_lossy(&output.stdout);
for line in output_str.lines() {
// match lines that contain `rustfs/bin/rustfs` (skipping grep itself)
if line.contains("rustfs/bin/rustfs") && !line.contains("grep") {
if let Some(pid_str) = line.split_whitespace().nth(1) {
if let Ok(pid) = pid_str.parse::<u32>() {
return Some(pid);
}
}
}
}
}
}
#[cfg(windows)]
{
if let Ok(output) = StdCommand::new("wmic")
.arg("process")
.arg("where")
.arg("caption='rustfs.exe'")
.arg("get")
.arg("processid")
.output()
{
let output_str = String::from_utf8_lossy(&output.stdout);
for line in output_str.lines() {
if let Ok(pid) = line.trim().parse::<u32>() {
return Some(pid);
}
}
}
}
None
}
/// Prepare the service
/// This function extracts the embedded service executable if it is missing or its cached checksum is stale
/// It also creates the necessary directories for the service
///
/// # Example
/// ```
/// let executable_path = prepare_service().await;
/// println!("{:?}", executable_path);
/// ```
async fn prepare_service() -> Result<PathBuf, Box<dyn Error>> {
// get the user directory
let home_dir = dirs::home_dir().ok_or("Unable to get user directory")?;
let rustfs_dir = home_dir.join("rustfs");
let bin_dir = rustfs_dir.join("bin");
let data_dir = rustfs_dir.join("data");
let logs_dir = rustfs_dir.join("logs");
// create the necessary directories
for dir in [&bin_dir, &data_dir, &logs_dir] {
if !dir.exists() {
tokio::fs::create_dir_all(dir).await?;
}
}
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let executable_path = bin_dir.join(rustfs_file);
let hash_path = bin_dir.join("embedded_rustfs.sha256");
if executable_path.exists() && hash_path.exists() {
let cached_hash = fs::read_to_string(&hash_path).await?;
let expected_hash = RUSTFS_HASH.lock().await;
if cached_hash == *expected_hash {
println!("Use cached rustfs: {executable_path:?}");
return Ok(executable_path);
}
}
// Extract and write files
let rustfs_data = Asset::get(rustfs_file).expect("RustFS binary not embedded");
let mut file = File::create(&executable_path).await?;
file.write_all(&rustfs_data.data).await?;
let expected_hash = hex::encode(Sha256::digest(&rustfs_data.data));
fs::write(&hash_path, expected_hash).await?;
// set execution permissions on unix systems
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
let mut perms = std::fs::metadata(&executable_path)?.permissions();
perms.set_mode(0o755);
std::fs::set_permissions(&executable_path, perms)?;
}
Ok(executable_path)
}
/// Helper function: Extracts the port from the address string
///
/// # Example
/// ```
/// let address = "127.0.0.1:9000";
/// let port = extract_port(address);
/// println!("{:?}", port);
/// ```
fn extract_port(address: &str) -> Option<u16> {
address.split(':').nth(1)?.parse().ok()
}
/// Create a new instance of the service manager
///
/// # Example
/// ```
/// let service_manager = ServiceManager::new();
/// println!("{:?}", service_manager);
/// ```
pub(crate) fn new() -> Self {
let (command_tx, mut command_rx) = mpsc::channel(10);
// Start the control loop
tokio::spawn(async move {
while let Some(cmd) = command_rx.recv().await {
match cmd {
ServiceCommand::Start(config) => {
if let Err(e) = Self::start_service(&config).await {
Self::show_error(&format!("Failed to start service: {e}"));
}
}
ServiceCommand::Stop => {
if let Err(e) = Self::stop_service().await {
Self::show_error(&format!("Failed to stop service: {e}"));
}
}
ServiceCommand::Restart(config) => {
if Self::check_service_status().await.is_some() {
if let Err(e) = Self::stop_service().await {
Self::show_error(&format!("Failed to restart service: {e}"));
continue;
}
}
if let Err(e) = Self::start_service(&config).await {
Self::show_error(&format!("Failed to restart service: {e}"));
}
}
}
}
});
ServiceManager { command_tx }
}
/// Start the service
/// This function starts the service with the given configuration
///
/// # Example
/// ```
/// let config = RustFSConfig {
/// address: "127.0.0.1:9000".to_string(),
/// host: "127.0.0.1".to_string(),
/// port: "9000".to_string(),
/// access_key: "rustfsadmin".to_string(),
/// secret_key: "rustfsadmin".to_string(),
/// domain_name: "demo.rustfs.com".to_string(),
/// volume_name: "data".to_string(),
/// console_address: "127.0.0.1:9001".to_string(),
/// };
///
/// let result = start_service(&config).await;
/// println!("{:?}", result);
/// ```
async fn start_service(config: &RustFSConfig) -> Result<(), Box<dyn Error>> {
// Check if the service is already running
if let Some(existing_pid) = Self::check_service_status().await {
return Err(format!("Service is already running, PID: {existing_pid}").into());
}
// Prepare the service program
let executable_path = Self::prepare_service().await?;
// Ensure the data directory exists
let volume_name_path = Path::new(&config.volume_name);
if !volume_name_path.exists() {
tokio::fs::create_dir_all(&config.volume_name).await?;
}
// Extract the port from the configuration
let main_port = Self::extract_port(&config.address).ok_or("Unable to parse main service port")?;
let console_port = Self::extract_port(&config.console_address).ok_or("Unable to parse console port")?;
let host = config.address.split(':').next().ok_or("Unable to parse host address")?;
// Check the port
let ports = vec![main_port, console_port];
for port in ports {
if Self::is_port_in_use(host, port).await {
return Err(format!("Port {port} is already in use").into());
}
}
// Start the service
let mut child = tokio::process::Command::new(executable_path)
.arg("--address")
.arg(&config.address)
.arg("--access-key")
.arg(&config.access_key)
.arg("--secret-key")
.arg(&config.secret_key)
.arg("--console-address")
.arg(&config.console_address)
.arg(config.volume_name.clone())
.spawn()?;
let process_pid = child.id().ok_or("Unable to obtain the service process ID")?;
// Wait for the service to start
tokio::time::sleep(Duration::from_secs(2)).await;
// Check if the service started successfully
if Self::is_port_in_use(host, main_port).await {
Self::show_info(&format!("Service started successfully! Process ID: {process_pid}"));
Ok(())
} else {
child.kill().await?;
Err("Service failed to start".into())
}
}
/// Stop the service
/// This function stops the service
///
/// # Example
/// ```
/// let result = stop_service().await;
/// println!("{:?}", result);
/// ```
async fn stop_service() -> Result<(), Box<dyn Error>> {
let existing_pid = Self::check_service_status().await;
debug!("existing_pid: {:?}", existing_pid);
if let Some(service_pid) = existing_pid {
// Attempt to terminate the process
#[cfg(unix)]
{
StdCommand::new("kill").arg("-9").arg(service_pid.to_string()).output()?;
}
#[cfg(windows)]
{
StdCommand::new("taskkill")
.arg("/F")
.arg("/PID")
.arg(service_pid.to_string())
.output()?;
}
// Verify that the service is indeed stopped
tokio::time::sleep(Duration::from_secs(1)).await;
if Self::check_service_status().await.is_some() {
return Err("Service failed to stop".into());
}
Self::show_info("Service stopped successfully");
Ok(())
} else {
Err("Service is not running".into())
}
}
/// Check if the port is in use
/// This function checks if the given port is in use on the given host
///
/// # Example
/// ```
/// let host = "127.0.0.1";
/// let port = 9000;
/// let result = is_port_in_use(host, port).await;
/// println!("{:?}", result);
/// ```
async fn is_port_in_use(host: &str, port: u16) -> bool {
TcpStream::connect(format!("{host}:{port}")).await.is_ok()
}
/// Show an error message
/// This function shows an error message dialog
///
/// # Example
/// ```
/// show_error("This is an error message");
/// ```
pub(crate) fn show_error(message: &str) {
rfd::MessageDialog::new()
.set_title("Error")
.set_description(message)
.set_level(rfd::MessageLevel::Error)
.show();
}
/// Show an information message
/// This function shows an information message dialog
///
/// # Example
/// ```
/// show_info("This is an information message");
/// ```
pub(crate) fn show_info(message: &str) {
rfd::MessageDialog::new()
.set_title("Success")
.set_description(message)
.set_level(rfd::MessageLevel::Info)
.show();
}
/// Start the service
/// This function sends a `Start` command to the service manager
///
/// # Example
/// ```
/// let config = RustFSConfig {
/// address: "127.0.0.1:9000".to_string(),
/// host: "127.0.0.1".to_string(),
/// port: "9000".to_string(),
/// access_key: "rustfsadmin".to_string(),
/// secret_key: "rustfsadmin".to_string(),
/// domain_name: "demo.rustfs.com".to_string(),
/// volume_name: "data".to_string(),
/// console_address: "127.0.0.1:9001".to_string(),
/// };
///
/// let service_manager = ServiceManager::new();
/// let result = service_manager.start(config).await;
/// println!("{:?}", result);
/// ```
///
/// # Errors
/// This function returns an error if the service fails to start
///
/// # Panics
/// This function panics if the port number is invalid
///
/// # Safety
/// This function is not marked as unsafe
///
/// # Performance
/// This function is not optimized for performance
///
/// # Design
/// This function is designed to be simple and easy to use
///
/// # Security
/// This function does not have any security implications
pub async fn start(&self, config: RustFSConfig) -> Result<ServiceOperationResult, Box<dyn Error>> {
let start_time = chrono::Local::now();
self.command_tx.send(ServiceCommand::Start(config.clone())).await?;
let host = &config.host;
let port = config.port.parse::<u16>().expect("Invalid port number");
// wait for the service to actually start
let mut retries = 0;
while retries < 30 {
// wait up to 30 seconds
if Self::check_service_status().await.is_some() && Self::is_port_in_use(host, port).await {
let end_time = chrono::Local::now();
return Ok(ServiceOperationResult {
success: true,
start_time,
end_time,
message: "Service started successfully".to_string(),
});
}
tokio::time::sleep(Duration::from_secs(1)).await;
retries += 1;
}
Err("Service start timeout".into())
}
/// Stop the service
/// This function sends a `Stop` command to the service manager
///
/// # Example
/// ```
/// let service_manager = ServiceManager::new();
/// let result = service_manager.stop().await;
/// println!("{:?}", result);
/// ```
///
/// # Errors
/// This function returns an error if the service fails to stop
///
/// # Panics
/// This function panics if the port number is invalid
///
/// # Safety
/// This function is not marked as unsafe
///
/// # Performance
/// This function is not optimized for performance
///
/// # Design
/// This function is designed to be simple and easy to use
///
/// # Security
/// This function does not have any security implications
pub async fn stop(&self) -> Result<ServiceOperationResult, Box<dyn Error>> {
let start_time = chrono::Local::now();
self.command_tx.send(ServiceCommand::Stop).await?;
// Wait for the service to actually stop
let mut retries = 0;
while retries < 15 {
// Wait up to 15 seconds
if Self::check_service_status().await.is_none() {
let end_time = chrono::Local::now();
return Ok(ServiceOperationResult {
success: true,
start_time,
end_time,
message: "Service stopped successfully".to_string(),
});
}
tokio::time::sleep(Duration::from_secs(1)).await;
retries += 1;
}
Err("Service stop timeout".into())
}
/// Restart the service
/// This function sends a `Restart` command to the service manager
///
/// # Example
/// ```
/// let config = RustFSConfig {
/// address: "127.0.0.1:9000".to_string(),
/// host: "127.0.0.1".to_string(),
/// port: "9000".to_string(),
/// access_key: "rustfsadmin".to_string(),
/// secret_key: "rustfsadmin".to_string(),
/// domain_name: "demo.rustfs.com".to_string(),
/// volume_name: "data".to_string(),
/// console_address: "127.0.0.1:9001".to_string(),
/// };
///
/// let service_manager = ServiceManager::new();
/// let result = service_manager.restart(config).await;
/// println!("{:?}", result);
/// ```
///
/// # Errors
/// This function returns an error if the service fails to restart
///
/// # Panics
/// This function panics if the port number is invalid
///
/// # Safety
/// This function is not marked as unsafe
///
/// # Performance
/// This function is not optimized for performance
///
/// # Design
/// This function is designed to be simple and easy to use
///
/// # Security
/// This function does not have any security implications
pub async fn restart(&self, config: RustFSConfig) -> Result<ServiceOperationResult, Box<dyn Error>> {
let start_time = chrono::Local::now();
self.command_tx.send(ServiceCommand::Restart(config.clone())).await?;
let host = &config.host;
let port = config.port.parse::<u16>().expect("Invalid port number");
// wait for the service to restart
let mut retries = 0;
while retries < 45 {
// Allow a longer wait since both stopping and starting are involved
if Self::check_service_status().await.is_some() && Self::is_port_in_use(host, port).await {
match config.save() {
Ok(_) => info!("save config success"),
Err(e) => {
error!("save config error: {}", e);
self.command_tx.send(ServiceCommand::Stop).await?;
Self::show_error("Failed to save configuration");
return Err("Failed to save configuration".into());
}
}
let end_time = chrono::Local::now();
return Ok(ServiceOperationResult {
success: true,
start_time,
end_time,
message: "Service restarted successfully".to_string(),
});
}
tokio::time::sleep(Duration::from_secs(1)).await;
retries += 1;
}
Err("Service restart timeout".into())
}
}
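// Usage sketch (illustrative, not part of this file): drive the manager from an
// async context. `demo` is a hypothetical caller with minimal error handling.
//
// async fn demo() -> Result<(), Box<dyn std::error::Error>> {
//     let manager = ServiceManager::new();
//     let config = RustFSConfig::default_config();
//     let started = manager.start(config.clone()).await?; // waits up to ~30s
//     info!("started in {}ms", (started.end_time - started.start_time).num_milliseconds());
//     manager.stop().await?; // waits up to ~15s
//     Ok(())
// }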
#[cfg(test)]
mod tests {
use super::*;
use std::time::Duration;
#[test]
fn test_service_command_creation() {
let config = RustFSConfig::default_config();
let start_cmd = ServiceCommand::Start(config.clone());
let stop_cmd = ServiceCommand::Stop;
let restart_cmd = ServiceCommand::Restart(config);
// Test that commands can be created
match start_cmd {
ServiceCommand::Start(_) => {}
_ => panic!("Expected Start command"),
}
match stop_cmd {
ServiceCommand::Stop => {}
_ => panic!("Expected Stop command"),
}
match restart_cmd {
ServiceCommand::Restart(_) => {}
_ => panic!("Expected Restart command"),
}
}
#[test]
fn test_service_operation_result_creation() {
let start_time = chrono::Local::now();
let end_time = chrono::Local::now();
let success_result = ServiceOperationResult {
success: true,
start_time,
end_time,
message: "Operation successful".to_string(),
};
let failure_result = ServiceOperationResult {
success: false,
start_time,
end_time,
message: "Operation failed".to_string(),
};
assert!(success_result.success);
assert_eq!(success_result.message, "Operation successful");
assert!(!failure_result.success);
assert_eq!(failure_result.message, "Operation failed");
}
#[test]
fn test_service_operation_result_debug() {
let result = ServiceOperationResult {
success: true,
start_time: chrono::Local::now(),
end_time: chrono::Local::now(),
message: "Test message".to_string(),
};
let debug_str = format!("{result:?}");
assert!(debug_str.contains("ServiceOperationResult"));
assert!(debug_str.contains("success: true"));
assert!(debug_str.contains("Test message"));
}
#[test]
fn test_service_manager_creation() {
// Test ServiceManager creation in a tokio runtime
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
let service_manager = ServiceManager::new();
// Test that ServiceManager can be created and cloned
let cloned_manager = service_manager.clone();
// Both should be valid (we can't test much more without async runtime)
assert!(format!("{service_manager:?}").contains("ServiceManager"));
assert!(format!("{cloned_manager:?}").contains("ServiceManager"));
});
}
#[test]
fn test_extract_port_valid() {
let test_cases = vec![
("127.0.0.1:9000", Some(9000)),
("localhost:8080", Some(8080)),
("192.168.1.100:3000", Some(3000)),
("0.0.0.0:80", Some(80)),
("example.com:443", Some(443)),
("host:65535", Some(65535)),
("host:1", Some(1)),
];
for (input, expected) in test_cases {
let result = ServiceManager::extract_port(input);
assert_eq!(result, expected, "Failed for input: {input}");
}
}
#[test]
fn test_extract_port_invalid() {
let invalid_cases = vec![
"127.0.0.1", // Missing port
"127.0.0.1:", // Empty port
"127.0.0.1:abc", // Invalid port
"127.0.0.1:99999", // Port out of range
"", // Empty string
"invalid", // No colon
"host:-1", // Negative port
"host:0.5", // Decimal port
];
for input in invalid_cases {
let result = ServiceManager::extract_port(input);
assert_eq!(result, None, "Should be None for input: {input}");
}
// Special case: empty host but valid port should still work
assert_eq!(ServiceManager::extract_port(":9000"), Some(9000));
// Special case: multiple colons - extract_port takes the second part
// For "127.0.0.1:9000:extra", it takes "9000" which is valid
assert_eq!(ServiceManager::extract_port("127.0.0.1:9000:extra"), Some(9000));
}
#[test]
fn test_extract_port_edge_cases() {
// Test edge cases for port numbers
assert_eq!(ServiceManager::extract_port("host:0"), Some(0));
assert_eq!(ServiceManager::extract_port("host:65535"), Some(65535));
assert_eq!(ServiceManager::extract_port("host:65536"), None); // Out of range
// IPv6-like address - extract_port takes the second part after split(':')
// For "::1:8080", split(':') gives ["", "", "1", "8080"], nth(1) gives ""
assert_eq!(ServiceManager::extract_port("::1:8080"), None); // Second part is empty
// For "[::1]:8080", split(':') gives ["[", "", "1]", "8080"], nth(1) gives ""
assert_eq!(ServiceManager::extract_port("[::1]:8080"), None); // Second part is empty
}
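// Editorial sketch (not part of the original source): the assertions above
// pin down a parsing strategy of "take the segment after the first ':' and
// parse it as u16". `extract_port_sketch` is a hypothetical stand-in for
// ServiceManager::extract_port, shown only to make that behavior explicit.
fn extract_port_sketch(address: &str) -> Option<u16> {
// split(':') yields "" for empty segments, so "::1:8080" gives None
// while ":9000" still parses, matching the edge cases above.
address.split(':').nth(1)?.parse::<u16>().ok()
}
#[test]
fn test_extract_port_sketch_behavior() {
assert_eq!(extract_port_sketch("127.0.0.1:9000"), Some(9000));
assert_eq!(extract_port_sketch(":9000"), Some(9000));
assert_eq!(extract_port_sketch("[::1]:8080"), None);
}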
#[test]
fn test_show_error() {
// Test that show_error function exists and can be called
// We can't actually test the dialog in a test environment
// so we just verify the function signature
}
#[test]
fn test_show_info() {
// Test that show_info function exists and can be called
// We can't actually test the dialog in a test environment
// so we just verify the function signature
}
#[test]
fn test_service_operation_result_timing() {
let start_time = chrono::Local::now();
std::thread::sleep(Duration::from_millis(10)); // Small delay
let end_time = chrono::Local::now();
let result = ServiceOperationResult {
success: true,
start_time,
end_time,
message: "Timing test".to_string(),
};
// End time should be after start time
assert!(result.end_time >= result.start_time);
}
#[test]
fn test_service_operation_result_with_unicode() {
let result = ServiceOperationResult {
success: true,
start_time: chrono::Local::now(),
end_time: chrono::Local::now(),
message: "Operation successful 🎉".to_string(),
};
assert_eq!(result.message, "Operation successful 🎉");
assert!(result.success);
}
#[test]
fn test_service_operation_result_with_long_message() {
let long_message = "A".repeat(10000);
let result = ServiceOperationResult {
success: false,
start_time: chrono::Local::now(),
end_time: chrono::Local::now(),
message: long_message.clone(),
};
assert_eq!(result.message.len(), 10000);
assert_eq!(result.message, long_message);
assert!(!result.success);
}
#[test]
fn test_service_command_with_different_configs() {
let config1 = RustFSConfig {
address: "127.0.0.1:9000".to_string(),
host: "127.0.0.1".to_string(),
port: "9000".to_string(),
access_key: "admin1".to_string(),
secret_key: "pass1".to_string(),
domain_name: "test1.com".to_string(),
volume_name: "/data1".to_string(),
console_address: "127.0.0.1:9001".to_string(),
};
let config2 = RustFSConfig {
address: "192.168.1.100:8080".to_string(),
host: "192.168.1.100".to_string(),
port: "8080".to_string(),
access_key: "admin2".to_string(),
secret_key: "pass2".to_string(),
domain_name: "test2.com".to_string(),
volume_name: "/data2".to_string(),
console_address: "192.168.1.100:8081".to_string(),
};
let start_cmd1 = ServiceCommand::Start(config1);
let restart_cmd2 = ServiceCommand::Restart(config2);
// Test that different configs can be used
match start_cmd1 {
ServiceCommand::Start(config) => {
assert_eq!(config.address, "127.0.0.1:9000");
assert_eq!(config.access_key, "admin1");
}
_ => panic!("Expected Start command"),
}
match restart_cmd2 {
ServiceCommand::Restart(config) => {
assert_eq!(config.address, "192.168.1.100:8080");
assert_eq!(config.access_key, "admin2");
}
_ => panic!("Expected Restart command"),
}
}
#[test]
fn test_memory_efficiency() {
// Test that structures don't use excessive memory
assert!(std::mem::size_of::<ServiceCommand>() < 2000);
assert!(std::mem::size_of::<ServiceOperationResult>() < 1000);
assert!(std::mem::size_of::<ServiceManager>() < 1000);
}
// Note: The following methods are not tested here because they require:
// - Async runtime (tokio)
// - File system access
// - Network access
// - Process management
// - External dependencies (embedded assets)
//
// These should be tested in integration tests:
// - check_service_status()
// - prepare_service()
// - start_service()
// - stop_service()
// - is_port_in_use()
// - ServiceManager::start()
// - ServiceManager::stop()
// - ServiceManager::restart()
//
// The RUSTFS_HASH lazy_static is also not tested here as it depends
// on embedded assets that may not be available in unit test environment.
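// Editorial sketch of the kind of integration test described above. The
// signatures are assumptions: `restart` is presumed async, taking a
// RustFSConfig and returning Result<ServiceOperationResult, _>, and a tokio
// test runtime is presumed available as a dev-dependency.
#[tokio::test]
#[ignore = "requires a runnable rustfs binary and free ports"]
async fn test_restart_round_trip_sketch() {
let manager = ServiceManager::new();
let config = RustFSConfig::default_config();
if let Ok(result) = manager.restart(config).await {
assert!(result.success, "restart should report success");
}
}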
}

View File

@@ -1,300 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use dioxus::logger::tracing::debug;
use tracing_appender::non_blocking::WorkerGuard;
use tracing_appender::rolling::{RollingFileAppender, Rotation};
use tracing_subscriber::fmt;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
/// Initialize the logger with a rolling file appender
/// that rotates log files daily
pub fn init_logger() -> WorkerGuard {
// configure rolling logs, rotated daily
let home_dir = dirs::home_dir().expect("Unable to get user directory");
let rustfs_dir = home_dir.join("rustfs");
let logs_dir = rustfs_dir.join("logs");
let file_appender = RollingFileAppender::builder()
.rotation(Rotation::DAILY) // rotate log files once per day
.filename_prefix("rustfs-cli") // log file names will be prefixed with `rustfs-cli.`
.filename_suffix("log") // log file names will be suffixed with `.log`
.build(logs_dir) // build an appender that stores log files under ~/rustfs/logs
.expect("initializing rolling file appender failed");
// non-blocking writer for improved performance
let (non_blocking_file, worker_guard) = tracing_appender::non_blocking(file_appender);
// console output layer
let console_layer = fmt::layer()
.with_writer(std::io::stdout)
.with_ansi(true) // enable colors in the console
.with_line_number(true);
// file output layer
let file_layer = fmt::layer()
.with_writer(non_blocking_file)
.with_ansi(false) // disable colors in the file
.with_thread_names(true)
.with_target(true)
.with_thread_ids(true)
.with_level(true)
.with_line_number(true);
// Combine all layers and initialize the global subscriber
tracing_subscriber::registry()
.with(console_layer)
.with(file_layer)
.with(tracing_subscriber::EnvFilter::new("info")) // default log-level filter (fixed "info" directive)
.init();
debug!("Logger initialized");
worker_guard
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Once;
static INIT: Once = Once::new();
// Helper function to ensure logger is only initialized once in tests
fn ensure_logger_init() {
INIT.call_once(|| {
// Initialize a simple test logger to avoid conflicts
let _ = tracing_subscriber::fmt().with_test_writer().try_init();
});
}
#[test]
fn test_logger_initialization_components() {
ensure_logger_init();
// Test that we can create the components used in init_logger
// without actually initializing the global logger again
// Test home directory access
let home_dir_result = dirs::home_dir();
assert!(home_dir_result.is_some(), "Should be able to get home directory");
let home_dir = home_dir_result.unwrap();
let rustfs_dir = home_dir.join("rustfs");
let logs_dir = rustfs_dir.join("logs");
// Test path construction
assert!(rustfs_dir.to_string_lossy().contains("rustfs"));
assert!(logs_dir.to_string_lossy().contains("logs"));
}
#[test]
fn test_rolling_file_appender_builder() {
ensure_logger_init();
// Test that we can create a RollingFileAppender builder
let builder = RollingFileAppender::builder()
.rotation(Rotation::DAILY)
.filename_prefix("test-rustfs-cli")
.filename_suffix("log");
// We can't actually build it without creating directories,
// but we can verify the builder pattern works
let debug_str = format!("{builder:?}");
// The actual debug format might be different, so just check it's not empty
assert!(!debug_str.is_empty());
// Check that it contains some expected parts
assert!(debug_str.contains("Builder") || debug_str.contains("builder") || debug_str.contains("RollingFileAppender"));
}
#[test]
fn test_rotation_types() {
ensure_logger_init();
// Test different rotation types
let daily = Rotation::DAILY;
let hourly = Rotation::HOURLY;
let minutely = Rotation::MINUTELY;
let never = Rotation::NEVER;
// Test that rotation types can be created and formatted
assert!(!format!("{daily:?}").is_empty());
assert!(!format!("{hourly:?}").is_empty());
assert!(!format!("{minutely:?}").is_empty());
assert!(!format!("{never:?}").is_empty());
}
#[test]
fn test_fmt_layer_configuration() {
ensure_logger_init();
// Test that we can create fmt layers with different configurations
// We can't actually test the layers directly due to type complexity,
// but we can test that the configuration values are correct
// Test console layer settings
let console_ansi = true;
let console_line_number = true;
assert!(console_ansi);
assert!(console_line_number);
// Test file layer settings
let file_ansi = false;
let file_thread_names = true;
let file_target = true;
let file_thread_ids = true;
let file_level = true;
let file_line_number = true;
assert!(!file_ansi);
assert!(file_thread_names);
assert!(file_target);
assert!(file_thread_ids);
assert!(file_level);
assert!(file_line_number);
}
#[test]
fn test_env_filter_creation() {
ensure_logger_init();
// Test that EnvFilter can be created with different levels
let info_filter = tracing_subscriber::EnvFilter::new("info");
let debug_filter = tracing_subscriber::EnvFilter::new("debug");
let warn_filter = tracing_subscriber::EnvFilter::new("warn");
let error_filter = tracing_subscriber::EnvFilter::new("error");
// Test that filters can be created
assert!(!format!("{info_filter:?}").is_empty());
assert!(!format!("{debug_filter:?}").is_empty());
assert!(!format!("{warn_filter:?}").is_empty());
assert!(!format!("{error_filter:?}").is_empty());
}
#[test]
fn test_path_construction() {
ensure_logger_init();
// Test path construction logic used in init_logger
if let Some(home_dir) = dirs::home_dir() {
let rustfs_dir = home_dir.join("rustfs");
let logs_dir = rustfs_dir.join("logs");
// Test that paths are constructed correctly
assert!(rustfs_dir.ends_with("rustfs"));
assert!(logs_dir.ends_with("logs"));
assert!(logs_dir.parent().unwrap().ends_with("rustfs"));
// Test path string representation
let rustfs_str = rustfs_dir.to_string_lossy();
let logs_str = logs_dir.to_string_lossy();
assert!(rustfs_str.contains("rustfs"));
assert!(logs_str.contains("rustfs"));
assert!(logs_str.contains("logs"));
}
}
#[test]
fn test_filename_patterns() {
ensure_logger_init();
// Test the filename patterns used in the logger
let prefix = "rustfs-cli";
let suffix = "log";
assert_eq!(prefix, "rustfs-cli");
assert_eq!(suffix, "log");
// Test that these would create valid filenames
let sample_filename = format!("{prefix}.2024-01-01.{suffix}");
assert_eq!(sample_filename, "rustfs-cli.2024-01-01.log");
}
#[test]
fn test_worker_guard_type() {
ensure_logger_init();
// Test that WorkerGuard type exists and can be referenced
// We can't actually create one without the full setup, but we can test the type
let guard_size = std::mem::size_of::<WorkerGuard>();
assert!(guard_size > 0, "WorkerGuard should have non-zero size");
}
#[test]
fn test_logger_configuration_constants() {
ensure_logger_init();
// Test the configuration values used in the logger
let default_log_level = "info";
let filename_prefix = "rustfs-cli";
let filename_suffix = "log";
let rotation = Rotation::DAILY;
assert_eq!(default_log_level, "info");
assert_eq!(filename_prefix, "rustfs-cli");
assert_eq!(filename_suffix, "log");
assert!(matches!(rotation, Rotation::DAILY));
}
#[test]
fn test_directory_names() {
ensure_logger_init();
// Test the directory names used in the logger setup
let rustfs_dir_name = "rustfs";
let logs_dir_name = "logs";
assert_eq!(rustfs_dir_name, "rustfs");
assert_eq!(logs_dir_name, "logs");
// Test path joining
let combined = format!("{rustfs_dir_name}/{logs_dir_name}");
assert_eq!(combined, "rustfs/logs");
}
#[test]
fn test_layer_settings() {
ensure_logger_init();
// Test the boolean settings used in layer configuration
let console_ansi = true;
let console_line_number = true;
let file_ansi = false;
let file_thread_names = true;
let file_target = true;
let file_thread_ids = true;
let file_level = true;
let file_line_number = true;
// Verify the settings
assert!(console_ansi);
assert!(console_line_number);
assert!(!file_ansi);
assert!(file_thread_names);
assert!(file_target);
assert!(file_thread_ids);
assert!(file_level);
assert!(file_line_number);
}
// Note: The actual init_logger() function is not tested here because:
// 1. It initializes a global tracing subscriber which can only be done once
// 2. It requires file system access to create directories
// 3. It has side effects that would interfere with other tests
// 4. It returns a WorkerGuard that needs to be kept alive
//
// This function should be tested in integration tests where:
// - File system access can be properly controlled
// - The global state can be managed
// - The actual logging behavior can be verified
// - The WorkerGuard lifecycle can be properly managed
}
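// Editorial usage sketch (not part of the original file): the WorkerGuard
// returned by init_logger() must stay alive for the whole process, otherwise
// the non-blocking file writer flushes and closes early and log lines are lost.
#[allow(dead_code)]
fn init_logger_usage_sketch() -> WorkerGuard {
let guard = init_logger();
debug!("logging to stdout and to daily-rotated files under ~/rustfs/logs");
guard // the caller (typically main) stores this until shutdown
}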

View File

@@ -1,21 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod config;
mod helper;
mod logger;
pub use config::RustFSConfig;
pub use helper::ServiceManager;
pub use logger::init_logger;

View File

@@ -1,38 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::route::Route;
use dioxus::logger::tracing::info;
use dioxus::prelude::*;
const FAVICON: Asset = asset!("/assets/favicon.ico");
const TAILWIND_CSS: Asset = asset!("/assets/tailwind.css");
/// The main application component
/// This is the root component of the application
/// It contains the global resources and the router
/// for the application
#[component]
pub fn App() -> Element {
// Build cool things ✌️
use document::{Link, Title};
info!("App rendered");
rsx! {
// Global app resources
Link { rel: "icon", href: FAVICON }
Link { rel: "stylesheet", href: TAILWIND_CSS }
Title { "RustFS" }
Router::<Route> {}
}
}

View File

@@ -1,23 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::components::Home;
use dioxus::prelude::*;
#[component]
pub fn HomeViews() -> Element {
rsx! {
Home {}
}
}

View File

@@ -1,21 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod app;
mod home;
mod setting;
pub use app::App;
pub use home::HomeViews;
pub use setting::SettingViews;

View File

@@ -1,24 +0,0 @@
/**
* Copyright 2024 RustFS Team
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
module.exports = {
mode: "all",
content: ["./src/**/*.{rs,html,css}", "./dist/**/*.html"],
theme: {
extend: {},
},
plugins: [],
};

View File

@@ -1258,7 +1258,7 @@ impl Scanner {
objects_with_issues += 1;
warn!("Object {} has no versions", entry.name);
// Object metadata corrupted submit metadata heal task
// Object metadata is corrupted; submit a metadata heal task
let enable_healing = self.config.read().await.enable_healing;
if enable_healing {
if let Some(heal_manager) = &self.heal_manager {
@@ -1296,7 +1296,7 @@ impl Scanner {
objects_with_issues += 1;
warn!("Failed to parse metadata for object {}", entry.name);
// Object metadata parse failure submit metadata heal task
// Failed to parse object metadata; submit a metadata heal task
let enable_healing = self.config.read().await.enable_healing;
if enable_healing {
if let Some(heal_manager) = &self.heal_manager {

View File

@@ -1,3 +1,17 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use rustfs_ahm::heal::{
manager::{HealConfig, HealManager},
storage::{ECStoreHealStorage, HealStorageAPI},

View File

@@ -1,3 +1,17 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use rustfs_ahm::scanner::{Scanner, data_scanner::ScannerConfig};
use rustfs_ecstore::{
bucket::metadata::BUCKET_LIFECYCLE_CONFIG,

View File

@@ -0,0 +1,44 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[package]
name = "rustfs-audit-logger"
edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Audit logging system for RustFS, providing detailed logging of file operations and system events."
documentation = "https://docs.rs/audit-logger/latest/audit_logger/"
keywords = ["audit", "logging", "file-operations", "system-events", "RustFS"]
categories = ["web-programming", "development-tools::profiling", "asynchronous", "api-bindings", "development-tools::debugging"]
[dependencies]
rustfs-targets = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true }
reqwest = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true, features = ["std", "attributes"] }
tracing-core = { workspace = true }
tokio = { workspace = true, features = ["sync", "fs", "rt-multi-thread", "rt", "time", "macros"] }
url = { workspace = true }
uuid = { workspace = true }
thiserror = { workspace = true }
figment = { version = "0.10", features = ["json", "env"] }
[lints]
workspace = true

View File

@@ -0,0 +1,34 @@
{
"console": {
"enabled": true
},
"logger_webhook": {
"default": {
"enabled": true,
"endpoint": "http://localhost:3000/logs",
"auth_token": "secret-token-for-logs",
"batch_size": 5,
"queue_size": 1000,
"max_retry": 3,
"retry_interval": "2s"
}
},
"audit_webhook": {
"splunk": {
"enabled": true,
"endpoint": "http://localhost:3000/audit",
"auth_token": "secret-token-for-audit",
"batch_size": 10
}
},
"audit_kafka": {
"default": {
"enabled": false,
"brokers": [
"kafka1:9092",
"kafka2:9092"
],
"topic": "minio-audit-events"
}
}
}

View File

@@ -0,0 +1,17 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
fn main() {
println!("Audit Logger Example");
}

View File

@@ -0,0 +1,90 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::entry::ObjectVersion;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Args - defines the arguments for API operations.
///
/// # Example
/// ```
/// use rustfs_audit_logger::Args;
/// use std::collections::HashMap;
///
/// let args = Args::new()
/// .set_bucket(Some("my-bucket".to_string()))
/// .set_object(Some("my-object".to_string()))
/// .set_version_id(Some("123".to_string()))
/// .set_metadata(Some(HashMap::new()));
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, Default, Eq, PartialEq)]
pub struct Args {
#[serde(rename = "bucket", skip_serializing_if = "Option::is_none")]
pub bucket: Option<String>,
#[serde(rename = "object", skip_serializing_if = "Option::is_none")]
pub object: Option<String>,
#[serde(rename = "versionId", skip_serializing_if = "Option::is_none")]
pub version_id: Option<String>,
#[serde(rename = "objects", skip_serializing_if = "Option::is_none")]
pub objects: Option<Vec<ObjectVersion>>,
#[serde(rename = "metadata", skip_serializing_if = "Option::is_none")]
pub metadata: Option<HashMap<String, String>>,
}
impl Args {
/// Create a new Args object
pub fn new() -> Self {
Args {
bucket: None,
object: None,
version_id: None,
objects: None,
metadata: None,
}
}
/// Set the bucket
pub fn set_bucket(mut self, bucket: Option<String>) -> Self {
self.bucket = bucket;
self
}
/// Set the object
pub fn set_object(mut self, object: Option<String>) -> Self {
self.object = object;
self
}
/// Set the version ID
pub fn set_version_id(mut self, version_id: Option<String>) -> Self {
self.version_id = version_id;
self
}
/// Set the objects
pub fn set_objects(mut self, objects: Option<Vec<ObjectVersion>>) -> Self {
self.objects = objects;
self
}
/// Set the metadata
pub fn set_metadata(mut self, metadata: Option<HashMap<String, String>>) -> Self {
self.metadata = metadata;
self
}
}

View File

@@ -0,0 +1,469 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::{BaseLogEntry, LogRecord, ObjectVersion};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
/// API details structure
/// ApiDetails is used to define the details of an API operation
///
/// The `ApiDetails` structure contains the following fields:
/// - `name` - the name of the API operation
/// - `bucket` - the bucket name
/// - `object` - the object name
/// - `objects` - the list of objects
/// - `status` - the status of the API operation
/// - `status_code` - the status code of the API operation
/// - `input_bytes` - the input bytes
/// - `output_bytes` - the output bytes
/// - `header_bytes` - the header bytes
/// - `time_to_first_byte` - the time to first byte
/// - `time_to_first_byte_in_ns` - the time to first byte in nanoseconds
/// - `time_to_response` - the time to response
/// - `time_to_response_in_ns` - the time to response in nanoseconds
///
/// The `ApiDetails` structure contains the following methods:
/// - `new` - create a new `ApiDetails` with default values
/// - `set_name` - set the name
/// - `set_bucket` - set the bucket
/// - `set_object` - set the object
/// - `set_objects` - set the objects
/// - `set_status` - set the status
/// - `set_status_code` - set the status code
/// - `set_input_bytes` - set the input bytes
/// - `set_output_bytes` - set the output bytes
/// - `set_header_bytes` - set the header bytes
/// - `set_time_to_first_byte` - set the time to first byte
/// - `set_time_to_first_byte_in_ns` - set the time to first byte in nanoseconds
/// - `set_time_to_response` - set the time to response
/// - `set_time_to_response_in_ns` - set the time to response in nanoseconds
///
/// # Example
/// ```
/// use rustfs_audit_logger::ApiDetails;
/// use rustfs_audit_logger::ObjectVersion;
///
/// let api = ApiDetails::new()
/// .set_name(Some("GET".to_string()))
/// .set_bucket(Some("my-bucket".to_string()))
/// .set_object(Some("my-object".to_string()))
/// .set_objects(vec![ObjectVersion::new_with_object_name("my-object".to_string())])
/// .set_status(Some("OK".to_string()))
/// .set_status_code(Some(200))
/// .set_input_bytes(100)
/// .set_output_bytes(200)
/// .set_header_bytes(Some(50))
/// .set_time_to_first_byte(Some("100ms".to_string()))
/// .set_time_to_first_byte_in_ns(Some("100000000ns".to_string()))
/// .set_time_to_response(Some("200ms".to_string()))
/// .set_time_to_response_in_ns(Some("200000000ns".to_string()));
/// ```
#[derive(Debug, Serialize, Deserialize, Clone, Default, PartialEq, Eq)]
pub struct ApiDetails {
#[serde(rename = "name", skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
#[serde(rename = "bucket", skip_serializing_if = "Option::is_none")]
pub bucket: Option<String>,
#[serde(rename = "object", skip_serializing_if = "Option::is_none")]
pub object: Option<String>,
#[serde(rename = "objects", skip_serializing_if = "Vec::is_empty", default)]
pub objects: Vec<ObjectVersion>,
#[serde(rename = "status", skip_serializing_if = "Option::is_none")]
pub status: Option<String>,
#[serde(rename = "statusCode", skip_serializing_if = "Option::is_none")]
pub status_code: Option<i32>,
#[serde(rename = "rx")]
pub input_bytes: i64,
#[serde(rename = "tx")]
pub output_bytes: i64,
#[serde(rename = "txHeaders", skip_serializing_if = "Option::is_none")]
pub header_bytes: Option<i64>,
#[serde(rename = "timeToFirstByte", skip_serializing_if = "Option::is_none")]
pub time_to_first_byte: Option<String>,
#[serde(rename = "timeToFirstByteInNS", skip_serializing_if = "Option::is_none")]
pub time_to_first_byte_in_ns: Option<String>,
#[serde(rename = "timeToResponse", skip_serializing_if = "Option::is_none")]
pub time_to_response: Option<String>,
#[serde(rename = "timeToResponseInNS", skip_serializing_if = "Option::is_none")]
pub time_to_response_in_ns: Option<String>,
}
impl ApiDetails {
/// Create a new `ApiDetails` with default values
pub fn new() -> Self {
ApiDetails {
name: None,
bucket: None,
object: None,
objects: Vec::new(),
status: None,
status_code: None,
input_bytes: 0,
output_bytes: 0,
header_bytes: None,
time_to_first_byte: None,
time_to_first_byte_in_ns: None,
time_to_response: None,
time_to_response_in_ns: None,
}
}
/// Set the name
pub fn set_name(mut self, name: Option<String>) -> Self {
self.name = name;
self
}
/// Set the bucket
pub fn set_bucket(mut self, bucket: Option<String>) -> Self {
self.bucket = bucket;
self
}
/// Set the object
pub fn set_object(mut self, object: Option<String>) -> Self {
self.object = object;
self
}
/// Set the objects
pub fn set_objects(mut self, objects: Vec<ObjectVersion>) -> Self {
self.objects = objects;
self
}
/// Set the status
pub fn set_status(mut self, status: Option<String>) -> Self {
self.status = status;
self
}
/// Set the status code
pub fn set_status_code(mut self, status_code: Option<i32>) -> Self {
self.status_code = status_code;
self
}
/// Set the input bytes
pub fn set_input_bytes(mut self, input_bytes: i64) -> Self {
self.input_bytes = input_bytes;
self
}
/// Set the output bytes
pub fn set_output_bytes(mut self, output_bytes: i64) -> Self {
self.output_bytes = output_bytes;
self
}
/// Set the header bytes
pub fn set_header_bytes(mut self, header_bytes: Option<i64>) -> Self {
self.header_bytes = header_bytes;
self
}
/// Set the time to first byte
pub fn set_time_to_first_byte(mut self, time_to_first_byte: Option<String>) -> Self {
self.time_to_first_byte = time_to_first_byte;
self
}
/// Set the time to first byte in nanoseconds
pub fn set_time_to_first_byte_in_ns(mut self, time_to_first_byte_in_ns: Option<String>) -> Self {
self.time_to_first_byte_in_ns = time_to_first_byte_in_ns;
self
}
/// Set the time to response
pub fn set_time_to_response(mut self, time_to_response: Option<String>) -> Self {
self.time_to_response = time_to_response;
self
}
/// Set the time to response in nanoseconds
pub fn set_time_to_response_in_ns(mut self, time_to_response_in_ns: Option<String>) -> Self {
self.time_to_response_in_ns = time_to_response_in_ns;
self
}
}
/// AuditLogEntry - audit entry logs
/// AuditLogEntry is used to define the structure of an audit log entry
///
/// The `AuditLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `version` - the version of the audit log entry
/// - `deployment_id` - the deployment ID
/// - `event` - the event
/// - `entry_type` - the type of audit message
/// - `api` - the API details
/// - `remote_host` - the remote host
/// - `user_agent` - the user agent
/// - `req_path` - the request path
/// - `req_host` - the request host
/// - `req_claims` - the request claims
/// - `req_query` - the request query
/// - `req_header` - the request header
/// - `resp_header` - the response header
/// - `access_key` - the access key
/// - `parent_user` - the parent user
/// - `error` - the error
///
/// The `AuditLogEntry` structure contains the following methods:
/// - `new` - create a new `AuditLogEntry` with default values
/// - `new_with_values` - create a new `AuditLogEntry` with version, time, event and api details
/// - `with_base` - set the base log entry
/// - `set_version` - set the version
/// - `set_deployment_id` - set the deployment ID
/// - `set_event` - set the event
/// - `set_entry_type` - set the entry type
/// - `set_api` - set the API details
/// - `set_remote_host` - set the remote host
/// - `set_user_agent` - set the user agent
/// - `set_req_path` - set the request path
/// - `set_req_host` - set the request host
/// - `set_req_claims` - set the request claims
/// - `set_req_query` - set the request query
/// - `set_req_header` - set the request header
/// - `set_resp_header` - set the response header
/// - `set_access_key` - set the access key
/// - `set_parent_user` - set the parent user
/// - `set_error` - set the error
///
/// # Example
/// ```
/// use rustfs_audit_logger::AuditLogEntry;
/// use rustfs_audit_logger::ApiDetails;
/// use std::collections::HashMap;
///
/// let entry = AuditLogEntry::new()
/// .set_version("1.0".to_string())
/// .set_deployment_id(Some("123".to_string()))
/// .set_event("event".to_string())
/// .set_entry_type(Some("type".to_string()))
/// .set_api(ApiDetails::new())
/// .set_remote_host(Some("remote-host".to_string()))
/// .set_user_agent(Some("user-agent".to_string()))
/// .set_req_path(Some("req-path".to_string()))
/// .set_req_host(Some("req-host".to_string()))
/// .set_req_claims(Some(HashMap::new()))
/// .set_req_query(Some(HashMap::new()))
/// .set_req_header(Some(HashMap::new()))
/// .set_resp_header(Some(HashMap::new()))
/// .set_access_key(Some("access-key".to_string()))
/// .set_parent_user(Some("parent-user".to_string()))
/// .set_error(Some("error".to_string()));
/// ```
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct AuditLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub version: String,
#[serde(rename = "deploymentid", skip_serializing_if = "Option::is_none")]
pub deployment_id: Option<String>,
pub event: String,
// Class of audit message - S3, admin ops, bucket management
#[serde(rename = "type", skip_serializing_if = "Option::is_none")]
pub entry_type: Option<String>,
pub api: ApiDetails,
#[serde(rename = "remotehost", skip_serializing_if = "Option::is_none")]
pub remote_host: Option<String>,
#[serde(rename = "userAgent", skip_serializing_if = "Option::is_none")]
pub user_agent: Option<String>,
#[serde(rename = "requestPath", skip_serializing_if = "Option::is_none")]
pub req_path: Option<String>,
#[serde(rename = "requestHost", skip_serializing_if = "Option::is_none")]
pub req_host: Option<String>,
#[serde(rename = "requestClaims", skip_serializing_if = "Option::is_none")]
pub req_claims: Option<HashMap<String, Value>>,
#[serde(rename = "requestQuery", skip_serializing_if = "Option::is_none")]
pub req_query: Option<HashMap<String, String>>,
#[serde(rename = "requestHeader", skip_serializing_if = "Option::is_none")]
pub req_header: Option<HashMap<String, String>>,
#[serde(rename = "responseHeader", skip_serializing_if = "Option::is_none")]
pub resp_header: Option<HashMap<String, String>>,
#[serde(rename = "accessKey", skip_serializing_if = "Option::is_none")]
pub access_key: Option<String>,
#[serde(rename = "parentUser", skip_serializing_if = "Option::is_none")]
pub parent_user: Option<String>,
#[serde(rename = "error", skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
impl AuditLogEntry {
/// Create a new `AuditLogEntry` with default values
pub fn new() -> Self {
AuditLogEntry {
base: BaseLogEntry::new(),
version: String::new(),
deployment_id: None,
event: String::new(),
entry_type: None,
api: ApiDetails::new(),
remote_host: None,
user_agent: None,
req_path: None,
req_host: None,
req_claims: None,
req_query: None,
req_header: None,
resp_header: None,
access_key: None,
parent_user: None,
error: None,
}
}
/// Create a new `AuditLogEntry` with version, time, event and api details
pub fn new_with_values(version: String, time: DateTime<Utc>, event: String, api: ApiDetails) -> Self {
let mut base = BaseLogEntry::new();
base.timestamp = time;
AuditLogEntry {
base,
version,
deployment_id: None,
event,
entry_type: None,
api,
remote_host: None,
user_agent: None,
req_path: None,
req_host: None,
req_claims: None,
req_query: None,
req_header: None,
resp_header: None,
access_key: None,
parent_user: None,
error: None,
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the version
pub fn set_version(mut self, version: String) -> Self {
self.version = version;
self
}
/// Set the deployment ID
pub fn set_deployment_id(mut self, deployment_id: Option<String>) -> Self {
self.deployment_id = deployment_id;
self
}
/// Set the event
pub fn set_event(mut self, event: String) -> Self {
self.event = event;
self
}
/// Set the entry type
pub fn set_entry_type(mut self, entry_type: Option<String>) -> Self {
self.entry_type = entry_type;
self
}
/// Set the API details
pub fn set_api(mut self, api: ApiDetails) -> Self {
self.api = api;
self
}
/// Set the remote host
pub fn set_remote_host(mut self, remote_host: Option<String>) -> Self {
self.remote_host = remote_host;
self
}
/// Set the user agent
pub fn set_user_agent(mut self, user_agent: Option<String>) -> Self {
self.user_agent = user_agent;
self
}
/// Set the request path
pub fn set_req_path(mut self, req_path: Option<String>) -> Self {
self.req_path = req_path;
self
}
/// Set the request host
pub fn set_req_host(mut self, req_host: Option<String>) -> Self {
self.req_host = req_host;
self
}
/// Set the request claims
pub fn set_req_claims(mut self, req_claims: Option<HashMap<String, Value>>) -> Self {
self.req_claims = req_claims;
self
}
/// Set the request query
pub fn set_req_query(mut self, req_query: Option<HashMap<String, String>>) -> Self {
self.req_query = req_query;
self
}
/// Set the request header
pub fn set_req_header(mut self, req_header: Option<HashMap<String, String>>) -> Self {
self.req_header = req_header;
self
}
/// Set the response header
pub fn set_resp_header(mut self, resp_header: Option<HashMap<String, String>>) -> Self {
self.resp_header = resp_header;
self
}
/// Set the access key
pub fn set_access_key(mut self, access_key: Option<String>) -> Self {
self.access_key = access_key;
self
}
/// Set the parent user
pub fn set_parent_user(mut self, parent_user: Option<String>) -> Self {
self.parent_user = parent_user;
self
}
/// Set the error
pub fn set_error(mut self, error: Option<String>) -> Self {
self.error = error;
self
}
}
impl LogRecord for AuditLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}
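// Editorial sketch (not part of the original file): the serde renames above
// mean input_bytes/output_bytes appear as "rx"/"tx" on the wire, and None
// fields are omitted entirely. A quick check:
#[cfg(test)]
mod serialization_sketch {
use super::*;
#[test]
fn audit_entry_serializes_rx_tx() {
let entry = AuditLogEntry::new()
.set_version("1".to_string())
.set_event("s3:GetObject".to_string())
.set_api(ApiDetails::new().set_input_bytes(100).set_output_bytes(200));
let json = entry.to_json();
assert!(json.contains("\"rx\":100"));
assert!(json.contains("\"tx\":200"));
assert!(!json.contains("deploymentid")); // None fields are skipped
}
}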

View File

@@ -0,0 +1,108 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
/// Base log entry structure shared by all log types.
/// It is serialized to JSON when entries are sent to the log sinks,
/// deserialized when entries are read back, and used to store and
/// query log entries in the database.
///
/// The `BaseLogEntry` structure contains the following fields:
/// - `timestamp` - the timestamp of the log entry
/// - `request_id` - the request ID of the log entry
/// - `message` - the message of the log entry
/// - `tags` - the tags of the log entry
///
/// The `BaseLogEntry` structure contains the following methods:
/// - `new` - create a new `BaseLogEntry` with default values
/// - `message` - set the message
/// - `request_id` - set the request ID
/// - `tags` - set the tags
/// - `timestamp` - set the timestamp
///
/// # Example
/// ```
/// use rustfs_audit_logger::BaseLogEntry;
/// use chrono::{DateTime, Utc};
/// use std::collections::HashMap;
///
/// let timestamp = Utc::now();
/// let request = Some("req-123".to_string());
/// let message = Some("This is a log message".to_string());
/// let tags = Some(HashMap::new());
///
/// let entry = BaseLogEntry::new()
/// .timestamp(timestamp)
/// .request_id(request)
/// .message(message)
/// .tags(tags);
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq, Default)]
pub struct BaseLogEntry {
#[serde(rename = "time")]
pub timestamp: DateTime<Utc>,
#[serde(rename = "requestID", skip_serializing_if = "Option::is_none")]
pub request_id: Option<String>,
#[serde(rename = "message", skip_serializing_if = "Option::is_none")]
pub message: Option<String>,
#[serde(rename = "tags", skip_serializing_if = "Option::is_none")]
pub tags: Option<HashMap<String, Value>>,
}
impl BaseLogEntry {
/// Create a new BaseLogEntry with default values
pub fn new() -> Self {
BaseLogEntry {
timestamp: Utc::now(),
request_id: None,
message: None,
tags: None,
}
}
/// Set the message
pub fn message(mut self, message: Option<String>) -> Self {
self.message = message;
self
}
/// Set the request ID
pub fn request_id(mut self, request_id: Option<String>) -> Self {
self.request_id = request_id;
self
}
/// Set the tags
pub fn tags(mut self, tags: Option<HashMap<String, Value>>) -> Self {
self.tags = tags;
self
}
/// Set the timestamp
pub fn timestamp(mut self, timestamp: DateTime<Utc>) -> Self {
self.timestamp = timestamp;
self
}
}

View File

@@ -0,0 +1,159 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
pub(crate) mod args;
pub(crate) mod audit;
pub(crate) mod base;
pub(crate) mod unified;
use serde::de::Error;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use tracing_core::Level;
/// ObjectVersion is used across multiple modules
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq)]
pub struct ObjectVersion {
#[serde(rename = "name")]
pub object_name: String,
#[serde(rename = "versionId", skip_serializing_if = "Option::is_none")]
pub version_id: Option<String>,
}
impl ObjectVersion {
/// Create a new ObjectVersion object
pub fn new() -> Self {
ObjectVersion {
object_name: String::new(),
version_id: None,
}
}
/// Create a new ObjectVersion with object name
pub fn new_with_object_name(object_name: String) -> Self {
ObjectVersion {
object_name,
version_id: None,
}
}
/// Set the object name
pub fn set_object_name(mut self, object_name: String) -> Self {
self.object_name = object_name;
self
}
/// Set the version ID
pub fn set_version_id(mut self, version_id: Option<String>) -> Self {
self.version_id = version_id;
self
}
}
impl Default for ObjectVersion {
fn default() -> Self {
Self::new()
}
}
/// Log kind/level enum
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
pub enum LogKind {
#[serde(rename = "INFO")]
#[default]
Info,
#[serde(rename = "WARNING")]
Warning,
#[serde(rename = "ERROR")]
Error,
#[serde(rename = "FATAL")]
Fatal,
}
/// Trait for types that can be serialized to JSON and have a timestamp.
/// Implementors convert a log entry to JSON and expose its timestamp;
/// it is implemented by `ServerLogEntry`, `AuditLogEntry`, `ConsoleLogEntry`
/// and `UnifiedLogEntry`.
///
/// # Example
/// ```
/// use rustfs_audit_logger::LogRecord;
/// use chrono::{DateTime, Utc};
/// use rustfs_audit_logger::ServerLogEntry;
/// use tracing_core::Level;
///
/// let log_entry = ServerLogEntry::new(Level::INFO, "api_handler".to_string());
/// let json = log_entry.to_json();
/// let timestamp = log_entry.get_timestamp();
/// ```
pub trait LogRecord {
fn to_json(&self) -> String;
fn get_timestamp(&self) -> chrono::DateTime<chrono::Utc>;
}
/// Wrapper for `tracing_core::Level` to implement `Serialize` and `Deserialize`
/// for `ServerLogEntry`
/// This is necessary because `tracing_core::Level` does not implement `Serialize`
/// and `Deserialize`
/// This is a workaround to allow `ServerLogEntry` to be serialized and deserialized
/// using `serde`
///
/// # Example
/// ```
/// use rustfs_audit_logger::SerializableLevel;
/// use tracing_core::Level;
///
/// let level = Level::INFO;
/// let serializable_level = SerializableLevel::from(level);
/// ```
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SerializableLevel(pub Level);
impl From<Level> for SerializableLevel {
fn from(level: Level) -> Self {
SerializableLevel(level)
}
}
impl From<SerializableLevel> for Level {
fn from(serializable_level: SerializableLevel) -> Self {
serializable_level.0
}
}
impl Serialize for SerializableLevel {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(self.0.as_str())
}
}
impl<'de> Deserialize<'de> for SerializableLevel {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
match s.as_str() {
"TRACE" => Ok(SerializableLevel(Level::TRACE)),
"DEBUG" => Ok(SerializableLevel(Level::DEBUG)),
"INFO" => Ok(SerializableLevel(Level::INFO)),
"WARN" => Ok(SerializableLevel(Level::WARN)),
"ERROR" => Ok(SerializableLevel(Level::ERROR)),
_ => Err(D::Error::custom("unknown log level")),
}
}
}
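// Editorial sketch (not part of the original file): a round-trip showing why
// the wrapper exists; tracing_core::Level itself has no serde impls.
#[cfg(test)]
mod serializable_level_sketch {
use super::*;
#[test]
fn level_round_trips_through_json() {
let level = SerializableLevel::from(Level::WARN);
let json = serde_json::to_string(&level).expect("serialize");
assert_eq!(json, "\"WARN\"");
let back: SerializableLevel = serde_json::from_str(&json).expect("deserialize");
assert_eq!(back, level);
}
}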

View File

@@ -0,0 +1,266 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::{AuditLogEntry, BaseLogEntry, LogKind, LogRecord, SerializableLevel};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use tracing_core::Level;
/// Server log entry with structured fields
/// ServerLogEntry is used to log structured log entries from the server
///
/// The `ServerLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `level` - the log level
/// - `source` - the source of the log entry
/// - `user_id` - the user ID
/// - `fields` - the structured fields of the log entry
///
/// The `ServerLogEntry` structure contains the following methods:
/// - `new` - create a new `ServerLogEntry` with specified level and source
/// - `with_base` - set the base log entry
/// - `user_id` - set the user ID
/// - `fields` - set the fields
/// - `add_field` - add a field
///
/// # Example
/// ```
/// use rustfs_audit_logger::ServerLogEntry;
/// use tracing_core::Level;
///
/// let entry = ServerLogEntry::new(Level::INFO, "test_module".to_string())
/// .user_id(Some("user-456".to_string()))
/// .add_field("operation".to_string(), "login".to_string());
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct ServerLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub level: SerializableLevel,
pub source: String,
#[serde(rename = "userId", skip_serializing_if = "Option::is_none")]
pub user_id: Option<String>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub fields: Vec<(String, String)>,
}
impl ServerLogEntry {
/// Create a new ServerLogEntry with specified level and source
pub fn new(level: Level, source: String) -> Self {
ServerLogEntry {
base: BaseLogEntry::new(),
level: SerializableLevel(level),
source,
user_id: None,
fields: Vec::new(),
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the user ID
pub fn user_id(mut self, user_id: Option<String>) -> Self {
self.user_id = user_id;
self
}
/// Set fields
pub fn fields(mut self, fields: Vec<(String, String)>) -> Self {
self.fields = fields;
self
}
/// Add a field
pub fn add_field(mut self, key: String, value: String) -> Self {
self.fields.push((key, value));
self
}
}
impl LogRecord for ServerLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}
/// Console log entry structure
/// ConsoleLogEntry is used to log console log entries
/// The `ConsoleLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `level` - the log level
/// - `console_msg` - the console message
/// - `node_name` - the node name
/// - `err` - the error message
///
/// The `ConsoleLogEntry` structure contains the following methods:
/// - `new` - create a new `ConsoleLogEntry`
/// - `new_with_console_msg` - create a new `ConsoleLogEntry` with console message and node name
/// - `with_base` - set the base log entry
/// - `set_level` - set the log level
/// - `set_node_name` - set the node name
/// - `set_console_msg` - set the console message
/// - `set_err` - set the error message
///
/// # Example
/// ```
/// use rustfs_audit_logger::ConsoleLogEntry;
///
/// let entry = ConsoleLogEntry::new_with_console_msg("Test message".to_string(), "node-123".to_string());
/// ```
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConsoleLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub level: LogKind,
pub console_msg: String,
pub node_name: String,
#[serde(skip)]
pub err: Option<String>,
}
impl ConsoleLogEntry {
/// Create a new ConsoleLogEntry
pub fn new() -> Self {
ConsoleLogEntry {
base: BaseLogEntry::new(),
level: LogKind::Info,
console_msg: String::new(),
node_name: String::new(),
err: None,
}
}
/// Create a new ConsoleLogEntry with console message and node name
pub fn new_with_console_msg(console_msg: String, node_name: String) -> Self {
ConsoleLogEntry {
base: BaseLogEntry::new(),
level: LogKind::Info,
console_msg,
node_name,
err: None,
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the log level
pub fn set_level(mut self, level: LogKind) -> Self {
self.level = level;
self
}
/// Set the node name
pub fn set_node_name(mut self, node_name: String) -> Self {
self.node_name = node_name;
self
}
/// Set the console message
pub fn set_console_msg(mut self, console_msg: String) -> Self {
self.console_msg = console_msg;
self
}
/// Set the error message
pub fn set_err(mut self, err: Option<String>) -> Self {
self.err = err;
self
}
}
impl Default for ConsoleLogEntry {
fn default() -> Self {
Self::new()
}
}
impl LogRecord for ConsoleLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}
/// Unified log entry type
/// UnifiedLogEntry is used to log different types of log entries
///
/// The `UnifiedLogEntry` enum contains the following variants:
/// - `Server` - a server log entry
/// - `Audit` - an audit log entry
/// - `Console` - a console log entry
///
/// The `UnifiedLogEntry` enum contains the following methods:
/// - `to_json` - convert the log entry to JSON
/// - `get_timestamp` - get the timestamp of the log entry
///
/// # Example
/// ```
/// use rustfs_audit_logger::{UnifiedLogEntry, ServerLogEntry};
/// use tracing_core::Level;
///
/// let server_entry = ServerLogEntry::new(Level::INFO, "test_module".to_string());
/// let unified = UnifiedLogEntry::Server(server_entry);
/// ```
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum UnifiedLogEntry {
#[serde(rename = "server")]
Server(ServerLogEntry),
#[serde(rename = "audit")]
Audit(Box<AuditLogEntry>),
#[serde(rename = "console")]
Console(ConsoleLogEntry),
}
impl LogRecord for UnifiedLogEntry {
fn to_json(&self) -> String {
match self {
UnifiedLogEntry::Server(entry) => entry.to_json(),
UnifiedLogEntry::Audit(entry) => entry.to_json(),
UnifiedLogEntry::Console(entry) => entry.to_json(),
}
}
fn get_timestamp(&self) -> DateTime<Utc> {
match self {
UnifiedLogEntry::Server(entry) => entry.get_timestamp(),
UnifiedLogEntry::Audit(entry) => entry.get_timestamp(),
UnifiedLogEntry::Console(entry) => entry.get_timestamp(),
}
}
}
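// Editorial sketch (not part of the original file): with #[serde(tag = "type")]
// the variant name is embedded in the object itself, e.g. "type":"server".
#[cfg(test)]
mod unified_tag_sketch {
use super::*;
#[test]
fn unified_entry_carries_type_tag() {
let entry = UnifiedLogEntry::Server(ServerLogEntry::new(Level::INFO, "test".to_string()));
let json = serde_json::to_string(&entry).expect("serialize");
assert!(json.contains("\"type\":\"server\""));
}
}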

View File

@@ -0,0 +1,8 @@
mod entry;
mod logger;
pub use entry::args::Args;
pub use entry::audit::{ApiDetails, AuditLogEntry};
pub use entry::base::BaseLogEntry;
pub use entry::unified::{ConsoleLogEntry, ServerLogEntry, UnifiedLogEntry};
pub use entry::{LogKind, LogRecord, ObjectVersion, SerializableLevel};

View File

@@ -12,12 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::components::Setting;
use dioxus::prelude::*;
#![allow(dead_code)]
#[component]
pub fn SettingViews() -> Element {
rsx! {
Setting {}
}
}
// Default value functions
fn default_batch_size() -> usize {
10
}
fn default_queue_size() -> usize {
10000
}
fn default_max_retry() -> u32 {
5
}
fn default_retry_interval() -> std::time::Duration {
std::time::Duration::from_secs(3)
}
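// Editorial sketch (assumed, not shown in this hunk): default_* functions of
// this shape are typically referenced as serde field defaults, so a config
// section missing "batch_size" falls back to 10. The struct name and fields
// here are made up for illustration.
#[derive(serde::Deserialize)]
struct WebhookConfigSketch {
#[serde(default = "default_batch_size")]
batch_size: usize,
#[serde(default = "default_queue_size")]
queue_size: usize,
#[serde(default = "default_max_retry")]
max_retry: u32,
}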

View File

@@ -0,0 +1,13 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

View File

@@ -0,0 +1,108 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use chrono::{DateTime, Utc};
use serde::Serialize;
use std::collections::HashMap;
use uuid::Uuid;
/// A trait for a log entry that can be serialized and sent
pub trait Loggable: Serialize + Send + Sync + 'static {
fn to_json(&self) -> Result<String, serde_json::Error> {
serde_json::to_string(self)
}
}
/// Standard log entries
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct LogEntry {
pub deployment_id: String,
pub level: String,
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub trace: Option<Trace>,
pub time: DateTime<Utc>,
pub request_id: String,
}
impl Loggable for LogEntry {}
/// Audit log entry
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct AuditEntry {
pub version: String,
pub deployment_id: String,
pub time: DateTime<Utc>,
pub trigger: String,
pub api: ApiDetails,
pub remote_host: String,
pub request_id: String,
pub user_agent: String,
pub access_key: String,
#[serde(skip_serializing_if = "HashMap::is_empty")]
pub tags: HashMap<String, String>,
}
impl Loggable for AuditEntry {}
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Trace {
pub message: String,
pub source: Vec<String>,
#[serde(skip_serializing_if = "HashMap::is_empty")]
pub variables: HashMap<String, String>,
}
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ApiDetails {
pub name: String,
pub bucket: String,
pub object: String,
pub status: String,
pub status_code: u16,
pub time_to_first_byte: String,
pub time_to_response: String,
}
// Helper functions to create entries
impl AuditEntry {
pub fn new(api_name: &str, bucket: &str, object: &str) -> Self {
AuditEntry {
version: "1".to_string(),
deployment_id: "global-deployment-id".to_string(),
time: Utc::now(),
trigger: "incoming".to_string(),
api: ApiDetails {
name: api_name.to_string(),
bucket: bucket.to_string(),
object: object.to_string(),
status: "OK".to_string(),
status_code: 200,
time_to_first_byte: "10ms".to_string(),
time_to_response: "50ms".to_string(),
},
remote_host: "127.0.0.1".to_string(),
request_id: Uuid::new_v4().to_string(),
user_agent: "Rust-Client/1.0".to_string(),
access_key: "minioadmin".to_string(),
tags: HashMap::new(),
}
}
}

View File

@@ -0,0 +1,13 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

View File

@@ -0,0 +1,36 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
pub mod config;
pub mod dispatch;
pub mod entry;
pub mod factory;
use async_trait::async_trait;
use std::error::Error;
/// General Log Target Trait
#[async_trait]
pub trait Target: Send + Sync {
/// Send a single loggable entry
async fn send(&self, entry: Box<Self>) -> Result<(), Box<dyn Error + Send>>;
/// Returns the unique name of the target
fn name(&self) -> &str;
/// Close target gracefully, ensuring all buffered logs are processed
async fn shutdown(&self);
}
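A minimal sketch of implementing this trait, assuming the `async_trait` crate is in scope; `StdoutTarget` is illustrative. Note that `send` takes `Box<Self>`, so the entry parameter is the implementing type itself:

```rust
use async_trait::async_trait;
use std::error::Error;

struct StdoutTarget {
    name: String,
}

#[async_trait]
impl Target for StdoutTarget {
    async fn send(&self, entry: Box<Self>) -> Result<(), Box<dyn Error + Send>> {
        // Illustrative only: a real target would serialize and forward the entry.
        println!("[{}] sending entry from {}", self.name, entry.name);
        Ok(())
    }

    fn name(&self) -> &str {
        &self.name
    }

    async fn shutdown(&self) {
        // Nothing buffered in this sketch.
    }
}
```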

View File

@@ -31,8 +31,9 @@ const-str = { workspace = true, optional = true }
workspace = true
[features]
default = []
default = ["constants"]
audit = ["dep:const-str", "constants"]
constants = ["dep:const-str"]
notify = ["dep:const-str"]
observability = []
notify = ["dep:const-str", "constants"]
observability = ["constants"]

View File

@@ -0,0 +1,31 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Audit configuration module
//!
//! This module defines the configuration for audit systems, including
//! webhook and other audit-related settings.
pub const AUDIT_WEBHOOK_SUB_SYS: &str = "audit_webhook";
pub const AUDIT_STORE_EXTENSION: &str = ".audit";
pub const WEBHOOK_ENDPOINT: &str = "endpoint";
pub const WEBHOOK_AUTH_TOKEN: &str = "auth_token";
pub const WEBHOOK_CLIENT_CERT: &str = "client_cert";
pub const WEBHOOK_CLIENT_KEY: &str = "client_key";
pub const WEBHOOK_BATCH_SIZE: &str = "batch_size";
pub const WEBHOOK_QUEUE_SIZE: &str = "queue_size";
pub const WEBHOOK_QUEUE_DIR: &str = "queue_dir";
pub const WEBHOOK_MAX_RETRY: &str = "max_retry";
pub const WEBHOOK_RETRY_INTERVAL: &str = "retry_interval";
pub const WEBHOOK_HTTP_TIMEOUT: &str = "http_timeout";

View File

@@ -16,6 +16,13 @@ pub const DEFAULT_DELIMITER: &str = "_";
pub const ENV_PREFIX: &str = "RUSTFS_";
pub const ENV_WORD_DELIMITER: &str = "_";
pub const DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
pub const DEFAULT_LIMIT: u64 = 100000; // Default store limit
/// Standard config keys and values.
pub const ENABLE_KEY: &str = "enable";
pub const COMMENT_KEY: &str = "comment";
/// Dash separator
/// This is used to separate words in environment variable names.
pub const ENV_WORD_DELIMITER_DASH: &str = "-";
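A hedged sketch of how these constants might combine when deriving an environment-variable name from a subsystem and a dashed config key; the crate's real mapping logic lives elsewhere, so `env_name` is illustrative only:

```rust
// Illustrative only: ("audit_webhook", "retry-interval")
// -> "RUSTFS_AUDIT_WEBHOOK_RETRY_INTERVAL".
fn env_name(sub_sys: &str, key: &str) -> String {
    // Dashes in config keys become the standard word delimiter.
    let key = key.replace(ENV_WORD_DELIMITER_DASH, ENV_WORD_DELIMITER);
    format!("{ENV_PREFIX}{}{ENV_WORD_DELIMITER}{}", sub_sys.to_uppercase(), key.to_uppercase())
}
```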

View File

@@ -20,6 +20,8 @@ pub use constants::app::*;
pub use constants::env::*;
#[cfg(feature = "constants")]
pub use constants::tls::*;
#[cfg(feature = "audit")]
pub mod audit;
#[cfg(feature = "notify")]
pub mod notify;
#[cfg(feature = "observability")]

View File

@@ -29,14 +29,6 @@ pub const NOTIFY_PREFIX: &str = "notify";
pub const NOTIFY_ROUTE_PREFIX: &str = const_str::concat!(NOTIFY_PREFIX, "_");
/// Standard config keys and values.
pub const ENABLE_KEY: &str = "enable";
pub const COMMENT_KEY: &str = "comment";
/// Enable values
pub const ENABLE_ON: &str = "on";
pub const ENABLE_OFF: &str = "off";
#[allow(dead_code)]
pub const NOTIFY_SUB_SYSTEMS: &[&str] = &[NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS];

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::notify::{COMMENT_KEY, ENABLE_KEY};
use crate::{COMMENT_KEY, ENABLE_KEY};
// MQTT Keys
pub const MQTT_BROKER: &str = "broker";

View File

@@ -12,8 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub const DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
pub const DEFAULT_LIMIT: u64 = 100000; // Default store limit
pub const DEFAULT_EXT: &str = ".unknown"; // Default file extension
pub const COMPRESS_EXT: &str = ".snappy"; // Extension for compressed files

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::notify::{COMMENT_KEY, ENABLE_KEY};
use crate::{COMMENT_KEY, ENABLE_KEY};
// Webhook Keys
pub const WEBHOOK_ENDPOINT: &str = "endpoint";

View File

@@ -0,0 +1,347 @@
#![cfg(test)]
use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::Client;
use aws_sdk_s3::config::{Credentials, Region};
use aws_sdk_s3::error::SdkError;
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
use bytes::Bytes;
use serial_test::serial;
use std::error::Error;
const ENDPOINT: &str = "http://localhost:9000";
const ACCESS_KEY: &str = "rustfsadmin";
const SECRET_KEY: &str = "rustfsadmin";
const BUCKET: &str = "api-test";
async fn create_aws_s3_client() -> Result<Client, Box<dyn Error>> {
let region_provider = RegionProviderChain::default_provider().or_else(Region::new("us-east-1"));
let shared_config = aws_config::defaults(aws_config::BehaviorVersion::latest())
.region(region_provider)
.credentials_provider(Credentials::new(ACCESS_KEY, SECRET_KEY, None, None, "static"))
.endpoint_url(ENDPOINT)
.load()
.await;
let client = Client::from_conf(
aws_sdk_s3::Config::from(&shared_config)
.to_builder()
.force_path_style(true)
.build(),
);
Ok(client)
}
/// Setup test bucket, creating it if it doesn't exist
async fn setup_test_bucket(client: &Client) -> Result<(), Box<dyn Error>> {
match client.create_bucket().bucket(BUCKET).send().await {
Ok(_) => {}
Err(SdkError::ServiceError(e)) => {
let e = e.into_err();
let error_code = e.meta().code().unwrap_or("");
if !error_code.eq("BucketAlreadyExists") {
return Err(e.into());
}
}
Err(e) => {
return Err(e.into());
}
}
Ok(())
}
/// Generate test data of specified size
fn generate_test_data(size: usize) -> Vec<u8> {
let pattern = b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
let mut data = Vec::with_capacity(size);
for i in 0..size {
data.push(pattern[i % pattern.len()]);
}
data
}
/// Upload an object and return its ETag
async fn upload_object_with_metadata(client: &Client, bucket: &str, key: &str, data: &[u8]) -> Result<String, Box<dyn Error>> {
let response = client
.put_object()
.bucket(bucket)
.key(key)
.body(Bytes::from(data.to_vec()).into())
.send()
.await?;
let etag = response.e_tag().unwrap_or("").to_string();
Ok(etag)
}
/// Cleanup test objects from bucket
async fn cleanup_objects(client: &Client, bucket: &str, keys: &[&str]) {
for key in keys {
let _ = client.delete_object().bucket(bucket).key(*key).send().await;
}
}
/// Generate unique test object key
fn generate_test_key(prefix: &str) -> String {
use std::time::{SystemTime, UNIX_EPOCH};
let timestamp = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_nanos();
format!("{prefix}-{timestamp}")
}
#[tokio::test]
#[serial]
#[ignore = "requires running RustFS server at localhost:9000"]
async fn test_conditional_put_okay() -> Result<(), Box<dyn std::error::Error>> {
let client = create_aws_s3_client().await?;
setup_test_bucket(&client).await?;
let test_key = generate_test_key("conditional-put-ok");
let initial_data = generate_test_data(1024); // 1KB test data
let updated_data = generate_test_data(2048); // 2KB updated data
// Upload initial object and get its ETag
let initial_etag = upload_object_with_metadata(&client, BUCKET, &test_key, &initial_data).await?;
// Test 1: PUT with matching If-Match condition (should succeed)
let response1 = client
.put_object()
.bucket(BUCKET)
.key(&test_key)
.body(Bytes::from(updated_data.clone()).into())
.if_match(&initial_etag)
.send()
.await;
assert!(response1.is_ok(), "PUT with matching If-Match should succeed");
// Test 2: PUT with non-matching If-None-Match condition (should succeed)
let fake_etag = "\"fake-etag-12345\"";
let response2 = client
.put_object()
.bucket(BUCKET)
.key(&test_key)
.body(Bytes::from(updated_data.clone()).into())
.if_none_match(fake_etag)
.send()
.await;
assert!(response2.is_ok(), "PUT with non-matching If-None-Match should succeed");
// Cleanup
cleanup_objects(&client, BUCKET, &[&test_key]).await;
Ok(())
}
#[tokio::test]
#[serial]
#[ignore = "requires running RustFS server at localhost:9000"]
async fn test_conditional_put_failed() -> Result<(), Box<dyn std::error::Error>> {
let client = create_aws_s3_client().await?;
setup_test_bucket(&client).await?;
let test_key = generate_test_key("conditional-put-failed");
let initial_data = generate_test_data(1024);
let updated_data = generate_test_data(2048);
// Upload initial object and get its ETag
let initial_etag = upload_object_with_metadata(&client, BUCKET, &test_key, &initial_data).await?;
// Test 1: PUT with non-matching If-Match condition (should fail with 412)
let fake_etag = "\"fake-etag-should-not-match\"";
let response1 = client
.put_object()
.bucket(BUCKET)
.key(&test_key)
.body(Bytes::from(updated_data.clone()).into())
.if_match(fake_etag)
.send()
.await;
assert!(response1.is_err(), "PUT with non-matching If-Match should fail");
if let Err(e) = response1 {
if let SdkError::ServiceError(e) = e {
let e = e.into_err();
let error_code = e.meta().code().unwrap_or("");
assert_eq!("PreconditionFailed", error_code);
} else {
panic!("Unexpected error: {e:?}");
}
}
// Test 2: PUT with matching If-None-Match condition (should fail with 412)
let response2 = client
.put_object()
.bucket(BUCKET)
.key(&test_key)
.body(Bytes::from(updated_data.clone()).into())
.if_none_match(&initial_etag)
.send()
.await;
assert!(response2.is_err(), "PUT with matching If-None-Match should fail");
if let Err(e) = response2 {
if let SdkError::ServiceError(e) = e {
let e = e.into_err();
let error_code = e.meta().code().unwrap_or("");
assert_eq!("PreconditionFailed", error_code);
} else {
panic!("Unexpected error: {e:?}");
}
}
// Cleanup - only need to clean up the initial object since failed PUTs shouldn't create objects
cleanup_objects(&client, BUCKET, &[&test_key]).await;
Ok(())
}
#[tokio::test]
#[serial]
#[ignore = "requires running RustFS server at localhost:9000"]
async fn test_conditional_put_when_object_does_not_exist() -> Result<(), Box<dyn std::error::Error>> {
let client = create_aws_s3_client().await?;
setup_test_bucket(&client).await?;
let key = "some_key";
cleanup_objects(&client, BUCKET, &[key]).await;
// When the object does not exist, the If-Match condition should always fail
let response1 = client
.put_object()
.bucket(BUCKET)
.key(key)
.body(Bytes::from(generate_test_data(1024)).into())
.if_match("*")
.send()
.await;
assert!(response1.is_err());
if let Err(e) = response1 {
if let SdkError::ServiceError(e) = e {
let e = e.into_err();
let error_code = e.meta().code().unwrap_or("");
assert_eq!("NoSuchKey", error_code);
} else {
panic!("Unexpected error: {e:?}");
}
}
// When the object does not exist, the If-None-Match condition should be able to succeed
let response2 = client
.put_object()
.bucket(BUCKET)
.key(key)
.body(Bytes::from(generate_test_data(1024)).into())
.if_none_match("*")
.send()
.await;
assert!(response2.is_ok());
cleanup_objects(&client, BUCKET, &[key]).await;
Ok(())
}
#[tokio::test]
#[serial]
#[ignore = "requires running RustFS server at localhost:9000"]
async fn test_conditional_multi_part_upload() -> Result<(), Box<dyn std::error::Error>> {
let client = create_aws_s3_client().await?;
setup_test_bucket(&client).await?;
let test_key = generate_test_key("multipart-upload-ok");
let test_data = generate_test_data(1024);
let initial_etag = upload_object_with_metadata(&client, BUCKET, &test_key, &test_data).await?;
let part_size = 5 * 1024 * 1024; // 5MB per part (minimum for multipart)
let num_parts = 3;
let mut parts = Vec::new();
// Initiate multipart upload
let initiate_response = client.create_multipart_upload().bucket(BUCKET).key(&test_key).send().await?;
let upload_id = initiate_response
.upload_id()
.ok_or(std::io::Error::other("No upload ID returned"))?;
// Upload parts
for part_number in 1..=num_parts {
let part_data = generate_test_data(part_size);
let upload_part_response = client
.upload_part()
.bucket(BUCKET)
.key(&test_key)
.upload_id(upload_id)
.part_number(part_number)
.body(Bytes::from(part_data).into())
.send()
.await?;
let part_etag = upload_part_response
.e_tag()
.ok_or(std::io::Error::other("Do not have etag"))?
.to_string();
let completed_part = CompletedPart::builder().part_number(part_number).e_tag(part_etag).build();
parts.push(completed_part);
}
// Complete multipart upload
let completed_upload = CompletedMultipartUpload::builder().set_parts(Some(parts)).build();
// Test 1: Multipart upload with wildcard If-None-Match, should fail
let complete_response = client
.complete_multipart_upload()
.bucket(BUCKET)
.key(&test_key)
.upload_id(upload_id)
.multipart_upload(completed_upload.clone())
.if_none_match("*")
.send()
.await;
assert!(complete_response.is_err());
// Test 2: Multipart upload with matching If-None-Match, should fail
let complete_response = client
.complete_multipart_upload()
.bucket(BUCKET)
.key(&test_key)
.upload_id(upload_id)
.multipart_upload(completed_upload.clone())
.if_none_match(initial_etag.clone())
.send()
.await;
assert!(complete_response.is_err());
// Test 3: Multipart upload with non-matching If-Match, should fail
let complete_response = client
.complete_multipart_upload()
.bucket(BUCKET)
.key(&test_key)
.upload_id(upload_id)
.multipart_upload(completed_upload.clone())
.if_match("\"abcdef\"")
.send()
.await;
assert!(complete_response.is_err());
// Test 4: Multipart upload with matching If-Match, should succeed
let complete_response = client
.complete_multipart_upload()
.bucket(BUCKET)
.key(&test_key)
.upload_id(upload_id)
.multipart_upload(completed_upload.clone())
.if_match(initial_etag)
.send()
.await;
assert!(complete_response.is_ok());
// Cleanup
cleanup_objects(&client, BUCKET, &[&test_key]).await;
Ok(())
}

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod conditional_writes;
mod lifecycle;
mod lock;
mod node_interact_test;

View File

@@ -34,7 +34,7 @@ workspace = true
default = []
[dependencies]
rustfs-config = { workspace = true, features = ["constants", "notify"] }
rustfs-config = { workspace = true, features = ["constants", "notify", "audit"] }
async-trait.workspace = true
bytes.workspace = true
byteorder = { workspace = true }

View File

@@ -317,7 +317,7 @@ impl TransitionClient {
//}
let mut retry_timer = RetryTimer::new(req_retry, DEFAULT_RETRY_UNIT, DEFAULT_RETRY_CAP, MAX_JITTER, self.random);
while let Some(v) = retry_timer.next().await {
while retry_timer.next().await.is_some() {
let req = self.new_request(&method, metadata).await?;
resp = self.doit(req).await?;
@@ -569,7 +569,16 @@ impl TransitionClient {
}
pub fn is_virtual_host_style_request(&self, url: &Url, bucket_name: &str) -> bool {
if bucket_name == "" {
// Contract:
// - return true if we should use virtual-hosted-style addressing (bucket as subdomain)
// Heuristics (aligned with AWS S3/MinIO clients):
// - explicit DNS mode => true
// - explicit PATH mode => false
// - AUTO:
// - bucket must be non-empty and DNS compatible
// - endpoint host must be a DNS name (not an IPv4/IPv6 literal)
// - when using TLS (https), buckets with dots are avoided due to wildcard/cert issues
if bucket_name.is_empty() {
return false;
}
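The contract comment above compresses several rules; here is a hedged sketch of just the AUTO branch, with assumed helper names that are not the client's actual API:

```rust
// Illustrative sketch of the AUTO-mode decision described above.
fn auto_virtual_host_style(bucket: &str, host: &str, https: bool) -> bool {
    // Bucket must be non-empty and DNS compatible.
    let dns_compatible = !bucket.is_empty()
        && bucket.len() <= 63
        && bucket
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-' || c == '.');
    // Endpoint host must be a DNS name, not an IPv4/IPv6 literal.
    let host_is_ip = host.parse::<std::net::IpAddr>().is_ok();
    // Wildcard TLS certificates do not cover dotted bucket subdomains.
    let dotted_bucket_over_tls = https && bucket.contains('.');
    dns_compatible && !host_is_ip && !dotted_bucket_over_tls
}
```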

View File

@@ -728,7 +728,7 @@ impl ReplicationPool {
// Either already satisfied or worker count changed while waiting for the lock.
return;
}
println!("2 resize_lrg_workers");
debug!("Resizing large workers pool");
let active_workers = Arc::clone(&self.active_lrg_workers);
let obj_layer = Arc::clone(&self.obj_layer);
@@ -743,7 +743,7 @@ impl ReplicationPool {
tokio::spawn(async move {
while let Some(operation) = receiver.recv().await {
println!("resize workers 1");
debug!("Processing replication operation in worker");
active_workers_clone.fetch_add(1, Ordering::SeqCst);
if let Some(info) = operation.as_any().downcast_ref::<ReplicateObjectInfo>() {

View File

@@ -0,0 +1,84 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::config::{KV, KVS};
use rustfs_config::audit::{
WEBHOOK_AUTH_TOKEN, WEBHOOK_BATCH_SIZE, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_HTTP_TIMEOUT,
WEBHOOK_MAX_RETRY, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_SIZE, WEBHOOK_RETRY_INTERVAL,
};
use rustfs_config::{DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, EnableState};
use std::sync::LazyLock;
#[allow(dead_code)]
#[allow(clippy::declare_interior_mutable_const)]
/// Default KVS for audit webhook settings.
pub const DEFAULT_AUDIT_WEBHOOK_KVS: LazyLock<KVS> = LazyLock::new(|| {
KVS(vec![
KV {
key: ENABLE_KEY.to_owned(),
value: EnableState::Off.to_string(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_ENDPOINT.to_owned(),
value: "".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_AUTH_TOKEN.to_owned(),
value: "".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_CLIENT_CERT.to_owned(),
value: "".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_CLIENT_KEY.to_owned(),
value: "".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_BATCH_SIZE.to_owned(),
value: "1".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_QUEUE_SIZE.to_owned(),
value: DEFAULT_LIMIT.to_string(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_QUEUE_DIR.to_owned(),
value: DEFAULT_DIR.to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_MAX_RETRY.to_owned(),
value: "0".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_RETRY_INTERVAL.to_owned(),
value: "3s".to_owned(),
hidden_if_empty: false,
},
KV {
key: WEBHOOK_HTTP_TIMEOUT.to_owned(),
value: "5s".to_owned(),
hidden_if_empty: false,
},
])
});

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod audit;
pub mod com;
#[allow(dead_code)]
pub mod heal;
@@ -21,8 +22,9 @@ pub mod storageclass;
use crate::error::Result;
use crate::store::ECStore;
use com::{STORAGE_CLASS_SUB_SYS, lookup_configs, read_config_without_migrate};
use rustfs_config::COMMENT_KEY;
use rustfs_config::DEFAULT_DELIMITER;
use rustfs_config::notify::{COMMENT_KEY, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use rustfs_config::notify::{NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::LazyLock;

View File

@@ -14,10 +14,11 @@
use crate::config::{KV, KVS};
use rustfs_config::notify::{
COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, ENABLE_OFF, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN,
WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL,
MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR,
WEBHOOK_QUEUE_LIMIT,
};
use rustfs_config::{COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, EnableState};
use std::sync::LazyLock;
/// The default configuration collection of webhooks
@@ -26,7 +27,7 @@ pub static DEFAULT_WEBHOOK_KVS: LazyLock<KVS> = LazyLock::new(|| {
KVS(vec![
KV {
key: ENABLE_KEY.to_owned(),
value: ENABLE_OFF.to_owned(),
value: EnableState::Off.to_string(),
hidden_if_empty: false,
},
KV {
@@ -73,7 +74,7 @@ pub static DEFAULT_MQTT_KVS: LazyLock<KVS> = LazyLock::new(|| {
KVS(vec![
KV {
key: ENABLE_KEY.to_owned(),
value: ENABLE_OFF.to_owned(),
value: EnableState::Off.to_string(),
hidden_if_empty: false,
},
KV {

View File

@@ -187,6 +187,9 @@ pub enum StorageError {
#[error("Lock error: {0}")]
Lock(#[from] rustfs_lock::LockError),
#[error("Precondition failed")]
PreconditionFailed,
}
impl StorageError {
@@ -416,6 +419,7 @@ impl Clone for StorageError {
StorageError::Lock(e) => StorageError::Lock(e.clone()),
StorageError::InsufficientReadQuorum(a, b) => StorageError::InsufficientReadQuorum(a.clone(), b.clone()),
StorageError::InsufficientWriteQuorum(a, b) => StorageError::InsufficientWriteQuorum(a.clone(), b.clone()),
StorageError::PreconditionFailed => StorageError::PreconditionFailed,
}
}
}
@@ -481,6 +485,7 @@ impl StorageError {
StorageError::Lock(_) => 0x38,
StorageError::InsufficientReadQuorum(_, _) => 0x39,
StorageError::InsufficientWriteQuorum(_, _) => 0x3A,
StorageError::PreconditionFailed => 0x3B,
}
}
@@ -548,6 +553,7 @@ impl StorageError {
0x38 => Some(StorageError::Lock(rustfs_lock::LockError::internal("Generic lock error".to_string()))),
0x39 => Some(StorageError::InsufficientReadQuorum(Default::default(), Default::default())),
0x3A => Some(StorageError::InsufficientWriteQuorum(Default::default(), Default::default())),
0x3B => Some(StorageError::PreconditionFailed),
_ => None,
}
}

View File

@@ -61,6 +61,7 @@ use glob::Pattern;
use http::HeaderMap;
use md5::{Digest as Md5Digest, Md5};
use rand::{Rng, seq::SliceRandom};
use regex::Regex;
use rustfs_common::heal_channel::{DriveState, HealChannelPriority, HealItemType, HealOpts, HealScanMode, send_heal_disk};
use rustfs_filemeta::headers::RESERVED_METADATA_PREFIX_LOWER;
use rustfs_filemeta::{
@@ -3218,6 +3219,44 @@ impl SetDisks {
obj?;
Ok(())
}
async fn check_write_precondition(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Option<StorageError> {
let mut opts = opts.clone();
let http_preconditions = opts.http_preconditions?;
opts.http_preconditions = None;
// Never claim a lock here, to avoid deadlock:
// - If no_lock is false, we must have obtained the lock outside of this function
// - If no_lock is true, we should not obtain locks
opts.no_lock = true;
let oi = self.get_object_info(bucket, object, &opts).await;
match oi {
Ok(oi) => {
if should_prevent_write(&oi, http_preconditions.if_none_match, http_preconditions.if_match) {
return Some(StorageError::PreconditionFailed);
}
}
Err(StorageError::VersionNotFound(_, _, _))
| Err(StorageError::ObjectNotFound(_, _))
| Err(StorageError::ErasureReadQuorum) => {
// When the object is not found,
// - if If-Match is set, we should return 404 NotFound
// - if If-None-Match is set, we should be able to proceed with the request
if http_preconditions.if_match.is_some() {
return Some(StorageError::ObjectNotFound(bucket.to_string(), object.to_string()));
}
}
Err(e) => {
return Some(e);
}
}
None
}
}
#[async_trait::async_trait]
@@ -3335,6 +3374,12 @@ impl ObjectIO for SetDisks {
_object_lock_guard = guard_opt;
}
if let Some(http_preconditions) = opts.http_preconditions.clone() {
if let Some(err) = self.check_write_precondition(bucket, object, opts).await {
return Err(err);
}
}
let mut user_defined = opts.user_defined.clone();
let sc_parity_drives = {
@@ -5123,6 +5168,26 @@ impl StorageAPI for SetDisks {
let disks = disks.clone();
// let disks = Self::shuffle_disks(&disks, &fi.erasure.distribution);
// Acquire per-object exclusive lock via RAII guard. It auto-releases asynchronously on drop.
let mut _object_lock_guard: Option<rustfs_lock::LockGuard> = None;
if let Some(http_preconditions) = opts.http_preconditions.clone() {
if !opts.no_lock {
let guard_opt = self
.namespace_lock
.lock_guard(object, &self.locker_owner, Duration::from_secs(5), Duration::from_secs(10))
.await?;
if guard_opt.is_none() {
return Err(Error::other("can not get lock. please retry".to_string()));
}
_object_lock_guard = guard_opt;
}
if let Some(err) = self.check_write_precondition(bucket, object, opts).await {
return Err(err);
}
}
let part_path = format!("{}/{}/", upload_id_path, fi.data_dir.unwrap_or(Uuid::nil()));
let part_meta_paths = uploaded_parts
@@ -5942,13 +6007,45 @@ fn get_complete_multipart_md5(parts: &[CompletePart]) -> String {
format!("{:x}-{}", hasher.finalize(), parts.len())
}
pub fn canonicalize_etag(etag: &str) -> String {
let re = Regex::new("\"*?([^\"]*?)\"*?$").unwrap();
re.replace_all(etag, "$1").to_string()
}
pub fn e_tag_matches(etag: &str, condition: &str) -> bool {
if condition.trim() == "*" {
return true;
}
canonicalize_etag(etag) == canonicalize_etag(condition)
}
pub fn should_prevent_write(oi: &ObjectInfo, if_none_match: Option<String>, if_match: Option<String>) -> bool {
match &oi.etag {
Some(etag) => {
if let Some(if_none_match) = if_none_match {
if e_tag_matches(etag, &if_none_match) {
return true;
}
}
if let Some(if_match) = if_match {
if !e_tag_matches(etag, &if_match) {
return true;
}
}
false
}
// If we can't obtain the etag of the object, prevent the write only when we have at least one condition
None => if_none_match.is_some() || if_match.is_some(),
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::disk::CHECK_PART_UNKNOWN;
use crate::disk::CHECK_PART_VOLUME_NOT_FOUND;
use crate::disk::error::DiskError;
use crate::store_api::CompletePart;
use crate::store_api::{CompletePart, ObjectInfo};
use rustfs_filemeta::ErasureInfo;
use std::collections::HashMap;
use time::OffsetDateTime;
@@ -6373,4 +6470,62 @@ mod tests {
assert_eq!(result2.len(), 3);
assert!(result2.iter().all(|d| d.is_none()));
}
#[test]
fn test_etag_matches() {
assert!(e_tag_matches("abc", "abc"));
assert!(e_tag_matches("\"abc\"", "abc"));
assert!(e_tag_matches("\"abc\"", "*"));
}
#[test]
fn test_should_prevent_write() {
let oi = ObjectInfo {
etag: Some("abc".to_string()),
..Default::default()
};
let if_none_match = Some("abc".to_string());
let if_match = None;
assert!(should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = Some("*".to_string());
let if_match = None;
assert!(should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = None;
let if_match = Some("def".to_string());
assert!(should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = None;
let if_match = Some("*".to_string());
assert!(!should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = Some("def".to_string());
let if_match = None;
assert!(!should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = Some("def".to_string());
let if_match = Some("*".to_string());
assert!(!should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = Some("def".to_string());
let if_match = Some("\"abc\"".to_string());
assert!(!should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = Some("*".to_string());
let if_match = Some("\"abc\"".to_string());
assert!(should_prevent_write(&oi, if_none_match, if_match));
let oi = ObjectInfo {
etag: None,
..Default::default()
};
let if_none_match = Some("*".to_string());
let if_match = Some("\"abc\"".to_string());
assert!(should_prevent_write(&oi, if_none_match, if_match));
let if_none_match = None;
let if_match = None;
assert!(!should_prevent_write(&oi, if_none_match, if_match));
}
}

View File

@@ -132,30 +132,50 @@ impl GetObjectReader {
if is_compressed {
let actual_size = oi.get_actual_size()?;
let (off, length) = (0, oi.size);
let (_dec_off, dec_length) = (0, actual_size);
if let Some(_rs) = rs {
// TODO: range spec is not supported for compressed object
return Err(Error::other("The requested range is not satisfiable"));
// let (off, length) = rs.get_offset_length(actual_size)?;
}
let (off, length, dec_off, dec_length) = if let Some(rs) = rs {
// Support range requests for compressed objects
let (dec_off, dec_length) = rs.get_offset_length(actual_size)?;
(0, oi.size, dec_off, dec_length)
} else {
(0, oi.size, 0, actual_size)
};
let dec_reader = DecompressReader::new(reader, algo);
let actual_size = if actual_size > 0 {
let actual_size_usize = if actual_size > 0 {
actual_size as usize
} else {
return Err(Error::other(format!("invalid decompressed size {actual_size}")));
};
let dec_reader = LimitReader::new(dec_reader, actual_size);
let final_reader: Box<dyn AsyncRead + Unpin + Send + Sync> = if dec_off > 0 || dec_length != actual_size {
// Use RangedDecompressReader for streaming range processing
// The new implementation supports any offset size by streaming and skipping data
match RangedDecompressReader::new(dec_reader, dec_off, dec_length, actual_size_usize) {
Ok(ranged_reader) => {
tracing::debug!(
"Successfully created RangedDecompressReader for offset={}, length={}",
dec_off,
dec_length
);
Box::new(ranged_reader)
}
Err(e) => {
// Only fail if the range parameters are fundamentally invalid (e.g., offset >= file size)
tracing::error!("RangedDecompressReader failed with invalid range parameters: {}", e);
return Err(e);
}
}
} else {
Box::new(LimitReader::new(dec_reader, actual_size_usize))
};
let mut oi = oi.clone();
oi.size = dec_length;
return Ok((
GetObjectReader {
stream: Box::new(dec_reader),
stream: final_reader,
object_info: oi,
},
off,
@@ -283,6 +303,12 @@ impl HTTPRangeSpec {
}
}
#[derive(Debug, Default, Clone)]
pub struct HTTPPreconditions {
pub if_match: Option<String>,
pub if_none_match: Option<String>,
}
#[derive(Debug, Default, Clone)]
pub struct ObjectOptions {
// Use the maximum parity (N/2), used when saving server configuration files
@@ -306,6 +332,7 @@ pub struct ObjectOptions {
pub user_defined: HashMap<String, String>,
pub preserve_etag: Option<String>,
pub metadata_chg: bool,
pub http_preconditions: Option<HTTPPreconditions>,
pub replication_request: bool,
pub delete_marker: bool,
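A hedged sketch of how a handler might populate the new field when serving a conditional PUT; header extraction is elided and the helper name is illustrative:

```rust
// Illustrative only: carry If-Match / If-None-Match header values into
// ObjectOptions so the storage layer can evaluate write preconditions.
fn options_with_preconditions(if_match: Option<String>, if_none_match: Option<String>) -> ObjectOptions {
    ObjectOptions {
        http_preconditions: Some(HTTPPreconditions { if_match, if_none_match }),
        ..Default::default()
    }
}
```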
@@ -1084,3 +1111,338 @@ pub trait StorageAPI: ObjectIO {
async fn get_pool_and_set(&self, id: &str) -> Result<(Option<usize>, Option<usize>, Option<usize>)>;
async fn check_abandoned_parts(&self, bucket: &str, object: &str, opts: &HealOpts) -> Result<()>;
}
/// A streaming decompression reader that supports range requests by skipping data in the decompressed stream.
/// This implementation acknowledges that compressed streams (like LZ4) must be decompressed sequentially
/// from the beginning, so it streams and discards data until reaching the target offset.
#[derive(Debug)]
pub struct RangedDecompressReader<R> {
inner: R,
target_offset: usize,
target_length: usize,
current_offset: usize,
bytes_returned: usize,
}
impl<R: AsyncRead + Unpin + Send + Sync> RangedDecompressReader<R> {
pub fn new(inner: R, offset: usize, length: i64, total_size: usize) -> Result<Self> {
// Validate the range request
if offset >= total_size {
tracing::debug!("Range offset {} exceeds total size {}", offset, total_size);
return Err(Error::other("Range offset exceeds file size"));
}
// Adjust length if it extends beyond file end
let actual_length = std::cmp::min(length as usize, total_size - offset);
tracing::debug!(
"Creating RangedDecompressReader: offset={}, length={}, total_size={}, actual_length={}",
offset,
length,
total_size,
actual_length
);
Ok(Self {
inner,
target_offset: offset,
target_length: actual_length,
current_offset: 0,
bytes_returned: 0,
})
}
}
impl<R: AsyncRead + Unpin + Send + Sync> AsyncRead for RangedDecompressReader<R> {
fn poll_read(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
use std::pin::Pin;
use std::task::Poll;
use tokio::io::ReadBuf;
loop {
// If we've returned all the bytes we need, return EOF
if self.bytes_returned >= self.target_length {
return Poll::Ready(Ok(()));
}
// Read from the inner stream
let buf_capacity = buf.remaining();
if buf_capacity == 0 {
return Poll::Ready(Ok(()));
}
// Prepare a temporary buffer for reading
let mut temp_buf = vec![0u8; std::cmp::min(buf_capacity, 8192)];
let mut temp_read_buf = ReadBuf::new(&mut temp_buf);
match Pin::new(&mut self.inner).poll_read(cx, &mut temp_read_buf) {
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
Poll::Ready(Ok(())) => {
let n = temp_read_buf.filled().len();
if n == 0 {
// EOF from inner stream
if self.current_offset < self.target_offset {
// We haven't reached the target offset yet - this is an error
return Poll::Ready(Err(std::io::Error::new(
std::io::ErrorKind::UnexpectedEof,
format!(
"Unexpected EOF: only read {} bytes, target offset is {}",
self.current_offset, self.target_offset
),
)));
}
// Normal EOF after reaching target
return Poll::Ready(Ok(()));
}
// Update current position
let old_offset = self.current_offset;
self.current_offset += n;
// Check if we're still in the skip phase
if old_offset < self.target_offset {
// We're still skipping data
let skip_end = std::cmp::min(self.current_offset, self.target_offset);
let bytes_to_skip_in_this_read = skip_end - old_offset;
if self.current_offset <= self.target_offset {
// All data in this read should be skipped
tracing::trace!("Skipping {} bytes at offset {}", n, old_offset);
// Continue reading in the loop instead of recursive call
continue;
} else {
// Partial skip: some data should be returned
let data_start_in_buffer = bytes_to_skip_in_this_read;
let available_data = n - data_start_in_buffer;
let bytes_to_return = std::cmp::min(
available_data,
std::cmp::min(buf.remaining(), self.target_length - self.bytes_returned),
);
if bytes_to_return > 0 {
let data_slice =
&temp_read_buf.filled()[data_start_in_buffer..data_start_in_buffer + bytes_to_return];
buf.put_slice(data_slice);
self.bytes_returned += bytes_to_return;
tracing::trace!(
"Skipped {} bytes, returned {} bytes at offset {}",
bytes_to_skip_in_this_read,
bytes_to_return,
old_offset
);
}
return Poll::Ready(Ok(()));
}
} else {
// We're in the data return phase
let bytes_to_return =
std::cmp::min(n, std::cmp::min(buf.remaining(), self.target_length - self.bytes_returned));
if bytes_to_return > 0 {
buf.put_slice(&temp_read_buf.filled()[..bytes_to_return]);
self.bytes_returned += bytes_to_return;
tracing::trace!("Returned {} bytes at offset {}", bytes_to_return, old_offset);
}
return Poll::Ready(Ok(()));
}
}
}
}
}
}
/// A wrapper that ensures the inner stream is fully consumed even if the outer reader stops early.
/// This prevents broken pipe errors in erasure coding scenarios where the writer expects
/// the full stream to be consumed.
pub struct StreamConsumer<R: AsyncRead + Unpin + Send + 'static> {
inner: Option<R>,
consumer_task: Option<tokio::task::JoinHandle<()>>,
}
impl<R: AsyncRead + Unpin + Send + 'static> StreamConsumer<R> {
pub fn new(inner: R) -> Self {
Self {
inner: Some(inner),
consumer_task: None,
}
}
fn ensure_consumer_started(&mut self) {
if self.consumer_task.is_none() && self.inner.is_some() {
let mut inner = self.inner.take().unwrap();
let task = tokio::spawn(async move {
let mut buf = [0u8; 8192];
loop {
match inner.read(&mut buf).await {
Ok(0) => break, // EOF
Ok(_) => continue, // Keep consuming
Err(_) => break, // Error, stop consuming
}
}
});
self.consumer_task = Some(task);
}
}
}
impl<R: AsyncRead + Unpin + Send + 'static> AsyncRead for StreamConsumer<R> {
fn poll_read(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> std::task::Poll<std::io::Result<()>> {
use std::pin::Pin;
use std::task::Poll;
if let Some(ref mut inner) = self.inner {
Pin::new(inner).poll_read(cx, buf)
} else {
Poll::Ready(Ok(())) // EOF
}
}
}
impl<R: AsyncRead + Unpin + Send + 'static> Drop for StreamConsumer<R> {
fn drop(&mut self) {
if self.consumer_task.is_none() && self.inner.is_some() {
let mut inner = self.inner.take().unwrap();
let task = tokio::spawn(async move {
let mut buf = [0u8; 8192];
loop {
match inner.read(&mut buf).await {
Ok(0) => break, // EOF
Ok(_) => continue, // Keep consuming
Err(_) => break, // Error, stop consuming
}
}
});
self.consumer_task = Some(task);
}
}
}
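A minimal usage sketch under the assumption that callers may stop reading early; dropping the wrapper spawns the draining task shown in `Drop`:

```rust
use tokio::io::{AsyncRead, AsyncReadExt};

// Illustrative only: read a small prefix, then let Drop drain the rest so
// upstream writers (e.g. erasure-coding pipelines) never see a broken pipe.
async fn read_prefix_then_drain(reader: impl AsyncRead + Unpin + Send + 'static) -> std::io::Result<Vec<u8>> {
    let mut consumer = StreamConsumer::new(reader);
    let mut head = vec![0u8; 16];
    let n = consumer.read(&mut head).await?; // read only the first bytes
    head.truncate(n);
    drop(consumer); // detaches a background task that consumes the remainder
    Ok(head)
}
```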
#[cfg(test)]
mod tests {
use super::*;
use std::io::Cursor;
use tokio::io::AsyncReadExt;
#[tokio::test]
async fn test_ranged_decompress_reader() {
// Create test data
let original_data = b"Hello, World! This is a test for range requests on compressed data.";
// For this test, we'll simulate using the original data directly as "decompressed"
let cursor = Cursor::new(original_data.to_vec());
// Test reading a range from the middle
let mut ranged_reader = RangedDecompressReader::new(cursor, 7, 5, original_data.len()).unwrap();
let mut result = Vec::new();
ranged_reader.read_to_end(&mut result).await.unwrap();
// Should read "World" (5 bytes starting from position 7)
assert_eq!(result, b"World");
}
#[tokio::test]
async fn test_ranged_decompress_reader_from_start() {
let original_data = b"Hello, World! This is a test.";
let cursor = Cursor::new(original_data.to_vec());
let mut ranged_reader = RangedDecompressReader::new(cursor, 0, 5, original_data.len()).unwrap();
let mut result = Vec::new();
ranged_reader.read_to_end(&mut result).await.unwrap();
// Should read "Hello" (5 bytes from the start)
assert_eq!(result, b"Hello");
}
#[tokio::test]
async fn test_ranged_decompress_reader_to_end() {
let original_data = b"Hello, World!";
let cursor = Cursor::new(original_data.to_vec());
let mut ranged_reader = RangedDecompressReader::new(cursor, 7, 6, original_data.len()).unwrap();
let mut result = Vec::new();
ranged_reader.read_to_end(&mut result).await.unwrap();
// Should read "World!" (6 bytes starting from position 7)
assert_eq!(result, b"World!");
}
#[tokio::test]
async fn test_http_range_spec_with_compressed_data() {
// Test that HTTPRangeSpec::get_offset_length works correctly
let range_spec = HTTPRangeSpec {
is_suffix_length: false,
start: 5,
end: 14, // inclusive
};
let total_size = 100i64;
let (offset, length) = range_spec.get_offset_length(total_size).unwrap();
assert_eq!(offset, 5);
assert_eq!(length, 10); // end - start + 1 = 14 - 5 + 1 = 10
}
#[tokio::test]
async fn test_ranged_decompress_reader_zero_length() {
let original_data = b"Hello, World!";
let cursor = Cursor::new(original_data.to_vec());
let mut ranged_reader = RangedDecompressReader::new(cursor, 5, 0, original_data.len()).unwrap();
let mut result = Vec::new();
ranged_reader.read_to_end(&mut result).await.unwrap();
// Should read nothing
assert_eq!(result, b"");
}
#[tokio::test]
async fn test_ranged_decompress_reader_skip_entire_data() {
let original_data = b"Hello, World!";
let cursor = Cursor::new(original_data.to_vec());
// Skip to end of data with length 0 - this should read nothing
let mut ranged_reader = RangedDecompressReader::new(cursor, original_data.len() - 1, 0, original_data.len()).unwrap();
let mut result = Vec::new();
ranged_reader.read_to_end(&mut result).await.unwrap();
assert_eq!(result, b"");
}
#[tokio::test]
async fn test_ranged_decompress_reader_out_of_bounds_offset() {
let original_data = b"Hello, World!";
let cursor = Cursor::new(original_data.to_vec());
// Offset beyond EOF should return error in constructor
let result = RangedDecompressReader::new(cursor, original_data.len() + 10, 5, original_data.len());
assert!(result.is_err());
// Use pattern matching to avoid requiring Debug on the error type
if let Err(e) = result {
assert!(e.to_string().contains("Range offset exceeds file size"));
}
}
#[tokio::test]
async fn test_ranged_decompress_reader_partial_read() {
let original_data = b"abcdef";
let cursor = Cursor::new(original_data.to_vec());
let mut ranged_reader = RangedDecompressReader::new(cursor, 2, 3, original_data.len()).unwrap();
let mut buf = [0u8; 2];
let n = ranged_reader.read(&mut buf).await.unwrap();
assert_eq!(n, 2);
assert_eq!(&buf, b"cd");
let mut buf2 = [0u8; 2];
let n2 = ranged_reader.read(&mut buf2).await.unwrap();
assert_eq!(n2, 1);
assert_eq!(&buf2[..1], b"e");
}
}

View File

@@ -114,7 +114,7 @@ impl Drop for LockGuard {
let _ = futures::future::join_all(futures_iter).await;
});
// Explicitly drop the JoinHandle to acknowledge detaching the task.
std::mem::drop(handle);
drop(handle);
}
}
}

View File

@@ -29,6 +29,7 @@ documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/"
rustfs-config = { workspace = true, features = ["notify", "constants"] }
rustfs-ecstore = { workspace = true }
rustfs-utils = { workspace = true, features = ["path", "sys"] }
rustfs-targets = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
dashmap = { workspace = true }
@@ -36,23 +37,18 @@ futures = { workspace = true }
form_urlencoded = { workspace = true }
once_cell = { workspace = true }
quick-xml = { workspace = true, features = ["serialize", "async-tokio"] }
reqwest = { workspace = true }
rumqttc = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
snap = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "sync", "time"] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
uuid = { workspace = true, features = ["v4", "serde"] }
url = { workspace = true }
urlencoding = { workspace = true }
wildmatch = { workspace = true, features = ["serde"] }
[dev-dependencies]
tokio = { workspace = true, features = ["test-util"] }
reqwest = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
axum = { workspace = true }
[lints]

View File

@@ -0,0 +1,59 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::io::IsTerminal;
use tracing_subscriber::{EnvFilter, fmt, prelude::*, util::SubscriberInitExt};
#[allow(dead_code)]
fn main() {
init_logger(LogLevel::Info);
tracing::info!("Tracing logger initialized with Info level");
}
/// Initialize the tracing log system
pub fn init_logger(level: LogLevel) {
let filter = EnvFilter::default().add_directive(level.into());
tracing_subscriber::registry()
.with(filter)
.with(
fmt::layer()
.with_target(true)
.with_ansi(std::io::stdout().is_terminal())
.with_thread_names(true)
.with_thread_ids(true)
.with_file(true)
.with_line_number(true),
)
.init();
}
/// Log level definition
pub enum LogLevel {
Debug,
Info,
Warn,
Error,
}
impl From<LogLevel> for tracing_subscriber::filter::Directive {
fn from(level: LogLevel) -> Self {
match level {
LogLevel::Debug => "debug".parse().unwrap(),
LogLevel::Info => "info".parse().unwrap(),
LogLevel::Warn => "warn".parse().unwrap(),
LogLevel::Error => "error".parse().unwrap(),
}
}
}

View File

@@ -12,15 +12,20 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod base;
use base::{LogLevel, init_logger};
use rustfs_config::EnableState::On;
use rustfs_config::notify::{
DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT,
WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_TOPIC, MQTT_USERNAME,
NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use rustfs_config::{DEFAULT_LIMIT, ENABLE_KEY};
use rustfs_ecstore::config::{Config, KV, KVS};
use rustfs_notify::arn::TargetID;
use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger};
use rustfs_notify::{BucketNotificationConfig, Event, NotificationError};
use rustfs_notify::{initialize, notification_system};
use rustfs_targets::EventName;
use rustfs_targets::arn::TargetID;
use std::sync::Arc;
use std::time::Duration;
use tracing::info;
@@ -46,7 +51,7 @@ async fn main() -> Result<(), NotificationError> {
let webhook_kvs_vec = vec![
KV {
key: ENABLE_KEY.to_string(),
value: ENABLE_ON.to_string(),
value: On.to_string(),
hidden_if_empty: false,
},
KV {
@@ -85,7 +90,7 @@ async fn main() -> Result<(), NotificationError> {
let mqtt_kvs_vec = vec![
KV {
key: ENABLE_KEY.to_string(),
value: ENABLE_ON.to_string(),
value: On.to_string(),
hidden_if_empty: false,
},
KV {
@@ -136,6 +141,10 @@ async fn main() -> Result<(), NotificationError> {
// Load the configuration and initialize the system
*system.config.write().await = config;
info!("---> Initializing notification system with Webhook and MQTT targets...");
info!("Webhook Endpoint: {}", WEBHOOK_ENDPOINT);
info!("MQTT Broker: {}", MQTT_BROKER);
info!("system.init config: {:?}", system.config.read().await);
system.init().await?;
info!("✅ System initialized with Webhook and MQTT targets.");

View File

@@ -12,16 +12,20 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Using global accessors
mod base;
use base::{LogLevel, init_logger};
use rustfs_config::EnableState::On;
use rustfs_config::notify::{
DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT,
WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_TOPIC, MQTT_USERNAME,
NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use rustfs_config::{DEFAULT_LIMIT, ENABLE_KEY};
use rustfs_ecstore::config::{Config, KV, KVS};
use rustfs_notify::arn::TargetID;
use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger};
use rustfs_notify::{BucketNotificationConfig, Event, NotificationError};
use rustfs_notify::{initialize, notification_system};
use rustfs_targets::EventName;
use rustfs_targets::arn::TargetID;
use std::sync::Arc;
use std::time::Duration;
use tracing::info;
@@ -47,7 +51,7 @@ async fn main() -> Result<(), NotificationError> {
let webhook_kvs_vec = vec![
KV {
key: ENABLE_KEY.to_string(),
value: ENABLE_ON.to_string(),
value: On.to_string(),
hidden_if_empty: false,
},
KV {
@@ -95,7 +99,7 @@ async fn main() -> Result<(), NotificationError> {
let mqtt_kvs_vec = vec![
KV {
key: ENABLE_KEY.to_string(),
value: ENABLE_ON.to_string(),
value: On.to_string(),
hidden_if_empty: false,
},
KV {

View File

@@ -1,98 +1,22 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::arn::TargetID;
use rustfs_targets::TargetError;
use rustfs_targets::arn::TargetID;
use std::io;
use thiserror::Error;
/// Error types for the store
#[derive(Debug, Error)]
pub enum StoreError {
#[error("I/O error: {0}")]
Io(#[from] io::Error),
#[error("Serialization error: {0}")]
Serialization(String),
#[error("Deserialization error: {0}")]
Deserialization(String),
#[error("Compression error: {0}")]
Compression(String),
#[error("Entry limit exceeded")]
LimitExceeded,
#[error("Entry not found")]
NotFound,
#[error("Invalid entry: {0}")]
Internal(String), // Added internal error type
}
/// Error types for targets
#[derive(Debug, Error)]
pub enum TargetError {
#[error("Storage error: {0}")]
Storage(String),
#[error("Network error: {0}")]
Network(String),
#[error("Request error: {0}")]
Request(String),
#[error("Timeout error: {0}")]
Timeout(String),
#[error("Authentication error: {0}")]
Authentication(String),
#[error("Configuration error: {0}")]
Configuration(String),
#[error("Encoding error: {0}")]
Encoding(String),
#[error("Serialization error: {0}")]
Serialization(String),
#[error("Target not connected")]
NotConnected,
#[error("Target initialization failed: {0}")]
Initialization(String),
#[error("Invalid ARN: {0}")]
InvalidARN(String),
#[error("Unknown error: {0}")]
Unknown(String),
#[error("Target is disabled")]
Disabled,
#[error("Configuration parsing error: {0}")]
ParseError(String),
#[error("Failed to save configuration: {0}")]
SaveConfig(String),
#[error("Server not initialized: {0}")]
ServerNotInitialized(String),
}
/// Error types for the notification system
#[derive(Debug, Error)]
pub enum NotificationError {
@@ -135,9 +59,3 @@ pub enum NotificationError {
#[error("Server not initialized")]
ServerNotInitialized,
}
impl From<url::ParseError> for TargetError {
fn from(err: url::ParseError) -> Self {
TargetError::Configuration(format!("URL parse error: {err}"))
}
}

View File

@@ -13,285 +13,11 @@
// limitations under the License.
use chrono::{DateTime, Utc};
use rustfs_targets::EventName;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::fmt;
use url::form_urlencoded;
/// Error returned when parsing an event name string fails.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ParseEventNameError(String);
impl fmt::Display for ParseEventNameError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "Invalid event name:{}", self.0)
}
}
impl std::error::Error for ParseEventNameError {}
/// Represents the type of event that occurs on the object.
/// Based on AWS S3 event type and includes RustFS extension.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub enum EventName {
// Single event type (values are 1-32 for compatible mask logic)
ObjectAccessedGet = 1,
ObjectAccessedGetRetention = 2,
ObjectAccessedGetLegalHold = 3,
ObjectAccessedHead = 4,
ObjectAccessedAttributes = 5,
ObjectCreatedCompleteMultipartUpload = 6,
ObjectCreatedCopy = 7,
ObjectCreatedPost = 8,
ObjectCreatedPut = 9,
ObjectCreatedPutRetention = 10,
ObjectCreatedPutLegalHold = 11,
ObjectCreatedPutTagging = 12,
ObjectCreatedDeleteTagging = 13,
ObjectRemovedDelete = 14,
ObjectRemovedDeleteMarkerCreated = 15,
ObjectRemovedDeleteAllVersions = 16,
ObjectRemovedNoOP = 17,
BucketCreated = 18,
BucketRemoved = 19,
ObjectReplicationFailed = 20,
ObjectReplicationComplete = 21,
ObjectReplicationMissedThreshold = 22,
ObjectReplicationReplicatedAfterThreshold = 23,
ObjectReplicationNotTracked = 24,
ObjectRestorePost = 25,
ObjectRestoreCompleted = 26,
ObjectTransitionFailed = 27,
ObjectTransitionComplete = 28,
ScannerManyVersions = 29,                // corresponds to Go's ObjectManyVersions
ScannerLargeVersions = 30,               // corresponds to Go's ObjectLargeVersions
ScannerBigPrefix = 31,                   // corresponds to Go's PrefixManyFolders
LifecycleDelMarkerExpirationDelete = 32, // corresponds to Go's ILMDelMarkerExpirationDelete
// Compound "All" event type (no sequential value for mask)
ObjectAccessedAll,
ObjectCreatedAll,
ObjectRemovedAll,
ObjectReplicationAll,
ObjectRestoreAll,
ObjectTransitionAll,
ObjectScannerAll, // New, from Go
Everything, // New, from Go
}
// Single event types in sequential order, used by Everything.expand()
const SINGLE_EVENT_NAMES_IN_ORDER: [EventName; 32] = [
EventName::ObjectAccessedGet,
EventName::ObjectAccessedGetRetention,
EventName::ObjectAccessedGetLegalHold,
EventName::ObjectAccessedHead,
EventName::ObjectAccessedAttributes,
EventName::ObjectCreatedCompleteMultipartUpload,
EventName::ObjectCreatedCopy,
EventName::ObjectCreatedPost,
EventName::ObjectCreatedPut,
EventName::ObjectCreatedPutRetention,
EventName::ObjectCreatedPutLegalHold,
EventName::ObjectCreatedPutTagging,
EventName::ObjectCreatedDeleteTagging,
EventName::ObjectRemovedDelete,
EventName::ObjectRemovedDeleteMarkerCreated,
EventName::ObjectRemovedDeleteAllVersions,
EventName::ObjectRemovedNoOP,
EventName::BucketCreated,
EventName::BucketRemoved,
EventName::ObjectReplicationFailed,
EventName::ObjectReplicationComplete,
EventName::ObjectReplicationMissedThreshold,
EventName::ObjectReplicationReplicatedAfterThreshold,
EventName::ObjectReplicationNotTracked,
EventName::ObjectRestorePost,
EventName::ObjectRestoreCompleted,
EventName::ObjectTransitionFailed,
EventName::ObjectTransitionComplete,
EventName::ScannerManyVersions,
EventName::ScannerLargeVersions,
EventName::ScannerBigPrefix,
EventName::LifecycleDelMarkerExpirationDelete,
];
const LAST_SINGLE_TYPE_VALUE: u32 = EventName::LifecycleDelMarkerExpirationDelete as u32;
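A hedged sketch of the bit-mask arithmetic the 1–32 discriminants enable; `mask_of` and `is_subscribed` are illustrative names, not this crate's API:

```rust
// Each single event maps to one bit, so a subscription set fits in a u64.
fn mask_of(event: EventName) -> u64 {
    let v = event as u32;
    debug_assert!((1..=LAST_SINGLE_TYPE_VALUE).contains(&v), "single event types only");
    1u64 << (v - 1)
}

fn is_subscribed(subscription_mask: u64, event: EventName) -> bool {
    subscription_mask & mask_of(event) != 0
}
```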
impl EventName {
/// The parsed string is EventName.
pub fn parse(s: &str) -> Result<Self, ParseEventNameError> {
match s {
"s3:BucketCreated:*" => Ok(EventName::BucketCreated),
"s3:BucketRemoved:*" => Ok(EventName::BucketRemoved),
"s3:ObjectAccessed:*" => Ok(EventName::ObjectAccessedAll),
"s3:ObjectAccessed:Get" => Ok(EventName::ObjectAccessedGet),
"s3:ObjectAccessed:GetRetention" => Ok(EventName::ObjectAccessedGetRetention),
"s3:ObjectAccessed:GetLegalHold" => Ok(EventName::ObjectAccessedGetLegalHold),
"s3:ObjectAccessed:Head" => Ok(EventName::ObjectAccessedHead),
"s3:ObjectAccessed:Attributes" => Ok(EventName::ObjectAccessedAttributes),
"s3:ObjectCreated:*" => Ok(EventName::ObjectCreatedAll),
"s3:ObjectCreated:CompleteMultipartUpload" => Ok(EventName::ObjectCreatedCompleteMultipartUpload),
"s3:ObjectCreated:Copy" => Ok(EventName::ObjectCreatedCopy),
"s3:ObjectCreated:Post" => Ok(EventName::ObjectCreatedPost),
"s3:ObjectCreated:Put" => Ok(EventName::ObjectCreatedPut),
"s3:ObjectCreated:PutRetention" => Ok(EventName::ObjectCreatedPutRetention),
"s3:ObjectCreated:PutLegalHold" => Ok(EventName::ObjectCreatedPutLegalHold),
"s3:ObjectCreated:PutTagging" => Ok(EventName::ObjectCreatedPutTagging),
"s3:ObjectCreated:DeleteTagging" => Ok(EventName::ObjectCreatedDeleteTagging),
"s3:ObjectRemoved:*" => Ok(EventName::ObjectRemovedAll),
"s3:ObjectRemoved:Delete" => Ok(EventName::ObjectRemovedDelete),
"s3:ObjectRemoved:DeleteMarkerCreated" => Ok(EventName::ObjectRemovedDeleteMarkerCreated),
"s3:ObjectRemoved:NoOP" => Ok(EventName::ObjectRemovedNoOP),
"s3:ObjectRemoved:DeleteAllVersions" => Ok(EventName::ObjectRemovedDeleteAllVersions),
"s3:LifecycleDelMarkerExpiration:Delete" => Ok(EventName::LifecycleDelMarkerExpirationDelete),
"s3:Replication:*" => Ok(EventName::ObjectReplicationAll),
"s3:Replication:OperationFailedReplication" => Ok(EventName::ObjectReplicationFailed),
"s3:Replication:OperationCompletedReplication" => Ok(EventName::ObjectReplicationComplete),
"s3:Replication:OperationMissedThreshold" => Ok(EventName::ObjectReplicationMissedThreshold),
"s3:Replication:OperationReplicatedAfterThreshold" => Ok(EventName::ObjectReplicationReplicatedAfterThreshold),
"s3:Replication:OperationNotTracked" => Ok(EventName::ObjectReplicationNotTracked),
"s3:ObjectRestore:*" => Ok(EventName::ObjectRestoreAll),
"s3:ObjectRestore:Post" => Ok(EventName::ObjectRestorePost),
"s3:ObjectRestore:Completed" => Ok(EventName::ObjectRestoreCompleted),
"s3:ObjectTransition:Failed" => Ok(EventName::ObjectTransitionFailed),
"s3:ObjectTransition:Complete" => Ok(EventName::ObjectTransitionComplete),
"s3:ObjectTransition:*" => Ok(EventName::ObjectTransitionAll),
"s3:Scanner:ManyVersions" => Ok(EventName::ScannerManyVersions),
"s3:Scanner:LargeVersions" => Ok(EventName::ScannerLargeVersions),
"s3:Scanner:BigPrefix" => Ok(EventName::ScannerBigPrefix),
// ObjectScannerAll and Everything cannot be parsed from strings; the Go version does not define string representations for them either.
_ => Err(ParseEventNameError(s.to_string())),
}
}
/// Returns a string representation of the event type.
pub fn as_str(&self) -> &'static str {
match self {
EventName::BucketCreated => "s3:BucketCreated:*",
EventName::BucketRemoved => "s3:BucketRemoved:*",
EventName::ObjectAccessedAll => "s3:ObjectAccessed:*",
EventName::ObjectAccessedGet => "s3:ObjectAccessed:Get",
EventName::ObjectAccessedGetRetention => "s3:ObjectAccessed:GetRetention",
EventName::ObjectAccessedGetLegalHold => "s3:ObjectAccessed:GetLegalHold",
EventName::ObjectAccessedHead => "s3:ObjectAccessed:Head",
EventName::ObjectAccessedAttributes => "s3:ObjectAccessed:Attributes",
EventName::ObjectCreatedAll => "s3:ObjectCreated:*",
EventName::ObjectCreatedCompleteMultipartUpload => "s3:ObjectCreated:CompleteMultipartUpload",
EventName::ObjectCreatedCopy => "s3:ObjectCreated:Copy",
EventName::ObjectCreatedPost => "s3:ObjectCreated:Post",
EventName::ObjectCreatedPut => "s3:ObjectCreated:Put",
EventName::ObjectCreatedPutTagging => "s3:ObjectCreated:PutTagging",
EventName::ObjectCreatedDeleteTagging => "s3:ObjectCreated:DeleteTagging",
EventName::ObjectCreatedPutRetention => "s3:ObjectCreated:PutRetention",
EventName::ObjectCreatedPutLegalHold => "s3:ObjectCreated:PutLegalHold",
EventName::ObjectRemovedAll => "s3:ObjectRemoved:*",
EventName::ObjectRemovedDelete => "s3:ObjectRemoved:Delete",
EventName::ObjectRemovedDeleteMarkerCreated => "s3:ObjectRemoved:DeleteMarkerCreated",
EventName::ObjectRemovedNoOP => "s3:ObjectRemoved:NoOP",
EventName::ObjectRemovedDeleteAllVersions => "s3:ObjectRemoved:DeleteAllVersions",
EventName::LifecycleDelMarkerExpirationDelete => "s3:LifecycleDelMarkerExpiration:Delete",
EventName::ObjectReplicationAll => "s3:Replication:*",
EventName::ObjectReplicationFailed => "s3:Replication:OperationFailedReplication",
EventName::ObjectReplicationComplete => "s3:Replication:OperationCompletedReplication",
EventName::ObjectReplicationNotTracked => "s3:Replication:OperationNotTracked",
EventName::ObjectReplicationMissedThreshold => "s3:Replication:OperationMissedThreshold",
EventName::ObjectReplicationReplicatedAfterThreshold => "s3:Replication:OperationReplicatedAfterThreshold",
EventName::ObjectRestoreAll => "s3:ObjectRestore:*",
EventName::ObjectRestorePost => "s3:ObjectRestore:Post",
EventName::ObjectRestoreCompleted => "s3:ObjectRestore:Completed",
EventName::ObjectTransitionAll => "s3:ObjectTransition:*",
EventName::ObjectTransitionFailed => "s3:ObjectTransition:Failed",
EventName::ObjectTransitionComplete => "s3:ObjectTransition:Complete",
EventName::ScannerManyVersions => "s3:Scanner:ManyVersions",
EventName::ScannerLargeVersions => "s3:Scanner:LargeVersions",
EventName::ScannerBigPrefix => "s3:Scanner:BigPrefix",
// Note: Go's String() returns "" for both ObjectScannerAll and Everything.
EventName::ObjectScannerAll => "s3:Scanner:*", // follows the pattern used by Go's Expand
EventName::Everything => "", // mirrors Go, where Everything has no string form
}
}
/// Expands a compound "All" event type into its constituent single types.
pub fn expand(&self) -> Vec<Self> {
match self {
EventName::ObjectAccessedAll => vec![
EventName::ObjectAccessedGet,
EventName::ObjectAccessedHead,
EventName::ObjectAccessedGetRetention,
EventName::ObjectAccessedGetLegalHold,
EventName::ObjectAccessedAttributes,
],
EventName::ObjectCreatedAll => vec![
EventName::ObjectCreatedCompleteMultipartUpload,
EventName::ObjectCreatedCopy,
EventName::ObjectCreatedPost,
EventName::ObjectCreatedPut,
EventName::ObjectCreatedPutRetention,
EventName::ObjectCreatedPutLegalHold,
EventName::ObjectCreatedPutTagging,
EventName::ObjectCreatedDeleteTagging,
],
EventName::ObjectRemovedAll => vec![
EventName::ObjectRemovedDelete,
EventName::ObjectRemovedDeleteMarkerCreated,
EventName::ObjectRemovedNoOP,
EventName::ObjectRemovedDeleteAllVersions,
],
EventName::ObjectReplicationAll => vec![
EventName::ObjectReplicationFailed,
EventName::ObjectReplicationComplete,
EventName::ObjectReplicationNotTracked,
EventName::ObjectReplicationMissedThreshold,
EventName::ObjectReplicationReplicatedAfterThreshold,
],
EventName::ObjectRestoreAll => vec![EventName::ObjectRestorePost, EventName::ObjectRestoreCompleted],
EventName::ObjectTransitionAll => vec![EventName::ObjectTransitionFailed, EventName::ObjectTransitionComplete],
EventName::ObjectScannerAll => vec![
// new compound type, ported from Go
EventName::ScannerManyVersions,
EventName::ScannerLargeVersions,
EventName::ScannerBigPrefix,
],
EventName::Everything => {
// expands to every single event type, in discriminant order
SINGLE_EVENT_NAMES_IN_ORDER.to_vec()
}
// A single type expands to itself
_ => vec![*self],
}
}
/// Returns the bitmask for this event type.
/// Compound "All" types are expanded and their masks OR-ed together.
pub fn mask(&self) -> u64 {
let value = *self as u32;
if value > 0 && value <= LAST_SINGLE_TYPE_VALUE {
// It's a single type
1u64 << (value - 1)
} else {
// It's a compound type
let mut mask = 0u64;
for n in self.expand() {
mask |= n.mask(); // Recursively call mask
}
mask
}
}
}
impl fmt::Display for EventName {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.as_str())
}
}
/// Converts a string into an `EventName`; panics if the string is not a valid event name.
impl From<&str> for EventName {
fn from(event_str: &str) -> Self {
EventName::parse(event_str).unwrap_or_else(|e| panic!("{}", e))
}
}
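
// Illustrative sketch (not part of the original commit): a minimal test module
// exercising the invariants documented above: parse/as_str round-tripping,
// compound masks equal the OR of their expanded singles, and the order array
// matches the explicit discriminants.
#[cfg(test)]
mod event_name_sketch_tests {
    use super::*;

    #[test]
    fn parse_round_trips_single_types() {
        let name = EventName::parse("s3:ObjectCreated:Put").unwrap();
        assert_eq!(name, EventName::ObjectCreatedPut);
        assert_eq!(name.as_str(), "s3:ObjectCreated:Put");
    }

    #[test]
    fn compound_mask_is_or_of_expanded_singles() {
        let compound = EventName::ObjectCreatedAll;
        let or_of_singles = compound.expand().iter().fold(0u64, |m, n| m | n.mask());
        assert_eq!(compound.mask(), or_of_singles);
        // Everything covers all 32 single types: bits 0..=31 set.
        assert_eq!(EventName::Everything.mask(), (1u64 << 32) - 1);
    }

    #[test]
    fn order_array_matches_discriminants() {
        for (i, n) in SINGLE_EVENT_NAMES_IN_ORDER.iter().enumerate() {
            assert_eq!(*n as u32, (i + 1) as u32);
        }
    }
}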
/// Represents the identity of the user who triggered the event
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Identity {
@@ -532,17 +258,6 @@ fn initialize_response_elements(elements: &mut HashMap<String, String>, keys: &[
}
}
/// Represents a log of events for sending to targets
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EventLog {
/// The event name
pub event_name: EventName,
/// The object key
pub key: String,
/// The list of events
pub records: Vec<Event>,
}
#[derive(Debug, Clone)]
pub struct EventArgs {
pub event_name: EventName,


@@ -12,19 +12,22 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::{
error::TargetError,
target::{Target, mqtt::MQTTArgs, webhook::WebhookArgs},
};
use async_trait::async_trait;
use rumqttc::QoS;
use rustfs_config::notify::{
DEFAULT_DIR, DEFAULT_LIMIT, ENV_NOTIFY_MQTT_KEYS, ENV_NOTIFY_WEBHOOK_KEYS, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL,
MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME,
NOTIFY_MQTT_KEYS, NOTIFY_WEBHOOK_KEYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT,
WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
ENV_NOTIFY_MQTT_KEYS, ENV_NOTIFY_WEBHOOK_KEYS, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS,
MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_KEYS, NOTIFY_WEBHOOK_KEYS,
WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use crate::Event;
use rustfs_config::{DEFAULT_DIR, DEFAULT_LIMIT};
use rustfs_ecstore::config::KVS;
use rustfs_targets::{
Target,
error::TargetError,
target::{mqtt::MQTTArgs, webhook::WebhookArgs},
};
use std::collections::HashSet;
use std::time::Duration;
use tracing::{debug, warn};
@@ -34,7 +37,7 @@ use url::Url;
#[async_trait]
pub trait TargetFactory: Send + Sync {
/// Creates a target from configuration
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target + Send + Sync>, TargetError>;
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<Event> + Send + Sync>, TargetError>;
/// Validates target configuration
fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError>;
@@ -53,7 +56,7 @@ pub struct WebhookTargetFactory;
#[async_trait]
impl TargetFactory for WebhookTargetFactory {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target + Send + Sync>, TargetError> {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<Event> + Send + Sync>, TargetError> {
// All config values are now read directly from the merged `config` KVS.
let endpoint = config
.lookup(WEBHOOK_ENDPOINT)
@@ -72,9 +75,10 @@ impl TargetFactory for WebhookTargetFactory {
.unwrap_or(DEFAULT_LIMIT),
client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
target_type: rustfs_targets::target::TargetType::NotifyEvent,
};
let target = crate::target::webhook::WebhookTarget::new(id, args)?;
let target = rustfs_targets::target::webhook::WebhookTarget::new(id, args)?;
Ok(Box::new(target))
}
@@ -119,7 +123,7 @@ pub struct MQTTTargetFactory;
#[async_trait]
impl TargetFactory for MQTTTargetFactory {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target + Send + Sync>, TargetError> {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<Event> + Send + Sync>, TargetError> {
let broker = config
.lookup(MQTT_BROKER)
.ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
@@ -161,9 +165,10 @@ impl TargetFactory for MQTTTargetFactory {
.lookup(MQTT_QUEUE_LIMIT)
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(DEFAULT_LIMIT),
target_type: rustfs_targets::target::TargetType::NotifyEvent,
};
let target = crate::target::mqtt::MQTTTarget::new(id, args)?;
let target = rustfs_targets::target::mqtt::MQTTTarget::new(id, args)?;
Ok(Box::new(target))
}
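
The recurring change in these hunks is the move from `Box<dyn Target + Send + Sync>` to `Box<dyn Target<Event> + Send + Sync>`: the `Target` trait exported by `rustfs_targets` is now generic over the payload type it delivers. A minimal sketch of the shape this implies; the method name and bounds are assumptions for illustration, not the crate's actual API:

// Sketch only; `deliver` is a hypothetical method name.
#[async_trait::async_trait]
pub trait Target<E: Send + Sync>: Send + Sync {
    async fn deliver(&self, payload: &E) -> Result<(), TargetError>;
}

// Factories now return payload-typed trait objects, e.g.
// Box<dyn Target<Event> + Send + Sync>, so the notification system can hand
// strongly-typed `Event`s to each target without downcasting.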


@@ -12,11 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::{Event, EventArgs, NotificationError, NotificationSystem};
use crate::{BucketNotificationConfig, Event, EventArgs, NotificationError, NotificationSystem};
use once_cell::sync::Lazy;
use rustfs_ecstore::config::Config;
use rustfs_targets::EventName;
use rustfs_targets::arn::TargetID;
use std::sync::{Arc, OnceLock};
use tracing::instrument;
use tracing::{error, instrument};
static NOTIFICATION_SYSTEM: OnceLock<Arc<NotificationSystem>> = OnceLock::new();
// Create a globally unique Notifier instance
@@ -57,6 +59,14 @@ pub struct Notifier {}
impl Notifier {
/// Notify an event asynchronously.
/// This is the only entry point for all event notifications in the system.
/// # Arguments
/// - `args`: The event arguments containing details about the event to be notified.
///
/// # Returns
/// Returns `()`; failures are logged rather than propagated to the caller.
///
/// # Usage
/// Use this to notify events in the system, such as object creation, deletion, or updates.
#[instrument(skip(self, args))]
pub async fn notify(&self, args: EventArgs) {
// Obtain the NotificationSystem instance (dependency-injection / service-locator style)
@@ -64,7 +74,7 @@ impl Notifier {
// Return early if the notification system itself cannot be retrieved
Some(sys) => sys,
None => {
tracing::error!("Notification system is not initialized.");
error!("Notification system is not initialized.");
return;
}
};
@@ -76,6 +86,7 @@ impl Notifier {
// Check if any subscribers are interested in the event
if !notification_sys.has_subscriber(&args.bucket_name, &args.event_name).await {
// error!("No subscribers for event: {} in bucket: {}", args.event_name, args.bucket_name);
return;
}
@@ -83,4 +94,96 @@ impl Notifier {
let event = Arc::new(Event::new(args));
notification_sys.send_event(event).await;
}
/// Adds notification rules for the specified bucket and loads the configuration.
/// # Arguments
/// - `bucket_name`: The name of the target bucket.
/// - `region`: The region where the bucket is located.
/// - `event_names`: A list of event names that trigger notifications.
/// - `prefix`: The object-key prefix that triggers notifications.
/// - `suffix`: The object-key suffix that triggers notifications.
/// - `target_ids`: A list of target IDs that will receive notifications.
///
/// # Returns
/// Returns `Result<(), NotificationError>`: `Ok(())` on success, an error on failure.
///
/// # Usage
/// This function allows you to dynamically add notification rules for a specific bucket.
pub async fn add_bucket_notification_rule(
&self,
bucket_name: &str,
region: &str,
event_names: &[EventName],
prefix: &str,
suffix: &str,
target_ids: &[TargetID],
) -> Result<(), NotificationError> {
// Construct the match pattern by splicing the prefix, a '*', and the suffix
let mut pattern = String::new();
if !prefix.is_empty() {
pattern.push_str(prefix);
}
pattern.push('*');
if !suffix.is_empty() {
pattern.push_str(suffix);
}
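// e.g., prefix "logs/" with suffix ".png" yields the pattern "logs/*.png"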
// Create BucketNotificationConfig
let mut bucket_config = BucketNotificationConfig::new(region);
for target_id in target_ids {
bucket_config.add_rule(event_names, pattern.clone(), target_id.clone());
}
// Get global NotificationSystem
let notification_sys = match notification_system() {
Some(sys) => sys,
None => return Err(NotificationError::ServerNotInitialized),
};
// Load the configuration
notification_sys
.load_bucket_notification_config(bucket_name, &bucket_config)
.await
}
/// Dynamically adds notification rules for different event types.
///
/// # Arguments
/// - `bucket_name`: The name of the target bucket.
/// - `region`: The region where the bucket is located.
/// - `event_rules`: Each rule contains a list of event types, a prefix, a suffix, and target IDs.
///
/// # Returns
/// Returns `Result<(), NotificationError>`: `Ok(())` on success, an error on failure.
///
/// # Usage
/// Supports adding notification rules for multiple event types, prefixes, suffixes, and targets to the same bucket in one batch.
pub async fn add_event_specific_rules(
&self,
bucket_name: &str,
region: &str,
event_rules: &[(Vec<EventName>, &str, &str, Vec<TargetID>)],
) -> Result<(), NotificationError> {
let mut bucket_config = BucketNotificationConfig::new(region);
for (event_names, prefix, suffix, target_ids) in event_rules {
// Use `new_pattern` to construct a matching pattern
let pattern = crate::rules::pattern::new_pattern(Some(prefix), Some(suffix));
for target_id in target_ids {
bucket_config.add_rule(event_names, pattern.clone(), target_id.clone());
}
}
// Get global NotificationSystem instance
let notification_sys = match notification_system() {
Some(sys) => sys,
None => return Err(NotificationError::ServerNotInitialized),
};
// Load the configuration
notification_sys
.load_bucket_notification_config(bucket_name, &bucket_config)
.await
}
}
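
// Illustrative usage sketch (not part of the original commit). The bucket,
// region, and target values below are made up, and `TargetID::new` is assumed
// for illustration; check the crate for the real constructor.
//
// async fn setup_rules(notifier: &Notifier) -> Result<(), NotificationError> {
//     notifier
//         .add_bucket_notification_rule(
//             "my-bucket",
//             "us-east-1",
//             &[EventName::ObjectCreatedAll, EventName::ObjectRemovedAll],
//             "logs/",
//             ".json",
//             &[TargetID::new("1".to_string(), "webhook".to_string())],
//         )
//         .await
// }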


@@ -12,13 +12,15 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::arn::TargetID;
use crate::store::{Key, Store};
use crate::{
Event, EventName, StoreError, Target, error::NotificationError, notifier::EventNotifier, registry::TargetRegistry,
rules::BucketNotificationConfig, stream,
Event, error::NotificationError, notifier::EventNotifier, registry::TargetRegistry, rules::BucketNotificationConfig, stream,
};
use rustfs_ecstore::config::{Config, KVS};
use rustfs_targets::EventName;
use rustfs_targets::arn::TargetID;
use rustfs_targets::store::{Key, Store};
use rustfs_targets::target::EntityTarget;
use rustfs_targets::{StoreError, Target};
use std::collections::HashMap;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
@@ -127,7 +129,7 @@ impl NotificationSystem {
let config = self.config.read().await;
debug!("Initializing notification system with config: {:?}", *config);
let targets: Vec<Box<dyn Target + Send + Sync>> = self.registry.create_targets_from_config(&config).await?;
let targets: Vec<Box<dyn Target<Event> + Send + Sync>> = self.registry.create_targets_from_config(&config).await?;
info!("{} notification targets were created", targets.len());
@@ -318,8 +320,8 @@ impl NotificationSystem {
/// Enhanced event-stream startup: wires in metrics collection and concurrency control.
fn enhanced_start_event_stream(
&self,
store: Box<dyn Store<Event, Error = StoreError, Key = Key> + Send>,
target: Arc<dyn Target + Send + Sync>,
store: Box<dyn Store<EntityTarget<Event>, Error = StoreError, Key = Key> + Send>,
target: Arc<dyn Target<Event> + Send + Sync>,
metrics: Arc<NotificationMetrics>,
semaphore: Arc<Semaphore>,
) -> mpsc::Sender<()> {
@@ -348,7 +350,7 @@ impl NotificationSystem {
// Create new targets from the configuration.
// This function is now responsible for merging env overrides, then creating and persisting the final configuration.
let targets: Vec<Box<dyn Target + Send + Sync>> = self
let targets: Vec<Box<dyn Target<Event> + Send + Sync>> = self
.registry
.create_targets_from_config(&new_config)
.await

Some files were not shown because too many files have changed in this diff.