mirror of
https://github.com/rustfs/rustfs.git
synced 2026-01-17 01:30:33 +00:00
refactor: replace lazy_static with LazyLock and notify crate registry create_targets_from_config (#311)
* improve code for notify
* improve code for logger and fix typo (#272)
* Add GNU to build.yml (#275)
* fix unzip error
* fix url change error
* Simplify user experience and integrate console and endpoint
* Add gnu to build.yml
* upgrade version
* feat: add `cargo clippy --fix --allow-dirty` to pre-commit command (#282)

  Resolves #277

  - Add `--fix` flag to automatically fix clippy warnings
  - Add `--allow-dirty` flag to run on dirty Git trees
  - Improves code quality in pre-commit workflow
* fix: the issue where preview fails when the path length exceeds 255 characters (#280)
* fix
* fix: improve Windows build support and CI/CD workflow (#283)

  - Fix Windows zip command issue by using PowerShell Compress-Archive
  - Add Windows support for OSS upload with ossutil
  - Replace Chinese comments with English in build.yml
  - Fix bash syntax error in package_zip function
  - Improve code formatting and consistency
  - Update various configuration files for better cross-platform support

  Resolves Windows build failures in GitHub Actions.
* fix: update link in README.md leading to a 404 error (#285)
* add rustfs.spec for rustfs (#103): add support on loongarch64
* improve cargo.lock
* build(deps): bump the dependencies group with 5 updates (#289)

  | Package | From | To |
  | --- | --- | --- |
  | [hyper-util](https://github.com/hyperium/hyper-util) | `0.1.15` | `0.1.16` |
  | [rand](https://github.com/rust-random/rand) | `0.9.1` | `0.9.2` |
  | [serde_json](https://github.com/serde-rs/json) | `1.0.140` | `1.0.141` |
  | [strum](https://github.com/Peternator7/strum) | `0.27.1` | `0.27.2` |
  | [sysinfo](https://github.com/GuillaumeGomez/sysinfo) | `0.36.0` | `0.36.1` |

  Updates `hyper-util` from 0.1.15 to 0.1.16 ([release notes](https://github.com/hyperium/hyper-util/releases), [changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md), [commits](https://github.com/hyperium/hyper-util/compare/v0.1.15...v0.1.16)).
  Updates `rand` from 0.9.1 to 0.9.2 ([release notes](https://github.com/rust-random/rand/releases), [changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md), [commits](https://github.com/rust-random/rand/compare/rand_core-0.9.1...rand_core-0.9.2)).
  Updates `serde_json` from 1.0.140 to 1.0.141 ([release notes](https://github.com/serde-rs/json/releases), [commits](https://github.com/serde-rs/json/compare/v1.0.140...v1.0.141)).
  Updates `strum` from 0.27.1 to 0.27.2 ([release notes](https://github.com/Peternator7/strum/releases), [changelog](https://github.com/Peternator7/strum/blob/master/CHANGELOG.md), [commits](https://github.com/Peternator7/strum/compare/v0.27.1...v0.27.2)).
  Updates `sysinfo` from 0.36.0 to 0.36.1 ([changelog](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md), [commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.36.0...v0.36.1)).

  ---
  updated-dependencies:
  - dependency-name: hyper-util
    dependency-version: 0.1.16
    dependency-type: direct:production
    update-type: version-update:semver-patch
    dependency-group: dependencies
  - dependency-name: rand
    dependency-version: 0.9.2
    dependency-type: direct:production
    update-type: version-update:semver-patch
    dependency-group: dependencies
  - dependency-name: serde_json
    dependency-version: 1.0.141
    dependency-type: direct:production
    update-type: version-update:semver-patch
    dependency-group: dependencies
  - dependency-name: strum
    dependency-version: 0.27.2
    dependency-type: direct:production
    update-type: version-update:semver-patch
    dependency-group: dependencies
  - dependency-name: sysinfo
    dependency-version: 0.36.1
    dependency-type: direct:production
    update-type: version-update:semver-patch
    dependency-group: dependencies
  ...

  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* improve code for logger
* improve
* upgrade
* refactor: optimize the build workflow, unify handling of `latest` files, and simplify artifact upload (#293)
* Refactor: DatabaseManagerSystem as global

  Signed-off-by: junxiang Mu <1948535941@qq.com>
* fix: fmt

  Signed-off-by: junxiang Mu <1948535941@qq.com>
* Test: add e2e_test for s3select

  Signed-off-by: junxiang Mu <1948535941@qq.com>
* Test: add test script for e2e

  Signed-off-by: junxiang Mu <1948535941@qq.com>
* improve code for registry and integration
* improve code for registry `create_targets_from_config`
* fix
* Feature up/ilm (#305)
* fix
* fix
* fix
* fix delete-marker expiration; add api_restore
* fix
* time retry object upload
* lock file
* make fmt
* fix
* restore object
* fix
* fix
* serde-rs-xml -> quick-xml
* fix
* checksum
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* translate language to English
* upgrade clap version from 4.5.41 to 4.5.42
* refactor: replace `lazy_static` with `LazyLock`
* add router
* fix: modify comment
* improve code
* fix typos
* fix
* fix: modify name and fmt
* improve code for registry
* fix test

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
Co-authored-by: shiro.lee <69624924+shiroleeee@users.noreply.github.com>
Co-authored-by: Marco Orlandin <mipnamic@mipnamic.net>
Co-authored-by: zhangwenlong <zhangwenlong@loongson.cn>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: likewu <likewu@126.com>
Cargo.lock (generated): 406 changed lines
@@ -975,7 +975,7 @@ dependencies = [
  "hyper-util",
  "pin-project-lite",
  "rustls 0.21.12",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-native-certs 0.8.1",
  "rustls-pki-types",
  "tokio",
@@ -1191,7 +1191,7 @@ dependencies = [
  "hyper 1.6.0",
  "hyper-util",
  "pin-project-lite",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-pemfile 2.2.0",
  "rustls-pki-types",
  "tokio",
@@ -1741,9 +1741,9 @@ dependencies = [

 [[package]]
 name = "clap"
-version = "4.5.41"
+version = "4.5.42"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "be92d32e80243a54711e5d7ce823c35c41c9d929dc4ab58e1276f625841aadf9"
+checksum = "ed87a9d530bb41a67537289bafcac159cb3ee28460e0a4571123d2a778a6a882"
 dependencies = [
  "clap_builder",
  "clap_derive",
@@ -1751,9 +1751,9 @@ dependencies = [

 [[package]]
 name = "clap_builder"
-version = "4.5.41"
+version = "4.5.42"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "707eab41e9622f9139419d573eca0900137718000c517d47da73045f54331c3d"
+checksum = "64f4f3f3c77c94aff3c7e9aac9a2ca1974a5adf392a8bb751e827d6d127ab966"
 dependencies = [
  "anstream",
  "anstyle",
@@ -1947,18 +1947,18 @@ dependencies = [

 [[package]]
 name = "const-str"
-version = "0.6.3"
+version = "0.6.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "041fbfcf8e7054df725fb9985297e92422cdc80fcf313665f5ca3d761bb63f4c"
+checksum = "451d0640545a0553814b4c646eb549343561618838e9b42495f466131fe3ad49"
 dependencies = [
  "const-str-proc-macro",
 ]

 [[package]]
 name = "const-str-proc-macro"
-version = "0.6.3"
+version = "0.6.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f801882b7ecd4188f4bca0317f34e022d623590d85893d7024b18d14f2a3b40b"
+checksum = "95013972663dd72254b963e48857284080001ffee418731f065fcf5290a5530d"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -2334,8 +2334,18 @@ version = "0.20.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "fc7f46116c46ff9ab3eb1597a45688b6715c6e628b5c133e288e709a29bcb4ee"
 dependencies = [
- "darling_core",
- "darling_macro",
+ "darling_core 0.20.11",
+ "darling_macro 0.20.11",
 ]
+
+[[package]]
+name = "darling"
+version = "0.21.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a79c4acb1fd5fa3d9304be4c76e031c54d2e92d172a393e24b19a14fe8532fe9"
+dependencies = [
+ "darling_core 0.21.0",
+ "darling_macro 0.21.0",
+]

 [[package]]
@@ -2352,13 +2362,38 @@ dependencies = [
  "syn 2.0.104",
 ]

 [[package]]
+name = "darling_core"
+version = "0.21.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "74875de90daf30eb59609910b84d4d368103aaec4c924824c6799b28f77d6a1d"
+dependencies = [
+ "fnv",
+ "ident_case",
+ "proc-macro2",
+ "quote",
+ "strsim",
+ "syn 2.0.104",
+]
+
+[[package]]
 name = "darling_macro"
 version = "0.20.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "fc34b93ccb385b40dc71c6fceac4b2ad23662c7eeb248cf10d529b7e055b6ead"
 dependencies = [
- "darling_core",
+ "darling_core 0.20.11",
  "quote",
  "syn 2.0.104",
 ]
+
+[[package]]
+name = "darling_macro"
+version = "0.21.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e79f8e61677d5df9167cd85265f8e5f64b215cdea3fb55eebc3e622e44c7a146"
+dependencies = [
+ "darling_core 0.21.0",
+ "quote",
+ "syn 2.0.104",
+]
@@ -2962,7 +2997,7 @@ version = "0.20.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2d5bcf7b024d6835cfb3d473887cd966994907effbe9227e8c8219824d06c4e8"
 dependencies = [
- "darling",
+ "darling 0.20.11",
  "proc-macro2",
  "quote",
  "syn 2.0.104",
@@ -3575,6 +3610,12 @@ version = "1.0.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"

+[[package]]
+name = "dyn-clone"
+version = "1.0.20"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555"
+
 [[package]]
 name = "e2e_test"
 version = "0.0.5"
@@ -3595,7 +3636,7 @@ dependencies = [
  "serde_json",
  "serial_test",
  "tokio",
- "tonic",
+ "tonic 0.14.0",
  "url",
 ]
@@ -3700,7 +3741,7 @@ version = "0.12.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "76d07902c93376f1e96c34abc4d507c0911df3816cef50b01f5a2ff3ad8c370d"
 dependencies = [
- "darling",
+ "darling 0.20.11",
  "proc-macro2",
  "quote",
  "syn 2.0.104",
@@ -4796,7 +4837,7 @@ dependencies = [
  "hyper 1.6.0",
  "hyper-util",
  "log",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-native-certs 0.8.1",
  "rustls-pki-types",
  "tokio",
@@ -5279,16 +5320,17 @@ dependencies = [

 [[package]]
 name = "keyring"
-version = "3.6.2"
+version = "3.6.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1961983669d57bdfe6c0f3ef8e4c229b5ef751afcc7d87e4271d2f71f6ccfa8b"
+checksum = "eebcc3aff044e5944a8fbaf69eb277d11986064cba30c468730e8b9909fb551c"
 dependencies = [
  "byteorder",
  "dbus-secret-service",
  "log",
- "security-framework 2.11.1",
+ "security-framework 3.2.0",
- "windows-sys 0.59.0",
+ "windows-sys 0.60.2",
  "zeroize",
 ]
@@ -5465,7 +5507,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "07033963ba89ebaf1584d767badaa2e8fcec21aedea6b8c0346d487d49c28667"
 dependencies = [
  "cfg-if",
- "windows-targets 0.53.2",
+ "windows-targets 0.53.3",
 ]
@@ -5476,13 +5518,13 @@ checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"

 [[package]]
 name = "libredox"
-version = "0.1.6"
+version = "0.1.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4488594b9328dee448adb906d8b126d9b7deb7cf5c22161ee591610bb1be83c0"
+checksum = "360e552c93fa0e8152ab463bc4c4837fce76a225df11dfaeea66c313de5e61f7"
 dependencies = [
  "bitflags 2.9.1",
  "libc",
- "redox_syscall 0.5.16",
+ "redox_syscall 0.5.17",
 ]
@@ -6540,10 +6582,10 @@ dependencies = [
  "opentelemetry",
  "opentelemetry-proto",
  "opentelemetry_sdk",
- "prost",
+ "prost 0.13.5",
  "thiserror 2.0.12",
  "tokio",
- "tonic",
+ "tonic 0.13.1",
  "tracing",
 ]
@@ -6555,8 +6597,8 @@ checksum = "2e046fd7660710fe5a05e8748e70d9058dc15c94ba914e7c4faa7c728f0e8ddc"
 dependencies = [
  "opentelemetry",
  "opentelemetry_sdk",
- "prost",
- "tonic",
+ "prost 0.13.5",
+ "tonic 0.13.1",
 ]
@@ -6691,7 +6733,7 @@ checksum = "bc838d2a56b5b1a6c25f55575dfc605fabb63bb2365f6c2353ef9159aa69e4a5"
 dependencies = [
  "cfg-if",
  "libc",
- "redox_syscall 0.5.16",
+ "redox_syscall 0.5.17",
  "smallvec",
  "windows-targets 0.52.6",
 ]
@@ -7253,14 +7295,24 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2796faa41db3ec313a31f7624d9286acf277b52de526150b7e69f3debf891ee5"
 dependencies = [
  "bytes",
- "prost-derive",
+ "prost-derive 0.13.5",
 ]
+
+[[package]]
+name = "prost"
+version = "0.14.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7231bd9b3d3d33c86b58adbac74b5ec0ad9f496b19d22801d773636feaa95f3d"
+dependencies = [
+ "bytes",
+ "prost-derive 0.14.1",
+]

 [[package]]
 name = "prost-build"
-version = "0.13.5"
+version = "0.14.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf"
+checksum = "ac6c3320f9abac597dcbc668774ef006702672474aad53c6d596b62e487b40b1"
 dependencies = [
  "heck 0.5.0",
  "itertools 0.14.0",
@@ -7269,8 +7321,10 @@ dependencies = [
  "once_cell",
  "petgraph",
  "prettyplease",
- "prost",
+ "prost 0.14.1",
  "prost-types",
+ "pulldown-cmark",
+ "pulldown-cmark-to-cmark",
  "regex",
  "syn 2.0.104",
  "tempfile",
@@ -7290,12 +7344,25 @@ dependencies = [
 ]

 [[package]]
-name = "prost-types"
-version = "0.13.5"
+name = "prost-derive"
+version = "0.14.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "52c2c1bf36ddb1a1c396b3601a3cec27c2462e45f07c386894ec3ccf5332bd16"
+checksum = "9120690fafc389a67ba3803df527d0ec9cbbc9cc45e4cc20b332996dfb672425"
 dependencies = [
- "prost",
+ "anyhow",
+ "itertools 0.14.0",
+ "proc-macro2",
+ "quote",
+ "syn 2.0.104",
 ]
+
+[[package]]
+name = "prost-types"
+version = "0.14.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b9b4db3d6da204ed77bb26ba83b6122a73aeb2e87e25fbf7ad2e84c4ccbf8f72"
+dependencies = [
+ "prost 0.14.1",
+]

 [[package]]
@@ -7307,6 +7374,26 @@ dependencies = [
  "cc",
 ]

+[[package]]
+name = "pulldown-cmark"
+version = "0.13.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e8bbe1a966bd2f362681a44f6edce3c2310ac21e4d5067a6e7ec396297a6ea0"
+dependencies = [
+ "bitflags 2.9.1",
+ "memchr",
+ "unicase",
+]
+
+[[package]]
+name = "pulldown-cmark-to-cmark"
+version = "21.0.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e5b6a0769a491a08b31ea5c62494a8f144ee0987d86d670a8af4df1e1b7cde75"
+dependencies = [
+ "pulldown-cmark",
+]
+
 [[package]]
 name = "quick-xml"
 version = "0.37.5"
@@ -7340,7 +7427,7 @@ dependencies = [
  "quinn-proto",
  "quinn-udp",
  "rustc-hash 2.1.1",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "socket2 0.5.10",
  "thiserror 2.0.12",
  "tokio",
@@ -7360,7 +7447,7 @@ dependencies = [
  "rand 0.9.2",
  "ring",
  "rustc-hash 2.1.1",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-pki-types",
  "slab",
  "thiserror 2.0.12",
@@ -7607,9 +7694,9 @@ dependencies = [

 [[package]]
 name = "redox_syscall"
-version = "0.5.16"
+version = "0.5.17"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7251471db004e509f4e75a62cca9435365b5ec7bcdff530d612ac7c87c44a792"
+checksum = "5407465600fb0548f1442edf71dd20683c6ed326200ace4b1ef0763521bb3b77"
 dependencies = [
  "bitflags 2.9.1",
 ]
@@ -7635,6 +7722,26 @@ dependencies = [
  "readme-rustdocifier",
 ]

+[[package]]
+name = "ref-cast"
+version = "1.0.24"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4a0ae411dbe946a674d89546582cea4ba2bb8defac896622d6496f14c23ba5cf"
+dependencies = [
+ "ref-cast-impl",
+]
+
+[[package]]
+name = "ref-cast-impl"
+version = "1.0.24"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1165225c21bff1f3bbce98f5a1f889949bc902d3575308cc7b0de30b4f6d27c7"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.104",
+]
+
 [[package]]
 name = "regex"
 version = "1.11.1"
@@ -7711,7 +7818,7 @@ dependencies = [
  "percent-encoding",
  "pin-project-lite",
  "quinn",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-pki-types",
  "serde",
  "serde_json",
@@ -7803,6 +7910,40 @@ dependencies = [
  "windows-sys 0.52.0",
 ]

+[[package]]
+name = "rmcp"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "824daba0a34f8c5c5392295d381e0800f88fd986ba291699f8785f05fa344c1e"
+dependencies = [
+ "base64 0.22.1",
+ "chrono",
+ "futures",
+ "paste",
+ "pin-project-lite",
+ "rmcp-macros",
+ "schemars",
+ "serde",
+ "serde_json",
+ "thiserror 2.0.12",
+ "tokio",
+ "tokio-util",
+ "tracing",
+]
+
+[[package]]
+name = "rmcp-macros"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ad6543c0572a4dbc125c23e6f54963ea9ba002294fd81dd4012c204219b0dcaa"
+dependencies = [
+ "darling 0.21.0",
+ "proc-macro2",
+ "quote",
+ "serde_json",
+ "syn 2.0.104",
+]
+
 [[package]]
 name = "rmp"
 version = "0.8.14"
@@ -7954,9 +8095,9 @@ dependencies = [

 [[package]]
 name = "rustc-demangle"
-version = "0.1.25"
+version = "0.1.26"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "989e6739f80c4ad5b13e0fd7fe89531180375b18520cc8c82080e4dc4035b84f"
+checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace"

 [[package]]
 name = "rustc-hash"
@@ -8024,7 +8165,7 @@ dependencies = [
  "rustfs-s3select-query",
  "rustfs-utils",
  "rustfs-zip",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "s3s",
  "serde",
  "serde_json",
@@ -8040,7 +8181,7 @@ dependencies = [
  "tokio-stream",
  "tokio-tar",
  "tokio-util",
- "tonic",
+ "tonic 0.14.0",
  "tower",
  "tower-http",
  "tracing",
@@ -8128,7 +8269,7 @@ dependencies = [
  "s3s",
  "serde",
  "tokio",
- "tonic",
+ "tonic 0.14.0",
  "uuid",
 ]
@@ -8137,8 +8278,6 @@ name = "rustfs-config"
 version = "0.0.5"
 dependencies = [
  "const-str",
- "serde",
- "serde_json",
 ]

 [[package]]
@@ -8211,7 +8350,7 @@ dependencies = [
  "rustfs-signer",
  "rustfs-utils",
  "rustfs-workers",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "s3s",
  "serde",
  "serde_json",
@@ -8226,7 +8365,7 @@ dependencies = [
  "tokio",
  "tokio-stream",
  "tokio-util",
- "tonic",
+ "tonic 0.14.0",
  "tower",
  "tracing",
  "url",
@@ -8316,7 +8455,7 @@ dependencies = [
  "serde_json",
  "thiserror 2.0.12",
  "tokio",
- "tonic",
+ "tonic 0.14.0",
  "tracing",
  "url",
  "uuid",
@@ -8334,6 +8473,23 @@ dependencies = [
  "time",
 ]

+[[package]]
+name = "rustfs-mcp"
+version = "0.0.5"
+dependencies = [
+ "anyhow",
+ "aws-sdk-s3",
+ "clap",
+ "mime_guess",
+ "rmcp",
+ "schemars",
+ "serde",
+ "serde_json",
+ "tokio",
+ "tracing",
+ "tracing-subscriber",
+]
+
 [[package]]
 name = "rustfs-notify"
 version = "0.0.5"
@@ -8343,6 +8499,7 @@ dependencies = [
  "chrono",
  "dashmap 6.1.0",
+ "form_urlencoded",
  "futures",
  "once_cell",
  "quick-xml 0.38.0",
  "reqwest",
@@ -8420,10 +8577,11 @@ name = "rustfs-protos"
 version = "0.0.5"
 dependencies = [
  "flatbuffers 25.2.10",
- "prost",
+ "prost 0.14.1",
  "rustfs-common",
- "tonic",
- "tonic-build",
+ "tonic 0.14.0",
+ "tonic-prost",
+ "tonic-prost-build",
 ]
@@ -8556,7 +8714,7 @@ dependencies = [
  "rand 0.9.2",
  "regex",
  "rustfs-config",
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "rustls-pemfile 2.2.0",
  "rustls-pki-types",
  "s3s",
@@ -8647,9 +8805,9 @@ dependencies = [

 [[package]]
 name = "rustls"
-version = "0.23.29"
+version = "0.23.31"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2491382039b29b9b11ff08b76ff6c97cf287671dbb74f0be44bda389fffe9bd1"
+checksum = "c0ebcbd2f03de0fc1122ad9bb24b127a5a6cd51d72604a3f3c50ac459762b6cc"
 dependencies = [
  "aws-lc-rs",
  "log",
@@ -8849,6 +9007,32 @@ dependencies = [
  "windows-sys 0.59.0",
 ]

+[[package]]
+name = "schemars"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "82d20c4491bc164fa2f6c5d44565947a52ad80b9505d8e36f8d54c27c739fcd0"
+dependencies = [
+ "chrono",
+ "dyn-clone",
+ "ref-cast",
+ "schemars_derive",
+ "serde",
+ "serde_json",
+]
+
+[[package]]
+name = "schemars_derive"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "33d020396d1d138dc19f1165df7545479dcd58d93810dc5d646a16e55abefa80"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "serde_derive_internals",
+ "syn 2.0.104",
+]
+
 [[package]]
 name = "scoped-tls"
 version = "1.0.1"
@@ -9035,6 +9219,17 @@ dependencies = [
  "syn 2.0.104",
 ]

+[[package]]
+name = "serde_derive_internals"
+version = "0.29.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.104",
+]
+
 [[package]]
 name = "serde_fmt"
 version = "1.0.3"
@@ -10166,7 +10361,7 @@ version = "0.26.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
 dependencies = [
- "rustls 0.23.29",
+ "rustls 0.23.31",
  "tokio",
 ]
@@ -10291,6 +10486,33 @@ name = "tonic"
 version = "0.13.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "7e581ba15a835f4d9ea06c55ab1bd4dce26fc53752c69a04aac00703bfb49ba9"
 dependencies = [
  "async-trait",
+ "base64 0.22.1",
+ "bytes",
+ "flate2",
+ "http 1.3.1",
+ "http-body 1.0.1",
+ "http-body-util",
+ "hyper 1.6.0",
+ "hyper-timeout",
+ "hyper-util",
+ "percent-encoding",
+ "pin-project",
+ "prost 0.13.5",
+ "tokio",
+ "tokio-stream",
+ "tower",
+ "tower-layer",
+ "tower-service",
+ "tracing",
+]
+
+[[package]]
+name = "tonic"
+version = "0.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "308e1db96abdccdf0a9150fb69112bf6ea72640e0bd834ef0c4a618ccc8c8ddc"
+dependencies = [
+ "async-trait",
  "axum",
@@ -10306,8 +10528,8 @@ dependencies = [
  "hyper-util",
  "percent-encoding",
  "pin-project",
- "prost",
- "socket2 0.5.10",
+ "socket2 0.6.0",
  "sync_wrapper",
  "tokio",
  "tokio-stream",
  "tower",
@@ -10318,9 +10540,32 @@ dependencies = [

 [[package]]
 name = "tonic-build"
-version = "0.13.1"
+version = "0.14.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "eac6f67be712d12f0b41328db3137e0d0757645d8904b4cb7d51cd9c2279e847"
+checksum = "18262cdd13dec66e8e3f2e3fe535e4b2cc706fab444a7d3678d75d8ac2557329"
 dependencies = [
  "prettyplease",
  "proc-macro2",
  "quote",
  "syn 2.0.104",
 ]
+
+[[package]]
+name = "tonic-prost"
+version = "0.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2d8b5b7a44512c59f5ad45e0c40e53263cbbf4426d74fe6b569e04f1d4206e9c"
+dependencies = [
+ "bytes",
+ "prost 0.14.1",
+ "tonic 0.14.0",
+]
+
+[[package]]
+name = "tonic-prost-build"
+version = "0.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "114cca66d757d72422ef8cccf8be3065321860ac9fa4be73aab37a8a20a9a805"
+dependencies = [
+ "prettyplease",
+ "proc-macro2",
@@ -10328,6 +10573,8 @@ dependencies = [
  "prost-types",
  "quote",
  "syn 2.0.104",
+ "tempfile",
+ "tonic-build",
 ]

 [[package]]
@@ -10990,13 +11237,13 @@ dependencies = [

 [[package]]
 name = "wayland-backend"
-version = "0.3.10"
+version = "0.3.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fe770181423e5fc79d3e2a7f4410b7799d5aab1de4372853de3c6aa13ca24121"
+checksum = "673a33c33048a5ade91a6b139580fa174e19fb0d23f396dca9fa15f2e1e49b35"
 dependencies = [
  "cc",
  "downcast-rs",
- "rustix 0.38.44",
+ "rustix 1.0.8",
  "scoped-tls",
  "smallvec",
  "wayland-sys",
@@ -11004,21 +11251,21 @@ dependencies = [

 [[package]]
 name = "wayland-client"
-version = "0.31.10"
+version = "0.31.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "978fa7c67b0847dbd6a9f350ca2569174974cd4082737054dbb7fbb79d7d9a61"
+checksum = "c66a47e840dc20793f2264eb4b3e4ecb4b75d91c0dd4af04b456128e0bdd449d"
 dependencies = [
  "bitflags 2.9.1",
- "rustix 0.38.44",
+ "rustix 1.0.8",
  "wayland-backend",
  "wayland-scanner",
 ]

 [[package]]
 name = "wayland-protocols"
-version = "0.32.8"
+version = "0.32.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "779075454e1e9a521794fed15886323ea0feda3f8b0fc1390f5398141310422a"
+checksum = "efa790ed75fbfd71283bd2521a1cfdc022aabcc28bdcff00851f9e4ae88d9901"
 dependencies = [
  "bitflags 2.9.1",
  "wayland-backend",
@@ -11028,9 +11275,9 @@ dependencies = [

 [[package]]
 name = "wayland-scanner"
-version = "0.31.6"
+version = "0.31.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "896fdafd5d28145fce7958917d69f2fd44469b1d4e861cb5961bcbeebc6d1484"
+checksum = "54cb1e9dc49da91950bdfd8b848c49330536d9d1fb03d4bfec8cae50caa50ae3"
 dependencies = [
  "proc-macro2",
  "quick-xml 0.37.5",
@@ -11039,9 +11286,9 @@ dependencies = [

 [[package]]
 name = "wayland-sys"
-version = "0.31.6"
+version = "0.31.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dbcebb399c77d5aa9fa5db874806ee7b4eba4e73650948e8f93963f128896615"
+checksum = "34949b42822155826b41db8e5d0c1be3a2bd296c747577a43a3e6daefc296142"
 dependencies = [
  "dlib",
  "log",
@@ -11445,7 +11692,7 @@ version = "0.60.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb"
 dependencies = [
- "windows-targets 0.53.2",
+ "windows-targets 0.53.3",
 ]

 [[package]]
@@ -11496,10 +11743,11 @@ dependencies = [

 [[package]]
 name = "windows-targets"
-version = "0.53.2"
+version = "0.53.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c66f69fcc9ce11da9966ddb31a40968cad001c5bedeb5c2b82ede4253ab48aef"
+checksum = "d5fe6031c4041849d7c496a8ded650796e7b6ecc19df1a431c1a363342e5dc91"
 dependencies = [
+ "windows-link",
  "windows_aarch64_gnullvm 0.53.0",
  "windows_aarch64_msvc 0.53.0",
  "windows_i686_gnu 0.53.0",
@@ -11741,7 +11989,7 @@ version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a76ff259533532054cfbaefb115c613203c73707017459206380f03b3b3f266e"
 dependencies = [
- "darling",
+ "darling 0.20.11",
  "proc-macro2",
  "quote",
  "syn 2.0.104",
Cargo.toml: 39 changed lines
@@ -90,6 +90,7 @@ rustfs-checksums = { path = "crates/checksums", version = "0.0.5" }
 rustfs-workers = { path = "crates/workers", version = "0.0.5" }
+rustfs-mcp = { path = "crates/mcp", version = "0.0.5" }
 aes-gcm = { version = "0.10.3", features = ["std"] }
 anyhow = "1.0.98"
 arc-swap = "1.7.1"
 argon2 = { version = "0.5.3", features = ["std"] }
 atoi = "2.0.0"
@@ -98,7 +99,8 @@ async-recursion = "1.1.1"
 async-trait = "0.1.88"
 async-compression = { version = "0.4.0" }
 atomic_enum = "0.3.0"
-aws-sdk-s3 = "1.96.0"
+aws-config = { version = "1.8.3" }
+aws-sdk-s3 = "1.100.0"
 axum = "0.8.4"
 axum-extra = "0.10.1"
 axum-server = { version = "0.7.2", features = ["tls-rustls"] }
@@ -108,11 +110,13 @@ brotli = "8.0.1"
 bytes = { version = "1.10.1", features = ["serde"] }
 bytesize = "2.0.1"
+byteorder = "1.5.0"
 bytes-utils = "0.1.4"
 cfg-if = "1.0.1"
+crc-fast = "1.3.0"
 chacha20poly1305 = { version = "0.10.1" }
 chrono = { version = "0.4.41", features = ["serde"] }
-clap = { version = "4.5.41", features = ["derive", "env"] }
-const-str = { version = "0.6.3", features = ["std", "proc"] }
+clap = { version = "4.5.42", features = ["derive", "env"] }
+const-str = { version = "0.6.4", features = ["std", "proc"] }
 crc32fast = "1.5.0"
 criterion = { version = "0.5", features = ["html_reports"] }
 dashmap = "6.1.0"
@@ -145,7 +149,7 @@ http-body = "1.0.1"
 humantime = "2.2.0"
 ipnetwork = { version = "0.21.1", features = ["serde"] }
 jsonwebtoken = "9.3.1"
-keyring = { version = "3.6.2", features = [
+keyring = { version = "3.6.3", features = [
     "apple-native",
     "windows-native",
     "sync-secret-service",
@@ -186,7 +190,8 @@ blake3 = { version = "1.8.2" }
 pbkdf2 = "0.12.2"
 percent-encoding = "2.3.1"
 pin-project-lite = "0.2.16"
-prost = "0.13.5"
+prost = "0.14.1"
 pretty_assertions = "1.4.1"
+quick-xml = "0.38.0"
 rand = "0.9.2"
 rdkafka = { version = "0.38.0", features = ["tokio"] }
@@ -205,6 +210,7 @@ rfd = { version = "0.15.4", default-features = false, features = [
     "xdg-portal",
     "tokio",
 ] }
+rmcp = { version = "0.3.1" }
 rmp = "0.8.14"
 rmp-serde = "1.3.0"
 rsa = "0.9.8"
|
||||
@@ -212,16 +218,18 @@ rumqttc = { version = "0.24" }
|
||||
rust-embed = { version = "8.7.2" }
|
||||
rust-i18n = { version = "3.1.5" }
|
||||
rustfs-rsc = "2025.506.1"
|
||||
rustls = { version = "0.23.29" }
|
||||
rustls = { version = "0.23.31" }
|
||||
rustls-pki-types = "1.12.0"
|
||||
rustls-pemfile = "2.2.0"
|
||||
s3s = { version = "0.12.0-minio-preview.2" }
|
||||
shadow-rs = { version = "1.2.0", default-features = false }
|
||||
schemars = "1.0.4"
|
||||
serde = { version = "1.0.219", features = ["derive"] }
|
||||
serde_json = { version = "1.0.141", features = ["raw_value"] }
|
||||
serde_urlencoded = "0.7.1"
|
||||
serial_test = "3.2.0"
|
||||
sha1 = "0.10.6"
|
||||
sha2 = "0.10.9"
|
||||
shadow-rs = { version = "1.2.0", default-features = false }
|
||||
siphasher = "1.0.1"
|
||||
smallvec = { version = "1.15.1", features = ["serde"] }
|
||||
snafu = "0.8.6"
|
||||
@@ -241,22 +249,24 @@ time = { version = "0.3.41", features = [
|
||||
"macros",
|
||||
"serde",
|
||||
] }
|
||||
tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
|
||||
tokio = { version = "1.47.0", features = ["fs", "rt-multi-thread"] }
|
||||
tokio-rustls = { version = "0.26.2", default-features = false }
|
||||
tokio-stream = { version = "0.1.17" }
|
||||
tokio-tar = "0.3.1"
|
||||
tokio-test = "0.4.4"
|
||||
tokio-util = { version = "0.7.15", features = ["io", "compat"] }
|
||||
tonic = { version = "0.13.1", features = ["gzip"] }
|
||||
tonic-build = { version = "0.13.1" }
|
||||
tonic = { version = "0.14.0", features = ["gzip"] }
|
||||
tonic-prost = { version = "0.14.0" }
|
||||
tonic-prost-build = { version = "0.14.0" }
|
||||
tower = { version = "0.5.2", features = ["timeout"] }
|
||||
tower-http = { version = "0.6.6", features = ["cors"] }
|
||||
tracing = "0.1.41"
|
||||
tracing-appender = "0.2.3"
|
||||
tracing-core = "0.1.34"
|
||||
tracing-error = "0.2.1"
|
||||
tracing-subscriber = { version = "0.3.19", features = ["env-filter", "time"] }
|
||||
tracing-appender = "0.2.3"
|
||||
tracing-opentelemetry = "0.31.0"
|
||||
tracing-subscriber = { version = "0.3.19", features = ["env-filter", "time"] }
|
||||
tracing-test = "0.2.5"
|
||||
transform-stream = "0.3.1"
|
||||
url = "2.5.4"
|
||||
urlencoding = "2.1.3"
|
||||
@@ -270,7 +280,10 @@ winapi = { version = "0.3.9" }
|
||||
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
|
||||
zip = "2.4.2"
|
||||
zstd = "0.13.3"
|
||||
anyhow = "1.0.98"
|
||||
|
||||
|
||||
[workspace.metadata.cargo-shear]
|
||||
ignored = ["rustfs", "rust-i18n"]
|
||||
|
||||
[profile.wasm-dev]
|
||||
inherits = "dev"
|
||||
|
||||
@@ -21,13 +21,13 @@ rust-version.workspace = true
 version.workspace = true
 homepage.workspace = true
 description = "Checksum calculation and verification callbacks for HTTP request and response bodies sent by service clients generated by RustFS, ensuring data integrity and authenticity."
-keywords = ["checksum-calculation", "verification", "integrity", "authenticity", "rustfs", "Minio"]
-categories = ["web-programming", "development-tools", "checksum"]
+keywords = ["checksum-calculation", "verification", "integrity", "authenticity", "rustfs"]
+categories = ["web-programming", "development-tools", "network-programming"]
 documentation = "https://docs.rs/rustfs-signer/latest/rustfs_checksum/"

 [dependencies]
 bytes = { workspace = true }
-crc-fast = "1.3.0"
+crc-fast = { workspace = true }
 hex = { workspace = true }
 http = { workspace = true }
 http-body = { workspace = true }
@@ -39,10 +39,7 @@ sha2 = { workspace = true }
 tracing = { workspace = true }

 [dev-dependencies]
-bytes-utils = "0.1.2"
-pretty_assertions = "1.3"
-tracing-test = "0.2.1"
-
-[dev-dependencies.tokio]
-version = "1.23.1"
-features = ["macros", "rt"]
+bytes-utils = { workspace = true }
+pretty_assertions = { workspace = true }
+tracing-test = { workspace = true }
+tokio = { workspace = true, features = ["macros", "rt"] }
@@ -26,9 +26,6 @@ categories = ["web-programming", "development-tools", "config"]

 [dependencies]
 const-str = { workspace = true, optional = true }
 serde = { workspace = true }
 serde_json = { workspace = true }

 [lints]
 workspace = true
@@ -15,9 +15,9 @@
 use const_str::concat;

 /// Application name
-/// Default value: RustFs
+/// Default value: RustFS
 /// Environment variable: RUSTFS_APP_NAME
-pub const APP_NAME: &str = "RustFs";
+pub const APP_NAME: &str = "RustFS";
 /// Application version
 /// Default value: 1.0.0
 /// Environment variable: RUSTFS_VERSION
@@ -71,6 +71,16 @@ pub const DEFAULT_ACCESS_KEY: &str = "rustfsadmin";
 /// Example: --secret-key rustfsadmin
 pub const DEFAULT_SECRET_KEY: &str = "rustfsadmin";

+/// Default console enable
+/// This is the default value for the console server.
+/// It is used to enable or disable the console server.
+/// Default value: true
+/// Environment variable: RUSTFS_CONSOLE_ENABLE
+/// Command line argument: --console-enable
+/// Example: RUSTFS_CONSOLE_ENABLE=true
+/// Example: --console-enable true
+pub const DEFAULT_CONSOLE_ENABLE: bool = true;
+
 /// Default OBS configuration endpoint
 /// Environment variable: DEFAULT_OBS_ENDPOINT
 /// Command line argument: --obs-endpoint
@@ -126,28 +136,28 @@ pub const DEFAULT_SINK_FILE_LOG_FILE: &str = concat!(DEFAULT_LOG_FILENAME, "-sin
 /// This is the default log directory for rustfs.
 /// It is used to store the logs of the application.
 /// Default value: logs
-/// Environment variable: RUSTFS_OBSERVABILITY_LOG_DIRECTORY
-pub const DEFAULT_LOG_DIR: &str = "/logs";
+/// Environment variable: RUSTFS_LOG_DIRECTORY
+pub const DEFAULT_LOG_DIR: &str = "logs";

 /// Default log rotation size mb for rustfs
 /// This is the default log rotation size for rustfs.
 /// It is used to rotate the logs of the application.
 /// Default value: 100 MB
-/// Environment variable: RUSTFS_OBSERVABILITY_LOG_ROTATION_SIZE_MB
+/// Environment variable: RUSTFS_OBS_LOG_ROTATION_SIZE_MB
 pub const DEFAULT_LOG_ROTATION_SIZE_MB: u64 = 100;

 /// Default log rotation time for rustfs
 /// This is the default log rotation time for rustfs.
 /// It is used to rotate the logs of the application.
 /// Default value: hour, eg: day,hour,minute,second
-/// Environment variable: RUSTFS_OBSERVABILITY_LOG_ROTATION_TIME
+/// Environment variable: RUSTFS_OBS_LOG_ROTATION_TIME
 pub const DEFAULT_LOG_ROTATION_TIME: &str = "day";

 /// Default log keep files for rustfs
 /// This is the default log keep files for rustfs.
 /// It is used to keep the logs of the application.
 /// Default value: 30
-/// Environment variable: RUSTFS_OBSERVABILITY_LOG_KEEP_FILES
+/// Environment variable: RUSTFS_OBS_LOG_KEEP_FILES
 pub const DEFAULT_LOG_KEEP_FILES: u16 = 30;

 #[cfg(test)]
@@ -157,7 +167,7 @@ mod tests {
     #[test]
     fn test_app_basic_constants() {
         // Test application basic constants
-        assert_eq!(APP_NAME, "RustFs");
+        assert_eq!(APP_NAME, "RustFS");
         assert!(!APP_NAME.contains(' '), "App name should not contain spaces");

         assert_eq!(VERSION, "0.0.1");
@@ -27,7 +27,15 @@ pub const DEFAULT_TARGET: &str = "1";

 pub const NOTIFY_PREFIX: &str = "notify";

-pub const NOTIFY_ROUTE_PREFIX: &str = "notify_";
+pub const NOTIFY_ROUTE_PREFIX: &str = const_str::concat!(NOTIFY_PREFIX, "_");
+
+/// Standard config keys and values.
+pub const ENABLE_KEY: &str = "enable";
+pub const COMMENT_KEY: &str = "comment";
+
+/// Enable values
+pub const ENABLE_ON: &str = "on";
+pub const ENABLE_OFF: &str = "off";

 #[allow(dead_code)]
 pub const NOTIFY_SUB_SYSTEMS: &[&str] = &[NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS];
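The hunk above stops repeating the `"notify_"` literal and derives `NOTIFY_ROUTE_PREFIX` from `NOTIFY_PREFIX` at compile time via the const-str crate's `concat!` (std's `concat!` accepts only literals, not named consts). A minimal std-only sketch of the same "derived constants can't drift" idea, using a compile-time check instead of concatenation; the `const _` block is illustrative and not from the PR:

```rust
pub const NOTIFY_PREFIX: &str = "notify";
// With plain std we keep the literal, but verify it against the base const
// at compile time, so editing one without the other fails the build.
pub const NOTIFY_ROUTE_PREFIX: &str = "notify_";

const _: () = {
    let a = NOTIFY_PREFIX.as_bytes();
    let b = NOTIFY_ROUTE_PREFIX.as_bytes();
    // Route prefix must be the base prefix plus a single '_' suffix.
    assert!(b.len() == a.len() + 1);
    let mut i = 0;
    while i < a.len() {
        assert!(a[i] == b[i]);
        i += 1;
    }
    assert!(b[a.len()] == b'_');
};

fn main() {
    assert!(NOTIFY_ROUTE_PREFIX.starts_with(NOTIFY_PREFIX));
}
```

The const-str version in the diff achieves the same guarantee more directly, since the derived value is computed rather than checked.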
@@ -12,6 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

+use crate::notify::{COMMENT_KEY, ENABLE_KEY};
+
 // MQTT Keys
 pub const MQTT_BROKER: &str = "broker";
 pub const MQTT_TOPIC: &str = "topic";
@@ -23,6 +25,21 @@ pub const MQTT_KEEP_ALIVE_INTERVAL: &str = "keep_alive_interval";
 pub const MQTT_QUEUE_DIR: &str = "queue_dir";
 pub const MQTT_QUEUE_LIMIT: &str = "queue_limit";

+/// A list of all valid configuration keys for an MQTT target.
+pub const NOTIFY_MQTT_KEYS: &[&str] = &[
+    ENABLE_KEY, // "enable" is a common key
+    MQTT_BROKER,
+    MQTT_TOPIC,
+    MQTT_QOS,
+    MQTT_USERNAME,
+    MQTT_PASSWORD,
+    MQTT_RECONNECT_INTERVAL,
+    MQTT_KEEP_ALIVE_INTERVAL,
+    MQTT_QUEUE_DIR,
+    MQTT_QUEUE_LIMIT,
+    COMMENT_KEY,
+];
+
 // MQTT Environment Variables
 pub const ENV_MQTT_ENABLE: &str = "RUSTFS_NOTIFY_MQTT_ENABLE";
 pub const ENV_MQTT_BROKER: &str = "RUSTFS_NOTIFY_MQTT_BROKER";
@@ -34,3 +51,16 @@ pub const ENV_MQTT_RECONNECT_INTERVAL: &str = "RUSTFS_NOTIFY_MQTT_RECONNECT_INTE
 pub const ENV_MQTT_KEEP_ALIVE_INTERVAL: &str = "RUSTFS_NOTIFY_MQTT_KEEP_ALIVE_INTERVAL";
 pub const ENV_MQTT_QUEUE_DIR: &str = "RUSTFS_NOTIFY_MQTT_QUEUE_DIR";
 pub const ENV_MQTT_QUEUE_LIMIT: &str = "RUSTFS_NOTIFY_MQTT_QUEUE_LIMIT";
+
+pub const ENV_NOTIFY_MQTT_KEYS: &[&str; 10] = &[
+    ENV_MQTT_ENABLE,
+    ENV_MQTT_BROKER,
+    ENV_MQTT_TOPIC,
+    ENV_MQTT_QOS,
+    ENV_MQTT_USERNAME,
+    ENV_MQTT_PASSWORD,
+    ENV_MQTT_RECONNECT_INTERVAL,
+    ENV_MQTT_KEEP_ALIVE_INTERVAL,
+    ENV_MQTT_QUEUE_DIR,
+    ENV_MQTT_QUEUE_LIMIT,
+];
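The MQTT hunk pairs each config key with a `RUSTFS_NOTIFY_MQTT_*` environment variable. A sketch of the resolution order such pairs typically support (environment variable first, then stored config, then a hard default); the `lookup` helper is assumed for illustration, not the crate's actual API:

```rust
use std::env;

pub const ENV_MQTT_QUEUE_LIMIT: &str = "RUSTFS_NOTIFY_MQTT_QUEUE_LIMIT";

// Resolve a setting: a non-empty env var wins over a stored config value,
// which wins over the compiled-in default.
fn lookup(env_key: &str, stored: Option<&str>, default: &str) -> String {
    env::var(env_key)
        .ok()
        .filter(|v| !v.trim().is_empty())
        .or_else(|| stored.map(str::to_string))
        .unwrap_or_else(|| default.to_string())
}

fn main() {
    env::remove_var(ENV_MQTT_QUEUE_LIMIT);
    // No env var, no stored value: the default applies.
    assert_eq!(lookup(ENV_MQTT_QUEUE_LIMIT, None, "10000"), "10000");
    // The env var overrides a stored value.
    env::set_var(ENV_MQTT_QUEUE_LIMIT, "500");
    assert_eq!(lookup(ENV_MQTT_QUEUE_LIMIT, Some("250"), "10000"), "500");
}
```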
@@ -12,6 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

+use crate::notify::{COMMENT_KEY, ENABLE_KEY};
+
 // Webhook Keys
 pub const WEBHOOK_ENDPOINT: &str = "endpoint";
 pub const WEBHOOK_AUTH_TOKEN: &str = "auth_token";
@@ -20,6 +22,18 @@ pub const WEBHOOK_QUEUE_DIR: &str = "queue_dir";
 pub const WEBHOOK_CLIENT_CERT: &str = "client_cert";
 pub const WEBHOOK_CLIENT_KEY: &str = "client_key";

+/// A list of all valid configuration keys for a webhook target.
+pub const NOTIFY_WEBHOOK_KEYS: &[&str] = &[
+    ENABLE_KEY, // "enable" is a common key
+    WEBHOOK_ENDPOINT,
+    WEBHOOK_AUTH_TOKEN,
+    WEBHOOK_QUEUE_LIMIT,
+    WEBHOOK_QUEUE_DIR,
+    WEBHOOK_CLIENT_CERT,
+    WEBHOOK_CLIENT_KEY,
+    COMMENT_KEY,
+];
+
 // Webhook Environment Variables
 pub const ENV_WEBHOOK_ENABLE: &str = "RUSTFS_NOTIFY_WEBHOOK_ENABLE";
 pub const ENV_WEBHOOK_ENDPOINT: &str = "RUSTFS_NOTIFY_WEBHOOK_ENDPOINT";
@@ -28,3 +42,13 @@ pub const ENV_WEBHOOK_QUEUE_LIMIT: &str = "RUSTFS_NOTIFY_WEBHOOK_QUEUE_LIMIT";
 pub const ENV_WEBHOOK_QUEUE_DIR: &str = "RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR";
 pub const ENV_WEBHOOK_CLIENT_CERT: &str = "RUSTFS_NOTIFY_WEBHOOK_CLIENT_CERT";
 pub const ENV_WEBHOOK_CLIENT_KEY: &str = "RUSTFS_NOTIFY_WEBHOOK_CLIENT_KEY";
+
+pub const ENV_NOTIFY_WEBHOOK_KEYS: &[&str; 7] = &[
+    ENV_WEBHOOK_ENABLE,
+    ENV_WEBHOOK_ENDPOINT,
+    ENV_WEBHOOK_AUTH_TOKEN,
+    ENV_WEBHOOK_QUEUE_LIMIT,
+    ENV_WEBHOOK_QUEUE_DIR,
+    ENV_WEBHOOK_CLIENT_CERT,
+    ENV_WEBHOOK_CLIENT_KEY,
+];
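A natural use for the new `NOTIFY_WEBHOOK_KEYS` whitelist is rejecting unknown keys when parsing a webhook target's configuration. A sketch under that assumption (the `validate_keys` helper is hypothetical, not taken from the crate):

```rust
// Inlined copy of the whitelist from the hunk above, so the sketch is self-contained.
pub const NOTIFY_WEBHOOK_KEYS: &[&str] = &[
    "enable", "endpoint", "auth_token", "queue_limit",
    "queue_dir", "client_cert", "client_key", "comment",
];

// Reject any key not in the whitelist, reporting the first offender.
fn validate_keys<'a>(keys: impl Iterator<Item = &'a str>) -> Result<(), String> {
    for k in keys {
        if !NOTIFY_WEBHOOK_KEYS.contains(&k) {
            return Err(format!("unknown webhook config key: {k}"));
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_keys(["endpoint", "auth_token"].iter().copied()).is_ok());
    // A typo is caught instead of being silently ignored.
    assert!(validate_keys(["endpont"].iter().copied()).is_err());
}
```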
@@ -12,279 +12,24 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-use crate::observability::logger::LoggerConfig;
-use crate::observability::otel::OtelConfig;
-use crate::observability::sink::SinkConfig;
-use serde::{Deserialize, Serialize};
+// Observability Keys

-/// Observability configuration
-#[derive(Debug, Deserialize, Serialize, Clone)]
-pub struct ObservabilityConfig {
-    pub otel: OtelConfig,
-    pub sinks: Vec<SinkConfig>,
-    pub logger: Option<LoggerConfig>,
-}
+pub const ENV_OBS_ENDPOINT: &str = "RUSTFS_OBS_ENDPOINT";
+pub const ENV_OBS_USE_STDOUT: &str = "RUSTFS_OBS_USE_STDOUT";
+pub const ENV_OBS_SAMPLE_RATIO: &str = "RUSTFS_OBS_SAMPLE_RATIO";
+pub const ENV_OBS_METER_INTERVAL: &str = "RUSTFS_OBS_METER_INTERVAL";
+pub const ENV_OBS_SERVICE_NAME: &str = "RUSTFS_OBS_SERVICE_NAME";
+pub const ENV_OBS_SERVICE_VERSION: &str = "RUSTFS_OBS_SERVICE_VERSION";
+pub const ENV_OBS_ENVIRONMENT: &str = "RUSTFS_OBS_ENVIRONMENT";
+pub const ENV_OBS_LOGGER_LEVEL: &str = "RUSTFS_OBS_LOGGER_LEVEL";
+pub const ENV_OBS_LOCAL_LOGGING_ENABLED: &str = "RUSTFS_OBS_LOCAL_LOGGING_ENABLED";
+pub const ENV_OBS_LOG_DIRECTORY: &str = "RUSTFS_OBS_LOG_DIRECTORY";
+pub const ENV_OBS_LOG_FILENAME: &str = "RUSTFS_OBS_LOG_FILENAME";
+pub const ENV_OBS_LOG_ROTATION_SIZE_MB: &str = "RUSTFS_OBS_LOG_ROTATION_SIZE_MB";
+pub const ENV_OBS_LOG_ROTATION_TIME: &str = "RUSTFS_OBS_LOG_ROTATION_TIME";
+pub const ENV_OBS_LOG_KEEP_FILES: &str = "RUSTFS_OBS_LOG_KEEP_FILES";

-impl ObservabilityConfig {
-    pub fn new() -> Self {
-        Self {
-            otel: OtelConfig::new(),
-            sinks: vec![SinkConfig::new()],
-            logger: Some(LoggerConfig::new()),
-        }
-    }
-}
+pub const ENV_AUDIT_LOGGER_QUEUE_CAPACITY: &str = "RUSTFS_AUDIT_LOGGER_QUEUE_CAPACITY";

-impl Default for ObservabilityConfig {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    #[test]
-    fn test_observability_config_new() {
-        let config = ObservabilityConfig::new();
-
-        // Verify OTEL config is initialized
-        assert!(config.otel.use_stdout.is_some(), "OTEL use_stdout should be configured");
-        assert!(config.otel.sample_ratio.is_some(), "OTEL sample_ratio should be configured");
-        assert!(config.otel.meter_interval.is_some(), "OTEL meter_interval should be configured");
-        assert!(config.otel.service_name.is_some(), "OTEL service_name should be configured");
-        assert!(config.otel.service_version.is_some(), "OTEL service_version should be configured");
-        assert!(config.otel.environment.is_some(), "OTEL environment should be configured");
-        assert!(config.otel.logger_level.is_some(), "OTEL logger_level should be configured");
-
-        // Verify sinks are initialized
-        assert!(!config.sinks.is_empty(), "Sinks should not be empty");
-        assert_eq!(config.sinks.len(), 1, "Should have exactly one default sink");
-
-        // Verify logger is initialized
-        assert!(config.logger.is_some(), "Logger should be configured");
-    }
-
-    #[test]
-    fn test_observability_config_default() {
-        let config = ObservabilityConfig::default();
-        let new_config = ObservabilityConfig::new();
-
-        // Default should be equivalent to new()
-        assert_eq!(config.sinks.len(), new_config.sinks.len());
-        assert_eq!(config.logger.is_some(), new_config.logger.is_some());
-
-        // OTEL configs should be equivalent
-        assert_eq!(config.otel.use_stdout, new_config.otel.use_stdout);
-        assert_eq!(config.otel.sample_ratio, new_config.otel.sample_ratio);
-        assert_eq!(config.otel.meter_interval, new_config.otel.meter_interval);
-        assert_eq!(config.otel.service_name, new_config.otel.service_name);
-        assert_eq!(config.otel.service_version, new_config.otel.service_version);
-        assert_eq!(config.otel.environment, new_config.otel.environment);
-        assert_eq!(config.otel.logger_level, new_config.otel.logger_level);
-    }
-
-    #[test]
-    fn test_observability_config_otel_defaults() {
-        let config = ObservabilityConfig::new();
-
-        // Test OTEL default values
-        if let Some(_use_stdout) = config.otel.use_stdout {
-            // Test boolean values - any boolean value is valid
-        }
-
-        if let Some(sample_ratio) = config.otel.sample_ratio {
-            assert!((0.0..=1.0).contains(&sample_ratio), "Sample ratio should be between 0.0 and 1.0");
-        }
-
-        if let Some(meter_interval) = config.otel.meter_interval {
-            assert!(meter_interval > 0, "Meter interval should be positive");
-            assert!(meter_interval <= 3600, "Meter interval should be reasonable (≤ 1 hour)");
-        }
-
-        if let Some(service_name) = &config.otel.service_name {
-            assert!(!service_name.is_empty(), "Service name should not be empty");
-            assert!(!service_name.contains(' '), "Service name should not contain spaces");
-        }
-
-        if let Some(service_version) = &config.otel.service_version {
-            assert!(!service_version.is_empty(), "Service version should not be empty");
-        }
-
-        if let Some(environment) = &config.otel.environment {
-            assert!(!environment.is_empty(), "Environment should not be empty");
-            assert!(
-                ["development", "staging", "production", "test"].contains(&environment.as_str()),
-                "Environment should be a standard environment name"
-            );
-        }
-
-        if let Some(logger_level) = &config.otel.logger_level {
-            assert!(
-                ["trace", "debug", "info", "warn", "error"].contains(&logger_level.as_str()),
-                "Logger level should be a valid tracing level"
-            );
-        }
-    }
-
-    #[test]
-    fn test_observability_config_sinks() {
-        let config = ObservabilityConfig::new();
-
-        // Test default sink configuration
-        assert_eq!(config.sinks.len(), 1, "Should have exactly one default sink");
-
-        let _default_sink = &config.sinks[0];
-        // Test that the sink has valid configuration
-        // Note: We can't test specific values without knowing SinkConfig implementation
-        // but we can test that it's properly initialized
-
-        // Test that we can add more sinks
-        let mut config_mut = config.clone();
-        config_mut.sinks.push(SinkConfig::new());
-        assert_eq!(config_mut.sinks.len(), 2, "Should be able to add more sinks");
-    }
-
-    #[test]
-    fn test_observability_config_logger() {
-        let config = ObservabilityConfig::new();
-
-        // Test logger configuration
-        assert!(config.logger.is_some(), "Logger should be configured by default");
-
-        if let Some(_logger) = &config.logger {
-            // Test that logger has valid configuration
-            // Note: We can't test specific values without knowing LoggerConfig implementation
-            // but we can test that it's properly initialized
-        }
-
-        // Test that logger can be disabled
-        let mut config_mut = config.clone();
-        config_mut.logger = None;
-        assert!(config_mut.logger.is_none(), "Logger should be able to be disabled");
-    }
-
-    #[test]
-    fn test_observability_config_serialization() {
-        let config = ObservabilityConfig::new();
-
-        // Test serialization to JSON
-        let json_result = serde_json::to_string(&config);
-        assert!(json_result.is_ok(), "Config should be serializable to JSON");
-
-        let json_str = json_result.unwrap();
-        assert!(!json_str.is_empty(), "Serialized JSON should not be empty");
-        assert!(json_str.contains("otel"), "JSON should contain otel configuration");
-        assert!(json_str.contains("sinks"), "JSON should contain sinks configuration");
-        assert!(json_str.contains("logger"), "JSON should contain logger configuration");
-
-        // Test deserialization from JSON
-        let deserialized_result: Result<ObservabilityConfig, _> = serde_json::from_str(&json_str);
-        assert!(deserialized_result.is_ok(), "Config should be deserializable from JSON");
-
-        let deserialized_config = deserialized_result.unwrap();
-        assert_eq!(deserialized_config.sinks.len(), config.sinks.len());
-        assert_eq!(deserialized_config.logger.is_some(), config.logger.is_some());
-    }
-
-    #[test]
-    fn test_observability_config_debug_format() {
-        let config = ObservabilityConfig::new();
-
-        let debug_str = format!("{config:?}");
-        assert!(!debug_str.is_empty(), "Debug output should not be empty");
-        assert!(debug_str.contains("ObservabilityConfig"), "Debug output should contain struct name");
-        assert!(debug_str.contains("otel"), "Debug output should contain otel field");
-        assert!(debug_str.contains("sinks"), "Debug output should contain sinks field");
-        assert!(debug_str.contains("logger"), "Debug output should contain logger field");
-    }
-
-    #[test]
-    fn test_observability_config_clone() {
-        let config = ObservabilityConfig::new();
-        let cloned_config = config.clone();
-
-        // Test that clone creates an independent copy
-        assert_eq!(cloned_config.sinks.len(), config.sinks.len());
-        assert_eq!(cloned_config.logger.is_some(), config.logger.is_some());
-        assert_eq!(cloned_config.otel.endpoint, config.otel.endpoint);
-        assert_eq!(cloned_config.otel.use_stdout, config.otel.use_stdout);
-        assert_eq!(cloned_config.otel.sample_ratio, config.otel.sample_ratio);
-        assert_eq!(cloned_config.otel.meter_interval, config.otel.meter_interval);
-        assert_eq!(cloned_config.otel.service_name, config.otel.service_name);
-        assert_eq!(cloned_config.otel.service_version, config.otel.service_version);
-        assert_eq!(cloned_config.otel.environment, config.otel.environment);
-        assert_eq!(cloned_config.otel.logger_level, config.otel.logger_level);
-    }
-
-    #[test]
-    fn test_observability_config_modification() {
-        let mut config = ObservabilityConfig::new();
-
-        // Test modifying OTEL endpoint
-        let original_endpoint = config.otel.endpoint.clone();
-        config.otel.endpoint = "http://localhost:4317".to_string();
-        assert_ne!(config.otel.endpoint, original_endpoint);
-        assert_eq!(config.otel.endpoint, "http://localhost:4317");
-
-        // Test modifying sinks
-        let original_sinks_len = config.sinks.len();
-        config.sinks.push(SinkConfig::new());
-        assert_eq!(config.sinks.len(), original_sinks_len + 1);
-
-        // Test disabling logger
-        config.logger = None;
-        assert!(config.logger.is_none());
-    }
-
-    #[test]
-    fn test_observability_config_edge_cases() {
-        // Test with empty sinks
-        let mut config = ObservabilityConfig::new();
-        config.sinks.clear();
-        assert!(config.sinks.is_empty(), "Sinks should be empty after clearing");
-
-        // Test serialization with empty sinks
-        let json_result = serde_json::to_string(&config);
-        assert!(json_result.is_ok(), "Config with empty sinks should be serializable");
-
-        // Test with no logger
-        config.logger = None;
-        let json_result = serde_json::to_string(&config);
-        assert!(json_result.is_ok(), "Config with no logger should be serializable");
-    }
-
-    #[test]
-    fn test_observability_config_memory_efficiency() {
-        let config = ObservabilityConfig::new();
-
-        // Test that config doesn't use excessive memory
-        let config_size = std::mem::size_of_val(&config);
-        assert!(config_size < 5000, "Config should not use excessive memory");
-
-        // Test that endpoint string is not excessively long
-        assert!(config.otel.endpoint.len() < 1000, "Endpoint should not be excessively long");
-
-        // Test that collections are reasonably sized
-        assert!(config.sinks.len() < 100, "Sinks collection should be reasonably sized");
-    }
-
-    #[test]
-    fn test_observability_config_consistency() {
-        // Create multiple configs and ensure they're consistent
-        let config1 = ObservabilityConfig::new();
-        let config2 = ObservabilityConfig::new();
-
-        // Both configs should have the same default structure
-        assert_eq!(config1.sinks.len(), config2.sinks.len());
-        assert_eq!(config1.logger.is_some(), config2.logger.is_some());
-        assert_eq!(config1.otel.use_stdout, config2.otel.use_stdout);
-        assert_eq!(config1.otel.sample_ratio, config2.otel.sample_ratio);
-        assert_eq!(config1.otel.meter_interval, config2.otel.meter_interval);
-        assert_eq!(config1.otel.service_name, config2.otel.service_name);
-        assert_eq!(config1.otel.service_version, config2.otel.service_version);
-        assert_eq!(config1.otel.environment, config2.otel.environment);
-        assert_eq!(config1.otel.logger_level, config2.otel.logger_level);
-    }
-}
+// Default values for observability configuration
+pub const DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY: usize = 10000;
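The hunk above reduces this module to `RUSTFS_OBS_*` key constants and defaults. A sketch of how such a key and its default typically combine at startup (the `log_keep_files` accessor is assumed for illustration; only the constant names come from the diff):

```rust
use std::env;

pub const ENV_OBS_LOG_KEEP_FILES: &str = "RUSTFS_OBS_LOG_KEEP_FILES";
pub const DEFAULT_LOG_KEEP_FILES: u16 = 30;

// Parse the env var if present and valid, otherwise fall back to the default.
fn log_keep_files() -> u16 {
    env::var(ENV_OBS_LOG_KEEP_FILES)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(DEFAULT_LOG_KEEP_FILES)
}

fn main() {
    env::remove_var(ENV_OBS_LOG_KEEP_FILES);
    assert_eq!(log_keep_files(), 30);
    env::set_var(ENV_OBS_LOG_KEEP_FILES, "7");
    assert_eq!(log_keep_files(), 7);
    // Garbage falls back to the default rather than erroring.
    env::set_var(ENV_OBS_LOG_KEEP_FILES, "not-a-number");
    assert_eq!(log_keep_files(), 30);
}
```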
@@ -12,62 +12,17 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-use serde::{Deserialize, Serialize};
-use std::env;
+// RUSTFS_SINKS_FILE_PATH
+pub const ENV_SINKS_FILE_PATH: &str = "RUSTFS_SINKS_FILE_PATH";
+// RUSTFS_SINKS_FILE_BUFFER_SIZE
+pub const ENV_SINKS_FILE_BUFFER_SIZE: &str = "RUSTFS_SINKS_FILE_BUFFER_SIZE";
+// RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS
+pub const ENV_SINKS_FILE_FLUSH_INTERVAL_MS: &str = "RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS";
+// RUSTFS_SINKS_FILE_FLUSH_THRESHOLD
+pub const ENV_SINKS_FILE_FLUSH_THRESHOLD: &str = "RUSTFS_SINKS_FILE_FLUSH_THRESHOLD";

-/// File sink configuration
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct FileSink {
-    pub path: String,
-    #[serde(default = "default_buffer_size")]
-    pub buffer_size: Option<usize>,
-    #[serde(default = "default_flush_interval_ms")]
-    pub flush_interval_ms: Option<u64>,
-    #[serde(default = "default_flush_threshold")]
-    pub flush_threshold: Option<usize>,
-}
+pub const DEFAULT_SINKS_FILE_BUFFER_SIZE: usize = 8192;

-impl FileSink {
-    pub fn new() -> Self {
-        Self {
-            path: env::var("RUSTFS_SINKS_FILE_PATH")
-                .ok()
-                .filter(|s| !s.trim().is_empty())
-                .unwrap_or_else(default_path),
-            buffer_size: default_buffer_size(),
-            flush_interval_ms: default_flush_interval_ms(),
-            flush_threshold: default_flush_threshold(),
-        }
-    }
-}
+pub const DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS: u64 = 1000;

-impl Default for FileSink {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-fn default_buffer_size() -> Option<usize> {
-    Some(8192)
-}
-fn default_flush_interval_ms() -> Option<u64> {
-    Some(1000)
-}
-fn default_flush_threshold() -> Option<usize> {
-    Some(100)
-}
-
-fn default_path() -> String {
-    let temp_dir = env::temp_dir().join("rustfs");
-
-    if let Err(e) = std::fs::create_dir_all(&temp_dir) {
-        eprintln!("Failed to create log directory: {e}");
-        return "rustfs/rustfs.log".to_string();
-    }
-
-    temp_dir
-        .join("rustfs.log")
-        .to_str()
-        .unwrap_or("rustfs/rustfs.log")
-        .to_string()
-}
+pub const DEFAULT_SINKS_FILE_FLUSH_THRESHOLD: usize = 100;
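The removed `default_path` above is self-contained std code; extracted here as a runnable sketch of the fallback behavior it implemented (system temp directory first, relative path if the directory cannot be created):

```rust
use std::{env, fs};

// Prefer <tmp>/rustfs/rustfs.log; fall back to a relative path if the
// directory cannot be created or the path is not valid UTF-8.
fn default_path() -> String {
    let temp_dir = env::temp_dir().join("rustfs");

    if let Err(e) = fs::create_dir_all(&temp_dir) {
        eprintln!("Failed to create log directory: {e}");
        return "rustfs/rustfs.log".to_string();
    }

    temp_dir
        .join("rustfs.log")
        .to_str()
        .unwrap_or("rustfs/rustfs.log")
        .to_string()
}

fn main() {
    // Both the happy path and the fallback end in the same file name.
    assert!(default_path().ends_with("rustfs.log"));
}
```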
@@ -12,39 +12,16 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-use serde::{Deserialize, Serialize};
+// RUSTFS_SINKS_KAFKA_BROKERS
+pub const ENV_SINKS_KAFKA_BROKERS: &str = "RUSTFS_SINKS_KAFKA_BROKERS";
+pub const ENV_SINKS_KAFKA_TOPIC: &str = "RUSTFS_SINKS_KAFKA_TOPIC";
+// batch_size
+pub const ENV_SINKS_KAFKA_BATCH_SIZE: &str = "RUSTFS_SINKS_KAFKA_BATCH_SIZE";
+// batch_timeout_ms
+pub const ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS: &str = "RUSTFS_SINKS_KAFKA_BATCH_TIMEOUT_MS";

-/// Kafka sink configuration
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct KafkaSink {
-    pub brokers: String,
-    pub topic: String,
-    #[serde(default = "default_batch_size")]
-    pub batch_size: Option<usize>,
-    #[serde(default = "default_batch_timeout_ms")]
-    pub batch_timeout_ms: Option<u64>,
-}
-
-impl KafkaSink {
-    pub fn new() -> Self {
-        Self {
-            brokers: "localhost:9092".to_string(),
-            topic: "rustfs".to_string(),
-            batch_size: default_batch_size(),
-            batch_timeout_ms: default_batch_timeout_ms(),
-        }
-    }
-}
-
-impl Default for KafkaSink {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-fn default_batch_size() -> Option<usize> {
-    Some(100)
-}
-fn default_batch_timeout_ms() -> Option<u64> {
-    Some(1000)
-}
+// brokers
+pub const DEFAULT_SINKS_KAFKA_BROKERS: &str = "localhost:9092"
+pub const DEFAULT_SINKS_KAFKA_TOPIC: &str = "rustfs-sinks";
+pub const DEFAULT_SINKS_KAFKA_BATCH_SIZE: usize = 100;
+pub const DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS: u64 = 1000;
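A sketch of the policy behind `DEFAULT_SINKS_KAFKA_BATCH_SIZE` and `DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS`: flush a batch when it reaches the size limit or when the timeout has elapsed since the first buffered entry. The `Batcher` type is illustrative, not the crate's actual implementation:

```rust
use std::time::{Duration, Instant};

struct Batcher {
    buf: Vec<String>,
    first_at: Option<Instant>,
    batch_size: usize,
    batch_timeout: Duration,
}

impl Batcher {
    fn new(batch_size: usize, batch_timeout_ms: u64) -> Self {
        Self {
            buf: Vec::new(),
            first_at: None,
            batch_size,
            batch_timeout: Duration::from_millis(batch_timeout_ms),
        }
    }

    // Returns a full batch to send when either flush condition is met.
    fn push(&mut self, msg: String) -> Option<Vec<String>> {
        if self.buf.is_empty() {
            self.first_at = Some(Instant::now());
        }
        self.buf.push(msg);
        let timed_out = self
            .first_at
            .map(|t| t.elapsed() >= self.batch_timeout)
            .unwrap_or(false);
        if self.buf.len() >= self.batch_size || timed_out {
            self.first_at = None;
            Some(std::mem::take(&mut self.buf))
        } else {
            None
        }
    }
}

fn main() {
    let mut b = Batcher::new(3, 1000);
    assert!(b.push("a".into()).is_none());
    assert!(b.push("b".into()).is_none());
    let batch = b.push("c".into()).expect("size threshold reached");
    assert_eq!(batch.len(), 3);
}
```

In a real sink the timeout side would also be driven by a timer task, so a half-full batch is flushed even when no new message arrives.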
@@ -1,35 +0,0 @@
-// Copyright 2024 RustFS Team
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-use serde::{Deserialize, Serialize};
-
-/// Logger configuration
-#[derive(Debug, Deserialize, Serialize, Clone)]
-pub struct LoggerConfig {
-    pub queue_capacity: Option<usize>,
-}
-
-impl LoggerConfig {
-    pub fn new() -> Self {
-        Self {
-            queue_capacity: Some(10000),
-        }
-    }
-}
-
-impl Default for LoggerConfig {
-    fn default() -> Self {
-        Self::new()
-    }
-}
@@ -12,10 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.

pub(crate) mod config;
pub(crate) mod file;
pub(crate) mod kafka;
pub(crate) mod logger;
pub(crate) mod otel;
pub(crate) mod sink;
pub(crate) mod webhook;
mod config;
mod file;
mod kafka;
mod webhook;

pub use config::*;
pub use file::*;
pub use kafka::*;
pub use webhook::*;
@@ -1,83 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::constants::app::{ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT};
use crate::{APP_NAME, DEFAULT_LOG_LEVEL};
use serde::{Deserialize, Serialize};
use std::env;

/// OpenTelemetry configuration
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct OtelConfig {
    pub endpoint: String,                    // Endpoint for metric collection
    pub use_stdout: Option<bool>,            // Output to stdout
    pub sample_ratio: Option<f64>,           // Trace sampling ratio
    pub meter_interval: Option<u64>,         // Metric collection interval
    pub service_name: Option<String>,        // Service name
    pub service_version: Option<String>,     // Service version
    pub environment: Option<String>,         // Environment
    pub logger_level: Option<String>,        // Logger level
    pub local_logging_enabled: Option<bool>, // Local logging enabled
}

impl OtelConfig {
    pub fn new() -> Self {
        extract_otel_config_from_env()
    }
}

impl Default for OtelConfig {
    fn default() -> Self {
        Self::new()
    }
}

// Helper function: Extract observable configuration from environment variables
fn extract_otel_config_from_env() -> OtelConfig {
    OtelConfig {
        endpoint: env::var("RUSTFS_OBSERVABILITY_ENDPOINT").unwrap_or_else(|_| "".to_string()),
        use_stdout: env::var("RUSTFS_OBSERVABILITY_USE_STDOUT")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(USE_STDOUT)),
        sample_ratio: env::var("RUSTFS_OBSERVABILITY_SAMPLE_RATIO")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(SAMPLE_RATIO)),
        meter_interval: env::var("RUSTFS_OBSERVABILITY_METER_INTERVAL")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(METER_INTERVAL)),
        service_name: env::var("RUSTFS_OBSERVABILITY_SERVICE_NAME")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(APP_NAME.to_string())),
        service_version: env::var("RUSTFS_OBSERVABILITY_SERVICE_VERSION")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(SERVICE_VERSION.to_string())),
        environment: env::var("RUSTFS_OBSERVABILITY_ENVIRONMENT")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(ENVIRONMENT.to_string())),
        logger_level: env::var("RUSTFS_OBSERVABILITY_LOGGER_LEVEL")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(DEFAULT_LOG_LEVEL.to_string())),
        local_logging_enabled: env::var("RUSTFS_OBSERVABILITY_LOCAL_LOGGING_ENABLED")
            .ok()
            .and_then(|v| v.parse().ok())
            .or(Some(false)),
    }
}
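Every optional field above uses the same fallback chain: `env::var(..).ok().and_then(|v| v.parse().ok()).or(Some(default))`, so an unset variable and an unparsable value both silently fall back to the compiled-in default. The chain can be factored out and exercised deterministically (no real environment variables needed) by operating on the raw `env::var` result:

```rust
use std::env::VarError;
use std::str::FromStr;

// The fallback chain used repeatedly in extract_otel_config_from_env,
// factored over the raw `env::var` result so its behavior can be tested
// without touching the process environment: a missing or unparsable
// value yields the built-in default.
fn parse_or<T: FromStr>(raw: Result<String, VarError>, default: T) -> Option<T> {
    raw.ok().and_then(|v| v.parse().ok()).or(Some(default))
}

fn main() {
    // Present and parsable: the environment value wins.
    assert_eq!(parse_or(Ok("0.25".to_string()), 1.0_f64), Some(0.25));
    // Unset: fall back to the compiled-in default.
    assert_eq!(parse_or(Err(VarError::NotPresent), 60_u64), Some(60));
    // Present but unparsable: also falls back, silently.
    assert_eq!(parse_or(Ok("not-a-number".to_string()), 60_u64), Some(60));
    println!("all fallbacks behave as expected");
}
```

The silent fallback on a typo'd value (third case) is worth knowing about when debugging observability settings.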
@@ -1,39 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::observability::file::FileSink;
use crate::observability::kafka::KafkaSink;
use crate::observability::webhook::WebhookSink;
use serde::{Deserialize, Serialize};

/// Sink configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum SinkConfig {
    Kafka(KafkaSink),
    Webhook(WebhookSink),
    File(FileSink),
}

impl SinkConfig {
    pub fn new() -> Self {
        Self::File(FileSink::new())
    }
}

impl Default for SinkConfig {
    fn default() -> Self {
        Self::new()
    }
}
@@ -12,42 +12,17 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use serde::{Deserialize, Serialize};
use std::collections::HashMap;
// RUSTFS_SINKS_WEBHOOK_ENDPOINT
pub const ENV_SINKS_WEBHOOK_ENDPOINT: &str = "RUSTFS_SINKS_WEBHOOK_ENDPOINT";
// RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN
pub const ENV_SINKS_WEBHOOK_AUTH_TOKEN: &str = "RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN";
// max_retries
pub const ENV_SINKS_WEBHOOK_MAX_RETRIES: &str = "RUSTFS_SINKS_WEBHOOK_MAX_RETRIES";
// retry_delay_ms
pub const ENV_SINKS_WEBHOOK_RETRY_DELAY_MS: &str = "RUSTFS_SINKS_WEBHOOK_RETRY_DELAY_MS";

/// Webhook sink configuration
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct WebhookSink {
    pub endpoint: String,
    pub auth_token: String,
    pub headers: Option<HashMap<String, String>>,
    #[serde(default = "default_max_retries")]
    pub max_retries: Option<usize>,
    #[serde(default = "default_retry_delay_ms")]
    pub retry_delay_ms: Option<u64>,
}

impl WebhookSink {
    pub fn new() -> Self {
        Self {
            endpoint: "".to_string(),
            auth_token: "".to_string(),
            headers: Some(HashMap::new()),
            max_retries: default_max_retries(),
            retry_delay_ms: default_retry_delay_ms(),
        }
    }
}

impl Default for WebhookSink {
    fn default() -> Self {
        Self::new()
    }
}

fn default_max_retries() -> Option<usize> {
    Some(3)
}
fn default_retry_delay_ms() -> Option<u64> {
    Some(100)
}
// Default values for webhook sink configuration
pub const DEFAULT_SINKS_WEBHOOK_ENDPOINT: &str = "http://localhost:8080";
pub const DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN: &str = "";
pub const DEFAULT_SINKS_WEBHOOK_MAX_RETRIES: usize = 3;
pub const DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS: u64 = 100;
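The struct above only stores `max_retries` and `retry_delay_ms`; how a sender consumes them is not shown in this diff. A hypothetical sketch of the usual interpretation (one initial attempt plus up to `max_retries` retries, with a fixed delay between attempts) — `send_once` is a stand-in for a real HTTP call, not part of the configuration:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical consumer of the WebhookSink retry settings; `send_once`
// stands in for a real HTTP delivery and returns whether it succeeded.
fn deliver(max_retries: usize, retry_delay_ms: u64, send_once: impl Fn(usize) -> bool) -> bool {
    // One initial attempt plus up to `max_retries` retries.
    for attempt in 0..=max_retries {
        if send_once(attempt) {
            return true;
        }
        if attempt < max_retries {
            thread::sleep(Duration::from_millis(retry_delay_ms));
        }
    }
    false
}

fn main() {
    // Fails twice, succeeds on the third attempt (attempt index 2).
    assert!(deliver(3, 0, |attempt| attempt == 2));
    // Always fails: gives up after 1 + max_retries attempts.
    assert!(!deliver(2, 0, |_| false));
    println!("retry semantics verified");
}
```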
@@ -38,7 +38,7 @@ url.workspace = true
rustfs-madmin.workspace = true
rustfs-filemeta.workspace = true
bytes.workspace = true
serial_test = "3.2.0"
aws-sdk-s3 = "1.99.0"
aws-config = "1.8.3"
async-trait = { workspace = true }
serial_test = { workspace = true }
aws-sdk-s3.workspace = true
aws-config = { workspace = true }
async-trait = { workspace = true }
@@ -12,16 +12,16 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use super::{Config, GLOBAL_StorageClass, storageclass};
use crate::config::{Config, GLOBAL_STORAGE_CLASS, storageclass};
use crate::disk::RUSTFS_META_BUCKET;
use crate::error::{Error, Result};
use crate::store_api::{ObjectInfo, ObjectOptions, PutObjReader, StorageAPI};
use http::HeaderMap;
use lazy_static::lazy_static;
use rustfs_config::DEFAULT_DELIMITER;
use rustfs_utils::path::SLASH_SEPARATOR;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::LazyLock;
use tracing::{error, warn};

pub const CONFIG_PREFIX: &str = "config";
@@ -29,14 +29,13 @@ const CONFIG_FILE: &str = "config.json";

pub const STORAGE_CLASS_SUB_SYS: &str = "storage_class";

lazy_static! {
    static ref CONFIG_BUCKET: String = format!("{}{}{}", RUSTFS_META_BUCKET, SLASH_SEPARATOR, CONFIG_PREFIX);
    static ref SubSystemsDynamic: HashSet<String> = {
        let mut h = HashSet::new();
        h.insert(STORAGE_CLASS_SUB_SYS.to_owned());
        h
    };
}
static CONFIG_BUCKET: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_META_BUCKET}{SLASH_SEPARATOR}{CONFIG_PREFIX}"));

static SUB_SYSTEMS_DYNAMIC: LazyLock<HashSet<String>> = LazyLock::new(|| {
    let mut h = HashSet::new();
    h.insert(STORAGE_CLASS_SUB_SYS.to_owned());
    h
});
pub async fn read_config<S: StorageAPI>(api: Arc<S>, file: &str) -> Result<Vec<u8>> {
    let (data, _obj) = read_config_with_metadata(api, file, &ObjectOptions::default()).await?;
    Ok(data)
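The old `lazy_static!` block and its `LazyLock` replacement are behaviorally equivalent — both initialize on first access — but `LazyLock` is in std since Rust 1.80, drops the macro layer, and removes the `lazy_static` dependency entirely. A self-contained sketch of the migrated shape (the constants are stand-ins for `RUSTFS_META_BUCKET` / `SLASH_SEPARATOR` / `CONFIG_PREFIX`, which live in other crates in the real code):

```rust
use std::collections::HashSet;
use std::sync::LazyLock;

// Stand-ins for constants defined elsewhere in the real codebase.
const META_BUCKET: &str = ".rustfs.sys";
const SLASH: &str = "/";
const PREFIX: &str = "config";

// Equivalent of the former `lazy_static!` statics: the closure runs
// once, on first dereference, and the result is cached thereafter.
static CONFIG_BUCKET: LazyLock<String> = LazyLock::new(|| format!("{META_BUCKET}{SLASH}{PREFIX}"));

static SUB_SYSTEMS_DYNAMIC: LazyLock<HashSet<String>> = LazyLock::new(|| {
    let mut h = HashSet::new();
    h.insert("storage_class".to_owned());
    h
});

fn main() {
    // Dereferencing triggers (at most one) initialization.
    assert_eq!(CONFIG_BUCKET.as_str(), ".rustfs.sys/config");
    assert!(SUB_SYSTEMS_DYNAMIC.contains("storage_class"));
    println!("lazily initialized: {}", *CONFIG_BUCKET);
}
```

A side benefit visible in the diff: plain statics force conventional `SCREAMING_SNAKE_CASE` names (`SUB_SYSTEMS_DYNAMIC`), where `lazy_static!` had allowed the non-idiomatic `SubSystemsDynamic`.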
@@ -197,7 +196,7 @@ pub async fn lookup_configs<S: StorageAPI>(cfg: &mut Config, api: Arc<S>) {
}

async fn apply_dynamic_config<S: StorageAPI>(cfg: &mut Config, api: Arc<S>) -> Result<()> {
    for key in SubSystemsDynamic.iter() {
    for key in SUB_SYSTEMS_DYNAMIC.iter() {
        apply_dynamic_config_for_sub_sys(cfg, api.clone(), key).await?;
    }

@@ -212,9 +211,9 @@ async fn apply_dynamic_config_for_sub_sys<S: StorageAPI>(cfg: &mut Config, api:
    for (i, count) in set_drive_counts.iter().enumerate() {
        match storageclass::lookup_config(&kvs, *count) {
            Ok(res) => {
                if i == 0 && GLOBAL_StorageClass.get().is_none() {
                    if let Err(r) = GLOBAL_StorageClass.set(res) {
                        error!("GLOBAL_StorageClass.set failed {:?}", r);
                if i == 0 && GLOBAL_STORAGE_CLASS.get().is_none() {
                    if let Err(r) = GLOBAL_STORAGE_CLASS.set(res) {
                        error!("GLOBAL_STORAGE_CLASS.set failed {:?}", r);
                    }
                }
            }
@@ -21,26 +21,17 @@ pub mod storageclass;
use crate::error::Result;
use crate::store::ECStore;
use com::{STORAGE_CLASS_SUB_SYS, lookup_configs, read_config_without_migrate};
use lazy_static::lazy_static;
use rustfs_config::DEFAULT_DELIMITER;
use rustfs_config::notify::{COMMENT_KEY, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::LazyLock;
use std::sync::{Arc, OnceLock};

lazy_static! {
    pub static ref GLOBAL_StorageClass: OnceLock<storageclass::Config> = OnceLock::new();
    pub static ref DefaultKVS: OnceLock<HashMap<String, KVS>> = OnceLock::new();
    pub static ref GLOBAL_ServerConfig: OnceLock<Config> = OnceLock::new();
    pub static ref GLOBAL_ConfigSys: ConfigSys = ConfigSys::new();
}

/// Standard config keys and values.
pub const ENABLE_KEY: &str = "enable";
pub const COMMENT_KEY: &str = "comment";

/// Enable values
pub const ENABLE_ON: &str = "on";
pub const ENABLE_OFF: &str = "off";
pub static GLOBAL_STORAGE_CLASS: LazyLock<OnceLock<storageclass::Config>> = LazyLock::new(OnceLock::new);
pub static DEFAULT_KVS: LazyLock<OnceLock<HashMap<String, KVS>>> = LazyLock::new(OnceLock::new);
pub static GLOBAL_SERVER_CONFIG: LazyLock<OnceLock<Config>> = LazyLock::new(OnceLock::new);
pub static GLOBAL_CONFIG_SYS: LazyLock<ConfigSys> = LazyLock::new(ConfigSys::new);

pub const ENV_ACCESS_KEY: &str = "RUSTFS_ACCESS_KEY";
pub const ENV_SECRET_KEY: &str = "RUSTFS_SECRET_KEY";

@@ -66,7 +57,7 @@ impl ConfigSys {

        lookup_configs(&mut cfg, api).await;

        let _ = GLOBAL_ServerConfig.set(cfg);
        let _ = GLOBAL_SERVER_CONFIG.set(cfg);

        Ok(())
    }
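These globals combine two tools: `OnceLock` gives set-once-at-runtime semantics (the config value only exists after it is loaded from storage), while `LazyLock` merely provides the static wrapper. Since `OnceLock::new()` is `const`, the `LazyLock<OnceLock<T>>` wrapper is arguably a mechanical carry-over from the `lazy_static!` form; a plain `static X: OnceLock<T>` would suffice, as this std-only sketch shows (`Config` here is a stand-in for the real server config type):

```rust
use std::sync::OnceLock;

// Stand-in for the server `Config` type in the real code.
#[derive(Debug, PartialEq)]
struct Config {
    name: String,
}

// Set-once global: the value is produced at runtime (e.g. after loading
// configuration from storage). `OnceLock::new()` is `const`, so no
// LazyLock wrapper is strictly required.
static GLOBAL_SERVER_CONFIG: OnceLock<Config> = OnceLock::new();

fn main() {
    assert!(GLOBAL_SERVER_CONFIG.get().is_none());

    // First set wins...
    assert!(GLOBAL_SERVER_CONFIG.set(Config { name: "from-disk".into() }).is_ok());

    // ...later attempts fail, returning the rejected value instead of
    // overwriting (which is why the diff ignores the result with `let _ =`).
    assert!(GLOBAL_SERVER_CONFIG.set(Config { name: "stale".into() }).is_err());

    assert_eq!(GLOBAL_SERVER_CONFIG.get().unwrap().name, "from-disk");
}
```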
@@ -131,6 +122,28 @@ impl KVS {

        keys
    }

    /// Insert or update a pair of key/values in KVS
    pub fn insert(&mut self, key: String, value: String) {
        for kv in self.0.iter_mut() {
            if kv.key == key {
                kv.value = value.clone();
                return;
            }
        }
        self.0.push(KV {
            key,
            value,
            hidden_if_empty: false,
        });
    }

    /// Merge all entries from another KVS to the current instance
    pub fn extend(&mut self, other: KVS) {
        for KV { key, value, .. } in other.0.into_iter() {
            self.insert(key, value);
        }
    }
}
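`insert` is update-or-append keyed on `key` (preserving entry order), so `extend` gives the other KVS's entries precedence on key collisions without duplicating keys. A minimal replica of the `KV`/`KVS` types exercises those semantics without the rest of the crate:

```rust
// Minimal replica of the KV/KVS types so the insert/extend semantics
// from the diff can be exercised standalone.
#[derive(Debug, Clone, PartialEq)]
struct KV {
    key: String,
    value: String,
    hidden_if_empty: bool,
}

#[derive(Debug, Clone, Default)]
struct KVS(Vec<KV>);

impl KVS {
    // Update in place if the key exists, otherwise append.
    fn insert(&mut self, key: String, value: String) {
        for kv in self.0.iter_mut() {
            if kv.key == key {
                kv.value = value;
                return;
            }
        }
        self.0.push(KV { key, value, hidden_if_empty: false });
    }

    // Entries from `other` overwrite same-keyed entries in `self`.
    fn extend(&mut self, other: KVS) {
        for KV { key, value, .. } in other.0.into_iter() {
            self.insert(key, value);
        }
    }
}

fn main() {
    let mut base = KVS::default();
    base.insert("enable".into(), "off".into());
    base.insert("comment".into(), "base".into());

    let mut overrides = KVS::default();
    overrides.insert("enable".into(), "on".into());

    base.extend(overrides);
    assert_eq!(base.0.len(), 2); // no duplicate keys
    assert_eq!(base.0[0].value, "on"); // overwritten in place, order preserved
}
```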
#[derive(Debug, Clone)]
@@ -159,7 +172,7 @@ impl Config {
    }

    pub fn set_defaults(&mut self) {
        if let Some(defaults) = DefaultKVS.get() {
        if let Some(defaults) = DEFAULT_KVS.get() {
            for (k, v) in defaults.iter() {
                if !self.0.contains_key(k) {
                    let mut default = HashMap::new();
@@ -198,20 +211,17 @@ pub fn register_default_kvs(kvs: HashMap<String, KVS>) {
        p.insert(k, v);
    }

    let _ = DefaultKVS.set(p);
    let _ = DEFAULT_KVS.set(p);
}

pub fn init() {
    let mut kvs = HashMap::new();
    // Load storageclass default configuration
    kvs.insert(STORAGE_CLASS_SUB_SYS.to_owned(), storageclass::DefaultKVS.clone());
    kvs.insert(STORAGE_CLASS_SUB_SYS.to_owned(), storageclass::DEFAULT_KVS.clone());
    // New: Loading default configurations for notify_webhook and notify_mqtt
    // Referring to subsystem names through constants improves the readability and maintainability of the code
    kvs.insert(
        rustfs_config::notify::NOTIFY_WEBHOOK_SUB_SYS.to_owned(),
        notify::DefaultWebhookKVS.clone(),
    );
    kvs.insert(rustfs_config::notify::NOTIFY_MQTT_SUB_SYS.to_owned(), notify::DefaultMqttKVS.clone());
    kvs.insert(NOTIFY_WEBHOOK_SUB_SYS.to_owned(), notify::DEFAULT_WEBHOOK_KVS.clone());
    kvs.insert(NOTIFY_MQTT_SUB_SYS.to_owned(), notify::DEFAULT_MQTT_KVS.clone());

    // Register all default configurations
    register_default_kvs(kvs)
@@ -12,40 +12,120 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::config::{ENABLE_KEY, ENABLE_OFF, KV, KVS};
use lazy_static::lazy_static;
use crate::config::{KV, KVS};
use rustfs_config::notify::{
    DEFAULT_DIR, DEFAULT_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
    MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY,
    WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
    COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, ENABLE_OFF, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
    MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN,
    WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use std::sync::LazyLock;

lazy_static! {
    /// The default configuration collection of webhooks,
    /// Use lazy_static! to ensure that these configurations are initialized only once during the program life cycle, enabling high-performance lazy loading.
    pub static ref DefaultWebhookKVS: KVS = KVS(vec![
        KV { key: ENABLE_KEY.to_owned(), value: ENABLE_OFF.to_owned(), hidden_if_empty: false },
        KV { key: WEBHOOK_ENDPOINT.to_owned(), value: "".to_owned(), hidden_if_empty: false },
/// The default configuration collection of webhooks,
/// Initialized only once during the program life cycle, enabling high-performance lazy loading.
pub static DEFAULT_WEBHOOK_KVS: LazyLock<KVS> = LazyLock::new(|| {
    KVS(vec![
        KV {
            key: ENABLE_KEY.to_owned(),
            value: ENABLE_OFF.to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: WEBHOOK_ENDPOINT.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        // Sensitive information such as authentication tokens is hidden when the value is empty, enhancing security
        KV { key: WEBHOOK_AUTH_TOKEN.to_owned(), value: "".to_owned(), hidden_if_empty: true },
        KV { key: WEBHOOK_QUEUE_LIMIT.to_owned(), value: DEFAULT_LIMIT.to_string().to_owned(), hidden_if_empty: false },
        KV { key: WEBHOOK_QUEUE_DIR.to_owned(), value: DEFAULT_DIR.to_owned(), hidden_if_empty: false },
        KV { key: WEBHOOK_CLIENT_CERT.to_owned(), value: "".to_owned(), hidden_if_empty: false },
        KV { key: WEBHOOK_CLIENT_KEY.to_owned(), value: "".to_owned(), hidden_if_empty: false },
    ]);
        KV {
            key: WEBHOOK_AUTH_TOKEN.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: true,
        },
        KV {
            key: WEBHOOK_QUEUE_LIMIT.to_owned(),
            value: DEFAULT_LIMIT.to_string(),
            hidden_if_empty: false,
        },
        KV {
            key: WEBHOOK_QUEUE_DIR.to_owned(),
            value: DEFAULT_DIR.to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: WEBHOOK_CLIENT_CERT.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: WEBHOOK_CLIENT_KEY.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: COMMENT_KEY.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
    ])
});

    /// MQTT's default configuration collection
    pub static ref DefaultMqttKVS: KVS = KVS(vec![
        KV { key: ENABLE_KEY.to_owned(), value: ENABLE_OFF.to_owned(), hidden_if_empty: false },
        KV { key: MQTT_BROKER.to_owned(), value: "".to_owned(), hidden_if_empty: false },
        KV { key: MQTT_TOPIC.to_owned(), value: "".to_owned(), hidden_if_empty: false },
/// MQTT's default configuration collection
pub static DEFAULT_MQTT_KVS: LazyLock<KVS> = LazyLock::new(|| {
    KVS(vec![
        KV {
            key: ENABLE_KEY.to_owned(),
            value: ENABLE_OFF.to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_BROKER.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_TOPIC.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        // Sensitive information such as passwords are hidden when the value is empty
        KV { key: MQTT_PASSWORD.to_owned(), value: "".to_owned(), hidden_if_empty: true },
        KV { key: MQTT_USERNAME.to_owned(), value: "".to_owned(), hidden_if_empty: false },
        KV { key: MQTT_QOS.to_owned(), value: "0".to_owned(), hidden_if_empty: false },
        KV { key: MQTT_KEEP_ALIVE_INTERVAL.to_owned(), value: "0s".to_owned(), hidden_if_empty: false },
        KV { key: MQTT_RECONNECT_INTERVAL.to_owned(), value: "0s".to_owned(), hidden_if_empty: false },
        KV { key: MQTT_QUEUE_DIR.to_owned(), value: DEFAULT_DIR.to_owned(), hidden_if_empty: false },
        KV { key: MQTT_QUEUE_LIMIT.to_owned(), value: DEFAULT_LIMIT.to_string().to_owned(), hidden_if_empty: false },
    ]);
}
        KV {
            key: MQTT_PASSWORD.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: true,
        },
        KV {
            key: MQTT_USERNAME.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_QOS.to_owned(),
            value: "0".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_KEEP_ALIVE_INTERVAL.to_owned(),
            value: "0s".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_RECONNECT_INTERVAL.to_owned(),
            value: "0s".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_QUEUE_DIR.to_owned(),
            value: DEFAULT_DIR.to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: MQTT_QUEUE_LIMIT.to_owned(),
            value: DEFAULT_LIMIT.to_string(),
            hidden_if_empty: false,
        },
        KV {
            key: COMMENT_KEY.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
    ])
});
@@ -15,9 +15,9 @@
use super::KVS;
use crate::config::KV;
use crate::error::{Error, Result};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use std::env;
use std::sync::LazyLock;
use tracing::warn;

/// Default parity count for a given drive count
@@ -62,34 +62,32 @@ pub const DEFAULT_RRS_PARITY: usize = 1;

pub static DEFAULT_INLINE_BLOCK: usize = 128 * 1024;

lazy_static! {
    pub static ref DefaultKVS: KVS = {
        let kvs = vec![
            KV {
                key: CLASS_STANDARD.to_owned(),
                value: "".to_owned(),
                hidden_if_empty: false,
            },
            KV {
                key: CLASS_RRS.to_owned(),
                value: "EC:1".to_owned(),
                hidden_if_empty: false,
            },
            KV {
                key: OPTIMIZE.to_owned(),
                value: "availability".to_owned(),
                hidden_if_empty: false,
            },
            KV {
                key: INLINE_BLOCK.to_owned(),
                value: "".to_owned(),
                hidden_if_empty: true,
            },
        ];
pub static DEFAULT_KVS: LazyLock<KVS> = LazyLock::new(|| {
    let kvs = vec![
        KV {
            key: CLASS_STANDARD.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: CLASS_RRS.to_owned(),
            value: "EC:1".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: OPTIMIZE.to_owned(),
            value: "availability".to_owned(),
            hidden_if_empty: false,
        },
        KV {
            key: INLINE_BLOCK.to_owned(),
            value: "".to_owned(),
            hidden_if_empty: true,
        },
    ];

        KVS(kvs)
    };
}
    KVS(kvs)
});

// StorageClass - holds storage class information
#[derive(Serialize, Deserialize, Debug, Default)]
@@ -36,8 +36,6 @@ pub const DISK_MIN_INODES: u64 = 1000;
pub const DISK_FILL_FRACTION: f64 = 0.99;
pub const DISK_RESERVE_FRACTION: f64 = 0.15;

pub const DEFAULT_PORT: u16 = 9000;

lazy_static! {
    static ref GLOBAL_RUSTFS_PORT: OnceLock<u16> = OnceLock::new();
    pub static ref GLOBAL_OBJECT_API: OnceLock<Arc<ECStore>> = OnceLock::new();
@@ -1,4 +1,3 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,8 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#![allow(unused_imports)]
#![allow(unused_variables)]

use crate::bitrot::{create_bitrot_reader, create_bitrot_writer};
@@ -33,7 +34,7 @@ use crate::store_api::{ListPartsInfo, ObjectToDelete};
use crate::{
    bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts},
    cache_value::metacache_set::{ListPathRawOptions, list_path_raw},
    config::{GLOBAL_StorageClass, storageclass},
    config::{GLOBAL_STORAGE_CLASS, storageclass},
    disk::{
        CheckPartsResp, DeleteOptions, DiskAPI, DiskInfo, DiskInfoOptions, DiskOption, DiskStore, FileInfoVersions,
        RUSTFS_META_BUCKET, RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_TMP_BUCKET, ReadMultipleReq, ReadMultipleResp, ReadOptions,
@@ -626,7 +627,7 @@ impl SetDisks {
        && !found.etag.is_empty()
        && part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum
    {
        ret[part_idx] = found;
        ret[part_idx] = found.clone();
    } else {
        ret[part_idx] = ObjectPartInfo {
            number: part_numbers[part_idx],
@@ -2011,12 +2012,12 @@ impl SetDisks {
    if errs.iter().any(|err| err.is_some()) {
        let _ =
            rustfs_common::heal_channel::send_heal_request(rustfs_common::heal_channel::create_heal_request_with_options(
                fi.volume.to_string(), // bucket
                Some(fi.name.to_string()), // object_prefix
                false, // force_start
                Some(rustfs_common::heal_channel::HealChannelPriority::Normal), // priority
                Some(self.pool_index), // pool_index
                Some(self.set_index), // set_index
                fi.volume.to_string(), // bucket
                Some(fi.name.to_string()), // object_prefix
                false, // force_start
                Some(HealChannelPriority::Normal), // priority
                Some(self.pool_index), // pool_index
                Some(self.set_index), // set_index
            ))
            .await;
    }
@@ -2154,7 +2155,7 @@ impl SetDisks {
        bucket.to_string(),
        Some(object.to_string()),
        false,
        Some(rustfs_common::heal_channel::HealChannelPriority::Normal),
        Some(HealChannelPriority::Normal),
        Some(pool_index),
        Some(set_index),
    ),
@@ -2632,7 +2633,7 @@ impl SetDisks {
    }

    let is_inline_buffer = {
        if let Some(sc) = GLOBAL_StorageClass.get() {
        if let Some(sc) = GLOBAL_STORAGE_CLASS.get() {
            sc.should_inline(erasure.shard_file_size(latest_meta.size), false)
        } else {
            false
@@ -3287,12 +3288,7 @@ impl ObjectIO for SetDisks {
    let paths = vec![object.to_string()];
    let lock_acquired = self
        .namespace_lock
        .lock_batch(
            &paths,
            &self.locker_owner,
            std::time::Duration::from_secs(5),
            std::time::Duration::from_secs(10),
        )
        .lock_batch(&paths, &self.locker_owner, Duration::from_secs(5), Duration::from_secs(10))
        .await?;

    if !lock_acquired {
@@ -3303,7 +3299,7 @@ impl ObjectIO for SetDisks {
    let mut user_defined = opts.user_defined.clone();

    let sc_parity_drives = {
        if let Some(sc) = GLOBAL_StorageClass.get() {
        if let Some(sc) = GLOBAL_STORAGE_CLASS.get() {
            sc.get_parity_for_sc(user_defined.get(AMZ_STORAGE_CLASS).cloned().unwrap_or_default().as_str())
        } else {
            None
@@ -3348,7 +3344,7 @@ impl ObjectIO for SetDisks {
    let erasure = erasure_coding::Erasure::new(fi.erasure.data_blocks, fi.erasure.parity_blocks, fi.erasure.block_size);

    let is_inline_buffer = {
        if let Some(sc) = GLOBAL_StorageClass.get() {
        if let Some(sc) = GLOBAL_STORAGE_CLASS.get() {
            sc.should_inline(erasure.shard_file_size(data.size()), opts.versioned)
        } else {
            false
@@ -3919,7 +3915,7 @@ impl StorageAPI for SetDisks {
        bucket.to_string(),
        Some(object.to_string()),
        false,
        Some(rustfs_common::heal_channel::HealChannelPriority::Normal),
        Some(HealChannelPriority::Normal),
        Some(self.pool_index),
        Some(self.set_index),
    ))
@@ -4729,7 +4725,7 @@ impl StorageAPI for SetDisks {
    }

    let sc_parity_drives = {
        if let Some(sc) = GLOBAL_StorageClass.get() {
        if let Some(sc) = GLOBAL_STORAGE_CLASS.get() {
            sc.get_parity_for_sc(user_defined.get(AMZ_STORAGE_CLASS).cloned().unwrap_or_default().as_str())
        } else {
            None
@@ -1,4 +1,3 @@
|
||||
#![allow(clippy::map_entry)]
|
||||
// Copyright 2024 RustFS Team
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
@@ -13,10 +12,12 @@
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
#![allow(clippy::map_entry)]
|
||||
|
||||
use crate::bucket::lifecycle::bucket_lifecycle_ops::init_background_expiry;
|
||||
use crate::bucket::metadata_sys::{self, set_bucket_metadata};
|
||||
use crate::bucket::utils::{check_valid_bucket_name, check_valid_bucket_name_strict, is_meta_bucketname};
|
||||
use crate::config::GLOBAL_StorageClass;
|
||||
use crate::config::GLOBAL_STORAGE_CLASS;
|
||||
use crate::config::storageclass;
|
||||
use crate::disk::endpoint::{Endpoint, EndpointType};
|
||||
use crate::disk::{DiskAPI, DiskInfo, DiskInfoOptions};
|
||||
@@ -1139,7 +1140,7 @@ impl StorageAPI for ECStore {
|
||||
#[tracing::instrument(skip(self))]
|
||||
async fn backend_info(&self) -> rustfs_madmin::BackendInfo {
|
||||
let (standard_sc_parity, rr_sc_parity) = {
|
||||
if let Some(sc) = GLOBAL_StorageClass.get() {
|
||||
if let Some(sc) = GLOBAL_STORAGE_CLASS.get() {
|
||||
let sc_parity = sc
|
||||
.get_parity_for_sc(storageclass::CLASS_STANDARD)
|
||||
.or(Some(self.pools[0].default_parity_count));
|
||||
|
||||
@@ -23,6 +23,7 @@ homepage.workspace = true
|
||||
description = "RustFS MCP (Model Context Protocol) Server"
|
||||
keywords = ["mcp", "s3", "aws", "rustfs", "server"]
|
||||
categories = ["development-tools", "web-programming"]
|
||||
documentation = "https://docs.rs/rustfs-mcp/latest/rustfs_mcp/"
|
||||
|
||||
[[bin]]
|
||||
name = "rustfs-mcp"
|
||||
@@ -36,7 +37,7 @@ aws-sdk-s3.workspace = true
|
||||
tokio = { workspace = true, features = ["io-std", "io-util", "macros", "signal"] }
|
||||
|
||||
# MCP SDK with macros support
|
||||
rmcp = { version = "0.3.0", features = ["server", "transport-io", "macros"] }
|
||||
rmcp = { workspace = true, features = ["server", "transport-io", "macros"] }
|
||||
|
||||
# Command line argument parsing
|
||||
clap = { workspace = true, features = ["derive", "env"] }
|
||||
@@ -44,27 +45,17 @@ clap = { workspace = true, features = ["derive", "env"] }
 # Serialization (still needed for S3 data structures)
 serde.workspace = true
 serde_json.workspace = true
-schemars = "1.0"
+schemars = { workspace = true }
 
 # Error handling
 anyhow.workspace = true
 thiserror.workspace = true
 
 # Logging
 tracing.workspace = true
 tracing-subscriber.workspace = true
 
 # File handling and MIME type detection
-mime_guess = "2.0"
-tokio-util = { version = "0.7", features = ["io"] }
-futures-util = "0.3"
-
-# Async trait support for trait abstractions
-async-trait = "0.1"
+mime_guess = { workspace = true }
 
 [dev-dependencies]
-# Testing framework and utilities
-mockall = "0.13"
-tempfile = "3.12"
-tokio-test = "0.4"
-test-case = "3.3"
@@ -27,11 +27,12 @@ documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/"
 
 [dependencies]
 rustfs-config = { workspace = true, features = ["notify"] }
+rustfs-ecstore = { workspace = true }
 rustfs-utils = { workspace = true, features = ["path", "sys"] }
 async-trait = { workspace = true }
 chrono = { workspace = true, features = ["serde"] }
 dashmap = { workspace = true }
-rustfs-ecstore = { workspace = true }
+futures = { workspace = true }
 form_urlencoded = { workspace = true }
 once_cell = { workspace = true }
 quick-xml = { workspace = true, features = ["serialize", "async-tokio"] }
@@ -49,8 +50,6 @@ url = { workspace = true }
 urlencoding = { workspace = true }
 wildmatch = { workspace = true, features = ["serde"] }
 
-
-
 [dev-dependencies]
 tokio = { workspace = true, features = ["test-util"] }
 reqwest = { workspace = true }
@@ -13,11 +13,11 @@
 // limitations under the License.
 
 use rustfs_config::notify::{
-    DEFAULT_LIMIT, DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_TOPIC,
-    MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR,
-    WEBHOOK_QUEUE_LIMIT,
+    DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
+    MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT,
+    WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
 };
-use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_ON, KV, KVS};
+use rustfs_ecstore::config::{Config, KV, KVS};
 use rustfs_notify::arn::TargetID;
 use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger};
 use rustfs_notify::{initialize, notification_system};
@@ -14,11 +14,11 @@
 
 // Using Global Accessories
 use rustfs_config::notify::{
-    DEFAULT_LIMIT, DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_TOPIC,
-    MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR,
-    WEBHOOK_QUEUE_LIMIT,
+    DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
+    MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT,
+    WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
 };
-use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_ON, KV, KVS};
+use rustfs_ecstore::config::{Config, KV, KVS};
 use rustfs_notify::arn::TargetID;
 use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger};
 use rustfs_notify::{initialize, notification_system};
@@ -82,6 +82,15 @@ pub enum TargetError {
 
     #[error("Target is disabled")]
     Disabled,
+
+    #[error("Configuration parsing error: {0}")]
+    ParseError(String),
+
+    #[error("Failed to save configuration: {0}")]
+    SaveConfig(String),
+
+    #[error("Server not initialized: {0}")]
+    ServerNotInitialized(String),
 }
 
 /// Error types for the notification system
@@ -112,7 +121,7 @@ pub enum NotificationError {
     AlreadyInitialized,
 
     #[error("I/O error: {0}")]
-    Io(std::io::Error),
+    Io(io::Error),
 
     #[error("Failed to read configuration: {0}")]
     ReadConfig(String),
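The new `TargetError` variants above rely on thiserror's `#[error("...")]` attribute to generate `Display`. A hand-written std-only sketch of what that derive amounts to for two of the added variants (hypothetical standalone enum, not the crate's full type):

```rust
use std::fmt;

// What `#[error("Failed to save configuration: {0}")]` and friends expand to,
// written out manually for illustration.
#[derive(Debug, PartialEq)]
enum TargetError {
    SaveConfig(String),
    ServerNotInitialized(String),
}

impl fmt::Display for TargetError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TargetError::SaveConfig(msg) => write!(f, "Failed to save configuration: {msg}"),
            TargetError::ServerNotInitialized(msg) => write!(f, "Server not initialized: {msg}"),
        }
    }
}

fn main() {
    let e = TargetError::SaveConfig("disk full".to_string());
    assert_eq!(e.to_string(), "Failed to save configuration: disk full");
}
```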
@@ -19,40 +19,17 @@ use crate::{
 use async_trait::async_trait;
 use rumqttc::QoS;
 use rustfs_config::notify::{
-    DEFAULT_DIR, DEFAULT_LIMIT, ENV_MQTT_BROKER, ENV_MQTT_ENABLE, ENV_MQTT_KEEP_ALIVE_INTERVAL, ENV_MQTT_PASSWORD, ENV_MQTT_QOS,
-    ENV_MQTT_QUEUE_DIR, ENV_MQTT_QUEUE_LIMIT, ENV_MQTT_RECONNECT_INTERVAL, ENV_MQTT_TOPIC, ENV_MQTT_USERNAME,
-    ENV_WEBHOOK_AUTH_TOKEN, ENV_WEBHOOK_CLIENT_CERT, ENV_WEBHOOK_CLIENT_KEY, ENV_WEBHOOK_ENABLE, ENV_WEBHOOK_ENDPOINT,
-    ENV_WEBHOOK_QUEUE_DIR, ENV_WEBHOOK_QUEUE_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS,
-    MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN,
-    WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
+    DEFAULT_DIR, DEFAULT_LIMIT, ENV_NOTIFY_MQTT_KEYS, ENV_NOTIFY_WEBHOOK_KEYS, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL,
+    MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME,
+    NOTIFY_MQTT_KEYS, NOTIFY_WEBHOOK_KEYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT,
+    WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
 };
 use rustfs_config::{DEFAULT_DELIMITER, ENV_WORD_DELIMITER_DASH};
-use rustfs_ecstore::config::{ENABLE_KEY, ENABLE_ON, KVS};
+use rustfs_ecstore::config::KVS;
 use std::collections::HashSet;
 use std::time::Duration;
 use tracing::{debug, warn};
 use url::Url;
 
-/// Helper function to get values from environment variables or KVS configurations.
-///
-/// It will give priority to reading from environment variables such as `BASE_ENV_KEY_ID` and fall back to the KVS configuration if it fails.
-fn get_config_value(id: &str, base_env_key: &str, config_key: &str, config: &KVS) -> Option<String> {
-    let env_key = if id != DEFAULT_DELIMITER {
-        format!(
-            "{}{}{}",
-            base_env_key,
-            DEFAULT_DELIMITER,
-            id.to_uppercase().replace(ENV_WORD_DELIMITER_DASH, DEFAULT_DELIMITER)
-        )
-    } else {
-        base_env_key.to_string()
-    };
-
-    match std::env::var(&env_key) {
-        Ok(val) => Some(val),
-        Err(_) => config.lookup(config_key),
-    }
-}
-
 /// Trait for creating targets from configuration
 #[async_trait]
 pub trait TargetFactory: Send + Sync {
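For reference, the removed `get_config_value` helper resolved each key env-first, then fell back to the KVS; the factories now receive a config already merged by the registry. A std-only sketch of the old lookup order, with a `HashMap` standing in for `KVS` and `"_"` for `DEFAULT_DELIMITER` (env-var name below is illustrative):

```rust
use std::collections::HashMap;
use std::env;

// Env-first lookup: try `BASE_ENV_KEY_ID`, fall back to the config store.
fn get_config_value(id: &str, base_env_key: &str, config_key: &str, config: &HashMap<String, String>) -> Option<String> {
    let env_key = if id != "_" {
        format!("{}_{}", base_env_key, id.to_uppercase().replace('-', "_"))
    } else {
        base_env_key.to_string()
    };
    env::var(&env_key).ok().or_else(|| config.get(config_key).cloned())
}

fn main() {
    let mut cfg = HashMap::new();
    cfg.insert("endpoint".to_string(), "http://localhost:3020".to_string());
    // Assuming the env var is unset, the KVS value wins.
    assert_eq!(
        get_config_value("primary", "RUSTFS_NOTIFY_WEBHOOK_ENDPOINT", "endpoint", &cfg),
        Some("http://localhost:3020".to_string())
    );
}
```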
@@ -61,6 +38,14 @@ pub trait TargetFactory: Send + Sync {
 
     /// Validates target configuration
     fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError>;
+
+    /// Returns a set of valid configuration field names for this target type.
+    /// This is used to filter environment variables.
+    fn get_valid_fields(&self) -> HashSet<String>;
+
+    /// Returns a set of valid configuration env field names for this target type.
+    /// This is used to filter environment variables.
+    fn get_valid_env_fields(&self) -> HashSet<String>;
 }
 
 /// Factory for creating Webhook targets
@@ -69,65 +54,42 @@ pub struct WebhookTargetFactory;
 #[async_trait]
 impl TargetFactory for WebhookTargetFactory {
     async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target + Send + Sync>, TargetError> {
-        let get = |base_env_key: &str, config_key: &str| get_config_value(&id, base_env_key, config_key, config);
-
-        let enable = get(ENV_WEBHOOK_ENABLE, ENABLE_KEY)
-            .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true"))
-            .unwrap_or(false);
-
-        if !enable {
-            return Err(TargetError::Configuration("Target is disabled".to_string()));
-        }
-
-        let endpoint = get(ENV_WEBHOOK_ENDPOINT, WEBHOOK_ENDPOINT)
+        // All config values are now read directly from the merged `config` KVS.
+        let endpoint = config
+            .lookup(WEBHOOK_ENDPOINT)
             .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
         let endpoint_url = Url::parse(&endpoint)
             .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{endpoint}')")))?;
 
-        let auth_token = get(ENV_WEBHOOK_AUTH_TOKEN, WEBHOOK_AUTH_TOKEN).unwrap_or_default();
-        let queue_dir = get(ENV_WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string());
-
-        let queue_limit = get(ENV_WEBHOOK_QUEUE_LIMIT, WEBHOOK_QUEUE_LIMIT)
-            .and_then(|v| v.parse::<u64>().ok())
-            .unwrap_or(DEFAULT_LIMIT);
-
-        let client_cert = get(ENV_WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_CERT).unwrap_or_default();
-        let client_key = get(ENV_WEBHOOK_CLIENT_KEY, WEBHOOK_CLIENT_KEY).unwrap_or_default();
-
         let args = WebhookArgs {
-            enable,
+            enable: true, // If we are here, it's already enabled.
            endpoint: endpoint_url,
-            auth_token,
-            queue_dir,
-            queue_limit,
-            client_cert,
-            client_key,
+            auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
+            queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()),
+            queue_limit: config
+                .lookup(WEBHOOK_QUEUE_LIMIT)
+                .and_then(|v| v.parse::<u64>().ok())
+                .unwrap_or(DEFAULT_LIMIT),
+            client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
+            client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
         };
 
         let target = crate::target::webhook::WebhookTarget::new(id, args)?;
         Ok(Box::new(target))
     }
 
-    fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError> {
-        let get = |base_env_key: &str, config_key: &str| get_config_value(id, base_env_key, config_key, config);
-
-        let enable = get(ENV_WEBHOOK_ENABLE, ENABLE_KEY)
-            .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true"))
-            .unwrap_or(false);
-
-        if !enable {
-            return Ok(());
-        }
-
-        let endpoint = get(ENV_WEBHOOK_ENDPOINT, WEBHOOK_ENDPOINT)
+    fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
+        // Validation also uses the merged `config` KVS directly.
+        let endpoint = config
+            .lookup(WEBHOOK_ENDPOINT)
            .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
+        debug!("endpoint: {}", endpoint);
+        let parsed_endpoint = endpoint.trim();
+        Url::parse(parsed_endpoint)
+            .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{parsed_endpoint}')")))?;
 
-        let client_cert = get(ENV_WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_CERT).unwrap_or_default();
-        let client_key = get(ENV_WEBHOOK_CLIENT_KEY, WEBHOOK_CLIENT_KEY).unwrap_or_default();
+        let client_cert = config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default();
+        let client_key = config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default();
 
         if client_cert.is_empty() != client_key.is_empty() {
             return Err(TargetError::Configuration(
@@ -135,15 +97,21 @@ impl TargetFactory for WebhookTargetFactory {
             ));
         }
 
-        let queue_dir = get(ENV_WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_DIR)
-            .and_then(|v| v.parse::<String>().ok())
-            .unwrap_or(DEFAULT_DIR.to_string());
+        let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string());
         if !queue_dir.is_empty() && !std::path::Path::new(&queue_dir).is_absolute() {
             return Err(TargetError::Configuration("Webhook queue directory must be an absolute path".to_string()));
         }
 
         Ok(())
     }
+
+    fn get_valid_fields(&self) -> HashSet<String> {
+        NOTIFY_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
+    }
+
+    fn get_valid_env_fields(&self) -> HashSet<String> {
+        ENV_NOTIFY_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
+    }
 }
 
 /// Factory for creating MQTT targets
@@ -152,84 +120,57 @@ pub struct MQTTTargetFactory;
 #[async_trait]
 impl TargetFactory for MQTTTargetFactory {
     async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target + Send + Sync>, TargetError> {
-        let get = |base_env_key: &str, config_key: &str| get_config_value(&id, base_env_key, config_key, config);
-
-        let enable = get(ENV_MQTT_ENABLE, ENABLE_KEY)
-            .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true"))
-            .unwrap_or(false);
-
-        if !enable {
-            return Err(TargetError::Configuration("Target is disabled".to_string()));
-        }
-
-        let broker =
-            get(ENV_MQTT_BROKER, MQTT_BROKER).ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
+        let broker = config
+            .lookup(MQTT_BROKER)
+            .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
         let broker_url = Url::parse(&broker)
             .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;
 
-        let topic =
-            get(ENV_MQTT_TOPIC, MQTT_TOPIC).ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?;
-
-        let qos = get(ENV_MQTT_QOS, MQTT_QOS)
-            .and_then(|v| v.parse::<u8>().ok())
-            .map(|q| match q {
-                0 => QoS::AtMostOnce,
-                1 => QoS::AtLeastOnce,
-                2 => QoS::ExactlyOnce,
-                _ => QoS::AtLeastOnce,
-            })
-            .unwrap_or(QoS::AtLeastOnce);
-
-        let username = get(ENV_MQTT_USERNAME, MQTT_USERNAME).unwrap_or_default();
-        let password = get(ENV_MQTT_PASSWORD, MQTT_PASSWORD).unwrap_or_default();
-
-        let reconnect_interval = get(ENV_MQTT_RECONNECT_INTERVAL, MQTT_RECONNECT_INTERVAL)
-            .and_then(|v| v.parse::<u64>().ok())
-            .map(Duration::from_secs)
-            .unwrap_or_else(|| Duration::from_secs(5));
-
-        let keep_alive = get(ENV_MQTT_KEEP_ALIVE_INTERVAL, MQTT_KEEP_ALIVE_INTERVAL)
-            .and_then(|v| v.parse::<u64>().ok())
-            .map(Duration::from_secs)
-            .unwrap_or_else(|| Duration::from_secs(30));
-
-        let queue_dir = get(ENV_MQTT_QUEUE_DIR, MQTT_QUEUE_DIR)
-            .and_then(|v| v.parse::<String>().ok())
-            .unwrap_or(DEFAULT_DIR.to_string());
-        let queue_limit = get(ENV_MQTT_QUEUE_LIMIT, MQTT_QUEUE_LIMIT)
-            .and_then(|v| v.parse::<u64>().ok())
-            .unwrap_or(DEFAULT_LIMIT);
+        let topic = config
+            .lookup(MQTT_TOPIC)
+            .ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?;
 
         let args = MQTTArgs {
-            enable,
+            enable: true, // Assumed enabled.
             broker: broker_url,
             topic,
-            qos,
-            username,
-            password,
-            max_reconnect_interval: reconnect_interval,
-            keep_alive,
-            queue_dir,
-            queue_limit,
+            qos: config
+                .lookup(MQTT_QOS)
+                .and_then(|v| v.parse::<u8>().ok())
+                .map(|q| match q {
+                    0 => QoS::AtMostOnce,
+                    1 => QoS::AtLeastOnce,
+                    2 => QoS::ExactlyOnce,
+                    _ => QoS::AtLeastOnce,
+                })
+                .unwrap_or(QoS::AtLeastOnce),
+            username: config.lookup(MQTT_USERNAME).unwrap_or_default(),
+            password: config.lookup(MQTT_PASSWORD).unwrap_or_default(),
+            max_reconnect_interval: config
+                .lookup(MQTT_RECONNECT_INTERVAL)
+                .and_then(|v| v.parse::<u64>().ok())
+                .map(Duration::from_secs)
+                .unwrap_or_else(|| Duration::from_secs(5)),
+            keep_alive: config
+                .lookup(MQTT_KEEP_ALIVE_INTERVAL)
+                .and_then(|v| v.parse::<u64>().ok())
+                .map(Duration::from_secs)
+                .unwrap_or_else(|| Duration::from_secs(30)),
+            queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()),
+            queue_limit: config
+                .lookup(MQTT_QUEUE_LIMIT)
+                .and_then(|v| v.parse::<u64>().ok())
+                .unwrap_or(DEFAULT_LIMIT),
         };
 
         let target = crate::target::mqtt::MQTTTarget::new(id, args)?;
         Ok(Box::new(target))
     }
 
-    fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError> {
-        let get = |base_env_key: &str, config_key: &str| get_config_value(id, base_env_key, config_key, config);
-
-        let enable = get(ENV_MQTT_ENABLE, ENABLE_KEY)
-            .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true"))
-            .unwrap_or(false);
-
-        if !enable {
-            return Ok(());
-        }
-
-        let broker =
-            get(ENV_MQTT_BROKER, MQTT_BROKER).ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
+    fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
+        let broker = config
+            .lookup(MQTT_BROKER)
+            .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
         let url = Url::parse(&broker)
             .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;
 
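The QoS mapping in the hunk above clamps any out-of-range or unparsable value to at-least-once delivery. It can be exercised in isolation; a std-only sketch with a local enum standing in for `rumqttc::QoS`:

```rust
// Stand-in for rumqttc::QoS, so the mapping is runnable without the crate.
#[derive(Debug, PartialEq)]
enum QoS {
    AtMostOnce,
    AtLeastOnce,
    ExactlyOnce,
}

// Mirrors the `.and_then(parse).map(match).unwrap_or(...)` chain in the diff.
fn parse_qos(v: Option<&str>) -> QoS {
    v.and_then(|v| v.parse::<u8>().ok())
        .map(|q| match q {
            0 => QoS::AtMostOnce,
            1 => QoS::AtLeastOnce,
            2 => QoS::ExactlyOnce,
            _ => QoS::AtLeastOnce, // out-of-range values fall back to QoS 1
        })
        .unwrap_or(QoS::AtLeastOnce) // missing or non-numeric values too

}

fn main() {
    assert_eq!(parse_qos(Some("0")), QoS::AtMostOnce);
    assert_eq!(parse_qos(Some("2")), QoS::ExactlyOnce);
    assert_eq!(parse_qos(Some("7")), QoS::AtLeastOnce);
    assert_eq!(parse_qos(None), QoS::AtLeastOnce);
}
```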
@@ -240,11 +181,11 @@ impl TargetFactory for MQTTTargetFactory {
             }
         }
 
-        if get(ENV_MQTT_TOPIC, MQTT_TOPIC).is_none() {
+        if config.lookup(MQTT_TOPIC).is_none() {
             return Err(TargetError::Configuration("Missing MQTT topic".to_string()));
         }
 
-        if let Some(qos_str) = get(ENV_MQTT_QOS, MQTT_QOS) {
+        if let Some(qos_str) = config.lookup(MQTT_QOS) {
             let qos = qos_str
                 .parse::<u8>()
                 .map_err(|_| TargetError::Configuration("Invalid QoS value".to_string()))?;
@@ -253,14 +194,12 @@ impl TargetFactory for MQTTTargetFactory {
             }
         }
 
-        let queue_dir = get(ENV_MQTT_QUEUE_DIR, MQTT_QUEUE_DIR)
-            .and_then(|v| v.parse::<String>().ok())
-            .unwrap_or(DEFAULT_DIR.to_string());
+        let queue_dir = config.lookup(MQTT_QUEUE_DIR).unwrap_or_default();
         if !queue_dir.is_empty() {
             if !std::path::Path::new(&queue_dir).is_absolute() {
                 return Err(TargetError::Configuration("MQTT queue directory must be an absolute path".to_string()));
             }
-            if let Some(qos_str) = get(ENV_MQTT_QOS, MQTT_QOS) {
+            if let Some(qos_str) = config.lookup(MQTT_QOS) {
                 if qos_str == "0" {
                     warn!("Using queue_dir with QoS 0 may result in event loss");
                 }
@@ -269,4 +208,12 @@ impl TargetFactory for MQTTTargetFactory {
 
         Ok(())
     }
+
+    fn get_valid_fields(&self) -> HashSet<String> {
+        NOTIFY_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
+    }
+
+    fn get_valid_env_fields(&self) -> HashSet<String> {
+        ENV_NOTIFY_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
+    }
 }
 
@@ -210,10 +210,10 @@ impl NotificationSystem {
             return Ok(());
         }
 
-        if let Err(e) = rustfs_ecstore::config::com::save_server_config(store, &new_config).await {
-            error!("Failed to save config: {}", e);
-            return Err(NotificationError::SaveConfig(e.to_string()));
-        }
+        // if let Err(e) = rustfs_ecstore::config::com::save_server_config(store, &new_config).await {
+        //     error!("Failed to save config: {}", e);
+        //     return Err(NotificationError::SaveConfig(e.to_string()));
+        // }
 
         info!("Configuration updated. Reloading system...");
         self.reload_config(new_config).await
@@ -323,7 +323,6 @@ impl NotificationSystem {
         metrics: Arc<NotificationMetrics>,
         semaphore: Arc<Semaphore>,
     ) -> mpsc::Sender<()> {
-        // Event Stream Processing Using Batch Version
         stream::start_event_stream_with_batching(store, target, metrics, semaphore)
     }
 
@@ -348,6 +347,7 @@ impl NotificationSystem {
         self.update_config(new_config.clone()).await;
 
         // Create a new target from configuration
+        // This function will now be responsible for merging env, creating and persisting the final configuration.
         let targets: Vec<Box<dyn Target + Send + Sync>> = self
             .registry
             .create_targets_from_config(&new_config)
@@ -18,11 +18,12 @@ use crate::{
     factory::{MQTTTargetFactory, TargetFactory, WebhookTargetFactory},
     target::Target,
 };
-use rustfs_config::notify::NOTIFY_ROUTE_PREFIX;
+use futures::stream::{FuturesUnordered, StreamExt};
+use rustfs_config::notify::{ENABLE_KEY, ENABLE_ON, NOTIFY_ROUTE_PREFIX};
 use rustfs_config::{DEFAULT_DELIMITER, ENV_PREFIX};
-use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_OFF, ENABLE_ON, KVS};
-use std::collections::HashMap;
-use tracing::{debug, error, info};
+use rustfs_ecstore::config::{Config, KVS};
+use std::collections::{HashMap, HashSet};
+use tracing::{debug, error, info, warn};
 
 /// Registry for managing target factories
 pub struct TargetRegistry {
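The registry's `create_targets_from_config` merges three configuration layers per instance, with environment overrides winning over the file's instance section, which wins over the file's defaults. A std-only sketch of that precedence, with `HashMap` standing in for the KVS type:

```rust
use std::collections::HashMap;

// Merge priority (last write wins): defaults < file instance config < env overrides.
fn merge(
    default_cfg: &HashMap<String, String>,
    file_cfg: &HashMap<String, String>,
    env_cfg: &HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = default_cfg.clone();
    merged.extend(file_cfg.clone()); // instance section overrides defaults
    merged.extend(env_cfg.clone()); // env vars override everything
    merged
}

fn main() {
    let default_cfg = HashMap::from([
        ("queue_limit".to_string(), "10000".to_string()),
        ("enable".to_string(), "off".to_string()),
    ]);
    let file_cfg = HashMap::from([("enable".to_string(), "on".to_string())]);
    let env_cfg = HashMap::from([("queue_limit".to_string(), "500".to_string())]);

    let m = merge(&default_cfg, &file_cfg, &env_cfg);
    assert_eq!(m["enable"], "on"); // from the file's instance section
    assert_eq!(m["queue_limit"], "500"); // from the environment
}
```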
@@ -74,77 +75,204 @@ impl TargetRegistry {
     }
 
-    /// Creates all targets from a configuration
+    /// Create all notification targets from system configuration and environment variables.
+    /// This method processes the creation of each target concurrently as follows:
+    /// 1. Iterate through all registered target types (e.g. webhook, mqtt).
+    /// 2. For each type, resolve its configuration in the configuration file and environment variables.
+    /// 3. Identify all target instance IDs that need to be created.
+    /// 4. Combine the default configuration, file configuration, and environment variable configuration for each instance.
+    /// 5. If the instance is enabled, create an asynchronous task for it to instantiate.
+    /// 6. Concurrently execute all creation tasks and collect the results.
     pub async fn create_targets_from_config(&self, config: &Config) -> Result<Vec<Box<dyn Target + Send + Sync>>, TargetError> {
-        let mut targets: Vec<Box<dyn Target + Send + Sync>> = Vec::new();
+        // Collect only environment variables with the relevant prefix to reduce memory usage
+        let all_env: Vec<(String, String)> = std::env::vars().filter(|(key, _)| key.starts_with(ENV_PREFIX)).collect();
+        // A collection of asynchronous tasks for concurrently executing target creation
+        let mut tasks = FuturesUnordered::new();
+        let mut final_config = config.clone(); // Clone a configuration for aggregating the final result
+        // 1. Traverse all registered factories and process them by target type
+        for (target_type, factory) in &self.factories {
+            tracing::Span::current().record("target_type", target_type.as_str());
+            info!("Start working on target types...");
 
-        // Iterate through configuration sections
-        for (section, subsections) in &config.0 {
-            // Only process notification sections
-            if !section.starts_with(NOTIFY_ROUTE_PREFIX) {
-                continue;
-            }
+            // 2. Prepare the configuration source
+            // 2.1. Get the configuration segment in the file, e.g. 'notify_webhook'
+            let section_name = format!("{NOTIFY_ROUTE_PREFIX}{target_type}");
+            let file_configs = config.0.get(&section_name).cloned().unwrap_or_default();
+            // 2.2. Get the default configuration for that type
+            let default_cfg = file_configs.get(DEFAULT_DELIMITER).cloned().unwrap_or_default();
+            debug!(?default_cfg, "Get the default configuration");
+
+            // *** Optimization point 1: Get all legitimate fields of the current target type ***
+            let valid_fields = factory.get_valid_fields();
+            debug!(?valid_fields, "Get the legitimate configuration fields");
+
+            // 3. Resolve instance IDs and configuration overrides from environment variables
+            let mut instance_ids_from_env = HashSet::new();
+            // 3.1. Instance discovery: Based on the '..._ENABLE_INSTANCEID' format
+            let enable_prefix = format!("{ENV_PREFIX}{NOTIFY_ROUTE_PREFIX}{target_type}_{ENABLE_KEY}_").to_uppercase();
+            for (key, value) in &all_env {
+                if value.eq_ignore_ascii_case(ENABLE_ON)
+                    || value.eq_ignore_ascii_case("true")
+                    || value.eq_ignore_ascii_case("1")
+                    || value.eq_ignore_ascii_case("yes")
+                {
+                    if let Some(id) = key.strip_prefix(&enable_prefix) {
+                        if !id.is_empty() {
+                            instance_ids_from_env.insert(id.to_lowercase());
+                        }
+                    }
+                }
+            }
 
-            // Extract target type from section name
-            let target_type = section.trim_start_matches(NOTIFY_ROUTE_PREFIX);
+            // 3.2. Parse all relevant environment variable configurations
+            // 3.2.1. Build environment variable prefixes such as 'RUSTFS_NOTIFY_WEBHOOK_'
+            let env_prefix = format!("{ENV_PREFIX}{NOTIFY_ROUTE_PREFIX}{target_type}_").to_uppercase();
+            // 3.2.2. 'env_overrides' is used to store configurations parsed from environment variables in the format: {instance id -> {field -> value}}
+            let mut env_overrides: HashMap<String, HashMap<String, String>> = HashMap::new();
+            for (key, value) in &all_env {
+                if let Some(rest) = key.strip_prefix(&env_prefix) {
+                    // Use rsplitn to split from the right side to properly extract the INSTANCE_ID at the end
+                    // Format: <FIELD_NAME>_<INSTANCE_ID> or <FIELD_NAME>
+                    let mut parts = rest.rsplitn(2, '_');
 
-            // Iterate through subsections (each representing a target instance)
-            for (target_id, target_config) in subsections {
-                // Skip disabled targets
-                let enable_from_config = target_config.lookup(ENABLE_KEY).unwrap_or_else(|| ENABLE_OFF.to_string());
-                debug!("Target enablement from config: {}/{}: {}", target_type, target_id, enable_from_config);
-                // Check environment variable for target enablement example: RUSTFS_NOTIFY_WEBHOOK_ENABLE|RUSTFS_NOTIFY_WEBHOOK_ENABLE_[TARGET_ID]
-                let env_key = if target_id == DEFAULT_DELIMITER {
-                    // If no specific target ID, use the base target type, example: RUSTFS_NOTIFY_WEBHOOK_ENABLE
-                    format!(
-                        "{}{}{}{}{}",
-                        ENV_PREFIX,
-                        NOTIFY_ROUTE_PREFIX,
-                        target_type.to_uppercase(),
-                        DEFAULT_DELIMITER,
-                        ENABLE_KEY
-                    )
-                } else {
-                    // If specific target ID, append it to the key, example: RUSTFS_NOTIFY_WEBHOOK_ENABLE_[TARGET_ID]
-                    format!(
-                        "{}{}{}{}{}{}{}",
-                        ENV_PREFIX,
-                        NOTIFY_ROUTE_PREFIX,
-                        target_type.to_uppercase(),
-                        DEFAULT_DELIMITER,
-                        ENABLE_KEY,
-                        DEFAULT_DELIMITER,
-                        target_id.to_uppercase()
-                    )
-                }
-                .to_uppercase();
-                debug!("Target env key: {},Target id: {}", env_key, target_id);
-                let enable_from_env = std::env::var(&env_key)
-                    .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true"))
-                    .unwrap_or(false);
-                debug!("Target env value: {},key: {},Target id: {}", enable_from_env, env_key, target_id);
-                debug!(
-                    "Target enablement from env: {}/{}: result: {}",
-                    target_type, target_id, enable_from_config
-                );
-                if enable_from_config != ENABLE_ON && !enable_from_env {
-                    info!("Skipping disabled target: {}/{}", target_type, target_id);
-                    continue;
-                }
-                debug!("create target: {}/{} start", target_type, target_id);
-                // Create target
-                match self.create_target(target_type, target_id.clone(), target_config).await {
-                    Ok(target) => {
-                        info!("Created target: {}/{}", target_type, target_id);
-                        targets.push(target);
-                    }
-                    Err(e) => {
-                        error!("Failed to create target {}/{}: reason: {}", target_type, target_id, e);
-                    }
+                    // The first part from the right is INSTANCE_ID
+                    let instance_id_part = parts.next().unwrap_or(DEFAULT_DELIMITER);
+                    // The remaining part is FIELD_NAME
+                    let field_name_part = parts.next();
+
+                    let (field_name, instance_id) = match field_name_part {
+                        // Case 1: The format is <FIELD_NAME>_<INSTANCE_ID>
+                        // e.g., rest = "ENDPOINT_PRIMARY" -> field_name="ENDPOINT", instance_id="PRIMARY"
+                        Some(field) => (field.to_lowercase(), instance_id_part.to_lowercase()),
+                        // Case 2: The format is <FIELD_NAME> (no INSTANCE_ID)
+                        // e.g., rest = "ENABLE" -> field_name="ENABLE", instance_id=DEFAULT_DELIMITER (the generic configuration)
+                        None => (instance_id_part.to_lowercase(), DEFAULT_DELIMITER.to_string()),
+                    };
+
+                    // *** Optimization point 2: Verify whether the parsed field_name is legal ***
+                    if !field_name.is_empty() && valid_fields.contains(&field_name) {
+                        debug!(
+                            instance_id = %if instance_id.is_empty() { DEFAULT_DELIMITER } else { &instance_id },
+                            %field_name,
+                            %value,
+                            "Parsing to environment variables"
+                        );
+                        env_overrides
+                            .entry(instance_id)
+                            .or_default()
+                            .insert(field_name, value.clone());
+                    } else {
+                        // Ignore illegal field names
+                        warn!(
+                            field_name = %field_name,
+                            "Ignore environment variable fields, not found in the list of valid fields for target type {}",
+                            target_type
+                        );
+                    }
+                }
+            }
+            debug!(?env_overrides, "Complete the environment variable analysis");
+
+            // 4. Determine all instance IDs that need to be processed
+            let mut all_instance_ids: HashSet<String> =
+                file_configs.keys().filter(|k| *k != DEFAULT_DELIMITER).cloned().collect();
+            all_instance_ids.extend(instance_ids_from_env);
+            debug!(?all_instance_ids, "Determine all instance IDs");
+
+            // 5. Merge configurations and create tasks for each instance
+            for id in all_instance_ids {
+                // 5.1. Merge configuration, priority: Environment variables > File instance configuration > File default configuration
+                let mut merged_config = default_cfg.clone();
+                // Apply the instance-specific configuration from the file
+                if let Some(file_instance_cfg) = file_configs.get(&id) {
+                    merged_config.extend(file_instance_cfg.clone());
+                }
+                // Apply the instance-specific environment variable configuration
+                if let Some(env_instance_cfg) = env_overrides.get(&id) {
+                    // Convert HashMap<String, String> to KVS
+                    let mut kvs_from_env = KVS::new();
+                    for (k, v) in env_instance_cfg {
+                        kvs_from_env.insert(k.clone(), v.clone());
+                    }
+                    merged_config.extend(kvs_from_env);
+                }
+                debug!(instance_id = %id, ?merged_config, "Complete configuration merge");
+
+                // 5.2. Check if the instance is enabled
+                let enabled = merged_config
+                    .lookup(ENABLE_KEY)
+                    .map(|v| {
+                        v.eq_ignore_ascii_case(ENABLE_ON)
+                            || v.eq_ignore_ascii_case("true")
+                            || v.eq_ignore_ascii_case("1")
+                            || v.eq_ignore_ascii_case("yes")
+                    })
+                    .unwrap_or(false);
+
+                if enabled {
+                    info!(instance_id = %id, "Target is enabled, ready to create a task");
+                    // 5.3. Create asynchronous tasks for enabled instances
+                    let target_type_clone = target_type.clone();
+                    let tid = id.clone();
+                    let merged_config_arc = std::sync::Arc::new(merged_config);
+                    tasks.push(async move {
+                        let result = factory.create_target(tid.clone(), &merged_config_arc).await;
+                        (target_type_clone, tid, result, std::sync::Arc::clone(&merged_config_arc))
+                    });
+                } else {
+                    info!(instance_id = %id, "Skip the disabled target and will be removed from the final configuration");
+                    // Remove disabled target from final configuration
+                    final_config.0.entry(section_name.clone()).or_default().remove(&id);
                 }
             }
         }
 
-        Ok(targets)
+        // 6. Concurrently execute all creation tasks and collect results
+        let mut successful_targets = Vec::new();
+        let mut successful_configs = Vec::new();
+        while let Some((target_type, id, result, final_config)) = tasks.next().await {
+            match result {
+                Ok(target) => {
+                    info!(target_type = %target_type, instance_id = %id, "Create a target successfully");
+                    successful_targets.push(target);
+                    successful_configs.push((target_type, id, final_config));
+                }
+                Err(e) => {
+                    error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create a target");
+                }
+            }
+        }
+
+        // 7. Aggregate new configuration and write back to system configuration
+        if !successful_configs.is_empty() {
+            info!(
+                "Prepare to update {} successfully created target configurations to the system configuration...",
+                successful_configs.len()
+            );
+            let mut new_config = config.clone();
+            for (target_type, id, kvs) in successful_configs {
+                let section_name = format!("{NOTIFY_ROUTE_PREFIX}{target_type}").to_lowercase();
+                new_config.0.entry(section_name).or_default().insert(id, (*kvs).clone());
+            }
+
+            let Some(store) = rustfs_ecstore::global::new_object_layer_fn() else {
+                return Err(TargetError::ServerNotInitialized(
+                    "Failed to save target configuration: server storage not initialized".to_string(),
+                ));
+            };
+
+            match rustfs_ecstore::config::com::save_server_config(store, &new_config).await {
+                Ok(_) => {
+                    info!("The new configuration was saved to the system successfully.")
+                }
+                Err(e) => {
+                    error!("Failed to save the new configuration: {}", e);
+                    return Err(TargetError::SaveConfig(e.to_string()));
+                }
+            }
+        }
+
+        info!(count = successful_targets.len(), "All target processing completed");
+        Ok(successful_targets)
     }
 }
@@ -109,3 +109,11 @@ impl std::fmt::Display for ChannelTargetType {
}
}
}

pub fn parse_bool(value: &str) -> Result<bool, TargetError> {
match value.to_lowercase().as_str() {
"true" | "on" | "yes" | "1" => Ok(true),
"false" | "off" | "no" | "0" => Ok(false),
_ => Err(TargetError::ParseError(format!("Unable to parse boolean: {value}"))),
}
}

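The `parse_bool` helper added in this hunk maps the usual on/off spellings to a boolean. A minimal standalone sketch of the same semantics, using a plain `String` error in place of the crate's `TargetError` so it compiles on its own:

```rust
// Standalone sketch of the parse_bool semantics from the hunk above.
// The crate's version returns Result<bool, TargetError>; a String error
// stands in here so the example is self-contained.
fn parse_bool(value: &str) -> Result<bool, String> {
    match value.to_lowercase().as_str() {
        "true" | "on" | "yes" | "1" => Ok(true),
        "false" | "off" | "no" | "0" => Ok(false),
        _ => Err(format!("Unable to parse boolean: {value}")),
    }
}

fn main() {
    // Case-insensitive: an enable flag may arrive as "on", "ON", "1", ...
    assert_eq!(parse_bool("ON"), Ok(true));
    assert_eq!(parse_bool("no"), Ok(false));
    assert!(parse_bool("maybe").is_err());
}
```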
@@ -36,7 +36,8 @@ webhook = ["dep:reqwest"]
kafka = ["dep:rdkafka"]

[dependencies]
rustfs-config = { workspace = true, features = ["constants"] }
rustfs-config = { workspace = true, features = ["constants", "observability"] }
rustfs-utils = { workspace = true, features = ["ip", "path"] }
async-trait = { workspace = true }
chrono = { workspace = true }
flexi_logger = { workspace = true, features = ["trc", "kv"] }
@@ -49,7 +50,6 @@ opentelemetry_sdk = { workspace = true, features = ["rt-tokio"] }
opentelemetry-stdout = { workspace = true }
opentelemetry-otlp = { workspace = true, features = ["grpc-tonic", "gzip-tonic", "trace", "metrics", "logs", "internal-logs"] }
opentelemetry-semantic-conventions = { workspace = true, features = ["semconv_experimental"] }
rustfs-utils = { workspace = true, features = ["ip"] }
serde = { workspace = true }
smallvec = { workspace = true, features = ["serde"] }
tracing = { workspace = true, features = ["std", "attributes"] }

@@ -12,11 +12,24 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use rustfs_config::{
APP_NAME, DEFAULT_LOG_DIR, DEFAULT_LOG_FILENAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, DEFAULT_LOG_ROTATION_SIZE_MB,
DEFAULT_LOG_ROTATION_TIME, DEFAULT_OBS_LOG_FILENAME, DEFAULT_SINK_FILE_LOG_FILE, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO,
SERVICE_VERSION, USE_STDOUT,
use rustfs_config::observability::{
DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY, DEFAULT_SINKS_FILE_BUFFER_SIZE, DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS,
DEFAULT_SINKS_FILE_FLUSH_THRESHOLD, DEFAULT_SINKS_KAFKA_BATCH_SIZE, DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS,
DEFAULT_SINKS_KAFKA_BROKERS, DEFAULT_SINKS_KAFKA_TOPIC, DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN, DEFAULT_SINKS_WEBHOOK_ENDPOINT,
DEFAULT_SINKS_WEBHOOK_MAX_RETRIES, DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS, ENV_AUDIT_LOGGER_QUEUE_CAPACITY, ENV_OBS_ENDPOINT,
ENV_OBS_ENVIRONMENT, ENV_OBS_LOCAL_LOGGING_ENABLED, ENV_OBS_LOG_FILENAME, ENV_OBS_LOG_KEEP_FILES,
ENV_OBS_LOG_ROTATION_SIZE_MB, ENV_OBS_LOG_ROTATION_TIME, ENV_OBS_LOGGER_LEVEL, ENV_OBS_METER_INTERVAL, ENV_OBS_SAMPLE_RATIO,
ENV_OBS_SERVICE_NAME, ENV_OBS_SERVICE_VERSION, ENV_SINKS_FILE_BUFFER_SIZE, ENV_SINKS_FILE_FLUSH_INTERVAL_MS,
ENV_SINKS_FILE_FLUSH_THRESHOLD, ENV_SINKS_FILE_PATH, ENV_SINKS_KAFKA_BATCH_SIZE, ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS,
ENV_SINKS_KAFKA_BROKERS, ENV_SINKS_KAFKA_TOPIC, ENV_SINKS_WEBHOOK_AUTH_TOKEN, ENV_SINKS_WEBHOOK_ENDPOINT,
ENV_SINKS_WEBHOOK_MAX_RETRIES, ENV_SINKS_WEBHOOK_RETRY_DELAY_MS,
};
use rustfs_config::observability::{ENV_OBS_LOG_DIRECTORY, ENV_OBS_USE_STDOUT};
use rustfs_config::{
APP_NAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, DEFAULT_LOG_ROTATION_SIZE_MB, DEFAULT_LOG_ROTATION_TIME,
DEFAULT_OBS_LOG_FILENAME, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT,
};
use rustfs_utils::dirs::get_log_directory_to_string;
use serde::{Deserialize, Serialize};
use std::env;

@@ -52,14 +65,14 @@ impl OtelConfig {
pub fn extract_otel_config_from_env(endpoint: Option<String>) -> OtelConfig {
let endpoint = if let Some(endpoint) = endpoint {
if endpoint.is_empty() {
env::var("RUSTFS_OBS_ENDPOINT").unwrap_or_else(|_| "".to_string())
env::var(ENV_OBS_ENDPOINT).unwrap_or_else(|_| "".to_string())
} else {
endpoint
}
} else {
env::var("RUSTFS_OBS_ENDPOINT").unwrap_or_else(|_| "".to_string())
env::var(ENV_OBS_ENDPOINT).unwrap_or_else(|_| "".to_string())
};
let mut use_stdout = env::var("RUSTFS_OBS_USE_STDOUT")
let mut use_stdout = env::var(ENV_OBS_USE_STDOUT)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(USE_STDOUT));
@@ -70,51 +83,48 @@
OtelConfig {
endpoint,
use_stdout,
sample_ratio: env::var("RUSTFS_OBS_SAMPLE_RATIO")
sample_ratio: env::var(ENV_OBS_SAMPLE_RATIO)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(SAMPLE_RATIO)),
meter_interval: env::var("RUSTFS_OBS_METER_INTERVAL")
meter_interval: env::var(ENV_OBS_METER_INTERVAL)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(METER_INTERVAL)),
service_name: env::var("RUSTFS_OBS_SERVICE_NAME")
service_name: env::var(ENV_OBS_SERVICE_NAME)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(APP_NAME.to_string())),
service_version: env::var("RUSTFS_OBS_SERVICE_VERSION")
service_version: env::var(ENV_OBS_SERVICE_VERSION)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(SERVICE_VERSION.to_string())),
environment: env::var("RUSTFS_OBS_ENVIRONMENT")
environment: env::var(ENV_OBS_ENVIRONMENT)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(ENVIRONMENT.to_string())),
logger_level: env::var("RUSTFS_OBS_LOGGER_LEVEL")
logger_level: env::var(ENV_OBS_LOGGER_LEVEL)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_LOG_LEVEL.to_string())),
local_logging_enabled: env::var("RUSTFS_OBS_LOCAL_LOGGING_ENABLED")
local_logging_enabled: env::var(ENV_OBS_LOCAL_LOGGING_ENABLED)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(false)),
log_directory: env::var("RUSTFS_OBS_LOG_DIRECTORY")
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_LOG_DIR.to_string())),
log_filename: env::var("RUSTFS_OBS_LOG_FILENAME")
log_directory: Some(get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY)),
log_filename: env::var(ENV_OBS_LOG_FILENAME)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_OBS_LOG_FILENAME.to_string())),
log_rotation_size_mb: env::var("RUSTFS_OBS_LOG_ROTATION_SIZE_MB")
log_rotation_size_mb: env::var(ENV_OBS_LOG_ROTATION_SIZE_MB)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_LOG_ROTATION_SIZE_MB)), // Default to 100 MB
log_rotation_time: env::var("RUSTFS_OBS_LOG_ROTATION_TIME")
log_rotation_time: env::var(ENV_OBS_LOG_ROTATION_TIME)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_LOG_ROTATION_TIME.to_string())), // Default to "Day"
log_keep_files: env::var("RUSTFS_OBS_LOG_KEEP_FILES")
log_keep_files: env::var(ENV_OBS_LOG_KEEP_FILES)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_LOG_KEEP_FILES)), // Default to keeping 30 log files
@@ -154,16 +164,22 @@ impl KafkaSinkConfig {
impl Default for KafkaSinkConfig {
fn default() -> Self {
Self {
brokers: env::var("RUSTFS_SINKS_KAFKA_BROKERS")
brokers: env::var(ENV_SINKS_KAFKA_BROKERS)
.ok()
.filter(|s| !s.trim().is_empty())
.unwrap_or_else(|| "localhost:9092".to_string()),
topic: env::var("RUSTFS_SINKS_KAFKA_TOPIC")
.unwrap_or_else(|| DEFAULT_SINKS_KAFKA_BROKERS.to_string()),
topic: env::var(ENV_SINKS_KAFKA_TOPIC)
.ok()
.filter(|s| !s.trim().is_empty())
.unwrap_or_else(|| "rustfs_sink".to_string()),
batch_size: Some(100),
batch_timeout_ms: Some(1000),
.unwrap_or_else(|| DEFAULT_SINKS_KAFKA_TOPIC.to_string()),
batch_size: env::var(ENV_SINKS_KAFKA_BATCH_SIZE)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_SINKS_KAFKA_BATCH_SIZE)),
batch_timeout_ms: env::var(ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS)),
}
}
}
@@ -186,16 +202,22 @@ impl WebhookSinkConfig {
impl Default for WebhookSinkConfig {
fn default() -> Self {
Self {
endpoint: env::var("RUSTFS_SINKS_WEBHOOK_ENDPOINT")
endpoint: env::var(ENV_SINKS_WEBHOOK_ENDPOINT)
.ok()
.filter(|s| !s.trim().is_empty())
.unwrap_or_else(|| "http://localhost:8080".to_string()),
auth_token: env::var("RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN")
.unwrap_or_else(|| DEFAULT_SINKS_WEBHOOK_ENDPOINT.to_string()),
auth_token: env::var(ENV_SINKS_WEBHOOK_AUTH_TOKEN)
.ok()
.filter(|s| !s.trim().is_empty())
.unwrap_or_else(|| "rustfs_webhook_token".to_string()),
max_retries: Some(3),
retry_delay_ms: Some(100),
.unwrap_or_else(|| DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN.to_string()),
max_retries: env::var(ENV_SINKS_WEBHOOK_MAX_RETRIES)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_SINKS_WEBHOOK_MAX_RETRIES)),
retry_delay_ms: env::var(ENV_SINKS_WEBHOOK_RETRY_DELAY_MS)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS)),
}
}
}
@@ -210,18 +232,6 @@ pub struct FileSinkConfig {
}

impl FileSinkConfig {
pub fn get_default_log_path() -> String {
let temp_dir = env::temp_dir().join(DEFAULT_LOG_FILENAME);
if let Err(e) = std::fs::create_dir_all(&temp_dir) {
eprintln!("Failed to create log directory: {e}");
return DEFAULT_LOG_DIR.to_string();
}
temp_dir
.join(DEFAULT_SINK_FILE_LOG_FILE)
.to_str()
.unwrap_or(DEFAULT_LOG_DIR)
.to_string()
}
pub fn new() -> Self {
Self::default()
}
@@ -230,22 +240,19 @@ impl FileSinkConfig {
impl Default for FileSinkConfig {
fn default() -> Self {
Self {
path: env::var("RUSTFS_SINKS_FILE_PATH")
.ok()
.filter(|s| !s.trim().is_empty())
.unwrap_or_else(Self::get_default_log_path),
buffer_size: env::var("RUSTFS_SINKS_FILE_BUFFER_SIZE")
path: get_log_directory_to_string(ENV_SINKS_FILE_PATH),
buffer_size: env::var(ENV_SINKS_FILE_BUFFER_SIZE)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(8192)),
flush_interval_ms: env::var("RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS")
.or(Some(DEFAULT_SINKS_FILE_BUFFER_SIZE)),
flush_interval_ms: env::var(ENV_SINKS_FILE_FLUSH_INTERVAL_MS)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(1000)),
flush_threshold: env::var("RUSTFS_SINKS_FILE_FLUSH_THRESHOLD")
.or(Some(DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS)),
flush_threshold: env::var(ENV_SINKS_FILE_FLUSH_THRESHOLD)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(100)),
.or(Some(DEFAULT_SINKS_FILE_FLUSH_THRESHOLD)),
}
}
}
@@ -280,7 +287,10 @@ pub struct LoggerConfig {
impl LoggerConfig {
pub fn new() -> Self {
Self {
queue_capacity: Some(10000),
queue_capacity: env::var(ENV_AUDIT_LOGGER_QUEUE_CAPACITY)
.ok()
.and_then(|v| v.parse().ok())
.or(Some(DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY)),
}
}
}

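The config hunks above all repeat one pattern: read an environment variable by its `ENV_*` constant, try to parse it, and fall back to a `DEFAULT_*` constant. A hedged generic sketch of that pattern (the key names and defaults below are illustrative, not the crate's real constants):

```rust
use std::env;

// Generic form of the env-override pattern used by KafkaSinkConfig,
// WebhookSinkConfig, FileSinkConfig and LoggerConfig above: the variable
// wins only when it is set AND parses; otherwise the default applies.
fn env_or_default<T: std::str::FromStr>(key: &str, default: T) -> T {
    env::var(key).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
}

fn main() {
    env::set_var("EXAMPLE_KAFKA_BATCH_SIZE", "250");
    let batch: u32 = env_or_default("EXAMPLE_KAFKA_BATCH_SIZE", 100);
    assert_eq!(batch, 250);

    // Unset (or unparsable) values fall back to the default.
    let timeout: u64 = env_or_default("EXAMPLE_UNSET_TIMEOUT_MS", 1000);
    assert_eq!(timeout, 1000);
}
```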
@@ -48,7 +48,7 @@ impl FileSink {
}
let file = if file_exists {
// If the file exists, open it in append mode
tracing::debug!("FileSink: File exists, opening in append mode.");
tracing::debug!("FileSink: File exists, opening in append mode. Path: {:?}", path);
OpenOptions::new().append(true).create(true).open(&path).await?
} else {
// If the file does not exist, create it

@@ -14,7 +14,6 @@

use crate::{AppConfig, SinkConfig, UnifiedLogEntry};
use async_trait::async_trait;
use rustfs_config::DEFAULT_SINK_FILE_LOG_FILE;
use std::sync::Arc;

#[cfg(feature = "file")]

@@ -47,8 +46,12 @@ pub async fn create_sinks(config: &AppConfig) -> Vec<Arc<dyn Sink>> {
sinks.push(Arc::new(kafka::KafkaSink::new(
producer,
kafka_config.topic.clone(),
kafka_config.batch_size.unwrap_or(100),
kafka_config.batch_timeout_ms.unwrap_or(1000),
kafka_config
.batch_size
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_KAFKA_BATCH_SIZE),
kafka_config
.batch_timeout_ms
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS),
)));
tracing::info!("Kafka sink created for topic: {}", kafka_config.topic);
}
@@ -57,25 +60,35 @@
}
}
}

#[cfg(feature = "webhook")]
SinkConfig::Webhook(webhook_config) => {
sinks.push(Arc::new(webhook::WebhookSink::new(
webhook_config.endpoint.clone(),
webhook_config.auth_token.clone(),
webhook_config.max_retries.unwrap_or(3),
webhook_config.retry_delay_ms.unwrap_or(100),
webhook_config
.max_retries
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_WEBHOOK_MAX_RETRIES),
webhook_config
.retry_delay_ms
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS),
)));
tracing::info!("Webhook sink created for endpoint: {}", webhook_config.endpoint);
}

#[cfg(feature = "file")]
SinkConfig::File(file_config) => {
tracing::debug!("FileSink: Using path: {}", file_config.path);
match file::FileSink::new(
format!("{}/{}", file_config.path.clone(), DEFAULT_SINK_FILE_LOG_FILE),
file_config.buffer_size.unwrap_or(8192),
file_config.flush_interval_ms.unwrap_or(1000),
file_config.flush_threshold.unwrap_or(100),
format!("{}/{}", file_config.path.clone(), rustfs_config::DEFAULT_SINK_FILE_LOG_FILE),
file_config
.buffer_size
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_BUFFER_SIZE),
file_config
.flush_interval_ms
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS),
file_config
.flush_threshold
.unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_FLUSH_THRESHOLD),
)
.await
{

@@ -29,9 +29,9 @@ use opentelemetry_semantic_conventions::{
SCHEMA_URL,
attribute::{DEPLOYMENT_ENVIRONMENT_NAME, NETWORK_LOCAL_ADDRESS, SERVICE_VERSION as OTEL_SERVICE_VERSION},
};
use rustfs_config::observability::ENV_OBS_LOG_DIRECTORY;
use rustfs_config::{
APP_NAME, DEFAULT_LOG_DIR, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO,
SERVICE_VERSION, USE_STDOUT,
APP_NAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT,
};
use rustfs_utils::get_local_ip_with_default;
use smallvec::SmallVec;
@@ -293,7 +293,8 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
}
} else {
// Obtain the log directory and file name configuration
let log_directory = config.log_directory.as_deref().unwrap_or(DEFAULT_LOG_DIR);
let default_log_directory = rustfs_utils::dirs::get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY);
let log_directory = config.log_directory.as_deref().unwrap_or(default_log_directory.as_str());
let log_filename = config.log_filename.as_deref().unwrap_or(service_name);

if let Err(e) = fs::create_dir_all(log_directory) {

@@ -37,4 +37,5 @@ rustfs-common.workspace = true
flatbuffers = { workspace = true }
prost = { workspace = true }
tonic = { workspace = true, features = ["transport"] }
tonic-build = { workspace = true }
tonic-prost = { workspace = true }
tonic-prost-build = { workspace = true }
@@ -1,3 +1,17 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#![allow(unused_imports)]
#![allow(clippy::all)]
pub mod proto_gen;

@@ -1 +1,15 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod node_service;

File diff suppressed because it is too large
@@ -53,21 +53,19 @@ fn main() -> Result<(), AnyError> {
let flatbuffer_out_dir = project_root_dir.join("generated").join("flatbuffers_generated");
// let descriptor_set_path = PathBuf::from(env::var(ENV_OUT_DIR).unwrap()).join("proto-descriptor.bin");

tonic_build::configure()
tonic_prost_build::configure()
.out_dir(proto_out_dir)
// .file_descriptor_set_path(descriptor_set_path)
.protoc_arg("--experimental_allow_proto3_optional")
.compile_well_known_types(true)
.bytes(["."])
.bytes(".")
.emit_rerun_if_changed(false)
.compile_protos(proto_files, &[proto_dir.clone()])
.compile_protos(proto_files, &[proto_dir.to_string_lossy().as_ref()])
.map_err(|e| format!("Failed to generate protobuf file: {e}."))?;

// protos/gen/mod.rs
let generated_mod_rs_path = project_root_dir.join("generated").join("proto_gen").join("mod.rs");

let mut generated_mod_rs = fs::File::create(generated_mod_rs_path)?;
writeln!(&mut generated_mod_rs, "pub mod node_service;")?;
writeln!(
&mut generated_mod_rs,
r#"// Copyright 2024 RustFS Team
@@ -84,12 +82,13 @@ fn main() -> Result<(), AnyError> {
// See the License for the specific language governing permissions and
// limitations under the License."#
)?;
writeln!(&mut generated_mod_rs, "\n")?;
writeln!(&mut generated_mod_rs, "pub mod node_service;")?;
generated_mod_rs.flush()?;

let generated_mod_rs_path = project_root_dir.join("generated").join("mod.rs");

let mut generated_mod_rs = fs::File::create(generated_mod_rs_path)?;
writeln!(&mut generated_mod_rs, "#![allow(unused_imports)]")?;

writeln!(
&mut generated_mod_rs,
r#"// Copyright 2024 RustFS Team
@@ -106,6 +105,9 @@ fn main() -> Result<(), AnyError> {
// See the License for the specific language governing permissions and
// limitations under the License."#
)?;
writeln!(&mut generated_mod_rs, "\n")?;
writeln!(&mut generated_mod_rs, "#![allow(unused_imports)]")?;
writeln!(&mut generated_mod_rs, "\n")?;
writeln!(&mut generated_mod_rs, "#![allow(clippy::all)]")?;
writeln!(&mut generated_mod_rs, "pub mod proto_gen;")?;
generated_mod_rs.flush()?;

@@ -12,7 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use rustfs_config::{DEFAULT_LOG_DIR, DEFAULT_LOG_FILENAME};
use std::env;
use std::fs;
use std::path::{Path, PathBuf};

/// Get the absolute path to the current project
@@ -57,6 +59,72 @@ pub fn get_project_root() -> Result<PathBuf, String> {
Err("The project root directory cannot be obtained. Please check the running environment and project structure.".to_string())
}

/// Get the log directory as a string
/// This function will try to find a writable log directory in the following order:
pub fn get_log_directory_to_string(key: &str) -> String {
get_log_directory(key).to_string_lossy().to_string()
}

/// Get the log directory
/// This function will try to find a writable log directory in the following order:
pub fn get_log_directory(key: &str) -> PathBuf {
// Environment variables are specified
if let Ok(log_dir) = env::var(key) {
let path = PathBuf::from(log_dir);
if ensure_directory_writable(&path) {
return path;
}
}

// System temporary directory
if let Ok(mut temp_dir) = env::temp_dir().canonicalize() {
temp_dir.push(DEFAULT_LOG_FILENAME);
temp_dir.push(DEFAULT_LOG_DIR);
if ensure_directory_writable(&temp_dir) {
return temp_dir;
}
}

// User home directory
if let Ok(home_dir) = env::var("HOME").or_else(|_| env::var("USERPROFILE")) {
let mut path = PathBuf::from(home_dir);
path.push(format!(".{DEFAULT_LOG_FILENAME}"));
path.push(DEFAULT_LOG_DIR);
if ensure_directory_writable(&path) {
return path;
}
}

// Current working directory
if let Ok(current_dir) = env::current_dir() {
let mut path = current_dir;
path.push(DEFAULT_LOG_DIR);
if ensure_directory_writable(&path) {
return path;
}
}

// Relative path
PathBuf::from(DEFAULT_LOG_DIR)
}

fn ensure_directory_writable(path: &PathBuf) -> bool {
// Try to create the directory
if fs::create_dir_all(path).is_err() {
return false;
}

// Check write permissions
let test_file = path.join(".write_test");
match fs::write(&test_file, "test") {
Ok(_) => {
let _ = fs::remove_file(&test_file);
true
}
Err(_) => false,
}
}

#[cfg(test)]
mod tests {
use super::*;

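`get_log_directory` above walks a fixed candidate list (env var, temp dir, home directory, cwd) and keeps the first directory that passes a write probe. The same idea, condensed into a standalone sketch (the directory name below is illustrative):

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

// First-writable-candidate selection, mirroring the fallback chain and the
// ensure_directory_writable probe in the hunk above: create the directory,
// then prove it is writable by writing and removing a small probe file.
fn first_writable(candidates: Vec<PathBuf>) -> Option<PathBuf> {
    candidates.into_iter().find(|dir| {
        if fs::create_dir_all(dir).is_err() {
            return false;
        }
        let probe = dir.join(".write_test");
        let ok = fs::write(&probe, "test").is_ok();
        let _ = fs::remove_file(&probe);
        ok
    })
}

fn main() {
    let tmp = env::temp_dir().join("rustfs_dirs_example_logs");
    let picked = first_writable(vec![tmp.clone()]);
    assert_eq!(picked, Some(tmp.clone()));
    let _ = fs::remove_dir_all(&tmp);
}
```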
@@ -16,7 +16,7 @@ use crate::admin::router::Operation;
use crate::auth::{check_key_valid, get_session_token};
use http::{HeaderMap, StatusCode};
use matchit::Params;
use rustfs_config::notify::{NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use rustfs_config::notify::{ENABLE_KEY, ENABLE_ON, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use rustfs_notify::EventName;
use rustfs_notify::rules::{BucketNotificationConfig, PatternRules};
use s3s::header::CONTENT_LENGTH;
@@ -75,11 +75,8 @@ impl Operation for SetNotificationTarget {
let mut kvs_map: HashMap<String, String> = serde_json::from_slice(&body)
.map_err(|e| s3_error!(InvalidArgument, "invalid json body for target config: {}", e))?;
// If the enable key is missing, default it to "on"
if !kvs_map.contains_key(rustfs_ecstore::config::ENABLE_KEY) {
kvs_map.insert(
rustfs_ecstore::config::ENABLE_KEY.to_string(),
rustfs_ecstore::config::ENABLE_ON.to_string(),
);
if !kvs_map.contains_key(ENABLE_KEY) {
kvs_map.insert(ENABLE_KEY.to_string(), ENABLE_ON.to_string());
}

let kvs = rustfs_ecstore::config::KVS(

@@ -25,6 +25,7 @@ use handlers::{
sts, tier, user,
};

use crate::admin::handlers::event::{ListNotificationTargets, RemoveNotificationTarget, SetNotificationTarget};
use handlers::{GetReplicationMetricsHandler, ListRemoteTargetHandler, RemoveRemoteTargetHandler, SetRemoteTargetHandler};
use hyper::Method;
use router::{AdminOperation, S3Router};
@@ -365,5 +366,28 @@ fn register_user_route(r: &mut S3Router<AdminOperation>) -> std::io::Result<()>
AdminOperation(&policies::SetPolicyForUserOrGroup {}),
)?;

r.insert(
Method::GET,
format!("{}{}", ADMIN_PREFIX, "/v3/target-list").as_str(),
AdminOperation(&ListNotificationTargets {}),
)?;

r.insert(
Method::POST,
format!("{}{}", ADMIN_PREFIX, "/v3/target-set").as_str(),
AdminOperation(&SetNotificationTarget {}),
)?;

// Remove notification target
// This endpoint removes a notification target based on its type and name.
// target-remove?target_type=xxx&target_name=xxx
// * `target_type` - Target type, such as "notify_webhook" or "notify_mqtt".
// * `target_name` - A unique name for a Target, such as "1".
r.insert(
Method::DELETE,
format!("{}{}", ADMIN_PREFIX, "/v3/target-remove").as_str(),
AdminOperation(&RemoveNotificationTarget {}),
)?;

Ok(())
}

@@ -65,7 +65,7 @@ pub struct Opt {
pub secret_key: String,

/// Enable console server
#[arg(long, default_value_t = true, env = "RUSTFS_CONSOLE_ENABLE")]
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_ENABLE, env = "RUSTFS_CONSOLE_ENABLE")]
pub console_enable: bool,

/// Observability endpoint for traces, metrics and logs; only gRPC mode is supported.

@@ -37,8 +37,8 @@ use rustfs_config::DEFAULT_DELIMITER;
use rustfs_ecstore::bucket::metadata_sys::init_bucket_metadata_sys;
use rustfs_ecstore::cmd::bucket_replication::init_bucket_replication_pool;
use rustfs_ecstore::config as ecconfig;
use rustfs_ecstore::config::GLOBAL_ConfigSys;
use rustfs_ecstore::config::GLOBAL_ServerConfig;
use rustfs_ecstore::config::GLOBAL_CONFIG_SYS;
use rustfs_ecstore::config::GLOBAL_SERVER_CONFIG;
use rustfs_ecstore::store_api::BucketOptions;
use rustfs_ecstore::{
StorageAPI,
@@ -159,7 +159,7 @@ async fn run(opt: config::Opt) -> Result<()> {

ecconfig::init();
// config system configuration
GLOBAL_ConfigSys.init(store.clone()).await?;
GLOBAL_CONFIG_SYS.init(store.clone()).await?;

// Initialize event notifier
init_event_notifier().await;
@@ -281,7 +281,7 @@ pub(crate) async fn init_event_notifier() {
info!("Initializing event notifier...");

// 1. Get the global configuration loaded by ecstore
let server_config = match GLOBAL_ServerConfig.get() {
let server_config = match GLOBAL_SERVER_CONFIG.get() {
Some(config) => config.clone(), // Clone the config to pass ownership
None => {
error!("Event notifier initialization failed: Global server config not loaded.");

@@ -58,11 +58,11 @@ export RUSTFS_CONSOLE_ADDRESS=":9001"
#export RUSTFS_OBS_SERVICE_NAME=rustfs # service name
#export RUSTFS_OBS_SERVICE_VERSION=0.1.0 # service version
export RUSTFS_OBS_ENVIRONMENT=develop # environment name
export RUSTFS_OBS_LOGGER_LEVEL=debug # log level; supports trace, debug, info, warn, error
export RUSTFS_OBS_LOGGER_LEVEL=info # log level; supports trace, debug, info, warn, error
export RUSTFS_OBS_LOCAL_LOGGING_ENABLED=true # whether to enable local logging
export RUSTFS_OBS_LOG_DIRECTORY="$current_dir/deploy/logs" # Log directory
export RUSTFS_OBS_LOG_ROTATION_TIME="minute" # Log rotation time unit, can be "second", "minute", "hour", "day"
export RUSTFS_OBS_LOG_ROTATION_SIZE_MB=1 # Log rotation size in MB
export RUSTFS_OBS_LOG_ROTATION_TIME="hour" # Log rotation time unit, can be "second", "minute", "hour", "day"
export RUSTFS_OBS_LOG_ROTATION_SIZE_MB=100 # Log rotation size in MB

export RUSTFS_SINKS_FILE_PATH="$current_dir/deploy/logs"
export RUSTFS_SINKS_FILE_BUFFER_SIZE=12
@@ -89,10 +89,18 @@ export OTEL_INSTRUMENTATION_SCHEMA_URL="https://opentelemetry.io/schemas/1.31.0"
export OTEL_INSTRUMENTATION_ATTRIBUTES="env=production"

# notify
export RUSTFS_NOTIFY_WEBHOOK_ENABLE="true" # whether to enable webhook notifications
export RUSTFS_NOTIFY_WEBHOOK_ENABLE="on" # whether to enable webhook notifications
export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT="http://[::]:3020/webhook" # webhook notification endpoint
export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR="$current_dir/deploy/logs/notify"

export RUSTFS_NOTIFY_WEBHOOK_ENABLE_PRIMARY="on" # whether to enable webhook notifications
export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT_PRIMARY="http://[::]:3020/webhook" # webhook notification endpoint
export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR_PRIMARY="$current_dir/deploy/logs/notify"

export RUSTFS_NOTIFY_WEBHOOK_ENABLE_MASTER="on" # whether to enable webhook notifications
export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT_MASTER="http://[::]:3020/webhook" # webhook notification endpoint
export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR_MASTER="$current_dir/deploy/logs/notify"

export RUSTFS_NS_SCANNER_INTERVAL=60 # object scan interval in seconds
# export RUSTFS_SKIP_BACKGROUND_TASK=true