From c5df1f92c2cb107cb828ecc01dd97136c6c9db78 Mon Sep 17 00:00:00 2001 From: houseme Date: Wed, 30 Jul 2025 19:02:10 +0800 Subject: [PATCH] refactor: replace `lazy_static` with `LazyLock` and `notify` crate registry `create_targets_from_config` (#311) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * improve code for notify * improve code for logger and fix typo (#272) * Add GNU to build.yml (#275) * fix unzip error * fix url change error * Simplify user experience and integrate console and endpoint * Add gnu to build.yml * upgrade version * feat: add `cargo clippy --fix --allow-dirty` to pre-commit command (#282) Resolves #277 - Add --fix flag to automatically fix clippy warnings - Add --allow-dirty flag to run on dirty Git trees - Improves code quality in pre-commit workflow * fix: the issue where preview fails when the path length exceeds 255 characters (#280) * fix * fix: improve Windows build support and CI/CD workflow (#283) - Fix Windows zip command issue by using PowerShell Compress-Archive - Add Windows support for OSS upload with ossutil - Replace Chinese comments with English in build.yml - Fix bash syntax error in package_zip function - Improve code formatting and consistency - Update various configuration files for better cross-platform support Resolves Windows build failures in GitHub Actions. 
* fix: update link in README.md leading to a 404 error (#285) * add rustfs.spec for rustfs (#103) add support on loongarch64 * improve cargo.lock * build(deps): bump the dependencies group with 5 updates (#289) Bumps the dependencies group with 5 updates: | Package | From | To | | --- | --- | --- | | [hyper-util](https://github.com/hyperium/hyper-util) | `0.1.15` | `0.1.16` | | [rand](https://github.com/rust-random/rand) | `0.9.1` | `0.9.2` | | [serde_json](https://github.com/serde-rs/json) | `1.0.140` | `1.0.141` | | [strum](https://github.com/Peternator7/strum) | `0.27.1` | `0.27.2` | | [sysinfo](https://github.com/GuillaumeGomez/sysinfo) | `0.36.0` | `0.36.1` | Updates `hyper-util` from 0.1.15 to 0.1.16 - [Release notes](https://github.com/hyperium/hyper-util/releases) - [Changelog](https://github.com/hyperium/hyper-util/blob/master/CHANGELOG.md) - [Commits](https://github.com/hyperium/hyper-util/compare/v0.1.15...v0.1.16) Updates `rand` from 0.9.1 to 0.9.2 - [Release notes](https://github.com/rust-random/rand/releases) - [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md) - [Commits](https://github.com/rust-random/rand/compare/rand_core-0.9.1...rand_core-0.9.2) Updates `serde_json` from 1.0.140 to 1.0.141 - [Release notes](https://github.com/serde-rs/json/releases) - [Commits](https://github.com/serde-rs/json/compare/v1.0.140...v1.0.141) Updates `strum` from 0.27.1 to 0.27.2 - [Release notes](https://github.com/Peternator7/strum/releases) - [Changelog](https://github.com/Peternator7/strum/blob/master/CHANGELOG.md) - [Commits](https://github.com/Peternator7/strum/compare/v0.27.1...v0.27.2) Updates `sysinfo` from 0.36.0 to 0.36.1 - [Changelog](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md) - [Commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.36.0...v0.36.1) --- updated-dependencies: - dependency-name: hyper-util dependency-version: 0.1.16 dependency-type: direct:production update-type: 
version-update:semver-patch dependency-group: dependencies - dependency-name: rand dependency-version: 0.9.2 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: dependencies - dependency-name: serde_json dependency-version: 1.0.141 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: dependencies - dependency-name: strum dependency-version: 0.27.2 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: dependencies - dependency-name: sysinfo dependency-version: 0.36.1 dependency-type: direct:production update-type: version-update:semver-patch dependency-group: dependencies ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * improve code for logger * improve * upgrade * refactor: optimize the build workflow, unify `latest` file handling, and simplify artifact uploads (#293) * Refactor: DatabaseManagerSystem as global Signed-off-by: junxiang Mu <1948535941@qq.com> * fix: fmt Signed-off-by: junxiang Mu <1948535941@qq.com> * Test: add e2e_test for s3select Signed-off-by: junxiang Mu <1948535941@qq.com> * Test: add test script for e2e Signed-off-by: junxiang Mu <1948535941@qq.com> * improve code for registry and integration * improve code for registry `create_targets_from_config` * fix * Feature up/ilm (#305) * fix * fix * fix * fix delete-marker expiration. add api_restore. 
* fix * time retry object upload * lock file * make fmt * fix * restore object * fix * fix * serde-rs-xml -> quick-xml * fix * checksum * fix * fix * fix * fix * fix * fix * fix * transfer lang to english * upgrade clap version from 4.5.41 to 4.5.42 * refactor: replace `lazy_static` with `LazyLock` * add router * fix: modify comment * improve code * fix typos * fix * fix: modify name and fmt * improve code for registry * fix test --------- Signed-off-by: dependabot[bot] Signed-off-by: junxiang Mu <1948535941@qq.com> Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com> Co-authored-by: 安正超 Co-authored-by: shiro.lee <69624924+shiroleeee@users.noreply.github.com> Co-authored-by: Marco Orlandin Co-authored-by: zhangwenlong Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: junxiang Mu <1948535941@qq.com> Co-authored-by: likewu --- Cargo.lock | 406 +++++++++--- Cargo.toml | 39 +- crates/checksums/Cargo.toml | 17 +- crates/config/Cargo.toml | 3 - crates/config/src/constants/app.rs | 26 +- crates/config/src/notify/mod.rs | 10 +- crates/config/src/notify/mqtt.rs | 30 + crates/config/src/notify/webhook.rs | 24 + crates/config/src/observability/config.rs | 291 +-------- crates/config/src/observability/file.rs | 67 +- crates/config/src/observability/kafka.rs | 47 +- crates/config/src/observability/logger.rs | 35 -- crates/config/src/observability/mod.rs | 16 +- crates/config/src/observability/otel.rs | 83 --- crates/config/src/observability/sink.rs | 39 -- crates/config/src/observability/webhook.rs | 51 +- crates/e2e_test/Cargo.toml | 8 +- crates/ecstore/src/config/com.rs | 27 +- crates/ecstore/src/config/mod.rs | 58 +- crates/ecstore/src/config/notify.rs | 142 ++++- crates/ecstore/src/config/storageclass.rs | 54 +- crates/ecstore/src/global.rs | 2 - crates/ecstore/src/set_disk.rs | 38 +- crates/ecstore/src/store.rs | 7 +- crates/mcp/Cargo.toml | 17 +- crates/notify/Cargo.toml | 5 +- 
crates/notify/examples/full_demo.rs | 8 +- crates/notify/examples/full_demo_one.rs | 8 +- crates/notify/src/error.rs | 11 +- crates/notify/src/factory.rs | 235 +++---- crates/notify/src/integration.rs | 10 +- crates/notify/src/registry.rs | 256 ++++++-- crates/notify/src/target/mod.rs | 8 + crates/obs/Cargo.toml | 4 +- crates/obs/src/config.rs | 124 ++-- crates/obs/src/sinks/file.rs | 2 +- crates/obs/src/sinks/mod.rs | 33 +- crates/obs/src/telemetry.rs | 7 +- crates/protos/Cargo.toml | 3 +- crates/protos/src/generated/mod.rs | 14 + crates/protos/src/generated/proto_gen/mod.rs | 14 + .../src/generated/proto_gen/node_service.rs | 588 +++++++++--------- crates/protos/src/main.rs | 16 +- crates/utils/src/dirs.rs | 68 ++ rustfs/src/admin/handlers/event.rs | 9 +- rustfs/src/admin/mod.rs | 24 + rustfs/src/config/mod.rs | 2 +- rustfs/src/main.rs | 8 +- scripts/run.sh | 16 +- 49 files changed, 1575 insertions(+), 1435 deletions(-) delete mode 100644 crates/config/src/observability/logger.rs delete mode 100644 crates/config/src/observability/otel.rs delete mode 100644 crates/config/src/observability/sink.rs diff --git a/Cargo.lock b/Cargo.lock index 0dd4d78d..21885d16 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -975,7 +975,7 @@ dependencies = [ "hyper-util", "pin-project-lite", "rustls 0.21.12", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-native-certs 0.8.1", "rustls-pki-types", "tokio", @@ -1191,7 +1191,7 @@ dependencies = [ "hyper 1.6.0", "hyper-util", "pin-project-lite", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-pemfile 2.2.0", "rustls-pki-types", "tokio", @@ -1741,9 +1741,9 @@ dependencies = [ [[package]] name = "clap" -version = "4.5.41" +version = "4.5.42" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "be92d32e80243a54711e5d7ce823c35c41c9d929dc4ab58e1276f625841aadf9" +checksum = "ed87a9d530bb41a67537289bafcac159cb3ee28460e0a4571123d2a778a6a882" dependencies = [ "clap_builder", "clap_derive", @@ -1751,9 +1751,9 @@ dependencies = 
[ [[package]] name = "clap_builder" -version = "4.5.41" +version = "4.5.42" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "707eab41e9622f9139419d573eca0900137718000c517d47da73045f54331c3d" +checksum = "64f4f3f3c77c94aff3c7e9aac9a2ca1974a5adf392a8bb751e827d6d127ab966" dependencies = [ "anstream", "anstyle", @@ -1947,18 +1947,18 @@ dependencies = [ [[package]] name = "const-str" -version = "0.6.3" +version = "0.6.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "041fbfcf8e7054df725fb9985297e92422cdc80fcf313665f5ca3d761bb63f4c" +checksum = "451d0640545a0553814b4c646eb549343561618838e9b42495f466131fe3ad49" dependencies = [ "const-str-proc-macro", ] [[package]] name = "const-str-proc-macro" -version = "0.6.3" +version = "0.6.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f801882b7ecd4188f4bca0317f34e022d623590d85893d7024b18d14f2a3b40b" +checksum = "95013972663dd72254b963e48857284080001ffee418731f065fcf5290a5530d" dependencies = [ "proc-macro2", "quote", @@ -2334,8 +2334,18 @@ version = "0.20.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc7f46116c46ff9ab3eb1597a45688b6715c6e628b5c133e288e709a29bcb4ee" dependencies = [ - "darling_core", - "darling_macro", + "darling_core 0.20.11", + "darling_macro 0.20.11", +] + +[[package]] +name = "darling" +version = "0.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a79c4acb1fd5fa3d9304be4c76e031c54d2e92d172a393e24b19a14fe8532fe9" +dependencies = [ + "darling_core 0.21.0", + "darling_macro 0.21.0", ] [[package]] @@ -2352,13 +2362,38 @@ dependencies = [ "syn 2.0.104", ] +[[package]] +name = "darling_core" +version = "0.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "74875de90daf30eb59609910b84d4d368103aaec4c924824c6799b28f77d6a1d" +dependencies = [ + "fnv", + "ident_case", + "proc-macro2", + "quote", + "strsim", + "syn 2.0.104", +] + 
[[package]] name = "darling_macro" version = "0.20.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc34b93ccb385b40dc71c6fceac4b2ad23662c7eeb248cf10d529b7e055b6ead" dependencies = [ - "darling_core", + "darling_core 0.20.11", + "quote", + "syn 2.0.104", +] + +[[package]] +name = "darling_macro" +version = "0.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e79f8e61677d5df9167cd85265f8e5f64b215cdea3fb55eebc3e622e44c7a146" +dependencies = [ + "darling_core 0.21.0", "quote", "syn 2.0.104", ] @@ -2962,7 +2997,7 @@ version = "0.20.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2d5bcf7b024d6835cfb3d473887cd966994907effbe9227e8c8219824d06c4e8" dependencies = [ - "darling", + "darling 0.20.11", "proc-macro2", "quote", "syn 2.0.104", @@ -3575,6 +3610,12 @@ version = "1.0.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813" +[[package]] +name = "dyn-clone" +version = "1.0.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555" + [[package]] name = "e2e_test" version = "0.0.5" @@ -3595,7 +3636,7 @@ dependencies = [ "serde_json", "serial_test", "tokio", - "tonic", + "tonic 0.14.0", "url", ] @@ -3700,7 +3741,7 @@ version = "0.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "76d07902c93376f1e96c34abc4d507c0911df3816cef50b01f5a2ff3ad8c370d" dependencies = [ - "darling", + "darling 0.20.11", "proc-macro2", "quote", "syn 2.0.104", @@ -4796,7 +4837,7 @@ dependencies = [ "hyper 1.6.0", "hyper-util", "log", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-native-certs 0.8.1", "rustls-pki-types", "tokio", @@ -5279,16 +5320,17 @@ dependencies = [ [[package]] name = "keyring" -version = "3.6.2" +version = "3.6.3" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "1961983669d57bdfe6c0f3ef8e4c229b5ef751afcc7d87e4271d2f71f6ccfa8b" +checksum = "eebcc3aff044e5944a8fbaf69eb277d11986064cba30c468730e8b9909fb551c" dependencies = [ "byteorder", "dbus-secret-service", "log", "security-framework 2.11.1", "security-framework 3.2.0", - "windows-sys 0.59.0", + "windows-sys 0.60.2", + "zeroize", ] [[package]] @@ -5465,7 +5507,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "07033963ba89ebaf1584d767badaa2e8fcec21aedea6b8c0346d487d49c28667" dependencies = [ "cfg-if", - "windows-targets 0.53.2", + "windows-targets 0.53.3", ] [[package]] @@ -5476,13 +5518,13 @@ checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de" [[package]] name = "libredox" -version = "0.1.6" +version = "0.1.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4488594b9328dee448adb906d8b126d9b7deb7cf5c22161ee591610bb1be83c0" +checksum = "360e552c93fa0e8152ab463bc4c4837fce76a225df11dfaeea66c313de5e61f7" dependencies = [ "bitflags 2.9.1", "libc", - "redox_syscall 0.5.16", + "redox_syscall 0.5.17", ] [[package]] @@ -6540,10 +6582,10 @@ dependencies = [ "opentelemetry", "opentelemetry-proto", "opentelemetry_sdk", - "prost", + "prost 0.13.5", "thiserror 2.0.12", "tokio", - "tonic", + "tonic 0.13.1", "tracing", ] @@ -6555,8 +6597,8 @@ checksum = "2e046fd7660710fe5a05e8748e70d9058dc15c94ba914e7c4faa7c728f0e8ddc" dependencies = [ "opentelemetry", "opentelemetry_sdk", - "prost", - "tonic", + "prost 0.13.5", + "tonic 0.13.1", ] [[package]] @@ -6691,7 +6733,7 @@ checksum = "bc838d2a56b5b1a6c25f55575dfc605fabb63bb2365f6c2353ef9159aa69e4a5" dependencies = [ "cfg-if", "libc", - "redox_syscall 0.5.16", + "redox_syscall 0.5.17", "smallvec", "windows-targets 0.52.6", ] @@ -7253,14 +7295,24 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2796faa41db3ec313a31f7624d9286acf277b52de526150b7e69f3debf891ee5" 
dependencies = [ "bytes", - "prost-derive", + "prost-derive 0.13.5", +] + +[[package]] +name = "prost" +version = "0.14.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7231bd9b3d3d33c86b58adbac74b5ec0ad9f496b19d22801d773636feaa95f3d" +dependencies = [ + "bytes", + "prost-derive 0.14.1", ] [[package]] name = "prost-build" -version = "0.13.5" +version = "0.14.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf" +checksum = "ac6c3320f9abac597dcbc668774ef006702672474aad53c6d596b62e487b40b1" dependencies = [ "heck 0.5.0", "itertools 0.14.0", @@ -7269,8 +7321,10 @@ dependencies = [ "once_cell", "petgraph", "prettyplease", - "prost", + "prost 0.14.1", "prost-types", + "pulldown-cmark", + "pulldown-cmark-to-cmark", "regex", "syn 2.0.104", "tempfile", @@ -7290,12 +7344,25 @@ dependencies = [ ] [[package]] -name = "prost-types" -version = "0.13.5" +name = "prost-derive" +version = "0.14.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "52c2c1bf36ddb1a1c396b3601a3cec27c2462e45f07c386894ec3ccf5332bd16" +checksum = "9120690fafc389a67ba3803df527d0ec9cbbc9cc45e4cc20b332996dfb672425" dependencies = [ - "prost", + "anyhow", + "itertools 0.14.0", + "proc-macro2", + "quote", + "syn 2.0.104", +] + +[[package]] +name = "prost-types" +version = "0.14.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9b4db3d6da204ed77bb26ba83b6122a73aeb2e87e25fbf7ad2e84c4ccbf8f72" +dependencies = [ + "prost 0.14.1", ] [[package]] @@ -7307,6 +7374,26 @@ dependencies = [ "cc", ] +[[package]] +name = "pulldown-cmark" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e8bbe1a966bd2f362681a44f6edce3c2310ac21e4d5067a6e7ec396297a6ea0" +dependencies = [ + "bitflags 2.9.1", + "memchr", + "unicase", +] + +[[package]] +name = "pulldown-cmark-to-cmark" +version = "21.0.0" +source 
= "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e5b6a0769a491a08b31ea5c62494a8f144ee0987d86d670a8af4df1e1b7cde75" +dependencies = [ + "pulldown-cmark", +] + [[package]] name = "quick-xml" version = "0.37.5" @@ -7340,7 +7427,7 @@ dependencies = [ "quinn-proto", "quinn-udp", "rustc-hash 2.1.1", - "rustls 0.23.29", + "rustls 0.23.31", "socket2 0.5.10", "thiserror 2.0.12", "tokio", @@ -7360,7 +7447,7 @@ dependencies = [ "rand 0.9.2", "ring", "rustc-hash 2.1.1", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-pki-types", "slab", "thiserror 2.0.12", @@ -7607,9 +7694,9 @@ dependencies = [ [[package]] name = "redox_syscall" -version = "0.5.16" +version = "0.5.17" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7251471db004e509f4e75a62cca9435365b5ec7bcdff530d612ac7c87c44a792" +checksum = "5407465600fb0548f1442edf71dd20683c6ed326200ace4b1ef0763521bb3b77" dependencies = [ "bitflags 2.9.1", ] @@ -7635,6 +7722,26 @@ dependencies = [ "readme-rustdocifier", ] +[[package]] +name = "ref-cast" +version = "1.0.24" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4a0ae411dbe946a674d89546582cea4ba2bb8defac896622d6496f14c23ba5cf" +dependencies = [ + "ref-cast-impl", +] + +[[package]] +name = "ref-cast-impl" +version = "1.0.24" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1165225c21bff1f3bbce98f5a1f889949bc902d3575308cc7b0de30b4f6d27c7" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.104", +] + [[package]] name = "regex" version = "1.11.1" @@ -7711,7 +7818,7 @@ dependencies = [ "percent-encoding", "pin-project-lite", "quinn", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-pki-types", "serde", "serde_json", @@ -7803,6 +7910,40 @@ dependencies = [ "windows-sys 0.52.0", ] +[[package]] +name = "rmcp" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "824daba0a34f8c5c5392295d381e0800f88fd986ba291699f8785f05fa344c1e" 
+dependencies = [ + "base64 0.22.1", + "chrono", + "futures", + "paste", + "pin-project-lite", + "rmcp-macros", + "schemars", + "serde", + "serde_json", + "thiserror 2.0.12", + "tokio", + "tokio-util", + "tracing", +] + +[[package]] +name = "rmcp-macros" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ad6543c0572a4dbc125c23e6f54963ea9ba002294fd81dd4012c204219b0dcaa" +dependencies = [ + "darling 0.21.0", + "proc-macro2", + "quote", + "serde_json", + "syn 2.0.104", +] + [[package]] name = "rmp" version = "0.8.14" @@ -7954,9 +8095,9 @@ dependencies = [ [[package]] name = "rustc-demangle" -version = "0.1.25" +version = "0.1.26" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "989e6739f80c4ad5b13e0fd7fe89531180375b18520cc8c82080e4dc4035b84f" +checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace" [[package]] name = "rustc-hash" @@ -8024,7 +8165,7 @@ dependencies = [ "rustfs-s3select-query", "rustfs-utils", "rustfs-zip", - "rustls 0.23.29", + "rustls 0.23.31", "s3s", "serde", "serde_json", @@ -8040,7 +8181,7 @@ dependencies = [ "tokio-stream", "tokio-tar", "tokio-util", - "tonic", + "tonic 0.14.0", "tower", "tower-http", "tracing", @@ -8128,7 +8269,7 @@ dependencies = [ "s3s", "serde", "tokio", - "tonic", + "tonic 0.14.0", "uuid", ] @@ -8137,8 +8278,6 @@ name = "rustfs-config" version = "0.0.5" dependencies = [ "const-str", - "serde", - "serde_json", ] [[package]] @@ -8211,7 +8350,7 @@ dependencies = [ "rustfs-signer", "rustfs-utils", "rustfs-workers", - "rustls 0.23.29", + "rustls 0.23.31", "s3s", "serde", "serde_json", @@ -8226,7 +8365,7 @@ dependencies = [ "tokio", "tokio-stream", "tokio-util", - "tonic", + "tonic 0.14.0", "tower", "tracing", "url", @@ -8316,7 +8455,7 @@ dependencies = [ "serde_json", "thiserror 2.0.12", "tokio", - "tonic", + "tonic 0.14.0", "tracing", "url", "uuid", @@ -8334,6 +8473,23 @@ dependencies = [ "time", ] +[[package]] +name = "rustfs-mcp" 
+version = "0.0.5" +dependencies = [ + "anyhow", + "aws-sdk-s3", + "clap", + "mime_guess", + "rmcp", + "schemars", + "serde", + "serde_json", + "tokio", + "tracing", + "tracing-subscriber", +] + [[package]] name = "rustfs-notify" version = "0.0.5" @@ -8343,6 +8499,7 @@ dependencies = [ "chrono", "dashmap 6.1.0", "form_urlencoded", + "futures", "once_cell", "quick-xml 0.38.0", "reqwest", @@ -8420,10 +8577,11 @@ name = "rustfs-protos" version = "0.0.5" dependencies = [ "flatbuffers 25.2.10", - "prost", + "prost 0.14.1", "rustfs-common", - "tonic", - "tonic-build", + "tonic 0.14.0", + "tonic-prost", + "tonic-prost-build", ] [[package]] @@ -8556,7 +8714,7 @@ dependencies = [ "rand 0.9.2", "regex", "rustfs-config", - "rustls 0.23.29", + "rustls 0.23.31", "rustls-pemfile 2.2.0", "rustls-pki-types", "s3s", @@ -8647,9 +8805,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.29" +version = "0.23.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2491382039b29b9b11ff08b76ff6c97cf287671dbb74f0be44bda389fffe9bd1" +checksum = "c0ebcbd2f03de0fc1122ad9bb24b127a5a6cd51d72604a3f3c50ac459762b6cc" dependencies = [ "aws-lc-rs", "log", @@ -8849,6 +9007,32 @@ dependencies = [ "windows-sys 0.59.0", ] +[[package]] +name = "schemars" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "82d20c4491bc164fa2f6c5d44565947a52ad80b9505d8e36f8d54c27c739fcd0" +dependencies = [ + "chrono", + "dyn-clone", + "ref-cast", + "schemars_derive", + "serde", + "serde_json", +] + +[[package]] +name = "schemars_derive" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "33d020396d1d138dc19f1165df7545479dcd58d93810dc5d646a16e55abefa80" +dependencies = [ + "proc-macro2", + "quote", + "serde_derive_internals", + "syn 2.0.104", +] + [[package]] name = "scoped-tls" version = "1.0.1" @@ -9035,6 +9219,17 @@ dependencies = [ "syn 2.0.104", ] +[[package]] +name = 
"serde_derive_internals" +version = "0.29.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.104", +] + [[package]] name = "serde_fmt" version = "1.0.3" @@ -10166,7 +10361,7 @@ version = "0.26.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b" dependencies = [ - "rustls 0.23.29", + "rustls 0.23.31", "tokio", ] @@ -10291,6 +10486,33 @@ name = "tonic" version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7e581ba15a835f4d9ea06c55ab1bd4dce26fc53752c69a04aac00703bfb49ba9" +dependencies = [ + "async-trait", + "base64 0.22.1", + "bytes", + "flate2", + "http 1.3.1", + "http-body 1.0.1", + "http-body-util", + "hyper 1.6.0", + "hyper-timeout", + "hyper-util", + "percent-encoding", + "pin-project", + "prost 0.13.5", + "tokio", + "tokio-stream", + "tower", + "tower-layer", + "tower-service", + "tracing", +] + +[[package]] +name = "tonic" +version = "0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "308e1db96abdccdf0a9150fb69112bf6ea72640e0bd834ef0c4a618ccc8c8ddc" dependencies = [ "async-trait", "axum", @@ -10306,8 +10528,8 @@ dependencies = [ "hyper-util", "percent-encoding", "pin-project", - "prost", - "socket2 0.5.10", + "socket2 0.6.0", + "sync_wrapper", "tokio", "tokio-stream", "tower", @@ -10318,9 +10540,32 @@ dependencies = [ [[package]] name = "tonic-build" -version = "0.13.1" +version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eac6f67be712d12f0b41328db3137e0d0757645d8904b4cb7d51cd9c2279e847" +checksum = "18262cdd13dec66e8e3f2e3fe535e4b2cc706fab444a7d3678d75d8ac2557329" +dependencies = [ + "prettyplease", + "proc-macro2", + "quote", + "syn 2.0.104", +] + +[[package]] +name = "tonic-prost" +version = 
"0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2d8b5b7a44512c59f5ad45e0c40e53263cbbf4426d74fe6b569e04f1d4206e9c" +dependencies = [ + "bytes", + "prost 0.14.1", + "tonic 0.14.0", +] + +[[package]] +name = "tonic-prost-build" +version = "0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "114cca66d757d72422ef8cccf8be3065321860ac9fa4be73aab37a8a20a9a805" dependencies = [ "prettyplease", "proc-macro2", @@ -10328,6 +10573,8 @@ dependencies = [ "prost-types", "quote", "syn 2.0.104", + "tempfile", + "tonic-build", ] [[package]] @@ -10990,13 +11237,13 @@ dependencies = [ [[package]] name = "wayland-backend" -version = "0.3.10" +version = "0.3.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fe770181423e5fc79d3e2a7f4410b7799d5aab1de4372853de3c6aa13ca24121" +checksum = "673a33c33048a5ade91a6b139580fa174e19fb0d23f396dca9fa15f2e1e49b35" dependencies = [ "cc", "downcast-rs", - "rustix 0.38.44", + "rustix 1.0.8", "scoped-tls", "smallvec", "wayland-sys", @@ -11004,21 +11251,21 @@ dependencies = [ [[package]] name = "wayland-client" -version = "0.31.10" +version = "0.31.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "978fa7c67b0847dbd6a9f350ca2569174974cd4082737054dbb7fbb79d7d9a61" +checksum = "c66a47e840dc20793f2264eb4b3e4ecb4b75d91c0dd4af04b456128e0bdd449d" dependencies = [ "bitflags 2.9.1", - "rustix 0.38.44", + "rustix 1.0.8", "wayland-backend", "wayland-scanner", ] [[package]] name = "wayland-protocols" -version = "0.32.8" +version = "0.32.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "779075454e1e9a521794fed15886323ea0feda3f8b0fc1390f5398141310422a" +checksum = "efa790ed75fbfd71283bd2521a1cfdc022aabcc28bdcff00851f9e4ae88d9901" dependencies = [ "bitflags 2.9.1", "wayland-backend", @@ -11028,9 +11275,9 @@ dependencies = [ [[package]] name = "wayland-scanner" -version = "0.31.6" +version = "0.31.7" 
source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "896fdafd5d28145fce7958917d69f2fd44469b1d4e861cb5961bcbeebc6d1484" +checksum = "54cb1e9dc49da91950bdfd8b848c49330536d9d1fb03d4bfec8cae50caa50ae3" dependencies = [ "proc-macro2", "quick-xml 0.37.5", @@ -11039,9 +11286,9 @@ dependencies = [ [[package]] name = "wayland-sys" -version = "0.31.6" +version = "0.31.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dbcebb399c77d5aa9fa5db874806ee7b4eba4e73650948e8f93963f128896615" +checksum = "34949b42822155826b41db8e5d0c1be3a2bd296c747577a43a3e6daefc296142" dependencies = [ "dlib", "log", @@ -11445,7 +11692,7 @@ version = "0.60.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb" dependencies = [ - "windows-targets 0.53.2", + "windows-targets 0.53.3", ] [[package]] @@ -11496,10 +11743,11 @@ dependencies = [ [[package]] name = "windows-targets" -version = "0.53.2" +version = "0.53.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c66f69fcc9ce11da9966ddb31a40968cad001c5bedeb5c2b82ede4253ab48aef" +checksum = "d5fe6031c4041849d7c496a8ded650796e7b6ecc19df1a431c1a363342e5dc91" dependencies = [ + "windows-link", "windows_aarch64_gnullvm 0.53.0", "windows_aarch64_msvc 0.53.0", "windows_i686_gnu 0.53.0", @@ -11741,7 +11989,7 @@ version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a76ff259533532054cfbaefb115c613203c73707017459206380f03b3b3f266e" dependencies = [ - "darling", + "darling 0.20.11", "proc-macro2", "quote", "syn 2.0.104", diff --git a/Cargo.toml b/Cargo.toml index 2dc9d961..0172422a 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -90,6 +90,7 @@ rustfs-checksums = { path = "crates/checksums", version = "0.0.5" } rustfs-workers = { path = "crates/workers", version = "0.0.5" } rustfs-mcp = { path = "crates/mcp", version = "0.0.5" } aes-gcm = { version = 
"0.10.3", features = ["std"] } +anyhow = "1.0.98" arc-swap = "1.7.1" argon2 = { version = "0.5.3", features = ["std"] } atoi = "2.0.0" @@ -98,7 +99,8 @@ async-recursion = "1.1.1" async-trait = "0.1.88" async-compression = { version = "0.4.0" } atomic_enum = "0.3.0" -aws-sdk-s3 = "1.96.0" +aws-config = { version = "1.8.3" } +aws-sdk-s3 = "1.100.0" axum = "0.8.4" axum-extra = "0.10.1" axum-server = { version = "0.7.2", features = ["tls-rustls"] } @@ -108,11 +110,13 @@ brotli = "8.0.1" bytes = { version = "1.10.1", features = ["serde"] } bytesize = "2.0.1" byteorder = "1.5.0" +bytes-utils = "0.1.4" cfg-if = "1.0.1" +crc-fast = "1.3.0" chacha20poly1305 = { version = "0.10.1" } chrono = { version = "0.4.41", features = ["serde"] } -clap = { version = "4.5.41", features = ["derive", "env"] } -const-str = { version = "0.6.3", features = ["std", "proc"] } +clap = { version = "4.5.42", features = ["derive", "env"] } +const-str = { version = "0.6.4", features = ["std", "proc"] } crc32fast = "1.5.0" criterion = { version = "0.5", features = ["html_reports"] } dashmap = "6.1.0" @@ -145,7 +149,7 @@ http-body = "1.0.1" humantime = "2.2.0" ipnetwork = { version = "0.21.1", features = ["serde"] } jsonwebtoken = "9.3.1" -keyring = { version = "3.6.2", features = [ +keyring = { version = "3.6.3", features = [ "apple-native", "windows-native", "sync-secret-service", @@ -186,7 +190,8 @@ blake3 = { version = "1.8.2" } pbkdf2 = "0.12.2" percent-encoding = "2.3.1" pin-project-lite = "0.2.16" -prost = "0.13.5" +prost = "0.14.1" +pretty_assertions = "1.4.1" quick-xml = "0.38.0" rand = "0.9.2" rdkafka = { version = "0.38.0", features = ["tokio"] } @@ -205,6 +210,7 @@ rfd = { version = "0.15.4", default-features = false, features = [ "xdg-portal", "tokio", ] } +rmcp = { version = "0.3.1" } rmp = "0.8.14" rmp-serde = "1.3.0" rsa = "0.9.8" @@ -212,16 +218,18 @@ rumqttc = { version = "0.24" } rust-embed = { version = "8.7.2" } rust-i18n = { version = "3.1.5" } rustfs-rsc = "2025.506.1" -rustls 
= { version = "0.23.29" } +rustls = { version = "0.23.31" } rustls-pki-types = "1.12.0" rustls-pemfile = "2.2.0" s3s = { version = "0.12.0-minio-preview.2" } -shadow-rs = { version = "1.2.0", default-features = false } +schemars = "1.0.4" serde = { version = "1.0.219", features = ["derive"] } serde_json = { version = "1.0.141", features = ["raw_value"] } serde_urlencoded = "0.7.1" +serial_test = "3.2.0" sha1 = "0.10.6" sha2 = "0.10.9" +shadow-rs = { version = "1.2.0", default-features = false } siphasher = "1.0.1" smallvec = { version = "1.15.1", features = ["serde"] } snafu = "0.8.6" @@ -241,22 +249,24 @@ time = { version = "0.3.41", features = [ "macros", "serde", ] } -tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] } +tokio = { version = "1.47.0", features = ["fs", "rt-multi-thread"] } tokio-rustls = { version = "0.26.2", default-features = false } tokio-stream = { version = "0.1.17" } tokio-tar = "0.3.1" tokio-test = "0.4.4" tokio-util = { version = "0.7.15", features = ["io", "compat"] } -tonic = { version = "0.13.1", features = ["gzip"] } -tonic-build = { version = "0.13.1" } +tonic = { version = "0.14.0", features = ["gzip"] } +tonic-prost = { version = "0.14.0" } +tonic-prost-build = { version = "0.14.0" } tower = { version = "0.5.2", features = ["timeout"] } tower-http = { version = "0.6.6", features = ["cors"] } tracing = "0.1.41" +tracing-appender = "0.2.3" tracing-core = "0.1.34" tracing-error = "0.2.1" -tracing-subscriber = { version = "0.3.19", features = ["env-filter", "time"] } -tracing-appender = "0.2.3" tracing-opentelemetry = "0.31.0" +tracing-subscriber = { version = "0.3.19", features = ["env-filter", "time"] } +tracing-test = "0.2.5" transform-stream = "0.3.1" url = "2.5.4" urlencoding = "2.1.3" @@ -270,7 +280,10 @@ winapi = { version = "0.3.9" } xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] } zip = "2.4.2" zstd = "0.13.3" -anyhow = "1.0.98" + + +[workspace.metadata.cargo-shear] +ignored = ["rustfs", 
"rust-i18n"] [profile.wasm-dev] inherits = "dev" diff --git a/crates/checksums/Cargo.toml b/crates/checksums/Cargo.toml index 71a6e547..09139f04 100644 --- a/crates/checksums/Cargo.toml +++ b/crates/checksums/Cargo.toml @@ -21,13 +21,13 @@ rust-version.workspace = true version.workspace = true homepage.workspace = true description = "Checksum calculation and verification callbacks for HTTP request and response bodies sent by service clients generated by RustFS, ensuring data integrity and authenticity." -keywords = ["checksum-calculation", "verification", "integrity", "authenticity", "rustfs", "Minio"] -categories = ["web-programming", "development-tools", "checksum"] +keywords = ["checksum-calculation", "verification", "integrity", "authenticity", "rustfs"] +categories = ["web-programming", "development-tools", "network-programming"] documentation = "https://docs.rs/rustfs-signer/latest/rustfs_checksum/" [dependencies] bytes = { workspace = true } -crc-fast = "1.3.0" +crc-fast = { workspace = true } hex = { workspace = true } http = { workspace = true } http-body = { workspace = true } @@ -39,10 +39,7 @@ sha2 = { workspace = true } tracing = { workspace = true } [dev-dependencies] -bytes-utils = "0.1.2" -pretty_assertions = "1.3" -tracing-test = "0.2.1" - -[dev-dependencies.tokio] -version = "1.23.1" -features = ["macros", "rt"] +bytes-utils = { workspace = true } +pretty_assertions = { workspace = true } +tracing-test = { workspace = true } +tokio = { workspace = true, features = ["macros", "rt"] } \ No newline at end of file diff --git a/crates/config/Cargo.toml b/crates/config/Cargo.toml index 70524c95..c9095b31 100644 --- a/crates/config/Cargo.toml +++ b/crates/config/Cargo.toml @@ -26,9 +26,6 @@ categories = ["web-programming", "development-tools", "config"] [dependencies] const-str = { workspace = true, optional = true } -serde = { workspace = true } -serde_json = { workspace = true } - [lints] workspace = true diff --git a/crates/config/src/constants/app.rs 
b/crates/config/src/constants/app.rs index b2a74e91..b890defd 100644 --- a/crates/config/src/constants/app.rs +++ b/crates/config/src/constants/app.rs @@ -15,9 +15,9 @@ use const_str::concat; /// Application name -/// Default value: RustFs +/// Default value: RustFS /// Environment variable: RUSTFS_APP_NAME -pub const APP_NAME: &str = "RustFs"; +pub const APP_NAME: &str = "RustFS"; /// Application version /// Default value: 1.0.0 /// Environment variable: RUSTFS_VERSION @@ -71,6 +71,16 @@ pub const DEFAULT_ACCESS_KEY: &str = "rustfsadmin"; /// Example: --secret-key rustfsadmin pub const DEFAULT_SECRET_KEY: &str = "rustfsadmin"; +/// Default console enable +/// This is the default value for the console server. +/// It is used to enable or disable the console server. +/// Default value: true +/// Environment variable: RUSTFS_CONSOLE_ENABLE +/// Command line argument: --console-enable +/// Example: RUSTFS_CONSOLE_ENABLE=true +/// Example: --console-enable true +pub const DEFAULT_CONSOLE_ENABLE: bool = true; + /// Default OBS configuration endpoint /// Environment variable: DEFAULT_OBS_ENDPOINT /// Command line argument: --obs-endpoint @@ -126,28 +136,28 @@ pub const DEFAULT_SINK_FILE_LOG_FILE: &str = concat!(DEFAULT_LOG_FILENAME, "-sin /// This is the default log directory for rustfs. /// It is used to store the logs of the application. /// Default value: logs -/// Environment variable: RUSTFS_OBSERVABILITY_LOG_DIRECTORY -pub const DEFAULT_LOG_DIR: &str = "/logs"; +/// Environment variable: RUSTFS_LOG_DIRECTORY +pub const DEFAULT_LOG_DIR: &str = "logs"; /// Default log rotation size mb for rustfs /// This is the default log rotation size for rustfs. /// It is used to rotate the logs of the application. 
/// Default value: 100 MB -/// Environment variable: RUSTFS_OBSERVABILITY_LOG_ROTATION_SIZE_MB +/// Environment variable: RUSTFS_OBS_LOG_ROTATION_SIZE_MB pub const DEFAULT_LOG_ROTATION_SIZE_MB: u64 = 100; /// Default log rotation time for rustfs /// This is the default log rotation time for rustfs. /// It is used to rotate the logs of the application. /// Default value: hour, eg: day,hour,minute,second -/// Environment variable: RUSTFS_OBSERVABILITY_LOG_ROTATION_TIME +/// Environment variable: RUSTFS_OBS_LOG_ROTATION_TIME pub const DEFAULT_LOG_ROTATION_TIME: &str = "day"; /// Default log keep files for rustfs /// This is the default log keep files for rustfs. /// It is used to keep the logs of the application. /// Default value: 30 -/// Environment variable: RUSTFS_OBSERVABILITY_LOG_KEEP_FILES +/// Environment variable: RUSTFS_OBS_LOG_KEEP_FILES pub const DEFAULT_LOG_KEEP_FILES: u16 = 30; #[cfg(test)] @@ -157,7 +167,7 @@ mod tests { #[test] fn test_app_basic_constants() { // Test application basic constants - assert_eq!(APP_NAME, "RustFs"); + assert_eq!(APP_NAME, "RustFS"); assert!(!APP_NAME.contains(' '), "App name should not contain spaces"); assert_eq!(VERSION, "0.0.1"); diff --git a/crates/config/src/notify/mod.rs b/crates/config/src/notify/mod.rs index ca0e24b9..09d8f6f6 100644 --- a/crates/config/src/notify/mod.rs +++ b/crates/config/src/notify/mod.rs @@ -27,7 +27,15 @@ pub const DEFAULT_TARGET: &str = "1"; pub const NOTIFY_PREFIX: &str = "notify"; -pub const NOTIFY_ROUTE_PREFIX: &str = "notify_"; +pub const NOTIFY_ROUTE_PREFIX: &str = const_str::concat!(NOTIFY_PREFIX, "_"); + +/// Standard config keys and values. 
+pub const ENABLE_KEY: &str = "enable"; +pub const COMMENT_KEY: &str = "comment"; + +/// Enable values +pub const ENABLE_ON: &str = "on"; +pub const ENABLE_OFF: &str = "off"; #[allow(dead_code)] pub const NOTIFY_SUB_SYSTEMS: &[&str] = &[NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS]; diff --git a/crates/config/src/notify/mqtt.rs b/crates/config/src/notify/mqtt.rs index b949f567..ba5ed6b1 100644 --- a/crates/config/src/notify/mqtt.rs +++ b/crates/config/src/notify/mqtt.rs @@ -12,6 +12,8 @@ // See the License for the specific language governing permissions and // limitations under the License. +use crate::notify::{COMMENT_KEY, ENABLE_KEY}; + // MQTT Keys pub const MQTT_BROKER: &str = "broker"; pub const MQTT_TOPIC: &str = "topic"; @@ -23,6 +25,21 @@ pub const MQTT_KEEP_ALIVE_INTERVAL: &str = "keep_alive_interval"; pub const MQTT_QUEUE_DIR: &str = "queue_dir"; pub const MQTT_QUEUE_LIMIT: &str = "queue_limit"; +/// A list of all valid configuration keys for an MQTT target. +pub const NOTIFY_MQTT_KEYS: &[&str] = &[ + ENABLE_KEY, // "enable" is a common key + MQTT_BROKER, + MQTT_TOPIC, + MQTT_QOS, + MQTT_USERNAME, + MQTT_PASSWORD, + MQTT_RECONNECT_INTERVAL, + MQTT_KEEP_ALIVE_INTERVAL, + MQTT_QUEUE_DIR, + MQTT_QUEUE_LIMIT, + COMMENT_KEY, +]; + // MQTT Environment Variables pub const ENV_MQTT_ENABLE: &str = "RUSTFS_NOTIFY_MQTT_ENABLE"; pub const ENV_MQTT_BROKER: &str = "RUSTFS_NOTIFY_MQTT_BROKER"; @@ -34,3 +51,16 @@ pub const ENV_MQTT_RECONNECT_INTERVAL: &str = "RUSTFS_NOTIFY_MQTT_RECONNECT_INTE pub const ENV_MQTT_KEEP_ALIVE_INTERVAL: &str = "RUSTFS_NOTIFY_MQTT_KEEP_ALIVE_INTERVAL"; pub const ENV_MQTT_QUEUE_DIR: &str = "RUSTFS_NOTIFY_MQTT_QUEUE_DIR"; pub const ENV_MQTT_QUEUE_LIMIT: &str = "RUSTFS_NOTIFY_MQTT_QUEUE_LIMIT"; + +pub const ENV_NOTIFY_MQTT_KEYS: &[&str; 10] = &[ + ENV_MQTT_ENABLE, + ENV_MQTT_BROKER, + ENV_MQTT_TOPIC, + ENV_MQTT_QOS, + ENV_MQTT_USERNAME, + ENV_MQTT_PASSWORD, + ENV_MQTT_RECONNECT_INTERVAL, + ENV_MQTT_KEEP_ALIVE_INTERVAL, + ENV_MQTT_QUEUE_DIR, + 
ENV_MQTT_QUEUE_LIMIT, +]; diff --git a/crates/config/src/notify/webhook.rs b/crates/config/src/notify/webhook.rs index 0e6770b6..b4fefb5f 100644 --- a/crates/config/src/notify/webhook.rs +++ b/crates/config/src/notify/webhook.rs @@ -12,6 +12,8 @@ // See the License for the specific language governing permissions and // limitations under the License. +use crate::notify::{COMMENT_KEY, ENABLE_KEY}; + // Webhook Keys pub const WEBHOOK_ENDPOINT: &str = "endpoint"; pub const WEBHOOK_AUTH_TOKEN: &str = "auth_token"; @@ -20,6 +22,18 @@ pub const WEBHOOK_QUEUE_DIR: &str = "queue_dir"; pub const WEBHOOK_CLIENT_CERT: &str = "client_cert"; pub const WEBHOOK_CLIENT_KEY: &str = "client_key"; +/// A list of all valid configuration keys for a webhook target. +pub const NOTIFY_WEBHOOK_KEYS: &[&str] = &[ + ENABLE_KEY, // "enable" is a common key + WEBHOOK_ENDPOINT, + WEBHOOK_AUTH_TOKEN, + WEBHOOK_QUEUE_LIMIT, + WEBHOOK_QUEUE_DIR, + WEBHOOK_CLIENT_CERT, + WEBHOOK_CLIENT_KEY, + COMMENT_KEY, +]; + // Webhook Environment Variables pub const ENV_WEBHOOK_ENABLE: &str = "RUSTFS_NOTIFY_WEBHOOK_ENABLE"; pub const ENV_WEBHOOK_ENDPOINT: &str = "RUSTFS_NOTIFY_WEBHOOK_ENDPOINT"; @@ -28,3 +42,13 @@ pub const ENV_WEBHOOK_QUEUE_LIMIT: &str = "RUSTFS_NOTIFY_WEBHOOK_QUEUE_LIMIT"; pub const ENV_WEBHOOK_QUEUE_DIR: &str = "RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR"; pub const ENV_WEBHOOK_CLIENT_CERT: &str = "RUSTFS_NOTIFY_WEBHOOK_CLIENT_CERT"; pub const ENV_WEBHOOK_CLIENT_KEY: &str = "RUSTFS_NOTIFY_WEBHOOK_CLIENT_KEY"; + +pub const ENV_NOTIFY_WEBHOOK_KEYS: &[&str; 7] = &[ + ENV_WEBHOOK_ENABLE, + ENV_WEBHOOK_ENDPOINT, + ENV_WEBHOOK_AUTH_TOKEN, + ENV_WEBHOOK_QUEUE_LIMIT, + ENV_WEBHOOK_QUEUE_DIR, + ENV_WEBHOOK_CLIENT_CERT, + ENV_WEBHOOK_CLIENT_KEY, +]; diff --git a/crates/config/src/observability/config.rs b/crates/config/src/observability/config.rs index d7b836f3..4dfb5148 100644 --- a/crates/config/src/observability/config.rs +++ b/crates/config/src/observability/config.rs @@ -12,279 +12,24 @@ // See the License 
for the specific language governing permissions and // limitations under the License. -use crate::observability::logger::LoggerConfig; -use crate::observability::otel::OtelConfig; -use crate::observability::sink::SinkConfig; -use serde::{Deserialize, Serialize}; +// Observability Keys -/// Observability configuration -#[derive(Debug, Deserialize, Serialize, Clone)] -pub struct ObservabilityConfig { - pub otel: OtelConfig, - pub sinks: Vec<SinkConfig>, - pub logger: Option<LoggerConfig>, -} +pub const ENV_OBS_ENDPOINT: &str = "RUSTFS_OBS_ENDPOINT"; +pub const ENV_OBS_USE_STDOUT: &str = "RUSTFS_OBS_USE_STDOUT"; +pub const ENV_OBS_SAMPLE_RATIO: &str = "RUSTFS_OBS_SAMPLE_RATIO"; +pub const ENV_OBS_METER_INTERVAL: &str = "RUSTFS_OBS_METER_INTERVAL"; +pub const ENV_OBS_SERVICE_NAME: &str = "RUSTFS_OBS_SERVICE_NAME"; +pub const ENV_OBS_SERVICE_VERSION: &str = "RUSTFS_OBS_SERVICE_VERSION"; +pub const ENV_OBS_ENVIRONMENT: &str = "RUSTFS_OBS_ENVIRONMENT"; +pub const ENV_OBS_LOGGER_LEVEL: &str = "RUSTFS_OBS_LOGGER_LEVEL"; +pub const ENV_OBS_LOCAL_LOGGING_ENABLED: &str = "RUSTFS_OBS_LOCAL_LOGGING_ENABLED"; +pub const ENV_OBS_LOG_DIRECTORY: &str = "RUSTFS_OBS_LOG_DIRECTORY"; +pub const ENV_OBS_LOG_FILENAME: &str = "RUSTFS_OBS_LOG_FILENAME"; +pub const ENV_OBS_LOG_ROTATION_SIZE_MB: &str = "RUSTFS_OBS_LOG_ROTATION_SIZE_MB"; +pub const ENV_OBS_LOG_ROTATION_TIME: &str = "RUSTFS_OBS_LOG_ROTATION_TIME"; +pub const ENV_OBS_LOG_KEEP_FILES: &str = "RUSTFS_OBS_LOG_KEEP_FILES"; -impl ObservabilityConfig { - pub fn new() -> Self { - Self { - otel: OtelConfig::new(), - sinks: vec![SinkConfig::new()], - logger: Some(LoggerConfig::new()), - } - } -} +pub const ENV_AUDIT_LOGGER_QUEUE_CAPACITY: &str = "RUSTFS_AUDIT_LOGGER_QUEUE_CAPACITY"; -impl Default for ObservabilityConfig { - fn default() -> Self { - Self::new() - } -} - -#[cfg(test)] -mod tests { - use super::*; - - #[test] - fn test_observability_config_new() { - let config = ObservabilityConfig::new(); - - // Verify OTEL config is initialized -
assert!(config.otel.use_stdout.is_some(), "OTEL use_stdout should be configured"); - assert!(config.otel.sample_ratio.is_some(), "OTEL sample_ratio should be configured"); - assert!(config.otel.meter_interval.is_some(), "OTEL meter_interval should be configured"); - assert!(config.otel.service_name.is_some(), "OTEL service_name should be configured"); - assert!(config.otel.service_version.is_some(), "OTEL service_version should be configured"); - assert!(config.otel.environment.is_some(), "OTEL environment should be configured"); - assert!(config.otel.logger_level.is_some(), "OTEL logger_level should be configured"); - - // Verify sinks are initialized - assert!(!config.sinks.is_empty(), "Sinks should not be empty"); - assert_eq!(config.sinks.len(), 1, "Should have exactly one default sink"); - - // Verify logger is initialized - assert!(config.logger.is_some(), "Logger should be configured"); - } - - #[test] - fn test_observability_config_default() { - let config = ObservabilityConfig::default(); - let new_config = ObservabilityConfig::new(); - - // Default should be equivalent to new() - assert_eq!(config.sinks.len(), new_config.sinks.len()); - assert_eq!(config.logger.is_some(), new_config.logger.is_some()); - - // OTEL configs should be equivalent - assert_eq!(config.otel.use_stdout, new_config.otel.use_stdout); - assert_eq!(config.otel.sample_ratio, new_config.otel.sample_ratio); - assert_eq!(config.otel.meter_interval, new_config.otel.meter_interval); - assert_eq!(config.otel.service_name, new_config.otel.service_name); - assert_eq!(config.otel.service_version, new_config.otel.service_version); - assert_eq!(config.otel.environment, new_config.otel.environment); - assert_eq!(config.otel.logger_level, new_config.otel.logger_level); - } - - #[test] - fn test_observability_config_otel_defaults() { - let config = ObservabilityConfig::new(); - - // Test OTEL default values - if let Some(_use_stdout) = config.otel.use_stdout { - // Test boolean values - any boolean 
value is valid - } - - if let Some(sample_ratio) = config.otel.sample_ratio { - assert!((0.0..=1.0).contains(&sample_ratio), "Sample ratio should be between 0.0 and 1.0"); - } - - if let Some(meter_interval) = config.otel.meter_interval { - assert!(meter_interval > 0, "Meter interval should be positive"); - assert!(meter_interval <= 3600, "Meter interval should be reasonable (≤ 1 hour)"); - } - - if let Some(service_name) = &config.otel.service_name { - assert!(!service_name.is_empty(), "Service name should not be empty"); - assert!(!service_name.contains(' '), "Service name should not contain spaces"); - } - - if let Some(service_version) = &config.otel.service_version { - assert!(!service_version.is_empty(), "Service version should not be empty"); - } - - if let Some(environment) = &config.otel.environment { - assert!(!environment.is_empty(), "Environment should not be empty"); - assert!( - ["development", "staging", "production", "test"].contains(&environment.as_str()), - "Environment should be a standard environment name" - ); - } - - if let Some(logger_level) = &config.otel.logger_level { - assert!( - ["trace", "debug", "info", "warn", "error"].contains(&logger_level.as_str()), - "Logger level should be a valid tracing level" - ); - } - } - - #[test] - fn test_observability_config_sinks() { - let config = ObservabilityConfig::new(); - - // Test default sink configuration - assert_eq!(config.sinks.len(), 1, "Should have exactly one default sink"); - - let _default_sink = &config.sinks[0]; - // Test that the sink has valid configuration - // Note: We can't test specific values without knowing SinkConfig implementation - // but we can test that it's properly initialized - - // Test that we can add more sinks - let mut config_mut = config.clone(); - config_mut.sinks.push(SinkConfig::new()); - assert_eq!(config_mut.sinks.len(), 2, "Should be able to add more sinks"); - } - - #[test] - fn test_observability_config_logger() { - let config = 
ObservabilityConfig::new(); - - // Test logger configuration - assert!(config.logger.is_some(), "Logger should be configured by default"); - - if let Some(_logger) = &config.logger { - // Test that logger has valid configuration - // Note: We can't test specific values without knowing LoggerConfig implementation - // but we can test that it's properly initialized - } - - // Test that logger can be disabled - let mut config_mut = config.clone(); - config_mut.logger = None; - assert!(config_mut.logger.is_none(), "Logger should be able to be disabled"); - } - - #[test] - fn test_observability_config_serialization() { - let config = ObservabilityConfig::new(); - - // Test serialization to JSON - let json_result = serde_json::to_string(&config); - assert!(json_result.is_ok(), "Config should be serializable to JSON"); - - let json_str = json_result.unwrap(); - assert!(!json_str.is_empty(), "Serialized JSON should not be empty"); - assert!(json_str.contains("otel"), "JSON should contain otel configuration"); - assert!(json_str.contains("sinks"), "JSON should contain sinks configuration"); - assert!(json_str.contains("logger"), "JSON should contain logger configuration"); - - // Test deserialization from JSON - let deserialized_result: Result<ObservabilityConfig, serde_json::Error> = serde_json::from_str(&json_str); - assert!(deserialized_result.is_ok(), "Config should be deserializable from JSON"); - - let deserialized_config = deserialized_result.unwrap(); - assert_eq!(deserialized_config.sinks.len(), config.sinks.len()); - assert_eq!(deserialized_config.logger.is_some(), config.logger.is_some()); - } - - #[test] - fn test_observability_config_debug_format() { - let config = ObservabilityConfig::new(); - - let debug_str = format!("{config:?}"); - assert!(!debug_str.is_empty(), "Debug output should not be empty"); - assert!(debug_str.contains("ObservabilityConfig"), "Debug output should contain struct name"); - assert!(debug_str.contains("otel"), "Debug output should contain otel field"); -
assert!(debug_str.contains("sinks"), "Debug output should contain sinks field"); - assert!(debug_str.contains("logger"), "Debug output should contain logger field"); - } - - #[test] - fn test_observability_config_clone() { - let config = ObservabilityConfig::new(); - let cloned_config = config.clone(); - - // Test that clone creates an independent copy - assert_eq!(cloned_config.sinks.len(), config.sinks.len()); - assert_eq!(cloned_config.logger.is_some(), config.logger.is_some()); - assert_eq!(cloned_config.otel.endpoint, config.otel.endpoint); - assert_eq!(cloned_config.otel.use_stdout, config.otel.use_stdout); - assert_eq!(cloned_config.otel.sample_ratio, config.otel.sample_ratio); - assert_eq!(cloned_config.otel.meter_interval, config.otel.meter_interval); - assert_eq!(cloned_config.otel.service_name, config.otel.service_name); - assert_eq!(cloned_config.otel.service_version, config.otel.service_version); - assert_eq!(cloned_config.otel.environment, config.otel.environment); - assert_eq!(cloned_config.otel.logger_level, config.otel.logger_level); - } - - #[test] - fn test_observability_config_modification() { - let mut config = ObservabilityConfig::new(); - - // Test modifying OTEL endpoint - let original_endpoint = config.otel.endpoint.clone(); - config.otel.endpoint = "http://localhost:4317".to_string(); - assert_ne!(config.otel.endpoint, original_endpoint); - assert_eq!(config.otel.endpoint, "http://localhost:4317"); - - // Test modifying sinks - let original_sinks_len = config.sinks.len(); - config.sinks.push(SinkConfig::new()); - assert_eq!(config.sinks.len(), original_sinks_len + 1); - - // Test disabling logger - config.logger = None; - assert!(config.logger.is_none()); - } - - #[test] - fn test_observability_config_edge_cases() { - // Test with empty sinks - let mut config = ObservabilityConfig::new(); - config.sinks.clear(); - assert!(config.sinks.is_empty(), "Sinks should be empty after clearing"); - - // Test serialization with empty sinks - let 
json_result = serde_json::to_string(&config); - assert!(json_result.is_ok(), "Config with empty sinks should be serializable"); - - // Test with no logger - config.logger = None; - let json_result = serde_json::to_string(&config); - assert!(json_result.is_ok(), "Config with no logger should be serializable"); - } - - #[test] - fn test_observability_config_memory_efficiency() { - let config = ObservabilityConfig::new(); - - // Test that config doesn't use excessive memory - let config_size = std::mem::size_of_val(&config); - assert!(config_size < 5000, "Config should not use excessive memory"); - - // Test that endpoint string is not excessively long - assert!(config.otel.endpoint.len() < 1000, "Endpoint should not be excessively long"); - - // Test that collections are reasonably sized - assert!(config.sinks.len() < 100, "Sinks collection should be reasonably sized"); - } - - #[test] - fn test_observability_config_consistency() { - // Create multiple configs and ensure they're consistent - let config1 = ObservabilityConfig::new(); - let config2 = ObservabilityConfig::new(); - - // Both configs should have the same default structure - assert_eq!(config1.sinks.len(), config2.sinks.len()); - assert_eq!(config1.logger.is_some(), config2.logger.is_some()); - assert_eq!(config1.otel.use_stdout, config2.otel.use_stdout); - assert_eq!(config1.otel.sample_ratio, config2.otel.sample_ratio); - assert_eq!(config1.otel.meter_interval, config2.otel.meter_interval); - assert_eq!(config1.otel.service_name, config2.otel.service_name); - assert_eq!(config1.otel.service_version, config2.otel.service_version); - assert_eq!(config1.otel.environment, config2.otel.environment); - assert_eq!(config1.otel.logger_level, config2.otel.logger_level); - } -} +// Default values for observability configuration +pub const DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY: usize = 10000; diff --git a/crates/config/src/observability/file.rs b/crates/config/src/observability/file.rs index 1a6f3f4b..18f6942b 100644 
--- a/crates/config/src/observability/file.rs +++ b/crates/config/src/observability/file.rs @@ -12,62 +12,17 @@ // See the License for the specific language governing permissions and // limitations under the License. -use serde::{Deserialize, Serialize}; -use std::env; +// RUSTFS_SINKS_FILE_PATH +pub const ENV_SINKS_FILE_PATH: &str = "RUSTFS_SINKS_FILE_PATH"; +// RUSTFS_SINKS_FILE_BUFFER_SIZE +pub const ENV_SINKS_FILE_BUFFER_SIZE: &str = "RUSTFS_SINKS_FILE_BUFFER_SIZE"; +// RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS +pub const ENV_SINKS_FILE_FLUSH_INTERVAL_MS: &str = "RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS"; +// RUSTFS_SINKS_FILE_FLUSH_THRESHOLD +pub const ENV_SINKS_FILE_FLUSH_THRESHOLD: &str = "RUSTFS_SINKS_FILE_FLUSH_THRESHOLD"; -/// File sink configuration -#[derive(Debug, Clone, Serialize, Deserialize)] -pub struct FileSink { - pub path: String, - #[serde(default = "default_buffer_size")] - pub buffer_size: Option<usize>, - #[serde(default = "default_flush_interval_ms")] - pub flush_interval_ms: Option<u64>, - #[serde(default = "default_flush_threshold")] - pub flush_threshold: Option<usize>, -} +pub const DEFAULT_SINKS_FILE_BUFFER_SIZE: usize = 8192; -impl FileSink { - pub fn new() -> Self { - Self { - path: env::var("RUSTFS_SINKS_FILE_PATH") - .ok() - .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(default_path), - buffer_size: default_buffer_size(), - flush_interval_ms: default_flush_interval_ms(), - flush_threshold: default_flush_threshold(), - } - } -} +pub const DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS: u64 = 1000; -impl Default for FileSink { - fn default() -> Self { - Self::new() - } -} - -fn default_buffer_size() -> Option<usize> { - Some(8192) -} -fn default_flush_interval_ms() -> Option<u64> { - Some(1000) -} -fn default_flush_threshold() -> Option<usize> { - Some(100) -} - -fn default_path() -> String { - let temp_dir = env::temp_dir().join("rustfs"); - - if let Err(e) = std::fs::create_dir_all(&temp_dir) { - eprintln!("Failed to create log directory: {e}"); - return 
"rustfs/rustfs.log".to_string(); - } - - temp_dir - .join("rustfs.log") - .to_str() - .unwrap_or("rustfs/rustfs.log") - .to_string() -} +pub const DEFAULT_SINKS_FILE_FLUSH_THRESHOLD: usize = 100; diff --git a/crates/config/src/observability/kafka.rs b/crates/config/src/observability/kafka.rs index 051d11cf..f5589d32 100644 --- a/crates/config/src/observability/kafka.rs +++ b/crates/config/src/observability/kafka.rs @@ -12,39 +12,16 @@ // See the License for the specific language governing permissions and // limitations under the License. -use serde::{Deserialize, Serialize}; +// RUSTFS_SINKS_KAFKA_BROKERS +pub const ENV_SINKS_KAFKA_BROKERS: &str = "RUSTFS_SINKS_KAFKA_BROKERS"; +pub const ENV_SINKS_KAFKA_TOPIC: &str = "RUSTFS_SINKS_KAFKA_TOPIC"; +// batch_size +pub const ENV_SINKS_KAFKA_BATCH_SIZE: &str = "RUSTFS_SINKS_KAFKA_BATCH_SIZE"; +// batch_timeout_ms +pub const ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS: &str = "RUSTFS_SINKS_KAFKA_BATCH_TIMEOUT_MS"; -/// Kafka sink configuration -#[derive(Debug, Clone, Serialize, Deserialize)] -pub struct KafkaSink { - pub brokers: String, - pub topic: String, - #[serde(default = "default_batch_size")] - pub batch_size: Option, - #[serde(default = "default_batch_timeout_ms")] - pub batch_timeout_ms: Option, -} - -impl KafkaSink { - pub fn new() -> Self { - Self { - brokers: "localhost:9092".to_string(), - topic: "rustfs".to_string(), - batch_size: default_batch_size(), - batch_timeout_ms: default_batch_timeout_ms(), - } - } -} - -impl Default for KafkaSink { - fn default() -> Self { - Self::new() - } -} - -fn default_batch_size() -> Option { - Some(100) -} -fn default_batch_timeout_ms() -> Option { - Some(1000) -} +// brokers +pub const DEFAULT_SINKS_KAFKA_BROKERS: &str = "localhost:9092"; +pub const DEFAULT_SINKS_KAFKA_TOPIC: &str = "rustfs-sinks"; +pub const DEFAULT_SINKS_KAFKA_BATCH_SIZE: usize = 100; +pub const DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS: u64 = 1000; diff --git a/crates/config/src/observability/logger.rs 
b/crates/config/src/observability/logger.rs deleted file mode 100644 index fc8c0bb3..00000000 --- a/crates/config/src/observability/logger.rs +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright 2024 RustFS Team -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -use serde::{Deserialize, Serialize}; - -/// Logger configuration -#[derive(Debug, Deserialize, Serialize, Clone)] -pub struct LoggerConfig { - pub queue_capacity: Option<usize>, -} - -impl LoggerConfig { - pub fn new() -> Self { - Self { - queue_capacity: Some(10000), - } - } -} - -impl Default for LoggerConfig { - fn default() -> Self { - Self::new() - } -} diff --git a/crates/config/src/observability/mod.rs b/crates/config/src/observability/mod.rs index fff1f5b9..2b1392bc 100644 --- a/crates/config/src/observability/mod.rs +++ b/crates/config/src/observability/mod.rs @@ -12,10 +12,12 @@ // See the License for the specific language governing permissions and // limitations under the License.
-pub(crate) mod config; -pub(crate) mod file; -pub(crate) mod kafka; -pub(crate) mod logger; -pub(crate) mod otel; -pub(crate) mod sink; -pub(crate) mod webhook; +mod config; +mod file; +mod kafka; +mod webhook; + +pub use config::*; +pub use file::*; +pub use kafka::*; +pub use webhook::*; diff --git a/crates/config/src/observability/otel.rs b/crates/config/src/observability/otel.rs deleted file mode 100644 index 5ef32231..00000000 --- a/crates/config/src/observability/otel.rs +++ /dev/null @@ -1,83 +0,0 @@ -// Copyright 2024 RustFS Team -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -use crate::constants::app::{ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT}; -use crate::{APP_NAME, DEFAULT_LOG_LEVEL}; -use serde::{Deserialize, Serialize}; -use std::env; - -/// OpenTelemetry configuration -#[derive(Debug, Deserialize, Serialize, Clone)] -pub struct OtelConfig { - pub endpoint: String, // Endpoint for metric collection - pub use_stdout: Option<bool>, // Output to stdout - pub sample_ratio: Option<f64>, // Trace sampling ratio - pub meter_interval: Option<u64>, // Metric collection interval - pub service_name: Option<String>, // Service name - pub service_version: Option<String>, // Service version - pub environment: Option<String>, // Environment - pub logger_level: Option<String>, // Logger level - pub local_logging_enabled: Option<bool>, // Local logging enabled -} - -impl OtelConfig { - pub fn new() -> Self { - extract_otel_config_from_env() - } -} - -impl Default for OtelConfig { - fn default() -> Self { - Self::new() - } -} - -// Helper function: Extract observable configuration from environment variables -fn extract_otel_config_from_env() -> OtelConfig { - OtelConfig { - endpoint: env::var("RUSTFS_OBSERVABILITY_ENDPOINT").unwrap_or_else(|_| "".to_string()), - use_stdout: env::var("RUSTFS_OBSERVABILITY_USE_STDOUT") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(USE_STDOUT)), - sample_ratio: env::var("RUSTFS_OBSERVABILITY_SAMPLE_RATIO") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(SAMPLE_RATIO)), - meter_interval: env::var("RUSTFS_OBSERVABILITY_METER_INTERVAL") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(METER_INTERVAL)), - service_name: env::var("RUSTFS_OBSERVABILITY_SERVICE_NAME") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(APP_NAME.to_string())), - service_version: env::var("RUSTFS_OBSERVABILITY_SERVICE_VERSION") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(SERVICE_VERSION.to_string())), - environment: env::var("RUSTFS_OBSERVABILITY_ENVIRONMENT") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(ENVIRONMENT.to_string())), - 
logger_level: env::var("RUSTFS_OBSERVABILITY_LOGGER_LEVEL") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(DEFAULT_LOG_LEVEL.to_string())), - local_logging_enabled: env::var("RUSTFS_OBSERVABILITY_LOCAL_LOGGING_ENABLED") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(false)), - } -} diff --git a/crates/config/src/observability/sink.rs b/crates/config/src/observability/sink.rs deleted file mode 100644 index b769604d..00000000 --- a/crates/config/src/observability/sink.rs +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright 2024 RustFS Team -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -use crate::observability::file::FileSink; -use crate::observability::kafka::KafkaSink; -use crate::observability::webhook::WebhookSink; -use serde::{Deserialize, Serialize}; - -/// Sink configuration -#[derive(Debug, Clone, Serialize, Deserialize)] -#[serde(tag = "type")] -pub enum SinkConfig { - Kafka(KafkaSink), - Webhook(WebhookSink), - File(FileSink), -} - -impl SinkConfig { - pub fn new() -> Self { - Self::File(FileSink::new()) - } -} - -impl Default for SinkConfig { - fn default() -> Self { - Self::new() - } -} diff --git a/crates/config/src/observability/webhook.rs b/crates/config/src/observability/webhook.rs index bd6404f8..b40da1da 100644 --- a/crates/config/src/observability/webhook.rs +++ b/crates/config/src/observability/webhook.rs @@ -12,42 +12,17 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-use serde::{Deserialize, Serialize}; -use std::collections::HashMap; +// RUSTFS_SINKS_WEBHOOK_ENDPOINT +pub const ENV_SINKS_WEBHOOK_ENDPOINT: &str = "RUSTFS_SINKS_WEBHOOK_ENDPOINT"; +// RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN +pub const ENV_SINKS_WEBHOOK_AUTH_TOKEN: &str = "RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN"; +// max_retries +pub const ENV_SINKS_WEBHOOK_MAX_RETRIES: &str = "RUSTFS_SINKS_WEBHOOK_MAX_RETRIES"; +// retry_delay_ms +pub const ENV_SINKS_WEBHOOK_RETRY_DELAY_MS: &str = "RUSTFS_SINKS_WEBHOOK_RETRY_DELAY_MS"; -/// Webhook sink configuration -#[derive(Debug, Deserialize, Serialize, Clone)] -pub struct WebhookSink { - pub endpoint: String, - pub auth_token: String, - pub headers: Option<HashMap<String, String>>, - #[serde(default = "default_max_retries")] - pub max_retries: Option<usize>, - #[serde(default = "default_retry_delay_ms")] - pub retry_delay_ms: Option<u64>, -} - -impl WebhookSink { - pub fn new() -> Self { - Self { - endpoint: "".to_string(), - auth_token: "".to_string(), - headers: Some(HashMap::new()), - max_retries: default_max_retries(), - retry_delay_ms: default_retry_delay_ms(), - } - } -} - -impl Default for WebhookSink { - fn default() -> Self { - Self::new() - } -} - -fn default_max_retries() -> Option<usize> { - Some(3) -} -fn default_retry_delay_ms() -> Option<u64> { - Some(100) -} +// Default values for webhook sink configuration +pub const DEFAULT_SINKS_WEBHOOK_ENDPOINT: &str = "http://localhost:8080"; +pub const DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN: &str = ""; +pub const DEFAULT_SINKS_WEBHOOK_MAX_RETRIES: usize = 3; +pub const DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS: u64 = 100; diff --git a/crates/e2e_test/Cargo.toml b/crates/e2e_test/Cargo.toml index cd51afdb..18701390 100644 --- a/crates/e2e_test/Cargo.toml +++ b/crates/e2e_test/Cargo.toml @@ -38,7 +38,7 @@ url.workspace = true rustfs-madmin.workspace = true rustfs-filemeta.workspace = true bytes.workspace = true -serial_test = "3.2.0" -aws-sdk-s3 = "1.99.0" -aws-config = "1.8.3" -async-trait = { workspace = true } +serial_test = { workspace = 
true } +aws-sdk-s3.workspace = true +aws-config = { workspace = true } +async-trait = { workspace = true } \ No newline at end of file diff --git a/crates/ecstore/src/config/com.rs b/crates/ecstore/src/config/com.rs index 11b45cbc..443905d5 100644 --- a/crates/ecstore/src/config/com.rs +++ b/crates/ecstore/src/config/com.rs @@ -12,16 +12,16 @@ // See the License for the specific language governing permissions and // limitations under the License. -use super::{Config, GLOBAL_StorageClass, storageclass}; +use crate::config::{Config, GLOBAL_STORAGE_CLASS, storageclass}; use crate::disk::RUSTFS_META_BUCKET; use crate::error::{Error, Result}; use crate::store_api::{ObjectInfo, ObjectOptions, PutObjReader, StorageAPI}; use http::HeaderMap; -use lazy_static::lazy_static; use rustfs_config::DEFAULT_DELIMITER; use rustfs_utils::path::SLASH_SEPARATOR; use std::collections::HashSet; use std::sync::Arc; +use std::sync::LazyLock; use tracing::{error, warn}; pub const CONFIG_PREFIX: &str = "config"; @@ -29,14 +29,13 @@ const CONFIG_FILE: &str = "config.json"; pub const STORAGE_CLASS_SUB_SYS: &str = "storage_class"; -lazy_static! 
{ - static ref CONFIG_BUCKET: String = format!("{}{}{}", RUSTFS_META_BUCKET, SLASH_SEPARATOR, CONFIG_PREFIX); - static ref SubSystemsDynamic: HashSet<String> = { - let mut h = HashSet::new(); - h.insert(STORAGE_CLASS_SUB_SYS.to_owned()); - h - }; -} +static CONFIG_BUCKET: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_META_BUCKET}{SLASH_SEPARATOR}{CONFIG_PREFIX}")); + +static SUB_SYSTEMS_DYNAMIC: LazyLock<HashSet<String>> = LazyLock::new(|| { + let mut h = HashSet::new(); + h.insert(STORAGE_CLASS_SUB_SYS.to_owned()); + h +}); pub async fn read_config(api: Arc, file: &str) -> Result> { let (data, _obj) = read_config_with_metadata(api, file, &ObjectOptions::default()).await?; Ok(data) @@ -197,7 +196,7 @@ pub async fn lookup_configs(cfg: &mut Config, api: Arc) { } async fn apply_dynamic_config(cfg: &mut Config, api: Arc) -> Result<()> { - for key in SubSystemsDynamic.iter() { + for key in SUB_SYSTEMS_DYNAMIC.iter() { apply_dynamic_config_for_sub_sys(cfg, api.clone(), key).await?; } @@ -212,9 +211,9 @@ async fn apply_dynamic_config_for_sub_sys(cfg: &mut Config, api: for (i, count) in set_drive_counts.iter().enumerate() { match storageclass::lookup_config(&kvs, *count) { Ok(res) => { - if i == 0 && GLOBAL_StorageClass.get().is_none() { - if let Err(r) = GLOBAL_StorageClass.set(res) { - error!("GLOBAL_StorageClass.set failed {:?}", r); + if i == 0 && GLOBAL_STORAGE_CLASS.get().is_none() { + if let Err(r) = GLOBAL_STORAGE_CLASS.set(res) { + error!("GLOBAL_STORAGE_CLASS.set failed {:?}", r); } } } diff --git a/crates/ecstore/src/config/mod.rs b/crates/ecstore/src/config/mod.rs index 34dbfee1..e957b7d8 100644 --- a/crates/ecstore/src/config/mod.rs +++ b/crates/ecstore/src/config/mod.rs @@ -21,26 +21,17 @@ pub mod storageclass; use crate::error::Result; use crate::store::ECStore; use com::{STORAGE_CLASS_SUB_SYS, lookup_configs, read_config_without_migrate}; -use lazy_static::lazy_static; use rustfs_config::DEFAULT_DELIMITER; +use rustfs_config::notify::{COMMENT_KEY, NOTIFY_MQTT_SUB_SYS,
NOTIFY_WEBHOOK_SUB_SYS}; use serde::{Deserialize, Serialize}; use std::collections::HashMap; +use std::sync::LazyLock; use std::sync::{Arc, OnceLock}; -lazy_static! { - pub static ref GLOBAL_StorageClass: OnceLock<storageclass::Config> = OnceLock::new(); - pub static ref DefaultKVS: OnceLock<HashMap<String, KVS>> = OnceLock::new(); - pub static ref GLOBAL_ServerConfig: OnceLock<Config> = OnceLock::new(); - pub static ref GLOBAL_ConfigSys: ConfigSys = ConfigSys::new(); -} - -/// Standard config keys and values. -pub const ENABLE_KEY: &str = "enable"; -pub const COMMENT_KEY: &str = "comment"; - -/// Enable values -pub const ENABLE_ON: &str = "on"; -pub const ENABLE_OFF: &str = "off"; +pub static GLOBAL_STORAGE_CLASS: LazyLock<OnceLock<storageclass::Config>> = LazyLock::new(OnceLock::new); +pub static DEFAULT_KVS: LazyLock<OnceLock<HashMap<String, KVS>>> = LazyLock::new(OnceLock::new); +pub static GLOBAL_SERVER_CONFIG: LazyLock<OnceLock<Config>> = LazyLock::new(OnceLock::new); +pub static GLOBAL_CONFIG_SYS: LazyLock<ConfigSys> = LazyLock::new(ConfigSys::new); pub const ENV_ACCESS_KEY: &str = "RUSTFS_ACCESS_KEY"; pub const ENV_SECRET_KEY: &str = "RUSTFS_SECRET_KEY"; @@ -66,7 +57,7 @@ impl ConfigSys { lookup_configs(&mut cfg, api).await; - let _ = GLOBAL_ServerConfig.set(cfg); + let _ = GLOBAL_SERVER_CONFIG.set(cfg); Ok(()) } @@ -131,6 +122,28 @@ impl KVS { keys } + + /// Insert or update a key/value pair in the KVS + pub fn insert(&mut self, key: String, value: String) { + for kv in self.0.iter_mut() { + if kv.key == key { + kv.value = value.clone(); + return; + } + } + self.0.push(KV { + key, + value, + hidden_if_empty: false, + }); + } + + /// Merge all entries from another KVS into the current instance + pub fn extend(&mut self, other: KVS) { + for KV { key, value, ..
} in other.0.into_iter() { + self.insert(key, value); + } + } } #[derive(Debug, Clone)] @@ -159,7 +172,7 @@ impl Config { } pub fn set_defaults(&mut self) { - if let Some(defaults) = DefaultKVS.get() { + if let Some(defaults) = DEFAULT_KVS.get() { for (k, v) in defaults.iter() { if !self.0.contains_key(k) { let mut default = HashMap::new(); @@ -198,20 +211,17 @@ pub fn register_default_kvs(kvs: HashMap<String, KVS>) { p.insert(k, v); } - let _ = DefaultKVS.set(p); + let _ = DEFAULT_KVS.set(p); } pub fn init() { let mut kvs = HashMap::new(); // Load storageclass default configuration - kvs.insert(STORAGE_CLASS_SUB_SYS.to_owned(), storageclass::DefaultKVS.clone()); + kvs.insert(STORAGE_CLASS_SUB_SYS.to_owned(), storageclass::DEFAULT_KVS.clone()); // New: Loading default configurations for notify_webhook and notify_mqtt // Referencing subsystem names through constants to improve readability and maintainability - kvs.insert( - rustfs_config::notify::NOTIFY_WEBHOOK_SUB_SYS.to_owned(), - notify::DefaultWebhookKVS.clone(), - ); - kvs.insert(rustfs_config::notify::NOTIFY_MQTT_SUB_SYS.to_owned(), notify::DefaultMqttKVS.clone()); + kvs.insert(NOTIFY_WEBHOOK_SUB_SYS.to_owned(), notify::DEFAULT_WEBHOOK_KVS.clone()); + kvs.insert(NOTIFY_MQTT_SUB_SYS.to_owned(), notify::DEFAULT_MQTT_KVS.clone()); // Register all default configurations register_default_kvs(kvs) diff --git a/crates/ecstore/src/config/notify.rs b/crates/ecstore/src/config/notify.rs index 71e239cb..2e1dedc2 100644 --- a/crates/ecstore/src/config/notify.rs +++ b/crates/ecstore/src/config/notify.rs @@ -12,40 +12,120 @@ // See the License for the specific language governing permissions and // limitations under the License.
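The new `insert`/`extend` methods added to `KVS` above give it update-or-append and last-writer-wins merge semantics. A standalone sketch of those semantics, using simplified stand-in types rather than the crate's real `KV`/`KVS`:

```rust
// Stand-ins for the patch's KV/KVS types, reduced to what the sketch needs.
#[derive(Debug, Clone, PartialEq)]
struct KV {
    key: String,
    value: String,
    hidden_if_empty: bool,
}

#[derive(Debug, Default)]
struct Kvs(Vec<KV>);

impl Kvs {
    /// Update the value in place if the key already exists, otherwise append.
    fn insert(&mut self, key: String, value: String) {
        for kv in self.0.iter_mut() {
            if kv.key == key {
                kv.value = value.clone();
                return;
            }
        }
        self.0.push(KV {
            key,
            value,
            hidden_if_empty: false,
        });
    }

    /// Merge another set into this one; on duplicate keys the other set wins.
    fn extend(&mut self, other: Kvs) {
        for KV { key, value, .. } in other.0.into_iter() {
            self.insert(key, value);
        }
    }

    fn lookup(&self, key: &str) -> Option<String> {
        self.0.iter().find(|kv| kv.key == key).map(|kv| kv.value.clone())
    }
}
```

Because `insert` scans for an existing key before pushing, repeated inserts never duplicate entries, and `extend` lets default KVS values be overlaid by user-supplied ones.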
-use crate::config::{ENABLE_KEY, ENABLE_OFF, KV, KVS}; -use lazy_static::lazy_static; +use crate::config::{KV, KVS}; use rustfs_config::notify::{ - DEFAULT_DIR, DEFAULT_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, - MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, - WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, + COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, ENABLE_OFF, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, + MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, + WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, }; +use std::sync::LazyLock; -lazy_static! { - /// The default configuration collection of webhooks, - /// Use lazy_static! to ensure that these configurations are initialized only once during the program life cycle, enabling high-performance lazy loading. - pub static ref DefaultWebhookKVS: KVS = KVS(vec![ - KV { key: ENABLE_KEY.to_owned(), value: ENABLE_OFF.to_owned(), hidden_if_empty: false }, - KV { key: WEBHOOK_ENDPOINT.to_owned(), value: "".to_owned(), hidden_if_empty: false }, +/// The default configuration collection of webhooks, +/// Initialized only once during the program life cycle, enabling high-performance lazy loading. 
+pub static DEFAULT_WEBHOOK_KVS: LazyLock = LazyLock::new(|| { + KVS(vec![ + KV { + key: ENABLE_KEY.to_owned(), + value: ENABLE_OFF.to_owned(), + hidden_if_empty: false, + }, + KV { + key: WEBHOOK_ENDPOINT.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, // Sensitive information such as authentication tokens is hidden when the value is empty, enhancing security - KV { key: WEBHOOK_AUTH_TOKEN.to_owned(), value: "".to_owned(), hidden_if_empty: true }, - KV { key: WEBHOOK_QUEUE_LIMIT.to_owned(), value: DEFAULT_LIMIT.to_string().to_owned(), hidden_if_empty: false }, - KV { key: WEBHOOK_QUEUE_DIR.to_owned(), value: DEFAULT_DIR.to_owned(), hidden_if_empty: false }, - KV { key: WEBHOOK_CLIENT_CERT.to_owned(), value: "".to_owned(), hidden_if_empty: false }, - KV { key: WEBHOOK_CLIENT_KEY.to_owned(), value: "".to_owned(), hidden_if_empty: false }, - ]); + KV { + key: WEBHOOK_AUTH_TOKEN.to_owned(), + value: "".to_owned(), + hidden_if_empty: true, + }, + KV { + key: WEBHOOK_QUEUE_LIMIT.to_owned(), + value: DEFAULT_LIMIT.to_string(), + hidden_if_empty: false, + }, + KV { + key: WEBHOOK_QUEUE_DIR.to_owned(), + value: DEFAULT_DIR.to_owned(), + hidden_if_empty: false, + }, + KV { + key: WEBHOOK_CLIENT_CERT.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + KV { + key: WEBHOOK_CLIENT_KEY.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + KV { + key: COMMENT_KEY.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + ]) +}); - /// MQTT's default configuration collection - pub static ref DefaultMqttKVS: KVS = KVS(vec![ - KV { key: ENABLE_KEY.to_owned(), value: ENABLE_OFF.to_owned(), hidden_if_empty: false }, - KV { key: MQTT_BROKER.to_owned(), value: "".to_owned(), hidden_if_empty: false }, - KV { key: MQTT_TOPIC.to_owned(), value: "".to_owned(), hidden_if_empty: false }, +/// MQTT's default configuration collection +pub static DEFAULT_MQTT_KVS: LazyLock = LazyLock::new(|| { + KVS(vec![ + KV { + key: 
ENABLE_KEY.to_owned(), + value: ENABLE_OFF.to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_BROKER.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_TOPIC.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, // Sensitive information such as passwords are hidden when the value is empty - KV { key: MQTT_PASSWORD.to_owned(), value: "".to_owned(), hidden_if_empty: true }, - KV { key: MQTT_USERNAME.to_owned(), value: "".to_owned(), hidden_if_empty: false }, - KV { key: MQTT_QOS.to_owned(), value: "0".to_owned(), hidden_if_empty: false }, - KV { key: MQTT_KEEP_ALIVE_INTERVAL.to_owned(), value: "0s".to_owned(), hidden_if_empty: false }, - KV { key: MQTT_RECONNECT_INTERVAL.to_owned(), value: "0s".to_owned(), hidden_if_empty: false }, - KV { key: MQTT_QUEUE_DIR.to_owned(), value: DEFAULT_DIR.to_owned(), hidden_if_empty: false }, - KV { key: MQTT_QUEUE_LIMIT.to_owned(), value: DEFAULT_LIMIT.to_string().to_owned(), hidden_if_empty: false }, - ]); -} + KV { + key: MQTT_PASSWORD.to_owned(), + value: "".to_owned(), + hidden_if_empty: true, + }, + KV { + key: MQTT_USERNAME.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_QOS.to_owned(), + value: "0".to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_KEEP_ALIVE_INTERVAL.to_owned(), + value: "0s".to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_RECONNECT_INTERVAL.to_owned(), + value: "0s".to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_QUEUE_DIR.to_owned(), + value: DEFAULT_DIR.to_owned(), + hidden_if_empty: false, + }, + KV { + key: MQTT_QUEUE_LIMIT.to_owned(), + value: DEFAULT_LIMIT.to_string(), + hidden_if_empty: false, + }, + KV { + key: COMMENT_KEY.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + ]) +}); diff --git a/crates/ecstore/src/config/storageclass.rs b/crates/ecstore/src/config/storageclass.rs index 81e7370b..34e8410b 100644 --- 
a/crates/ecstore/src/config/storageclass.rs +++ b/crates/ecstore/src/config/storageclass.rs @@ -15,9 +15,9 @@ use super::KVS; use crate::config::KV; use crate::error::{Error, Result}; -use lazy_static::lazy_static; use serde::{Deserialize, Serialize}; use std::env; +use std::sync::LazyLock; use tracing::warn; /// Default parity count for a given drive count @@ -62,34 +62,32 @@ pub const DEFAULT_RRS_PARITY: usize = 1; pub static DEFAULT_INLINE_BLOCK: usize = 128 * 1024; -lazy_static! { - pub static ref DefaultKVS: KVS = { - let kvs = vec![ - KV { - key: CLASS_STANDARD.to_owned(), - value: "".to_owned(), - hidden_if_empty: false, - }, - KV { - key: CLASS_RRS.to_owned(), - value: "EC:1".to_owned(), - hidden_if_empty: false, - }, - KV { - key: OPTIMIZE.to_owned(), - value: "availability".to_owned(), - hidden_if_empty: false, - }, - KV { - key: INLINE_BLOCK.to_owned(), - value: "".to_owned(), - hidden_if_empty: true, - }, - ]; +pub static DEFAULT_KVS: LazyLock = LazyLock::new(|| { + let kvs = vec![ + KV { + key: CLASS_STANDARD.to_owned(), + value: "".to_owned(), + hidden_if_empty: false, + }, + KV { + key: CLASS_RRS.to_owned(), + value: "EC:1".to_owned(), + hidden_if_empty: false, + }, + KV { + key: OPTIMIZE.to_owned(), + value: "availability".to_owned(), + hidden_if_empty: false, + }, + KV { + key: INLINE_BLOCK.to_owned(), + value: "".to_owned(), + hidden_if_empty: true, + }, + ]; - KVS(kvs) - }; -} + KVS(kvs) +}); // StorageClass - holds storage class information #[derive(Serialize, Deserialize, Debug, Default)] diff --git a/crates/ecstore/src/global.rs b/crates/ecstore/src/global.rs index 53351226..b7168045 100644 --- a/crates/ecstore/src/global.rs +++ b/crates/ecstore/src/global.rs @@ -36,8 +36,6 @@ pub const DISK_MIN_INODES: u64 = 1000; pub const DISK_FILL_FRACTION: f64 = 0.99; pub const DISK_RESERVE_FRACTION: f64 = 0.15; -pub const DEFAULT_PORT: u16 = 9000; - lazy_static! 
{ static ref GLOBAL_RUSTFS_PORT: OnceLock = OnceLock::new(); pub static ref GLOBAL_OBJECT_API: OnceLock> = OnceLock::new(); diff --git a/crates/ecstore/src/set_disk.rs b/crates/ecstore/src/set_disk.rs index 760ab2df..967297d3 100644 --- a/crates/ecstore/src/set_disk.rs +++ b/crates/ecstore/src/set_disk.rs @@ -1,4 +1,3 @@ -#![allow(unused_imports)] // Copyright 2024 RustFS Team // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -12,6 +11,8 @@ // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. + +#![allow(unused_imports)] #![allow(unused_variables)] use crate::bitrot::{create_bitrot_reader, create_bitrot_writer}; @@ -33,7 +34,7 @@ use crate::store_api::{ListPartsInfo, ObjectToDelete}; use crate::{ bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts}, cache_value::metacache_set::{ListPathRawOptions, list_path_raw}, - config::{GLOBAL_StorageClass, storageclass}, + config::{GLOBAL_STORAGE_CLASS, storageclass}, disk::{ CheckPartsResp, DeleteOptions, DiskAPI, DiskInfo, DiskInfoOptions, DiskOption, DiskStore, FileInfoVersions, RUSTFS_META_BUCKET, RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_TMP_BUCKET, ReadMultipleReq, ReadMultipleResp, ReadOptions, @@ -626,7 +627,7 @@ impl SetDisks { && !found.etag.is_empty() && part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum { - ret[part_idx] = found; + ret[part_idx] = found.clone(); } else { ret[part_idx] = ObjectPartInfo { number: part_numbers[part_idx], @@ -2011,12 +2012,12 @@ impl SetDisks { if errs.iter().any(|err| err.is_some()) { let _ = rustfs_common::heal_channel::send_heal_request(rustfs_common::heal_channel::create_heal_request_with_options( - fi.volume.to_string(), // bucket - Some(fi.name.to_string()), // object_prefix - false, // force_start - 
Some(rustfs_common::heal_channel::HealChannelPriority::Normal), // priority - Some(self.pool_index), // pool_index - Some(self.set_index), // set_index + fi.volume.to_string(), // bucket + Some(fi.name.to_string()), // object_prefix + false, // force_start + Some(HealChannelPriority::Normal), // priority + Some(self.pool_index), // pool_index + Some(self.set_index), // set_index )) .await; } @@ -2154,7 +2155,7 @@ impl SetDisks { bucket.to_string(), Some(object.to_string()), false, - Some(rustfs_common::heal_channel::HealChannelPriority::Normal), + Some(HealChannelPriority::Normal), Some(pool_index), Some(set_index), ), @@ -2632,7 +2633,7 @@ impl SetDisks { } let is_inline_buffer = { - if let Some(sc) = GLOBAL_StorageClass.get() { + if let Some(sc) = GLOBAL_STORAGE_CLASS.get() { sc.should_inline(erasure.shard_file_size(latest_meta.size), false) } else { false @@ -3287,12 +3288,7 @@ impl ObjectIO for SetDisks { let paths = vec![object.to_string()]; let lock_acquired = self .namespace_lock - .lock_batch( - &paths, - &self.locker_owner, - std::time::Duration::from_secs(5), - std::time::Duration::from_secs(10), - ) + .lock_batch(&paths, &self.locker_owner, Duration::from_secs(5), Duration::from_secs(10)) .await?; if !lock_acquired { @@ -3303,7 +3299,7 @@ impl ObjectIO for SetDisks { let mut user_defined = opts.user_defined.clone(); let sc_parity_drives = { - if let Some(sc) = GLOBAL_StorageClass.get() { + if let Some(sc) = GLOBAL_STORAGE_CLASS.get() { sc.get_parity_for_sc(user_defined.get(AMZ_STORAGE_CLASS).cloned().unwrap_or_default().as_str()) } else { None @@ -3348,7 +3344,7 @@ impl ObjectIO for SetDisks { let erasure = erasure_coding::Erasure::new(fi.erasure.data_blocks, fi.erasure.parity_blocks, fi.erasure.block_size); let is_inline_buffer = { - if let Some(sc) = GLOBAL_StorageClass.get() { + if let Some(sc) = GLOBAL_STORAGE_CLASS.get() { sc.should_inline(erasure.shard_file_size(data.size()), opts.versioned) } else { false @@ -3919,7 +3915,7 @@ impl StorageAPI for 
SetDisks { bucket.to_string(), Some(object.to_string()), false, - Some(rustfs_common::heal_channel::HealChannelPriority::Normal), + Some(HealChannelPriority::Normal), Some(self.pool_index), Some(self.set_index), )) @@ -4729,7 +4725,7 @@ impl StorageAPI for SetDisks { } let sc_parity_drives = { - if let Some(sc) = GLOBAL_StorageClass.get() { + if let Some(sc) = GLOBAL_STORAGE_CLASS.get() { sc.get_parity_for_sc(user_defined.get(AMZ_STORAGE_CLASS).cloned().unwrap_or_default().as_str()) } else { None diff --git a/crates/ecstore/src/store.rs b/crates/ecstore/src/store.rs index 88cffaec..716f0d07 100644 --- a/crates/ecstore/src/store.rs +++ b/crates/ecstore/src/store.rs @@ -1,4 +1,3 @@ -#![allow(clippy::map_entry)] // Copyright 2024 RustFS Team // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -13,10 +12,12 @@ // See the License for the specific language governing permissions and // limitations under the License. +#![allow(clippy::map_entry)] + use crate::bucket::lifecycle::bucket_lifecycle_ops::init_background_expiry; use crate::bucket::metadata_sys::{self, set_bucket_metadata}; use crate::bucket::utils::{check_valid_bucket_name, check_valid_bucket_name_strict, is_meta_bucketname}; -use crate::config::GLOBAL_StorageClass; +use crate::config::GLOBAL_STORAGE_CLASS; use crate::config::storageclass; use crate::disk::endpoint::{Endpoint, EndpointType}; use crate::disk::{DiskAPI, DiskInfo, DiskInfoOptions}; @@ -1139,7 +1140,7 @@ impl StorageAPI for ECStore { #[tracing::instrument(skip(self))] async fn backend_info(&self) -> rustfs_madmin::BackendInfo { let (standard_sc_parity, rr_sc_parity) = { - if let Some(sc) = GLOBAL_StorageClass.get() { + if let Some(sc) = GLOBAL_STORAGE_CLASS.get() { let sc_parity = sc .get_parity_for_sc(storageclass::CLASS_STANDARD) .or(Some(self.pools[0].default_parity_count)); diff --git a/crates/mcp/Cargo.toml b/crates/mcp/Cargo.toml index cb6a759d..06d080c0 100644 --- a/crates/mcp/Cargo.toml +++ b/crates/mcp/Cargo.toml @@ 
-23,6 +23,7 @@ homepage.workspace = true description = "RustFS MCP (Model Context Protocol) Server" keywords = ["mcp", "s3", "aws", "rustfs", "server"] categories = ["development-tools", "web-programming"] +documentation = "https://docs.rs/rustfs-mcp/latest/rustfs_mcp/" [[bin]] name = "rustfs-mcp" @@ -36,7 +37,7 @@ aws-sdk-s3.workspace = true tokio = { workspace = true, features = ["io-std", "io-util", "macros", "signal"] } # MCP SDK with macros support -rmcp = { version = "0.3.0", features = ["server", "transport-io", "macros"] } +rmcp = { workspace = true, features = ["server", "transport-io", "macros"] } # Command line argument parsing clap = { workspace = true, features = ["derive", "env"] } @@ -44,27 +45,17 @@ clap = { workspace = true, features = ["derive", "env"] } # Serialization (still needed for S3 data structures) serde.workspace = true serde_json.workspace = true -schemars = "1.0" +schemars = { workspace = true } # Error handling anyhow.workspace = true -thiserror.workspace = true # Logging tracing.workspace = true tracing-subscriber.workspace = true # File handling and MIME type detection -mime_guess = "2.0" -tokio-util = { version = "0.7", features = ["io"] } -futures-util = "0.3" - -# Async trait support for trait abstractions -async-trait = "0.1" +mime_guess = { workspace = true } [dev-dependencies] # Testing framework and utilities -mockall = "0.13" -tempfile = "3.12" -tokio-test = "0.4" -test-case = "3.3" diff --git a/crates/notify/Cargo.toml b/crates/notify/Cargo.toml index 885ff841..bf58db49 100644 --- a/crates/notify/Cargo.toml +++ b/crates/notify/Cargo.toml @@ -27,11 +27,12 @@ documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/" [dependencies] rustfs-config = { workspace = true, features = ["notify"] } +rustfs-ecstore = { workspace = true } rustfs-utils = { workspace = true, features = ["path", "sys"] } async-trait = { workspace = true } chrono = { workspace = true, features = ["serde"] } dashmap = { workspace = true } 
-rustfs-ecstore = { workspace = true } +futures = { workspace = true } form_urlencoded = { workspace = true } once_cell = { workspace = true } quick-xml = { workspace = true, features = ["serialize", "async-tokio"] } @@ -49,8 +50,6 @@ url = { workspace = true } urlencoding = { workspace = true } wildmatch = { workspace = true, features = ["serde"] } - - [dev-dependencies] tokio = { workspace = true, features = ["test-util"] } reqwest = { workspace = true } diff --git a/crates/notify/examples/full_demo.rs b/crates/notify/examples/full_demo.rs index da51b0b8..4eb93df9 100644 --- a/crates/notify/examples/full_demo.rs +++ b/crates/notify/examples/full_demo.rs @@ -13,11 +13,11 @@ // limitations under the License. use rustfs_config::notify::{ - DEFAULT_LIMIT, DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_TOPIC, - MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, - WEBHOOK_QUEUE_LIMIT, + DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, + MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, + WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, }; -use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_ON, KV, KVS}; +use rustfs_ecstore::config::{Config, KV, KVS}; use rustfs_notify::arn::TargetID; use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger}; use rustfs_notify::{initialize, notification_system}; diff --git a/crates/notify/examples/full_demo_one.rs b/crates/notify/examples/full_demo_one.rs index 80b80c02..5d06eef9 100644 --- a/crates/notify/examples/full_demo_one.rs +++ b/crates/notify/examples/full_demo_one.rs @@ -14,11 +14,11 @@ // Using Global Accessories use rustfs_config::notify::{ - DEFAULT_LIMIT, DEFAULT_TARGET, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, 
MQTT_QUEUE_LIMIT, MQTT_TOPIC, - MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, - WEBHOOK_QUEUE_LIMIT, + DEFAULT_LIMIT, DEFAULT_TARGET, ENABLE_KEY, ENABLE_ON, MQTT_BROKER, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, + MQTT_TOPIC, MQTT_USERNAME, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_ENDPOINT, + WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, }; -use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_ON, KV, KVS}; +use rustfs_ecstore::config::{Config, KV, KVS}; use rustfs_notify::arn::TargetID; use rustfs_notify::{BucketNotificationConfig, Event, EventName, LogLevel, NotificationError, init_logger}; use rustfs_notify::{initialize, notification_system}; diff --git a/crates/notify/src/error.rs b/crates/notify/src/error.rs index a6e2aea1..8f5fcf0f 100644 --- a/crates/notify/src/error.rs +++ b/crates/notify/src/error.rs @@ -82,6 +82,15 @@ pub enum TargetError { #[error("Target is disabled")] Disabled, + + #[error("Configuration parsing error: {0}")] + ParseError(String), + + #[error("Failed to save configuration: {0}")] + SaveConfig(String), + + #[error("Server not initialized: {0}")] + ServerNotInitialized(String), } /// Error types for the notification system @@ -112,7 +121,7 @@ pub enum NotificationError { AlreadyInitialized, #[error("I/O error: {0}")] - Io(std::io::Error), + Io(io::Error), #[error("Failed to read configuration: {0}")] ReadConfig(String), diff --git a/crates/notify/src/factory.rs b/crates/notify/src/factory.rs index a33ba396..ada2de48 100644 --- a/crates/notify/src/factory.rs +++ b/crates/notify/src/factory.rs @@ -19,40 +19,17 @@ use crate::{ use async_trait::async_trait; use rumqttc::QoS; use rustfs_config::notify::{ - DEFAULT_DIR, DEFAULT_LIMIT, ENV_MQTT_BROKER, ENV_MQTT_ENABLE, ENV_MQTT_KEEP_ALIVE_INTERVAL, ENV_MQTT_PASSWORD, ENV_MQTT_QOS, - ENV_MQTT_QUEUE_DIR, ENV_MQTT_QUEUE_LIMIT, ENV_MQTT_RECONNECT_INTERVAL, ENV_MQTT_TOPIC, 
ENV_MQTT_USERNAME, - ENV_WEBHOOK_AUTH_TOKEN, ENV_WEBHOOK_CLIENT_CERT, ENV_WEBHOOK_CLIENT_KEY, ENV_WEBHOOK_ENABLE, ENV_WEBHOOK_ENDPOINT, - ENV_WEBHOOK_QUEUE_DIR, ENV_WEBHOOK_QUEUE_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, - MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, - WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, + DEFAULT_DIR, DEFAULT_LIMIT, ENV_NOTIFY_MQTT_KEYS, ENV_NOTIFY_WEBHOOK_KEYS, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, + MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, + NOTIFY_MQTT_KEYS, NOTIFY_WEBHOOK_KEYS, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, + WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, }; -use rustfs_config::{DEFAULT_DELIMITER, ENV_WORD_DELIMITER_DASH}; -use rustfs_ecstore::config::{ENABLE_KEY, ENABLE_ON, KVS}; +use rustfs_ecstore::config::KVS; +use std::collections::HashSet; use std::time::Duration; use tracing::{debug, warn}; use url::Url; -/// Helper function to get values from environment variables or KVS configurations. -/// -/// It will give priority to reading from environment variables such as `BASE_ENV_KEY_ID` and fall back to the KVS configuration if it fails. 
-fn get_config_value(id: &str, base_env_key: &str, config_key: &str, config: &KVS) -> Option { - let env_key = if id != DEFAULT_DELIMITER { - format!( - "{}{}{}", - base_env_key, - DEFAULT_DELIMITER, - id.to_uppercase().replace(ENV_WORD_DELIMITER_DASH, DEFAULT_DELIMITER) - ) - } else { - base_env_key.to_string() - }; - - match std::env::var(&env_key) { - Ok(val) => Some(val), - Err(_) => config.lookup(config_key), - } -} - /// Trait for creating targets from configuration #[async_trait] pub trait TargetFactory: Send + Sync { @@ -61,6 +38,14 @@ pub trait TargetFactory: Send + Sync { /// Validates target configuration fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError>; + + /// Returns a set of valid configuration field names for this target type. + /// This is used to filter environment variables. + fn get_valid_fields(&self) -> HashSet; + + /// Returns a set of valid configuration env field names for this target type. + /// This is used to filter environment variables. + fn get_valid_env_fields(&self) -> HashSet; } /// Factory for creating Webhook targets @@ -69,65 +54,42 @@ pub struct WebhookTargetFactory; #[async_trait] impl TargetFactory for WebhookTargetFactory { async fn create_target(&self, id: String, config: &KVS) -> Result, TargetError> { - let get = |base_env_key: &str, config_key: &str| get_config_value(&id, base_env_key, config_key, config); - - let enable = get(ENV_WEBHOOK_ENABLE, ENABLE_KEY) - .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true")) - .unwrap_or(false); - - if !enable { - return Err(TargetError::Configuration("Target is disabled".to_string())); - } - - let endpoint = get(ENV_WEBHOOK_ENDPOINT, WEBHOOK_ENDPOINT) + // All config values are now read directly from the merged `config` KVS. 
+ let endpoint = config + .lookup(WEBHOOK_ENDPOINT) .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?; let endpoint_url = Url::parse(&endpoint) .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{endpoint}')")))?; - let auth_token = get(ENV_WEBHOOK_AUTH_TOKEN, WEBHOOK_AUTH_TOKEN).unwrap_or_default(); - let queue_dir = get(ENV_WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()); - - let queue_limit = get(ENV_WEBHOOK_QUEUE_LIMIT, WEBHOOK_QUEUE_LIMIT) - .and_then(|v| v.parse::().ok()) - .unwrap_or(DEFAULT_LIMIT); - - let client_cert = get(ENV_WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_CERT).unwrap_or_default(); - let client_key = get(ENV_WEBHOOK_CLIENT_KEY, WEBHOOK_CLIENT_KEY).unwrap_or_default(); - let args = WebhookArgs { - enable, + enable: true, // If we are here, it's already enabled. endpoint: endpoint_url, - auth_token, - queue_dir, - queue_limit, - client_cert, - client_key, + auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(), + queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()), + queue_limit: config + .lookup(WEBHOOK_QUEUE_LIMIT) + .and_then(|v| v.parse::().ok()) + .unwrap_or(DEFAULT_LIMIT), + client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(), + client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(), }; let target = crate::target::webhook::WebhookTarget::new(id, args)?; Ok(Box::new(target)) } - fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError> { - let get = |base_env_key: &str, config_key: &str| get_config_value(id, base_env_key, config_key, config); - - let enable = get(ENV_WEBHOOK_ENABLE, ENABLE_KEY) - .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true")) - .unwrap_or(false); - - if !enable { - return Ok(()); - } - - let endpoint = get(ENV_WEBHOOK_ENDPOINT, WEBHOOK_ENDPOINT) + fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> { 
+ // Validation also uses the merged `config` KVS directly. + let endpoint = config + .lookup(WEBHOOK_ENDPOINT) .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?; debug!("endpoint: {}", endpoint); let parsed_endpoint = endpoint.trim(); Url::parse(parsed_endpoint) .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{parsed_endpoint}')")))?; - let client_cert = get(ENV_WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_CERT).unwrap_or_default(); - let client_key = get(ENV_WEBHOOK_CLIENT_KEY, WEBHOOK_CLIENT_KEY).unwrap_or_default(); + let client_cert = config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(); + let client_key = config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(); if client_cert.is_empty() != client_key.is_empty() { return Err(TargetError::Configuration( @@ -135,15 +97,21 @@ impl TargetFactory for WebhookTargetFactory { )); } - let queue_dir = get(ENV_WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_DIR) - .and_then(|v| v.parse::().ok()) - .unwrap_or(DEFAULT_DIR.to_string()); + let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()); if !queue_dir.is_empty() && !std::path::Path::new(&queue_dir).is_absolute() { return Err(TargetError::Configuration("Webhook queue directory must be an absolute path".to_string())); } Ok(()) } + + fn get_valid_fields(&self) -> HashSet { + NOTIFY_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect() + } + + fn get_valid_env_fields(&self) -> HashSet { + ENV_NOTIFY_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect() + } } /// Factory for creating MQTT targets @@ -152,84 +120,57 @@ pub struct MQTTTargetFactory; #[async_trait] impl TargetFactory for MQTTTargetFactory { async fn create_target(&self, id: String, config: &KVS) -> Result, TargetError> { - let get = |base_env_key: &str, config_key: &str| get_config_value(&id, base_env_key, config_key, config); - - let enable = get(ENV_MQTT_ENABLE, ENABLE_KEY) - .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || 
v.eq_ignore_ascii_case("true")) - .unwrap_or(false); - - if !enable { - return Err(TargetError::Configuration("Target is disabled".to_string())); - } - - let broker = - get(ENV_MQTT_BROKER, MQTT_BROKER).ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?; + let broker = config + .lookup(MQTT_BROKER) + .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?; let broker_url = Url::parse(&broker) .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?; - let topic = - get(ENV_MQTT_TOPIC, MQTT_TOPIC).ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?; - - let qos = get(ENV_MQTT_QOS, MQTT_QOS) - .and_then(|v| v.parse::<u8>().ok()) - .map(|q| match q { - 0 => QoS::AtMostOnce, - 1 => QoS::AtLeastOnce, - 2 => QoS::ExactlyOnce, - _ => QoS::AtLeastOnce, - }) - .unwrap_or(QoS::AtLeastOnce); - - let username = get(ENV_MQTT_USERNAME, MQTT_USERNAME).unwrap_or_default(); - let password = get(ENV_MQTT_PASSWORD, MQTT_PASSWORD).unwrap_or_default(); - - let reconnect_interval = get(ENV_MQTT_RECONNECT_INTERVAL, MQTT_RECONNECT_INTERVAL) - .and_then(|v| v.parse::<u64>().ok()) - .map(Duration::from_secs) - .unwrap_or_else(|| Duration::from_secs(5)); - - let keep_alive = get(ENV_MQTT_KEEP_ALIVE_INTERVAL, MQTT_KEEP_ALIVE_INTERVAL) - .and_then(|v| v.parse::<u64>().ok()) - .map(Duration::from_secs) - .unwrap_or_else(|| Duration::from_secs(30)); - - let queue_dir = get(ENV_MQTT_QUEUE_DIR, MQTT_QUEUE_DIR) - .and_then(|v| v.parse::<String>().ok()) - .unwrap_or(DEFAULT_DIR.to_string()); - let queue_limit = get(ENV_MQTT_QUEUE_LIMIT, MQTT_QUEUE_LIMIT) - .and_then(|v| v.parse::<u64>().ok()) - .unwrap_or(DEFAULT_LIMIT); + let topic = config + .lookup(MQTT_TOPIC) + .ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?; let args = MQTTArgs { - enable, + enable: true, // Assumed enabled.
broker: broker_url, topic, - qos, - username, - password, - max_reconnect_interval: reconnect_interval, - keep_alive, - queue_dir, - queue_limit, + qos: config + .lookup(MQTT_QOS) + .and_then(|v| v.parse::<u8>().ok()) + .map(|q| match q { + 0 => QoS::AtMostOnce, + 1 => QoS::AtLeastOnce, + 2 => QoS::ExactlyOnce, + _ => QoS::AtLeastOnce, + }) + .unwrap_or(QoS::AtLeastOnce), + username: config.lookup(MQTT_USERNAME).unwrap_or_default(), + password: config.lookup(MQTT_PASSWORD).unwrap_or_default(), + max_reconnect_interval: config + .lookup(MQTT_RECONNECT_INTERVAL) + .and_then(|v| v.parse::<u64>().ok()) + .map(Duration::from_secs) + .unwrap_or_else(|| Duration::from_secs(5)), + keep_alive: config + .lookup(MQTT_KEEP_ALIVE_INTERVAL) + .and_then(|v| v.parse::<u64>().ok()) + .map(Duration::from_secs) + .unwrap_or_else(|| Duration::from_secs(30)), + queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()), + queue_limit: config + .lookup(MQTT_QUEUE_LIMIT) + .and_then(|v| v.parse::<u64>().ok()) + .unwrap_or(DEFAULT_LIMIT), }; let target = crate::target::mqtt::MQTTTarget::new(id, args)?; Ok(Box::new(target)) } - fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError> { - let get = |base_env_key: &str, config_key: &str| get_config_value(id, base_env_key, config_key, config); - - let enable = get(ENV_MQTT_ENABLE, ENABLE_KEY) - .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true")) - .unwrap_or(false); - - if !enable { - return Ok(()); - } - - let broker = - get(ENV_MQTT_BROKER, MQTT_BROKER).ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?; + fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> { + let broker = config + .lookup(MQTT_BROKER) + .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?; let url = Url::parse(&broker) .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?; @@ -240,11 +181,11 @@ impl
TargetFactory for MQTTTargetFactory { } } - if get(ENV_MQTT_TOPIC, MQTT_TOPIC).is_none() { + if config.lookup(MQTT_TOPIC).is_none() { return Err(TargetError::Configuration("Missing MQTT topic".to_string())); } - if let Some(qos_str) = get(ENV_MQTT_QOS, MQTT_QOS) { + if let Some(qos_str) = config.lookup(MQTT_QOS) { let qos = qos_str .parse::<u8>() .map_err(|_| TargetError::Configuration("Invalid QoS value".to_string()))?; @@ -253,14 +194,12 @@ impl TargetFactory for MQTTTargetFactory { } } - let queue_dir = get(ENV_MQTT_QUEUE_DIR, MQTT_QUEUE_DIR) - .and_then(|v| v.parse::<String>().ok()) - .unwrap_or(DEFAULT_DIR.to_string()); + let queue_dir = config.lookup(MQTT_QUEUE_DIR).unwrap_or_default(); if !queue_dir.is_empty() { if !std::path::Path::new(&queue_dir).is_absolute() { return Err(TargetError::Configuration("MQTT queue directory must be an absolute path".to_string())); } - if let Some(qos_str) = get(ENV_MQTT_QOS, MQTT_QOS) { + if let Some(qos_str) = config.lookup(MQTT_QOS) { if qos_str == "0" { warn!("Using queue_dir with QoS 0 may result in event loss"); } @@ -269,4 +208,12 @@ impl TargetFactory for MQTTTargetFactory { Ok(()) } + + fn get_valid_fields(&self) -> HashSet<String> { + NOTIFY_MQTT_KEYS.iter().map(|s| s.to_string()).collect() + } + + fn get_valid_env_fields(&self) -> HashSet<String> { + ENV_NOTIFY_MQTT_KEYS.iter().map(|s| s.to_string()).collect() + } } diff --git a/crates/notify/src/integration.rs b/crates/notify/src/integration.rs index 4ef18a08..7f786285 100644 --- a/crates/notify/src/integration.rs +++ b/crates/notify/src/integration.rs @@ -210,10 +210,10 @@ impl NotificationSystem { return Ok(()); } - if let Err(e) = rustfs_ecstore::config::com::save_server_config(store, &new_config).await { - error!("Failed to save config: {}", e); - return Err(NotificationError::SaveConfig(e.to_string())); - } + // if let Err(e) = rustfs_ecstore::config::com::save_server_config(store, &new_config).await { + // error!("Failed to save config: {}", e); + // return
Err(NotificationError::SaveConfig(e.to_string())); + // } info!("Configuration updated. Reloading system..."); self.reload_config(new_config).await @@ -323,7 +323,6 @@ impl NotificationSystem { metrics: Arc, semaphore: Arc, ) -> mpsc::Sender<()> { - // Event Stream Processing Using Batch Version stream::start_event_stream_with_batching(store, target, metrics, semaphore) } @@ -348,6 +347,7 @@ impl NotificationSystem { self.update_config(new_config.clone()).await; // Create a new target from configuration + // This function will now be responsible for merging env, creating and persisting the final configuration. let targets: Vec<Box<dyn Target + Send>> = self .registry .create_targets_from_config(&new_config) diff --git a/crates/notify/src/registry.rs b/crates/notify/src/registry.rs index 828173c3..f6a346cf 100644 --- a/crates/notify/src/registry.rs +++ b/crates/notify/src/registry.rs @@ -18,11 +18,12 @@ use crate::{ factory::{MQTTTargetFactory, TargetFactory, WebhookTargetFactory}, target::Target, }; -use rustfs_config::notify::NOTIFY_ROUTE_PREFIX; +use futures::stream::{FuturesUnordered, StreamExt}; +use rustfs_config::notify::{ENABLE_KEY, ENABLE_ON, NOTIFY_ROUTE_PREFIX}; use rustfs_config::{DEFAULT_DELIMITER, ENV_PREFIX}; -use rustfs_ecstore::config::{Config, ENABLE_KEY, ENABLE_OFF, ENABLE_ON, KVS}; -use std::collections::HashMap; -use tracing::{debug, error, info}; +use rustfs_ecstore::config::{Config, KVS}; +use std::collections::{HashMap, HashSet}; +use tracing::{debug, error, info, warn}; /// Registry for managing target factories pub struct TargetRegistry { @@ -74,77 +75,204 @@ impl TargetRegistry { } /// Creates all targets from a configuration + /// Create all notification targets from system configuration and environment variables. + /// This method processes the creation of each target concurrently as follows: + /// 1. Iterate through all registered target types (e.g. webhooks, mqtt). + /// 2.
For each type, resolve its configuration in the configuration file and environment variables. + /// 3. Identify all target instance IDs that need to be created. + /// 4. Combine the default configuration, file configuration, and environment variable configuration for each instance. + /// 5. If the instance is enabled, create an asynchronous task for it to instantiate. + /// 6. Concurrently execute all creation tasks and collect results. pub async fn create_targets_from_config(&self, config: &Config) -> Result<Vec<Box<dyn Target + Send>>, TargetError> { - let mut targets: Vec<Box<dyn Target + Send>> = Vec::new(); + // Collect only environment variables with the relevant prefix to reduce memory usage + let all_env: Vec<(String, String)> = std::env::vars().filter(|(key, _)| key.starts_with(ENV_PREFIX)).collect(); + // A collection of asynchronous tasks for concurrently executing target creation + let mut tasks = FuturesUnordered::new(); + let mut final_config = config.clone(); // Clone a configuration for aggregating the final result + // 1. Traverse all registered factories and process them by target type + for (target_type, factory) in &self.factories { + tracing::Span::current().record("target_type", target_type.as_str()); + info!("Processing target type..."); - // Iterate through configuration sections - for (section, subsections) in &config.0 { - // Only process notification sections - if !section.starts_with(NOTIFY_ROUTE_PREFIX) { - continue; + // 2. Prepare the configuration source + // 2.1. Get the configuration segment in the file, e.g. 'notify_webhook' + let section_name = format!("{NOTIFY_ROUTE_PREFIX}{target_type}"); + let file_configs = config.0.get(&section_name).cloned().unwrap_or_default(); + // 2.2.
Get the default configuration for that type + let default_cfg = file_configs.get(DEFAULT_DELIMITER).cloned().unwrap_or_default(); + debug!(?default_cfg, "Get the default configuration"); + + // *** Optimization point 1: Get all valid fields of the current target type *** + let valid_fields = factory.get_valid_fields(); + debug!(?valid_fields, "Valid configuration fields"); + + // 3. Resolve instance IDs and configuration overrides from environment variables + let mut instance_ids_from_env = HashSet::new(); + // 3.1. Instance discovery: Based on the '..._ENABLE_INSTANCEID' format + let enable_prefix = format!("{ENV_PREFIX}{NOTIFY_ROUTE_PREFIX}{target_type}_{ENABLE_KEY}_").to_uppercase(); + for (key, value) in &all_env { + if value.eq_ignore_ascii_case(ENABLE_ON) + || value.eq_ignore_ascii_case("true") + || value.eq_ignore_ascii_case("1") + || value.eq_ignore_ascii_case("yes") + { + if let Some(id) = key.strip_prefix(&enable_prefix) { + if !id.is_empty() { + instance_ids_from_env.insert(id.to_lowercase()); + } + } + } } - // Extract target type from section name - let target_type = section.trim_start_matches(NOTIFY_ROUTE_PREFIX); + // 3.2. Parse all relevant environment variable configurations + // 3.2.1. Build environment variable prefixes such as 'RUSTFS_NOTIFY_WEBHOOK_' + let env_prefix = format!("{ENV_PREFIX}{NOTIFY_ROUTE_PREFIX}{target_type}_").to_uppercase(); + // 3.2.2.
'env_overrides' is used to store configurations parsed from environment variables in the format: {instance id -> {field -> value}} + let mut env_overrides: HashMap<String, HashMap<String, String>> = HashMap::new(); + for (key, value) in &all_env { + if let Some(rest) = key.strip_prefix(&env_prefix) { + // Use rsplitn to split from the right side to properly extract the INSTANCE_ID at the end + // Format: FIELD_NAME_INSTANCE_ID or FIELD_NAME + let mut parts = rest.rsplitn(2, '_'); - // Iterate through subsections (each representing a target instance) - for (target_id, target_config) in subsections { - // Skip disabled targets + // The first part from the right is INSTANCE_ID + let instance_id_part = parts.next().unwrap_or(DEFAULT_DELIMITER); + // The remaining part is FIELD_NAME + let field_name_part = parts.next(); - let enable_from_config = target_config.lookup(ENABLE_KEY).unwrap_or_else(|| ENABLE_OFF.to_string()); - debug!("Target enablement from config: {}/{}: {}", target_type, target_id, enable_from_config); - // Check environment variable for target enablement example: RUSTFS_NOTIFY_WEBHOOK_ENABLE|RUSTFS_NOTIFY_WEBHOOK_ENABLE_[TARGET_ID] - let env_key = if target_id == DEFAULT_DELIMITER { - // If no specific target ID, use the base target type, example: RUSTFS_NOTIFY_WEBHOOK_ENABLE - format!( - "{}{}{}{}{}", - ENV_PREFIX, - NOTIFY_ROUTE_PREFIX, - target_type.to_uppercase(), - DEFAULT_DELIMITER, - ENABLE_KEY - ) - } else { - // If specific target ID, append it to the key, example: RUSTFS_NOTIFY_WEBHOOK_ENABLE_[TARGET_ID] - format!( - "{}{}{}{}{}{}{}", - ENV_PREFIX, - NOTIFY_ROUTE_PREFIX, - target_type.to_uppercase(), - DEFAULT_DELIMITER, - ENABLE_KEY, - DEFAULT_DELIMITER, - target_id.to_uppercase() - ) + let (field_name, instance_id) = match field_name_part { + // Case 1: The format is FIELD_NAME_INSTANCE_ID + // e.g., rest = "ENDPOINT_PRIMARY" -> field_name="ENDPOINT", instance_id="PRIMARY" + Some(field) => (field.to_lowercase(), instance_id_part.to_lowercase()), + // Case 2: The format is FIELD_NAME (no INSTANCE_ID) + // e.g., rest = "ENABLE" -> field_name="ENABLE", instance_id="" (universal configuration, keyed by DEFAULT_DELIMITER) + None => (instance_id_part.to_lowercase(), DEFAULT_DELIMITER.to_string()), + }; + + // *** Optimization point 2: Verify whether the parsed field_name is valid *** + if !field_name.is_empty() && valid_fields.contains(&field_name) { + debug!( + instance_id = %if instance_id.is_empty() { DEFAULT_DELIMITER } else { &instance_id }, + %field_name, + %value, + "Parsed environment variable override" + ); + env_overrides + .entry(instance_id) + .or_default() + .insert(field_name, value.clone()); + } else { + // Ignore invalid field names + warn!( + field_name = %field_name, + "Ignoring environment variable field not found in the list of valid fields for target type {}", + target_type + ); + } } - .to_uppercase(); - debug!("Target env key: {},Target id: {}", env_key, target_id); - let enable_from_env = std::env::var(&env_key) - .map(|v| v.eq_ignore_ascii_case(ENABLE_ON) || v.eq_ignore_ascii_case("true")) + } debug!(?env_overrides, "Environment variable parsing complete"); + + // 4. Determine all instance IDs that need to be processed + let mut all_instance_ids: HashSet<String> = + file_configs.keys().filter(|k| *k != DEFAULT_DELIMITER).cloned().collect(); + all_instance_ids.extend(instance_ids_from_env); + debug!(?all_instance_ids, "Determine all instance IDs"); + + // 5. Merge configurations and create tasks for each instance + for id in all_instance_ids { + // 5.1.
Merge configuration, priority: Environment variables > File instance configuration > File default configuration + let mut merged_config = default_cfg.clone(); + // Apply the instance-specific configuration from the file + if let Some(file_instance_cfg) = file_configs.get(&id) { + merged_config.extend(file_instance_cfg.clone()); + } + // Apply the instance-specific environment variable configuration + if let Some(env_instance_cfg) = env_overrides.get(&id) { + // Convert HashMap to KVS + let mut kvs_from_env = KVS::new(); + for (k, v) in env_instance_cfg { + kvs_from_env.insert(k.clone(), v.clone()); + } + merged_config.extend(kvs_from_env); + } + debug!(instance_id = %id, ?merged_config, "Configuration merge complete"); + + // 5.2. Check if the instance is enabled + let enabled = merged_config + .lookup(ENABLE_KEY) + .map(|v| { + v.eq_ignore_ascii_case(ENABLE_ON) + || v.eq_ignore_ascii_case("true") + || v.eq_ignore_ascii_case("1") + || v.eq_ignore_ascii_case("yes") + }) .unwrap_or(false); - debug!("Target env value: {},key: {},Target id: {}", enable_from_env, env_key, target_id); - debug!( - "Target enablement from env: {}/{}: result: {}", - target_type, target_id, enable_from_config - ); - if enable_from_config != ENABLE_ON && !enable_from_env { - info!("Skipping disabled target: {}/{}", target_type, target_id); - continue; - } - debug!("create target: {}/{} start", target_type, target_id); - // Create target - match self.create_target(target_type, target_id.clone(), target_config).await { - Ok(target) => { - info!("Created target: {}/{}", target_type, target_id); - targets.push(target); - } - Err(e) => { - error!("Failed to create target {}/{}: reason: {}", target_type, target_id, e); - } + + if enabled { + info!(instance_id = %id, "Target is enabled, ready to create a task"); + // 5.3.
Create asynchronous tasks for enabled instances + let target_type_clone = target_type.clone(); + let tid = id.clone(); + let merged_config_arc = std::sync::Arc::new(merged_config); + tasks.push(async move { + let result = factory.create_target(tid.clone(), &merged_config_arc).await; + (target_type_clone, tid, result, std::sync::Arc::clone(&merged_config_arc)) + }); + } else { + info!(instance_id = %id, "Target is disabled; skipping it and removing it from the final configuration"); + // Remove disabled target from final configuration + final_config.0.entry(section_name.clone()).or_default().remove(&id); + } + } + } - Ok(targets) + // 6. Concurrently execute all creation tasks and collect results + let mut successful_targets = Vec::new(); + let mut successful_configs = Vec::new(); + while let Some((target_type, id, result, final_config)) = tasks.next().await { + match result { + Ok(target) => { + info!(target_type = %target_type, instance_id = %id, "Target created successfully"); + successful_targets.push(target); + successful_configs.push((target_type, id, final_config)); + } + Err(e) => { + error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create target"); + } + } + } + + // 7.
Aggregate new configuration and write back to system configuration + if !successful_configs.is_empty() { + info!( + "Writing {} successfully created target configuration(s) back to the system configuration...", + successful_configs.len() + ); + let mut new_config = config.clone(); + for (target_type, id, kvs) in successful_configs { + let section_name = format!("{NOTIFY_ROUTE_PREFIX}{target_type}").to_lowercase(); + new_config.0.entry(section_name).or_default().insert(id, (*kvs).clone()); + } + + let Some(store) = rustfs_ecstore::global::new_object_layer_fn() else { + return Err(TargetError::ServerNotInitialized( + "Failed to save target configuration: server storage not initialized".to_string(), + )); + }; + + match rustfs_ecstore::config::com::save_server_config(store, &new_config).await { + Ok(_) => { + info!("The new configuration was saved to the system successfully.") + } + Err(e) => { + error!("Failed to save the new configuration: {}", e); + return Err(TargetError::SaveConfig(e.to_string())); + } + } + } + + info!(count = successful_targets.len(), "All target processing completed"); + Ok(successful_targets) } } diff --git a/crates/notify/src/target/mod.rs b/crates/notify/src/target/mod.rs index c47a11d3..a7913f3a 100644 --- a/crates/notify/src/target/mod.rs +++ b/crates/notify/src/target/mod.rs @@ -109,3 +109,11 @@ impl std::fmt::Display for ChannelTargetType { } } } + +pub fn parse_bool(value: &str) -> Result<bool, TargetError> { + match value.to_lowercase().as_str() { + "true" | "on" | "yes" | "1" => Ok(true), + "false" | "off" | "no" | "0" => Ok(false), + _ => Err(TargetError::ParseError(format!("Unable to parse boolean: {value}"))), + } +} diff --git a/crates/obs/Cargo.toml b/crates/obs/Cargo.toml index 1263e2cd..77e799ac 100644 --- a/crates/obs/Cargo.toml +++ b/crates/obs/Cargo.toml @@ -36,7 +36,8 @@ webhook = ["dep:reqwest"] kafka = ["dep:rdkafka"] [dependencies] -rustfs-config = { workspace = true, features = ["constants"] } +rustfs-config = { workspace = true, features = ["constants", "observability"] } +rustfs-utils = { workspace = true, features = ["ip", "path"] } async-trait = { workspace = true } chrono = { workspace = true } flexi_logger = { workspace = true, features = ["trc", "kv"] } @@ -49,7 +50,6 @@ opentelemetry_sdk = { workspace = true, features = ["rt-tokio"] } opentelemetry-stdout = { workspace = true } opentelemetry-otlp = { workspace = true, features = ["grpc-tonic", "gzip-tonic", "trace", "metrics", "logs", "internal-logs"] } opentelemetry-semantic-conventions = { workspace = true, features = ["semconv_experimental"] } -rustfs-utils = { workspace = true, features = ["ip"] } serde = { workspace = true } smallvec = { workspace = true, features = ["serde"] } tracing = { workspace = true, features = ["std", "attributes"] } diff --git a/crates/obs/src/config.rs b/crates/obs/src/config.rs index 854273a3..8031cc84 100644 --- a/crates/obs/src/config.rs +++ b/crates/obs/src/config.rs @@ -12,11 +12,24 @@ // See the License for the specific language governing permissions and // limitations under the License.
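The obs config changes below swap hard-coded strings like "RUSTFS_OBS_ENDPOINT" for named `ENV_OBS_*` constants, but the read pattern stays `env::var(..).ok().and_then(parse).or(Some(default))`. A sketch of that pattern factored into a helper (the helper name is ours, not part of the crate; the raw value is passed in so the behavior is deterministic):

```rust
use std::str::FromStr;

// Hypothetical helper mirroring the OtelConfig field initializers:
// keep the raw env value if it parses, otherwise fall back to the default.
pub fn parse_or_default<T: FromStr>(raw: Option<String>, default: T) -> Option<T> {
    raw.and_then(|v| v.parse::<T>().ok()).or(Some(default))
}

fn main() {
    // A set, parsable value wins.
    let ratio = parse_or_default(Some("0.25".to_string()), 1.0_f64);
    // An unset variable (None) falls back to the default.
    let interval = parse_or_default::<u64>(None, 30);
    // So does an unparsable value -- the error is swallowed, not reported.
    let keep = parse_or_default(Some("thirty".to_string()), 30_u64);
    println!("{ratio:?} {interval:?} {keep:?}"); // prints Some(0.25) Some(30) Some(30)
}
```

The quiet fallback on parse failure is a deliberate trade-off: a typo in an env var degrades to the default instead of aborting startup.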
-use rustfs_config::{ - APP_NAME, DEFAULT_LOG_DIR, DEFAULT_LOG_FILENAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, DEFAULT_LOG_ROTATION_SIZE_MB, - DEFAULT_LOG_ROTATION_TIME, DEFAULT_OBS_LOG_FILENAME, DEFAULT_SINK_FILE_LOG_FILE, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, - SERVICE_VERSION, USE_STDOUT, +use rustfs_config::observability::{ + DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY, DEFAULT_SINKS_FILE_BUFFER_SIZE, DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS, + DEFAULT_SINKS_FILE_FLUSH_THRESHOLD, DEFAULT_SINKS_KAFKA_BATCH_SIZE, DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS, + DEFAULT_SINKS_KAFKA_BROKERS, DEFAULT_SINKS_KAFKA_TOPIC, DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN, DEFAULT_SINKS_WEBHOOK_ENDPOINT, + DEFAULT_SINKS_WEBHOOK_MAX_RETRIES, DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS, ENV_AUDIT_LOGGER_QUEUE_CAPACITY, ENV_OBS_ENDPOINT, + ENV_OBS_ENVIRONMENT, ENV_OBS_LOCAL_LOGGING_ENABLED, ENV_OBS_LOG_FILENAME, ENV_OBS_LOG_KEEP_FILES, + ENV_OBS_LOG_ROTATION_SIZE_MB, ENV_OBS_LOG_ROTATION_TIME, ENV_OBS_LOGGER_LEVEL, ENV_OBS_METER_INTERVAL, ENV_OBS_SAMPLE_RATIO, + ENV_OBS_SERVICE_NAME, ENV_OBS_SERVICE_VERSION, ENV_SINKS_FILE_BUFFER_SIZE, ENV_SINKS_FILE_FLUSH_INTERVAL_MS, + ENV_SINKS_FILE_FLUSH_THRESHOLD, ENV_SINKS_FILE_PATH, ENV_SINKS_KAFKA_BATCH_SIZE, ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS, + ENV_SINKS_KAFKA_BROKERS, ENV_SINKS_KAFKA_TOPIC, ENV_SINKS_WEBHOOK_AUTH_TOKEN, ENV_SINKS_WEBHOOK_ENDPOINT, + ENV_SINKS_WEBHOOK_MAX_RETRIES, ENV_SINKS_WEBHOOK_RETRY_DELAY_MS, }; +use rustfs_config::observability::{ENV_OBS_LOG_DIRECTORY, ENV_OBS_USE_STDOUT}; +use rustfs_config::{ + APP_NAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, DEFAULT_LOG_ROTATION_SIZE_MB, DEFAULT_LOG_ROTATION_TIME, + DEFAULT_OBS_LOG_FILENAME, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT, +}; +use rustfs_utils::dirs::get_log_directory_to_string; use serde::{Deserialize, Serialize}; use std::env; @@ -52,14 +65,14 @@ impl OtelConfig { pub fn extract_otel_config_from_env(endpoint: Option) -> OtelConfig { let endpoint = if let 
Some(endpoint) = endpoint { if endpoint.is_empty() { - env::var("RUSTFS_OBS_ENDPOINT").unwrap_or_else(|_| "".to_string()) + env::var(ENV_OBS_ENDPOINT).unwrap_or_else(|_| "".to_string()) } else { endpoint } } else { - env::var("RUSTFS_OBS_ENDPOINT").unwrap_or_else(|_| "".to_string()) + env::var(ENV_OBS_ENDPOINT).unwrap_or_else(|_| "".to_string()) }; - let mut use_stdout = env::var("RUSTFS_OBS_USE_STDOUT") + let mut use_stdout = env::var(ENV_OBS_USE_STDOUT) .ok() .and_then(|v| v.parse().ok()) .or(Some(USE_STDOUT)); @@ -70,51 +83,48 @@ impl OtelConfig { OtelConfig { endpoint, use_stdout, - sample_ratio: env::var("RUSTFS_OBS_SAMPLE_RATIO") + sample_ratio: env::var(ENV_OBS_SAMPLE_RATIO) .ok() .and_then(|v| v.parse().ok()) .or(Some(SAMPLE_RATIO)), - meter_interval: env::var("RUSTFS_OBS_METER_INTERVAL") + meter_interval: env::var(ENV_OBS_METER_INTERVAL) .ok() .and_then(|v| v.parse().ok()) .or(Some(METER_INTERVAL)), - service_name: env::var("RUSTFS_OBS_SERVICE_NAME") + service_name: env::var(ENV_OBS_SERVICE_NAME) .ok() .and_then(|v| v.parse().ok()) .or(Some(APP_NAME.to_string())), - service_version: env::var("RUSTFS_OBS_SERVICE_VERSION") + service_version: env::var(ENV_OBS_SERVICE_VERSION) .ok() .and_then(|v| v.parse().ok()) .or(Some(SERVICE_VERSION.to_string())), - environment: env::var("RUSTFS_OBS_ENVIRONMENT") + environment: env::var(ENV_OBS_ENVIRONMENT) .ok() .and_then(|v| v.parse().ok()) .or(Some(ENVIRONMENT.to_string())), - logger_level: env::var("RUSTFS_OBS_LOGGER_LEVEL") + logger_level: env::var(ENV_OBS_LOGGER_LEVEL) .ok() .and_then(|v| v.parse().ok()) .or(Some(DEFAULT_LOG_LEVEL.to_string())), - local_logging_enabled: env::var("RUSTFS_OBS_LOCAL_LOGGING_ENABLED") + local_logging_enabled: env::var(ENV_OBS_LOCAL_LOGGING_ENABLED) .ok() .and_then(|v| v.parse().ok()) .or(Some(false)), - log_directory: env::var("RUSTFS_OBS_LOG_DIRECTORY") - .ok() - .and_then(|v| v.parse().ok()) - .or(Some(DEFAULT_LOG_DIR.to_string())), - log_filename: env::var("RUSTFS_OBS_LOG_FILENAME") + 
log_directory: Some(get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY)), + log_filename: env::var(ENV_OBS_LOG_FILENAME) .ok() .and_then(|v| v.parse().ok()) .or(Some(DEFAULT_OBS_LOG_FILENAME.to_string())), - log_rotation_size_mb: env::var("RUSTFS_OBS_LOG_ROTATION_SIZE_MB") + log_rotation_size_mb: env::var(ENV_OBS_LOG_ROTATION_SIZE_MB) .ok() .and_then(|v| v.parse().ok()) .or(Some(DEFAULT_LOG_ROTATION_SIZE_MB)), // Default to 100 MB - log_rotation_time: env::var("RUSTFS_OBS_LOG_ROTATION_TIME") + log_rotation_time: env::var(ENV_OBS_LOG_ROTATION_TIME) .ok() .and_then(|v| v.parse().ok()) .or(Some(DEFAULT_LOG_ROTATION_TIME.to_string())), // Default to "Day" - log_keep_files: env::var("RUSTFS_OBS_LOG_KEEP_FILES") + log_keep_files: env::var(ENV_OBS_LOG_KEEP_FILES) .ok() .and_then(|v| v.parse().ok()) .or(Some(DEFAULT_LOG_KEEP_FILES)), // Default to keeping 30 log files @@ -154,16 +164,22 @@ impl KafkaSinkConfig { impl Default for KafkaSinkConfig { fn default() -> Self { Self { - brokers: env::var("RUSTFS_SINKS_KAFKA_BROKERS") + brokers: env::var(ENV_SINKS_KAFKA_BROKERS) .ok() .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(|| "localhost:9092".to_string()), - topic: env::var("RUSTFS_SINKS_KAFKA_TOPIC") + .unwrap_or_else(|| DEFAULT_SINKS_KAFKA_BROKERS.to_string()), + topic: env::var(ENV_SINKS_KAFKA_TOPIC) .ok() .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(|| "rustfs_sink".to_string()), - batch_size: Some(100), - batch_timeout_ms: Some(1000), + .unwrap_or_else(|| DEFAULT_SINKS_KAFKA_TOPIC.to_string()), + batch_size: env::var(ENV_SINKS_KAFKA_BATCH_SIZE) + .ok() + .and_then(|v| v.parse().ok()) + .or(Some(DEFAULT_SINKS_KAFKA_BATCH_SIZE)), + batch_timeout_ms: env::var(ENV_SINKS_KAFKA_BATCH_TIMEOUT_MS) + .ok() + .and_then(|v| v.parse().ok()) + .or(Some(DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS)), } } } @@ -186,16 +202,22 @@ impl WebhookSinkConfig { impl Default for WebhookSinkConfig { fn default() -> Self { Self { - endpoint: env::var("RUSTFS_SINKS_WEBHOOK_ENDPOINT") + 
endpoint: env::var(ENV_SINKS_WEBHOOK_ENDPOINT) .ok() .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(|| "http://localhost:8080".to_string()), - auth_token: env::var("RUSTFS_SINKS_WEBHOOK_AUTH_TOKEN") + .unwrap_or_else(|| DEFAULT_SINKS_WEBHOOK_ENDPOINT.to_string()), + auth_token: env::var(ENV_SINKS_WEBHOOK_AUTH_TOKEN) .ok() .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(|| "rustfs_webhook_token".to_string()), - max_retries: Some(3), - retry_delay_ms: Some(100), + .unwrap_or_else(|| DEFAULT_SINKS_WEBHOOK_AUTH_TOKEN.to_string()), + max_retries: env::var(ENV_SINKS_WEBHOOK_MAX_RETRIES) + .ok() + .and_then(|v| v.parse().ok()) + .or(Some(DEFAULT_SINKS_WEBHOOK_MAX_RETRIES)), + retry_delay_ms: env::var(ENV_SINKS_WEBHOOK_RETRY_DELAY_MS) + .ok() + .and_then(|v| v.parse().ok()) + .or(Some(DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS)), } } } @@ -210,18 +232,6 @@ pub struct FileSinkConfig { } impl FileSinkConfig { - pub fn get_default_log_path() -> String { - let temp_dir = env::temp_dir().join(DEFAULT_LOG_FILENAME); - if let Err(e) = std::fs::create_dir_all(&temp_dir) { - eprintln!("Failed to create log directory: {e}"); - return DEFAULT_LOG_DIR.to_string(); - } - temp_dir - .join(DEFAULT_SINK_FILE_LOG_FILE) - .to_str() - .unwrap_or(DEFAULT_LOG_DIR) - .to_string() - } pub fn new() -> Self { Self::default() } @@ -230,22 +240,19 @@ impl FileSinkConfig { impl Default for FileSinkConfig { fn default() -> Self { Self { - path: env::var("RUSTFS_SINKS_FILE_PATH") - .ok() - .filter(|s| !s.trim().is_empty()) - .unwrap_or_else(Self::get_default_log_path), - buffer_size: env::var("RUSTFS_SINKS_FILE_BUFFER_SIZE") + path: get_log_directory_to_string(ENV_SINKS_FILE_PATH), + buffer_size: env::var(ENV_SINKS_FILE_BUFFER_SIZE) .ok() .and_then(|v| v.parse().ok()) - .or(Some(8192)), - flush_interval_ms: env::var("RUSTFS_SINKS_FILE_FLUSH_INTERVAL_MS") + .or(Some(DEFAULT_SINKS_FILE_BUFFER_SIZE)), + flush_interval_ms: env::var(ENV_SINKS_FILE_FLUSH_INTERVAL_MS) .ok() .and_then(|v| v.parse().ok()) - 
.or(Some(1000)), - flush_threshold: env::var("RUSTFS_SINKS_FILE_FLUSH_THRESHOLD") + .or(Some(DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS)), + flush_threshold: env::var(ENV_SINKS_FILE_FLUSH_THRESHOLD) .ok() .and_then(|v| v.parse().ok()) - .or(Some(100)), + .or(Some(DEFAULT_SINKS_FILE_FLUSH_THRESHOLD)), } } } @@ -280,7 +287,10 @@ pub struct LoggerConfig { impl LoggerConfig { pub fn new() -> Self { Self { - queue_capacity: Some(10000), + queue_capacity: env::var(ENV_AUDIT_LOGGER_QUEUE_CAPACITY) + .ok() + .and_then(|v| v.parse().ok()) + .or(Some(DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY)), } } } diff --git a/crates/obs/src/sinks/file.rs b/crates/obs/src/sinks/file.rs index 1bbcdf02..fe1bcb60 100644 --- a/crates/obs/src/sinks/file.rs +++ b/crates/obs/src/sinks/file.rs @@ -48,7 +48,7 @@ impl FileSink { } let file = if file_exists { // If the file exists, open it in append mode - tracing::debug!("FileSink: File exists, opening in append mode."); + tracing::debug!("FileSink: File exists, opening in append mode. Path: {:?}", path); OpenOptions::new().append(true).create(true).open(&path).await? 
} else { // If the file does not exist, create it diff --git a/crates/obs/src/sinks/mod.rs b/crates/obs/src/sinks/mod.rs index dc2752be..212607a6 100644 --- a/crates/obs/src/sinks/mod.rs +++ b/crates/obs/src/sinks/mod.rs @@ -14,7 +14,6 @@ use crate::{AppConfig, SinkConfig, UnifiedLogEntry}; use async_trait::async_trait; -use rustfs_config::DEFAULT_SINK_FILE_LOG_FILE; use std::sync::Arc; #[cfg(feature = "file")] @@ -47,8 +46,12 @@ pub async fn create_sinks(config: &AppConfig) -> Vec> { sinks.push(Arc::new(kafka::KafkaSink::new( producer, kafka_config.topic.clone(), - kafka_config.batch_size.unwrap_or(100), - kafka_config.batch_timeout_ms.unwrap_or(1000), + kafka_config + .batch_size + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_KAFKA_BATCH_SIZE), + kafka_config + .batch_timeout_ms + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_KAFKA_BATCH_TIMEOUT_MS), ))); tracing::info!("Kafka sink created for topic: {}", kafka_config.topic); } @@ -57,25 +60,35 @@ pub async fn create_sinks(config: &AppConfig) -> Vec> { } } } + #[cfg(feature = "webhook")] SinkConfig::Webhook(webhook_config) => { sinks.push(Arc::new(webhook::WebhookSink::new( webhook_config.endpoint.clone(), webhook_config.auth_token.clone(), - webhook_config.max_retries.unwrap_or(3), - webhook_config.retry_delay_ms.unwrap_or(100), + webhook_config + .max_retries + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_WEBHOOK_MAX_RETRIES), + webhook_config + .retry_delay_ms + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_WEBHOOK_RETRY_DELAY_MS), ))); tracing::info!("Webhook sink created for endpoint: {}", webhook_config.endpoint); } - #[cfg(feature = "file")] SinkConfig::File(file_config) => { tracing::debug!("FileSink: Using path: {}", file_config.path); match file::FileSink::new( - format!("{}/{}", file_config.path.clone(), DEFAULT_SINK_FILE_LOG_FILE), - file_config.buffer_size.unwrap_or(8192), - file_config.flush_interval_ms.unwrap_or(1000), - 
file_config.flush_threshold.unwrap_or(100), + format!("{}/{}", file_config.path.clone(), rustfs_config::DEFAULT_SINK_FILE_LOG_FILE), + file_config + .buffer_size + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_BUFFER_SIZE), + file_config + .flush_interval_ms + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_FLUSH_INTERVAL_MS), + file_config + .flush_threshold + .unwrap_or(rustfs_config::observability::DEFAULT_SINKS_FILE_FLUSH_THRESHOLD), ) .await { diff --git a/crates/obs/src/telemetry.rs b/crates/obs/src/telemetry.rs index ba812cc6..3b81a9e7 100644 --- a/crates/obs/src/telemetry.rs +++ b/crates/obs/src/telemetry.rs @@ -29,9 +29,9 @@ use opentelemetry_semantic_conventions::{ SCHEMA_URL, attribute::{DEPLOYMENT_ENVIRONMENT_NAME, NETWORK_LOCAL_ADDRESS, SERVICE_VERSION as OTEL_SERVICE_VERSION}, }; +use rustfs_config::observability::ENV_OBS_LOG_DIRECTORY; use rustfs_config::{ - APP_NAME, DEFAULT_LOG_DIR, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, - SERVICE_VERSION, USE_STDOUT, + APP_NAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT, }; use rustfs_utils::get_local_ip_with_default; use smallvec::SmallVec; @@ -293,7 +293,8 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard { } } else { // Obtain the log directory and file name configuration - let log_directory = config.log_directory.as_deref().unwrap_or(DEFAULT_LOG_DIR); + let default_log_directory = rustfs_utils::dirs::get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY); + let log_directory = config.log_directory.as_deref().unwrap_or(default_log_directory.as_str()); let log_filename = config.log_filename.as_deref().unwrap_or(service_name); if let Err(e) = fs::create_dir_all(log_directory) { diff --git a/crates/protos/Cargo.toml b/crates/protos/Cargo.toml index 9f8ac0ac..ed9b2bc1 100644 --- a/crates/protos/Cargo.toml +++ b/crates/protos/Cargo.toml @@ -37,4 +37,5 @@ 
rustfs-common.workspace = true flatbuffers = { workspace = true } prost = { workspace = true } tonic = { workspace = true, features = ["transport"] } -tonic-build = { workspace = true } +tonic-prost = { workspace = true } +tonic-prost-build = { workspace = true } \ No newline at end of file diff --git a/crates/protos/src/generated/mod.rs b/crates/protos/src/generated/mod.rs index 4ab5a438..9866676a 100644 --- a/crates/protos/src/generated/mod.rs +++ b/crates/protos/src/generated/mod.rs @@ -1,3 +1,17 @@ +// Copyright 2024 RustFS Team +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + #![allow(unused_imports)] #![allow(clippy::all)] pub mod proto_gen; diff --git a/crates/protos/src/generated/proto_gen/mod.rs b/crates/protos/src/generated/proto_gen/mod.rs index 35d3fe1b..fd0a9626 100644 --- a/crates/protos/src/generated/proto_gen/mod.rs +++ b/crates/protos/src/generated/proto_gen/mod.rs @@ -1 +1,15 @@ +// Copyright 2024 RustFS Team +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + pub mod node_service; diff --git a/crates/protos/src/generated/proto_gen/node_service.rs b/crates/protos/src/generated/proto_gen/node_service.rs index b48cc380..bdde52b8 100644 --- a/crates/protos/src/generated/proto_gen/node_service.rs +++ b/crates/protos/src/generated/proto_gen/node_service.rs @@ -1,46 +1,46 @@ // This file is @generated by prost-build. -/// -------------------------------------------------------------------- -#[derive(Clone, PartialEq, ::prost::Message)] +/// --- +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct Error { #[prost(uint32, tag = "1")] pub code: u32, #[prost(string, tag = "2")] pub error_info: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct PingRequest { #[prost(uint64, tag = "1")] pub version: u64, #[prost(bytes = "bytes", tag = "2")] pub body: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct PingResponse { #[prost(uint64, tag = "1")] pub version: u64, #[prost(bytes = "bytes", tag = "2")] pub body: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct HealBucketRequest { #[prost(string, tag = "1")] pub bucket: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub options: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct HealBucketResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListBucketRequest { #[prost(string, tag = "1")] pub 
options: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListBucketResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -49,28 +49,28 @@ pub struct ListBucketResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeBucketRequest { #[prost(string, tag = "1")] pub name: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub options: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeBucketResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetBucketInfoRequest { #[prost(string, tag = "1")] pub bucket: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub options: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetBucketInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -79,19 +79,19 @@ pub struct GetBucketInfoResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteBucketRequest { #[prost(string, tag = "1")] pub bucket: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteBucketResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, 
::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadAllRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -101,7 +101,7 @@ pub struct ReadAllRequest { #[prost(string, tag = "3")] pub path: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadAllResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -110,7 +110,7 @@ pub struct ReadAllResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteAllRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -122,14 +122,14 @@ pub struct WriteAllRequest { #[prost(bytes = "bytes", tag = "4")] pub data: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteAllResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -141,14 +141,14 @@ pub struct DeleteRequest { #[prost(string, tag = "4")] pub options: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct VerifyFileRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -160,7 +160,7 @@ pub struct VerifyFileRequest { 
#[prost(string, tag = "4")] pub file_info: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct VerifyFileResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -169,7 +169,7 @@ pub struct VerifyFileResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadPartsRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -178,7 +178,7 @@ pub struct ReadPartsRequest { #[prost(string, repeated, tag = "3")] pub paths: ::prost::alloc::vec::Vec<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadPartsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -187,7 +187,7 @@ pub struct ReadPartsResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct CheckPartsRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -199,7 +199,7 @@ pub struct CheckPartsRequest { #[prost(string, tag = "4")] pub file_info: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct CheckPartsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -208,7 +208,7 @@ pub struct CheckPartsResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenamePartRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -223,14 +223,14 @@ pub struct RenamePartRequest { #[prost(bytes = "bytes", tag = "6")] pub meta: 
::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenamePartResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenameFileRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -243,14 +243,14 @@ pub struct RenameFileRequest { #[prost(string, tag = "5")] pub dst_path: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenameFileResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -264,14 +264,14 @@ pub struct WriteRequest { #[prost(bytes = "bytes", tag = "5")] pub data: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadAtRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -285,7 +285,7 @@ pub struct ReadAtRequest { #[prost(int64, tag = "5")] pub length: i64, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadAtResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -296,7 +296,7 @@ pub struct ReadAtResponse { #[prost(message, optional, tag = "4")] pub 
error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListDirRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -304,7 +304,7 @@ pub struct ListDirRequest { #[prost(string, tag = "2")] pub volume: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListDirResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -313,7 +313,7 @@ pub struct ListDirResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WalkDirRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -321,7 +321,7 @@ pub struct WalkDirRequest { #[prost(bytes = "bytes", tag = "2")] pub walk_dir_options: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WalkDirResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -330,7 +330,7 @@ pub struct WalkDirResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenameDataRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -346,7 +346,7 @@ pub struct RenameDataRequest { #[prost(string, tag = "6")] pub dst_path: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct RenameDataResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -355,7 +355,7 @@ pub struct RenameDataResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, 
::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeVolumesRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -363,14 +363,14 @@ pub struct MakeVolumesRequest { #[prost(string, repeated, tag = "2")] pub volumes: ::prost::alloc::vec::Vec<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeVolumesResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeVolumeRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -378,20 +378,20 @@ pub struct MakeVolumeRequest { #[prost(string, tag = "2")] pub volume: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct MakeVolumeResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListVolumesRequest { /// indicate which one in the disks #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ListVolumesResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -400,7 +400,7 @@ pub struct ListVolumesResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct StatVolumeRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -408,7 +408,7 @@ pub struct StatVolumeRequest { #[prost(string, 
tag = "2")] pub volume: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct StatVolumeResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -417,7 +417,7 @@ pub struct StatVolumeResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeletePathsRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -426,14 +426,14 @@ pub struct DeletePathsRequest { #[prost(string, repeated, tag = "3")] pub paths: ::prost::alloc::vec::Vec<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeletePathsResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct UpdateMetadataRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -446,14 +446,14 @@ pub struct UpdateMetadataRequest { #[prost(string, tag = "5")] pub opts: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct UpdateMetadataResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteMetadataRequest { /// indicate which one in the disks #[prost(string, tag = "1")] @@ -465,14 +465,14 @@ pub struct WriteMetadataRequest { #[prost(string, tag = "4")] pub file_info: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, 
PartialEq, Eq, Hash, ::prost::Message)] pub struct WriteMetadataResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadVersionRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -485,7 +485,7 @@ pub struct ReadVersionRequest { #[prost(string, tag = "5")] pub opts: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadVersionResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -494,7 +494,7 @@ pub struct ReadVersionResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadXlRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -505,7 +505,7 @@ pub struct ReadXlRequest { #[prost(bool, tag = "4")] pub read_data: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadXlResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -514,7 +514,7 @@ pub struct ReadXlResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVersionRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -529,7 +529,7 @@ pub struct DeleteVersionRequest { #[prost(string, tag = "6")] pub opts: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVersionResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -538,7 +538,7 @@ pub struct 
DeleteVersionResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVersionsRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, @@ -549,7 +549,7 @@ pub struct DeleteVersionsRequest { #[prost(string, tag = "4")] pub opts: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVersionsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -558,14 +558,14 @@ pub struct DeleteVersionsResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadMultipleRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub read_multiple_req: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReadMultipleResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -574,28 +574,28 @@ pub struct ReadMultipleResponse { #[prost(message, optional, tag = "3")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVolumeRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub volume: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteVolumeResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(message, optional, tag = "2")] pub error: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, 
::prost::Message)] pub struct DiskInfoRequest { #[prost(string, tag = "1")] pub disk: ::prost::alloc::string::String, #[prost(string, tag = "2")] pub opts: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DiskInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -605,12 +605,12 @@ pub struct DiskInfoResponse { pub error: ::core::option::Option, } /// lock api have same argument type -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GenerallyLockRequest { #[prost(string, tag = "1")] pub args: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GenerallyLockResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -622,12 +622,12 @@ pub struct Mss { #[prost(map = "string, string", tag = "1")] pub value: ::std::collections::HashMap<::prost::alloc::string::String, ::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct LocalStorageInfoRequest { #[prost(bool, tag = "1")] pub metrics: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LocalStorageInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -636,12 +636,12 @@ pub struct LocalStorageInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct ServerInfoRequest { #[prost(bool, tag = "1")] pub metrics: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ServerInfoResponse { #[prost(bool, tag = "1")] pub 
success: bool, @@ -650,9 +650,9 @@ pub struct ServerInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetCpusRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetCpusResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -661,9 +661,9 @@ pub struct GetCpusResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetNetInfoRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetNetInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -672,9 +672,9 @@ pub struct GetNetInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetPartitionsRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetPartitionsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -683,9 +683,9 @@ pub struct GetPartitionsResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetOsInfoRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetOsInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -694,9 
+694,9 @@ pub struct GetOsInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSeLinuxInfoRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSeLinuxInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -705,9 +705,9 @@ pub struct GetSeLinuxInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSysConfigRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSysConfigResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -716,9 +716,9 @@ pub struct GetSysConfigResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSysErrorsRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSysErrorsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -727,9 +727,9 @@ pub struct GetSysErrorsResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMemInfoRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMemInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -738,14 
+738,14 @@ pub struct GetMemInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMetricsRequest { #[prost(bytes = "bytes", tag = "1")] pub metric_type: ::prost::bytes::Bytes, #[prost(bytes = "bytes", tag = "2")] pub opts: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMetricsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -754,9 +754,9 @@ pub struct GetMetricsResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetProcInfoRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetProcInfoResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -765,19 +765,19 @@ pub struct GetProcInfoResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct StartProfilingRequest { #[prost(string, tag = "1")] pub profiler: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct StartProfilingResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct DownloadProfileDataRequest {} #[derive(Clone, PartialEq, ::prost::Message)] 
pub struct DownloadProfileDataResponse { @@ -788,12 +788,12 @@ pub struct DownloadProfileDataResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetBucketStatsDataRequest { #[prost(string, tag = "1")] pub bucket: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetBucketStatsDataResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -802,9 +802,9 @@ pub struct GetBucketStatsDataResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSrMetricsDataRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetSrMetricsDataResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -813,9 +813,9 @@ pub struct GetSrMetricsDataResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetAllBucketStatsRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetAllBucketStatsResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -824,55 +824,55 @@ pub struct GetAllBucketStatsResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadBucketMetadataRequest { #[prost(string, tag = "1")] pub 
bucket: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadBucketMetadataResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteBucketMetadataRequest { #[prost(string, tag = "1")] pub bucket: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteBucketMetadataResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeletePolicyRequest { #[prost(string, tag = "1")] pub policy_name: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeletePolicyResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadPolicyRequest { #[prost(string, tag = "1")] pub policy_name: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadPolicyResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct 
LoadPolicyMappingRequest { #[prost(string, tag = "1")] pub user_or_group: ::prost::alloc::string::String, @@ -881,78 +881,78 @@ pub struct LoadPolicyMappingRequest { #[prost(bool, tag = "3")] pub is_group: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadPolicyMappingResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteUserRequest { #[prost(string, tag = "1")] pub access_key: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteUserResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteServiceAccountRequest { #[prost(string, tag = "1")] pub access_key: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct DeleteServiceAccountResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadUserRequest { #[prost(string, tag = "1")] pub access_key: ::prost::alloc::string::String, #[prost(bool, tag = "2")] pub temp: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadUserResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, 
optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadServiceAccountRequest { #[prost(string, tag = "1")] pub access_key: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadServiceAccountResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadGroupRequest { #[prost(string, tag = "1")] pub group: ::prost::alloc::string::String, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadGroupResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReloadSiteReplicationConfigRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReloadSiteReplicationConfigResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -964,16 +964,16 @@ pub struct SignalServiceRequest { #[prost(message, optional, tag = "1")] pub vars: ::core::option::Option, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct SignalServiceResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, 
Hash, ::prost::Message)] pub struct BackgroundHealStatusRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct BackgroundHealStatusResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -982,12 +982,12 @@ pub struct BackgroundHealStatusResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMetacacheListingRequest { #[prost(bytes = "bytes", tag = "1")] pub opts: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct GetMetacacheListingResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -996,12 +996,12 @@ pub struct GetMetacacheListingResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct UpdateMetacacheListingRequest { #[prost(bytes = "bytes", tag = "1")] pub metacache: ::prost::bytes::Bytes, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct UpdateMetacacheListingResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -1010,39 +1010,39 @@ pub struct UpdateMetacacheListingResponse { #[prost(string, optional, tag = "3")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReloadPoolMetaRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct ReloadPoolMetaResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub 
error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct StopRebalanceRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct StopRebalanceResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadRebalanceMetaRequest { #[prost(bool, tag = "1")] pub start_rebalance: bool, } -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadRebalanceMetaResponse { #[prost(bool, tag = "1")] pub success: bool, #[prost(string, optional, tag = "2")] pub error_info: ::core::option::Option<::prost::alloc::string::String>, } -#[derive(Clone, Copy, PartialEq, ::prost::Message)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadTransitionTierConfigRequest {} -#[derive(Clone, PartialEq, ::prost::Message)] +#[derive(Clone, PartialEq, Eq, Hash, ::prost::Message)] pub struct LoadTransitionTierConfigResponse { #[prost(bool, tag = "1")] pub success: bool, @@ -1137,7 +1137,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/Ping"); let mut req = request.into_request(); req.extensions_mut() @@ -1152,7 +1152,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec 
= tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/HealBucket"); let mut req = request.into_request(); req.extensions_mut() @@ -1167,7 +1167,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ListBucket"); let mut req = request.into_request(); req.extensions_mut() @@ -1182,7 +1182,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/MakeBucket"); let mut req = request.into_request(); req.extensions_mut() @@ -1197,7 +1197,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetBucketInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1212,7 +1212,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteBucket"); let mut req = request.into_request(); req.extensions_mut() @@ -1227,7 +1227,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = 
tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadAll"); let mut req = request.into_request(); req.extensions_mut() @@ -1242,7 +1242,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/WriteAll"); let mut req = request.into_request(); req.extensions_mut() @@ -1257,7 +1257,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/Delete"); let mut req = request.into_request(); req.extensions_mut() @@ -1272,7 +1272,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/VerifyFile"); let mut req = request.into_request(); req.extensions_mut() @@ -1287,7 +1287,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadParts"); let mut req = request.into_request(); req.extensions_mut() @@ -1302,7 +1302,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - 
let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/CheckParts"); let mut req = request.into_request(); req.extensions_mut() @@ -1317,7 +1317,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/RenamePart"); let mut req = request.into_request(); req.extensions_mut() @@ -1332,7 +1332,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/RenameFile"); let mut req = request.into_request(); req.extensions_mut() @@ -1347,7 +1347,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/Write"); let mut req = request.into_request(); req.extensions_mut() @@ -1362,14 +1362,14 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/WriteStream"); let mut req = request.into_streaming_request(); req.extensions_mut() .insert(GrpcMethod::new("node_service.NodeService", "WriteStream")); self.inner.streaming(req, path, codec).await } - /// rpc 
Append(AppendRequest) returns (AppendResponse) {}; + /// rpc Append(AppendRequest) returns (AppendResponse) {}; pub async fn read_at( &mut self, request: impl tonic::IntoStreamingRequest, @@ -1378,7 +1378,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadAt"); let mut req = request.into_streaming_request(); req.extensions_mut() @@ -1393,7 +1393,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ListDir"); let mut req = request.into_request(); req.extensions_mut() @@ -1408,7 +1408,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/WalkDir"); let mut req = request.into_request(); req.extensions_mut() @@ -1423,7 +1423,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/RenameData"); let mut req = request.into_request(); req.extensions_mut() @@ -1438,7 +1438,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = 
tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/MakeVolumes"); let mut req = request.into_request(); req.extensions_mut() @@ -1453,7 +1453,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/MakeVolume"); let mut req = request.into_request(); req.extensions_mut() @@ -1468,7 +1468,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ListVolumes"); let mut req = request.into_request(); req.extensions_mut() @@ -1483,7 +1483,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/StatVolume"); let mut req = request.into_request(); req.extensions_mut() @@ -1498,7 +1498,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeletePaths"); let mut req = request.into_request(); req.extensions_mut() @@ -1513,7 +1513,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", 
e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/UpdateMetadata"); let mut req = request.into_request(); req.extensions_mut() @@ -1528,7 +1528,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/WriteMetadata"); let mut req = request.into_request(); req.extensions_mut() @@ -1543,7 +1543,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadVersion"); let mut req = request.into_request(); req.extensions_mut() @@ -1558,7 +1558,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadXL"); let mut req = request.into_request(); req.extensions_mut() @@ -1573,7 +1573,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteVersion"); let mut req = request.into_request(); req.extensions_mut() @@ -1588,7 +1588,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| 
tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteVersions"); let mut req = request.into_request(); req.extensions_mut() @@ -1603,7 +1603,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadMultiple"); let mut req = request.into_request(); req.extensions_mut() @@ -1618,7 +1618,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteVolume"); let mut req = request.into_request(); req.extensions_mut() @@ -1633,7 +1633,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DiskInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1648,7 +1648,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/Lock"); let mut req = request.into_request(); req.extensions_mut() @@ -1663,7 +1663,7 @@ pub mod node_service_client { .ready() 
.await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/UnLock"); let mut req = request.into_request(); req.extensions_mut() @@ -1678,7 +1678,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/RLock"); let mut req = request.into_request(); req.extensions_mut() @@ -1693,7 +1693,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/RUnLock"); let mut req = request.into_request(); req.extensions_mut() @@ -1708,7 +1708,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ForceUnLock"); let mut req = request.into_request(); req.extensions_mut() @@ -1723,7 +1723,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/Refresh"); let mut req = request.into_request(); req.extensions_mut() @@ -1738,7 +1738,7 @@ pub mod node_service_client { 
.ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LocalStorageInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1753,7 +1753,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ServerInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1768,7 +1768,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetCpus"); let mut req = request.into_request(); req.extensions_mut() @@ -1783,7 +1783,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetNetInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1798,7 +1798,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetPartitions"); let mut req = request.into_request(); req.extensions_mut() @@ -1813,7 +1813,7 @@ pub 
mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetOsInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1828,7 +1828,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetSELinuxInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1843,7 +1843,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetSysConfig"); let mut req = request.into_request(); req.extensions_mut() @@ -1858,7 +1858,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetSysErrors"); let mut req = request.into_request(); req.extensions_mut() @@ -1873,7 +1873,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetMemInfo"); let mut req = request.into_request(); 
req.extensions_mut() @@ -1888,7 +1888,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetMetrics"); let mut req = request.into_request(); req.extensions_mut() @@ -1903,7 +1903,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetProcInfo"); let mut req = request.into_request(); req.extensions_mut() @@ -1918,7 +1918,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/StartProfiling"); let mut req = request.into_request(); req.extensions_mut() @@ -1933,7 +1933,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DownloadProfileData"); let mut req = request.into_request(); req.extensions_mut() @@ -1948,7 +1948,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetBucketStats"); 
let mut req = request.into_request(); req.extensions_mut() @@ -1963,7 +1963,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetSRMetrics"); let mut req = request.into_request(); req.extensions_mut() @@ -1978,7 +1978,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetAllBucketStats"); let mut req = request.into_request(); req.extensions_mut() @@ -1993,7 +1993,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadBucketMetadata"); let mut req = request.into_request(); req.extensions_mut() @@ -2008,7 +2008,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteBucketMetadata"); let mut req = request.into_request(); req.extensions_mut() @@ -2023,7 +2023,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = 
http::uri::PathAndQuery::from_static("/node_service.NodeService/DeletePolicy"); let mut req = request.into_request(); req.extensions_mut() @@ -2038,7 +2038,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadPolicy"); let mut req = request.into_request(); req.extensions_mut() @@ -2053,7 +2053,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadPolicyMapping"); let mut req = request.into_request(); req.extensions_mut() @@ -2068,7 +2068,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteUser"); let mut req = request.into_request(); req.extensions_mut() @@ -2083,7 +2083,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/DeleteServiceAccount"); let mut req = request.into_request(); req.extensions_mut() @@ -2098,7 +2098,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = 
tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadUser"); let mut req = request.into_request(); req.extensions_mut() @@ -2113,7 +2113,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadServiceAccount"); let mut req = request.into_request(); req.extensions_mut() @@ -2128,7 +2128,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadGroup"); let mut req = request.into_request(); req.extensions_mut() @@ -2143,7 +2143,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReloadSiteReplicationConfig"); let mut req = request.into_request(); req.extensions_mut() @@ -2160,7 +2160,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/SignalService"); let mut req = request.into_request(); req.extensions_mut() @@ -2175,7 +2175,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = 
tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/BackgroundHealStatus"); let mut req = request.into_request(); req.extensions_mut() @@ -2190,7 +2190,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/GetMetacacheListing"); let mut req = request.into_request(); req.extensions_mut() @@ -2205,7 +2205,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/UpdateMetacacheListing"); let mut req = request.into_request(); req.extensions_mut() @@ -2220,7 +2220,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReloadPoolMeta"); let mut req = request.into_request(); req.extensions_mut() @@ -2235,7 +2235,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/StopRebalance"); let mut req = request.into_request(); req.extensions_mut() @@ -2250,7 +2250,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| 
tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadRebalanceMeta"); let mut req = request.into_request(); req.extensions_mut() @@ -2265,7 +2265,7 @@ pub mod node_service_client { .ready() .await .map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?; - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/LoadTransitionTierConfig"); let mut req = request.into_request(); req.extensions_mut() @@ -2354,7 +2354,7 @@ pub mod node_service_server { type ReadAtStream: tonic::codegen::tokio_stream::Stream> + std::marker::Send + 'static; - /// rpc Append(AppendRequest) returns (AppendResponse) {}; + /// rpc Append(AppendRequest) returns (AppendResponse) {}; async fn read_at( &self, request: tonic::Request>, @@ -2691,7 +2691,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = PingSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2719,7 +2719,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = HealBucketSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2747,7 +2747,7 @@ 
pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ListBucketSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2775,7 +2775,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = MakeBucketSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2803,7 +2803,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetBucketInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2831,7 +2831,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeleteBucketSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2859,7 +2859,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadAllSvc(inner); - let codec = 
tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2887,7 +2887,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = WriteAllSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2915,7 +2915,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeleteSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2943,7 +2943,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = VerifyFileSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2971,7 +2971,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadPartsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) 
.apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -2999,7 +2999,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = CheckPartsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3027,7 +3027,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = RenamePartSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3055,7 +3055,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = RenameFileSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3083,7 +3083,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = WriteSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, 
max_encoding_message_size); @@ -3112,7 +3112,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = WriteStreamSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3141,7 +3141,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadAtSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3169,7 +3169,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ListDirSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3198,7 +3198,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = WalkDirSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3226,7 +3226,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = 
RenameDataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3254,7 +3254,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = MakeVolumesSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3282,7 +3282,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = MakeVolumeSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3310,7 +3310,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ListVolumesSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3338,7 +3338,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = StatVolumeSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = 
tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3366,7 +3366,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeletePathsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3394,7 +3394,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = UpdateMetadataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3422,7 +3422,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = WriteMetadataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3450,7 +3450,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadVersionSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) 
.apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3478,7 +3478,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadXLSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3506,7 +3506,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeleteVersionSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3534,7 +3534,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeleteVersionsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3562,7 +3562,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ReadMultipleSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3590,7 +3590,7 @@ pub mod node_service_server { let inner = 
self.inner.clone(); let fut = async move { let method = DeleteVolumeSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3618,7 +3618,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DiskInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3646,7 +3646,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = LockSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3674,7 +3674,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = UnLockSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3702,7 +3702,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = RLockSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = 
tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3730,7 +3730,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = RUnLockSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3758,7 +3758,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ForceUnLockSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3786,7 +3786,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = RefreshSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3814,7 +3814,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = LocalStorageInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, 
send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3842,7 +3842,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = ServerInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3870,7 +3870,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetCpusSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3898,7 +3898,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetNetInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3926,7 +3926,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetPartitionsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3954,7 +3954,7 @@ pub mod 
node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetOsInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -3982,7 +3982,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetSELinuxInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4010,7 +4010,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetSysConfigSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4038,7 +4038,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetSysErrorsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4066,7 +4066,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetMemInfoSvc(inner); - let codec = 
tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4094,7 +4094,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetMetricsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4122,7 +4122,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetProcInfoSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4150,7 +4150,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = StartProfilingSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4178,7 +4178,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DownloadProfileDataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) 
.apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4206,7 +4206,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetBucketStatsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4234,7 +4234,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetSRMetricsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4262,7 +4262,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = GetAllBucketStatsSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4290,7 +4290,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = LoadBucketMetadataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) 
.apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4318,7 +4318,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeleteBucketMetadataSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4346,7 +4346,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = DeletePolicySvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4374,7 +4374,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = LoadPolicySvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4402,7 +4402,7 @@ pub mod node_service_server { let inner = self.inner.clone(); let fut = async move { let method = LoadPolicyMappingSvc(inner); - let codec = tonic::codec::ProstCodec::default(); + let codec = tonic_prost::ProstCodec::default(); let mut grpc = tonic::server::Grpc::new(codec) .apply_compression_config(accept_compression_encodings, send_compression_encodings) .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size); @@ -4430,7 +4430,7 @@ pub mod node_service_server { 
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = DeleteUserSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4458,7 +4458,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = DeleteServiceAccountSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4486,7 +4486,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = LoadUserSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4514,7 +4514,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = LoadServiceAccountSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4542,7 +4542,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = LoadGroupSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4572,7 +4572,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = ReloadSiteReplicationConfigSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4600,7 +4600,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = SignalServiceSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4628,7 +4628,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = BackgroundHealStatusSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4656,7 +4656,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = GetMetacacheListingSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4684,7 +4684,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = UpdateMetacacheListingSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4712,7 +4712,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = ReloadPoolMetaSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4740,7 +4740,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = StopRebalanceSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4768,7 +4768,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = LoadRebalanceMetaSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
@@ -4796,7 +4796,7 @@ pub mod node_service_server {
                     let inner = self.inner.clone();
                     let fut = async move {
                         let method = LoadTransitionTierConfigSvc(inner);
-                        let codec = tonic::codec::ProstCodec::default();
+                        let codec = tonic_prost::ProstCodec::default();
                         let mut grpc = tonic::server::Grpc::new(codec)
                             .apply_compression_config(accept_compression_encodings, send_compression_encodings)
                             .apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
diff --git a/crates/protos/src/main.rs b/crates/protos/src/main.rs
index 1a752bee..99d77c61 100644
--- a/crates/protos/src/main.rs
+++ b/crates/protos/src/main.rs
@@ -53,21 +53,19 @@ fn main() -> Result<(), AnyError> {
     let flatbuffer_out_dir = project_root_dir.join("generated").join("flatbuffers_generated");
     // let descriptor_set_path = PathBuf::from(env::var(ENV_OUT_DIR).unwrap()).join("proto-descriptor.bin");

-    tonic_build::configure()
+    tonic_prost_build::configure()
         .out_dir(proto_out_dir)
         // .file_descriptor_set_path(descriptor_set_path)
         .protoc_arg("--experimental_allow_proto3_optional")
         .compile_well_known_types(true)
-        .bytes(["."])
+        .bytes(".")
         .emit_rerun_if_changed(false)
-        .compile_protos(proto_files, &[proto_dir.clone()])
+        .compile_protos(proto_files, &[proto_dir.to_string_lossy().as_ref()])
        .map_err(|e| format!("Failed to generate protobuf file: {e}."))?;

     // protos/gen/mod.rs
     let generated_mod_rs_path = project_root_dir.join("generated").join("proto_gen").join("mod.rs");
-
     let mut generated_mod_rs = fs::File::create(generated_mod_rs_path)?;
-    writeln!(&mut generated_mod_rs, "pub mod node_service;")?;
     writeln!(
         &mut generated_mod_rs,
         r#"// Copyright 2024 RustFS Team
@@ -84,12 +82,13 @@ fn main() -> Result<(), AnyError> {
     // See the License for the specific language governing permissions and
     // limitations under the License."#
     )?;
+    writeln!(&mut generated_mod_rs, "\n")?;
+    writeln!(&mut generated_mod_rs, "pub mod node_service;")?;
     generated_mod_rs.flush()?;

     let generated_mod_rs_path = project_root_dir.join("generated").join("mod.rs");
-
     let mut generated_mod_rs = fs::File::create(generated_mod_rs_path)?;
-    writeln!(&mut generated_mod_rs, "#![allow(unused_imports)]")?;
+
     writeln!(
         &mut generated_mod_rs,
         r#"// Copyright 2024 RustFS Team
@@ -106,6 +105,9 @@ fn main() -> Result<(), AnyError> {
     // See the License for the specific language governing permissions and
     // limitations under the License."#
     )?;
+    writeln!(&mut generated_mod_rs, "\n")?;
+    writeln!(&mut generated_mod_rs, "#![allow(unused_imports)]")?;
+    writeln!(&mut generated_mod_rs, "\n")?;
     writeln!(&mut generated_mod_rs, "#![allow(clippy::all)]")?;
     writeln!(&mut generated_mod_rs, "pub mod proto_gen;")?;
     generated_mod_rs.flush()?;
diff --git a/crates/utils/src/dirs.rs b/crates/utils/src/dirs.rs
index f56af192..edfcddfa 100644
--- a/crates/utils/src/dirs.rs
+++ b/crates/utils/src/dirs.rs
@@ -12,7 +12,9 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

+use rustfs_config::{DEFAULT_LOG_DIR, DEFAULT_LOG_FILENAME};
 use std::env;
+use std::fs;
 use std::path::{Path, PathBuf};

 /// Get the absolute path to the current project
@@ -57,6 +59,72 @@ pub fn get_project_root() -> Result<PathBuf, String> {
     Err("The project root directory cannot be obtained. Please check the running environment and project structure.".to_string())
 }

+/// Get the log directory as a string
+/// This function will try to find a writable log directory in the following order:
+pub fn get_log_directory_to_string(key: &str) -> String {
+    get_log_directory(key).to_string_lossy().to_string()
+}
+
+/// Get the log directory
+/// This function will try to find a writable log directory in the following order:
+pub fn get_log_directory(key: &str) -> PathBuf {
+    // Environment variables are specified
+    if let Ok(log_dir) = env::var(key) {
+        let path = PathBuf::from(log_dir);
+        if ensure_directory_writable(&path) {
+            return path;
+        }
+    }
+
+    // System temporary directory
+    if let Ok(mut temp_dir) = env::temp_dir().canonicalize() {
+        temp_dir.push(DEFAULT_LOG_FILENAME);
+        temp_dir.push(DEFAULT_LOG_DIR);
+        if ensure_directory_writable(&temp_dir) {
+            return temp_dir;
+        }
+    }
+
+    // User home directory
+    if let Ok(home_dir) = env::var("HOME").or_else(|_| env::var("USERPROFILE")) {
+        let mut path = PathBuf::from(home_dir);
+        path.push(format!(".{DEFAULT_LOG_FILENAME}"));
+        path.push(DEFAULT_LOG_DIR);
+        if ensure_directory_writable(&path) {
+            return path;
+        }
+    }
+
+    // Current working directory
+    if let Ok(current_dir) = env::current_dir() {
+        let mut path = current_dir;
+        path.push(DEFAULT_LOG_DIR);
+        if ensure_directory_writable(&path) {
+            return path;
+        }
+    }
+
+    // Relative path
+    PathBuf::from(DEFAULT_LOG_DIR)
+}
+
+fn ensure_directory_writable(path: &PathBuf) -> bool {
+    // Try to create the directory
+    if fs::create_dir_all(path).is_err() {
+        return false;
+    }
+
+    // Check write permissions
+    let test_file = path.join(".write_test");
+    match fs::write(&test_file, "test") {
+        Ok(_) => {
+            let _ = fs::remove_file(&test_file);
+            true
+        }
+        Err(_) => false,
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
diff --git a/rustfs/src/admin/handlers/event.rs b/rustfs/src/admin/handlers/event.rs
index 03c97d49..690ede84 100644
--- a/rustfs/src/admin/handlers/event.rs
+++ b/rustfs/src/admin/handlers/event.rs
@@ -16,7 +16,7 @@ use crate::admin::router::Operation;
 use crate::auth::{check_key_valid, get_session_token};
 use http::{HeaderMap, StatusCode};
 use matchit::Params;
-use rustfs_config::notify::{NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
+use rustfs_config::notify::{ENABLE_KEY, ENABLE_ON, NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
 use rustfs_notify::EventName;
 use rustfs_notify::rules::{BucketNotificationConfig, PatternRules};
 use s3s::header::CONTENT_LENGTH;
@@ -75,11 +75,8 @@ impl Operation for SetNotificationTarget {
         let mut kvs_map: HashMap<String, String> = serde_json::from_slice(&body)
             .map_err(|e| s3_error!(InvalidArgument, "invalid json body for target config: {}", e))?;
         // If the enable key is missing, default it to "on"
-        if !kvs_map.contains_key(rustfs_ecstore::config::ENABLE_KEY) {
-            kvs_map.insert(
-                rustfs_ecstore::config::ENABLE_KEY.to_string(),
-                rustfs_ecstore::config::ENABLE_ON.to_string(),
-            );
+        if !kvs_map.contains_key(ENABLE_KEY) {
+            kvs_map.insert(ENABLE_KEY.to_string(), ENABLE_ON.to_string());
         }

         let kvs = rustfs_ecstore::config::KVS(
diff --git a/rustfs/src/admin/mod.rs b/rustfs/src/admin/mod.rs
index 4e24d7dc..0449b290 100644
--- a/rustfs/src/admin/mod.rs
+++ b/rustfs/src/admin/mod.rs
@@ -25,6 +25,7 @@ use handlers::{
     sts, tier, user,
 };

+use crate::admin::handlers::event::{ListNotificationTargets, RemoveNotificationTarget, SetNotificationTarget};
 use handlers::{GetReplicationMetricsHandler, ListRemoteTargetHandler, RemoveRemoteTargetHandler, SetRemoteTargetHandler};
 use hyper::Method;
 use router::{AdminOperation, S3Router};
@@ -365,5 +366,28 @@ fn register_user_route(r: &mut S3Router) -> std::io::Result<()> {
         AdminOperation(&policies::SetPolicyForUserOrGroup {}),
     )?;

+    r.insert(
+        Method::GET,
+        format!("{}{}", ADMIN_PREFIX, "/v3/target-list").as_str(),
+        AdminOperation(&ListNotificationTargets {}),
+    )?;
+
+    r.insert(
+        Method::POST,
+        format!("{}{}", ADMIN_PREFIX, "/v3/target-set").as_str(),
+        AdminOperation(&SetNotificationTarget {}),
+    )?;
+
+    // Remove notification target
+    // This endpoint removes a notification target based on its type and name.
+    // target-remove?target_type=xxx&target_name=xxx
+    // * `target_type` - Target type, such as "notify_webhook" or "notify_mqtt".
+    // * `target_name` - A unique name for a Target, such as "1".
+    r.insert(
+        Method::DELETE,
+        format!("{}{}", ADMIN_PREFIX, "/v3/target-remove").as_str(),
+        AdminOperation(&RemoveNotificationTarget {}),
+    )?;
+
     Ok(())
 }
diff --git a/rustfs/src/config/mod.rs b/rustfs/src/config/mod.rs
index fc127762..038f5f76 100644
--- a/rustfs/src/config/mod.rs
+++ b/rustfs/src/config/mod.rs
@@ -65,7 +65,7 @@ pub struct Opt {
     pub secret_key: String,

     /// Enable console server
-    #[arg(long, default_value_t = true, env = "RUSTFS_CONSOLE_ENABLE")]
+    #[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_ENABLE, env = "RUSTFS_CONSOLE_ENABLE")]
     pub console_enable: bool,

     /// Observability endpoint for trace, metrics and logs; only supports grpc mode.
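The log-directory resolution added in `crates/utils/src/dirs.rs` above probes candidates in order until one proves writable. A minimal std-only sketch of that fallback chain follows; the `DEFAULT_LOG_DIR` and `DEFAULT_LOG_FILENAME` values are assumed stand-ins for the `rustfs_config` constants, which are not available here:

```rust
use std::env;
use std::fs;
use std::path::{Path, PathBuf};

// Assumed stand-ins for rustfs_config::DEFAULT_LOG_DIR / DEFAULT_LOG_FILENAME.
const DEFAULT_LOG_DIR: &str = "logs";
const DEFAULT_LOG_FILENAME: &str = "rustfs";

// Probe a candidate: create it, then verify a test file can be written and removed.
fn ensure_directory_writable(path: &Path) -> bool {
    if fs::create_dir_all(path).is_err() {
        return false;
    }
    let test_file = path.join(".write_test");
    match fs::write(&test_file, "test") {
        Ok(_) => {
            let _ = fs::remove_file(&test_file);
            true
        }
        Err(_) => false,
    }
}

// Resolve the log directory: env var first, then temp dir, then a relative fallback.
fn get_log_directory(key: &str) -> PathBuf {
    if let Ok(log_dir) = env::var(key) {
        let path = PathBuf::from(log_dir);
        if ensure_directory_writable(&path) {
            return path;
        }
    }
    if let Ok(mut temp_dir) = env::temp_dir().canonicalize() {
        temp_dir.push(DEFAULT_LOG_FILENAME);
        temp_dir.push(DEFAULT_LOG_DIR);
        if ensure_directory_writable(&temp_dir) {
            return temp_dir;
        }
    }
    // Last resort: a relative path, created on demand by the logger.
    PathBuf::from(DEFAULT_LOG_DIR)
}

fn main() {
    // With the variable unset, resolution falls through to a later candidate.
    let dir = get_log_directory("HYPOTHETICAL_UNSET_LOG_DIR_VAR");
    assert!(!dir.as_os_str().is_empty());
    println!("resolved log dir: {}", dir.display());
}
```

The patch itself adds two more candidates (home directory, then current working directory) between the temp-dir and relative-path steps; the probe-then-fall-through shape is the same.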
diff --git a/rustfs/src/main.rs b/rustfs/src/main.rs
index 0aa2b573..8275d844 100644
--- a/rustfs/src/main.rs
+++ b/rustfs/src/main.rs
@@ -37,8 +37,8 @@ use rustfs_config::DEFAULT_DELIMITER;
 use rustfs_ecstore::bucket::metadata_sys::init_bucket_metadata_sys;
 use rustfs_ecstore::cmd::bucket_replication::init_bucket_replication_pool;
 use rustfs_ecstore::config as ecconfig;
-use rustfs_ecstore::config::GLOBAL_ConfigSys;
-use rustfs_ecstore::config::GLOBAL_ServerConfig;
+use rustfs_ecstore::config::GLOBAL_CONFIG_SYS;
+use rustfs_ecstore::config::GLOBAL_SERVER_CONFIG;
 use rustfs_ecstore::store_api::BucketOptions;
 use rustfs_ecstore::{
     StorageAPI,
@@ -159,7 +159,7 @@ async fn run(opt: config::Opt) -> Result<()> {
     ecconfig::init();

     // config system configuration
-    GLOBAL_ConfigSys.init(store.clone()).await?;
+    GLOBAL_CONFIG_SYS.init(store.clone()).await?;

     // Initialize event notifier
     init_event_notifier().await;
@@ -281,7 +281,7 @@ pub(crate) async fn init_event_notifier() {
     info!("Initializing event notifier...");

     // 1. Get the global configuration loaded by ecstore
-    let server_config = match GLOBAL_ServerConfig.get() {
+    let server_config = match GLOBAL_SERVER_CONFIG.get() {
         Some(config) => config.clone(), // Clone the config to pass ownership
         None => {
             error!("Event notifier initialization failed: Global server config not loaded.");
diff --git a/scripts/run.sh b/scripts/run.sh
index b9e94979..a16db700 100755
--- a/scripts/run.sh
+++ b/scripts/run.sh
@@ -58,11 +58,11 @@ export RUSTFS_CONSOLE_ADDRESS=":9001"
 #export RUSTFS_OBS_SERVICE_NAME=rustfs # Service name
 #export RUSTFS_OBS_SERVICE_VERSION=0.1.0 # Service version
 export RUSTFS_OBS_ENVIRONMENT=develop # Environment name
-export RUSTFS_OBS_LOGGER_LEVEL=debug # Log level; supports trace, debug, info, warn, error
+export RUSTFS_OBS_LOGGER_LEVEL=info # Log level; supports trace, debug, info, warn, error
 export RUSTFS_OBS_LOCAL_LOGGING_ENABLED=true # Whether to enable local logging
 export RUSTFS_OBS_LOG_DIRECTORY="$current_dir/deploy/logs" # Log directory
-export RUSTFS_OBS_LOG_ROTATION_TIME="minute" # Log rotation time unit, can be "second", "minute", "hour", "day"
-export RUSTFS_OBS_LOG_ROTATION_SIZE_MB=1 # Log rotation size in MB
+export RUSTFS_OBS_LOG_ROTATION_TIME="hour" # Log rotation time unit, can be "second", "minute", "hour", "day"
+export RUSTFS_OBS_LOG_ROTATION_SIZE_MB=100 # Log rotation size in MB

 export RUSTFS_SINKS_FILE_PATH="$current_dir/deploy/logs"
 export RUSTFS_SINKS_FILE_BUFFER_SIZE=12
@@ -89,10 +89,18 @@ export OTEL_INSTRUMENTATION_SCHEMA_URL="https://opentelemetry.io/schemas/1.31.0"
 export OTEL_INSTRUMENTATION_ATTRIBUTES="env=production"

 # notify
-export RUSTFS_NOTIFY_WEBHOOK_ENABLE="true" # Whether to enable webhook notifications
+export RUSTFS_NOTIFY_WEBHOOK_ENABLE="on" # Whether to enable webhook notifications
 export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT="http://[::]:3020/webhook" # Webhook notification endpoint
 export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR="$current_dir/deploy/logs/notify"
+
+export RUSTFS_NOTIFY_WEBHOOK_ENABLE_PRIMARY="on" # Whether to enable webhook notifications
+export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT_PRIMARY="http://[::]:3020/webhook" # Webhook notification endpoint
+export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR_PRIMARY="$current_dir/deploy/logs/notify"
+
+export RUSTFS_NOTIFY_WEBHOOK_ENABLE_MASTER="on" # Whether to enable webhook notifications
+export RUSTFS_NOTIFY_WEBHOOK_ENDPOINT_MASTER="http://[::]:3020/webhook" # Webhook notification endpoint
+export RUSTFS_NOTIFY_WEBHOOK_QUEUE_DIR_MASTER="$current_dir/deploy/logs/notify"
+
 export RUSTFS_NS_SCANNER_INTERVAL=60 # Object scan interval in seconds

 # export RUSTFS_SKIP_BACKGROUND_TASK=true
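The extra webhook blocks in `run.sh` above use a variable suffix (`_PRIMARY`, `_MASTER`) as the target name, so one process can register several webhook targets from the environment. A hypothetical std-only sketch of grouping such variables by target id; the grouping function and the `"_"` default-target id are illustrative assumptions, not the `rustfs_notify` implementation:

```rust
use std::collections::HashMap;

// Group `RUSTFS_NOTIFY_WEBHOOK_*` variables by target id (hypothetical helper).
// Un-suffixed variables form the default target (id "_"); suffixed ones such as
// `_PRIMARY` and `_MASTER` define additional named targets.
fn group_webhook_targets(env: &[(&str, &str)]) -> HashMap<String, HashMap<String, String>> {
    const PREFIX: &str = "RUSTFS_NOTIFY_WEBHOOK_";
    const KEYS: [&str; 3] = ["ENABLE", "ENDPOINT", "QUEUE_DIR"];
    let mut targets: HashMap<String, HashMap<String, String>> = HashMap::new();
    for (name, value) in env {
        let Some(rest) = name.strip_prefix(PREFIX) else { continue };
        for key in KEYS {
            if let Some(suffix) = rest.strip_prefix(key) {
                // "" -> default target "_"; "_PRIMARY" -> target "PRIMARY".
                let id = suffix.strip_prefix('_').unwrap_or("_").to_string();
                targets.entry(id).or_default().insert(key.to_string(), value.to_string());
                break;
            }
        }
    }
    targets
}

fn main() {
    let env = [
        ("RUSTFS_NOTIFY_WEBHOOK_ENABLE", "on"),
        ("RUSTFS_NOTIFY_WEBHOOK_ENABLE_PRIMARY", "on"),
        ("RUSTFS_NOTIFY_WEBHOOK_ENDPOINT_PRIMARY", "http://[::]:3020/webhook"),
    ];
    let targets = group_webhook_targets(&env);
    assert_eq!(targets.len(), 2); // default target plus PRIMARY
    assert_eq!(targets["PRIMARY"]["ENDPOINT"], "http://[::]:3020/webhook");
}
```

This also shows why the patch switches `RUSTFS_NOTIFY_WEBHOOK_ENABLE` from `"true"` to `"on"`: the config layer compares against the `ENABLE_ON` constant seen in the `event.rs` hunk.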