Compare commits


18 Commits

Author SHA1 Message Date
houseme
f7e188eee7 feat: upgrade datafusion to v50.0.0 and update related dependencies f… (#563)
* feat: upgrade datafusion to v50.0.0 and update related dependencies for compatibility

* fix

* fmt
2025-09-18 23:30:25 +08:00
houseme
4b9cb512f2 remove crate rustfs-audit-logger (#562) 2025-09-18 17:46:46 +08:00
Copilot
e5f0760009 Fix entrypoint.sh incorrectly passing logs directory as data volume with improved separation (#561)
* Initial plan

* Fix entrypoint.sh: separate log directory from data volumes

Co-authored-by: overtrue <1472352+overtrue@users.noreply.github.com>

* Improve separation: use functions and RUSTFS_OBS_LOG_DIRECTORY env var

Co-authored-by: overtrue <1472352+overtrue@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: overtrue <1472352+overtrue@users.noreply.github.com>
2025-09-18 17:05:14 +08:00
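The underlying rule of this fix is simple: the directory named by `RUSTFS_OBS_LOG_DIRECTORY` must never be handed to the server as a data volume. A minimal sketch of that filter, expressed in Rust for illustration only (the real logic lives in `entrypoint.sh`, and the whitespace-separated volume list is an assumption):

```rust
/// Illustrative only: drop the log directory from the set of data volumes.
/// Assumes RUSTFS_VOLUMES is a whitespace-separated list of paths.
fn data_volumes(volumes_env: &str, log_dir: &str) -> Vec<String> {
    volumes_env
        .split_whitespace()
        .filter(|v| *v != log_dir) // never treat the log dir as a data volume
        .map(str::to_owned)
        .collect()
}

fn main() {
    let vols = data_volumes("/data /logs", "/logs");
    assert_eq!(vols, vec!["/data".to_string()]);
}
```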
houseme
a6c211f4ea Feature/add dns logs (#558)
* add logs

* improve code for dns and logger
2025-09-18 12:00:43 +08:00
shiro.lee
f049c656d9 fix: list_objects does not return common_prefixes field. (#543) (#554)
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
2025-09-18 07:27:37 +08:00
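For context, S3's ListObjects rolls keys that share the requested delimiter up into `common_prefixes` instead of listing them individually. A minimal sketch of that roll-up (illustrative, not the RustFS implementation):

```rust
use std::collections::BTreeSet;

/// Collect the distinct "directory" prefixes under `prefix`, cut at `delimiter`.
fn common_prefixes(keys: &[&str], prefix: &str, delimiter: char) -> Vec<String> {
    let mut out = BTreeSet::new();
    for key in keys {
        if let Some(rest) = key.strip_prefix(prefix) {
            if let Some(pos) = rest.find(delimiter) {
                // Keep everything up to and including the delimiter.
                out.insert(format!("{prefix}{}", &rest[..=pos]));
            }
        }
    }
    out.into_iter().collect()
}

fn main() {
    let keys = ["photos/2024/a.jpg", "photos/2025/b.jpg", "photos/index.html"];
    // "photos/index.html" has no further delimiter, so it stays a plain key.
    assert_eq!(common_prefixes(&keys, "photos/", '/'), ["photos/2024/", "photos/2025/"]);
}
```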
majinghe
65dd947350 add tls support for docker compose (#553)
* add tls support for docker compose

* update docker compose file with comment
2025-09-17 22:45:23 +08:00
0xdx2
57f082ee2b fix: enforce max-keys limit to 1000 in S3 implementation (#549)
Co-authored-by: damon <damonxue2@gmail.com>
2025-09-16 18:02:24 +08:00
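S3 caps `max-keys` at 1000, so a conforming implementation clamps larger requested values rather than honoring them. A hedged sketch of that rule (not the actual RustFS code):

```rust
const MAX_KEYS_LIMIT: i32 = 1000;

/// Illustrative only: clamp a client-supplied max-keys to the S3 ceiling.
fn effective_max_keys(requested: Option<i32>) -> i32 {
    match requested {
        Some(n) if n > 0 => n.min(MAX_KEYS_LIMIT),
        _ => MAX_KEYS_LIMIT, // absent or non-positive: fall back to the default
    }
}

fn main() {
    assert_eq!(effective_max_keys(Some(50_000)), 1000);
    assert_eq!(effective_max_keys(Some(200)), 200);
    assert_eq!(effective_max_keys(None), 1000);
}
```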
weisd
ae7e86d7ef refactor: simplify initialization flow and modernize string formatting (#548) 2025-09-16 15:44:50 +08:00
houseme
a12a3bedc3 feat(obs): optimize WriteMode selection logic in init_telemetry (#546)
- Refactor WriteMode selection to ensure all variables moved into thread closures are owned types, preventing lifetime issues.
- Simplify and clarify WriteMode assignment for production and non-production environments.
- Improve code readability and maintainability for logger initialization.
2025-09-16 08:25:37 +08:00
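The lifetime point here is the usual one with threads: a closure handed to `std::thread::spawn` must be `'static`, so any borrowed data has to be cloned into an owned value before the move. A generic sketch of the pattern the commit describes (not the actual `init_telemetry` code):

```rust
use std::thread;

fn spawn_writer(log_dir: &str) -> thread::JoinHandle<()> {
    // Clone into an owned String before the move; capturing the &str borrow
    // directly would not satisfy the 'static bound on thread::spawn.
    let log_dir = log_dir.to_owned();
    thread::spawn(move || {
        println!("writing logs under {log_dir}");
    })
}

fn main() {
    spawn_writer("/logs").join().unwrap();
}
```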
Copilot
cafec06b7e [Optimization] Enhance obs module telemetry.rs with environment-aware logging and production security (#539)
* Initial plan

* Implement environment-aware logging with production stdout auto-disable

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* add mimalloc crate

* fix

* improve code

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>
Co-authored-by: houseme <housemecn@gmail.com>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
2025-09-15 14:52:20 +08:00
Parm Gill
1770679e66 Adding a toggle for update check (#532) 2025-09-14 22:26:48 +08:00
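Such a toggle is typically just an environment-variable gate; the sketch below assumes a hypothetical `RUSTFS_CHECK_UPDATE` variable (the actual flag name in #532 may differ):

```rust
/// Hypothetical flag name: update checks stay on unless explicitly disabled.
fn update_check_enabled() -> bool {
    match std::env::var("RUSTFS_CHECK_UPDATE") {
        Ok(v) => !matches!(v.to_ascii_lowercase().as_str(), "false" | "0" | "off"),
        Err(_) => true, // unset: keep the default behavior
    }
}

fn main() {
    if update_check_enabled() {
        println!("checking for updates...");
    }
}
```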
jon
a4fbf596e6 add startup logo (#528)
* add startup logo

* Replace logo ASCII art in main.rs

---------

Co-authored-by: houseme <housemecn@gmail.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
2025-09-14 12:04:00 +08:00
houseme
3f717292bf feat(console): support accessing console endpoint via port 9000 (#533)
* fix(main): update dns_init startup logic and remove unused code in http.rs

- Refactored the DNS resolver initialization logic in `main.rs` for improved startup reliability and error handling.
- Removed unused code from `http.rs` to keep the codebase clean and maintainable.

* feat(console): support accessing console endpoint via port 9000

- Added compatibility to allow console access through port 9000.
- Improved endpoint detection and routing for console service on standard and custom ports.
- Enhanced user experience for environments using port 9000 as the default access point.
2025-09-14 01:14:14 +08:00
houseme
73f0ecbf8f fix(main): update dns_init startup logic and remove unused code in http.rs (#531)
- Refactored the DNS resolver initialization logic in `main.rs` for improved startup reliability and error handling.
- Removed unused code from `http.rs` to keep the codebase clean and maintainable.
2025-09-13 23:43:25 +08:00
houseme
0c3079ae5e remove deps (#529) 2025-09-13 21:06:40 +08:00
majinghe
ebf30b0db5 update docker compose usage guidance in README file (#526) 2025-09-13 16:24:36 +08:00
Copilot
29c004d935 feat: enhance console separation with enterprise-grade security, monitoring, and advanced tower-http integration (#513)
* Initial plan

* feat: implement console service separation from endpoint

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* feat: add console separation documentation and tests

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* feat: enhance console separation with configurable CORS and improved Docker support

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* feat: implement enhanced console separation with security hardening and monitoring

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* refactor: implement console TLS following endpoint logic and improve configuration

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* add tower-http feature "timeout|limit"

* add dependencies crates `axum-server`

* refactor: reconstruct console server with enhanced tower-http features and environment variables

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* upgrade dep

* improve code for dns and console port `:9001`

* improve code

* fix

* docs: comprehensive improvement of console separation documentation and Docker deployment standards

Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>

* fmt

* add logs

* improve code for Config handler

* remove logs

* fix

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: houseme <4829346+houseme@users.noreply.github.com>
Co-authored-by: houseme <housemecn@gmail.com>
2025-09-13 14:48:14 +08:00
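As a rough idea of what the tower-http "timeout" and "limit" features buy the console server, here is a minimal axum 0.8 sketch; the route, limits, and permissive CORS are assumptions for illustration, not the RustFS configuration:

```rust
use std::time::Duration;

use axum::{routing::get, Router};
use tower_http::{cors::CorsLayer, limit::RequestBodyLimitLayer, timeout::TimeoutLayer};

fn console_router() -> Router {
    Router::new()
        .route("/health", get(|| async { "ok" }))
        // Abort requests that take longer than 30 seconds.
        .layer(TimeoutLayer::new(Duration::from_secs(30)))
        // Reject request bodies larger than 8 MiB.
        .layer(RequestBodyLimitLayer::new(8 * 1024 * 1024))
        // Wide-open CORS for illustration; production would restrict origins.
        .layer(CorsLayer::permissive())
}

#[tokio::main]
async fn main() {
    let listener = tokio::net::TcpListener::bind("0.0.0.0:9001").await.unwrap();
    axum::serve(listener, console_router()).await.unwrap();
}
```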
majinghe
4595bf7db6 fix docker compose running with no such file error (#519)
* fix docker compose running with no such file error

* fix observability docker compose
2025-09-13 13:04:06 +08:00
76 changed files with 5693 additions and 2865 deletions


@@ -14,18 +14,27 @@
services:
tempo-init:
image: busybox:latest
command: ["sh", "-c", "chown -R 10001:10001 /var/tempo"]
volumes:
- ./tempo-data:/var/tempo
user: root
networks:
- otel-network
restart: "no"
tempo:
image: grafana/tempo:latest
#user: root # The container must be started with root to execute chown in the script
#entrypoint: [ "/etc/tempo/entrypoint.sh" ] # Specify a custom entry point
user: "10001" # Runs as the dedicated tempo user; the tempo-init service fixes ownership of /var/tempo beforehand
command: [ "-config.file=/etc/tempo.yaml" ] # Passed as arguments to the tempo binary
volumes:
- ./tempo-entrypoint.sh:/etc/tempo/entrypoint.sh # Mount entry point script
- ./tempo.yaml:/etc/tempo.yaml
- ./tempo.yaml:/etc/tempo.yaml:ro
- ./tempo-data:/var/tempo
ports:
- "3200:3200" # tempo
- "24317:4317" # otlp grpc
restart: unless-stopped
networks:
- otel-network
@@ -94,4 +103,4 @@ networks:
driver: bridge
name: "network_otel_config"
driver_opts:
com.docker.network.enable_ipv6: "true"


@@ -42,9 +42,9 @@ exporters:
namespace: "rustfs" # Metric prefix
send_timestamps: true # Send timestamps
# enable_open_metrics: true
loki: # Loki exporter, for log data
otlphttp/loki: # Loki exporter, for log data
# endpoint: "http://loki:3100/otlp/v1/logs"
endpoint: "http://loki:3100/loki/api/v1/push"
endpoint: "http://loki:3100/otlp/v1/logs"
tls:
insecure: true
extensions:
@@ -65,7 +65,7 @@ service:
logs:
receivers: [ otlp ]
processors: [ batch ]
exporters: [ loki ]
exporters: [ otlphttp/loki ]
telemetry:
logs:
level: "info" # Collector log level


@@ -1,8 +0,0 @@
#!/bin/sh
# Run as root to fix directory permissions
chown -R 10001:10001 /var/tempo
# Use su-exec (a lightweight sudo/gosu alternative, commonly used in Alpine images)
# Switch to user 10001 and execute the original command (CMD) passed to the script
# "$@" represents all parameters passed to this script, i.e. command in docker-compose
exec su-exec 10001:10001 /tempo "$@"

.gitignore (vendored): 4 lines changed

@@ -20,4 +20,6 @@ profile.json
.docker/openobserve-otel/data
*.zst
.secrets
*.go
*.pb
*.svg

Cargo.lock (generated): 1,409 lines changed; diff suppressed because it is too large.


@@ -16,7 +16,6 @@
members = [
"rustfs", # Core file system implementation
"crates/appauth", # Application authentication and authorization
"crates/audit-logger", # Audit logging system for file operations
"crates/common", # Shared utilities and data structures
"crates/config", # Configuration management
"crates/crypto", # Cryptography and security features
@@ -64,7 +63,6 @@ all = "warn"
rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-audit-logger = { path = "crates/audit-logger", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
@@ -98,42 +96,43 @@ async-trait = "0.1.89"
async-compression = { version = "0.4.19" }
atomic_enum = "0.3.0"
aws-config = { version = "1.8.6" }
aws-sdk-s3 = "1.101.0"
aws-sdk-s3 = "1.106.0"
axum = "0.8.4"
axum-extra = "0.10.1"
axum-server = "0.7.2"
base64-simd = "0.8.0"
base64 = "0.22.1"
brotli = "8.0.2"
bytes = { version = "1.10.1", features = ["serde"] }
bytesize = "2.0.1"
bytesize = "2.1.0"
byteorder = "1.5.0"
cfg-if = "1.0.3"
crc-fast = "1.5.0"
crc-fast = "1.3.0"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.46", features = ["derive", "env"] }
const-str = { version = "0.6.4", features = ["std", "proc"] }
chrono = { version = "0.4.42", features = ["serde"] }
clap = { version = "4.5.47", features = ["derive", "env"] }
const-str = { version = "0.7.0", features = ["std", "proc"] }
crc32fast = "1.5.0"
criterion = { version = "0.7", features = ["html_reports"] }
crossbeam-queue = "0.3.12"
dashmap = "6.1.0"
datafusion = "46.0.1"
datafusion = "50.0.0"
derive_builder = "0.20.2"
enumset = "1.1.10"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks", "compress", "kv"] }
form_urlencoded = "1.2.2"
futures = "0.3.31"
futures-core = "0.3.31"
futures-util = "0.3.31"
glob = "0.3.3"
hex = "0.4.3"
hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hickory-proto = "0.25.2"
hickory-resolver = { version = "0.25.2", features = ["tls-ring"] }
hmac = "0.12.1"
hyper = "1.7.0"
hyper-util = { version = "0.1.16", features = [
hyper-util = { version = "0.1.17", features = [
"tokio",
"server-auto",
"server-graceful",
@@ -141,7 +140,7 @@ hyper-util = { version = "0.1.16", features = [
hyper-rustls = "0.27.7"
http = "1.3.1"
http-body = "1.0.1"
humantime = "2.2.0"
humantime = "2.3.0"
ipnetwork = { version = "0.21.1", features = ["serde"] }
jsonwebtoken = "9.3.1"
lazy_static = "1.5.0"
@@ -157,7 +156,7 @@ nix = { version = "0.30.1", features = ["fs"] }
nu-ansi-term = "0.50.1"
num_cpus = { version = "1.17.0" }
nvml-wrapper = "0.11.0"
object_store = "0.11.2"
object_store = "0.12.3"
once_cell = "1.21.3"
opentelemetry = { version = "0.30.0" }
opentelemetry-appender-tracing = { version = "0.30.1", features = [
@@ -196,11 +195,11 @@ reqwest = { version = "0.12.23", default-features = false, features = [
"json",
"blocking",
] }
rmcp = { version = "0.6.1" }
rmcp = { version = "0.6.4" }
rmp = "0.8.14"
rmp-serde = "1.3.0"
rsa = "0.9.8"
rumqttc = { version = "0.24" }
rumqttc = { version = "0.25.0" }
rust-embed = { version = "8.7.2" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.31" }
@@ -208,8 +207,8 @@ rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { version = "0.12.0-minio-preview.3" }
schemars = "1.0.4"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.143", features = ["raw_value"] }
serde = { version = "1.0.225", features = ["derive"] }
serde_json = { version = "1.0.145", features = ["raw_value"] }
serde_urlencoded = "0.7.1"
serial_test = "3.2.0"
sha1 = "0.10.6"
@@ -217,17 +216,18 @@ sha2 = "0.10.9"
shadow-rs = { version = "1.3.0", default-features = false }
siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
snafu = "0.8.8"
smartstring = "1.0.1"
snafu = "0.8.9"
snap = "1.1.1"
socket2 = "0.6.0"
strum = { version = "0.27.2", features = ["derive"] }
sysinfo = "0.37.0"
sysctl = "0.6.0"
tempfile = "3.21.0"
sysctl = "0.7.1"
tempfile = "3.22.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.16"
time = { version = "0.3.42", features = [
time = { version = "0.3.43", features = [
"std",
"parsing",
"formatting",
@@ -235,14 +235,14 @@ time = { version = "0.3.42", features = [
"serde",
] }
tokio = { version = "1.47.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-rustls = { version = "0.26.3", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
tokio-test = "0.4.4"
tokio-util = { version = "0.7.16", features = ["io", "compat"] }
tonic = { version = "0.14.1", features = ["gzip"] }
tonic-prost = { version = "0.14.1" }
tonic-prost-build = { version = "0.14.1" }
tonic = { version = "0.14.2", features = ["gzip"] }
tonic-prost = { version = "0.14.2" }
tonic-prost-build = { version = "0.14.2" }
tower = { version = "0.5.2", features = ["timeout"] }
tower-http = { version = "0.6.6", features = ["cors"] }
tracing = "0.1.41"
@@ -253,19 +253,19 @@ tracing-subscriber = { version = "0.3.20", features = ["env-filter", "time"] }
transform-stream = "0.3.1"
url = "2.5.7"
urlencoding = "2.1.3"
uuid = { version = "1.18.0", features = [
uuid = { version = "1.18.1", features = [
"v4",
"fast-rng",
"macro-diagnostics",
] }
wildmatch = { version = "2.4.0", features = ["serde"] }
wildmatch = { version = "2.5.0", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.4.2"
zip = "5.1.1"
zstd = "0.13.3"
[workspace.metadata.cargo-shear]
ignored = ["rustfs", "rust-i18n", "rustfs-mcp", "rustfs-audit-logger", "tokio-test"]
ignored = ["rustfs", "rust-i18n", "rustfs-mcp", "tokio-test"]
[profile.wasm-dev]
inherits = "dev"


@@ -69,15 +69,19 @@ RUN chmod +x /usr/bin/rustfs /entrypoint.sh && \
chmod 0750 /data /logs
ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ADDRESS=":9001" \
RUSTFS_ACCESS_KEY="rustfsadmin" \
RUSTFS_SECRET_KEY="rustfsadmin" \
RUSTFS_CONSOLE_ENABLE="true" \
RUSTFS_EXTERNAL_ADDRESS="" \
RUSTFS_CORS_ALLOWED_ORIGINS="*" \
RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs" \
RUSTFS_SINKS_FILE_PATH="/logs"
EXPOSE 9000
EXPOSE 9000 9001
VOLUME ["/data", "/logs"]
ENTRYPOINT ["/entrypoint.sh"]


@@ -74,9 +74,9 @@ To get started with RustFS, follow these steps:
1. **One-click installation script (Option 1)**
```bash
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
```
2. **Docker Quick Start (Option 2)**
@@ -91,6 +91,14 @@ To get started with RustFS, follow these steps:
docker run -d -p 9000:9000 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0.alpha.45
```
For Docker installation, you can also run the container with Docker Compose. Using the `docker-compose.yml` file in the repository root, run:
```
docker compose --profile observability up -d
```
**NOTE**: You'd better take a look at the `docker-compose.yaml` file first, because it contains several services. The Grafana, Prometheus, and Jaeger containers launched by the compose file support RustFS observability. If you want to start the Redis and Nginx containers as well, specify the corresponding profiles.
3. **Build from Source (Option 3) - Advanced Users**
For developers who want to build RustFS Docker images from source with multi-architecture support:
@@ -128,6 +136,8 @@ To get started with RustFS, follow these steps:
5. **Create a Bucket**: Use the console to create a new bucket for your objects.
6. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
**NOTE**: If you want to access the RustFS instance over `https`, refer to the [TLS configuration docs](https://docs.rustfs.com/integration/tls-configured.html).
## Documentation
For detailed documentation, including configuration options, API references, and advanced usage, please visit our [Documentation](https://docs.rustfs.com).


@@ -74,10 +74,20 @@ RustFS is built with Rust, one of the most popular programming languages in the world
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs
```
For Docker installation, you can also use `docker compose` to start a rustfs instance. There is a `docker-compose.yml` file in the repository root; run the following command:
```
docker compose --profile observability up -d
```
**Note**: Before using `docker compose`, read `docker-compose.yaml` carefully, because the file contains multiple services. Besides rustfs there are grafana, prometheus, jaeger, and others that serve rustfs observability, plus redis and nginx. Specify the corresponding profiles with `--profile` for whichever containers you want to start.
3. **Access the console**: Open a web browser and navigate to `http://localhost:9000` to access the RustFS console. The default username and password are `rustfsadmin`.
4. **Create a bucket**: Use the console to create a new bucket for your objects.
5. **Upload objects**: You can upload files directly through the console, or use S3-compatible APIs to interact with your RustFS instance.
**Note**: If you want to access the RustFS instance over `https`, see the [TLS configuration docs](https://docs.rustfs.com/zh/integration/tls-configured.html).
## Documentation
For detailed documentation, including configuration options, API references, and advanced usage, please visit our [documentation](https://docs.rustfs.com).


@@ -17,7 +17,6 @@ rustfs-ecstore = { workspace = true }
rustfs-common = { workspace = true }
rustfs-filemeta = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-utils = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
@@ -29,10 +28,7 @@ uuid = { workspace = true, features = ["v4", "serde"] }
anyhow = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
url = { workspace = true }
rustfs-lock = { workspace = true }
s3s = { workspace = true }
lazy_static = { workspace = true }
chrono = { workspace = true }
rand = { workspace = true }
reqwest = { workspace = true }
@@ -44,5 +40,3 @@ serial_test = "3.2.0"
tracing-subscriber = { workspace = true }
walkdir = "2.5.0"
tempfile = { workspace = true }
criterion = { workspace = true, features = ["html_reports"] }
sysinfo = "0.30.8"


@@ -340,7 +340,7 @@ impl HealTask {
Ok((result, error)) => {
if let Some(e) = error {
// Check if this is a "File not found" error during delete operations
let error_msg = format!("{}", e);
let error_msg = format!("{e}");
if error_msg.contains("File not found") || error_msg.contains("not found") {
info!(
"Object {}/{} not found during heal - likely deleted intentionally, treating as successful",
@@ -395,7 +395,7 @@ impl HealTask {
}
Err(e) => {
// Check if this is a "File not found" error during delete operations
let error_msg = format!("{}", e);
let error_msg = format!("{e}");
if error_msg.contains("File not found") || error_msg.contains("not found") {
info!(
"Object {}/{} not found during heal - likely deleted intentionally, treating as successful",

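The `format!("{}", e)` to `format!("{e}")` rewrite above is the inline format-args idiom available since Rust 2021 (what clippy's `uninlined_format_args` lint suggests); the two forms produce identical output:

```rust
fn main() {
    let e = "File not found";
    let old_style = format!("{}", e); // positional argument
    let new_style = format!("{e}");   // variable captured by name in the format string
    assert_eq!(old_style, new_style);
}
```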

@@ -1,44 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[package]
name = "rustfs-audit-logger"
edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Audit logging system for RustFS, providing detailed logging of file operations and system events."
documentation = "https://docs.rs/audit-logger/latest/audit_logger/"
keywords = ["audit", "logging", "file-operations", "system-events", "RustFS"]
categories = ["web-programming", "development-tools::profiling", "asynchronous", "api-bindings", "development-tools::debugging"]
[dependencies]
rustfs-targets = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true }
reqwest = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true, features = ["std", "attributes"] }
tracing-core = { workspace = true }
tokio = { workspace = true, features = ["sync", "fs", "rt-multi-thread", "rt", "time", "macros"] }
url = { workspace = true }
uuid = { workspace = true }
thiserror = { workspace = true }
figment = { version = "0.10", features = ["json", "env"] }
[lints]
workspace = true


@@ -1,34 +0,0 @@
{
"console": {
"enabled": true
},
"logger_webhook": {
"default": {
"enabled": true,
"endpoint": "http://localhost:3000/logs",
"auth_token": "secret-token-for-logs",
"batch_size": 5,
"queue_size": 1000,
"max_retry": 3,
"retry_interval": "2s"
}
},
"audit_webhook": {
"splunk": {
"enabled": true,
"endpoint": "http://localhost:3000/audit",
"auth_token": "secret-token-for-audit",
"batch_size": 10
}
},
"audit_kafka": {
"default": {
"enabled": false,
"brokers": [
"kafka1:9092",
"kafka2:9092"
],
"topic": "minio-audit-events"
}
}
}


@@ -1,17 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
fn main() {
println!("Audit Logger Example");
}


@@ -1,90 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::entry::ObjectVersion;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Args - defines the arguments for API operations
/// Args is used to define the arguments for API operations.
///
/// # Example
/// ```
/// use rustfs_audit_logger::Args;
/// use std::collections::HashMap;
///
/// let args = Args::new()
/// .set_bucket(Some("my-bucket".to_string()))
/// .set_object(Some("my-object".to_string()))
/// .set_version_id(Some("123".to_string()))
/// .set_metadata(Some(HashMap::new()));
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, Default, Eq, PartialEq)]
pub struct Args {
#[serde(rename = "bucket", skip_serializing_if = "Option::is_none")]
pub bucket: Option<String>,
#[serde(rename = "object", skip_serializing_if = "Option::is_none")]
pub object: Option<String>,
#[serde(rename = "versionId", skip_serializing_if = "Option::is_none")]
pub version_id: Option<String>,
#[serde(rename = "objects", skip_serializing_if = "Option::is_none")]
pub objects: Option<Vec<ObjectVersion>>,
#[serde(rename = "metadata", skip_serializing_if = "Option::is_none")]
pub metadata: Option<HashMap<String, String>>,
}
impl Args {
/// Create a new Args object
pub fn new() -> Self {
Args {
bucket: None,
object: None,
version_id: None,
objects: None,
metadata: None,
}
}
/// Set the bucket
pub fn set_bucket(mut self, bucket: Option<String>) -> Self {
self.bucket = bucket;
self
}
/// Set the object
pub fn set_object(mut self, object: Option<String>) -> Self {
self.object = object;
self
}
/// Set the version ID
pub fn set_version_id(mut self, version_id: Option<String>) -> Self {
self.version_id = version_id;
self
}
/// Set the objects
pub fn set_objects(mut self, objects: Option<Vec<ObjectVersion>>) -> Self {
self.objects = objects;
self
}
/// Set the metadata
pub fn set_metadata(mut self, metadata: Option<HashMap<String, String>>) -> Self {
self.metadata = metadata;
self
}
}


@@ -1,469 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::{BaseLogEntry, LogRecord, ObjectVersion};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
/// API details structure
/// ApiDetails is used to define the details of an API operation
///
/// The `ApiDetails` structure contains the following fields:
/// - `name` - the name of the API operation
/// - `bucket` - the bucket name
/// - `object` - the object name
/// - `objects` - the list of objects
/// - `status` - the status of the API operation
/// - `status_code` - the status code of the API operation
/// - `input_bytes` - the input bytes
/// - `output_bytes` - the output bytes
/// - `header_bytes` - the header bytes
/// - `time_to_first_byte` - the time to first byte
/// - `time_to_first_byte_in_ns` - the time to first byte in nanoseconds
/// - `time_to_response` - the time to response
/// - `time_to_response_in_ns` - the time to response in nanoseconds
///
/// The `ApiDetails` structure contains the following methods:
/// - `new` - create a new `ApiDetails` with default values
/// - `set_name` - set the name
/// - `set_bucket` - set the bucket
/// - `set_object` - set the object
/// - `set_objects` - set the objects
/// - `set_status` - set the status
/// - `set_status_code` - set the status code
/// - `set_input_bytes` - set the input bytes
/// - `set_output_bytes` - set the output bytes
/// - `set_header_bytes` - set the header bytes
/// - `set_time_to_first_byte` - set the time to first byte
/// - `set_time_to_first_byte_in_ns` - set the time to first byte in nanoseconds
/// - `set_time_to_response` - set the time to response
/// - `set_time_to_response_in_ns` - set the time to response in nanoseconds
///
/// # Example
/// ```
/// use rustfs_audit_logger::ApiDetails;
/// use rustfs_audit_logger::ObjectVersion;
///
/// let api = ApiDetails::new()
/// .set_name(Some("GET".to_string()))
/// .set_bucket(Some("my-bucket".to_string()))
/// .set_object(Some("my-object".to_string()))
/// .set_objects(vec![ObjectVersion::new_with_object_name("my-object".to_string())])
/// .set_status(Some("OK".to_string()))
/// .set_status_code(Some(200))
/// .set_input_bytes(100)
/// .set_output_bytes(200)
/// .set_header_bytes(Some(50))
/// .set_time_to_first_byte(Some("100ms".to_string()))
/// .set_time_to_first_byte_in_ns(Some("100000000ns".to_string()))
/// .set_time_to_response(Some("200ms".to_string()))
/// .set_time_to_response_in_ns(Some("200000000ns".to_string()));
/// ```
#[derive(Debug, Serialize, Deserialize, Clone, Default, PartialEq, Eq)]
pub struct ApiDetails {
#[serde(rename = "name", skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
#[serde(rename = "bucket", skip_serializing_if = "Option::is_none")]
pub bucket: Option<String>,
#[serde(rename = "object", skip_serializing_if = "Option::is_none")]
pub object: Option<String>,
#[serde(rename = "objects", skip_serializing_if = "Vec::is_empty", default)]
pub objects: Vec<ObjectVersion>,
#[serde(rename = "status", skip_serializing_if = "Option::is_none")]
pub status: Option<String>,
#[serde(rename = "statusCode", skip_serializing_if = "Option::is_none")]
pub status_code: Option<i32>,
#[serde(rename = "rx")]
pub input_bytes: i64,
#[serde(rename = "tx")]
pub output_bytes: i64,
#[serde(rename = "txHeaders", skip_serializing_if = "Option::is_none")]
pub header_bytes: Option<i64>,
#[serde(rename = "timeToFirstByte", skip_serializing_if = "Option::is_none")]
pub time_to_first_byte: Option<String>,
#[serde(rename = "timeToFirstByteInNS", skip_serializing_if = "Option::is_none")]
pub time_to_first_byte_in_ns: Option<String>,
#[serde(rename = "timeToResponse", skip_serializing_if = "Option::is_none")]
pub time_to_response: Option<String>,
#[serde(rename = "timeToResponseInNS", skip_serializing_if = "Option::is_none")]
pub time_to_response_in_ns: Option<String>,
}
impl ApiDetails {
/// Create a new `ApiDetails` with default values
pub fn new() -> Self {
ApiDetails {
name: None,
bucket: None,
object: None,
objects: Vec::new(),
status: None,
status_code: None,
input_bytes: 0,
output_bytes: 0,
header_bytes: None,
time_to_first_byte: None,
time_to_first_byte_in_ns: None,
time_to_response: None,
time_to_response_in_ns: None,
}
}
/// Set the name
pub fn set_name(mut self, name: Option<String>) -> Self {
self.name = name;
self
}
/// Set the bucket
pub fn set_bucket(mut self, bucket: Option<String>) -> Self {
self.bucket = bucket;
self
}
/// Set the object
pub fn set_object(mut self, object: Option<String>) -> Self {
self.object = object;
self
}
/// Set the objects
pub fn set_objects(mut self, objects: Vec<ObjectVersion>) -> Self {
self.objects = objects;
self
}
/// Set the status
pub fn set_status(mut self, status: Option<String>) -> Self {
self.status = status;
self
}
/// Set the status code
pub fn set_status_code(mut self, status_code: Option<i32>) -> Self {
self.status_code = status_code;
self
}
/// Set the input bytes
pub fn set_input_bytes(mut self, input_bytes: i64) -> Self {
self.input_bytes = input_bytes;
self
}
/// Set the output bytes
pub fn set_output_bytes(mut self, output_bytes: i64) -> Self {
self.output_bytes = output_bytes;
self
}
/// Set the header bytes
pub fn set_header_bytes(mut self, header_bytes: Option<i64>) -> Self {
self.header_bytes = header_bytes;
self
}
/// Set the time to first byte
pub fn set_time_to_first_byte(mut self, time_to_first_byte: Option<String>) -> Self {
self.time_to_first_byte = time_to_first_byte;
self
}
/// Set the time to first byte in nanoseconds
pub fn set_time_to_first_byte_in_ns(mut self, time_to_first_byte_in_ns: Option<String>) -> Self {
self.time_to_first_byte_in_ns = time_to_first_byte_in_ns;
self
}
/// Set the time to response
pub fn set_time_to_response(mut self, time_to_response: Option<String>) -> Self {
self.time_to_response = time_to_response;
self
}
/// Set the time to response in nanoseconds
pub fn set_time_to_response_in_ns(mut self, time_to_response_in_ns: Option<String>) -> Self {
self.time_to_response_in_ns = time_to_response_in_ns;
self
}
}
/// Entry - audit entry logs
/// AuditLogEntry is used to define the structure of an audit log entry
///
/// The `AuditLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `version` - the version of the audit log entry
/// - `deployment_id` - the deployment ID
/// - `event` - the event
/// - `entry_type` - the type of audit message
/// - `api` - the API details
/// - `remote_host` - the remote host
/// - `user_agent` - the user agent
/// - `req_path` - the request path
/// - `req_host` - the request host
/// - `req_claims` - the request claims
/// - `req_query` - the request query
/// - `req_header` - the request header
/// - `resp_header` - the response header
/// - `access_key` - the access key
/// - `parent_user` - the parent user
/// - `error` - the error
///
/// The `AuditLogEntry` structure contains the following methods:
/// - `new` - create a new `AuditEntry` with default values
/// - `new_with_values` - create a new `AuditEntry` with version, time, event and api details
/// - `with_base` - set the base log entry
/// - `set_version` - set the version
/// - `set_deployment_id` - set the deployment ID
/// - `set_event` - set the event
/// - `set_entry_type` - set the entry type
/// - `set_api` - set the API details
/// - `set_remote_host` - set the remote host
/// - `set_user_agent` - set the user agent
/// - `set_req_path` - set the request path
/// - `set_req_host` - set the request host
/// - `set_req_claims` - set the request claims
/// - `set_req_query` - set the request query
/// - `set_req_header` - set the request header
/// - `set_resp_header` - set the response header
/// - `set_access_key` - set the access key
/// - `set_parent_user` - set the parent user
/// - `set_error` - set the error
///
/// # Example
/// ```
/// use rustfs_audit_logger::AuditLogEntry;
/// use rustfs_audit_logger::ApiDetails;
/// use std::collections::HashMap;
///
/// let entry = AuditLogEntry::new()
/// .set_version("1.0".to_string())
/// .set_deployment_id(Some("123".to_string()))
/// .set_event("event".to_string())
/// .set_entry_type(Some("type".to_string()))
/// .set_api(ApiDetails::new())
/// .set_remote_host(Some("remote-host".to_string()))
/// .set_user_agent(Some("user-agent".to_string()))
/// .set_req_path(Some("req-path".to_string()))
/// .set_req_host(Some("req-host".to_string()))
/// .set_req_claims(Some(HashMap::new()))
/// .set_req_query(Some(HashMap::new()))
/// .set_req_header(Some(HashMap::new()))
/// .set_resp_header(Some(HashMap::new()))
/// .set_access_key(Some("access-key".to_string()))
/// .set_parent_user(Some("parent-user".to_string()))
/// .set_error(Some("error".to_string()));
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct AuditLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub version: String,
#[serde(rename = "deploymentid", skip_serializing_if = "Option::is_none")]
pub deployment_id: Option<String>,
pub event: String,
// Class of audit message - S3, admin ops, bucket management
#[serde(rename = "type", skip_serializing_if = "Option::is_none")]
pub entry_type: Option<String>,
pub api: ApiDetails,
#[serde(rename = "remotehost", skip_serializing_if = "Option::is_none")]
pub remote_host: Option<String>,
#[serde(rename = "userAgent", skip_serializing_if = "Option::is_none")]
pub user_agent: Option<String>,
#[serde(rename = "requestPath", skip_serializing_if = "Option::is_none")]
pub req_path: Option<String>,
#[serde(rename = "requestHost", skip_serializing_if = "Option::is_none")]
pub req_host: Option<String>,
#[serde(rename = "requestClaims", skip_serializing_if = "Option::is_none")]
pub req_claims: Option<HashMap<String, Value>>,
#[serde(rename = "requestQuery", skip_serializing_if = "Option::is_none")]
pub req_query: Option<HashMap<String, String>>,
#[serde(rename = "requestHeader", skip_serializing_if = "Option::is_none")]
pub req_header: Option<HashMap<String, String>>,
#[serde(rename = "responseHeader", skip_serializing_if = "Option::is_none")]
pub resp_header: Option<HashMap<String, String>>,
#[serde(rename = "accessKey", skip_serializing_if = "Option::is_none")]
pub access_key: Option<String>,
#[serde(rename = "parentUser", skip_serializing_if = "Option::is_none")]
pub parent_user: Option<String>,
#[serde(rename = "error", skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
impl AuditLogEntry {
/// Create a new `AuditEntry` with default values
pub fn new() -> Self {
AuditLogEntry {
base: BaseLogEntry::new(),
version: String::new(),
deployment_id: None,
event: String::new(),
entry_type: None,
api: ApiDetails::new(),
remote_host: None,
user_agent: None,
req_path: None,
req_host: None,
req_claims: None,
req_query: None,
req_header: None,
resp_header: None,
access_key: None,
parent_user: None,
error: None,
}
}
/// Create a new `AuditEntry` with version, time, event and api details
pub fn new_with_values(version: String, time: DateTime<Utc>, event: String, api: ApiDetails) -> Self {
let mut base = BaseLogEntry::new();
base.timestamp = time;
AuditLogEntry {
base,
version,
deployment_id: None,
event,
entry_type: None,
api,
remote_host: None,
user_agent: None,
req_path: None,
req_host: None,
req_claims: None,
req_query: None,
req_header: None,
resp_header: None,
access_key: None,
parent_user: None,
error: None,
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the version
pub fn set_version(mut self, version: String) -> Self {
self.version = version;
self
}
/// Set the deployment ID
pub fn set_deployment_id(mut self, deployment_id: Option<String>) -> Self {
self.deployment_id = deployment_id;
self
}
/// Set the event
pub fn set_event(mut self, event: String) -> Self {
self.event = event;
self
}
/// Set the entry type
pub fn set_entry_type(mut self, entry_type: Option<String>) -> Self {
self.entry_type = entry_type;
self
}
/// Set the API details
pub fn set_api(mut self, api: ApiDetails) -> Self {
self.api = api;
self
}
/// Set the remote host
pub fn set_remote_host(mut self, remote_host: Option<String>) -> Self {
self.remote_host = remote_host;
self
}
/// Set the user agent
pub fn set_user_agent(mut self, user_agent: Option<String>) -> Self {
self.user_agent = user_agent;
self
}
/// Set the request path
pub fn set_req_path(mut self, req_path: Option<String>) -> Self {
self.req_path = req_path;
self
}
/// Set the request host
pub fn set_req_host(mut self, req_host: Option<String>) -> Self {
self.req_host = req_host;
self
}
/// Set the request claims
pub fn set_req_claims(mut self, req_claims: Option<HashMap<String, Value>>) -> Self {
self.req_claims = req_claims;
self
}
/// Set the request query
pub fn set_req_query(mut self, req_query: Option<HashMap<String, String>>) -> Self {
self.req_query = req_query;
self
}
/// Set the request header
pub fn set_req_header(mut self, req_header: Option<HashMap<String, String>>) -> Self {
self.req_header = req_header;
self
}
/// Set the response header
pub fn set_resp_header(mut self, resp_header: Option<HashMap<String, String>>) -> Self {
self.resp_header = resp_header;
self
}
/// Set the access key
pub fn set_access_key(mut self, access_key: Option<String>) -> Self {
self.access_key = access_key;
self
}
/// Set the parent user
pub fn set_parent_user(mut self, parent_user: Option<String>) -> Self {
self.parent_user = parent_user;
self
}
/// Set the error
pub fn set_error(mut self, error: Option<String>) -> Self {
self.error = error;
self
}
}
impl LogRecord for AuditLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}


@@ -1,108 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
/// Base log entry structure shared by all log types
/// This structure is used to serialize log entries to JSON
/// and send them to the log sinks
/// This structure is also used to deserialize log entries from JSON
/// This structure is also used to store log entries in the database
/// This structure is also used to query log entries from the database
///
/// The `BaseLogEntry` structure contains the following fields:
/// - `timestamp` - the timestamp of the log entry
/// - `request_id` - the request ID of the log entry
/// - `message` - the message of the log entry
/// - `tags` - the tags of the log entry
///
/// The `BaseLogEntry` structure contains the following methods:
/// - `new` - create a new `BaseLogEntry` with default values
/// - `message` - set the message
/// - `request_id` - set the request ID
/// - `tags` - set the tags
/// - `timestamp` - set the timestamp
///
/// # Example
/// ```
/// use rustfs_audit_logger::BaseLogEntry;
/// use chrono::{DateTime, Utc};
/// use std::collections::HashMap;
///
/// let timestamp = Utc::now();
/// let request = Some("req-123".to_string());
/// let message = Some("This is a log message".to_string());
/// let tags = Some(HashMap::new());
///
/// let entry = BaseLogEntry::new()
/// .timestamp(timestamp)
/// .request_id(request)
/// .message(message)
/// .tags(tags);
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq, Default)]
pub struct BaseLogEntry {
#[serde(rename = "time")]
pub timestamp: DateTime<Utc>,
#[serde(rename = "requestID", skip_serializing_if = "Option::is_none")]
pub request_id: Option<String>,
#[serde(rename = "message", skip_serializing_if = "Option::is_none")]
pub message: Option<String>,
#[serde(rename = "tags", skip_serializing_if = "Option::is_none")]
pub tags: Option<HashMap<String, Value>>,
}
impl BaseLogEntry {
/// Create a new BaseLogEntry with default values
pub fn new() -> Self {
BaseLogEntry {
timestamp: Utc::now(),
request_id: None,
message: None,
tags: None,
}
}
/// Set the message
pub fn message(mut self, message: Option<String>) -> Self {
self.message = message;
self
}
/// Set the request ID
pub fn request_id(mut self, request_id: Option<String>) -> Self {
self.request_id = request_id;
self
}
/// Set the tags
pub fn tags(mut self, tags: Option<HashMap<String, Value>>) -> Self {
self.tags = tags;
self
}
/// Set the timestamp
pub fn timestamp(mut self, timestamp: DateTime<Utc>) -> Self {
self.timestamp = timestamp;
self
}
}


@@ -1,159 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
pub(crate) mod args;
pub(crate) mod audit;
pub(crate) mod base;
pub(crate) mod unified;
use serde::de::Error;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use tracing_core::Level;
/// ObjectVersion is used across multiple modules
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq)]
pub struct ObjectVersion {
#[serde(rename = "name")]
pub object_name: String,
#[serde(rename = "versionId", skip_serializing_if = "Option::is_none")]
pub version_id: Option<String>,
}
impl ObjectVersion {
/// Create a new ObjectVersion object
pub fn new() -> Self {
ObjectVersion {
object_name: String::new(),
version_id: None,
}
}
/// Create a new ObjectVersion with object name
pub fn new_with_object_name(object_name: String) -> Self {
ObjectVersion {
object_name,
version_id: None,
}
}
/// Set the object name
pub fn set_object_name(mut self, object_name: String) -> Self {
self.object_name = object_name;
self
}
/// Set the version ID
pub fn set_version_id(mut self, version_id: Option<String>) -> Self {
self.version_id = version_id;
self
}
}
impl Default for ObjectVersion {
fn default() -> Self {
Self::new()
}
}
/// Log kind/level enum
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
pub enum LogKind {
#[serde(rename = "INFO")]
#[default]
Info,
#[serde(rename = "WARNING")]
Warning,
#[serde(rename = "ERROR")]
Error,
#[serde(rename = "FATAL")]
Fatal,
}
/// Trait for types that can be serialized to JSON and have a timestamp
/// This trait is used by `ServerLogEntry` to convert the log entry to JSON
/// and get the timestamp of the log entry
/// This trait is implemented by `ServerLogEntry`
///
/// # Example
/// ```
/// use rustfs_audit_logger::LogRecord;
/// use chrono::{DateTime, Utc};
/// use rustfs_audit_logger::ServerLogEntry;
/// use tracing_core::Level;
///
/// let log_entry = ServerLogEntry::new(Level::INFO, "api_handler".to_string());
/// let json = log_entry.to_json();
/// let timestamp = log_entry.get_timestamp();
/// ```
pub trait LogRecord {
fn to_json(&self) -> String;
fn get_timestamp(&self) -> chrono::DateTime<chrono::Utc>;
}
/// Wrapper for `tracing_core::Level` to implement `Serialize` and `Deserialize`
/// for `ServerLogEntry`
/// This is necessary because `tracing_core::Level` does not implement `Serialize`
/// and `Deserialize`
/// This is a workaround to allow `ServerLogEntry` to be serialized and deserialized
/// using `serde`
///
/// # Example
/// ```
/// use rustfs_audit_logger::SerializableLevel;
/// use tracing_core::Level;
///
/// let level = Level::INFO;
/// let serializable_level = SerializableLevel::from(level);
/// ```
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SerializableLevel(pub Level);
impl From<Level> for SerializableLevel {
fn from(level: Level) -> Self {
SerializableLevel(level)
}
}
impl From<SerializableLevel> for Level {
fn from(serializable_level: SerializableLevel) -> Self {
serializable_level.0
}
}
impl Serialize for SerializableLevel {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(self.0.as_str())
}
}
impl<'de> Deserialize<'de> for SerializableLevel {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
match s.as_str() {
"TRACE" => Ok(SerializableLevel(Level::TRACE)),
"DEBUG" => Ok(SerializableLevel(Level::DEBUG)),
"INFO" => Ok(SerializableLevel(Level::INFO)),
"WARN" => Ok(SerializableLevel(Level::WARN)),
"ERROR" => Ok(SerializableLevel(Level::ERROR)),
_ => Err(D::Error::custom("unknown log level")),
}
}
}


@@ -1,266 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use crate::{AuditLogEntry, BaseLogEntry, LogKind, LogRecord, SerializableLevel};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use tracing_core::Level;
/// Server log entry with structured fields
/// ServerLogEntry is used to log structured log entries from the server
///
/// The `ServerLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `level` - the log level
/// - `source` - the source of the log entry
/// - `user_id` - the user ID
/// - `fields` - the structured fields of the log entry
///
/// The `ServerLogEntry` structure contains the following methods:
/// - `new` - create a new `ServerLogEntry` with specified level and source
/// - `with_base` - set the base log entry
/// - `user_id` - set the user ID
/// - `fields` - set the fields
/// - `add_field` - add a field
///
/// # Example
/// ```
/// use rustfs_audit_logger::ServerLogEntry;
/// use tracing_core::Level;
///
/// let entry = ServerLogEntry::new(Level::INFO, "test_module".to_string())
/// .user_id(Some("user-456".to_string()))
/// .add_field("operation".to_string(), "login".to_string());
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct ServerLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub level: SerializableLevel,
pub source: String,
#[serde(rename = "userId", skip_serializing_if = "Option::is_none")]
pub user_id: Option<String>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub fields: Vec<(String, String)>,
}
impl ServerLogEntry {
/// Create a new ServerLogEntry with specified level and source
pub fn new(level: Level, source: String) -> Self {
ServerLogEntry {
base: BaseLogEntry::new(),
level: SerializableLevel(level),
source,
user_id: None,
fields: Vec::new(),
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the user ID
pub fn user_id(mut self, user_id: Option<String>) -> Self {
self.user_id = user_id;
self
}
/// Set fields
pub fn fields(mut self, fields: Vec<(String, String)>) -> Self {
self.fields = fields;
self
}
/// Add a field
pub fn add_field(mut self, key: String, value: String) -> Self {
self.fields.push((key, value));
self
}
}
impl LogRecord for ServerLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}
/// Console log entry structure
/// ConsoleLogEntry is used to log console log entries
/// The `ConsoleLogEntry` structure contains the following fields:
/// - `base` - the base log entry
/// - `level` - the log level
/// - `console_msg` - the console message
/// - `node_name` - the node name
/// - `err` - the error message
///
/// The `ConsoleLogEntry` structure contains the following methods:
/// - `new` - create a new `ConsoleLogEntry`
/// - `new_with_console_msg` - create a new `ConsoleLogEntry` with console message and node name
/// - `with_base` - set the base log entry
/// - `set_level` - set the log level
/// - `set_node_name` - set the node name
/// - `set_console_msg` - set the console message
/// - `set_err` - set the error message
///
/// # Example
/// ```
/// use rustfs_audit_logger::ConsoleLogEntry;
///
/// let entry = ConsoleLogEntry::new_with_console_msg("Test message".to_string(), "node-123".to_string());
/// ```
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConsoleLogEntry {
#[serde(flatten)]
pub base: BaseLogEntry,
pub level: LogKind,
pub console_msg: String,
pub node_name: String,
#[serde(skip)]
pub err: Option<String>,
}
impl ConsoleLogEntry {
/// Create a new ConsoleLogEntry
pub fn new() -> Self {
ConsoleLogEntry {
base: BaseLogEntry::new(),
level: LogKind::Info,
console_msg: String::new(),
node_name: String::new(),
err: None,
}
}
/// Create a new ConsoleLogEntry with console message and node name
pub fn new_with_console_msg(console_msg: String, node_name: String) -> Self {
ConsoleLogEntry {
base: BaseLogEntry::new(),
level: LogKind::Info,
console_msg,
node_name,
err: None,
}
}
/// Set the base log entry
pub fn with_base(mut self, base: BaseLogEntry) -> Self {
self.base = base;
self
}
/// Set the log level
pub fn set_level(mut self, level: LogKind) -> Self {
self.level = level;
self
}
/// Set the node name
pub fn set_node_name(mut self, node_name: String) -> Self {
self.node_name = node_name;
self
}
/// Set the console message
pub fn set_console_msg(mut self, console_msg: String) -> Self {
self.console_msg = console_msg;
self
}
/// Set the error message
pub fn set_err(mut self, err: Option<String>) -> Self {
self.err = err;
self
}
}
impl Default for ConsoleLogEntry {
fn default() -> Self {
Self::new()
}
}
impl LogRecord for ConsoleLogEntry {
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_else(|_| String::from("{}"))
}
fn get_timestamp(&self) -> DateTime<Utc> {
self.base.timestamp
}
}
/// Unified log entry type
/// UnifiedLogEntry is used to log different types of log entries
///
/// The `UnifiedLogEntry` enum contains the following variants:
/// - `Server` - a server log entry
/// - `Audit` - an audit log entry
/// - `Console` - a console log entry
///
/// The `UnifiedLogEntry` enum contains the following methods:
/// - `to_json` - convert the log entry to JSON
/// - `get_timestamp` - get the timestamp of the log entry
///
/// # Example
/// ```
/// use rustfs_audit_logger::{UnifiedLogEntry, ServerLogEntry};
/// use tracing_core::Level;
///
/// let server_entry = ServerLogEntry::new(Level::INFO, "test_module".to_string());
/// let unified = UnifiedLogEntry::Server(server_entry);
/// ```
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum UnifiedLogEntry {
#[serde(rename = "server")]
Server(ServerLogEntry),
#[serde(rename = "audit")]
Audit(Box<AuditLogEntry>),
#[serde(rename = "console")]
Console(ConsoleLogEntry),
}
impl LogRecord for UnifiedLogEntry {
fn to_json(&self) -> String {
match self {
UnifiedLogEntry::Server(entry) => entry.to_json(),
UnifiedLogEntry::Audit(entry) => entry.to_json(),
UnifiedLogEntry::Console(entry) => entry.to_json(),
}
}
fn get_timestamp(&self) -> DateTime<Utc> {
match self {
UnifiedLogEntry::Server(entry) => entry.get_timestamp(),
UnifiedLogEntry::Audit(entry) => entry.get_timestamp(),
UnifiedLogEntry::Console(entry) => entry.get_timestamp(),
}
}
}


@@ -1,8 +0,0 @@
mod entry;
mod logger;
pub use entry::args::Args;
pub use entry::audit::{ApiDetails, AuditLogEntry};
pub use entry::base::BaseLogEntry;
pub use entry::unified::{ConsoleLogEntry, ServerLogEntry, UnifiedLogEntry};
pub use entry::{LogKind, LogRecord, ObjectVersion, SerializableLevel};


@@ -1,29 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
// Default value function
fn default_batch_size() -> usize {
10
}
fn default_queue_size() -> usize {
10000
}
fn default_max_retry() -> u32 {
5
}
fn default_retry_interval() -> std::time::Duration {
std::time::Duration::from_secs(3)
}


@@ -1,13 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.


@@ -1,108 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
use chrono::{DateTime, Utc};
use serde::Serialize;
use std::collections::HashMap;
use uuid::Uuid;
///A Trait for a log entry that can be serialized and sent
pub trait Loggable: Serialize + Send + Sync + 'static {
fn to_json(&self) -> Result<String, serde_json::Error> {
serde_json::to_string(self)
}
}
/// Standard log entries
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct LogEntry {
pub deployment_id: String,
pub level: String,
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub trace: Option<Trace>,
pub time: DateTime<Utc>,
pub request_id: String,
}
impl Loggable for LogEntry {}
/// Audit log entry
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct AuditEntry {
pub version: String,
pub deployment_id: String,
pub time: DateTime<Utc>,
pub trigger: String,
pub api: ApiDetails,
pub remote_host: String,
pub request_id: String,
pub user_agent: String,
pub access_key: String,
#[serde(skip_serializing_if = "HashMap::is_empty")]
pub tags: HashMap<String, String>,
}
impl Loggable for AuditEntry {}
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Trace {
pub message: String,
pub source: Vec<String>,
#[serde(skip_serializing_if = "HashMap::is_empty")]
pub variables: HashMap<String, String>,
}
#[derive(Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ApiDetails {
pub name: String,
pub bucket: String,
pub object: String,
pub status: String,
pub status_code: u16,
pub time_to_first_byte: String,
pub time_to_response: String,
}
// Helper functions to create entries
impl AuditEntry {
pub fn new(api_name: &str, bucket: &str, object: &str) -> Self {
AuditEntry {
version: "1".to_string(),
deployment_id: "global-deployment-id".to_string(),
time: Utc::now(),
trigger: "incoming".to_string(),
api: ApiDetails {
name: api_name.to_string(),
bucket: bucket.to_string(),
object: object.to_string(),
status: "OK".to_string(),
status_code: 200,
time_to_first_byte: "10ms".to_string(),
time_to_response: "50ms".to_string(),
},
remote_host: "127.0.0.1".to_string(),
request_id: Uuid::new_v4().to_string(),
user_agent: "Rust-Client/1.0".to_string(),
access_key: "minioadmin".to_string(),
tags: HashMap::new(),
}
}
}
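
Before its removal in this compare, the helper could be exercised like so (a sketch; the values are illustrative):

```rust
// Build an audit entry and serialize it via the blanket Loggable::to_json.
fn log_put_object() -> Result<(), serde_json::Error> {
    let entry = AuditEntry::new("PutObject", "my-bucket", "path/to/object.txt");
    let json = entry.to_json()?;
    println!("{json}");
    Ok(())
}
```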

View File

@@ -1,13 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

View File

@@ -1,36 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
pub mod config;
pub mod dispatch;
pub mod entry;
pub mod factory;
use async_trait::async_trait;
use std::error::Error;
/// General Log Target Trait
#[async_trait]
pub trait Target: Send + Sync {
/// Send a single loggable entry
async fn send(&self, entry: Box<Self>) -> Result<(), Box<dyn Error + Send>>;
/// Returns the unique name of the target
fn name(&self) -> &str;
/// Close target gracefully, ensuring all buffered logs are processed
async fn shutdown(&self);
}
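
For reference, a minimal implementation of the deleted trait might have looked like this (note the unusual `Box<Self>` parameter, which makes the trait non-object-safe; `StdoutTarget` is hypothetical):

```rust
use async_trait::async_trait;
use std::error::Error;

// Hypothetical no-op target illustrating the removed Target trait.
struct StdoutTarget {
    name: String,
}

#[async_trait]
impl Target for StdoutTarget {
    async fn send(&self, entry: Box<Self>) -> Result<(), Box<dyn Error + Send>> {
        println!("[{}] delivering entry from '{}'", self.name, entry.name);
        Ok(())
    }

    fn name(&self) -> &str {
        &self.name
    }

    async fn shutdown(&self) {
        // Nothing is buffered in this sketch, so shutdown is a no-op.
    }
}
```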

View File

@@ -124,7 +124,7 @@ pub const DEFAULT_LOG_FILENAME: &str = "rustfs";
/// This is the default log filename for OBS.
/// It is used to store the logs of the application.
/// Default value: rustfs.log
pub const DEFAULT_OBS_LOG_FILENAME: &str = concat!(DEFAULT_LOG_FILENAME, ".");
pub const DEFAULT_OBS_LOG_FILENAME: &str = concat!(DEFAULT_LOG_FILENAME, "");
/// Default sink file log file for rustfs
/// This is the default sink file log file for rustfs.
@@ -160,6 +160,16 @@ pub const DEFAULT_LOG_ROTATION_TIME: &str = "day";
/// Environment variable: RUSTFS_OBS_LOG_KEEP_FILES
pub const DEFAULT_LOG_KEEP_FILES: u16 = 30;
/// This is the external address through which clients reach the rustfs endpoint (used in Docker deployments).
/// This should match the mapped host port when using Docker port mapping.
/// Example: ":9020" when mapping host port 9020 to container port 9000.
/// Default value: DEFAULT_ADDRESS
/// Environment variable: RUSTFS_EXTERNAL_ADDRESS
/// Command line argument: --external-address
/// Example: RUSTFS_EXTERNAL_ADDRESS=":9020"
/// Example: --external-address ":9020"
pub const ENV_EXTERNAL_ADDRESS: &str = "RUSTFS_EXTERNAL_ADDRESS";
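
A minimal sketch of consuming this constant, assuming a plain `std::env` lookup with fallback (the actual wiring is not part of this hunk):

```rust
// Hypothetical helper: prefer RUSTFS_EXTERNAL_ADDRESS, fall back to the
// regular bind address when the variable is unset.
fn external_address(default_address: &str) -> String {
    std::env::var(ENV_EXTERNAL_ADDRESS).unwrap_or_else(|_| default_address.to_string())
}
```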
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -0,0 +1,91 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/// CORS allowed origins for the endpoint service
/// Comma-separated list of origins or "*" for all origins
pub const ENV_CORS_ALLOWED_ORIGINS: &str = "RUSTFS_CORS_ALLOWED_ORIGINS";
/// Default CORS allowed origins for the endpoint service
/// Comes from the console service default
/// See DEFAULT_CONSOLE_CORS_ALLOWED_ORIGINS
pub const DEFAULT_CORS_ALLOWED_ORIGINS: &str = DEFAULT_CONSOLE_CORS_ALLOWED_ORIGINS;
/// CORS allowed origins for the console service
/// Comma-separated list of origins or "*" for all origins
pub const ENV_CONSOLE_CORS_ALLOWED_ORIGINS: &str = "RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS";
/// Default CORS allowed origins for the console service
pub const DEFAULT_CONSOLE_CORS_ALLOWED_ORIGINS: &str = "*";
/// Enable or disable the console service
pub const ENV_CONSOLE_ENABLE: &str = "RUSTFS_CONSOLE_ENABLE";
/// Address for the console service to bind to
pub const ENV_CONSOLE_ADDRESS: &str = "RUSTFS_CONSOLE_ADDRESS";
/// RUSTFS_CONSOLE_RATE_LIMIT_ENABLE
/// Enable or disable rate limiting for the console service
pub const ENV_CONSOLE_RATE_LIMIT_ENABLE: &str = "RUSTFS_CONSOLE_RATE_LIMIT_ENABLE";
/// Default console rate limit enable
/// This is the default value for enabling rate limiting on the console server.
/// Rate limiting helps protect against abuse and DoS attacks on the management interface.
/// Default value: false
/// Environment variable: RUSTFS_CONSOLE_RATE_LIMIT_ENABLE
/// Command line argument: --console-rate-limit-enable
/// Example: RUSTFS_CONSOLE_RATE_LIMIT_ENABLE=true
/// Example: --console-rate-limit-enable true
pub const DEFAULT_CONSOLE_RATE_LIMIT_ENABLE: bool = false;
/// Set the rate limit requests per minute for the console service
/// Limits the number of requests per minute per client IP when rate limiting is enabled
/// Default: 100 requests per minute
pub const ENV_CONSOLE_RATE_LIMIT_RPM: &str = "RUSTFS_CONSOLE_RATE_LIMIT_RPM";
/// Default console rate limit requests per minute
/// This is the default rate limit for console requests when rate limiting is enabled.
/// Limits the number of requests per minute per client IP to prevent abuse.
/// Default value: 100 requests per minute
/// Environment variable: RUSTFS_CONSOLE_RATE_LIMIT_RPM
/// Command line argument: --console-rate-limit-rpm
/// Example: RUSTFS_CONSOLE_RATE_LIMIT_RPM=100
/// Example: --console-rate-limit-rpm 100
pub const DEFAULT_CONSOLE_RATE_LIMIT_RPM: u32 = 100;
/// Set the console authentication timeout in seconds
/// Specifies how long a console authentication session remains valid
/// Default: 3600 seconds (1 hour)
/// Minimum: 300 seconds (5 minutes)
/// Maximum: 86400 seconds (24 hours)
pub const ENV_CONSOLE_AUTH_TIMEOUT: &str = "RUSTFS_CONSOLE_AUTH_TIMEOUT";
/// Default console authentication timeout in seconds
/// This is the default timeout for console authentication sessions.
/// After this timeout, users need to re-authenticate to access the console.
/// Default value: 3600 seconds (1 hour)
/// Environment variable: RUSTFS_CONSOLE_AUTH_TIMEOUT
/// Command line argument: --console-auth-timeout
/// Example: RUSTFS_CONSOLE_AUTH_TIMEOUT=3600
/// Example: --console-auth-timeout 3600
pub const DEFAULT_CONSOLE_AUTH_TIMEOUT: u64 = 3600;
/// Toggle update check
/// It controls whether to check for newer versions of rustfs
/// Default value: true
/// Environment variable: RUSTFS_CHECK_UPDATE
/// Example: RUSTFS_CHECK_UPDATE=false
pub const ENV_UPDATE_CHECK: &str = "RUSTFS_CHECK_UPDATE";
/// Default value for update toggle
pub const DEFAULT_UPDATE_CHECK: bool = true;
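
A hedged sketch of resolving the console rate-limit settings from these variables, falling back to the declared defaults (the real parsing code is not part of this diff):

```rust
// Hypothetical helper: (enabled, requests-per-minute) for the console.
fn console_rate_limit() -> (bool, u32) {
    let enabled = std::env::var(ENV_CONSOLE_RATE_LIMIT_ENABLE)
        .ok()
        .and_then(|v| v.parse::<bool>().ok())
        .unwrap_or(DEFAULT_CONSOLE_RATE_LIMIT_ENABLE);
    let rpm = std::env::var(ENV_CONSOLE_RATE_LIMIT_RPM)
        .ok()
        .and_then(|v| v.parse::<u32>().ok())
        .unwrap_or(DEFAULT_CONSOLE_RATE_LIMIT_RPM);
    (enabled, rpm)
}
```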

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod app;
pub mod env;
pub mod tls;
pub(crate) mod app;
pub(crate) mod console;
pub(crate) mod env;
pub(crate) mod tls;

View File

@@ -17,6 +17,8 @@ pub mod constants;
#[cfg(feature = "constants")]
pub use constants::app::*;
#[cfg(feature = "constants")]
pub use constants::console::*;
#[cfg(feature = "constants")]
pub use constants::env::*;
#[cfg(feature = "constants")]
pub use constants::tls::*;

View File

@@ -29,7 +29,70 @@ pub const ENV_OBS_LOG_ROTATION_SIZE_MB: &str = "RUSTFS_OBS_LOG_ROTATION_SIZE_MB"
pub const ENV_OBS_LOG_ROTATION_TIME: &str = "RUSTFS_OBS_LOG_ROTATION_TIME";
pub const ENV_OBS_LOG_KEEP_FILES: &str = "RUSTFS_OBS_LOG_KEEP_FILES";
/// Log pool capacity for async logging
pub const ENV_OBS_LOG_POOL_CAPA: &str = "RUSTFS_OBS_LOG_POOL_CAPA";
/// Log message capacity for async logging
pub const ENV_OBS_LOG_MESSAGE_CAPA: &str = "RUSTFS_OBS_LOG_MESSAGE_CAPA";
/// Log flush interval in milliseconds for async logging
pub const ENV_OBS_LOG_FLUSH_MS: &str = "RUSTFS_OBS_LOG_FLUSH_MS";
/// Default values for log pool
pub const DEFAULT_OBS_LOG_POOL_CAPA: usize = 10240;
/// Default values for message capacity
pub const DEFAULT_OBS_LOG_MESSAGE_CAPA: usize = 32768;
/// Default values for flush interval in milliseconds
pub const DEFAULT_OBS_LOG_FLUSH_MS: u64 = 200;
/// Audit logger queue capacity environment variable key
pub const ENV_AUDIT_LOGGER_QUEUE_CAPACITY: &str = "RUSTFS_AUDIT_LOGGER_QUEUE_CAPACITY";
// Default values for observability configuration
/// Default audit logger queue capacity
pub const DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY: usize = 10000;
/// Supported deployment environment names for observability
// ### Supported Environment Values
// - `production` - Secure file-only logging
// - `development` - Full debugging with stdout
// - `test` - Test environment with stdout support
// - `staging` - Staging environment with stdout support
pub const DEFAULT_OBS_ENVIRONMENT_PRODUCTION: &str = "production";
pub const DEFAULT_OBS_ENVIRONMENT_DEVELOPMENT: &str = "development";
pub const DEFAULT_OBS_ENVIRONMENT_TEST: &str = "test";
pub const DEFAULT_OBS_ENVIRONMENT_STAGING: &str = "staging";
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_env_keys() {
assert_eq!(ENV_OBS_ENDPOINT, "RUSTFS_OBS_ENDPOINT");
assert_eq!(ENV_OBS_USE_STDOUT, "RUSTFS_OBS_USE_STDOUT");
assert_eq!(ENV_OBS_SAMPLE_RATIO, "RUSTFS_OBS_SAMPLE_RATIO");
assert_eq!(ENV_OBS_METER_INTERVAL, "RUSTFS_OBS_METER_INTERVAL");
assert_eq!(ENV_OBS_SERVICE_NAME, "RUSTFS_OBS_SERVICE_NAME");
assert_eq!(ENV_OBS_SERVICE_VERSION, "RUSTFS_OBS_SERVICE_VERSION");
assert_eq!(ENV_OBS_ENVIRONMENT, "RUSTFS_OBS_ENVIRONMENT");
assert_eq!(ENV_OBS_LOGGER_LEVEL, "RUSTFS_OBS_LOGGER_LEVEL");
assert_eq!(ENV_OBS_LOCAL_LOGGING_ENABLED, "RUSTFS_OBS_LOCAL_LOGGING_ENABLED");
assert_eq!(ENV_OBS_LOG_DIRECTORY, "RUSTFS_OBS_LOG_DIRECTORY");
assert_eq!(ENV_OBS_LOG_FILENAME, "RUSTFS_OBS_LOG_FILENAME");
assert_eq!(ENV_OBS_LOG_ROTATION_SIZE_MB, "RUSTFS_OBS_LOG_ROTATION_SIZE_MB");
assert_eq!(ENV_OBS_LOG_ROTATION_TIME, "RUSTFS_OBS_LOG_ROTATION_TIME");
assert_eq!(ENV_OBS_LOG_KEEP_FILES, "RUSTFS_OBS_LOG_KEEP_FILES");
assert_eq!(ENV_AUDIT_LOGGER_QUEUE_CAPACITY, "RUSTFS_AUDIT_LOGGER_QUEUE_CAPACITY");
}
#[test]
fn test_default_values() {
assert_eq!(DEFAULT_AUDIT_LOGGER_QUEUE_CAPACITY, 10000);
assert_eq!(DEFAULT_OBS_ENVIRONMENT_PRODUCTION, "production");
assert_eq!(DEFAULT_OBS_ENVIRONMENT_DEVELOPMENT, "development");
assert_eq!(DEFAULT_OBS_ENVIRONMENT_TEST, "test");
assert_eq!(DEFAULT_OBS_ENVIRONMENT_STAGING, "staging");
}
}
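
A condensed sketch of how telemetry.rs (later in this compare) classifies the deployment environment with these constants:

```rust
// Case-insensitive production check; mirrors the is_production logic
// used when choosing stdout behavior and the write mode.
fn is_production(environment: &str) -> bool {
    environment.eq_ignore_ascii_case(DEFAULT_OBS_ENVIRONMENT_PRODUCTION)
}

fn stdout_enabled_by_default(environment: &str) -> bool {
    // Production disables stdout; development/test/staging keep it on.
    !is_production(environment)
}
```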

View File

@@ -458,7 +458,7 @@ impl LocalDisk {
{
let cache = self.path_cache.read();
for (i, (bucket, key)) in requests.iter().enumerate() {
let cache_key = format!("{}/{}", bucket, key);
let cache_key = format!("{bucket}/{key}");
if let Some(cached_path) = cache.get(&cache_key) {
results.push((i, cached_path.clone()));
} else {

View File

@@ -13,13 +13,12 @@
// limitations under the License.
use rustfs_utils::{XHost, check_local_server_addr, get_host_ip, is_local_host};
use tracing::{error, instrument, warn};
use tracing::{error, info, instrument, warn};
use crate::{
disk::endpoint::{Endpoint, EndpointType},
disks_layout::DisksLayout,
global::global_rustfs_port,
// utils::net::{self, XHost},
};
use std::io::{Error, Result};
use std::{
@@ -169,7 +168,7 @@ impl AsMut<Vec<Endpoints>> for PoolEndpointList {
impl PoolEndpointList {
/// creates a list of endpoints per pool, resolves their relevant
/// hostnames and discovers those are local or remote.
fn create_pool_endpoints(server_addr: &str, disks_layout: &DisksLayout) -> Result<Self> {
async fn create_pool_endpoints(server_addr: &str, disks_layout: &DisksLayout) -> Result<Self> {
if disks_layout.is_empty_layout() {
return Err(Error::other("invalid number of endpoints"));
}
@@ -242,15 +241,32 @@ impl PoolEndpointList {
let host = ep.url.host().unwrap();
let host_ip_set = if let Some(set) = host_ip_cache.get(&host) {
info!(
target: "rustfs::ecstore::endpoints",
host = %host,
endpoint = %ep.to_string(),
from = "cache",
"Create pool endpoints host '{}' found in cache for endpoint '{}'", host, ep.to_string()
);
set
} else {
let ips = match get_host_ip(host.clone()) {
let ips = match get_host_ip(host.clone()).await {
Ok(ips) => ips,
Err(e) => {
error!("host {} not found, error:{}", host, e);
error!("Create pool endpoints host {} not found, error:{}", host, e);
return Err(Error::other(format!("host '{host}' cannot resolve: {e}")));
}
};
info!(
target: "rustfs::ecstore::endpoints",
host = %host,
endpoint = %ep.to_string(),
from = "get_host_ip",
"Create pool endpoints host '{}' resolved to ips {:?} for endpoint '{}'",
host,
ips,
ep.to_string()
);
host_ip_cache.insert(host.clone(), ips);
host_ip_cache.get(&host).unwrap()
};
@@ -466,19 +482,22 @@ impl EndpointServerPools {
}
None
}
pub fn from_volumes(server_addr: &str, endpoints: Vec<String>) -> Result<(EndpointServerPools, SetupType)> {
pub async fn from_volumes(server_addr: &str, endpoints: Vec<String>) -> Result<(EndpointServerPools, SetupType)> {
let layouts = DisksLayout::from_volumes(endpoints.as_slice())?;
Self::create_server_endpoints(server_addr, &layouts)
Self::create_server_endpoints(server_addr, &layouts).await
}
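
Because `get_host_ip` is now awaited during hostname resolution, `async` propagates through the whole endpoint-creation chain. A hedged sketch of the updated call shape from an async context (the volume syntax and address are illustrative):

```rust
// Hypothetical caller of the now-async API.
async fn bootstrap() -> std::io::Result<()> {
    let volumes = vec!["http://node{1...4}:9000/data".to_string()];
    let (_pools, _setup_type) = EndpointServerPools::from_volumes(":9000", volumes).await?;
    Ok(())
}
```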
/// validates and creates new endpoints from input args, supports
/// both ellipses and without ellipses transparently.
pub fn create_server_endpoints(server_addr: &str, disks_layout: &DisksLayout) -> Result<(EndpointServerPools, SetupType)> {
pub async fn create_server_endpoints(
server_addr: &str,
disks_layout: &DisksLayout,
) -> Result<(EndpointServerPools, SetupType)> {
if disks_layout.pools.is_empty() {
return Err(Error::other("Invalid arguments specified"));
}
let pool_eps = PoolEndpointList::create_pool_endpoints(server_addr, disks_layout)?;
let pool_eps = PoolEndpointList::create_pool_endpoints(server_addr, disks_layout).await?;
let mut ret: EndpointServerPools = Vec::with_capacity(pool_eps.as_ref().len()).into();
for (i, eps) in pool_eps.inner.into_iter().enumerate() {
@@ -753,8 +772,8 @@ mod test {
}
}
#[test]
fn test_create_pool_endpoints() {
#[tokio::test]
async fn test_create_pool_endpoints() {
#[derive(Default)]
struct TestCase<'a> {
num: usize,
@@ -1276,7 +1295,7 @@ mod test {
match (
test_case.expected_err,
PoolEndpointList::create_pool_endpoints(test_case.server_addr, &disks_layout),
PoolEndpointList::create_pool_endpoints(test_case.server_addr, &disks_layout).await,
) {
(None, Err(err)) => panic!("Test {}: error: expected = <nil>, got = {}", test_case.num, err),
(Some(err), Ok(_)) => panic!("Test {}: error: expected = {}, got = <nil>", test_case.num, err),
@@ -1343,8 +1362,8 @@ mod test {
(urls, local_flags)
}
#[test]
fn test_create_server_endpoints() {
#[tokio::test]
async fn test_create_server_endpoints() {
let test_cases = [
// Invalid input.
("", vec![], false),
@@ -1379,7 +1398,7 @@ mod test {
}
};
let ret = EndpointServerPools::create_server_endpoints(test_case.0, &disks_layout);
let ret = EndpointServerPools::create_server_endpoints(test_case.0, &disks_layout).await;
if let Err(err) = ret {
if test_case.2 {

View File

@@ -65,7 +65,7 @@ impl OptimizedFileCache {
// Cache miss, read file
let data = tokio::fs::read(&path)
.await
.map_err(|e| Error::other(format!("Read metadata failed: {}", e)))?;
.map_err(|e| Error::other(format!("Read metadata failed: {e}")))?;
let mut meta = FileMeta::default();
meta.unmarshal_msg(&data)?;
@@ -86,7 +86,7 @@ impl OptimizedFileCache {
let data = tokio::fs::read(&path)
.await
.map_err(|e| Error::other(format!("Read file failed: {}", e)))?;
.map_err(|e| Error::other(format!("Read file failed: {e}")))?;
let bytes = Bytes::from(data);
self.file_content_cache.insert(path, bytes.clone()).await;
@@ -295,9 +295,9 @@ mod tests {
let mut paths = Vec::new();
for i in 0..5 {
let file_path = dir.path().join(format!("test_{}.txt", i));
let file_path = dir.path().join(format!("test_{i}.txt"));
let mut file = std::fs::File::create(&file_path).unwrap();
writeln!(file, "content {}", i).unwrap();
writeln!(file, "content {i}").unwrap();
paths.push(file_path);
}
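
This hunk is one of many in the compare applying the same mechanical change: Rust 2021 inline format arguments, as suggested by clippy's `uninlined_format_args` lint. A minimal before/after illustration:

```rust
fn main() {
    let i = 3;

    // Before: positional arguments.
    let old = format!("test_{}.txt", i);

    // After: captured identifier, same output.
    let new = format!("test_{i}.txt");

    assert_eq!(old, new);
}
```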

View File

@@ -37,26 +37,27 @@ pub const DISK_FILL_FRACTION: f64 = 0.99;
pub const DISK_RESERVE_FRACTION: f64 = 0.15;
lazy_static! {
static ref GLOBAL_RUSTFS_PORT: OnceLock<u16> = OnceLock::new();
pub static ref GLOBAL_OBJECT_API: OnceLock<Arc<ECStore>> = OnceLock::new();
pub static ref GLOBAL_LOCAL_DISK: Arc<RwLock<Vec<Option<DiskStore>>>> = Arc::new(RwLock::new(Vec::new()));
pub static ref GLOBAL_IsErasure: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_IsDistErasure: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_IsErasureSD: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_LOCAL_DISK_MAP: Arc<RwLock<HashMap<String, Option<DiskStore>>>> = Arc::new(RwLock::new(HashMap::new()));
pub static ref GLOBAL_LOCAL_DISK_SET_DRIVES: Arc<RwLock<TypeLocalDiskSetDrives>> = Arc::new(RwLock::new(Vec::new()));
pub static ref GLOBAL_Endpoints: OnceLock<EndpointServerPools> = OnceLock::new();
pub static ref GLOBAL_RootDiskThreshold: RwLock<u64> = RwLock::new(0);
pub static ref GLOBAL_TierConfigMgr: Arc<RwLock<TierConfigMgr>> = TierConfigMgr::new();
pub static ref GLOBAL_LifecycleSys: Arc<LifecycleSys> = LifecycleSys::new();
pub static ref GLOBAL_EventNotifier: Arc<RwLock<EventNotifier>> = EventNotifier::new();
//pub static ref GLOBAL_RemoteTargetTransport
static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
static ref GLOBAL_RUSTFS_PORT: OnceLock<u16> = OnceLock::new();
static ref GLOBAL_RUSTFS_EXTERNAL_PORT: OnceLock<u16> = OnceLock::new();
pub static ref GLOBAL_OBJECT_API: OnceLock<Arc<ECStore>> = OnceLock::new();
pub static ref GLOBAL_LOCAL_DISK: Arc<RwLock<Vec<Option<DiskStore>>>> = Arc::new(RwLock::new(Vec::new()));
pub static ref GLOBAL_IsErasure: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_IsDistErasure: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_IsErasureSD: RwLock<bool> = RwLock::new(false);
pub static ref GLOBAL_LOCAL_DISK_MAP: Arc<RwLock<HashMap<String, Option<DiskStore>>>> = Arc::new(RwLock::new(HashMap::new()));
pub static ref GLOBAL_LOCAL_DISK_SET_DRIVES: Arc<RwLock<TypeLocalDiskSetDrives>> = Arc::new(RwLock::new(Vec::new()));
pub static ref GLOBAL_Endpoints: OnceLock<EndpointServerPools> = OnceLock::new();
pub static ref GLOBAL_RootDiskThreshold: RwLock<u64> = RwLock::new(0);
pub static ref GLOBAL_TierConfigMgr: Arc<RwLock<TierConfigMgr>> = TierConfigMgr::new();
pub static ref GLOBAL_LifecycleSys: Arc<LifecycleSys> = LifecycleSys::new();
pub static ref GLOBAL_EventNotifier: Arc<RwLock<EventNotifier>> = EventNotifier::new();
//pub static ref GLOBAL_RemoteTargetTransport
static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
}
// Global cancellation token for background services (data scanner and auto heal)
@@ -108,6 +109,22 @@ pub fn set_global_rustfs_port(value: u16) {
GLOBAL_RUSTFS_PORT.set(value).expect("set_global_rustfs_port fail");
}
/// Get the global rustfs external port
pub fn global_rustfs_external_port() -> u16 {
if let Some(p) = GLOBAL_RUSTFS_EXTERNAL_PORT.get() {
*p
} else {
rustfs_config::DEFAULT_PORT
}
}
/// Set the global rustfs external port
pub fn set_global_rustfs_external_port(value: u16) {
GLOBAL_RUSTFS_EXTERNAL_PORT
.set(value)
.expect("set_global_rustfs_external_port fail");
}
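
A hypothetical startup wiring for the Docker case described by RUSTFS_EXTERNAL_ADDRESS (host port 9020 mapped to container port 9000); each `OnceLock` setter may be called only once:

```rust
fn configure_ports() {
    set_global_rustfs_port(9000); // port bound inside the container
    set_global_rustfs_external_port(9020); // port clients use from the host
    assert_eq!(global_rustfs_external_port(), 9020);
}
```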
/// Set the global deployment id
pub fn set_global_deployment_id(id: Uuid) {
globalDeploymentIDPtr.set(id).unwrap();

View File

@@ -2430,7 +2430,7 @@ impl SetDisks {
.map_err(|e| {
let elapsed = start_time.elapsed();
error!("Failed to acquire write lock for heal operation after {:?}: {:?}", elapsed, e);
DiskError::other(format!("Failed to acquire write lock for heal operation: {:?}", e))
DiskError::other(format!("Failed to acquire write lock for heal operation: {e:?}"))
})?;
let elapsed = start_time.elapsed();
info!("Successfully acquired write lock for object: {} in {:?}", object, elapsed);
@@ -3045,7 +3045,7 @@ impl SetDisks {
.fast_lock_manager
.acquire_write_lock("", object, self.locker_owner.as_str())
.await
.map_err(|e| DiskError::other(format!("Failed to acquire write lock for heal directory operation: {:?}", e)))?;
.map_err(|e| DiskError::other(format!("Failed to acquire write lock for heal directory operation: {e:?}")))?;
let disks = {
let disks = self.disks.read().await;
@@ -5594,7 +5594,7 @@ impl StorageAPI for SetDisks {
self.fast_lock_manager
.acquire_write_lock("", object, self.locker_owner.as_str())
.await
.map_err(|e| Error::other(format!("Failed to acquire write lock for heal operation: {:?}", e)))?,
.map_err(|e| Error::other(format!("Failed to acquire write lock for heal operation: {e:?}")))?,
)
} else {
None

View File

@@ -42,8 +42,7 @@ url.workspace = true
uuid.workspace = true
thiserror.workspace = true
once_cell.workspace = true
parking_lot = "0.12"
smallvec = "1.11"
smartstring = "1.0"
crossbeam-queue = "0.3"
heapless = "0.8"
parking_lot.workspace = true
smallvec.workspace = true
smartstring.workspace = true
crossbeam-queue = { workspace = true }

View File

@@ -23,7 +23,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("Lock system status: {}", if manager.is_disabled() { "DISABLED" } else { "ENABLED" });
match std::env::var("RUSTFS_ENABLE_LOCKS") {
Ok(value) => println!("RUSTFS_ENABLE_LOCKS set to: {}", value),
Ok(value) => println!("RUSTFS_ENABLE_LOCKS set to: {value}"),
Err(_) => println!("RUSTFS_ENABLE_LOCKS not set (defaults to enabled)"),
}
@@ -34,7 +34,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("Lock acquired successfully! Disabled: {}", guard.is_disabled());
}
Err(e) => {
println!("Failed to acquire lock: {:?}", e);
println!("Failed to acquire lock: {e:?}");
}
}

View File

@@ -87,7 +87,7 @@ impl LockClient for LocalClient {
current_owner,
current_mode,
}) => Ok(LockResponse::failure(
format!("Lock conflict: resource held by {} in {:?} mode", current_owner, current_mode),
format!("Lock conflict: resource held by {current_owner} in {current_mode:?} mode"),
std::time::Duration::ZERO,
)),
Err(crate::fast_lock::LockResult::Acquired) => {
@@ -131,7 +131,7 @@ impl LockClient for LocalClient {
current_owner,
current_mode,
}) => Ok(LockResponse::failure(
format!("Lock conflict: resource held by {} in {:?} mode", current_owner, current_mode),
format!("Lock conflict: resource held by {current_owner} in {current_mode:?} mode"),
std::time::Duration::ZERO,
)),
Err(crate::fast_lock::LockResult::Acquired) => {

View File

@@ -409,14 +409,14 @@ mod tests {
// Acquire multiple guards and verify unique IDs
let mut guards = Vec::new();
for i in 0..100 {
let object_name = format!("object_{}", i);
let object_name = format!("object_{i}");
let guard = manager
.acquire_write_lock("bucket", object_name.as_str(), "owner")
.await
.expect("Failed to acquire lock");
let guard_id = guard.guard_id();
assert!(guard_ids.insert(guard_id), "Guard ID {} is not unique", guard_id);
assert!(guard_ids.insert(guard_id), "Guard ID {guard_id} is not unique");
guards.push(guard);
}
@@ -501,7 +501,7 @@ mod tests {
let handle = tokio::spawn(async move {
for i in 0..10 {
let object_name = format!("obj_{}_{}", task_id, i);
let object_name = format!("obj_{task_id}_{i}");
// Acquire lock
let mut guard = match manager.acquire_write_lock("bucket", object_name.as_str(), "owner").await {
@@ -535,7 +535,7 @@ mod tests {
let blocked = double_release_blocked.load(Ordering::SeqCst);
// Should have many successful releases and all double releases blocked
assert!(successes > 150, "Expected many successful releases, got {}", successes);
assert!(successes > 150, "Expected many successful releases, got {successes}");
assert_eq!(blocked, successes, "All double releases should be blocked");
// Verify no active guards remain
@@ -567,7 +567,7 @@ mod tests {
// Acquire multiple locks for the same object to ensure they're in the same shard
let mut guards = Vec::new();
for i in 0..10 {
let owner_name = format!("owner_{}", i);
let owner_name = format!("owner_{i}");
let guard = manager
.acquire_read_lock("bucket", "shared_object", owner_name.as_str())
.await
@@ -586,7 +586,7 @@ mod tests {
let cleaned = shard.adaptive_cleanup();
// Should clean very little due to active guards
assert!(cleaned <= 5, "Should be conservative with active guards, cleaned: {}", cleaned);
assert!(cleaned <= 5, "Should be conservative with active guards, cleaned: {cleaned}");
// Locks should be protected by active guards
let remaining_locks = shard.lock_count();
@@ -809,7 +809,7 @@ mod tests {
let manager_clone = manager.clone();
let handle = tokio::spawn(async move {
let _guard = manager_clone
.acquire_read_lock("bucket", format!("object-{}", i), "background")
.acquire_read_lock("bucket", format!("object-{i}"), "background")
.await;
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
});
@@ -842,7 +842,7 @@ mod tests {
// High priority should generally perform reasonably well
// This is more of a performance validation than a strict requirement
println!("High priority: {:?}, Normal: {:?}", high_priority_duration, normal_duration);
println!("High priority: {high_priority_duration:?}, Normal: {normal_duration:?}");
// Both operations should complete in reasonable time (less than 100ms in test environment)
// This validates that the priority system isn't causing severe degradation
@@ -858,7 +858,7 @@ mod tests {
let mut _guards = Vec::new();
for i in 0..100 {
if let Ok(guard) = manager
.acquire_write_lock("bucket", format!("load-object-{}", i), "loader")
.acquire_write_lock("bucket", format!("load-object-{i}"), "loader")
.await
{
_guards.push(guard);
@@ -909,8 +909,8 @@ mod tests {
// Try to acquire all locks for this "query"
for obj_id in 0..objects_per_query {
let bucket = "databend";
let object = format!("table_partition_{}_{}", query_id, obj_id);
let owner = format!("query_{}", query_id);
let object = format!("table_partition_{query_id}_{obj_id}");
let owner = format!("query_{query_id}");
match manager_clone.acquire_high_priority_read_lock(bucket, object, owner).await {
Ok(guard) => query_locks.push(guard),
@@ -935,8 +935,8 @@ mod tests {
let manager_clone = manager.clone();
let handle = tokio::spawn(async move {
let bucket = "databend";
let object = format!("write_target_{}", write_id);
let owner = format!("writer_{}", write_id);
let object = format!("write_target_{write_id}");
let owner = format!("writer_{write_id}");
match manager_clone.acquire_write_lock(bucket, object, owner).await {
Ok(_guard) => {
@@ -988,7 +988,7 @@ mod tests {
let handle = tokio::spawn(async move {
let bucket = "test";
let object = format!("extreme_load_object_{}", i % 20); // Force some contention
let owner = format!("user_{}", i);
let owner = format!("user_{i}");
// Mix of read and write locks to create realistic contention
let result = if i % 3 == 0 {
@@ -1066,8 +1066,8 @@ mod tests {
};
let bucket = "datacenter-shared";
let object = format!("table_{}_{}_partition_{}", table_id, client_id, table_idx);
let owner = format!("client_{}_query_{}", client_id, query_id);
let object = format!("table_{table_id}_{client_id}_partition_{table_idx}");
let owner = format!("client_{client_id}_query_{query_id}");
// Mix of operations - mostly reads with some writes
let lock_result = if table_idx == 0 && query_id % 7 == 0 {
@@ -1129,10 +1129,10 @@ mod tests {
}
println!("\n=== Multi-Client Datacenter Simulation Results ===");
println!("Total execution time: {:?}", total_time);
println!("Total clients: {}", num_clients);
println!("Queries per client: {}", queries_per_client);
println!("Total queries executed: {}", total_queries);
println!("Total execution time: {total_time:?}");
println!("Total clients: {num_clients}");
println!("Queries per client: {queries_per_client}");
println!("Total queries executed: {total_queries}");
println!(
"Successful queries: {} ({:.1}%)",
successful_queries,
@@ -1179,8 +1179,7 @@ mod tests {
// Performance assertion - should complete in reasonable time
assert!(
total_time < std::time::Duration::from_secs(120),
"Multi-client test took too long: {:?}",
total_time
"Multi-client test took too long: {total_time:?}"
);
}
@@ -1209,8 +1208,8 @@ mod tests {
client_attempts += 1;
let bucket = "hot-data";
let object = format!("popular_table_{}", obj_id);
let owner = format!("thundering_client_{}", client_id);
let object = format!("popular_table_{obj_id}");
let owner = format!("thundering_client_{client_id}");
// Simulate different access patterns
let result = match obj_id % 3 {
@@ -1265,12 +1264,12 @@ mod tests {
let success_rate = total_successes as f64 / total_attempts as f64;
println!("\n=== Thundering Herd Scenario Results ===");
println!("Concurrent clients: {}", num_concurrent_clients);
println!("Hot objects: {}", hot_objects);
println!("Total attempts: {}", total_attempts);
println!("Total successes: {}", total_successes);
println!("Concurrent clients: {num_concurrent_clients}");
println!("Hot objects: {hot_objects}");
println!("Total attempts: {total_attempts}");
println!("Total successes: {total_successes}");
println!("Success rate: {:.1}%", success_rate * 100.0);
println!("Total time: {:?}", total_time);
println!("Total time: {total_time:?}");
println!(
"Average time per operation: {:.1}ms",
total_time.as_millis() as f64 / total_attempts as f64
@@ -1286,8 +1285,7 @@ mod tests {
// Should handle this volume in reasonable time
assert!(
total_time < std::time::Duration::from_secs(180),
"Thundering herd test took too long: {:?}",
total_time
"Thundering herd test took too long: {total_time:?}"
);
}
@@ -1314,7 +1312,7 @@ mod tests {
for op_id in 0..operations_per_transaction {
let bucket = "oltp-data";
let object = format!("record_{}_{}", oltp_id * 10 + tx_id, op_id);
let owner = format!("oltp_{}_{}", oltp_id, tx_id);
let owner = format!("oltp_{oltp_id}_{tx_id}");
// OLTP is mostly writes
let result = manager_clone.acquire_write_lock(bucket, object, owner).await;
@@ -1358,7 +1356,7 @@ mod tests {
table_id % 20, // Some overlap between queries
query_id
);
let owner = format!("olap_{}_{}", olap_id, query_id);
let owner = format!("olap_{olap_id}_{query_id}");
// OLAP is mostly reads with high priority
let result = manager_clone.acquire_critical_read_lock(bucket, object, owner).await;
@@ -1408,7 +1406,7 @@ mod tests {
let olap_success_rate = olap_successes as f64 / olap_attempts as f64;
println!("\n=== Mixed Workload Stress Test Results ===");
println!("Total time: {:?}", total_time);
println!("Total time: {total_time:?}");
println!(
"OLTP: {}/{} transactions succeeded ({:.1}%)",
oltp_successes,

View File

@@ -152,14 +152,14 @@ pub mod performance_comparison {
for i in 0..1000 {
let guard = fast_manager
.acquire_write_lock("bucket", format!("object_{}", i), owner)
.acquire_write_lock("bucket", format!("object_{i}"), owner)
.await
.expect("Failed to acquire fast lock");
guards.push(guard);
}
let fast_duration = start.elapsed();
println!("Fast lock: 1000 acquisitions in {:?}", fast_duration);
println!("Fast lock: 1000 acquisitions in {fast_duration:?}");
// Release all
drop(guards);

View File

@@ -27,7 +27,7 @@ mod tests {
let mut guards = Vec::new();
for i in 0..100 {
let bucket = format!("test-bucket-{}", i % 10); // Reuse some bucket names
let object = format!("test-object-{}", i);
let object = format!("test-object-{i}");
let guard = manager
.acquire_write_lock(bucket.as_str(), object.as_str(), "test-owner")
@@ -53,10 +53,7 @@ mod tests {
0.0
};
println!(
"Pool stats - Hits: {}, Misses: {}, Releases: {}, Pool size: {}",
hits, misses, releases, pool_size
);
println!("Pool stats - Hits: {hits}, Misses: {misses}, Releases: {releases}, Pool size: {pool_size}");
println!("Hit rate: {:.2}%", hit_rate * 100.0);
// We should see some pool activity
@@ -82,7 +79,7 @@ mod tests {
.expect("Failed to acquire second read lock");
let duration = start.elapsed();
println!("Two read locks on different objects took: {:?}", duration);
println!("Two read locks on different objects took: {duration:?}");
// Should be very fast since no contention
assert!(duration < Duration::from_millis(10), "Read locks should be fast with no contention");
@@ -103,7 +100,7 @@ mod tests {
.expect("Failed to acquire second read lock on same object");
let duration = start.elapsed();
println!("Two read locks on same object took: {:?}", duration);
println!("Two read locks on same object took: {duration:?}");
// Should still be fast since read locks are compatible
assert!(duration < Duration::from_millis(10), "Compatible read locks should be fast");
@@ -132,7 +129,7 @@ mod tests {
.expect("Failed to acquire second read lock");
let second_duration = start.elapsed();
println!("First lock: {:?}, Second lock: {:?}", first_duration, second_duration);
println!("First lock: {first_duration:?}, Second lock: {second_duration:?}");
// Both should be very fast (sub-millisecond typically)
assert!(first_duration < Duration::from_millis(10));
@@ -157,7 +154,7 @@ mod tests {
let result = manager.acquire_locks_batch(batch).await;
let duration = start.elapsed();
println!("Batch operation took: {:?}", duration);
println!("Batch operation took: {duration:?}");
assert!(result.all_acquired, "All locks should be acquired");
assert_eq!(result.successful_locks.len(), 3);

View File

@@ -616,6 +616,7 @@ impl ServerHandler for RustfsMcpServer {
server_info: Implementation {
name: "rustfs-mcp-server".into(),
version: env!("CARGO_PKG_VERSION").into(),
..Default::default()
},
}
}

View File

@@ -40,7 +40,7 @@ rustfs-config = { workspace = true, features = ["constants", "observability"] }
rustfs-utils = { workspace = true, features = ["ip", "path"] }
async-trait = { workspace = true }
chrono = { workspace = true }
flexi_logger = { workspace = true, features = ["trc", "kv"] }
flexi_logger = { workspace = true }
nu-ansi-term = { workspace = true }
nvml-wrapper = { workspace = true, optional = true }
opentelemetry = { workspace = true }
@@ -62,6 +62,7 @@ serde_json = { workspace = true }
sysinfo = { workspace = true }
thiserror = { workspace = true }
# Only enable kafka features and related dependencies on Linux
[target.'cfg(target_os = "linux")'.dependencies]
rdkafka = { workspace = true, features = ["tokio"], optional = true }

View File

@@ -21,12 +21,57 @@
## ✨ Features
- **Environment-Aware Logging**: Automatically configures logging behavior based on deployment environment
- Production: File-only logging (stdout disabled by default for security and log aggregation)
- Development/Test: Full logging with stdout support for debugging
- OpenTelemetry integration for distributed tracing
- Prometheus metrics collection and exposition
- Structured logging with configurable levels
- Structured logging with configurable levels and rotation
- Performance profiling and analytics
- Real-time health checks and status monitoring
- Custom dashboards and alerting integration
- Enhanced error handling and resilience
## 🚀 Environment-Aware Logging
The obs module automatically adapts logging behavior based on your deployment environment:
### Production Environment
```bash
# Set production environment - disables stdout logging by default
export RUSTFS_OBS_ENVIRONMENT=production
# All logs go to files only (no stdout) for security and log aggregation
# Enhanced error handling with clear failure diagnostics
```
### Development/Test Environment
```bash
# Set development environment - enables stdout logging
export RUSTFS_OBS_ENVIRONMENT=development
# Logs appear both in files and stdout for easier debugging
# Full span tracking and verbose error messages
```
### Configuration Override
You can always override the environment defaults:
```rust
use rustfs_obs::OtelConfig;
let config = OtelConfig {
endpoint: "".to_string(),
use_stdout: Some(true), // Explicit override - forces stdout even in production
environment: Some("production".to_string()),
..Default::default()
};
```
### Supported Environment Values
- `production` - Secure file-only logging
- `development` - Full debugging with stdout
- `test` - Test environment with stdout support
- `staging` - Staging environment with stdout support
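
For instance, a staging deployment keeps stdout output simply by naming the environment (a sketch mirroring the override example above):

```rust
use rustfs_obs::OtelConfig;

// Staging behaves like development/test: stdout logging stays enabled.
let config = OtelConfig {
    endpoint: "".to_string(),
    environment: Some("staging".to_string()),
    ..Default::default()
};
```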
## 📚 Documentation

View File

@@ -13,15 +13,19 @@
// limitations under the License.
use crate::OtelConfig;
use flexi_logger::{Age, Cleanup, Criterion, DeferredNow, FileSpec, LogSpecification, Naming, Record, WriteMode, style};
use flexi_logger::{
Age, Cleanup, Criterion, DeferredNow, FileSpec, LogSpecification, Naming, Record, WriteMode,
WriteMode::{AsyncWith, BufferAndFlush},
style,
};
use nu_ansi_term::Color;
use opentelemetry::trace::TracerProvider;
use opentelemetry::{KeyValue, global};
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_otlp::WithExportConfig;
use opentelemetry_sdk::logs::SdkLoggerProvider;
use opentelemetry_sdk::{
Resource,
logs::SdkLoggerProvider,
metrics::{MeterProviderBuilder, PeriodicReader, SdkMeterProvider},
trace::{RandomIdGenerator, Sampler, SdkTracerProvider},
};
@@ -29,15 +33,19 @@ use opentelemetry_semantic_conventions::{
SCHEMA_URL,
attribute::{DEPLOYMENT_ENVIRONMENT_NAME, NETWORK_LOCAL_ADDRESS, SERVICE_VERSION as OTEL_SERVICE_VERSION},
};
use rustfs_config::observability::ENV_OBS_LOG_DIRECTORY;
use rustfs_config::{
APP_NAME, DEFAULT_LOG_KEEP_FILES, DEFAULT_LOG_LEVEL, ENVIRONMENT, METER_INTERVAL, SAMPLE_RATIO, SERVICE_VERSION, USE_STDOUT,
observability::{
DEFAULT_OBS_ENVIRONMENT_PRODUCTION, DEFAULT_OBS_LOG_FLUSH_MS, DEFAULT_OBS_LOG_MESSAGE_CAPA, DEFAULT_OBS_LOG_POOL_CAPA,
ENV_OBS_LOG_DIRECTORY,
},
};
use rustfs_utils::get_local_ip_with_default;
use smallvec::SmallVec;
use std::borrow::Cow;
use std::fs;
use std::io::IsTerminal;
use std::time::Duration;
use std::{env, fs};
use tracing::info;
use tracing_error::ErrorLayer;
use tracing_opentelemetry::{MetricsLayer, OpenTelemetryLayer};
@@ -121,7 +129,7 @@ fn resource(config: &OtelConfig) -> Resource {
/// Creates a periodic reader for stdout metrics
fn create_periodic_reader(interval: u64) -> PeriodicReader<opentelemetry_stdout::MetricExporter> {
PeriodicReader::builder(opentelemetry_stdout::MetricExporter::default())
.with_interval(std::time::Duration::from_secs(interval))
.with_interval(Duration::from_secs(interval))
.build()
}
@@ -129,11 +137,23 @@ fn create_periodic_reader(interval: u64) -> PeriodicReader<opentelemetry_stdout:
pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
// avoid repeated access to configuration fields
let endpoint = &config.endpoint;
let use_stdout = config.use_stdout.unwrap_or(USE_STDOUT);
let environment = config.environment.as_deref().unwrap_or(ENVIRONMENT);
// Environment-aware stdout configuration
// Check for explicit environment control via RUSTFS_OBS_ENVIRONMENT
let is_production = environment.to_lowercase() == DEFAULT_OBS_ENVIRONMENT_PRODUCTION;
// Default stdout behavior based on environment
let default_use_stdout = if is_production {
false // Disable stdout in production for security and log aggregation
} else {
USE_STDOUT // Use configured default for dev/test environments
};
let use_stdout = config.use_stdout.unwrap_or(default_use_stdout);
let meter_interval = config.meter_interval.unwrap_or(METER_INTERVAL);
let logger_level = config.logger_level.as_deref().unwrap_or(DEFAULT_LOG_LEVEL);
let service_name = config.service_name.as_deref().unwrap_or(APP_NAME);
let environment = config.environment.as_deref().unwrap_or(ENVIRONMENT);
// Configure flexi_logger to cut by time and size
let mut flexi_logger_handle = None;
@@ -144,7 +164,7 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
// initialize tracer provider
let tracer_provider = {
let sample_ratio = config.sample_ratio.unwrap_or(SAMPLE_RATIO);
let sampler = if sample_ratio > 0.0 && sample_ratio < 1.0 {
let sampler = if (0.0..1.0).contains(&sample_ratio) {
Sampler::TraceIdRatioBased(sample_ratio)
} else {
Sampler::AlwaysOn
@@ -197,7 +217,7 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
builder = builder.with_reader(
PeriodicReader::builder(exporter)
.with_interval(std::time::Duration::from_secs(meter_interval))
.with_interval(Duration::from_secs(meter_interval))
.build(),
);
@@ -249,7 +269,7 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
.with_line_number(true);
// Only add full span events tracking in the development environment
if environment != ENVIRONMENT {
if !is_production {
layer = layer.with_span_events(FmtSpan::FULL);
}
@@ -257,8 +277,7 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
};
let filter = build_env_filter(logger_level, None);
let otel_filter = build_env_filter(logger_level, None);
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider).with_filter(otel_filter);
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider).with_filter(build_env_filter(logger_level, None));
let tracer = tracer_provider.tracer(Cow::Borrowed(service_name).to_string());
// Configure registry to avoid repeated calls to filter methods
@@ -280,78 +299,96 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
"OpenTelemetry telemetry initialized with OTLP endpoint: {}, logger_level: {},RUST_LOG env: {}",
endpoint,
logger_level,
std::env::var("RUST_LOG").unwrap_or_else(|_| "Not set".to_string())
env::var("RUST_LOG").unwrap_or_else(|_| "Not set".to_string())
);
}
}
OtelGuard {
return OtelGuard {
tracer_provider: Some(tracer_provider),
meter_provider: Some(meter_provider),
logger_provider: Some(logger_provider),
_flexi_logger_handles: flexi_logger_handle,
}
} else {
// Obtain the log directory and file name configuration
let default_log_directory = rustfs_utils::dirs::get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY);
let log_directory = config.log_directory.as_deref().unwrap_or(default_log_directory.as_str());
let log_filename = config.log_filename.as_deref().unwrap_or(service_name);
if let Err(e) = fs::create_dir_all(log_directory) {
eprintln!("Failed to create log directory {log_directory}: {e}");
}
#[cfg(unix)]
{
// Linux/macOS Setting Permissions
// Set the log directory permissions to 755 (rwxr-xr-x)
use std::fs::Permissions;
use std::os::unix::fs::PermissionsExt;
match fs::set_permissions(log_directory, Permissions::from_mode(0o755)) {
Ok(_) => eprintln!("Log directory permissions set to 755: {log_directory}"),
Err(e) => eprintln!("Failed to set log directory permissions {log_directory}: {e}"),
}
}
// Build log cutting conditions
let rotation_criterion = match (config.log_rotation_time.as_deref(), config.log_rotation_size_mb) {
// Cut by time and size at the same time
(Some(time), Some(size)) => {
let age = match time.to_lowercase().as_str() {
"hour" => Age::Hour,
"day" => Age::Day,
"minute" => Age::Minute,
"second" => Age::Second,
_ => Age::Day, // The default is by day
};
Criterion::AgeOrSize(age, size * 1024 * 1024) // Convert to bytes
}
// Cut by time only
(Some(time), None) => {
let age = match time.to_lowercase().as_str() {
"hour" => Age::Hour,
"day" => Age::Day,
"minute" => Age::Minute,
"second" => Age::Second,
_ => Age::Day, // The default is by day
};
Criterion::Age(age)
}
// Cut by size only
(None, Some(size)) => {
Criterion::Size(size * 1024 * 1024) // Convert to bytes
}
// By default, it is cut by the day
_ => Criterion::Age(Age::Day),
};
}
// The number of log files retained
let keep_files = config.log_keep_files.unwrap_or(DEFAULT_LOG_KEEP_FILES);
// Obtain the log directory and file name configuration
let default_log_directory = rustfs_utils::dirs::get_log_directory_to_string(ENV_OBS_LOG_DIRECTORY);
let log_directory = config.log_directory.as_deref().unwrap_or(default_log_directory.as_str());
let log_filename = config.log_filename.as_deref().unwrap_or(service_name);
// Parsing the log level
let log_spec = LogSpecification::parse(logger_level).unwrap_or(LogSpecification::info());
// Enhanced error handling for directory creation
if let Err(e) = fs::create_dir_all(log_directory) {
eprintln!("ERROR: Failed to create log directory '{log_directory}': {e}");
eprintln!("Ensure the parent directory exists and you have write permissions.");
eprintln!("Attempting to continue with logging, but file logging may fail.");
} else {
eprintln!("Log directory ready: {log_directory}");
}
// Convert the logger_level string to the corresponding LevelFilter
let level_filter = match logger_level.to_lowercase().as_str() {
#[cfg(unix)]
{
// Linux/macOS Setting Permissions with better error handling
use std::fs::Permissions;
use std::os::unix::fs::PermissionsExt;
match fs::set_permissions(log_directory, Permissions::from_mode(0o755)) {
Ok(_) => eprintln!("Log directory permissions set to 755: {log_directory}"),
Err(e) => {
eprintln!("WARNING: Failed to set log directory permissions for '{log_directory}': {e}");
eprintln!("This may affect log file access. Consider checking directory ownership and permissions.");
}
}
}
// Build log cutting conditions
let rotation_criterion = match (config.log_rotation_time.as_deref(), config.log_rotation_size_mb) {
// Cut by time and size at the same time
(Some(time), Some(size)) => {
let age = match time.to_lowercase().as_str() {
"hour" => Age::Hour,
"day" => Age::Day,
"minute" => Age::Minute,
"second" => Age::Second,
_ => Age::Day, // The default is by day
};
Criterion::AgeOrSize(age, size * 1024 * 1024) // Convert to bytes
}
// Cut by time only
(Some(time), None) => {
let age = match time.to_lowercase().as_str() {
"hour" => Age::Hour,
"day" => Age::Day,
"minute" => Age::Minute,
"second" => Age::Second,
_ => Age::Day, // The default is by day
};
Criterion::Age(age)
}
// Cut by size only
(None, Some(size)) => {
Criterion::Size(size * 1024 * 1024) // Convert to bytes
}
// By default, it is cut by the day
_ => Criterion::Age(Age::Day),
};
// The number of log files retained
let keep_files = config.log_keep_files.unwrap_or(DEFAULT_LOG_KEEP_FILES);
// Parsing the log level
let log_spec = LogSpecification::parse(logger_level).unwrap_or_else(|e| {
eprintln!("WARNING: Invalid logger level '{logger_level}': {e}. Using default 'info' level.");
LogSpecification::info()
});
// Environment-aware stdout configuration
// In production: disable stdout completely (Duplicate::None)
// In development/test: use level-based filtering
let level_filter = if is_production {
flexi_logger::Duplicate::None // No stdout output in production
} else {
// Convert the logger_level string to the corresponding LevelFilter for dev/test
match logger_level.to_lowercase().as_str() {
"trace" => flexi_logger::Duplicate::Trace,
"debug" => flexi_logger::Duplicate::Debug,
"info" => flexi_logger::Duplicate::Info,
@@ -359,56 +396,114 @@ pub(crate) fn init_telemetry(config: &OtelConfig) -> OtelGuard {
"error" => flexi_logger::Duplicate::Error,
"off" => flexi_logger::Duplicate::None,
_ => flexi_logger::Duplicate::Info, // the default is info
};
}
};
// Configure the flexi_logger
let flexi_logger_result = flexi_logger::Logger::try_with_env_or_str(logger_level)
.unwrap_or_else(|e| {
eprintln!("Invalid logger level: {logger_level}, using default: {DEFAULT_LOG_LEVEL}, failed error: {e:?}");
flexi_logger::Logger::with(log_spec.clone())
})
.log_to_file(
FileSpec::default()
.directory(log_directory)
.basename(log_filename)
.suppress_timestamp(),
)
.rotate(rotation_criterion, Naming::TimestampsDirect, Cleanup::KeepLogFiles(keep_files.into()))
.format_for_files(format_for_file) // Add a custom formatting function for file output
.duplicate_to_stdout(level_filter) // Use dynamic levels
.format_for_stdout(format_with_color) // Add a custom formatting function for terminal output
.write_mode(WriteMode::BufferAndFlush)
.append() // Avoid clearing existing logs at startup
.print_message() // Startup information output to console
.start();
if let Ok(logger) = flexi_logger_result {
// Save the logger handle to keep the logging
flexi_logger_handle = Some(logger);
eprintln!("Flexi logger initialized with file logging to {log_directory}/{log_filename}.log");
// Log logging of log cutting conditions
match (config.log_rotation_time.as_deref(), config.log_rotation_size_mb) {
(Some(time), Some(size)) => eprintln!(
"Log rotation configured for: every {time} or when size exceeds {size}MB, keeping {keep_files} files"
),
(Some(time), None) => eprintln!("Log rotation configured for: every {time}, keeping {keep_files} files"),
(None, Some(size)) => {
eprintln!("Log rotation configured for: when size exceeds {size}MB, keeping {keep_files} files")
}
_ => eprintln!("Log rotation configured for: daily, keeping {keep_files} files"),
// Choose write mode based on environment
let write_mode = if is_production {
get_env_async_with().unwrap_or_else(|| {
eprintln!(
"Using default Async write mode in production. To customize, set RUSTFS_OBS_LOG_POOL_CAPA, RUSTFS_OBS_LOG_MESSAGE_CAPA, and RUSTFS_OBS_LOG_FLUSH_MS environment variables."
);
AsyncWith {
pool_capa: DEFAULT_OBS_LOG_POOL_CAPA,
message_capa: DEFAULT_OBS_LOG_MESSAGE_CAPA,
flush_interval: Duration::from_millis(DEFAULT_OBS_LOG_FLUSH_MS),
}
})
} else {
BufferAndFlush
};
// Configure the flexi_logger with enhanced error handling
let mut flexi_logger_builder = flexi_logger::Logger::try_with_env_or_str(logger_level)
.unwrap_or_else(|e| {
eprintln!("WARNING: Invalid logger configuration '{logger_level}': {e:?}");
eprintln!("Falling back to default configuration with level: {DEFAULT_LOG_LEVEL}");
flexi_logger::Logger::with(log_spec.clone())
})
.log_to_file(
FileSpec::default()
.directory(log_directory)
.basename(log_filename)
.suppress_timestamp(),
)
.rotate(rotation_criterion, Naming::TimestampsDirect, Cleanup::KeepLogFiles(keep_files.into()))
.format_for_files(format_for_file) // Add a custom formatting function for file output
.write_mode(write_mode)
.append(); // Avoid clearing existing logs at startup
// Environment-aware stdout configuration
flexi_logger_builder = flexi_logger_builder.duplicate_to_stdout(level_filter);
// Only add stdout formatting and startup messages in non-production environments
if !is_production {
flexi_logger_builder = flexi_logger_builder
.format_for_stdout(format_with_color) // Add a custom formatting function for terminal output
.print_message(); // Startup information output to console
}
let flexi_logger_result = flexi_logger_builder.start();
if let Ok(logger) = flexi_logger_result {
// Save the logger handle to keep the logging
flexi_logger_handle = Some(logger);
// Environment-aware success messages
if is_production {
eprintln!("Production logging initialized: file-only mode to {log_directory}/{log_filename}.log");
eprintln!("Stdout logging disabled in production environment for security and log aggregation.");
} else {
eprintln!("Failed to initialize flexi_logger: {:?}", flexi_logger_result.err());
eprintln!("Development/Test logging initialized with file logging to {log_directory}/{log_filename}.log");
eprintln!("Stdout logging enabled for debugging. Environment: {environment}");
}
OtelGuard {
tracer_provider: None,
meter_provider: None,
logger_provider: None,
_flexi_logger_handles: flexi_logger_handle,
// Log rotation configuration details
match (config.log_rotation_time.as_deref(), config.log_rotation_size_mb) {
(Some(time), Some(size)) => {
eprintln!("Log rotation configured for: every {time} or when size exceeds {size}MB, keeping {keep_files} files")
}
(Some(time), None) => eprintln!("Log rotation configured for: every {time}, keeping {keep_files} files"),
(None, Some(size)) => {
eprintln!("Log rotation configured for: when size exceeds {size}MB, keeping {keep_files} files")
}
_ => eprintln!("Log rotation configured for: daily, keeping {keep_files} files"),
}
} else {
eprintln!("CRITICAL: Failed to initialize flexi_logger: {:?}", flexi_logger_result.err());
eprintln!("Possible causes:");
eprintln!(" 1. Insufficient permissions to write to log directory: {log_directory}");
eprintln!(" 2. Log directory does not exist or is not accessible");
eprintln!(" 3. Invalid log configuration parameters");
eprintln!(" 4. Disk space issues");
eprintln!("Application will continue but logging to files will not work properly.");
}
OtelGuard {
tracer_provider: None,
meter_provider: None,
logger_provider: None,
_flexi_logger_handles: flexi_logger_handle,
}
}
// Read the AsyncWith parameter from the environment variable
fn get_env_async_with() -> Option<WriteMode> {
let pool_capa = env::var("RUSTFS_OBS_LOG_POOL_CAPA")
.ok()
.and_then(|v| v.parse::<usize>().ok());
let message_capa = env::var("RUSTFS_OBS_LOG_MESSAGE_CAPA")
.ok()
.and_then(|v| v.parse::<usize>().ok());
let flush_ms = env::var("RUSTFS_OBS_LOG_FLUSH_MS").ok().and_then(|v| v.parse::<u64>().ok());
match (pool_capa, message_capa, flush_ms) {
(Some(pool), Some(msg), Some(flush)) => Some(AsyncWith {
pool_capa: pool,
message_capa: msg,
flush_interval: Duration::from_millis(flush),
}),
_ => None,
}
}
@@ -473,3 +568,140 @@ fn format_for_file(w: &mut dyn std::io::Write, now: &mut DeferredNow, record: &R
record.args()
)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_production_environment_detection() {
// Test production environment logic
let production_envs = vec!["production", "PRODUCTION", "Production"];
for env_value in production_envs {
let is_production = env_value.to_lowercase() == "production";
assert!(is_production, "Should detect '{env_value}' as production environment");
}
}
#[test]
fn test_non_production_environment_detection() {
// Test non-production environment logic
let non_production_envs = vec!["development", "test", "staging", "dev", "local"];
for env_value in non_production_envs {
let is_production = env_value.to_lowercase() == "production";
assert!(!is_production, "Should not detect '{env_value}' as production environment");
}
}
#[test]
fn test_stdout_behavior_logic() {
// Test the stdout behavior logic without environment manipulation
struct TestCase {
is_production: bool,
config_use_stdout: Option<bool>,
expected_use_stdout: bool,
description: &'static str,
}
let test_cases = vec![
TestCase {
is_production: true,
config_use_stdout: None,
expected_use_stdout: false,
description: "Production with no config should disable stdout",
},
TestCase {
is_production: false,
config_use_stdout: None,
expected_use_stdout: USE_STDOUT,
description: "Non-production with no config should use default",
},
TestCase {
is_production: true,
config_use_stdout: Some(true),
expected_use_stdout: true,
description: "Production with explicit true should enable stdout",
},
TestCase {
is_production: true,
config_use_stdout: Some(false),
expected_use_stdout: false,
description: "Production with explicit false should disable stdout",
},
TestCase {
is_production: false,
config_use_stdout: Some(true),
expected_use_stdout: true,
description: "Non-production with explicit true should enable stdout",
},
];
for case in test_cases {
let default_use_stdout = if case.is_production { false } else { USE_STDOUT };
let actual_use_stdout = case.config_use_stdout.unwrap_or(default_use_stdout);
assert_eq!(actual_use_stdout, case.expected_use_stdout, "Test case failed: {}", case.description);
}
}
#[test]
fn test_log_level_filter_mapping_logic() {
// Test the log level mapping logic used in the real implementation
let test_cases = vec![
("trace", "Trace"),
("debug", "Debug"),
("info", "Info"),
("warn", "Warn"),
("warning", "Warn"),
("error", "Error"),
("off", "None"),
("invalid_level", "Info"), // Should default to Info
];
for (input_level, expected_variant) in test_cases {
let filter_variant = match input_level.to_lowercase().as_str() {
"trace" => "Trace",
"debug" => "Debug",
"info" => "Info",
"warn" | "warning" => "Warn",
"error" => "Error",
"off" => "None",
_ => "Info", // default case
};
assert_eq!(
filter_variant, expected_variant,
"Log level '{input_level}' should map to '{expected_variant}'"
);
}
}
#[test]
fn test_otel_config_environment_defaults() {
// Test that OtelConfig properly handles environment detection logic
let config = OtelConfig {
endpoint: "".to_string(),
use_stdout: None,
environment: Some("production".to_string()),
..Default::default()
};
// Simulate the logic from init_telemetry
let environment = config.environment.as_deref().unwrap_or(ENVIRONMENT);
assert_eq!(environment, "production");
// Test with development environment
let dev_config = OtelConfig {
endpoint: "".to_string(),
use_stdout: None,
environment: Some("development".to_string()),
..Default::default()
};
let dev_environment = dev_config.environment.as_deref().unwrap_or(ENVIRONMENT);
assert_eq!(dev_environment, "development");
}
}
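If the strings in the mapping test above correspond to the RUSTFS_LOG_LEVEL setting used in the compose files later in this diff (an assumption here, not something the test itself confirms), configuration is a one-liner:

```bash
# Accepted values per the mapping tested above; anything else falls back to info
export RUSTFS_LOG_LEVEL=debug   # trace | debug | info | warn | warning | error | off
```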

View File

@@ -173,6 +173,7 @@ impl PartialEq for Functions {
}
#[derive(Clone, Serialize, Deserialize)]
#[allow(dead_code)]
pub struct Value;
#[cfg(test)]

View File

@@ -19,19 +19,10 @@ use futures::pin_mut;
use futures::{Stream, StreamExt};
use futures_core::stream::BoxStream;
use http::HeaderMap;
use object_store::Attributes;
use object_store::GetOptions;
use object_store::GetResult;
use object_store::ListResult;
use object_store::MultipartUpload;
use object_store::ObjectMeta;
use object_store::ObjectStore;
use object_store::PutMultipartOpts;
use object_store::PutOptions;
use object_store::PutPayload;
use object_store::PutResult;
use object_store::path::Path;
use object_store::{Error as o_Error, Result};
use object_store::{
Attributes, Error as o_Error, GetOptions, GetResult, ListResult, MultipartUpload, ObjectMeta, ObjectStore,
PutMultipartOptions, PutOptions, PutPayload, PutResult, Result, path::Path,
};
use pin_project_lite::pin_project;
use rustfs_common::DEFAULT_DELIMITER;
use rustfs_ecstore::StorageAPI;
@@ -102,7 +93,7 @@ impl ObjectStore for EcObjectStore {
unimplemented!()
}
async fn put_multipart_opts(&self, _location: &Path, _opts: PutMultipartOpts) -> Result<Box<dyn MultipartUpload>> {
async fn put_multipart_opts(&self, _location: &Path, _opts: PutMultipartOptions) -> Result<Box<dyn MultipartUpload>> {
unimplemented!()
}
@@ -122,7 +113,7 @@ impl ObjectStore for EcObjectStore {
let meta = ObjectMeta {
location: location.clone(),
last_modified: Utc::now(),
size: reader.object_info.size as usize,
size: reader.object_info.size as u64,
e_tag: reader.object_info.etag,
version: None,
};
@@ -151,12 +142,12 @@ impl ObjectStore for EcObjectStore {
Ok(GetResult {
payload,
meta,
range: 0..reader.object_info.size as usize,
range: 0..reader.object_info.size as u64,
attributes,
})
}
async fn get_ranges(&self, _location: &Path, _ranges: &[Range<usize>]) -> Result<Vec<Bytes>> {
async fn get_ranges(&self, _location: &Path, _ranges: &[Range<u64>]) -> Result<Vec<Bytes>> {
unimplemented!()
}
@@ -175,7 +166,7 @@ impl ObjectStore for EcObjectStore {
Ok(ObjectMeta {
location: location.clone(),
last_modified: Utc::now(),
size: info.size as usize,
size: info.size as u64,
e_tag: info.etag,
version: None,
})
@@ -185,7 +176,7 @@ impl ObjectStore for EcObjectStore {
unimplemented!()
}
fn list(&self, _prefix: Option<&Path>) -> BoxStream<'_, Result<ObjectMeta>> {
fn list(&self, _prefix: Option<&Path>) -> BoxStream<'static, Result<ObjectMeta>> {
unimplemented!()
}

View File

@@ -61,7 +61,7 @@ impl MetadataProvider {
Self {
provider,
current_session_table_provider,
config_options: session.inner().config_options().clone(),
config_options: session.inner().config_options().as_ref().clone(),
session,
func_manager,
}

View File

@@ -26,7 +26,6 @@ use datafusion::{
propagate_empty_relation::PropagateEmptyRelation, push_down_filter::PushDownFilter, push_down_limit::PushDownLimit,
replace_distinct_aggregate::ReplaceDistinctWithAggregate, scalar_subquery_to_join::ScalarSubqueryToJoin,
simplify_expressions::SimplifyExpressions, single_distinct_to_groupby::SingleDistinctToGroupBy,
unwrap_cast_in_comparison::UnwrapCastInComparison,
},
};
use rustfs_s3select_api::{
@@ -66,7 +65,6 @@ impl Default for DefaultLogicalOptimizer {
let rules: Vec<Arc<dyn OptimizerRule + Send + Sync>> = vec![
// df default rules start
Arc::new(SimplifyExpressions::new()),
Arc::new(UnwrapCastInComparison::new()),
Arc::new(ReplaceDistinctWithAggregate::new()),
Arc::new(EliminateJoin::new()),
Arc::new(DecorrelatePredicateSubquery::new()),
@@ -91,7 +89,6 @@ impl Default for DefaultLogicalOptimizer {
// The previous optimizations added expressions and projections,
// that might benefit from the following rules
Arc::new(SimplifyExpressions::new()),
Arc::new(UnwrapCastInComparison::new()),
Arc::new(CommonSubexprEliminate::new()),
// PushDownProjection can pushdown Projections through Limits, do PushDownLimit again.
Arc::new(PushDownLimit::new()),

View File

@@ -438,7 +438,7 @@ fn is_fatal_mqtt_error(err: &ConnectionError) -> bool {
rumqttc::StateError::InvalidState // The internal state machine is in invalid state
| rumqttc::StateError::WrongPacket // Agreement Violation: Unexpected Data Packet Received
| rumqttc::StateError::Unsolicited(_) // Agreement Violation: Unsolicited ACK Received
| rumqttc::StateError::OutgoingPacketTooLarge { .. } // Try to send too large packets
| rumqttc::StateError::CollisionTimeout // Agreement Violation (if this stage occurs)
| rumqttc::StateError::EmptySubscription // Agreement violation (if this stage occurs)
=> true,

View File

@@ -35,10 +35,8 @@ futures = { workspace = true, optional = true }
hex-simd = { workspace = true, optional = true }
highway = { workspace = true, optional = true }
hickory-resolver = { workspace = true, optional = true }
hickory-proto = { workspace = true, optional = true }
hmac = { workspace = true, optional = true }
hyper = { workspace = true, optional = true }
hyper-util = { workspace = true, optional = true }
local-ip-address = { workspace = true, optional = true }
lz4 = { workspace = true, optional = true }
md-5 = { workspace = true, optional = true }
@@ -81,7 +79,7 @@ workspace = true
default = ["ip"] # features that are enabled by default
ip = ["dep:local-ip-address"] # ip characteristics and their dependencies
tls = ["dep:rustls", "dep:rustls-pemfile", "dep:rustls-pki-types"] # tls characteristics and their dependencies
net = ["ip", "dep:url", "dep:netif", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hyper-util", "dep:hickory-resolver", "dep:hickory-proto", "dep:moka", "dep:thiserror"] # network features with DNS resolver
net = ["ip", "dep:url", "dep:netif", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hickory-resolver", "dep:moka", "dep:thiserror", "dep:tokio"] # network features with DNS resolver
io = ["dep:tokio"]
path = []
notify = ["dep:hyper", "dep:s3s"] # file system notification features
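With `hickory-proto` and `hyper-util` dropped from the `net` feature, a quick way to exercise the slimmed-down feature set — the package name `rustfs-utils` is inferred from the workspace references later in this diff, not stated in this hunk:

```bash
# Build and test only the network feature set of the utils crate
cargo build -p rustfs-utils --features net
cargo test  -p rustfs-utils --features net
```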

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(dead_code)]
//! Layered DNS resolution utility for Kubernetes environments
//!
//! This module provides robust DNS resolution with multiple fallback layers:
@@ -127,6 +129,7 @@ impl LayeredDnsResolver {
/// Validate domain format according to RFC standards
#[instrument(skip_all, fields(domain = %domain))]
fn validate_domain_format(domain: &str) -> Result<(), DnsError> {
info!("Validating domain format start");
// Check FQDN length
if domain.len() > MAX_FQDN_LENGTH {
return Err(DnsError::InvalidFormat {
@@ -157,7 +160,7 @@ impl LayeredDnsResolver {
});
}
}
info!("DNS resolver validated successfully");
Ok(())
}
@@ -209,7 +212,6 @@ impl LayeredDnsResolver {
let ips: Vec<IpAddr> = lookup.iter().collect();
if !ips.is_empty() {
info!("System DNS resolution successful for domain: {} -> {} IPs", domain, ips.len());
debug!("System DNS resolved IPs: {:?}", ips);
Ok(ips)
} else {
warn!("System DNS returned empty result for domain: {}", domain);
@@ -242,7 +244,6 @@ impl LayeredDnsResolver {
let ips: Vec<IpAddr> = lookup.iter().collect();
if !ips.is_empty() {
info!("Public DNS resolution successful for domain: {} -> {} IPs", domain, ips.len());
debug!("Public DNS resolved IPs: {:?}", ips);
Ok(ips)
} else {
warn!("Public DNS returned empty result for domain: {}", domain);
@@ -270,6 +271,7 @@ impl LayeredDnsResolver {
/// 3. Public DNS (hickory-resolver with TLS-enabled Cloudflare DNS fallback)
#[instrument(skip_all, fields(domain = %domain))]
pub async fn resolve(&self, domain: &str) -> Result<Vec<IpAddr>, DnsError> {
info!("Starting DNS resolution process for domain: {} start", domain);
// Validate domain format first
Self::validate_domain_format(domain)?;
@@ -305,7 +307,7 @@ impl LayeredDnsResolver {
}
Err(public_err) => {
error!(
"All DNS resolution attempts failed for domain: {}. System DNS: failed, Public DNS: {}",
"All DNS resolution attempts failed for domain:` {}`. System DNS: failed, Public DNS: {}",
domain, public_err
);
Err(DnsError::AllAttemptsFailed {
@@ -345,6 +347,7 @@ pub fn get_global_dns_resolver() -> Option<&'static LayeredDnsResolver> {
/// Resolve domain using the global DNS resolver with comprehensive tracing
#[instrument(skip_all, fields(domain = %domain))]
pub async fn resolve_domain(domain: &str) -> Result<Vec<IpAddr>, DnsError> {
info!("resolving domain for: {}", domain);
match get_global_dns_resolver() {
Some(resolver) => resolver.resolve(domain).await,
None => Err(DnsError::InitializationFailed {
@@ -395,7 +398,7 @@ mod tests {
// Test cache stats (note: moka cache might not immediately reflect changes)
let (total, _weighted_size) = resolver.cache_stats().await;
// Cache should have at least the entry we just added (might be 0 due to async nature)
assert!(total <= 1, "Cache should have at most 1 entry, got {}", total);
assert!(total <= 1, "Cache should have at most 1 entry, got {total}");
}
#[tokio::test]
@@ -406,12 +409,12 @@ mod tests {
match resolver.resolve("localhost").await {
Ok(ips) => {
assert!(!ips.is_empty());
println!("Resolved localhost to: {:?}", ips);
println!("Resolved localhost to: {ips:?}");
}
Err(e) => {
// In some test environments, even localhost might fail
// This is acceptable as long as our error handling works
println!("DNS resolution failed (might be expected in test environments): {}", e);
println!("DNS resolution failed (might be expected in test environments): {e}");
}
}
}
@@ -427,7 +430,7 @@ mod tests {
assert!(result.is_err());
if let Err(e) = result {
println!("Expected error for invalid domain: {}", e);
println!("Expected error for invalid domain: {e}");
// Should be AllAttemptsFailed since both system and public DNS should fail
assert!(matches!(e, DnsError::AllAttemptsFailed { .. }));
}
@@ -463,10 +466,10 @@ mod tests {
match resolve_domain("localhost").await {
Ok(ips) => {
assert!(!ips.is_empty());
println!("Global resolver resolved localhost to: {:?}", ips);
println!("Global resolver resolved localhost to: {ips:?}");
}
Err(e) => {
println!("Global resolver DNS resolution failed (might be expected in test environments): {}", e);
println!("Global resolver DNS resolution failed (might be expected in test environments): {e}");
}
}
}

View File

@@ -15,6 +15,7 @@
use bytes::Bytes;
use futures::pin_mut;
use futures::{Stream, StreamExt};
use std::io::Error;
use std::net::Ipv6Addr;
use std::sync::{LazyLock, Mutex};
use std::{
@@ -23,6 +24,7 @@ use std::{
net::{IpAddr, SocketAddr, TcpListener, ToSocketAddrs},
time::{Duration, Instant},
};
use tracing::{error, info};
use transform_stream::AsyncTryStream;
use url::{Host, Url};
@@ -61,7 +63,7 @@ pub fn is_socket_addr(addr: &str) -> bool {
pub fn check_local_server_addr(server_addr: &str) -> std::io::Result<SocketAddr> {
let addr: Vec<SocketAddr> = match server_addr.to_socket_addrs() {
Ok(addr) => addr.collect(),
Err(err) => return Err(std::io::Error::other(err)),
Err(err) => return Err(Error::other(err)),
};
// 0.0.0.0 is a wildcard address and refers to local network
@@ -82,7 +84,7 @@ pub fn check_local_server_addr(server_addr: &str) -> std::io::Result<SocketAddr>
}
}
Err(std::io::Error::other("host in server address should be this server"))
Err(Error::other("host in server address should be this server"))
}
/// checks if the given parameter correspond to one of
@@ -93,7 +95,7 @@ pub fn is_local_host(host: Host<&str>, port: u16, local_port: u16) -> std::io::R
Host::Domain(domain) => {
let ips = match (domain, 0).to_socket_addrs().map(|v| v.map(|v| v.ip()).collect::<Vec<_>>()) {
Ok(ips) => ips,
Err(err) => return Err(std::io::Error::other(err)),
Err(err) => return Err(Error::other(err)),
};
ips.iter().any(|ip| local_set.contains(ip))
@@ -113,49 +115,20 @@ pub fn is_local_host(host: Host<&str>, port: u16, local_port: u16) -> std::io::R
///
/// This is the async version of `get_host_ip()` that provides enhanced DNS resolution
/// with Kubernetes support when the "net" feature is enabled.
pub async fn get_host_ip_async(host: Host<&str>) -> std::io::Result<HashSet<IpAddr>> {
match host {
Host::Domain(domain) => {
#[cfg(feature = "net")]
{
use crate::dns_resolver::resolve_domain;
match resolve_domain(domain).await {
Ok(ips) => Ok(ips.into_iter().collect()),
Err(e) => Err(std::io::Error::other(format!("DNS resolution failed: {}", e))),
}
}
#[cfg(not(feature = "net"))]
{
// Fallback to standard resolution when DNS resolver is not available
match (domain, 0)
.to_socket_addrs()
.map(|v| v.map(|v| v.ip()).collect::<HashSet<_>>())
{
Ok(ips) => Ok(ips),
Err(err) => Err(std::io::Error::other(err)),
}
}
}
Host::Ipv4(ip) => {
let mut set = HashSet::with_capacity(1);
set.insert(IpAddr::V4(ip));
Ok(set)
}
Host::Ipv6(ip) => {
let mut set = HashSet::with_capacity(1);
set.insert(IpAddr::V6(ip));
Ok(set)
}
}
}
/// returns IP address of given host using standard resolution.
///
/// **Note**: This function uses standard library DNS resolution with caching.
/// For enhanced DNS resolution with Kubernetes support, use `get_host_ip_async()`.
pub fn get_host_ip(host: Host<&str>) -> std::io::Result<HashSet<IpAddr>> {
pub async fn get_host_ip(host: Host<&str>) -> std::io::Result<HashSet<IpAddr>> {
match host {
Host::Domain(domain) => {
// match crate::dns_resolver::resolve_domain(domain).await {
// Ok(ips) => {
// info!("Resolved domain {domain} using custom DNS resolver: {ips:?}");
// return Ok(ips.into_iter().collect());
// }
// Err(err) => {
// error!(
// "Failed to resolve domain {domain} using custom DNS resolver, falling back to system resolver,err: {err}"
// );
// }
// }
// Check cache first
if let Ok(mut cache) = DNS_CACHE.lock() {
if let Some(entry) = cache.get(domain) {
@@ -167,7 +140,9 @@ pub fn get_host_ip(host: Host<&str>) -> std::io::Result<HashSet<IpAddr>> {
}
}
// Perform DNS resolution
info!("Cache miss for domain {domain}, querying system resolver.");
// Fallback to standard resolution when DNS resolver is not available
match (domain, 0)
.to_socket_addrs()
.map(|v| v.map(|v| v.ip()).collect::<HashSet<_>>())
@@ -181,21 +156,17 @@ pub fn get_host_ip(host: Host<&str>) -> std::io::Result<HashSet<IpAddr>> {
cache.retain(|_, v| !v.is_expired(DNS_CACHE_TTL));
}
}
info!("System query for domain {domain}: {:?}", ips);
Ok(ips)
}
Err(err) => Err(std::io::Error::other(err)),
Err(err) => {
error!("Failed to resolve domain {domain} using system resolver, err: {err}");
Err(Error::other(err))
}
}
}
Host::Ipv4(ip) => {
let mut set = HashSet::with_capacity(1);
set.insert(IpAddr::V4(ip));
Ok(set)
}
Host::Ipv6(ip) => {
let mut set = HashSet::with_capacity(1);
set.insert(IpAddr::V6(ip));
Ok(set)
}
Host::Ipv4(ip) => Ok([IpAddr::V4(ip)].into_iter().collect()),
Host::Ipv6(ip) => Ok([IpAddr::V6(ip)].into_iter().collect()),
}
}
@@ -207,7 +178,7 @@ pub fn get_available_port() -> u16 {
pub fn must_get_local_ips() -> std::io::Result<Vec<IpAddr>> {
match netif::up() {
Ok(up) => Ok(up.map(|x| x.address().to_owned()).collect()),
Err(err) => Err(std::io::Error::other(format!("Unable to get IP addresses of this host: {err}"))),
Err(err) => Err(Error::other(format!("Unable to get IP addresses of this host: {err}"))),
}
}
@@ -215,7 +186,7 @@ pub fn get_default_location(_u: Url, _region_override: &str) -> String {
todo!();
}
pub fn get_endpoint_url(endpoint: &str, secure: bool) -> Result<Url, std::io::Error> {
pub fn get_endpoint_url(endpoint: &str, secure: bool) -> Result<Url, Error> {
let mut scheme = "https";
if !secure {
scheme = "http";
@@ -223,7 +194,7 @@ pub fn get_endpoint_url(endpoint: &str, secure: bool) -> Result<Url, std::io::Er
let endpoint_url_str = format!("{scheme}://{endpoint}");
let Ok(endpoint_url) = Url::parse(&endpoint_url_str) else {
return Err(std::io::Error::other("url parse error."));
return Err(Error::other("url parse error."));
};
//is_valid_endpoint_url(endpoint_url)?;
@@ -258,7 +229,7 @@ impl Display for XHost {
}
impl TryFrom<String> for XHost {
type Error = std::io::Error;
type Error = Error;
fn try_from(value: String) -> Result<Self, Self::Error> {
if let Some(addr) = value.to_socket_addrs()?.next() {
@@ -268,7 +239,7 @@ impl TryFrom<String> for XHost {
is_port_set: addr.port() > 0,
})
} else {
Err(std::io::Error::new(std::io::ErrorKind::InvalidData, "value invalid"))
Err(Error::new(std::io::ErrorKind::InvalidData, "value invalid"))
}
}
}
@@ -278,7 +249,7 @@ pub fn parse_and_resolve_address(addr_str: &str) -> std::io::Result<SocketAddr>
let port_str = port;
let port: u16 = port_str
.parse()
.map_err(|e| std::io::Error::other(format!("Invalid port format: {addr_str}, err:{e:?}")))?;
.map_err(|e| Error::other(format!("Invalid port format: {addr_str}, err:{e:?}")))?;
let final_port = if port == 0 {
get_available_port() // assume get_available_port is available here
} else {
@@ -318,9 +289,9 @@ where
#[cfg(test)]
mod test {
use std::net::{Ipv4Addr, Ipv6Addr};
use super::*;
use crate::init_global_dns_resolver;
use std::net::{Ipv4Addr, Ipv6Addr};
#[test]
fn test_is_socket_addr() {
@@ -424,23 +395,29 @@ mod test {
assert!(is_local_host(invalid_host, 0, 0).is_err());
}
#[test]
fn test_get_host_ip() {
#[tokio::test]
async fn test_get_host_ip() {
match init_global_dns_resolver().await {
Ok(_) => {}
Err(e) => {
error!("Failed to initialize global DNS resolver: {e}");
}
}
// Test IPv4 address
let ipv4_host = Host::Ipv4(Ipv4Addr::new(192, 168, 1, 1));
let ipv4_result = get_host_ip(ipv4_host).unwrap();
let ipv4_result = get_host_ip(ipv4_host).await.unwrap();
assert_eq!(ipv4_result.len(), 1);
assert!(ipv4_result.contains(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 1))));
// Test IPv6 address
let ipv6_host = Host::Ipv6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 1));
let ipv6_result = get_host_ip(ipv6_host).unwrap();
let ipv6_result = get_host_ip(ipv6_host).await.unwrap();
assert_eq!(ipv6_result.len(), 1);
assert!(ipv6_result.contains(&IpAddr::V6(Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 1))));
// Test localhost domain
let localhost_host = Host::Domain("localhost");
let localhost_result = get_host_ip(localhost_host).unwrap();
let localhost_result = get_host_ip(localhost_host).await.unwrap();
assert!(!localhost_result.is_empty());
// Should contain at least loopback address
assert!(
@@ -450,7 +427,16 @@ mod test {
// Test invalid domain
let invalid_host = Host::Domain("invalid.nonexistent.domain.example");
assert!(get_host_ip(invalid_host).is_err());
match get_host_ip(invalid_host).await {
Ok(ips) => {
// Depending on resolver behavior, an invalid domain may yield an empty set instead of an error
assert!(ips.is_empty(), "Expected empty IP set for invalid domain, got: {ips:?}");
}
Err(e) => {
// Expected path: resolution fails outright
info!("Got expected error for invalid domain: {e}");
}
}
}
#[test]

View File

@@ -28,20 +28,27 @@ services:
TARGETPLATFORM: linux/amd64
ports:
- "9000:9000" # S3 API port
- "9001:9001" # Console port
environment:
- RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_EXTERNAL_ADDRESS=:9000 # Same as internal since no port mapping
- RUSTFS_CORS_ALLOWED_ORIGINS=*
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=*
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
- RUSTFS_LOG_LEVEL=info
- RUSTFS_TLS_PATH=/opt/tls
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
volumes:
- rustfs_data_0:/data/rustfs0
- rustfs_data_1:/data/rustfs1
- rustfs_data_2:/data/rustfs2
- rustfs_data_3:/data/rustfs3
- ./logs:/app/logs
- logs_data:/app/logs
- .docker/tls/:/opt/tls # TLS configuration, you should create tls directory and put your tls files in it and then specify the path here
networks:
- rustfs-network
restart: unless-stopped
@@ -49,11 +56,8 @@ services:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:9000/health",
"sh", "-c",
"curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"
]
interval: 30s
timeout: 10s
@@ -71,11 +75,16 @@ services:
dockerfile: Dockerfile.source
# Pure development environment
ports:
- "9010:9000"
- "9010:9000" # S3 API port
- "9011:9001" # Console port
environment:
- RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_EXTERNAL_ADDRESS=:9010 # External port mapping 9010 -> 9000
- RUSTFS_CORS_ALLOWED_ORIGINS=*
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=*
- RUSTFS_ACCESS_KEY=devadmin
- RUSTFS_SECRET_KEY=devadmin
- RUSTFS_LOG_LEVEL=debug
@@ -85,6 +94,17 @@ services:
networks:
- rustfs-network
restart: unless-stopped
healthcheck:
test:
[
"CMD",
"sh", "-c",
"curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"
]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
profiles:
- dev
@@ -95,7 +115,7 @@ services:
command:
- --config=/etc/otelcol-contrib/otel-collector.yml
volumes:
- ./.docker/observability/otel-collector.yml:/etc/otelcol-contrib/otel-collector.yml:ro
- ./.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/otel-collector.yml:ro
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
@@ -219,3 +239,5 @@ volumes:
driver: local
redis_data:
driver: local
logs_data:
driver: local
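The TLS mount above maps `.docker/tls/` to `/opt/tls` inside the container, matching `RUSTFS_TLS_PATH`. A sketch for generating a self-signed certificate for local testing — the key and certificate file names are assumptions, since the compose comment only says to put your TLS files in that directory:

```bash
# Local testing only; file names are illustrative, not a documented requirement
mkdir -p .docker/tls
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout .docker/tls/rustfs.key \
  -out .docker/tls/rustfs.crt \
  -subj "/CN=localhost"
```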

1362
docs/console-separation.md Normal file

File diff suppressed because it is too large

View File

@@ -12,48 +12,68 @@ elif [ "$1" = "rustfs" ]; then
set -- /usr/bin/rustfs "$@"
fi
# 2) Parse and create local mount directories (ignore http/https), ensure /logs is included
VOLUME_RAW="${RUSTFS_VOLUMES:-/data}"
# Convert comma/tab to space
VOLUME_LIST=$(echo "$VOLUME_RAW" | tr ',\t' ' ')
LOCAL_VOLUMES=""
for vol in $VOLUME_LIST; do
case "$vol" in
/*)
case "$vol" in
http://*|https://*) : ;;
*) LOCAL_VOLUMES="$LOCAL_VOLUMES $vol" ;;
esac
;;
*)
: # skip non-local paths
;;
esac
done
# Ensure /logs is included
case " $LOCAL_VOLUMES " in
*" /logs "*) : ;;
*) LOCAL_VOLUMES="$LOCAL_VOLUMES /logs" ;;
esac
# 2) Process data volumes (separate from log directory)
DATA_VOLUMES=""
process_data_volumes() {
VOLUME_RAW="${RUSTFS_VOLUMES:-/data}"
# Convert comma/tab to space
VOLUME_LIST=$(echo "$VOLUME_RAW" | tr ',\t' ' ')
for vol in $VOLUME_LIST; do
case "$vol" in
/*)
case "$vol" in
http://*|https://*) : ;;
*) DATA_VOLUMES="$DATA_VOLUMES $vol" ;;
esac
;;
*)
: # skip non-local paths
;;
esac
done
echo "Initializing data directories:$DATA_VOLUMES"
for vol in $DATA_VOLUMES; do
if [ ! -d "$vol" ]; then
echo " mkdir -p $vol"
mkdir -p "$vol"
# If target user is specified, try to set directory owner to that user (non-recursive to avoid large disk overhead)
if [ -n "$RUSTFS_UID" ] && [ -n "$RUSTFS_GID" ]; then
chown "$RUSTFS_UID:$RUSTFS_GID" "$vol" 2>/dev/null || true
elif [ -n "$RUSTFS_USERNAME" ] && [ -n "$RUSTFS_GROUPNAME" ]; then
chown "$RUSTFS_USERNAME:$RUSTFS_GROUPNAME" "$vol" 2>/dev/null || true
fi
fi
done
}
echo "Initializing mount directories:$LOCAL_VOLUMES"
for vol in $LOCAL_VOLUMES; do
if [ ! -d "$vol" ]; then
echo " mkdir -p $vol"
mkdir -p "$vol"
# 3) Process log directory (separate from data volumes)
process_log_directory() {
LOG_DIR="${RUSTFS_OBS_LOG_DIRECTORY:-/logs}"
echo "Initializing log directory: $LOG_DIR"
if [ ! -d "$LOG_DIR" ]; then
echo " mkdir -p $LOG_DIR"
mkdir -p "$LOG_DIR"
# If target user is specified, try to set directory owner to that user (non-recursive to avoid large disk overhead)
if [ -n "$RUSTFS_UID" ] && [ -n "$RUSTFS_GID" ]; then
chown "$RUSTFS_UID:$RUSTFS_GID" "$vol" 2>/dev/null || true
chown "$RUSTFS_UID:$RUSTFS_GID" "$LOG_DIR" 2>/dev/null || true
elif [ -n "$RUSTFS_USERNAME" ] && [ -n "$RUSTFS_GROUPNAME" ]; then
chown "$RUSTFS_USERNAME:$RUSTFS_GROUPNAME" "$vol" 2>/dev/null || true
chown "$RUSTFS_USERNAME:$RUSTFS_GROUPNAME" "$LOG_DIR" 2>/dev/null || true
fi
fi
done
}
# 3) Default credentials warning
# Run the two initialization steps
process_data_volumes
process_log_directory
# 4) Default credentials warning
if [ "${RUSTFS_ACCESS_KEY}" = "rustfsadmin" ] || [ "${RUSTFS_SECRET_KEY}" = "rustfsadmin" ]; then
echo "!!!WARNING: Using default RUSTFS_ACCESS_KEY or RUSTFS_SECRET_KEY. Override them in production!"
fi
echo "Starting: $*"
set -- "$@" $DATA_VOLUMES
exec "$@"

270
examples/README.md Normal file
View File

@@ -0,0 +1,270 @@
# RustFS Docker Deployment Examples
This directory contains various deployment scripts and configuration files for RustFS with console and endpoint service separation.
## Quick Start Scripts
### `docker-quickstart.sh`
The fastest way to get RustFS running with different configurations.
```bash
# Basic deployment (ports 9000-9001)
./docker-quickstart.sh basic
# Development environment (ports 9010-9011)
./docker-quickstart.sh dev
# Production-like deployment (ports 9020-9021)
./docker-quickstart.sh prod
# Check status of all deployments
./docker-quickstart.sh status
# Test health of all running services
./docker-quickstart.sh test
# Clean up all containers
./docker-quickstart.sh cleanup
```
### `enhanced-docker-deployment.sh`
Comprehensive deployment script with multiple scenarios and detailed logging.
```bash
# Deploy individual scenarios
./enhanced-docker-deployment.sh basic # Basic setup with port mapping
./enhanced-docker-deployment.sh dev # Development environment
./enhanced-docker-deployment.sh prod # Production-like with security
# Deploy all scenarios at once
./enhanced-docker-deployment.sh all
# Check status and test services
./enhanced-docker-deployment.sh status
./enhanced-docker-deployment.sh test
# View logs for specific container
./enhanced-docker-deployment.sh logs rustfs-dev
# Complete cleanup
./enhanced-docker-deployment.sh cleanup
```
### `enhanced-security-deployment.sh`
Production-ready deployment with enhanced security features including TLS, rate limiting, and secure credential generation.
```bash
# Deploy with security hardening
./enhanced-security-deployment.sh
# Features:
# - Automatic TLS certificate generation
# - Secure credential generation
# - Rate limiting configuration
# - Console access restrictions
# - Health check validation
```
## Docker Compose Examples
### `docker-comprehensive.yml`
Complete Docker Compose configuration with multiple deployment profiles.
```bash
# Deploy specific profiles
docker-compose -f docker-comprehensive.yml --profile basic up -d
docker-compose -f docker-comprehensive.yml --profile dev up -d
docker-compose -f docker-comprehensive.yml --profile production up -d
docker-compose -f docker-comprehensive.yml --profile enterprise up -d
docker-compose -f docker-comprehensive.yml --profile api-only up -d
# Deploy with reverse proxy
docker-compose -f docker-comprehensive.yml --profile production --profile nginx up -d
```
#### Available Profiles:
- **basic**: Simple deployment for testing (ports 9000-9001)
- **dev**: Development environment with debug logging (ports 9010-9011)
- **production**: Production deployment with security (ports 9020-9021)
- **enterprise**: Full enterprise setup with TLS (ports 9030-9443)
- **api-only**: API endpoint without console (port 9040)
## Usage Examples by Scenario
### Development Setup
```bash
# Quick development start
./docker-quickstart.sh dev
# Or use enhanced deployment for more features
./enhanced-docker-deployment.sh dev
# Or use Docker Compose
docker-compose -f docker-comprehensive.yml --profile dev up -d
```
**Access Points:**
- API: http://localhost:9010 (or 9030 for enhanced)
- Console: http://localhost:9011/rustfs/console/ (or 9031 for enhanced)
- Credentials: dev-admin / dev-secret
### Production Deployment
```bash
# Security-hardened deployment
./enhanced-security-deployment.sh
# Or production profile
./enhanced-docker-deployment.sh prod
```
**Features:**
- TLS encryption for console
- Rate limiting enabled
- Restricted CORS policies
- Secure credential generation
- Console bound to localhost only
### Testing and CI/CD
```bash
# API-only deployment for testing
docker-compose -f docker-comprehensive.yml --profile api-only up -d
# Quick basic setup for integration tests
./docker-quickstart.sh basic
```
## Configuration Examples
### Environment Variables
All deployment scripts support customization via environment variables:
```bash
# Custom image and ports
export RUSTFS_IMAGE="rustfs/rustfs:custom-tag"
export CONSOLE_PORT="8001"
export API_PORT="8000"
# Custom data directories
export DATA_DIR="/custom/data/path"
export CERTS_DIR="/custom/certs/path"
# Run with custom configuration
./enhanced-security-deployment.sh
```
### Common Configurations
```bash
# Development - permissive CORS
RUSTFS_CORS_ALLOWED_ORIGINS="*"
RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*"
# Production - restrictive CORS
RUSTFS_CORS_ALLOWED_ORIGINS="https://myapp.com,https://api.myapp.com"
RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="https://admin.myapp.com"
# Security hardening
RUSTFS_CONSOLE_RATE_LIMIT_ENABLE="true"
RUSTFS_CONSOLE_RATE_LIMIT_RPM="60"
RUSTFS_CONSOLE_AUTH_TIMEOUT="1800"
```
## Monitoring and Health Checks
All deployments include health check endpoints:
```bash
# Test API health
curl http://localhost:9000/health
# Test console health
curl http://localhost:9001/health
# Test all deployments
./docker-quickstart.sh test
./enhanced-docker-deployment.sh test
```
## Network Architecture
### Port Mappings
| Deployment | API Port | Console Port | Description |
|-----------|----------|--------------|-------------|
| Basic | 9000 | 9001 | Simple deployment |
| Dev | 9010 | 9011 | Development environment |
| Prod | 9020 | 9021 | Production-like setup |
| Enterprise | 9030 | 9443 | Enterprise with TLS |
| API-Only | 9040 | - | API endpoint only |
### Network Isolation
Production deployments use network isolation, as sketched below:
- **Public API Network**: Exposes API endpoints to external clients
- **Internal Console Network**: Restricts console access to internal networks
- **Secure Network**: Isolated network for enterprise deployments
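A sketch of the same split using plain Docker commands, mirroring the `internal: true` network from the comprehensive compose file; the container name `rustfs-prod` is borrowed from the examples above:

```bash
# Routable network for the S3 API, internal-only network for console traffic
docker network create rustfs-public-api
docker network create --internal rustfs-console-internal

# Attach an existing container to both networks
docker network connect rustfs-public-api rustfs-prod
docker network connect rustfs-console-internal rustfs-prod
```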
## Security Considerations
### Development
- Permissive CORS policies for easy testing
- Debug logging enabled
- Default credentials for simplicity
### Production
- Restrictive CORS policies
- TLS encryption for console
- Rate limiting enabled
- Secure credential generation
- Console bound to localhost
- Network isolation
### Enterprise
- Complete TLS encryption
- Advanced rate limiting
- Authentication timeouts
- Secret management
- Network segregation
## Troubleshooting
### Common Issues
1. **Port Conflicts**: Use different ports via environment variables
2. **CORS Errors**: Check origin configuration and browser network tab
3. **Health Check Failures**: Verify services are running and ports are accessible
4. **Permission Issues**: Check volume mount permissions and certificate file permissions
### Debug Commands
```bash
# Check container logs
docker logs rustfs-container
# Check container environment
docker exec rustfs-container env | grep RUSTFS
# Test connectivity
docker exec rustfs-container curl http://localhost:9000/health
docker exec rustfs-container curl http://localhost:9001/health
# Check listening ports
docker exec rustfs-container netstat -tulpn | grep -E ':(9000|9001)'
```
## Migration from Previous Versions
See [docs/console-separation.md](../docs/console-separation.md) for detailed migration instructions from single-port deployments to the separated architecture.
## Additional Resources
- [Console Separation Documentation](../docs/console-separation.md)
- [Docker Compose Configuration](../docker-compose.yml)
- [Main Dockerfile](../Dockerfile)
- [Security Best Practices](../docs/console-separation.md#security-hardening)

View File

@@ -0,0 +1,224 @@
# RustFS Comprehensive Docker Deployment Examples
# This file demonstrates various deployment scenarios for RustFS with console separation
version: "3.8"
services:
# Basic deployment with default settings
rustfs-basic:
image: rustfs/rustfs:latest
container_name: rustfs-basic
ports:
- "9000:9000" # API endpoint
- "9001:9001" # Console interface
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_EXTERNAL_ADDRESS=:9000
- RUSTFS_CORS_ALLOWED_ORIGINS=http://localhost:9001
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=*
- RUSTFS_ACCESS_KEY=admin
- RUSTFS_SECRET_KEY=password
volumes:
- rustfs-basic-data:/data
networks:
- rustfs-network
restart: unless-stopped
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"]
interval: 30s
timeout: 10s
retries: 3
profiles:
- basic
# Development environment with debug logging
rustfs-dev:
image: rustfs/rustfs:latest
container_name: rustfs-dev
ports:
- "9010:9000" # API endpoint
- "9011:9001" # Console interface
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_EXTERNAL_ADDRESS=:9010
- RUSTFS_CORS_ALLOWED_ORIGINS=*
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=*
- RUSTFS_ACCESS_KEY=dev-admin
- RUSTFS_SECRET_KEY=dev-password
- RUST_LOG=debug
- RUSTFS_LOG_LEVEL=debug
volumes:
- rustfs-dev-data:/data
- rustfs-dev-logs:/logs
networks:
- rustfs-network
restart: unless-stopped
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"]
interval: 30s
timeout: 10s
retries: 3
profiles:
- dev
# Production environment with security hardening
rustfs-production:
image: rustfs/rustfs:latest
container_name: rustfs-production
ports:
- "9020:9000" # API endpoint (public)
- "127.0.0.1:9021:9001" # Console (localhost only)
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_EXTERNAL_ADDRESS=:9020
- RUSTFS_CORS_ALLOWED_ORIGINS=https://myapp.com,https://api.myapp.com
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=https://admin.myapp.com
- RUSTFS_CONSOLE_RATE_LIMIT_ENABLE=true
- RUSTFS_CONSOLE_RATE_LIMIT_RPM=60
- RUSTFS_CONSOLE_AUTH_TIMEOUT=1800
- RUSTFS_ACCESS_KEY_FILE=/run/secrets/rustfs_access_key
- RUSTFS_SECRET_KEY_FILE=/run/secrets/rustfs_secret_key
volumes:
- rustfs-production-data:/data
- rustfs-production-logs:/logs
- rustfs-certs:/certs:ro
networks:
- rustfs-network
secrets:
- rustfs_access_key
- rustfs_secret_key
restart: unless-stopped
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health && curl -f http://localhost:9001/health"]
interval: 30s
timeout: 10s
retries: 3
profiles:
- production
# Enterprise deployment with TLS and full security
rustfs-enterprise:
image: rustfs/rustfs:latest
container_name: rustfs-enterprise
ports:
- "9030:9000" # API endpoint
- "127.0.0.1:9443:9001" # Console with TLS (localhost only)
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_EXTERNAL_ADDRESS=:9030
- RUSTFS_TLS_PATH=/certs
- RUSTFS_CORS_ALLOWED_ORIGINS=https://enterprise.com
- RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS=https://admin.enterprise.com
- RUSTFS_CONSOLE_RATE_LIMIT_ENABLE=true
- RUSTFS_CONSOLE_RATE_LIMIT_RPM=30
- RUSTFS_CONSOLE_AUTH_TIMEOUT=900
volumes:
- rustfs-enterprise-data:/data
- rustfs-enterprise-logs:/logs
- rustfs-enterprise-certs:/certs:ro
networks:
- rustfs-secure-network
secrets:
- rustfs_enterprise_access_key
- rustfs_enterprise_secret_key
restart: unless-stopped
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health && curl -k -f https://localhost:9001/health"]
interval: 30s
timeout: 10s
retries: 3
profiles:
- enterprise
# API-only deployment (console disabled)
rustfs-api-only:
image: rustfs/rustfs:latest
container_name: rustfs-api-only
ports:
- "9040:9000" # API endpoint only
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=false
- RUSTFS_CORS_ALLOWED_ORIGINS=https://client-app.com
- RUSTFS_ACCESS_KEY=api-only-key
- RUSTFS_SECRET_KEY=api-only-secret
volumes:
- rustfs-api-data:/data
networks:
- rustfs-network
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/health"]
interval: 30s
timeout: 10s
retries: 3
profiles:
- api-only
# Nginx reverse proxy for production
nginx-proxy:
image: nginx:alpine
container_name: rustfs-nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
networks:
- rustfs-network
restart: unless-stopped
depends_on:
- rustfs-production
profiles:
- production
- enterprise
networks:
rustfs-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
rustfs-secure-network:
driver: bridge
internal: true
ipam:
config:
- subnet: 172.21.0.0/16
volumes:
rustfs-basic-data:
driver: local
rustfs-dev-data:
driver: local
rustfs-dev-logs:
driver: local
rustfs-production-data:
driver: local
rustfs-production-logs:
driver: local
rustfs-enterprise-data:
driver: local
rustfs-enterprise-logs:
driver: local
rustfs-enterprise-certs:
driver: local
rustfs-api-data:
driver: local
rustfs-certs:
driver: local
secrets:
rustfs_access_key:
external: true
rustfs_secret_key:
external: true
rustfs_enterprise_access_key:
external: true
rustfs_enterprise_secret_key:
external: true
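The four secrets above are declared `external: true`, so they must exist before bringing the stack up. A sketch assuming Docker Swarm mode, where `docker secret create` is available:

```bash
# Requires swarm mode (docker swarm init); names must match the compose file
openssl rand -hex 16 | docker secret create rustfs_access_key -
openssl rand -hex 32 | docker secret create rustfs_secret_key -
openssl rand -hex 16 | docker secret create rustfs_enterprise_access_key -
openssl rand -hex 32 | docker secret create rustfs_enterprise_secret_key -
```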

295
examples/docker-quickstart.sh Executable file
View File

@@ -0,0 +1,295 @@
#!/bin/bash
# RustFS Docker Quick Start Script
# This script provides easy deployment commands for different scenarios
set -e
# Colors for output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
log() {
echo -e "${GREEN}[RustFS]${NC} $1"
}
info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Print banner
print_banner() {
echo -e "${BLUE}"
echo "=================================================="
echo " RustFS Docker Quick Start"
echo " Console & Endpoint Separation"
echo "=================================================="
echo -e "${NC}"
}
# Check Docker availability
check_docker() {
if ! command -v docker &> /dev/null; then
error "Docker is not installed or not available in PATH"
exit 1
fi
info "Docker is available: $(docker --version)"
}
# Quick start - basic deployment
quick_basic() {
log "Starting RustFS basic deployment..."
docker run -d \
--name rustfs-quick \
-p 9000:9000 \
-p 9001:9001 \
-e RUSTFS_EXTERNAL_ADDRESS=":9000" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="http://localhost:9001" \
-v rustfs-quick-data:/data \
rustfs/rustfs:latest
echo
info "✅ RustFS deployed successfully!"
info "🌐 API Endpoint: http://localhost:9000"
info "🖥️ Console UI: http://localhost:9001/rustfs/console/"
info "🔐 Credentials: rustfsadmin / rustfsadmin"
info "🏥 Health Check: curl http://localhost:9000/health"
echo
info "To stop: docker stop rustfs-quick"
info "To remove: docker rm rustfs-quick && docker volume rm rustfs-quick-data"
}
# Development deployment with debug logging
quick_dev() {
log "Starting RustFS development environment..."
docker run -d \
--name rustfs-dev \
-p 9010:9000 \
-p 9011:9001 \
-e RUSTFS_EXTERNAL_ADDRESS=":9010" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="*" \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*" \
-e RUSTFS_ACCESS_KEY="dev-admin" \
-e RUSTFS_SECRET_KEY="dev-secret" \
-e RUST_LOG="debug" \
-v rustfs-dev-data:/data \
rustfs/rustfs:latest
echo
info "✅ RustFS development environment ready!"
info "🌐 API Endpoint: http://localhost:9010"
info "🖥️ Console UI: http://localhost:9011/rustfs/console/"
info "🔐 Credentials: dev-admin / dev-secret"
info "📊 Debug logging enabled"
echo
info "To stop: docker stop rustfs-dev"
}
# Production-like deployment
quick_prod() {
log "Starting RustFS production-like deployment..."
# Generate secure credentials
ACCESS_KEY="prod-$(openssl rand -hex 8)"
SECRET_KEY="$(openssl rand -hex 24)"
docker run -d \
--name rustfs-prod \
-p 9020:9000 \
-p 127.0.0.1:9021:9001 \
-e RUSTFS_EXTERNAL_ADDRESS=":9020" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="https://myapp.com" \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="https://admin.myapp.com" \
-e RUSTFS_CONSOLE_RATE_LIMIT_ENABLE="true" \
-e RUSTFS_CONSOLE_RATE_LIMIT_RPM="60" \
-e RUSTFS_ACCESS_KEY="$ACCESS_KEY" \
-e RUSTFS_SECRET_KEY="$SECRET_KEY" \
-v rustfs-prod-data:/data \
rustfs/rustfs:latest
# Save credentials
echo "RUSTFS_ACCESS_KEY=$ACCESS_KEY" > rustfs-prod-credentials.txt
echo "RUSTFS_SECRET_KEY=$SECRET_KEY" >> rustfs-prod-credentials.txt
chmod 600 rustfs-prod-credentials.txt
echo
info "✅ RustFS production deployment ready!"
info "🌐 API Endpoint: http://localhost:9020 (public)"
info "🖥️ Console UI: http://127.0.0.1:9021/rustfs/console/ (localhost only)"
info "🔐 Credentials saved to rustfs-prod-credentials.txt"
info "🔒 Console restricted to localhost for security"
echo
warn "⚠️ Change default CORS origins for production use"
}
# Stop and cleanup
cleanup() {
log "Cleaning up RustFS deployments..."
docker stop rustfs-quick rustfs-dev rustfs-prod 2>/dev/null || true
docker rm rustfs-quick rustfs-dev rustfs-prod 2>/dev/null || true
info "Containers stopped and removed"
echo
info "To also remove data volumes, run:"
info "docker volume rm rustfs-quick-data rustfs-dev-data rustfs-prod-data"
}
# Show status of all deployments
status() {
log "RustFS deployment status:"
echo
if docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -q rustfs; then
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | head -n1
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep rustfs
else
info "No RustFS containers are currently running"
fi
echo
info "Available endpoints:"
if docker ps --filter "name=rustfs-quick" --format "{{.Names}}" | grep -q rustfs-quick; then
echo " Basic: http://localhost:9000 (API) | http://localhost:9001/rustfs/console/ (Console)"
fi
if docker ps --filter "name=rustfs-dev" --format "{{.Names}}" | grep -q rustfs-dev; then
echo " Dev: http://localhost:9010 (API) | http://localhost:9011/rustfs/console/ (Console)"
fi
if docker ps --filter "name=rustfs-prod" --format "{{.Names}}" | grep -q rustfs-prod; then
echo " Prod: http://localhost:9020 (API) | http://127.0.0.1:9021/rustfs/console/ (Console)"
fi
}
# Test deployments
test_deployments() {
log "Testing RustFS deployments..."
echo
# Test basic deployment
if docker ps --filter "name=rustfs-quick" --format "{{.Names}}" | grep -q rustfs-quick; then
info "Testing basic deployment..."
if curl -s -f http://localhost:9000/health | grep -q "ok"; then
echo " ✅ API health check: PASS"
else
echo " ❌ API health check: FAIL"
fi
if curl -s -f http://localhost:9001/health | grep -q "console"; then
echo " ✅ Console health check: PASS"
else
echo " ❌ Console health check: FAIL"
fi
fi
# Test dev deployment
if docker ps --filter "name=rustfs-dev" --format "{{.Names}}" | grep -q rustfs-dev; then
info "Testing development deployment..."
if curl -s -f http://localhost:9010/health | grep -q "ok"; then
echo " ✅ Dev API health check: PASS"
else
echo " ❌ Dev API health check: FAIL"
fi
if curl -s -f http://localhost:9011/health | grep -q "console"; then
echo " ✅ Dev Console health check: PASS"
else
echo " ❌ Dev Console health check: FAIL"
fi
fi
# Test prod deployment
if docker ps --filter "name=rustfs-prod" --format "{{.Names}}" | grep -q rustfs-prod; then
info "Testing production deployment..."
if curl -s -f http://localhost:9020/health | grep -q "ok"; then
echo " ✅ Prod API health check: PASS"
else
echo " ❌ Prod API health check: FAIL"
fi
if curl -s -f http://127.0.0.1:9021/health | grep -q "console"; then
echo " ✅ Prod Console health check: PASS"
else
echo " ❌ Prod Console health check: FAIL"
fi
fi
}
# Show help
show_help() {
print_banner
echo "Usage: $0 [command]"
echo
echo "Commands:"
echo " basic Start basic RustFS deployment (ports 9000-9001)"
echo " dev Start development deployment with debug logging (ports 9010-9011)"
echo " prod Start production-like deployment with security (ports 9020-9021)"
echo " status Show status of running deployments"
echo " test Test health of all running deployments"
echo " cleanup Stop and remove all RustFS containers"
echo " help Show this help message"
echo
echo "Examples:"
echo " $0 basic # Quick start with default settings"
echo " $0 dev # Development environment with debug logs"
echo " $0 prod # Production-like setup with security"
echo " $0 status # Check what's running"
echo " $0 test # Test all deployments"
echo " $0 cleanup # Clean everything up"
echo
echo "For more advanced deployments, see:"
echo " - examples/enhanced-docker-deployment.sh"
echo " - examples/enhanced-security-deployment.sh"
echo " - examples/docker-comprehensive.yml"
echo " - docs/console-separation.md"
echo
}
# Main execution
case "${1:-help}" in
"basic")
print_banner
check_docker
quick_basic
;;
"dev")
print_banner
check_docker
quick_dev
;;
"prod")
print_banner
check_docker
quick_prod
;;
"status")
print_banner
status
;;
"test")
print_banner
test_deployments
;;
"cleanup")
print_banner
cleanup
;;
"help"|*)
show_help
;;
esac

View File

@@ -0,0 +1,321 @@
#!/bin/bash
# RustFS Enhanced Docker Deployment Examples
# This script demonstrates various deployment scenarios for RustFS with console separation
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_section() {
echo -e "\n${BLUE}========================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}========================================${NC}\n"
}
# Function to clean up existing containers
cleanup() {
log_info "Cleaning up existing RustFS containers..."
docker stop rustfs-basic rustfs-dev rustfs-prod 2>/dev/null || true
docker rm rustfs-basic rustfs-dev rustfs-prod 2>/dev/null || true
}
# Function to wait for service to be ready
wait_for_service() {
local url=$1
local service_name=$2
local max_attempts=30
local attempt=0
log_info "Waiting for $service_name to be ready at $url..."
while [ $attempt -lt $max_attempts ]; do
if curl -s -f "$url" > /dev/null 2>&1; then
log_info "$service_name is ready!"
return 0
fi
attempt=$((attempt + 1))
sleep 1
done
log_error "$service_name failed to start within ${max_attempts}s"
return 1
}
# Scenario 1: Basic deployment with port mapping
deploy_basic() {
log_section "Scenario 1: Basic Docker Deployment with Port Mapping"
log_info "Starting RustFS with port mapping 9020:9000 and 9021:9001"
docker run -d \
--name rustfs-basic \
-p 9020:9000 \
-p 9021:9001 \
-e RUSTFS_EXTERNAL_ADDRESS=":9020" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="http://localhost:9021,http://127.0.0.1:9021" \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*" \
-e RUSTFS_ACCESS_KEY="basic-access" \
-e RUSTFS_SECRET_KEY="basic-secret" \
-v rustfs-basic-data:/data \
rustfs/rustfs:latest
# Wait for services to be ready
wait_for_service "http://localhost:9020/health" "API Service"
wait_for_service "http://localhost:9021/health" "Console Service"
log_info "Basic deployment ready!"
log_info "🌐 API endpoint: http://localhost:9020"
log_info "🖥️ Console UI: http://localhost:9021/rustfs/console/"
log_info "🔐 Credentials: basic-access / basic-secret"
log_info "🏥 Health checks:"
log_info " API: curl http://localhost:9020/health"
log_info " Console: curl http://localhost:9021/health"
}
# Scenario 2: Development environment
deploy_development() {
log_section "Scenario 2: Development Environment"
log_info "Starting RustFS development environment"
docker run -d \
--name rustfs-dev \
-p 9030:9000 \
-p 9031:9001 \
-e RUSTFS_EXTERNAL_ADDRESS=":9030" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="*" \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*" \
-e RUSTFS_ACCESS_KEY="dev-access" \
-e RUSTFS_SECRET_KEY="dev-secret" \
-e RUST_LOG="debug" \
-v rustfs-dev-data:/data \
rustfs/rustfs:latest
# Wait for services to be ready
wait_for_service "http://localhost:9030/health" "Dev API Service"
wait_for_service "http://localhost:9031/health" "Dev Console Service"
log_info "Development deployment ready!"
log_info "🌐 API endpoint: http://localhost:9030"
log_info "🖥️ Console UI: http://localhost:9031/rustfs/console/"
log_info "🔐 Credentials: dev-access / dev-secret"
log_info "📊 Debug logging enabled"
log_info "🏥 Health checks:"
log_info " API: curl http://localhost:9030/health"
log_info " Console: curl http://localhost:9031/health"
}
# Scenario 3: Production-like environment with security
deploy_production() {
log_section "Scenario 3: Production-like Deployment"
log_info "Starting RustFS production-like environment with security"
# Generate secure credentials
ACCESS_KEY=$(openssl rand -hex 16)
SECRET_KEY=$(openssl rand -hex 32)
# Save credentials for reference
cat > rustfs-prod-credentials.env << EOF
# RustFS Production Deployment Credentials
# Generated: $(date)
RUSTFS_ACCESS_KEY=$ACCESS_KEY
RUSTFS_SECRET_KEY=$SECRET_KEY
EOF
chmod 600 rustfs-prod-credentials.env
docker run -d \
--name rustfs-prod \
-p 9040:9000 \
-p 127.0.0.1:9041:9001 \
-e RUSTFS_ADDRESS="0.0.0.0:9000" \
-e RUSTFS_CONSOLE_ADDRESS="0.0.0.0:9001" \
-e RUSTFS_EXTERNAL_ADDRESS=":9040" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="https://myapp.example.com" \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="https://admin.example.com" \
-e RUSTFS_ACCESS_KEY="$ACCESS_KEY" \
-e RUSTFS_SECRET_KEY="$SECRET_KEY" \
-v rustfs-prod-data:/data \
rustfs/rustfs:latest
# Wait for services to be ready
wait_for_service "http://localhost:9040/health" "Prod API Service"
wait_for_service "http://127.0.0.1:9041/health" "Prod Console Service"
log_info "Production deployment ready!"
log_info "🌐 API endpoint: http://localhost:9040 (public)"
log_info "🖥️ Console UI: http://127.0.0.1:9041/rustfs/console/ (localhost only)"
log_info "🔐 Credentials: $ACCESS_KEY / $SECRET_KEY"
log_info "🔒 Security: Console restricted to localhost"
log_info "🏥 Health checks:"
log_info " API: curl http://localhost:9040/health"
log_info " Console: curl http://127.0.0.1:9041/health"
log_warn "⚠️ Console is restricted to localhost for security"
log_warn "⚠️ Credentials saved to rustfs-prod-credentials.env file"
}
# Function to show service status
show_status() {
log_section "Service Status"
echo "Running containers:"
docker ps --filter "name=rustfs-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo -e "\nService endpoints:"
if docker ps --filter "name=rustfs-basic" --format "{{.Names}}" | grep -q rustfs-basic; then
echo " Basic API: http://localhost:9020"
echo " Basic Console: http://localhost:9021/rustfs/console/"
fi
if docker ps --filter "name=rustfs-dev" --format "{{.Names}}" | grep -q rustfs-dev; then
echo " Dev API: http://localhost:9030"
echo " Dev Console: http://localhost:9031/rustfs/console/"
fi
if docker ps --filter "name=rustfs-prod" --format "{{.Names}}" | grep -q rustfs-prod; then
echo " Prod API: http://localhost:9040"
echo " Prod Console: http://127.0.0.1:9041/rustfs/console/"
fi
}
# Function to test services
test_services() {
log_section "Testing Services"
# Test basic deployment
if docker ps --filter "name=rustfs-basic" --format "{{.Names}}" | grep -q rustfs-basic; then
log_info "Testing basic deployment..."
if curl -s http://localhost:9020/health | grep -q "ok"; then
log_info "✓ Basic API health check passed"
else
log_error "✗ Basic API health check failed"
fi
if curl -s http://localhost:9021/health | grep -q "console"; then
log_info "✓ Basic Console health check passed"
else
log_error "✗ Basic Console health check failed"
fi
fi
# Test development deployment
if docker ps --filter "name=rustfs-dev" --format "{{.Names}}" | grep -q rustfs-dev; then
log_info "Testing development deployment..."
if curl -s http://localhost:9030/health | grep -q "ok"; then
log_info "✓ Dev API health check passed"
else
log_error "✗ Dev API health check failed"
fi
if curl -s http://localhost:9031/health | grep -q "console"; then
log_info "✓ Dev Console health check passed"
else
log_error "✗ Dev Console health check failed"
fi
fi
# Test production deployment
if docker ps --filter "name=rustfs-prod" --format "{{.Names}}" | grep -q rustfs-prod; then
log_info "Testing production deployment..."
if curl -s http://localhost:9040/health | grep -q "ok"; then
log_info "✓ Prod API health check passed"
else
log_error "✗ Prod API health check failed"
fi
if curl -s http://127.0.0.1:9041/health | grep -q "console"; then
log_info "✓ Prod Console health check passed"
else
log_error "✗ Prod Console health check failed"
fi
fi
}
# Function to show logs
show_logs() {
log_section "Service Logs"
if [ -n "$1" ]; then
docker logs "$1"
else
echo "Available containers:"
docker ps --filter "name=rustfs-" --format "{{.Names}}"
echo -e "\nUsage: $0 logs <container-name>"
fi
}
# Main menu
case "${1:-menu}" in
"basic")
cleanup
deploy_basic
;;
"dev")
cleanup
deploy_development
;;
"prod")
cleanup
deploy_production
;;
"all")
cleanup
deploy_basic
deploy_development
deploy_production
show_status
;;
"status")
show_status
;;
"test")
test_services
;;
"logs")
show_logs "$2"
;;
"cleanup")
cleanup
docker volume rm rustfs-basic-data rustfs-dev-data rustfs-prod-data 2>/dev/null || true
log_info "Cleanup completed"
;;
"menu"|*)
echo "RustFS Enhanced Docker Deployment Examples"
echo ""
echo "Usage: $0 [command]"
echo ""
echo "Commands:"
echo " basic - Deploy basic RustFS with port mapping"
echo " dev - Deploy development environment"
echo " prod - Deploy production-like environment"
echo " all - Deploy all scenarios"
echo " status - Show status of running containers"
echo " test - Test all running services"
echo " logs - Show logs for specific container"
echo " cleanup - Clean up all containers and volumes"
echo ""
echo "Examples:"
echo " $0 basic # Deploy basic setup"
echo " $0 status # Check running services"
echo " $0 logs rustfs-dev # Show dev container logs"
echo " $0 cleanup # Clean everything up"
;;
esac

View File

@@ -0,0 +1,207 @@
#!/bin/bash
# RustFS Enhanced Security Deployment Script
# This script demonstrates production-ready deployment with enhanced security features
set -e
# Configuration
RUSTFS_IMAGE="${RUSTFS_IMAGE:-rustfs/rustfs:latest}"
CONTAINER_NAME="${CONTAINER_NAME:-rustfs-secure}"
DATA_DIR="${DATA_DIR:-./data}"
CERTS_DIR="${CERTS_DIR:-./certs}"
CONSOLE_PORT="${CONSOLE_PORT:-9443}"
API_PORT="${API_PORT:-9000}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log() {
echo -e "${BLUE}[INFO]${NC} $1"
}
warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1"
exit 1
}
success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
# Check if Docker is available
check_docker() {
if ! command -v docker &> /dev/null; then
error "Docker is not installed or not in PATH"
fi
log "Docker is available"
}
# Generate TLS certificates for console
generate_certs() {
if [[ ! -d "$CERTS_DIR" ]]; then
mkdir -p "$CERTS_DIR"
log "Created certificates directory: $CERTS_DIR"
fi
if [[ ! -f "$CERTS_DIR/console.crt" ]] || [[ ! -f "$CERTS_DIR/console.key" ]]; then
log "Generating TLS certificates for console..."
openssl req -x509 -newkey rsa:4096 \
-keyout "$CERTS_DIR/console.key" \
-out "$CERTS_DIR/console.crt" \
-days 365 -nodes \
-subj "/C=US/ST=CA/L=SF/O=RustFS/CN=localhost"
chmod 600 "$CERTS_DIR/console.key"
chmod 644 "$CERTS_DIR/console.crt"
success "TLS certificates generated"
else
log "TLS certificates already exist"
fi
}
# Create data directory
create_data_dir() {
if [[ ! -d "$DATA_DIR" ]]; then
mkdir -p "$DATA_DIR"
log "Created data directory: $DATA_DIR"
fi
}
# Generate secure credentials
generate_credentials() {
if [[ -z "$RUSTFS_ACCESS_KEY" ]]; then
export RUSTFS_ACCESS_KEY="admin-$(openssl rand -hex 8)"
log "Generated access key: $RUSTFS_ACCESS_KEY"
fi
if [[ -z "$RUSTFS_SECRET_KEY" ]]; then
export RUSTFS_SECRET_KEY="$(openssl rand -hex 32)"
log "Generated secret key: [HIDDEN]"
fi
# Save credentials to .env file
cat > .env << EOF
RUSTFS_ACCESS_KEY=$RUSTFS_ACCESS_KEY
RUSTFS_SECRET_KEY=$RUSTFS_SECRET_KEY
EOF
chmod 600 .env
success "Credentials saved to .env file"
}
# Stop existing container
stop_existing() {
if docker ps -a --format "{{.Names}}" | grep -q "^$CONTAINER_NAME\$"; then
log "Stopping existing container: $CONTAINER_NAME"
docker stop "$CONTAINER_NAME" 2>/dev/null || true
docker rm "$CONTAINER_NAME" 2>/dev/null || true
fi
}
# Deploy RustFS with enhanced security
deploy_rustfs() {
log "Deploying RustFS with enhanced security..."
docker run -d \
--name "$CONTAINER_NAME" \
--restart unless-stopped \
-p "$CONSOLE_PORT:9001" \
-p "$API_PORT:9000" \
-v "$(pwd)/$DATA_DIR:/data" \
-v "$(pwd)/$CERTS_DIR:/certs:ro" \
-e RUSTFS_CONSOLE_TLS_ENABLE=true \
-e RUSTFS_CONSOLE_TLS_CERT=/certs/console.crt \
-e RUSTFS_CONSOLE_TLS_KEY=/certs/console.key \
-e RUSTFS_CONSOLE_RATE_LIMIT_ENABLE=true \
-e RUSTFS_CONSOLE_RATE_LIMIT_RPM=60 \
-e RUSTFS_CONSOLE_AUTH_TIMEOUT=1800 \
-e RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="https://localhost:$CONSOLE_PORT" \
-e RUSTFS_CORS_ALLOWED_ORIGINS="http://localhost:$API_PORT" \
-e RUSTFS_ACCESS_KEY="$RUSTFS_ACCESS_KEY" \
-e RUSTFS_SECRET_KEY="$RUSTFS_SECRET_KEY" \
-e RUSTFS_EXTERNAL_ADDRESS=":$API_PORT" \
"$RUSTFS_IMAGE" /data
# Wait for container to start
sleep 5
if docker ps --format "{{.Names}}" | grep -q "^$CONTAINER_NAME\$"; then
success "RustFS deployed successfully"
else
error "Failed to deploy RustFS"
fi
}
# Check service health
check_health() {
log "Checking service health..."
# Check console health
if curl -k -s "https://localhost:$CONSOLE_PORT/health" | jq -e '.status == "ok"' > /dev/null 2>&1; then
success "Console service is healthy"
else
warn "Console service health check failed"
fi
# Check API health
if curl -s "http://localhost:$API_PORT/health" | jq -e '.status == "ok"' > /dev/null 2>&1; then
success "API service is healthy"
else
warn "API service health check failed"
fi
}
# Display access information
show_access_info() {
echo
echo "=========================================="
echo " RustFS Access Information"
echo "=========================================="
echo
echo "🌐 Console (HTTPS): https://localhost:$CONSOLE_PORT/rustfs/console/"
echo "🔧 API Endpoint: http://localhost:$API_PORT"
echo "🏥 Console Health: https://localhost:$CONSOLE_PORT/health"
echo "🏥 API Health: http://localhost:$API_PORT/health"
echo
echo "🔐 Credentials:"
echo " Access Key: $RUSTFS_ACCESS_KEY"
echo " Secret Key: [Check .env file]"
echo
echo "📝 Logs: docker logs $CONTAINER_NAME"
echo "🛑 Stop: docker stop $CONTAINER_NAME"
echo
echo "⚠️ Note: Console uses self-signed certificate"
echo " Accept the certificate warning in your browser"
echo
}
# Main deployment flow
main() {
log "Starting RustFS Enhanced Security Deployment"
check_docker
create_data_dir
generate_certs
generate_credentials
stop_existing
deploy_rustfs
# Wait a bit for services to start
sleep 10
check_health
show_access_info
success "Deployment completed successfully!"
}
# Run main function
main "$@"

View File

@@ -51,11 +51,12 @@ rustfs-obs = { workspace = true }
rustfs-utils = { workspace = true, features = ["full"] }
rustfs-protos = { workspace = true }
rustfs-s3select-query = { workspace = true }
rustfs-audit-logger = { workspace = true }
rustfs-targets = { workspace = true }
atoi = { workspace = true }
atomic_enum = { workspace = true }
axum.workspace = true
axum-extra = { workspace = true }
axum-server = { workspace = true, features = ["tls-rustls"] }
async-trait = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true }
@@ -102,6 +103,8 @@ tower-http = { workspace = true, features = [
"compression-gzip",
"cors",
"catch-panic",
"timeout",
"limit",
] }
url = { workspace = true }
urlencoding = { workspace = true }
@@ -118,6 +121,9 @@ libsystemd.workspace = true
[target.'cfg(all(target_os = "linux", target_env = "gnu"))'.dependencies]
tikv-jemallocator = "0.6"
[target.'cfg(all(target_os = "linux", target_env = "musl"))'.dependencies]
mimalloc = "0.1"
[target.'cfg(not(target_os = "windows"))'.dependencies]
pprof = { version = "0.15.0", features = ["flamegraph", "protobuf-codec"] }

View File

@@ -12,38 +12,46 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// use crate::license::get_license;
use axum::{
// Router,
body::Body,
http::{Response, StatusCode},
response::IntoResponse,
// routing::get,
};
// use axum_extra::extract::Host;
// use rustfs_config::{RUSTFS_TLS_CERT, RUSTFS_TLS_KEY};
// use rustfs_utils::net::parse_and_resolve_address;
// use std::io;
use http::Uri;
// use axum::response::Redirect;
// use axum_server::tls_rustls::RustlsConfig;
// use http::{HeaderMap, HeaderName, Uri, header};
use crate::config::build;
use crate::license::get_license;
use axum::body::Body;
use axum::response::{IntoResponse, Response};
use axum_extra::extract::Host;
use http::{HeaderMap, HeaderName, StatusCode, Uri};
use mime_guess::from_path;
use rust_embed::RustEmbed;
use serde::Serialize;
use std::net::{IpAddr, SocketAddr};
use std::sync::OnceLock;
// use axum::response::Redirect;
// use axum::routing::get;
// use axum::{
// body::Body,
// http::{Response, StatusCode},
// response::IntoResponse,
// Router,
// };
// use axum_extra::extract::Host;
// use axum_server::tls_rustls::RustlsConfig;
// use http::{header, HeaderMap, HeaderName, Uri};
// use io::Error;
// use mime_guess::from_path;
// use rust_embed::RustEmbed;
// use rustfs_config::{RUSTFS_TLS_CERT, RUSTFS_TLS_KEY};
// use rustfs_utils::parse_and_resolve_address;
// use serde::Serialize;
// use shadow_rs::shadow;
// use std::io;
// use std::net::{IpAddr, SocketAddr};
// use std::sync::OnceLock;
// use std::time::Duration;
// use tokio::signal;
// use tower_http::cors::{Any, CorsLayer};
// use tower_http::trace::TraceLayer;
// use tracing::{debug, error, info, instrument};
use tracing::{error, instrument};
// shadow!(build);
// const RUSTFS_ADMIN_PREFIX: &str = "/rustfs/admin/v3";
const RUSTFS_ADMIN_PREFIX: &str = "/rustfs/admin/v3";
#[derive(RustEmbed)]
#[folder = "$CARGO_MANIFEST_DIR/static"]
@@ -77,235 +85,226 @@ pub(crate) async fn static_handler(uri: Uri) -> impl IntoResponse {
}
}
// #[derive(Debug, Serialize, Clone)]
// pub(crate) struct Config {
// #[serde(skip)]
// port: u16,
// api: Api,
// s3: S3,
// release: Release,
// license: License,
// doc: String,
#[derive(Debug, Serialize, Clone)]
pub(crate) struct Config {
#[serde(skip)]
port: u16,
api: Api,
s3: S3,
release: Release,
license: License,
doc: String,
}
impl Config {
fn new(local_ip: IpAddr, port: u16, version: &str, date: &str) -> Self {
Config {
port,
api: Api {
base_url: format!("http://{local_ip}:{port}{RUSTFS_ADMIN_PREFIX}"),
},
s3: S3 {
endpoint: format!("http://{local_ip}:{port}"),
region: "cn-east-1".to_owned(),
},
release: Release {
version: version.to_string(),
date: date.to_string(),
},
license: License {
name: "Apache-2.0".to_string(),
url: "https://www.apache.org/licenses/LICENSE-2.0".to_string(),
},
doc: "https://rustfs.com/docs/".to_string(),
}
}
fn to_json(&self) -> String {
serde_json::to_string(self).unwrap_or_default()
}
#[allow(dead_code)]
pub(crate) fn version_info(&self) -> String {
format!(
"RELEASE.{}@{} (rust {} {})",
self.release.date.clone(),
self.release.version.clone().trim_start_matches('@'),
build::RUST_VERSION,
build::BUILD_TARGET
)
}
#[allow(dead_code)]
pub(crate) fn version(&self) -> String {
self.release.version.clone()
}
#[allow(dead_code)]
pub(crate) fn license(&self) -> String {
format!("{} {}", self.license.name.clone(), self.license.url.clone())
}
#[allow(dead_code)]
pub(crate) fn doc(&self) -> String {
self.doc.clone()
}
}
#[derive(Debug, Serialize, Clone)]
struct Api {
#[serde(rename = "baseURL")]
base_url: String,
}
#[derive(Debug, Serialize, Clone)]
struct S3 {
endpoint: String,
region: String,
}
#[derive(Debug, Serialize, Clone)]
struct Release {
version: String,
date: String,
}
#[derive(Debug, Serialize, Clone)]
struct License {
name: String,
url: String,
}
pub(crate) static CONSOLE_CONFIG: OnceLock<Config> = OnceLock::new();
#[allow(clippy::const_is_empty)]
pub(crate) fn init_console_cfg(local_ip: IpAddr, port: u16) {
CONSOLE_CONFIG.get_or_init(|| {
let ver = {
if !build::TAG.is_empty() {
build::TAG.to_string()
} else if !build::SHORT_COMMIT.is_empty() {
format!("@{}", build::SHORT_COMMIT)
} else {
build::PKG_VERSION.to_string()
}
};
Config::new(local_ip, port, ver.as_str(), build::COMMIT_DATE_3339)
});
}
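
The version-string fallback above prefers the release tag, then an abbreviated commit (prefixed with '@'), then the crate version. A minimal, self-contained sketch of that selection order (a hypothetical pick_version helper written for illustration, not part of the codebase):

fn pick_version(tag: &str, short_commit: &str, pkg_version: &str) -> String {
    if !tag.is_empty() {
        tag.to_string()
    } else if !short_commit.is_empty() {
        format!("@{short_commit}")
    } else {
        pkg_version.to_string()
    }
}

fn main() {
    assert_eq!(pick_version("1.2.3", "abc1234", "0.1.0"), "1.2.3");
    assert_eq!(pick_version("", "abc1234", "0.1.0"), "@abc1234");
    assert_eq!(pick_version("", "", "0.1.0"), "0.1.0");
}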
// fn is_socket_addr_or_ip_addr(host: &str) -> bool {
// host.parse::<SocketAddr>().is_ok() || host.parse::<IpAddr>().is_ok()
// }
// impl Config {
// fn new(local_ip: IpAddr, port: u16, version: &str, date: &str) -> Self {
// Config {
// port,
// api: Api {
// base_url: format!("http://{local_ip}:{port}/{RUSTFS_ADMIN_PREFIX}"),
// },
// s3: S3 {
// endpoint: format!("http://{local_ip}:{port}"),
// region: "cn-east-1".to_owned(),
// },
// release: Release {
// version: version.to_string(),
// date: date.to_string(),
// },
// license: License {
// name: "Apache-2.0".to_string(),
// url: "https://www.apache.org/licenses/LICENSE-2.0".to_string(),
// },
// doc: "https://rustfs.com/docs/".to_string(),
// }
// }
#[allow(dead_code)]
pub async fn license_handler() -> impl IntoResponse {
let license = get_license().unwrap_or_default();
// fn to_json(&self) -> String {
// serde_json::to_string(self).unwrap_or_default()
// }
Response::builder()
.header("content-type", "application/json")
.status(StatusCode::OK)
.body(Body::from(serde_json::to_string(&license).unwrap_or_default()))
.unwrap()
}
// pub(crate) fn version_info(&self) -> String {
// format!(
// "RELEASE.{}@{} (rust {} {})",
// self.release.date.clone(),
// self.release.version.clone().trim_start_matches('@'),
// build::RUST_VERSION,
// build::BUILD_TARGET
// )
// }
fn _is_private_ip(ip: IpAddr) -> bool {
match ip {
IpAddr::V4(ip) => {
let octets = ip.octets();
// 10.0.0.0/8
octets[0] == 10 ||
// 172.16.0.0/12
(octets[0] == 172 && (octets[1] >= 16 && octets[1] <= 31)) ||
// 192.168.0.0/16
(octets[0] == 192 && octets[1] == 168)
}
IpAddr::V6(_) => false,
}
}
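
The ranges matched by _is_private_ip coincide with the RFC 1918 set that the standard library already exposes as Ipv4Addr::is_private; a quick cross-check using only std:

use std::net::Ipv4Addr;

fn main() {
    // 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 are private; the rest are not.
    assert!(Ipv4Addr::new(10, 1, 2, 3).is_private());
    assert!(Ipv4Addr::new(172, 16, 0, 1).is_private());
    assert!(!Ipv4Addr::new(172, 32, 0, 1).is_private());
    assert!(Ipv4Addr::new(192, 168, 0, 1).is_private());
    assert!(!Ipv4Addr::new(8, 8, 8, 8).is_private());
}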
// pub(crate) fn version(&self) -> String {
// self.release.version.clone()
// }
#[allow(clippy::const_is_empty)]
#[allow(dead_code)]
#[instrument(fields(host))]
pub async fn config_handler(uri: Uri, Host(host): Host, headers: HeaderMap) -> impl IntoResponse {
// Get the scheme from the headers or use the URI scheme
let scheme = headers
.get(HeaderName::from_static("x-forwarded-proto"))
.and_then(|value| value.to_str().ok())
.unwrap_or_else(|| uri.scheme().map(|s| s.as_str()).unwrap_or("http"));
// pub(crate) fn license(&self) -> String {
// format!("{} {}", self.license.name.clone(), self.license.url.clone())
// }
let raw_host = uri.host().unwrap_or(host.as_str());
let host_for_url = if let Ok(socket_addr) = raw_host.parse::<SocketAddr>() {
// Successfully parsed, it's in IP:Port format.
// For IPv6, we need to enclose it in brackets to form a valid URL.
let ip = socket_addr.ip();
if ip.is_ipv6() { format!("[{ip}]") } else { format!("{ip}") }
} else if let Ok(ip) = raw_host.parse::<IpAddr>() {
// Pure IP (no ports)
if ip.is_ipv6() { format!("[{ip}]") } else { ip.to_string() }
} else {
// The domain name may not be able to resolve directly to IP, remove the port
raw_host.split(':').next().unwrap_or(raw_host).to_string()
};
// pub(crate) fn doc(&self) -> String {
// self.doc.clone()
// }
// }
// Make a copy of the current configuration
let mut cfg = match CONSOLE_CONFIG.get() {
Some(cfg) => cfg.clone(),
None => {
error!("Console configuration not initialized");
return Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.body(Body::from("Console configuration not initialized"))
.unwrap();
}
};
// #[derive(Debug, Serialize, Clone)]
// struct Api {
// #[serde(rename = "baseURL")]
// base_url: String,
// }
let url = format!("{}://{}:{}", scheme, host_for_url, cfg.port);
cfg.api.base_url = format!("{url}{RUSTFS_ADMIN_PREFIX}");
cfg.s3.endpoint = url;
// #[derive(Debug, Serialize, Clone)]
// struct S3 {
// endpoint: String,
// region: String,
// }
// #[derive(Debug, Serialize, Clone)]
// struct Release {
// version: String,
// date: String,
// }
// #[derive(Debug, Serialize, Clone)]
// struct License {
// name: String,
// url: String,
// }
// pub(crate) static CONSOLE_CONFIG: OnceLock<Config> = OnceLock::new();
// #[allow(clippy::const_is_empty)]
// pub(crate) fn init_console_cfg(local_ip: IpAddr, port: u16) {
// CONSOLE_CONFIG.get_or_init(|| {
// let ver = {
// if !build::TAG.is_empty() {
// build::TAG.to_string()
// } else if !build::SHORT_COMMIT.is_empty() {
// format!("@{}", build::SHORT_COMMIT)
// } else {
// build::PKG_VERSION.to_string()
// }
// };
// Config::new(local_ip, port, ver.as_str(), build::COMMIT_DATE_3339)
// });
// }
// // fn is_socket_addr_or_ip_addr(host: &str) -> bool {
// // host.parse::<SocketAddr>().is_ok() || host.parse::<IpAddr>().is_ok()
// // }
// #[allow(dead_code)]
// async fn license_handler() -> impl IntoResponse {
// let license = get_license().unwrap_or_default();
// Response::builder()
// .header("content-type", "application/json")
// .status(StatusCode::OK)
// .body(Body::from(serde_json::to_string(&license).unwrap_or_default()))
// .unwrap()
// }
// fn _is_private_ip(ip: IpAddr) -> bool {
// match ip {
// IpAddr::V4(ip) => {
// let octets = ip.octets();
// // 10.0.0.0/8
// octets[0] == 10 ||
// // 172.16.0.0/12
// (octets[0] == 172 && (octets[1] >= 16 && octets[1] <= 31)) ||
// // 192.168.0.0/16
// (octets[0] == 192 && octets[1] == 168)
// }
// IpAddr::V6(_) => false,
// }
// }
// #[allow(clippy::const_is_empty)]
// #[allow(dead_code)]
// #[instrument(fields(host))]
// async fn config_handler(uri: Uri, Host(host): Host, headers: HeaderMap) -> impl IntoResponse {
// // Get the scheme from the headers or use the URI scheme
// let scheme = headers
// .get(HeaderName::from_static("x-forwarded-proto"))
// .and_then(|value| value.to_str().ok())
// .unwrap_or_else(|| uri.scheme().map(|s| s.as_str()).unwrap_or("http"));
// // Print logs for debugging
// info!("Scheme: {}, ", scheme);
// // Get the host from the uri and use the value of the host extractor if it doesn't have one
// let host = uri.host().unwrap_or(host.as_str());
// let host = if let Ok(socket_addr) = host.parse::<SocketAddr>() {
// // Successfully parsed, it's in IP:Port format.
// // For IPv6, we need to enclose it in brackets to form a valid URL.
// let ip = socket_addr.ip();
// if ip.is_ipv6() { format!("[{ip}]") } else { format!("{ip}") }
// } else {
// // Failed to parse, it might be a domain name or a bare IP, use it as is.
// host.to_string()
// };
// // Make a copy of the current configuration
// let mut cfg = match CONSOLE_CONFIG.get() {
// Some(cfg) => cfg.clone(),
// None => {
// error!("Console configuration not initialized");
// return Response::builder()
// .status(StatusCode::INTERNAL_SERVER_ERROR)
// .body(Body::from("Console configuration not initialized"))
// .unwrap();
// }
// };
// let url = format!("{}://{}:{}", scheme, host, cfg.port);
// cfg.api.base_url = format!("{url}{RUSTFS_ADMIN_PREFIX}");
// cfg.s3.endpoint = url;
// Response::builder()
// .header("content-type", "application/json")
// .status(StatusCode::OK)
// .body(Body::from(cfg.to_json()))
// .unwrap()
// }
Response::builder()
.header("content-type", "application/json")
.status(StatusCode::OK)
.body(Body::from(cfg.to_json()))
.unwrap()
}
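
The host normalization in config_handler has three branches: an IP:port pair is reduced to the bare IP (with brackets added for IPv6), a bare IP is used as-is, and a domain name has any port suffix stripped. A self-contained sketch of the same logic (a hypothetical normalize_host helper extracted for illustration):

use std::net::{IpAddr, SocketAddr};

fn normalize_host(raw_host: &str) -> String {
    if let Ok(socket_addr) = raw_host.parse::<SocketAddr>() {
        // IP:port form; IPv6 needs brackets to stay URL-safe.
        let ip = socket_addr.ip();
        if ip.is_ipv6() { format!("[{ip}]") } else { ip.to_string() }
    } else if let Ok(ip) = raw_host.parse::<IpAddr>() {
        // Bare IP with no port.
        if ip.is_ipv6() { format!("[{ip}]") } else { ip.to_string() }
    } else {
        // Domain name: drop an optional port suffix.
        raw_host.split(':').next().unwrap_or(raw_host).to_string()
    }
}

fn main() {
    assert_eq!(normalize_host("192.168.1.5:9000"), "192.168.1.5");
    assert_eq!(normalize_host("[::1]:9000"), "[::1]");
    assert_eq!(normalize_host("rustfs.example.com:9000"), "rustfs.example.com");
}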
// pub fn register_router() -> Router {
// Router::new()
// // .route("/license", get(license_handler))
// // .route("/config.json", get(config_handler))
// .route("/license", get(license_handler))
// .route("/config.json", get(config_handler))
// .fallback_service(get(static_handler))
// }
//
// #[allow(dead_code)]
// pub async fn start_static_file_server(
// addrs: &str,
// local_ip: IpAddr,
// access_key: &str,
// secret_key: &str,
// tls_path: Option<String>,
// ) {
// pub async fn start_static_file_server(addrs: &str, tls_path: Option<String>) {
// // Configure CORS
// let cors = CorsLayer::new()
// .allow_origin(Any) // In the production environment, we recommend that you specify a specific domain name
// .allow_methods([http::Method::GET, http::Method::POST])
// .allow_headers([header::CONTENT_TYPE]);
//
// // Create a route
// let app = register_router()
// .layer(cors)
// .layer(tower_http::compression::CompressionLayer::new().gzip(true).deflate(true))
// .layer(TraceLayer::new_for_http());
// let server_addr = parse_and_resolve_address(addrs).expect("Failed to parse socket address");
// let server_port = server_addr.port();
// let server_address = server_addr.to_string();
// info!(
// "WebUI: http://{}:{} http://127.0.0.1:{} http://{}",
// local_ip, server_port, server_port, server_address
// );
// info!(" RootUser: {}", access_key);
// info!(" RootPass: {}", secret_key);
//
// // Check and start the HTTPS/HTTP server
// match start_server(server_addr, tls_path, app.clone()).await {
// Ok(_) => info!("Server shutdown gracefully"),
// Err(e) => error!("Server error: {}", e),
// match start_server(addrs, tls_path, app).await {
// Ok(_) => info!("Console Server shutdown gracefully"),
// Err(e) => error!("Console Server error: {}", e),
// }
// }
// async fn start_server(server_addr: SocketAddr, tls_path: Option<String>, app: Router) -> io::Result<()> {
//
// async fn start_server(addrs: &str, tls_path: Option<String>, app: Router) -> io::Result<()> {
// let server_addr = parse_and_resolve_address(addrs).expect("Console Failed to parse socket address");
// let server_port = server_addr.port();
// let server_address = server_addr.to_string();
//
// info!("Console WebUI: http://{} http://127.0.0.1:{} ", server_address, server_port);
//
// let tls_path = tls_path.unwrap_or_default();
// let key_path = format!("{tls_path}/{RUSTFS_TLS_KEY}");
// let cert_path = format!("{tls_path}/{RUSTFS_TLS_CERT}");
@@ -314,38 +313,38 @@ pub(crate) async fn static_handler(uri: Uri) -> impl IntoResponse {
// let handle_clone = handle.clone();
// tokio::spawn(async move {
// shutdown_signal().await;
// info!("Initiating graceful shutdown...");
// info!("Console Initiating graceful shutdown...");
// handle_clone.graceful_shutdown(Some(Duration::from_secs(10)));
// });
//
// let has_tls_certs = tokio::try_join!(tokio::fs::metadata(&key_path), tokio::fs::metadata(&cert_path)).is_ok();
// info!("Console TLS certs: {:?}", has_tls_certs);
// if has_tls_certs {
// info!("Found TLS certificates, starting with HTTPS");
// info!("Console Found TLS certificates, starting with HTTPS");
// match RustlsConfig::from_pem_file(cert_path, key_path).await {
// Ok(config) => {
// info!("Starting HTTPS server...");
// info!("Console Starting HTTPS server...");
// axum_server::bind_rustls(server_addr, config)
// .handle(handle.clone())
// .serve(app.into_make_service())
// .await
// .map_err(io::Error::other)?;
// info!("HTTPS server running on https://{}", server_addr);
// .map_err(Error::other)?;
//
// info!("Console HTTPS server running on https://{}", server_addr);
//
// Ok(())
// }
// Err(e) => {
// error!("Failed to create TLS config: {}", e);
// error!("Console Failed to create TLS config: {}", e);
// start_http_server(server_addr, app, handle).await
// }
// }
// } else {
// info!("TLS certificates not found at {} and {}", key_path, cert_path);
// info!("Console TLS certificates not found at {} and {}", key_path, cert_path);
// start_http_server(server_addr, app, handle).await
// }
// }
//
// #[allow(dead_code)]
// /// 308 redirect for HTTP to HTTPS
// fn redirect_to_https(https_port: u16) -> Router {
@@ -364,38 +363,38 @@ pub(crate) async fn static_handler(uri: Uri) -> impl IntoResponse {
// }),
// )
// }
//
// async fn start_http_server(addr: SocketAddr, app: Router, handle: axum_server::Handle) -> io::Result<()> {
// debug!("Starting HTTP server...");
// info!("Console Starting HTTP server... {}", addr.to_string());
// axum_server::bind(addr)
// .handle(handle)
// .serve(app.into_make_service())
// .await
// .map_err(io::Error::other)
// .map_err(Error::other)
// }
//
// async fn shutdown_signal() {
// let ctrl_c = async {
// signal::ctrl_c().await.expect("failed to install Ctrl+C handler");
// signal::ctrl_c().await.expect("Console failed to install Ctrl+C handler");
// };
//
// #[cfg(unix)]
// let terminate = async {
// signal::unix::signal(signal::unix::SignalKind::terminate())
// .expect("failed to install signal handler")
// .expect("Console failed to install signal handler")
// .recv()
// .await;
// };
//
// #[cfg(not(unix))]
// let terminate = std::future::pending::<()>();
//
// tokio::select! {
// _ = ctrl_c => {
// info!("shutdown_signal ctrl_c")
// info!("Console shutdown_signal ctrl_c")
// },
// _ = terminate => {
// info!("shutdown_signal terminate")
// info!("Console shutdown_signal terminate")
// },
// }
// }

View File

@@ -94,6 +94,28 @@ pub struct AccountInfo {
pub policy: BucketPolicy,
}
/// Health check handler for endpoint monitoring
pub struct HealthCheckHandler {}
#[async_trait::async_trait]
impl Operation for HealthCheckHandler {
async fn call(&self, _req: S3Request<Body>, _params: Params<'_, '_>) -> S3Result<S3Response<(StatusCode, Body)>> {
use serde_json::json;
let health_info = json!({
"status": "ok",
"service": "rustfs-endpoint",
"timestamp": chrono::Utc::now().to_rfc3339(),
"version": env!("CARGO_PKG_VERSION")
});
let body = serde_json::to_string(&health_info).unwrap_or_else(|_| "{}".to_string());
let response_body = Body::from(body);
Ok(S3Response::new((StatusCode::OK, response_body)))
}
}
pub struct AccountInfoHandler {}
#[async_trait::async_trait]
impl Operation for AccountInfoHandler {
@@ -1300,7 +1322,7 @@ impl Operation for ProfileHandler {
error!("Failed to build profiler report: {}", e);
return Ok(S3Response::new((
StatusCode::INTERNAL_SERVER_ERROR,
Body::from(format!("Failed to build profile report: {}", e)),
Body::from(format!("Failed to build profile report: {e}")),
)));
}
};
@@ -1331,7 +1353,7 @@ impl Operation for ProfileHandler {
error!("Failed to generate flamegraph: {}", e);
return Ok(S3Response::new((
StatusCode::INTERNAL_SERVER_ERROR,
Body::from(format!("Failed to generate flamegraph: {}", e)),
Body::from(format!("Failed to generate flamegraph: {e}")),
)));
}
};

View File

@@ -21,7 +21,8 @@ pub mod utils;
// use ecstore::global::{is_dist_erasure, is_erasure};
use handlers::{
GetReplicationMetricsHandler, ListRemoteTargetHandler, RemoveRemoteTargetHandler, SetRemoteTargetHandler, bucket_meta,
GetReplicationMetricsHandler, HealthCheckHandler, ListRemoteTargetHandler, RemoveRemoteTargetHandler, SetRemoteTargetHandler,
bucket_meta,
event::{
GetBucketNotification, ListNotificationTargets, NotificationTarget, RemoveBucketNotification, RemoveNotificationTarget,
SetBucketNotification,
@@ -41,6 +42,9 @@ const ADMIN_PREFIX: &str = "/rustfs/admin";
pub fn make_admin_route(console_enabled: bool) -> std::io::Result<impl S3Route> {
let mut r: S3Router<AdminOperation> = S3Router::new(console_enabled);
// Health check endpoint for monitoring and orchestration
r.insert(Method::GET, "/health", AdminOperation(&HealthCheckHandler {}))?;
// 1
r.insert(Method::POST, "/", AdminOperation(&sts::AssumeRoleHandle {}))?;

View File

@@ -0,0 +1,79 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(test)]
mod tests {
use crate::config::Opt;
use clap::Parser;
#[test]
fn test_default_console_configuration() {
// Test that default console configuration is correct
let args = vec!["rustfs", "/test/volume"];
let opt = Opt::parse_from(args);
assert!(opt.console_enable);
assert_eq!(opt.console_address, ":9001");
assert_eq!(opt.external_address, ":9000"); // Now defaults to DEFAULT_ADDRESS
assert_eq!(opt.address, ":9000");
}
#[test]
fn test_custom_console_configuration() {
// Test custom console configuration (the bare --console-enable flag stays true;
// the space-separated "false" is not consumed as its value, see the assertion below)
let args = vec![
"rustfs",
"/test/volume",
"--console-address",
":8080",
"--address",
":8000",
"--console-enable",
"false",
];
let opt = Opt::parse_from(args);
assert!(opt.console_enable);
assert_eq!(opt.console_address, ":8080");
assert_eq!(opt.address, ":8000");
}
#[test]
fn test_external_address_configuration() {
// Test external address configuration for Docker
let args = vec!["rustfs", "/test/volume", "--external-address", ":9020"];
let opt = Opt::parse_from(args);
assert_eq!(opt.external_address, ":9020".to_string());
}
#[test]
fn test_console_and_endpoint_ports_different() {
// Ensure console and endpoint use different default ports
let args = vec!["rustfs", "/test/volume"];
let opt = Opt::parse_from(args);
// Parse port numbers from addresses
let endpoint_port: u16 = opt.address.trim_start_matches(':').parse().expect("Invalid endpoint port");
let console_port: u16 = opt
.console_address
.trim_start_matches(':')
.parse()
.expect("Invalid console port");
assert_ne!(endpoint_port, console_port, "Console and endpoint should use different ports");
assert_eq!(endpoint_port, 9000);
assert_eq!(console_port, 9001);
}
}

View File

@@ -17,6 +17,9 @@ use const_str::concat;
use std::string::ToString;
shadow_rs::shadow!(build);
#[cfg(test)]
mod config_test;
#[allow(clippy::const_is_empty)]
const SHORT_VERSION: &str = {
if !build::TAG.is_empty() {
@@ -68,6 +71,16 @@ pub struct Opt {
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_ENABLE, env = "RUSTFS_CONSOLE_ENABLE")]
pub console_enable: bool,
/// Console server bind address
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_ADDRESS.to_string(), env = "RUSTFS_CONSOLE_ADDRESS")]
pub console_address: String,
/// External address for console to access endpoint (used in Docker deployments)
/// This should match the mapped host port when using Docker port mapping
/// Example: ":9020" when mapping host port 9020 to container port 9000
#[arg(long, default_value_t = rustfs_config::DEFAULT_ADDRESS.to_string(), env = "RUSTFS_EXTERNAL_ADDRESS")]
pub external_address: String,
/// Observability endpoint for traces, metrics, and logs (only gRPC mode is supported).
#[arg(long, default_value_t = rustfs_config::DEFAULT_OBS_ENDPOINT.to_string(), env = "RUSTFS_OBS_ENDPOINT")]
pub obs_endpoint: String,
@@ -76,6 +89,18 @@ pub struct Opt {
#[arg(long, env = "RUSTFS_TLS_PATH")]
pub tls_path: Option<String>,
/// Enable rate limiting for console
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_ENABLE, env = "RUSTFS_CONSOLE_RATE_LIMIT_ENABLE")]
pub console_rate_limit_enable: bool,
/// Console rate limit: requests per minute
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_RPM, env = "RUSTFS_CONSOLE_RATE_LIMIT_RPM")]
pub console_rate_limit_rpm: u32,
/// Console authentication timeout in seconds
#[arg(long, default_value_t = rustfs_config::DEFAULT_CONSOLE_AUTH_TIMEOUT, env = "RUSTFS_CONSOLE_AUTH_TIMEOUT")]
pub console_auth_timeout: u64,
#[arg(long, env = "RUSTFS_LICENSE")]
pub license: Option<String>,

View File

@@ -26,7 +26,11 @@ mod update;
mod version;
// Ensure the correct path for parse_license is imported
use crate::server::{SHUTDOWN_TIMEOUT, ServiceState, ServiceStateManager, ShutdownSignal, start_http_server, wait_for_shutdown};
use crate::admin::console::init_console_cfg;
use crate::server::{
SHUTDOWN_TIMEOUT, ServiceState, ServiceStateManager, ShutdownSignal, start_console_server, start_http_server,
wait_for_shutdown,
};
use crate::storage::ecfs::{process_lambda_configurations, process_queue_configurations, process_topic_configurations};
use chrono::Datelike;
use clap::Parser;
@@ -37,6 +41,8 @@ use rustfs_ahm::{
};
use rustfs_common::globals::set_global_addr;
use rustfs_config::DEFAULT_DELIMITER;
use rustfs_config::DEFAULT_UPDATE_CHECK;
use rustfs_config::ENV_UPDATE_CHECK;
use rustfs_ecstore::bucket::metadata_sys;
use rustfs_ecstore::bucket::metadata_sys::init_bucket_metadata_sys;
use rustfs_ecstore::cmd::bucket_replication::init_bucket_replication_pool;
@@ -58,7 +64,6 @@ use rustfs_iam::init_iam_sys;
use rustfs_notify::global::notifier_instance;
use rustfs_obs::{init_obs, set_global_guard};
use rustfs_targets::arn::TargetID;
use rustfs_utils::dns_resolver::init_global_dns_resolver;
use rustfs_utils::net::parse_and_resolve_address;
use s3s::s3_error;
use std::io::{Error, Result};
@@ -70,6 +75,18 @@ use tracing::{debug, error, info, instrument, warn};
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
#[cfg(all(target_os = "linux", target_env = "musl"))]
#[global_allocator]
static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
const LOGO: &str = r#"
░█▀▄░█░█░█▀▀░▀█▀░█▀▀░█▀▀
░█▀▄░█░█░▀▀█░░█░░█▀▀░▀▀█
░▀░▀░▀▀▀░▀▀▀░░▀░░▀░░░▀▀▀
"#;
#[instrument]
fn print_server_info() {
let current_year = chrono::Utc::now().year();
@@ -93,6 +110,9 @@ async fn main() -> Result<()> {
// Initialize Observability
let (_logger, guard) = init_obs(Some(opt.clone().obs_endpoint)).await;
// print startup logo
info!("{}", LOGO);
// Store in global storage
set_global_guard(guard).map_err(Error::other)?;
@@ -108,12 +128,12 @@ async fn main() -> Result<()> {
async fn run(opt: config::Opt) -> Result<()> {
debug!("opt: {:?}", &opt);
// Initialize global DNS resolver early for enhanced DNS resolution (concurrent)
let dns_init = tokio::spawn(async {
if let Err(e) = init_global_dns_resolver().await {
warn!("Failed to initialize global DNS resolver: {}. Using standard DNS resolution.", e);
}
});
// // Initialize global DNS resolver early for enhanced DNS resolution (concurrent)
// let dns_init = tokio::spawn(async {
// if let Err(e) = rustfs_utils::dns_resolver::init_global_dns_resolver().await {
// warn!("Failed to initialize global DNS resolver: {}. Using standard DNS resolution.", e);
// }
// });
if let Some(region) = &opt.region {
rustfs_ecstore::global::set_global_region(region.clone());
@@ -123,7 +143,15 @@ async fn run(opt: config::Opt) -> Result<()> {
let server_port = server_addr.port();
let server_address = server_addr.to_string();
debug!("server_address {}", &server_address);
info!(
target: "rustfs::main::run",
server_address = %server_address,
ip = %server_addr.ip(),
port = %server_port,
version = %version::get_version(),
"Starting RustFS server at {}",
&server_address
);
// Set up AK and SK
rustfs_ecstore::global::init_global_action_cred(Some(opt.access_key.clone()), Some(opt.secret_key.clone()));
@@ -132,12 +160,17 @@ async fn run(opt: config::Opt) -> Result<()> {
set_global_addr(&opt.address).await;
// // Wait for DNS initialization to complete before network-heavy operations
// dns_init.await.map_err(Error::other)?;
// For RPC
let (endpoint_pools, setup_type) =
EndpointServerPools::from_volumes(server_address.clone().as_str(), opt.volumes.clone()).map_err(Error::other)?;
let (endpoint_pools, setup_type) = EndpointServerPools::from_volumes(server_address.clone().as_str(), opt.volumes.clone())
.await
.map_err(Error::other)?;
for (i, eps) in endpoint_pools.as_ref().iter().enumerate() {
info!(
target: "rustfs::main::run",
"Formatting {}st pool, {} set(s), {} drives per set.",
i + 1,
eps.set_count,
@@ -151,12 +184,20 @@ async fn run(opt: config::Opt) -> Result<()> {
for (i, eps) in endpoint_pools.as_ref().iter().enumerate() {
info!(
target: "rustfs::main::run",
id = i,
set_count = eps.set_count,
drives_per_set = eps.drives_per_set,
cmd = ?eps.cmd_line,
"created endpoints {}, set_count:{}, drives_per_set: {}, cmd: {:?}",
i, eps.set_count, eps.drives_per_set, eps.cmd_line
);
for ep in eps.endpoints.as_ref().iter() {
info!(" - {}", ep);
info!(
target: "rustfs::main::run",
" - endpoint: {}", ep
);
}
}
@@ -165,6 +206,52 @@ async fn run(opt: config::Opt) -> Result<()> {
state_manager.update(ServiceState::Starting);
let shutdown_tx = start_http_server(&opt, state_manager.clone()).await?;
// Start console server if enabled
let console_shutdown_tx = shutdown_tx.clone();
if opt.console_enable && !opt.console_address.is_empty() {
// Deal with port mapping issues for virtual machines like docker
let (external_addr, external_port) = if !opt.external_address.is_empty() {
let external_addr = parse_and_resolve_address(opt.external_address.as_str()).map_err(Error::other)?;
let external_port = external_addr.port();
if external_port != server_port {
warn!(
"External port {} is different from server port {}, ensure your firewall allows access to the external port if needed.",
external_port, server_port
);
}
info!(
target: "rustfs::main::run",
external_address = %external_addr,
external_port = %external_port,
"Using external address {} for endpoint access", external_addr
);
rustfs_ecstore::global::set_global_rustfs_external_port(external_port);
set_global_addr(&opt.external_address).await;
(external_addr.ip(), external_port)
} else {
(server_addr.ip(), server_port)
};
warn!("Starting console server on address: '{}', port: '{}'", external_addr, external_port);
// init console configuration
init_console_cfg(external_addr, external_port);
let opt_clone = opt.clone();
tokio::spawn(async move {
let console_shutdown_rx = console_shutdown_tx.subscribe();
if let Err(e) = start_console_server(&opt_clone, console_shutdown_rx).await {
error!("Console server failed to start: {}", e);
}
});
} else {
info!("Console server is disabled.");
info!("You can access the RustFS API at {}", &opt.address);
info!("For more information, visit https://rustfs.com/docs/");
info!("To enable the console, restart the server with --console-enable and a valid --console-address.");
info!(
"Current console address is set to: '{}' ,console enable is set to: '{}'",
&opt.console_address, &opt.console_enable
);
}
set_global_endpoints(endpoint_pools.as_ref().clone());
update_erasure_type(setup_type).await;
@@ -184,9 +271,6 @@ async fn run(opt: config::Opt) -> Result<()> {
// Initialize event notifier
init_event_notifier().await;
// Wait for DNS initialization to complete before network-heavy operations
dns_init.await.map_err(Error::other)?;
let buckets_list = store
.list_bucket(&BucketOptions {
no_metadata: true,
@@ -198,31 +282,11 @@ async fn run(opt: config::Opt) -> Result<()> {
// Collect bucket names into a vector
let buckets: Vec<String> = buckets_list.into_iter().map(|v| v.name).collect();
// Parallelize initialization tasks for better network performance
let bucket_metadata_task = tokio::spawn({
let store = store.clone();
let buckets = buckets.clone();
async move {
init_bucket_metadata_sys(store, buckets).await;
}
});
init_bucket_metadata_sys(store.clone(), buckets.clone()).await;
let iam_init_task = tokio::spawn({
let store = store.clone();
async move { init_iam_sys(store).await }
});
init_iam_sys(store.clone()).await.map_err(Error::other)?;
let notification_config_task = tokio::spawn({
let buckets = buckets.clone();
async move {
add_bucket_notification_configuration(buckets).await;
}
});
// Wait for all parallel initialization tasks to complete
bucket_metadata_task.await.map_err(Error::other)?;
iam_init_task.await.map_err(Error::other)??;
notification_config_task.await.map_err(Error::other)?;
add_bucket_notification_configuration(buckets.clone()).await;
// Initialize the global notification system
new_global_notification_sys(endpoint_pools.clone()).await.map_err(|err| {
@@ -237,7 +301,12 @@ async fn run(opt: config::Opt) -> Result<()> {
let enable_scanner = parse_bool_env_var("RUSTFS_ENABLE_SCANNER", true);
let enable_heal = parse_bool_env_var("RUSTFS_ENABLE_HEAL", true);
info!("Background services configuration: scanner={}, heal={}", enable_scanner, enable_heal);
info!(
target: "rustfs::main::run",
enable_scanner = enable_scanner,
enable_heal = enable_heal,
"Background services configuration: scanner={}, heal={}", enable_scanner, enable_heal
);
// Initialize heal manager and scanner based on environment variables
if enable_heal || enable_scanner {
@@ -247,11 +316,11 @@ async fn run(opt: config::Opt) -> Result<()> {
let heal_manager = init_heal_manager(heal_storage, None).await?;
if enable_scanner {
info!("Starting scanner with heal manager...");
info!(target: "rustfs::main::run","Starting scanner with heal manager...");
let scanner = Scanner::new(Some(ScannerConfig::default()), Some(heal_manager));
scanner.start().await?;
} else {
info!("Scanner disabled, but heal manager is initialized and available");
info!(target: "rustfs::main::run","Scanner disabled, but heal manager is initialized and available");
}
} else if enable_scanner {
info!("Starting scanner without heal manager...");
@@ -259,7 +328,7 @@ async fn run(opt: config::Opt) -> Result<()> {
scanner.start().await?;
}
} else {
info!("Both scanner and heal are disabled, skipping AHM service initialization");
info!(target: "rustfs::main::run","Both scanner and heal are disabled, skipping AHM service initialization");
}
// print server info
@@ -267,7 +336,153 @@ async fn run(opt: config::Opt) -> Result<()> {
// initialize bucket replication pool
init_bucket_replication_pool().await;
// Async update check with timeout (optional)
init_update_check();
// Pause briefly (1 second) before waiting for shutdown signals
tokio::time::sleep(SHUTDOWN_TIMEOUT).await;
// listen to the shutdown signal
match wait_for_shutdown().await {
#[cfg(unix)]
ShutdownSignal::CtrlC | ShutdownSignal::Sigint | ShutdownSignal::Sigterm => {
handle_shutdown(&state_manager, &shutdown_tx).await;
}
#[cfg(not(unix))]
ShutdownSignal::CtrlC => {
handle_shutdown(&state_manager, &shutdown_tx).await;
}
}
info!(target: "rustfs::main::run","server is stopped state: {:?}", state_manager.current_state());
Ok(())
}
/// Parse a boolean environment variable with a default value
///
/// Only the literal strings "true" and "false" are parsed; any other value,
/// or an unset variable, falls back to the provided default.
fn parse_bool_env_var(var_name: &str, default: bool) -> bool {
std::env::var(var_name)
.unwrap_or_else(|_| default.to_string())
.parse::<bool>()
.unwrap_or(default)
}
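
Note that str::parse::<bool> accepts only the exact literals "true" and "false", so values such as "1" or "yes" silently fall back to the default; a minimal check:

fn main() {
    // Exact literals parse...
    assert!("true".parse::<bool>().unwrap());
    assert!(!"false".parse::<bool>().unwrap());
    // ...anything else errors, so the caller's default wins.
    assert!("1".parse::<bool>().is_err());
    assert!("yes".parse::<bool>().is_err());
}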
/// Handles the shutdown process of the server
async fn handle_shutdown(state_manager: &ServiceStateManager, shutdown_tx: &tokio::sync::broadcast::Sender<()>) {
info!(
target: "rustfs::main::handle_shutdown",
"Shutdown signal received in main thread"
);
// update the status to stopping first
state_manager.update(ServiceState::Stopping);
// Check environment variables to determine what services need to be stopped
let enable_scanner = parse_bool_env_var("RUSTFS_ENABLE_SCANNER", true);
let enable_heal = parse_bool_env_var("RUSTFS_ENABLE_HEAL", true);
// Stop background services based on what was enabled
if enable_scanner || enable_heal {
info!(
target: "rustfs::main::handle_shutdown",
"Stopping background services (data scanner and auto heal)..."
);
shutdown_background_services();
info!(
target: "rustfs::main::handle_shutdown",
"Stopping AHM services..."
);
shutdown_ahm_services();
} else {
info!(
target: "rustfs::main::handle_shutdown",
"Background services were disabled, skipping AHM shutdown"
);
}
// Stop the notification system
shutdown_event_notifier().await;
info!(
target: "rustfs::main::handle_shutdown",
"Server is stopping..."
);
let _ = shutdown_tx.send(());
// Wait for the worker thread to complete the cleaning work
tokio::time::sleep(SHUTDOWN_TIMEOUT).await;
// the last updated status is stopped
state_manager.update(ServiceState::Stopped);
info!(
target: "rustfs::main::handle_shutdown",
"Server stopped current "
);
}
#[instrument]
async fn init_event_notifier() {
info!(
target: "rustfs::main::init_event_notifier",
"Initializing event notifier..."
);
// 1. Get the global configuration loaded by ecstore
let server_config = match GLOBAL_SERVER_CONFIG.get() {
Some(config) => config.clone(), // Clone the config to pass ownership
None => {
error!("Event notifier initialization failed: Global server config not loaded.");
return;
}
};
info!(
target: "rustfs::main::init_event_notifier",
"Global server configuration loaded successfully"
);
// 2. Check if the notify subsystem exists in the configuration, and skip initialization if it doesn't
if server_config
.get_value(rustfs_config::notify::NOTIFY_MQTT_SUB_SYS, DEFAULT_DELIMITER)
.is_none()
|| server_config
.get_value(rustfs_config::notify::NOTIFY_WEBHOOK_SUB_SYS, DEFAULT_DELIMITER)
.is_none()
{
info!(
target: "rustfs::main::init_event_notifier",
"'notify' subsystem not configured, skipping event notifier initialization."
);
return;
}
info!(
target: "rustfs::main::init_event_notifier",
"Event notifier configuration found, proceeding with initialization."
);
// 3. Initialize the notification system asynchronously with a global configuration
// Use direct await for better error handling and faster initialization
if let Err(e) = rustfs_notify::initialize(server_config).await {
error!("Failed to initialize event notifier system: {}", e);
} else {
info!(
target: "rustfs::main::init_event_notifier",
"Event notifier system initialized successfully."
);
}
}
fn init_update_check() {
let update_check_enable = std::env::var(ENV_UPDATE_CHECK)
.unwrap_or_else(|_| DEFAULT_UPDATE_CHECK.to_string())
.parse::<bool>()
.unwrap_or(DEFAULT_UPDATE_CHECK);
if !update_check_enable {
return;
}
// Async update check with timeout
tokio::spawn(async {
use crate::update::{UpdateCheckError, check_updates};
@@ -302,106 +517,6 @@ async fn run(opt: config::Opt) -> Result<()> {
}
}
});
// Perform hibernation for 1 second
tokio::time::sleep(SHUTDOWN_TIMEOUT).await;
// listen to the shutdown signal
match wait_for_shutdown().await {
#[cfg(unix)]
ShutdownSignal::CtrlC | ShutdownSignal::Sigint | ShutdownSignal::Sigterm => {
handle_shutdown(&state_manager, &shutdown_tx).await;
}
#[cfg(not(unix))]
ShutdownSignal::CtrlC => {
handle_shutdown(&state_manager, &shutdown_tx).await;
}
}
info!("server is stopped state: {:?}", state_manager.current_state());
Ok(())
}
/// Parse a boolean environment variable with default value
///
/// Returns true if the environment variable is not set or set to true/1/yes/on/enabled,
/// false if set to false/0/no/off/disabled
fn parse_bool_env_var(var_name: &str, default: bool) -> bool {
std::env::var(var_name)
.unwrap_or_else(|_| default.to_string())
.parse::<bool>()
.unwrap_or(default)
}
/// Handles the shutdown process of the server
async fn handle_shutdown(state_manager: &ServiceStateManager, shutdown_tx: &tokio::sync::broadcast::Sender<()>) {
info!("Shutdown signal received in main thread");
// update the status to stopping first
state_manager.update(ServiceState::Stopping);
// Check environment variables to determine what services need to be stopped
let enable_scanner = parse_bool_env_var("RUSTFS_ENABLE_SCANNER", true);
let enable_heal = parse_bool_env_var("RUSTFS_ENABLE_HEAL", true);
// Stop background services based on what was enabled
if enable_scanner || enable_heal {
info!("Stopping background services (data scanner and auto heal)...");
shutdown_background_services();
info!("Stopping AHM services...");
shutdown_ahm_services();
} else {
info!("Background services were disabled, skipping AHM shutdown");
}
// Stop the notification system
shutdown_event_notifier().await;
info!("Server is stopping...");
let _ = shutdown_tx.send(());
// Wait for the worker thread to complete the cleaning work
tokio::time::sleep(SHUTDOWN_TIMEOUT).await;
// the last updated status is stopped
state_manager.update(ServiceState::Stopped);
info!("Server stopped current ");
}
#[instrument]
async fn init_event_notifier() {
info!("Initializing event notifier...");
// 1. Get the global configuration loaded by ecstore
let server_config = match GLOBAL_SERVER_CONFIG.get() {
Some(config) => config.clone(), // Clone the config to pass ownership
None => {
error!("Event notifier initialization failed: Global server config not loaded.");
return;
}
};
info!("Global server configuration loaded successfully. config: {:?}", server_config);
// 2. Check if the notify subsystem exists in the configuration, and skip initialization if it doesn't
if server_config
.get_value(rustfs_config::notify::NOTIFY_MQTT_SUB_SYS, DEFAULT_DELIMITER)
.is_none()
|| server_config
.get_value(rustfs_config::notify::NOTIFY_WEBHOOK_SUB_SYS, DEFAULT_DELIMITER)
.is_none()
{
info!("'notify' subsystem not configured, skipping event notifier initialization.");
return;
}
info!("Event notifier configuration found, proceeding with initialization.");
// 3. Initialize the notification system asynchronously with a global configuration
// Use direct await for better error handling and faster initialization
if let Err(e) = rustfs_notify::initialize(server_config).await {
error!("Failed to initialize event notifier system: {}", e);
} else {
info!("Event notifier system initialized successfully.");
}
}
#[instrument(skip_all)]
@@ -422,7 +537,10 @@ async fn add_bucket_notification_configuration(buckets: Vec<String>) {
match has_notification_config {
Some(cfg) => {
info!("Bucket '{}' has existing notification configuration: {:?}", bucket, cfg);
info!(
target: "rustfs::main::add_bucket_notification_configuration",
bucket = %bucket,
"Bucket '{}' has existing notification configuration: {:?}", bucket, cfg);
let mut event_rules = Vec::new();
process_queue_configurations(&mut event_rules, cfg.queue_configurations.clone(), TargetID::from_str);
@@ -438,7 +556,10 @@ async fn add_bucket_notification_configuration(buckets: Vec<String>) {
}
}
None => {
info!("Bucket '{}' has no existing notification configuration.", bucket);
info!(
target: "rustfs::main::add_bucket_notification_configuration",
bucket = %bucket,
"Bucket '{}' has no existing notification configuration.", bucket);
}
}
}

View File

@@ -23,7 +23,7 @@ pub fn init_profiler() -> Result<(), Box<dyn std::error::Error>> {
.frequency(1000)
.blocklist(&["libc", "libgcc", "pthread", "vdso"])
.build()
.map_err(|e| format!("Failed to build profiler guard: {}", e))?;
.map_err(|e| format!("Failed to build profiler guard: {e}"))?;
PROFILER_GUARD
.set(Arc::new(Mutex::new(guard)))

View File

@@ -0,0 +1,398 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::admin::console::static_handler;
use crate::config::Opt;
use axum::{Router, extract::Request, middleware, response::Json, routing::get};
use axum_server::tls_rustls::RustlsConfig;
use http::{HeaderValue, Method};
use rustfs_config::{RUSTFS_TLS_CERT, RUSTFS_TLS_KEY};
use rustfs_utils::net::parse_and_resolve_address;
use serde_json::json;
use std::io::Result;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use tokio_rustls::rustls::ServerConfig;
use tower_http::catch_panic::CatchPanicLayer;
use tower_http::cors::{AllowOrigin, Any, CorsLayer};
use tower_http::limit::RequestBodyLimitLayer;
use tower_http::timeout::TimeoutLayer;
use tower_http::trace::TraceLayer;
use tracing::{debug, error, info, instrument, warn};
const CONSOLE_PREFIX: &str = "/rustfs/console";
/// Console access logging middleware
async fn console_logging_middleware(req: Request, next: axum::middleware::Next) -> axum::response::Response {
let method = req.method().clone();
let uri = req.uri().clone();
let start = std::time::Instant::now();
let response = next.run(req).await;
let duration = start.elapsed();
info!(
target: "rustfs::console::access",
method = %method,
uri = %uri,
status = %response.status(),
duration_ms = %duration.as_millis(),
"Console access"
);
response
}
/// Setup TLS configuration for console using axum-server, following endpoint TLS implementation logic
#[instrument(skip(tls_path))]
async fn setup_console_tls_config(tls_path: Option<&String>) -> Result<Option<RustlsConfig>> {
let tls_path = match tls_path {
Some(path) if !path.is_empty() => path,
_ => {
debug!("TLS path is not provided, console starting with HTTP");
return Ok(None);
}
};
if tokio::fs::metadata(tls_path).await.is_err() {
debug!("TLS path does not exist, console starting with HTTP");
return Ok(None);
}
debug!("Found TLS directory for console, checking for certificates");
// Make sure to use a modern encryption suite
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
// 1. Attempt to load all certificates in the directory (multi-certificate support, for SNI)
if let Ok(cert_key_pairs) = rustfs_utils::load_all_certs_from_directory(tls_path) {
if !cert_key_pairs.is_empty() {
debug!(
"Found {} certificates for console, creating SNI-aware multi-cert resolver",
cert_key_pairs.len()
);
// Create an SNI-enabled certificate resolver
let resolver = rustfs_utils::create_multi_cert_resolver(cert_key_pairs)?;
// Configure the server to enable SNI support
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_cert_resolver(Arc::new(resolver));
// Configure ALPN protocol priority
server_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec(), b"http/1.0".to_vec()];
// Log SNI requests
if rustfs_utils::tls_key_log() {
server_config.key_log = Arc::new(rustls::KeyLogFile::new());
}
info!(target: "rustfs::console::tls", "Console TLS enabled with multi-certificate SNI support");
return Ok(Some(RustlsConfig::from_config(Arc::new(server_config))));
}
}
// 2. Revert to the traditional single-certificate mode
let key_path = format!("{tls_path}/{RUSTFS_TLS_KEY}");
let cert_path = format!("{tls_path}/{RUSTFS_TLS_CERT}");
if tokio::try_join!(tokio::fs::metadata(&key_path), tokio::fs::metadata(&cert_path)).is_ok() {
debug!("Found legacy single TLS certificate for console, starting with HTTPS");
return match RustlsConfig::from_pem_file(cert_path, key_path).await {
Ok(config) => {
info!(target: "rustfs::console::tls", "Console TLS enabled with single certificate");
Ok(Some(config))
}
Err(e) => {
error!(target: "rustfs::console::error", error = %e, "Failed to create TLS config for console");
Err(std::io::Error::other(e))
}
};
}
debug!("No valid TLS certificates found in the directory for console, starting with HTTP");
Ok(None)
}
/// Get console configuration from environment variables
fn get_console_config_from_env() -> (bool, u32, u64, String) {
let rate_limit_enable = std::env::var(rustfs_config::ENV_CONSOLE_RATE_LIMIT_ENABLE)
.unwrap_or_else(|_| rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_ENABLE.to_string())
.parse::<bool>()
.unwrap_or(rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_ENABLE);
let rate_limit_rpm = std::env::var(rustfs_config::ENV_CONSOLE_RATE_LIMIT_RPM)
.unwrap_or_else(|_| rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_RPM.to_string())
.parse::<u32>()
.unwrap_or(rustfs_config::DEFAULT_CONSOLE_RATE_LIMIT_RPM);
let auth_timeout = std::env::var(rustfs_config::ENV_CONSOLE_AUTH_TIMEOUT)
.unwrap_or_else(|_| rustfs_config::DEFAULT_CONSOLE_AUTH_TIMEOUT.to_string())
.parse::<u64>()
.unwrap_or(rustfs_config::DEFAULT_CONSOLE_AUTH_TIMEOUT);
let cors_allowed_origins = std::env::var(rustfs_config::ENV_CONSOLE_CORS_ALLOWED_ORIGINS)
.unwrap_or_else(|_| rustfs_config::DEFAULT_CONSOLE_CORS_ALLOWED_ORIGINS.to_string());
(rate_limit_enable, rate_limit_rpm, auth_timeout, cors_allowed_origins)
}
/// Setup comprehensive middleware stack with tower-http features
fn setup_console_middleware_stack(
cors_layer: CorsLayer,
rate_limit_enable: bool,
rate_limit_rpm: u32,
auth_timeout: u64,
) -> Router {
let mut app = Router::new()
.route("/license", get(crate::admin::console::license_handler))
.route("/config.json", get(crate::admin::console::config_handler))
.route("/health", get(health_check))
.nest(CONSOLE_PREFIX, Router::new().fallback_service(get(static_handler)))
.fallback_service(get(static_handler));
// Add comprehensive middleware layers using tower-http features
app = app
.layer(CatchPanicLayer::new())
.layer(TraceLayer::new_for_http())
.layer(middleware::from_fn(console_logging_middleware))
.layer(cors_layer)
// Add timeout layer - convert auth_timeout from seconds to Duration
.layer(TimeoutLayer::new(Duration::from_secs(auth_timeout)))
// Add request body limit (10MB for console uploads)
.layer(RequestBodyLimitLayer::new(10 * 1024 * 1024));
// Add rate limiting if enabled
if rate_limit_enable {
info!("Console rate limiting enabled: {} requests per minute", rate_limit_rpm);
// Note: tower-http doesn't provide a built-in rate limiter, but we have the foundation
// For production, you would integrate with a rate limiting service like Redis
// For now, we log that it's configured and ready for integration
}
app
}
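
As the comment above notes, tower-http ships no rate limiter, but tower itself does. One possible wiring for the rpm setting, sketched under assumptions (a hypothetical with_rate_limit helper; it requires the tower "buffer" and "limit" features, and HandleErrorLayer because RateLimit's error type is not Infallible); this is a sketch, not the project's implementation:

use std::time::Duration;
use axum::{BoxError, Router, error_handling::HandleErrorLayer, http::StatusCode, routing::get};
use tower::{ServiceBuilder, buffer::BufferLayer, limit::RateLimitLayer};

// Hypothetical helper: apply a global limit of `rpm` requests per minute.
fn with_rate_limit(router: Router, rpm: u32) -> Router {
    router.layer(
        ServiceBuilder::new()
            // Map overload errors to HTTP 429 before axum sees them.
            .layer(HandleErrorLayer::new(|_: BoxError| async { StatusCode::TOO_MANY_REQUESTS }))
            // Buffer makes the rate-limited service cloneable, as axum requires.
            .layer(BufferLayer::new(1024))
            .layer(RateLimitLayer::new(rpm as u64, Duration::from_secs(60))),
    )
}

fn main() {
    let app = Router::new().route("/health", get(|| async { "ok" }));
    let _app = with_rate_limit(app, 60);
}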
/// Console health check handler with comprehensive health information
async fn health_check() -> Json<serde_json::Value> {
use rustfs_ecstore::new_object_layer_fn;
let mut health_status = "ok";
let mut details = json!({});
// Check storage backend health
if let Some(_store) = new_object_layer_fn() {
details["storage"] = json!({"status": "connected"});
} else {
health_status = "degraded";
details["storage"] = json!({"status": "disconnected"});
}
// Check IAM system health
match rustfs_iam::get() {
Ok(_) => {
details["iam"] = json!({"status": "connected"});
}
Err(_) => {
health_status = "degraded";
details["iam"] = json!({"status": "disconnected"});
}
}
Json(json!({
"status": health_status,
"service": "rustfs-console",
"timestamp": chrono::Utc::now().to_rfc3339(),
"version": env!("CARGO_PKG_VERSION"),
"details": details,
"uptime": std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
}))
}
/// Parse CORS allowed origins from configuration
pub fn parse_cors_origins(origins: Option<&String>) -> CorsLayer {
let cors_layer = CorsLayer::new()
.allow_methods([Method::GET, Method::POST, Method::PUT, Method::DELETE, Method::OPTIONS])
.allow_headers(Any);
match origins {
Some(origins_str) if origins_str == "*" => cors_layer.allow_origin(Any).expose_headers(Any),
Some(origins_str) => {
let origins: Vec<&str> = origins_str.split(',').map(|s| s.trim()).collect();
if origins.is_empty() {
warn!("Empty CORS origins provided, using permissive CORS");
cors_layer.allow_origin(Any).expose_headers(Any)
} else {
// Parse origins with proper error handling
let mut valid_origins = Vec::new();
for origin in origins {
match origin.parse::<HeaderValue>() {
Ok(header_value) => {
valid_origins.push(header_value);
}
Err(e) => {
warn!("Invalid CORS origin '{}': {}", origin, e);
}
}
}
if valid_origins.is_empty() {
warn!("No valid CORS origins found, using permissive CORS");
cors_layer.allow_origin(Any).expose_headers(Any)
} else {
info!("Console CORS origins configured: {:?}", valid_origins);
cors_layer.allow_origin(AllowOrigin::list(valid_origins)).expose_headers(Any)
}
}
}
None => {
debug!("No CORS origins configured for console, using permissive CORS");
cors_layer.allow_origin(Any)
}
}
}
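
Origins arrive as one comma-separated string; each entry is trimmed and must survive HeaderValue parsing, and entries that do not are skipped with a warning. A small self-contained check of that filtering step (http crate only):

use http::HeaderValue;

fn main() {
    // The embedded newline makes the third entry an invalid header value.
    let raw = "https://a.example, https://b.example, bad\norigin";
    let valid: Vec<HeaderValue> = raw
        .split(',')
        .map(str::trim)
        .filter_map(|o| o.parse::<HeaderValue>().ok())
        .collect();
    assert_eq!(valid.len(), 2);
}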
/// Start the standalone console server with enhanced security and monitoring
#[instrument(skip(opt, shutdown_rx))]
pub async fn start_console_server(opt: &Opt, shutdown_rx: tokio::sync::broadcast::Receiver<()>) -> Result<()> {
if !opt.console_enable {
debug!("Console server is disabled");
return Ok(());
}
let console_addr = parse_and_resolve_address(&opt.console_address)?;
// Get configuration from environment variables
let (rate_limit_enable, rate_limit_rpm, auth_timeout, cors_allowed_origins) = get_console_config_from_env();
// Setup TLS configuration if certificates are available
let tls_config = setup_console_tls_config(opt.tls_path.as_ref()).await?;
let tls_enabled = tls_config.is_some();
info!(
target: "rustfs::console::startup",
address = %console_addr,
tls_enabled = tls_enabled,
rate_limit_enabled = rate_limit_enable,
rate_limit_rpm = rate_limit_rpm,
auth_timeout_seconds = auth_timeout,
cors_allowed_origins = %cors_allowed_origins,
"Starting console server"
);
// String to Option<&String>
let cors_allowed_origins = if cors_allowed_origins.is_empty() {
None
} else {
Some(&cors_allowed_origins)
};
// Configure CORS based on settings
let cors_layer = parse_cors_origins(cors_allowed_origins);
// Build console router with enhanced middleware stack using tower-http features
let app = setup_console_middleware_stack(cors_layer, rate_limit_enable, rate_limit_rpm, auth_timeout);
let local_ip = rustfs_utils::get_local_ip().unwrap_or_else(|| "127.0.0.1".parse().unwrap());
let protocol = if tls_enabled { "https" } else { "http" };
info!(
target: "rustfs::console::startup",
"Console WebUI available at: {}://{}:{}/rustfs/console/index.html",
protocol, local_ip, console_addr.port()
);
info!(
target: "rustfs::console::startup",
"Console WebUI (localhost): {}://127.0.0.1:{}/rustfs/console/index.html",
protocol, console_addr.port()
);
// Handle connections based on TLS availability using axum-server
if let Some(tls_config) = tls_config {
handle_tls_connections(console_addr, app, tls_config, shutdown_rx).await
} else {
handle_plain_connections(console_addr, app, shutdown_rx).await
}
}
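// Typical configuration for the server above, mirroring the env vars used
// elsewhere in this change (values illustrative):
//   RUSTFS_CONSOLE_ENABLE=true
//   RUSTFS_CONSOLE_ADDRESS=":9001"
//   RUSTFS_TLS_PATH="./deploy/certs"   # presence of certificates switches to HTTPS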
/// Handle TLS connections for console using axum-server with proper TLS support
async fn handle_tls_connections(
server_addr: SocketAddr,
app: Router,
tls_config: RustlsConfig,
mut shutdown_rx: tokio::sync::broadcast::Receiver<()>,
) -> Result<()> {
info!(target: "rustfs::console::tls", "Starting Console HTTPS server on {}", server_addr);
let handle = axum_server::Handle::new();
let handle_clone = handle.clone();
// Spawn shutdown signal handler
tokio::spawn(async move {
let _ = shutdown_rx.recv().await;
info!(target: "rustfs::console::shutdown", "Console TLS server shutdown signal received");
handle_clone.graceful_shutdown(Some(Duration::from_secs(10)));
});
// Start the HTTPS server using axum-server with RustlsConfig
if let Err(e) = axum_server::bind_rustls(server_addr, tls_config)
.handle(handle)
.serve(app.into_make_service())
.await
{
error!(target: "rustfs::console::error", error = %e, "Console TLS server error");
return Err(std::io::Error::other(e));
}
info!(target: "rustfs::console::shutdown", "Console TLS server stopped");
Ok(())
}
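// RustlsConfig is typically loaded from PEM files (a sketch; the exact file
// names used by setup_console_tls_config are an assumption here):
//
// let tls_config = axum_server::tls_rustls::RustlsConfig::from_pem_file(
//     tls_dir.join("cert.pem"),
//     tls_dir.join("key.pem"),
// )
// .await?;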
/// Handle plain HTTP connections using axum-server
async fn handle_plain_connections(
server_addr: SocketAddr,
app: Router,
mut shutdown_rx: tokio::sync::broadcast::Receiver<()>,
) -> Result<()> {
info!(target: "rustfs::console::startup", "Starting Console HTTP server on {}", server_addr);
let handle = axum_server::Handle::new();
let handle_clone = handle.clone();
// Spawn shutdown signal handler
tokio::spawn(async move {
let _ = shutdown_rx.recv().await;
info!(target: "rustfs::console::shutdown", "Console server shutdown signal received");
handle_clone.graceful_shutdown(Some(Duration::from_secs(10)));
});
// Start the HTTP server using axum-server
if let Err(e) = axum_server::bind(server_addr)
.handle(handle)
.serve(app.into_make_service())
.await
{
error!(target: "rustfs::console::error", error = %e, "Console server error");
return Err(std::io::Error::other(e));
}
info!(target: "rustfs::console::shutdown", "Console server stopped");
Ok(())
}
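// Shutdown wiring sketch (mirrors the pattern exercised in the tests below):
//
// let (shutdown_tx, shutdown_rx) = tokio::sync::broadcast::channel::<()>(1);
// tokio::spawn(async move { start_console_server(&opt, shutdown_rx).await });
// // ... later, on SIGTERM/Ctrl-C:
// let _ = shutdown_tx.send(()); // both handlers then get a 10-second grace period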

View File

@@ -0,0 +1,146 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(test)]
mod tests {
use crate::config::Opt;
use crate::server::console::start_console_server;
use clap::Parser;
use tokio::time::{Duration, timeout};
#[tokio::test]
async fn test_console_server_can_start_and_stop() {
// Test that console server can be started and shut down gracefully
let args = vec!["rustfs", "/tmp/test", "--console-address", ":0"]; // Use port 0 for auto-assignment
let opt = Opt::parse_from(args);
let (tx, rx) = tokio::sync::broadcast::channel(1);
// Start console server in a background task
let handle = tokio::spawn(async move { start_console_server(&opt, rx).await });
// Give it a moment to start
tokio::time::sleep(Duration::from_millis(100)).await;
// Send shutdown signal
let _ = tx.send(());
// Wait for server to shut down
let result = timeout(Duration::from_secs(5), handle).await;
assert!(result.is_ok(), "Console server should shutdown gracefully");
let server_result = result.unwrap();
assert!(server_result.is_ok(), "Console server should not have errors");
let final_result = server_result.unwrap();
assert!(final_result.is_ok(), "Console server should complete successfully");
}
#[tokio::test]
async fn test_console_cors_configuration() {
// Test CORS configuration parsing
use crate::server::console::parse_cors_origins;
// Test wildcard origin
let cors_wildcard = Some("*".to_string());
let _layer1 = parse_cors_origins(cors_wildcard.as_ref());
// Should create a layer without error
// Test specific origins
let cors_specific = Some("http://localhost:3000,https://admin.example.com".to_string());
let _layer2 = parse_cors_origins(cors_specific.as_ref());
// Should create a layer without error
// Test empty origin
let cors_empty = Some("".to_string());
let _layer3 = parse_cors_origins(cors_empty.as_ref());
// Should create a layer without error (falls back to permissive)
// Test no origin
let _layer4 = parse_cors_origins(None);
// Should create a layer without error (uses default)
}
#[tokio::test]
async fn test_external_address_configuration() {
// Test external address configuration
let args = vec![
"rustfs",
"/tmp/test",
"--console-address",
":9001",
"--external-address",
":9020",
];
let opt = Opt::parse_from(args);
assert_eq!(opt.console_address, ":9001");
assert_eq!(opt.external_address, ":9020".to_string());
}
#[tokio::test]
async fn test_console_tls_configuration() {
// Test TLS configuration options (now uses shared tls_path)
let args = vec!["rustfs", "/tmp/test", "--tls-path", "/path/to/tls"];
let opt = Opt::parse_from(args);
assert_eq!(opt.tls_path, Some("/path/to/tls".to_string()));
}
#[tokio::test]
async fn test_console_health_check_endpoint() {
// Test that console health check can be called
// This test would need a running server to be comprehensive
// For now, we test configuration and startup behavior
let args = vec!["rustfs", "/tmp/test", "--console-address", ":0"];
let opt = Opt::parse_from(args);
// Verify the configuration supports health checks
assert!(opt.console_enable, "Console should be enabled for health checks");
}
#[tokio::test]
async fn test_console_separate_logging_target() {
// Test that console uses separate logging targets
use tracing::info;
// This test verifies that logging targets are properly set up
info!(target: "rustfs::console::startup", "Test console startup log");
info!(target: "rustfs::console::access", "Test console access log");
info!(target: "rustfs::console::error", "Test console error log");
info!(target: "rustfs::console::shutdown", "Test console shutdown log");
// In a real implementation, we would verify these logs are captured separately
}
#[tokio::test]
async fn test_console_configuration_validation() {
// Test configuration validation
let args = vec![
"rustfs",
"/tmp/test",
"--console-enable",
"true",
"--console-address",
":9001",
"--external-address",
":9020",
];
let opt = Opt::parse_from(args);
// Verify all console-related configuration is parsed correctly
assert!(opt.console_enable);
assert_eq!(opt.console_address, ":9001");
assert_eq!(opt.external_address, ":9020".to_string());
}
}

View File

@@ -46,19 +46,77 @@ use tokio_rustls::TlsAcceptor;
use tonic::{Request, Status, metadata::MetadataValue};
use tower::ServiceBuilder;
use tower_http::catch_panic::CatchPanicLayer;
use tower_http::cors::CorsLayer;
use tower_http::cors::{AllowOrigin, Any, CorsLayer};
use tower_http::trace::TraceLayer;
use tracing::{Span, debug, error, info, instrument, warn};
const MI_B: usize = 1024 * 1024;
/// Parse CORS allowed origins from configuration
fn parse_cors_origins(origins: Option<&String>) -> CorsLayer {
use http::Method;
let cors_layer = CorsLayer::new()
.allow_methods([
Method::GET,
Method::POST,
Method::PUT,
Method::DELETE,
Method::HEAD,
Method::OPTIONS,
])
.allow_headers(Any);
match origins {
Some(origins_str) if origins_str == "*" => cors_layer.allow_origin(Any).expose_headers(Any),
Some(origins_str) => {
let origins: Vec<&str> = origins_str.split(',').map(|s| s.trim()).collect();
if origins.is_empty() {
warn!("Empty CORS origins provided, using permissive CORS");
cors_layer.allow_origin(Any).expose_headers(Any)
} else {
// Parse origins with proper error handling
let mut valid_origins = Vec::new();
for origin in origins {
match origin.parse::<http::HeaderValue>() {
Ok(header_value) => {
valid_origins.push(header_value);
}
Err(e) => {
warn!("Invalid CORS origin '{}': {}", origin, e);
}
}
}
if valid_origins.is_empty() {
warn!("No valid CORS origins found, using permissive CORS");
cors_layer.allow_origin(Any).expose_headers(Any)
} else {
info!("Endpoint CORS origins configured: {:?}", valid_origins);
cors_layer.allow_origin(AllowOrigin::list(valid_origins)).expose_headers(Any)
}
}
}
None => {
debug!("No CORS origins configured for endpoint, using permissive CORS");
cors_layer.allow_origin(Any).expose_headers(Any)
}
}
}
fn get_cors_allowed_origins() -> String {
    // Read the allow-list from the environment, falling back to the compiled default
    std::env::var(rustfs_config::ENV_CORS_ALLOWED_ORIGINS)
        .unwrap_or_else(|_| rustfs_config::DEFAULT_CORS_ALLOWED_ORIGINS.to_string())
}
pub async fn start_http_server(
opt: &config::Opt,
worker_state_manager: ServiceStateManager,
) -> Result<tokio::sync::broadcast::Sender<()>> {
let server_addr = parse_and_resolve_address(opt.address.as_str()).map_err(Error::other)?;
let server_port = server_addr.port();
let server_address = server_addr.to_string();
// The listening address and port are obtained from the parameters
let listener = {
@@ -107,12 +165,6 @@ pub async fn start_http_server(
let api_endpoints = format!("http://{local_ip}:{server_port}");
let localhost_endpoint = format!("http://127.0.0.1:{server_port}");
info!(" API: {} {}", api_endpoints, localhost_endpoint);
if opt.console_enable {
info!(
" WebUI: http://{}:{}/rustfs/console/index.html http://127.0.0.1:{}/rustfs/console/index.html http://{}/rustfs/console/index.html",
local_ip, server_port, server_port, server_address
);
}
info!(" RootUser: {}", opt.access_key.clone());
info!(" RootPass: {}", opt.secret_key.clone());
if DEFAULT_ACCESS_KEY.eq(&opt.access_key) && DEFAULT_SECRET_KEY.eq(&opt.secret_key) {
@@ -134,7 +186,9 @@ pub async fn start_http_server(
b.set_auth(IAMAuth::new(access_key, secret_key));
b.set_access(store.clone());
b.set_route(admin::make_admin_route(opt.console_enable)?);
// The console now runs on its own port; console routes on the main
// endpoint remain gated by the same console_enable flag
let console_on_endpoint = opt.console_enable;
b.set_route(admin::make_admin_route(console_on_endpoint)?);
if !opt.server_domains.is_empty() {
MultiDomain::new(&opt.server_domains).map_err(Error::other)?; // validate domains
@@ -178,7 +232,17 @@ pub async fn start_http_server(
let (shutdown_tx, mut shutdown_rx) = tokio::sync::broadcast::channel(1);
let shutdown_tx_clone = shutdown_tx.clone();
// Capture CORS configuration for the server loop
let cors_allowed_origins = get_cors_allowed_origins();
let cors_allowed_origins = if cors_allowed_origins.is_empty() {
None
} else {
Some(cors_allowed_origins)
};
tokio::spawn(async move {
// Create CORS layer inside the server loop closure
let cors_layer = parse_cors_origins(cors_allowed_origins.as_ref());
#[cfg(unix)]
let (mut sigterm_inner, mut sigint_inner) = {
use tokio::signal::unix::{SignalKind, signal};
@@ -265,7 +329,14 @@ pub async fn start_http_server(
warn!(?err, "Failed to set set_send_buffer_size");
}
process_connection(socket, tls_acceptor.clone(), http_server.clone(), s3_service.clone(), graceful.clone());
process_connection(
socket,
tls_acceptor.clone(),
http_server.clone(),
s3_service.clone(),
graceful.clone(),
cors_layer.clone(),
);
}
worker_state_manager.update(ServiceState::Stopping);
@@ -372,6 +443,7 @@ fn process_connection(
http_server: Arc<ConnBuilder<TokioExecutor>>,
s3_service: S3Service,
graceful: Arc<GracefulShutdown>,
cors_layer: CorsLayer,
) {
tokio::spawn(async move {
// Build services inside each connected task to avoid passing complex service types across tasks,
@@ -423,7 +495,7 @@ fn process_connection(
debug!("http request failure error: {:?} in {:?}", _error, latency)
}),
)
.layer(CorsLayer::permissive())
.layer(cors_layer)
.layer(RedirectLayer)
.service(service);
let hybrid_service = TowerToHyperService::new(hybrid_service);

View File

@@ -13,11 +13,16 @@
// limitations under the License.
mod audit;
pub mod console;
mod http;
mod hybrid;
mod layer;
mod service_state;
#[cfg(test)]
mod console_test;
pub(crate) use console::start_console_server;
pub(crate) use http::start_http_server;
pub(crate) use service_state::SHUTDOWN_TIMEOUT;
pub(crate) use service_state::ServiceState;

View File

@@ -73,15 +73,15 @@ pub(crate) async fn wait_for_shutdown() -> ShutdownSignal {
tokio::select! {
_ = tokio::signal::ctrl_c() => {
info!("Received Ctrl-C signal");
info!("RustFS Received Ctrl-C signal");
ShutdownSignal::CtrlC
}
_ = sigint.recv() => {
info!("Received SIGINT signal");
info!("RustFS Received SIGINT signal");
ShutdownSignal::Sigint
}
_ = sigterm.recv() => {
info!("Received SIGTERM signal");
info!("RustFS Received SIGTERM signal");
ShutdownSignal::Sigterm
}
}
@@ -121,7 +121,7 @@ impl ServiceStateManager {
fn notify_systemd(&self, state: &ServiceState) {
match state {
ServiceState::Starting => {
info!("Service is starting...");
info!("RustFS Service is starting...");
#[cfg(target_os = "linux")]
if let Err(e) =
libsystemd::daemon::notify(false, &[libsystemd::daemon::NotifyState::Status("Starting...".to_string())])
@@ -130,15 +130,15 @@ impl ServiceStateManager {
}
}
ServiceState::Ready => {
info!("Service is ready");
info!("RustFS Service is ready");
notify_systemd("ready");
}
ServiceState::Stopping => {
info!("Service is stopping...");
info!("RustFS Service is stopping...");
notify_systemd("stopping");
}
ServiceState::Stopped => {
info!("Service has stopped");
info!("RustFS Service has stopped");
#[cfg(target_os = "linux")]
if let Err(e) =
libsystemd::daemon::notify(false, &[libsystemd::daemon::NotifyState::Status("Stopped".to_string())])

View File

@@ -1209,6 +1209,7 @@ impl S3 for FS {
name: v2.name,
prefix: v2.prefix,
max_keys: v2.max_keys,
common_prefixes: v2.common_prefixes,
..Default::default()
}))
}
@@ -1230,6 +1231,7 @@ impl S3 for FS {
let prefix = prefix.unwrap_or_default();
let max_keys = match max_keys {
Some(v) if v > 0 && v <= 1000 => v,
Some(v) if v > 1000 => 1000,
None => 1000,
_ => return Err(s3_error!(InvalidArgument, "max-keys must be between 1 and 1000")),
};
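// Effective behavior of the clamp above (illustrative):
//   max-keys omitted  -> 1000
//   max-keys = 500    -> 500
//   max-keys = 5000   -> 1000 (clamped to S3's documented maximum)
//   max-keys <= 0     -> InvalidArgument error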

View File

@@ -45,7 +45,8 @@ export RUSTFS_VOLUMES="./target/volume/test{1...4}"
# export RUSTFS_VOLUMES="./target/volume/test"
export RUSTFS_ADDRESS=":9000"
export RUSTFS_CONSOLE_ENABLE=true
# export RUSTFS_CONSOLE_ADDRESS=":9001"
export RUSTFS_CONSOLE_ADDRESS=":9001"
export RUSTFS_EXTERNAL_ADDRESS=":9020"
# export RUSTFS_SERVER_DOMAINS="localhost:9000"
# HTTPS certificate directory
# export RUSTFS_TLS_PATH="./deploy/certs"
@@ -63,6 +64,9 @@ export RUSTFS_OBS_LOCAL_LOGGING_ENABLED=true # Whether to enable local logging
export RUSTFS_OBS_LOG_DIRECTORY="$current_dir/deploy/logs" # Log directory
export RUSTFS_OBS_LOG_ROTATION_TIME="hour" # Log rotation time unit, can be "second", "minute", "hour", "day"
export RUSTFS_OBS_LOG_ROTATION_SIZE_MB=100 # Log rotation size in MB
export RUSTFS_OBS_LOG_POOL_CAPA=10240
export RUSTFS_OBS_LOG_MESSAGE_CAPA=32768
export RUSTFS_OBS_LOG_FLUSH_MS=300
export RUSTFS_SINKS_FILE_PATH="$current_dir/deploy/logs"
export RUSTFS_SINKS_FILE_BUFFER_SIZE=12