Only apply repl.ReplaceAll() to values from literal variable names
(e.g. map outputs), not to values resolved from placeholder keys
(e.g. {http.request.header.*}). The placeholder path already resolves
the value via repl.Get(), so a second expansion would allow
user-controlled input containing {env.*} or {file.*} to be evaluated,
leaking environment variables and file contents.
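A minimal sketch of the rule, assuming a handler that already holds a
*caddy.Replacer; configuredValue and the header key are illustrative,
not code from the patch:

    func resolveSafely(repl *caddy.Replacer, configuredValue string) {
        // Literal variable name (e.g. a map output): the template
        // string comes from the config, so expanding it is safe.
        expanded := repl.ReplaceAll(configuredValue, "")

        // Placeholder key: repl.Get() already resolved the value, so
        // use it as-is; a second ReplaceAll() would evaluate any
        // {env.*} or {file.*} sequences smuggled in by the client.
        resolved, _ := repl.Get("http.request.header.X-Example")

        _, _ = expanded, resolved
    }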
Add a regression test to verify that placeholder-sourced values are
not re-expanded.
When using copy_headers in a forward_auth block, client-supplied headers
with the same names as the configured copy_headers entries were not being
removed before the request was forwarded to the backend.
This happens because PR #6608 added a MatchNot guard that skips the Set
operation when the auth service does not return a given header. That guard
prevents setting headers to empty strings, which is the correct behavior,
but it also means a client can send X-User-Id: admin in their request;
if the auth service validates the token without returning X-User-Id,
Caddy skips the Set and the client's value passes through unchanged to
the backend.
The fix adds an unconditional delete route for each copy_headers entry,
placed just before the existing conditional set route. The delete always runs
regardless of what the auth service returns. The conditional set still only
runs when the auth service provides that header.
The end result is:
- Client-supplied headers are always removed
- When the auth service returns the header, the backend gets that value
- When the auth service does not return the header, the backend sees nothing
Existing behavior is unchanged for any deployment where the auth service
returns all of the configured copy_headers entries.
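For illustration only, a hedged sketch of the route pair emitted per
copy_headers entry; the adapter's real code differs, the MatchNot guard
is elided, and the placeholder name is an assumption:

    import (
        "encoding/json"
        "net/http"

        "github.com/caddyserver/caddy/v2/caddyconfig"
        "github.com/caddyserver/caddy/v2/modules/caddyhttp"
        "github.com/caddyserver/caddy/v2/modules/caddyhttp/headers"
    )

    // Two routes per copied field, in order: delete always, set
    // conditionally.
    func copyHeaderRoutes(field string) []caddyhttp.Route {
        // 1) Unconditional delete: strips any client-supplied value.
        del := caddyhttp.Route{
            HandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(
                headers.Handler{Request: &headers.HeaderOps{
                    Delete: []string{field},
                }}, "handler", "headers", nil)},
        }
        // 2) Conditional set (guarded by the MatchNot from PR #6608):
        // only applies when the auth service returned the header.
        set := caddyhttp.Route{
            HandlersRaw: []json.RawMessage{caddyconfig.JSONModuleObject(
                headers.Handler{Request: &headers.HeaderOps{
                    Set: http.Header{field: {
                        "{http.reverse_proxy.header." + field + "}"}},
                }}, "handler", "headers", nil)},
        }
        return []caddyhttp.Route{del, set}
    }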
Fixes GHSA-7r4p-vjf4-gxv4
* perf: collect metrics once per route instead of per handler (#4644)
Move Prometheus metrics instrumentation from the per-handler level to
the per-route level. Previously, every middleware handler in a route was
individually wrapped with metricsInstrumentedHandler, causing metrics to
be collected N times per request (once per handler in the chain). Since
all handlers in a route see the same request, these per-handler metrics
were redundant and added significant CPU overhead (73% of request
handling time per the original profiling).
The fix introduces metricsInstrumentedRoute which wraps the entire
compiled handler chain once in wrapRoute, collecting metrics only when
the route actually matches. The handler label uses the first handler's
module name, which is the most meaningful identifier for the route.
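A hedged sketch of the shape, with simplified names and a standalone
histogram standing in for Caddy's shared metrics:

    var requestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{Name: "caddy_http_request_duration_seconds"},
        []string{"handler"},
    )

    // Instrument the compiled chain exactly once, so a route with N
    // handlers records one observation per request, not N.
    func wrapRouteWithMetrics(label string, chain caddyhttp.Handler) caddyhttp.Handler {
        return caddyhttp.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
            start := time.Now()
            err := chain.ServeHTTP(w, r)
            requestDuration.WithLabelValues(label).Observe(time.Since(start).Seconds())
            return err
        })
    }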
Benchmark results (5 handlers per route):
Old (per-handler): ~4650 ns/op, 4400 B/op, 45 allocs/op
New (per-route): ~940 ns/op, 816 B/op, 8 allocs/op
Improvement: ~5x faster, ~5.4x less memory, ~5.6x fewer allocs
Signed-off-by: Varun Chawla <varun_6april@hotmail.com>
* Remove unused metricsInstrumentedHandler code
Delete the metricsInstrumentedHandler type, its constructor, and
ServeHTTP method since they are no longer used after switching to
route-level metrics collection via metricsInstrumentedRoute. Also
remove the unused metrics parameter from wrapMiddleware and the
middlewareHandlerFunc test helper, and convert existing tests to
use the new route-level API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Address review feedback: restore comments, move function to bottom
- Move computeApproximateRequestSize back to bottom of file to minimize diff
- Restore all useful comments that were accidentally dropped
- Old metricsInstrumentedHandler already removed in previous commit
---------
Signed-off-by: Varun Chawla <varun_6april@hotmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
This refactors the initial approach in PR #7281, replacing the UsagePool
with a dedicated package-level sync.Map and atomic.Int64 to track
in-flight requests without global lock contention.
It also introduces a lookup map in the admin API to fix a potential
O(n^2) iteration over upstreams, ensuring that draining upstreams
are correctly exposed across config reloads without leaking memory.
Co-authored-by: Y.Horie <u5.horie@gmail.com>
reverseproxy: optimize in-flight tracking and admin API
- Replaced sync.RWMutex with sync.Map and atomic.Int64 to avoid lock contention under high RPS.
- Introduced a lookup map in the admin API to fix a potential O(n^2) iteration over upstreams.
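A hedged sketch of both pieces, with illustrative names rather than the
actual ones in the reverseproxy package:

    // Package-level registry: no UsagePool, no RWMutex, no global lock.
    var (
        inFlight sync.Map     // upstream dial address -> *atomic.Int64
        total    atomic.Int64 // all in-flight requests across upstreams
    )

    func beginRequest(addr string) {
        c, _ := inFlight.LoadOrStore(addr, new(atomic.Int64))
        c.(*atomic.Int64).Add(1)
        total.Add(1)
    }

    // Admin API: snapshot counters into a map once, so matching the
    // configured upstreams is O(n) instead of a nested O(n^2) scan.
    func snapshot() map[string]int64 {
        counts := make(map[string]int64)
        inFlight.Range(func(k, v any) bool {
            counts[k.(string)] = v.(*atomic.Int64).Load()
            return true
        })
        return counts
    }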
* cmd: Pass configFile, not configFlag, for reload command
This *should* fix #7528.
* Remove debug log line
---------
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
This is necessary because otherwise the early bail in `until =
strings.IndexByte(remaining, nextCh) ... if until == -1` can cause a
case-insensitive mismatch.
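For illustration, with assumed ASCII input:

    func demo() {
        remaining, nextCh := "ABC-suffix", byte('a')
        // IndexByte matches the exact byte only: searching for 'a' in
        // "ABC-suffix" returns -1 even though 'A' should match
        // case-insensitively, so bailing on -1 rejects valid input.
        until := strings.IndexByte(remaining, nextCh) // -1
        // Normalizing first (what this change does) makes the
        // exact-byte search safe again:
        until = strings.IndexByte(strings.ToLower(remaining), nextCh) // 0
        _ = until
    }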
Co-authored-by: Asim Viladi Oglu Manizada <manizada@users.noreply.github.com>
Normalize exact hosts at provisioning and reqHost in the fast path so case-different Host variants can’t bypass host-gated routes.
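A minimal sketch of the idea, with an illustrative hostMatcher type
rather than Caddy's actual MatchHost:

    type hostMatcher struct{ exact map[string]struct{} }

    func newHostMatcher(hosts []string) *hostMatcher {
        m := &hostMatcher{exact: make(map[string]struct{}, len(hosts))}
        for _, h := range hosts {
            m.exact[strings.ToLower(h)] = struct{}{} // normalize at provision
        }
        return m
    }

    func (m *hostMatcher) match(reqHost string) bool {
        _, ok := m.exact[strings.ToLower(reqHost)] // normalize in fast path
        return ok
    }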
Co-authored-by: Asim Viladi Oglu Manizada <manizada@users.noreply.github.com>
* admin: Fix tests locally, properly isolate storage
* Fix flaky pki_test
* Drop testdata dir logic
* Safer temp dir
* Test handlers without a full server
Start lightweight backend servers before starting Caddy so active health checks
probe a ready backend instead of the same Caddy instance during provisioning.
This removes the startup race without fixed sleeps or polling.
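A hedged sketch of the ordering, using net/http/httptest with
illustrative names:

    func TestProxyWithActiveHealthChecks(t *testing.T) {
        // The backend is listening before Caddy provisions, so the
        // first active health check probes a live server instead of
        // racing Caddy's own startup.
        backend := httptest.NewServer(http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                w.WriteHeader(http.StatusOK)
            }))
        defer backend.Close()
        // ...start the Caddy test instance here, with backend.URL as
        // the configured upstream.
        _ = backend.URL
    }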
* refactor: use strings.Builder to improve performance
Signed-off-by: zjumathcode <pai314159@2980.com>
* refactor: small builder improvements per review (WriteByte / split writes)
also revert builder change in client_test.go
refactor(logging): build IP mask output via join of parts (more efficient)
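Roughly the pattern adopted, with illustrative names rather than the
filter's exact code:

    // Join the masked IP parts with single-byte separators; one buffer,
    // no intermediate string concatenations.
    func joinParts(parts []string, sep byte) string {
        var b strings.Builder
        for i, p := range parts {
            if i > 0 {
                b.WriteByte(sep)
            }
            b.WriteString(p)
        }
        return b.String()
    }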
---------
Signed-off-by: zjumathcode <pai314159@2980.com>
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* pki: add per-CA configurable maintenance_interval and renewal_window_ratio
- Add MaintenanceInterval and RenewalWindowRatio to CA struct (JSON + Caddyfile).
- Run one maintenance goroutine per CA using its own interval.
- needsRenewal uses per-CA RenewalWindowRatio; invalid/zero ratio falls back to defaults.
- Caddyfile: maintenance_interval duration, renewal_window_ratio <0-1>.
- Tests: TestCA_needsRenewal, TestParsePKIApp for new options.
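A hedged sketch of the renewal test; the 1/3 fallback is an assumption
standing in for the package defaults:

    func needsRenewal(cert *x509.Certificate, ratio float64) bool {
        if ratio <= 0 || ratio >= 1 {
            ratio = 1.0 / 3.0 // invalid/zero ratio falls back to defaults
        }
        lifetime := cert.NotAfter.Sub(cert.NotBefore)
        renewAt := cert.NotAfter.Add(-time.Duration(float64(lifetime) * ratio))
        return time.Now().After(renewAt)
    }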
Fixes #7475
* fix codestyle
* caddyfile: Add renewal_window_ratio global option
Adds support for configuring the TLS certificate renewal window ratio
directly in the Caddyfile global options block. This allows users to
customize when certificates should be renewed without needing to use
JSON configuration.
Example usage:
    {
        renewal_window_ratio 0.1666
    }
Fixes #7467
* caddyfile: Add renewal_window_ratio to tls directive and tests
Adds support for renewal_window_ratio in the tls directive (not just
global options) and adds caddyfile adapt tests for both the global
option and tls directive.
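For example, the directive form mirrors the global option:

    example.com {
        tls {
            renewal_window_ratio 0.1666
        }
    }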
* fix: inherit global renewal_window_ratio in site policies
* fix: correct test expected output for policy consolidation
* fix: properly inherit global renewal_window_ratio without removing other code