Compare commits


11 Commits

Author SHA1 Message Date
Sofia Rodrigues
a0b2e1f302 feat: introduce HTTP/1.1 server (#12151)
This PR introduces the Server module, an Async HTTP/1.1 server.

This contains the same code as #10478, split into separate pieces to
make review easier.

The pieces of this feature are:
- Core data structures: #12126
- Headers: #12127
- URI:  #12128
- Body: #12144
- H1: #12146
- Server: #12151
- Client:

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2026-04-20 16:25:45 +00:00
Sebastian Ullrich
10338ed1b0 fix: wrapInstance: do not leak via un-reducible instances (#13441)
This PR ensures that if wrapInstance encounters an instance that cannot
be reduced to a constructor, the wrapping definition is left at
semireducible transparency to avoid leakage.
2026-04-20 06:41:32 +00:00
Lean stage0 autoupdater
cc9a217df8 chore: update stage0 2026-04-19 21:43:33 +00:00
Sebastian Graf
81f559b0e4 chore: remove repeat/while macro_rules bootstrap from Init.While (#13479)
This PR removes the transitional `macro_rules` for `repeat`, `while`,
and `repeat ... until` from `Init.While`. After the latest stage0
update, the `@[builtin_macro]` and `@[builtin_doElem_elab]` definitions
in `Lean.Elab.BuiltinDo.Repeat` are picked up directly, so the bootstrap
duplicates in `Init.While` are no longer needed. `Init.While` now only
provides the `Loop` type and its `ForIn` instance.

This PR also adjusts `repeat`'s `ControlInfo` to match `for ... in`: its
`numRegularExits` is now unconditionally `1` rather than `if info.breaks
then 1 else 0`. Reporting `0` when the body has no `break` causes
`inferControlInfoSeq` (in any enclosing sequence whose `ControlInfo` is
inferred — e.g. a surrounding `for`/`if`/`match`/`try` body) to stop
aggregating after the `repeat` and miss any `return`/`break`/`continue`
that follows. The corresponding elaborator then sees the actual control
flow disagree with the inferred info and throws errors like `Early
returning ... but the info said there is no early return`. The new test
in `tests/elab/newdo.lean` pins down the regression. See
[#13437](https://github.com/leanprover/lean4/pull/13437) for further
discussion.
2026-04-19 21:01:14 +00:00
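The regression can be sketched with a hypothetical reproducer in the spirit of the new `tests/elab/newdo.lean` test (names and literals here are illustrative, not the actual test contents):

```lean
-- A `repeat ... until` body with no `break`, followed by a `return` in the
-- enclosing `for` body. With `numRegularExits = 0`, `inferControlInfoSeq`
-- stopped aggregating after the `repeat` and missed the `return`.
def firstAbove (xs : List Nat) (bound : Nat) : Option Nat := Id.run do
  for x in xs do
    let mut n := x
    repeat
      n := n + 1
    until n > bound
    return some n  -- previously triggered `Early returning ...` errors
  return none
```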
Joachim Breitner
7cc3a4cc0b perf: use .local asyncMode for eqnOptionsExt (#13477)
This PR fixes a benchmark regression introduced in #13475:
`eqnOptionsExt`
was using `.async .asyncEnv` asyncMode, which accumulates state in the
`checked` environment and can block. Switching to `.local` — consistent
with the neighbouring `eqnsExt` and the other declaration caches in
`src/Lean/Meta` — restores performance (the
`build/profile/blocked (unaccounted) wall-clock` bench moves from +33%
back to baseline). `.local` is safe here because
`saveEqnAffectingOptions`
is only called during top-level `def` elaboration and downstream readers
see the imported state; modifications on non-main branches are merged
into the main branch on completion.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:49:00 +00:00
Leonardo de Moura
e82cd9b62c fix: filter assigned metavariables before computing apply subgoal tags (#13476)
This PR refines how the `apply` tactic (and related tactics like
`rewrite`) name and tag the remaining subgoals. Assigned metavariables
are now filtered out *before* computing subgoal tags. As a consequence,
when only one unassigned subgoal remains, it inherits the tag of the
input goal instead of being given a fresh suffixed tag.

User-visible effect: proof states that previously displayed tags like
`case h`, `case a`, or `case upper.h` for a single remaining goal now
display the input goal's tag directly (e.g. no tag at all, or `case
upper`). This removes noise from `funext`, `rfl`-style, and
`induction`-alternative goals when the applied lemma introduces only one
non-assigned metavariable. Multi-goal applications are unaffected —
their subgoals continue to receive distinguishing suffixes.

This may affect users whose proofs rely on the previous tag names (for
example, `case h => ...` after `funext`). Such scripts need to be
updated to use the input goal's tag instead.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:31:49 +00:00
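A hypothetical before/after illustration of the tag change (example proof, not taken from the PR):

```lean
-- `funext` applies a lemma that leaves a single unassigned metavariable.
-- That goal previously carried a fresh suffixed tag (`case h`); it now
-- inherits the input goal's tag, so no `case h =>` step is needed.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := by
  funext x
  exact h x
```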
Joachim Breitner
1d2cfb47e7 feat: store eqn-affecting options at definition time instead of eager generation (#13475)
This PR replaces the eager equation realization that was triggered by
non-default values of equation-affecting options (like
`backward.eqns.nonrecursive`) with a `MapDeclarationExtension` that
stores non-default option values at definition time. These values are
then restored when equations are lazily realized, so the same equations
are produced regardless of when generation occurs.

Restoring the options is done via a new `withEqnOptions` helper in
`Lean.Meta.Eqns`. Because `realizeConst` overrides the caller's options
with the options saved in its `RealizationContext` — which are empty
for imported constants — the helper must also be applied inside the
`realizeConst` callbacks in `mkSimpleEqThm`, `mkEqns` (in
`Elab/PreDefinition/Eqns.lean`), `getConstUnfoldEqnFor?`, and
`Structural.mkUnfoldEq`. Without that, equation generation code that
reads eqn-affecting options inside the realize callback would see the
caller-independent defaults rather than the values stored in
`eqnOptionsExt` — so the store-at-definition-time behavior would not
carry across module boundaries.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 12:30:08 +00:00
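The intended behavior can be sketched as follows (`backward.eqns.nonrecursive` is the option named in the PR; `foo` is hypothetical):

```lean
-- The non-default option value is recorded in `eqnOptionsExt` when `foo` is
-- defined, and restored by `withEqnOptions` whenever `foo`'s equations are
-- lazily realized, even from a downstream module.
set_option backward.eqns.nonrecursive true in
def foo (n : Nat) : Nat := n + 1
```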
Leonardo de Moura
439e6a85d3 fix: prune goals assigned by isDefEq in sym => mode (#13474)
This PR fixes a bug in `sym =>` interactive mode where goals whose
metavariable was assigned by `isDefEq` (e.g. via `apply Eq.refl`) were
not pruned. `pruneSolvedGoals` previously only filtered out goals
flagged as inconsistent, so an already-assigned goal would linger as an
unsolved goal. It now also removes goals whose metavariable is already
assigned.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 11:55:11 +00:00
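A hypothetical `sym =>` session illustrating the fix (assuming `sym =>` accepts `apply` as the commit message suggests):

```lean
-- `apply Eq.refl` assigns the goal's metavariable via `isDefEq`;
-- `pruneSolvedGoals` now removes it instead of reporting it unsolved.
example : 2 + 2 = 4 := by
  sym =>
    apply Eq.refl
```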
Leonardo de Moura
2d38a70d1c fix: auto-introduce in sym => mode when goal closes during preprocessing (#13472)
This PR fixes a bug in `sym =>` interactive mode where satellite solvers
(`lia`, `ring`, `linarith`) would throw an internal error if their
automatic `intros + assertAll` preprocessing step already closed the
goal. Previously, `evalCheck` used `liftAction` which discarded the
closure result, so the subsequent `liftGoalM` call failed due to the
absence of a main goal. `liftAction` is now split so the caller can
distinguish the closed and subgoals cases and skip the solver body when
preprocessing already finished the job.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 08:32:49 +00:00
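A sketch of the previously failing shape, based on the goal quoted in the source comment (illustrative only; hypotheses `hy` and `hz` are added here so the implication actually holds):

```lean
-- The automatic `intros + assertAll` preprocessing can already close this
-- goal before `lia`'s solver body runs; that case no longer throws an
-- internal "no main goal" error.
example (x y z : Int) (hy : y ≥ 0) (hz : z ≥ 0) : x > 1 → x + y + z > 0 := by
  sym =>
    lia
```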
Sebastian Ullrich
80cbab1642 chore: don't fail on running build bench on built stage3 (#13467) 2026-04-18 22:07:21 +00:00
Leonardo de Moura
c0a53ffe97 chore: minor tweaks to Sym.simp test and benchmark (#13468)
This PR applies two minor tweaks:
- `tests/bench/sym/simp_1.lean`: share-common the proof term before
counting objects in `getProofSize`, so the reported size reflects the
shared representation.
- `tests/elab/sym_simp_3.lean`: use `>>` instead of `.andThen` when
composing `Sym.Simp` methods.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:11:30 +00:00
101 changed files with 5880 additions and 130 deletions


@@ -279,7 +279,8 @@ jobs:
"os": large ? "nscloud-ubuntu-24.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": true,
"check-rebootstrap": level >= 1,
"check-stage3": level >= 2,
// Done as part of test-bench
//"check-stage3": level >= 2,
"test": true,
// NOTE: `test-bench` currently seems to be broken on `ubuntu-latest`
"test-bench": large && level >= 2,
@@ -291,7 +292,8 @@ jobs:
"os": large ? "nscloud-ubuntu-24.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": true,
"check-rebootstrap": level >= 1,
"check-stage3": level >= 2,
// Done as part of test-bench
//"check-stage3": level >= 2,
"test": true,
"secondary": true,
// NOTE: `test-bench` currently seems to be broken on `ubuntu-latest`


@@ -222,8 +222,8 @@ private def addNonRecAux (docCtx : LocalContext × LocalInstances) (preDef : Pre
if compile && shouldGenCodeFor preDef then
compileDecl decl
if applyAttrAfterCompilation then
saveEqnAffectingOptions preDef.declName
enableRealizationsForConst preDef.declName
generateEagerEqns preDef.declName
addPreDefDocs docCtx preDef
if applyAttrAfterCompilation then
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation


@@ -28,7 +28,7 @@ def getConstUnfoldEqnFor? (declName : Name) : MetaM (Option Name) := do
trace[ReservedNameAction] "getConstUnfoldEqnFor? {declName} failed, no unfold theorem available"
return none
let name := mkEqLikeNameFor (← getEnv) declName eqUnfoldThmSuffix
realizeConst declName name do
realizeConst declName name <| withEqnOptions declName do
-- we have to call `getUnfoldEqnFor?` again to make `unfoldEqnName` available in this context
let some unfoldEqnName ← getUnfoldEqnFor? (nonRec := true) declName | unreachable!
let info ← getConstInfo unfoldEqnName


@@ -367,7 +367,7 @@ def mkEqns (declName : Name) (declNames : Array Name) : MetaM (Array Name) := do
thmNames := thmNames.push name
-- determinism: `type` should be independent of the environment changes since `baseName` was
-- added
realizeConst declName name (doRealize name info type)
realizeConst declName name (withEqnOptions declName (doRealize name info type))
return thmNames
where
doRealize name info type := withOptions (tactic.hygienic.set · false) do


@@ -69,8 +69,10 @@ def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
a.name = `instance_reducible || a.name = `implicit_reducible do
setIrreducibleAttribute preDef.declName
for preDef in preDefs do
saveEqnAffectingOptions preDef.declName
/-
`enableRealizationsForConst` must happen before `generateEagerEqns`
It must happen in reverse order so that constants realized as part of the first decl
have realizations for the other ones enabled
-/
@@ -78,7 +80,6 @@ def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
enableRealizationsForConst preDef.declName
for preDef in preDefs do
generateEagerEqns preDef.declName
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation
end Lean.Elab.Mutual


@@ -163,7 +163,7 @@ public def registerEqnsInfo (preDef : PreDefinition) (declNames : Array Name) (r
/-- Generate the "unfold" lemma for `declName`. -/
def mkUnfoldEq (declName : Name) (info : EqnInfo) : MetaM Name := do
let name := mkEqLikeNameFor (← getEnv) info.declName unfoldThmSuffix
realizeConst info.declNames[0]! name (doRealize name)
realizeConst info.declNames[0]! name (withEqnOptions declName (doRealize name))
return name
where
doRealize name := withOptions (tactic.hygienic.set · false) do


@@ -208,11 +208,11 @@ def structuralRecursion
-/
registerEqnsInfo preDef (preDefs.map (·.declName)) recArgPos fixedParamPerms
addSmartUnfoldingDef docCtx preDef recArgPos
for preDef in preDefs do
saveEqnAffectingOptions preDef.declName
for preDef in preDefs do
-- must happen in separate loop so realizations can see eqnInfos of all other preDefs
enableRealizationsForConst preDef.declName
-- must happen after `enableRealizationsForConst`
generateEagerEqns preDef.declName
applyAttributesOf preDefsNonRec AttributeApplicationTime.afterCompilation


@@ -497,14 +497,21 @@ def forEachVar (hs : Array Syntax) (tac : MVarId → FVarId → MetaM MVarId) :
/--
Searches for a metavariable `g` s.t. `tag` is its exact name.
If none then searches for a metavariable `g` s.t. `tag` is a suffix of its name.
If none, then it searches for a metavariable `g` s.t. `tag` is a prefix of its name. -/
If none, then it searches for a metavariable `g` s.t. `tag` is a prefix of its name.
We erase macro scopes from the metavariable's user name before comparing, so that
user-written tags match even when a previous tactic left hygienic macro scopes at
the end of the tag (e.g. `e_a.yield._@._internal._hyg.0`, where `yield` is not the
literal last component of the name). Case tags written by the user are never
macro-scoped, so erasing scopes on the mvar side is sufficient.
-/
private def findTag? (mvarIds : List MVarId) (tag : Name) : TacticM (Option MVarId) := do
match (← mvarIds.findM? fun mvarId => return tag == (← mvarId.getDecl).userName) with
match (← mvarIds.findM? fun mvarId => return tag == (← mvarId.getDecl).userName.eraseMacroScopes) with
| some mvarId => return mvarId
| none =>
match (← mvarIds.findM? fun mvarId => return tag.isSuffixOf (← mvarId.getDecl).userName) with
match (← mvarIds.findM? fun mvarId => return tag.isSuffixOf (← mvarId.getDecl).userName.eraseMacroScopes) with
| some mvarId => return mvarId
| none => mvarIds.findM? fun mvarId => return tag.isPrefixOf (← mvarId.getDecl).userName
| none => mvarIds.findM? fun mvarId => return tag.isPrefixOf (← mvarId.getDecl).userName.eraseMacroScopes
private def getCaseGoals (tag : TSyntax ``binderIdent) : TacticM (MVarId × List MVarId) := do
let gs ← getUnsolvedGoals


@@ -68,7 +68,10 @@ def setGoals (goals : List Goal) : GrindTacticM Unit :=
def pruneSolvedGoals : GrindTacticM Unit := do
let gs ← getGoals
let gs := gs.filter fun g => !g.inconsistent
let gs ← gs.filterM fun g => do
if g.inconsistent then return false
-- The metavariable may have been assigned by `isDefEq`
return !(← g.mvarId.isAssigned)
setGoals gs
def getUnsolvedGoals : GrindTacticM (List Goal) := do
@@ -329,13 +332,19 @@ def liftGoalM (k : GoalM α) : GrindTacticM α := do
replaceMainGoal [goal]
return a
def liftAction (a : Action) : GrindTacticM Unit := do
inductive LiftActionCoreResult where
| closed | subgoals
def liftActionCore (a : Action) : GrindTacticM LiftActionCoreResult := do
let goal ← getMainGoal
let ka := fun _ => throwError "tactic is not applicable"
let kp := fun goal => return .stuck [goal]
match (← liftGrindM <| a goal ka kp) with
| .closed _ => replaceMainGoal []
| .stuck gs => replaceMainGoal gs
| .closed _ => replaceMainGoal []; return .closed
| .stuck gs => replaceMainGoal gs; return .subgoals
def liftAction (a : Action) : GrindTacticM Unit := do
discard <| liftActionCore a
def done : GrindTacticM Unit := do
pruneSolvedGoals


@@ -111,7 +111,9 @@ def evalCheck (tacticName : Name) (k : GoalM Bool)
This matches the behavior of these tactics in default tactic mode
where `lia` can close `x > 1 → x + y + z > 0` directly. -/
if (← read).sym then
liftAction <| Action.intros 0 >> Action.assertAll
match (← liftActionCore <| Action.intros 0 >> Action.assertAll) with
| .closed => return () -- closed the goal
| .subgoals => pure () -- continue
let recover := (← read).recover
liftGoalM do
let progress ← k


@@ -37,12 +37,17 @@ register_builtin_option backward.eqns.deepRecursiveSplit : Bool := {
These options affect the generation of equational theorems in a significant way. For these, their
value at definition time, not realization time, should matter.
This is implemented by
* eagerly realizing the equations when they are set to a non-default value
* when realizing them lazily, reset the options to their default
This is implemented by storing their values at definition time (when non-default) in an environment
extension, and restoring them when the equations are lazily realized.
-/
def eqnAffectingOptions : Array (Lean.Option Bool) := #[backward.eqns.nonrecursive, backward.eqns.deepRecursiveSplit]
/-- Environment extension that stores the values of `eqnAffectingOptions` at definition time,
keyed by declaration name. Only populated when at least one option has a non-default value.
Stores an association list of (option name, value) pairs for options that differ from defaults. -/
builtin_initialize eqnOptionsExt : MapDeclarationExtension (Array (Name × DataValue)) ←
mkMapDeclarationExtension (asyncMode := .local)
def eqnThmSuffixBase := "eq"
def eqnThmSuffixBasePrefix := eqnThmSuffixBase ++ "_"
def eqn1ThmSuffix := eqnThmSuffixBasePrefix ++ "1"
@@ -153,12 +158,30 @@ structure EqnsExtState where
builtin_initialize eqnsExt : EnvExtension EqnsExtState ←
registerEnvExtension (pure {}) (asyncMode := .local)
/--
Runs `act` with the equation-affecting options restored to the values stored for `declName`
at definition time (or reset to their defaults if none were stored). Use this inside
`realizeConst` callbacks, which otherwise see the caller-independent `ctx.opts` rather than
the outer `getEqnsFor?` context. -/
def withEqnOptions (declName : Name) (act : MetaM α) : MetaM α := do
let env ← getEnv
let setOpts : Options → Options :=
if let some values := eqnOptionsExt.find? env declName then
fun os => Id.run do
let mut os := eqnAffectingOptions.foldl (fun os o => o.set os o.defValue) os
for (name, v) in values do
os := os.insert name v
return os
else
fun os => eqnAffectingOptions.foldl (fun os o => o.set os o.defValue) os
withOptions setOpts act
/--
Simple equation theorem for nonrecursive definitions.
-/
def mkSimpleEqThm (declName : Name) (name : Name) : MetaM (Option Name) := do
if let some (.defnInfo info) := (← getEnv).find? declName then
realizeConst declName name (doRealize name info)
realizeConst declName name (withEqnOptions declName (doRealize name info))
return some name
else
return none
@@ -229,19 +252,22 @@ private def getEqnsFor?Core (declName : Name) : MetaM (Option (Array Name)) := w
Returns equation theorems for the given declaration.
-/
def getEqnsFor? (declName : Name) : MetaM (Option (Array Name)) := withLCtx {} {} do
-- This is the entry point for lazy equation generation. Ignore the current value
-- of the options, and revert to the default.
withOptions (eqnAffectingOptions.foldl fun os o => o.set os o.defValue) do
withEqnOptions declName do
getEqnsFor?Core declName
/--
If any equation theorem affecting option is not the default value, create the equations now.
If any equation theorem affecting option is not the default value, store the option values
for later use during lazy equation generation.
-/
def generateEagerEqns (declName : Name) : MetaM Unit := do
def saveEqnAffectingOptions (declName : Name) : MetaM Unit := do
let opts ← getOptions
if eqnAffectingOptions.any fun o => o.get opts != o.defValue then
trace[Elab.definition.eqns] "generating eager equations for {declName}"
let _ ← getEqnsFor?Core declName
let mut nonDefaults : Array (Name × DataValue) := #[]
for o in eqnAffectingOptions do
if o.get opts != o.defValue then
nonDefaults := nonDefaults.push (o.name, KVMap.Value.toDataValue (o.get opts))
unless nonDefaults.isEmpty do
trace[Elab.definition.eqns] "saving equation-affecting options for {declName}"
modifyEnv (eqnOptionsExt.insert · declName nonDefaults)
@[expose] def GetUnfoldEqnFn := Name → MetaM (Option Name)


@@ -128,7 +128,6 @@ def postprocessAppMVars (tacticName : Name) (mvarId : MVarId) (newMVars : Array
(synthAssignedInstances := true) (allowSynthFailures := false) : MetaM Unit := do
synthAppInstances tacticName mvarId newMVars binderInfos synthAssignedInstances allowSynthFailures
-- TODO: default and auto params
appendParentTag mvarId newMVars binderInfos
private def dependsOnOthers (mvar : Expr) (otherMVars : Array Expr) : MetaM Bool :=
otherMVars.anyM fun otherMVar => do
@@ -223,6 +222,7 @@ def _root_.Lean.MVarId.apply (mvarId : MVarId) (e : Expr) (cfg : ApplyConfig :=
let e ← instantiateMVars e
mvarId.assign (mkAppN e newMVars)
let newMVars ← newMVars.filterM fun mvar => not <$> mvar.mvarId!.isAssigned
appendParentTag mvarId newMVars binderInfos
let otherMVarIds ← getMVarsNoDelayed e
let newMVarIds ← reorderGoals newMVars cfg.newGoals
let otherMVarIds := otherMVarIds.filter fun mvarId => !newMVarIds.contains mvarId


@@ -82,6 +82,7 @@ def _root_.Lean.MVarId.rewrite (mvarId : MVarId) (e : Expr) (heq : Expr)
postprocessAppMVars `rewrite mvarId newMVars binderInfos
(synthAssignedInstances := !tactic.skipAssignedInstances.get (← getOptions))
let newMVarIds ← newMVars.map Expr.mvarId! |>.filterM fun mvarId => not <$> mvarId.isAssigned
appendParentTag mvarId newMVars binderInfos
let otherMVarIds ← getMVarsNoDelayed heqIn
let otherMVarIds := otherMVarIds.filter (!newMVarIds.contains ·)
let newMVarIds := newMVarIds ++ otherMVarIds


@@ -145,7 +145,6 @@ public partial def wrapInstance (inst expectedType : Expr) (compile : Bool := tr
else
let name ← mkAuxDeclName
let wrapped ← mkAuxDefinition name expectedType inst (compile := false)
setReducibilityStatus name .implicitReducible
if isMeta then modifyEnv (markMeta · name)
if compile then
compileDecls (logErrors := logCompileErrors) #[name]


@@ -132,6 +132,8 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let gen := mkStdGen seed
let selectables := shuffleIt selectables gen
let gate ← IO.Promise.new
for selectable in selectables do
if let some val ← selectable.selector.tryFn then
let result ← selectable.cont val
@@ -141,11 +143,14 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let promise ← IO.Promise.new
for selectable in selectables do
if finished.get then
break
let waiterPromise ← IO.Promise.new
let waiter := Waiter.mk finished waiterPromise
selectable.selector.registerFn waiter
discard <| IO.bindTask (t := waiterPromise.result?) fun res? => do
discard <| IO.bindTask (t := waiterPromise.result?) (sync := true) fun res? => do
match res? with
| none =>
/-
@@ -157,18 +162,20 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let async : Async _ :=
try
let res ← IO.ofExcept res
discard <| await gate.result?
for selectable in selectables do
selectable.selector.unregisterFn
let contRes ← selectable.cont res
promise.resolve (.ok contRes)
promise.resolve (.ok (← selectable.cont res))
catch e =>
promise.resolve (.error e)
async.toBaseIO
Async.ofPromise (pure promise)
gate.resolve ()
let result ← Async.ofPromise (pure promise)
return result
/--
Performs fair and data-loss free non-blocking multiplexing on the `Selectable`s in `selectables`.
@@ -224,6 +231,8 @@ def Selectable.combine (selectables : Array (Selectable α)) : IO (Selector α)
let derivedWaiter := Waiter.mk waiter.finished waiterPromise
selectable.selector.registerFn derivedWaiter
let barrier ← IO.Promise.new
discard <| IO.bindTask (t := waiterPromise.result?) fun res? => do
match res? with
| none => return (Task.pure (.ok ()))
@@ -231,6 +240,7 @@ def Selectable.combine (selectables : Array (Selectable α)) : IO (Selector α)
let async : Async _ := do
let mainPromise := waiter.promise
await barrier
for selectable in selectables do
selectable.selector.unregisterFn


@@ -6,5 +6,189 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data
public import Std.Internal.Http.Protocol.H1
public import Std.Internal.Http.Server
public import Std.Internal.Http.Test.Helpers
public section
/-!
# HTTP Library
A low-level HTTP/1.1 server implementation for Lean. This library provides a pure,
sans-I/O protocol implementation that can be used with the `Async` library or with
custom connection handlers.
## Overview
This module provides a complete HTTP/1.1 server implementation with support for:
- Request/response handling with directional streaming bodies
- Keep-alive connections
- Chunked transfer encoding
- Header validation and management
- Configurable timeouts and limits
**Sans I/O Architecture**: The core protocol logic doesn't perform any actual I/O itself -
it just defines how data should be processed. This separation allows the protocol implementation
to remain pure and testable, while different transports (TCP sockets, mock clients) handle
the actual reading and writing of bytes.
## Quick Start
The main entry point is `Server.serve`, which starts an HTTP/1.1 server. Implement the
`Server.Handler` type class to define how the server handles requests, errors, and
`Expect: 100-continue` headers:
```lean
import Std.Internal.Http
open Std Internal IO Async
open Std Http Server
structure MyHandler
instance : Handler MyHandler where
onRequest _ req := do
Response.ok |>.text "Hello, World!"
def main : IO Unit := Async.block do
let addr : Net.SocketAddress := .v4 ⟨.ofParts 127 0 0 1, 8080⟩
let server ← Server.serve addr MyHandler.mk
server.waitShutdown
```
## Working with Requests
Incoming requests are represented by `Request Body.Stream`, which bundles the request
line, parsed headers, and a lazily-consumed body. Headers are available immediately,
while the body can be streamed or collected on demand, allowing handlers to process both
small and large payloads efficiently.
### Reading Headers
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
-- Access request method and URI
let method := req.head.method -- Method.get, Method.post, etc.
let uri := req.head.uri -- RequestTarget
-- Read a specific header
if let some contentType := req.head.headers.get? (.mk "content-type") then
IO.println s!"Content-Type: {contentType}"
Response.ok |>.text "OK"
```
### URI Query Semantics
`RequestTarget.query` is parsed using form-style key/value conventions (`k=v&...`), and `+` is decoded as a
space in query components. If you need RFC 3986 opaque query handling, use the raw request target string
(`toString req.head.uri`) and parse it with custom logic.
### Reading the Request Body
The request body is exposed as `Body.Stream`, which can be consumed incrementally or
collected into memory. The `readAll` method reads the entire body, with an optional size
limit to protect against unbounded payloads.
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
-- Collect entire body as a String
let bodyStr : String ← req.body.readAll
-- Or with a maximum size limit
let bodyStr : String ← req.body.readAll (maximumSize := some 1024)
Response.ok |>.text s!"Received: {bodyStr}"
```
## Building Responses
Responses are constructed using a builder API that starts from a status code and adds
headers and a body. Common helpers exist for text, HTML, JSON, and binary responses, while
still allowing full control over status codes and header values.
Response builders produce `Async (Response Body.Stream)`.
```lean
-- Text response
Response.ok |>.text "Hello!"
-- HTML response
Response.ok |>.html "<h1>Hello!</h1>"
-- JSON response
Response.ok |>.json "{\"key\": \"value\"}"
-- Binary response
Response.ok |>.bytes someByteArray
-- Custom status
Response.new |>.status .created |>.text "Resource created"
-- With custom headers
Response.ok
|>.header! "X-Custom-Header" "value"
|>.header! "Cache-Control" "no-cache"
|>.text "Response with headers"
```
### Streaming Responses
For large responses or server-sent events, use streaming:
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
Response.ok
|>.header! "Content-Type" "text/plain"
|>.stream fun stream => do
for i in [0:10] do
stream.send { data := s!"chunk {i}\n".toUTF8 }
Async.sleep 1000
stream.close
```
## Server Configuration
Configure server behavior with `Config`:
```lean
def config : Config := {
maxRequests := 10000000,
lingeringTimeout := 5000,
}
let server ← Server.serve addr MyHandler.mk config
```
## Handler Type Class
Implement `Server.Handler` to define how the server processes events. The class has three
methods, all with default implementations:
- `onRequest` — called for each incoming request; returns a response inside `ContextAsync`
- `onFailure` — called when an error occurs while processing a request
- `onContinue` — called when a request includes an `Expect: 100-continue` header; return
`true` to accept the body or `false` to reject it
```lean
structure MyHandler where
greeting : String
instance : Handler MyHandler where
onRequest self req := do
Response.ok |>.text self.greeting
onFailure self err := do
IO.eprintln s!"Error: {err}"
```
The handler methods operate in the following monads:
- `onRequest` uses `ContextAsync` — an asynchronous monad (`ReaderT CancellationContext Async`) that provides:
- Full access to `Async` operations (spawning tasks, sleeping, concurrent I/O)
- A `CancellationContext` tied to the client connection — when the client disconnects, the
context is cancelled, allowing your handler to detect this and stop work early
- `onFailure` uses `Async`
- `onContinue` uses `Async`
-/


@@ -48,6 +48,12 @@ structure Any where
-/
recvSelector : Selector (Option Chunk)
/--
Non-blocking receive attempt. Returns `none` if no chunk is immediately available,
`some (some chunk)` when a chunk is ready, or `some none` at end-of-stream.
-/
tryRecv : Async (Option (Option Chunk))
/--
Returns the declared size.
-/
@@ -67,6 +73,7 @@ def ofBody [Http.Body α] (body : α) : Any where
close := Http.Body.close body
isClosed := Http.Body.isClosed body
recvSelector := Http.Body.recvSelector body
tryRecv := Http.Body.tryRecv body
getKnownSize := Http.Body.getKnownSize body
setKnownSize := Http.Body.setKnownSize body
@@ -77,6 +84,7 @@ instance : Http.Body Any where
close := Any.close
isClosed := Any.isClosed
recvSelector := Any.recvSelector
tryRecv := Any.tryRecv
getKnownSize := Any.getKnownSize
setKnownSize := Any.setKnownSize


@@ -50,6 +50,12 @@ class Body (α : Type) where
-/
recvSelector : α → Selector (Option Chunk)
/--
Non-blocking receive attempt. Returns `none` if no chunk is immediately available,
`some (some chunk)` when a chunk is ready, or `some none` at end-of-stream.
-/
tryRecv (body : α) : Async (Option (Option Chunk))
/--
Gets the declared size of the body.
-/


@@ -52,6 +52,13 @@ Empty bodies are always closed for reading.
def isClosed (_ : Empty) : Async Bool :=
pure true
/--
Non-blocking receive. Empty bodies are always at EOF.
-/
@[inline]
def tryRecv (_ : Empty) : Async (Option (Option Chunk)) :=
pure (some none)
/--
Selector that immediately resolves with end-of-stream for an empty body.
-/
@@ -72,6 +79,7 @@ instance : Http.Body Empty where
close := Empty.close
isClosed := Empty.isClosed
recvSelector := Empty.recvSelector
tryRecv := Empty.tryRecv
getKnownSize _ := pure (some <| .fixed 0)
setKnownSize _ _ := pure ()

View File

@@ -100,6 +100,14 @@ def getKnownSize (full : Full) : Async (Option Body.Length) :=
| none => pure (some (.fixed 0))
| some data => pure (some (.fixed data.size))
/--
Non-blocking receive. `Full` bodies are always in-memory, so data is always
immediately available. Returns `some chunk` on first call, `some none` (EOF)
once consumed or closed.
-/
def tryRecv (full : Full) : Async (Option (Option Chunk)) := do
return some (← full.state.atomically takeChunk)
/--
Selector that immediately resolves to the remaining chunk (or EOF).
-/
@@ -128,6 +136,7 @@ instance : Http.Body Full where
close := Full.close
isClosed := Full.isClosed
recvSelector := Full.recvSelector
tryRecv := Full.tryRecv
getKnownSize := Full.getKnownSize
setKnownSize _ _ := pure ()


@@ -227,6 +227,19 @@ def tryRecv (stream : Stream) : Async (Option Chunk) :=
Channel.pruneFinishedWaiters
Channel.tryRecv'
/--
Non-blocking receive for the `Body` typeclass. Returns `none` when no producer is
waiting and the channel is still open, `some (some chunk)` when data is ready,
or `some none` at end-of-stream (channel closed with no pending producer).
-/
def tryRecvBody (stream : Stream) : Async (Option (Option Chunk)) :=
stream.state.atomically do
Channel.pruneFinishedWaiters
if ← Channel.recvReady' then
return some (← Channel.tryRecv')
else
return none
private def recv' (stream : Stream) : BaseIO (AsyncTask (Option Chunk)) := do
stream.state.atomically do
Channel.pruneFinishedWaiters
@@ -598,6 +611,7 @@ instance : Http.Body Stream where
close := Stream.close
isClosed := Stream.isClosed
recvSelector := Stream.recvSelector
tryRecv := Stream.tryRecvBody
getKnownSize := Stream.getKnownSize
setKnownSize := Stream.setKnownSize

View File
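The three `tryRecv` implementations above (`Empty`, `Full`, `Stream`) share one `Option (Option Chunk)` contract. A standalone sketch (hypothetical names, not the real API) of how a consumer interprets the result:

```lean
/-- Sketch only: the three-state result of `Body.tryRecv` as implemented
above. `none` is "not ready", `some none` is end-of-stream. -/
def interpretTryRecv (result : Option (Option ByteArray)) : String :=
  match result with
  | none          => "not ready: no producer waiting, fall back to recvSelector"
  | some (some _) => "a chunk is immediately available"
  | some none     => "end of stream: closed with nothing pending"
```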

@@ -156,6 +156,17 @@ end Chunk.ExtensionValue
/--
Represents a chunk of data with optional extensions (key-value pairs).
The interpretation of a chunk depends on the protocol layer consuming it:
- HTTP/1.1: The zero-size wire encoding (`0\r\n\r\n`) is reserved
exclusively as the `last-chunk` terminator. The HTTP/1.1 writer silently discards
any empty chunk (including its extensions) rather than emitting a premature
end-of-body signal. `Encode.encode` on a `Chunk.empty` value does produce
`"0\r\n\r\n"`, but that path bypasses the writer's framing logic.
- HTTP/2 (not yet implemented): Chunked transfer encoding does not exist; HTTP/2 uses DATA
frames instead. This type is specific to the HTTP/1.1 wire format.
Reference: https://httpwg.org/specs/rfc9112.html#rfc.section.7.1
-/
structure Chunk where
@@ -201,7 +212,7 @@ def toString? (chunk : Chunk) : Option String :=
instance : Encode .v11 Chunk where
encode buffer chunk :=
let chunkLen := chunk.data.size
let exts := chunk.extensions.foldl (fun acc (name, value) =>
acc ++ ";" ++ name.value ++ (value.elim "" (fun x => "=" ++ x.quote))) ""
let size := Nat.toDigits 16 chunkLen |>.toArray |>.map Char.toUInt8 |> ByteArray.mk
buffer.append #[size, exts.toUTF8, "\r\n".toUTF8, chunk.data, "\r\n".toUTF8]

View File
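As a sanity check on the encoder above, here is a minimal standalone reconstruction of the chunk framing (a hypothetical helper over `String` rather than `ByteArray`, for readability; it mirrors, but is not, the real `Encode` instance):

```lean
/-- Hypothetical reconstruction of HTTP/1.1 chunk framing: hex size,
optional extensions, CRLF, payload, CRLF. -/
def frameChunk (data : String) (exts : List (String × Option String)) : String :=
  let extStr := exts.foldl (fun acc (name, value) =>
    acc ++ ";" ++ name ++ (value.elim "" (fun v => "=\"" ++ v ++ "\""))) ""
  (Nat.toDigits 16 data.length).asString ++ extStr ++ "\r\n" ++ data ++ "\r\n"

#eval frameChunk "Wiki" []                      -- "4\r\nWiki\r\n"
#eval frameChunk "data" [("trace", some "on")]  -- "4;trace=\"on\"\r\ndata\r\n"
```

Note that `frameChunk "" []` yields `"0\r\n\r\n"`, which is exactly why the writer must discard empty chunks: on the wire that byte sequence is the `last-chunk` terminator.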

@@ -78,7 +78,9 @@ namespace ContentLength
Parses a content length header value.
-/
def parse (v : Value) : Option ContentLength :=
v.value.toNat?.map (.mk)
let s := v.value
if s.isEmpty || !s.all Char.isDigit then none
else s.toNat?.map (.mk)
/--
Serializes a content length header back to a name-value pair.

View File
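The stricter guard above can be illustrated with a standalone sketch (hypothetical helper; the real `parse` operates on a `Header.Value` and wraps the result in `ContentLength`):

```lean
/-- Hypothetical standalone version of the guard above: a Content-Length
value must be a non-empty run of ASCII digits before `toNat?` runs. -/
def parseContentLength (s : String) : Option Nat :=
  if s.isEmpty || !s.all Char.isDigit then none
  else s.toNat?

#eval parseContentLength "42"   -- some 42
#eval parseContentLength ""     -- none
#eval parseContentLength "+42"  -- none: signs are rejected
#eval parseContentLength "4 2"  -- none: embedded whitespace is rejected
```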

@@ -703,22 +703,37 @@ private def writeHead (messageHead : Message.Head dir.swap) (machine : Machine d
let machine := machine.updateKeepAlive shouldKeepAlive
let size := Writer.determineTransferMode machine.writer
-- RFC 7230 §3.3.1: HTTP/1.0 does not support Transfer-Encoding. When the
-- response body length is unknown (chunked mode), fall back to connection-close
-- framing: disable keep-alive and write raw bytes (no chunk encoding, no TE header).
let machine :=
match dir, machine.reader.messageHead.version, size with
| .receiving, Version.v10, .chunked => machine.disableKeepAlive
| _, _, _ => machine
let headers := messageHead.headers
-- Add identity header based on direction
-- Add identity header based on direction. The handler wins if it already set one.
let headers :=
let identityOpt := machine.config.agentName
match dir, identityOpt with
| .receiving, some server => headers.insert Header.Name.server server
| .sending, some userAgent => headers.insert Header.Name.userAgent userAgent
| .receiving, some server =>
if headers.contains Header.Name.server then headers
else headers.insert Header.Name.server server
| .sending, some userAgent =>
if headers.contains Header.Name.userAgent then headers
else headers.insert Header.Name.userAgent userAgent
| _, none => headers
-- Add Connection header based on keep-alive state and protocol version
-- Add Connection header based on keep-alive state and protocol version.
-- Erase any handler-supplied value first to avoid duplicate or conflicting
-- Connection headers on the wire.
let headers := headers.erase Header.Name.connection
let headers :=
if !machine.keepAlive && !headers.hasEntry Header.Name.connection (.mk "close") then
if !machine.keepAlive then
headers.insert Header.Name.connection (.mk "close")
else if machine.keepAlive && machine.reader.messageHead.version == .v10 &&
    !headers.hasEntry Header.Name.connection (.mk "keep-alive") then
else if machine.reader.messageHead.version == .v10 then
-- RFC 2616 §19.7.1: HTTP/1.0 keep-alive responses must echo Connection: keep-alive
headers.insert Header.Name.connection (.mk "keep-alive")
else
@@ -729,18 +744,29 @@ private def writeHead (messageHead : Message.Head dir.swap) (machine : Machine d
let headers :=
match dir, messageHead with
| .receiving, messageHead =>
if responseForbidsFramingHeaders messageHead.status then
headers.erase Header.Name.contentLength |>.erase Header.Name.transferEncoding
else if messageHead.status == .notModified then
-- `304` carries no body; keep explicit Content-Length metadata if the
-- user supplied it, but never keep Transfer-Encoding.
headers.erase Header.Name.transferEncoding
if responseForbidsFramingHeaders messageHead.status || messageHead.status == .notModified then
headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
else if machine.reader.messageHead.version == .v10 && size == .chunked then
-- RFC 7230 §3.3.1: connection-close framing for HTTP/1.0 — strip all framing
-- headers so neither Content-Length nor Transfer-Encoding appears on the wire.
headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
else
normalizeFramingHeaders headers size
| .sending, _ =>
normalizeFramingHeaders headers size
let state := Writer.State.writingBody size
let state : Writer.State :=
match size with
| .fixed n => .writingBodyFixed n
| .chunked =>
-- RFC 7230 §3.3.1: HTTP/1.0 server-side uses connection-close framing (no chunk framing).
match dir, machine.reader.messageHead.version with
| .receiving, .v10 => .writingBodyClosingFrame
| _, _ => .writingBodyChunked
machine.modifyWriter (fun writer => {
writer with
@@ -891,6 +917,13 @@ def send (machine : Machine dir) (message : Message.Head dir.swap) : Machine dir
| .receiving => message.status.isInformational
| .sending => false
if isInterim then
-- RFC 9110 §15.2: 1xx responses MUST NOT carry a body, so framing headers
-- are meaningless and must not be forwarded even if the handler set them.
let message := Message.Head.setHeaders message
<| message.headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
machine.modifyWriter (fun w => {
w with outputData := Encode.encode (v := .v11) w.outputData message
})
@@ -1091,11 +1124,11 @@ private partial def processFixedBufferedBody (machine : Machine dir) (n : Nat) :
if writer.userClosedBody then
completeWriterMessage machine
else
machine.setWriterState (.writingBody (.fixed 0))
machine.setWriterState (.writingBodyFixed 0)
else
closeOnBadMessage machine
else
machine.setWriterState (.writingBody (.fixed remaining))
machine.setWriterState (.writingBodyFixed remaining)
/--
Handles fixed-length writer state when no user bytes are currently buffered.
@@ -1127,20 +1160,28 @@ private partial def processFixedBody (machine : Machine dir) (n : Nat) : Machine
processFixedIdleBody machine
/--
Processes chunked transfer-encoding output.
Writes buffered chunks when available, writes terminal `0\\r\\n\\r\\n` on
producer close, and supports omitted-body completion.
Processes chunked transfer-encoding output (HTTP/1.1).
-/
private partial def processChunkedBody (machine : Machine dir) : Machine dir :=
if machine.writer.omitBody then
completeOmittedBody machine
else if machine.writer.userClosedBody then
machine.modifyWriter Writer.writeFinalChunk
|> completeWriterMessage
machine.modifyWriter Writer.writeFinalChunk |> completeWriterMessage
else if machine.writer.userData.size > 0 then
machine.modifyWriter Writer.writeChunkedBody
|> processWrite
machine.modifyWriter Writer.writeChunkedBody |> processWrite
else
machine
/--
Processes connection-close body output (HTTP/1.0 server, unknown body length).
-/
private partial def processClosingFrameBody (machine : Machine dir) : Machine dir :=
if machine.writer.omitBody then
completeOmittedBody machine
else if machine.writer.userClosedBody then
machine.modifyWriter Writer.writeRawBody |> completeWriterMessage
else if machine.writer.userData.size > 0 then
machine.modifyWriter Writer.writeRawBody |> processWrite
else
machine
@@ -1174,10 +1215,12 @@ partial def processWrite (machine : Machine dir) : Machine dir :=
|> processWrite
else
machine
| .writingBody (.fixed n) =>
| .writingBodyFixed n =>
processFixedBody machine n
| .writingBody .chunked =>
| .writingBodyChunked =>
processChunkedBody machine
| .writingBodyClosingFrame =>
processClosingFrameBody machine
| .complete =>
processCompleteStep machine
| .closed =>

View File
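The framing decision introduced above (fixed, chunked, or connection-close) can be summarized in a standalone sketch, assuming a server-side (`.receiving`) writer; the enum is a hypothetical mirror of `Writer.State`, not the real type:

```lean
/-- Sketch only: how the known body size and the peer's HTTP version
select the writer's body state in `writeHead` above. -/
inductive BodyFraming
  | fixed (n : Nat)  -- Content-Length framing
  | chunked          -- HTTP/1.1 Transfer-Encoding: chunked
  | closingFrame     -- HTTP/1.0 raw bytes, ended by connection close
  deriving Repr

def pickFraming (peerIsHttp10 : Bool) (knownSize : Option Nat) : BodyFraming :=
  match knownSize with
  | some n => .fixed n
  | none   => if peerIsHttp10 then .closingFrame else .chunked

#eval pickFraming true none   -- BodyFraming.closingFrame (keep-alive is also disabled)
#eval pickFraming false none  -- BodyFraming.chunked
```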

@@ -65,6 +65,14 @@ def Message.Head.headers (m : Message.Head dir) : Headers :=
| .receiving => Request.Head.headers m
| .sending => Response.Head.headers m
/--
Returns a copy of the message head with the headers replaced.
-/
def Message.Head.setHeaders (m : Message.Head dir) (headers : Headers) : Message.Head dir :=
match dir with
| .receiving => { (m : Request.Head) with headers }
| .sending => { (m : Response.Head) with headers }
/--
Gets the version of a `Message`.
-/
@@ -82,7 +90,7 @@ def Message.Head.getSize (message : Message.Head dir) (allowEOFBody : Bool) : Op
match message.headers.getAll? .transferEncoding with
| none =>
match contentLength with
| some #[cl] => .fixed <$> cl.value.toNat?
| some #[cl] => .fixed <$> (Header.ContentLength.parse cl |>.map (·.length))
| some _ => none -- To avoid request smuggling with malformed/multiple content-length headers.
| none => if allowEOFBody then some (.fixed 0) else none

View File

@@ -51,9 +51,20 @@ inductive Writer.State
| waitingForFlush
/--
Writing the body output (either fixed-length or chunked).
Writing a fixed-length body; `n` is the number of bytes still to be sent.
-/
| writingBody (mode : Body.Length)
| writingBodyFixed (n : Nat)
/--
Writing a chunked transfer-encoding body (HTTP/1.1).
-/
| writingBodyChunked
/--
Writing a connection-close body (HTTP/1.0 server, unknown length).
Raw bytes are written without chunk framing; the peer reads until the connection closes.
-/
| writingBodyClosingFrame
/--
Completed writing a single message and ready to begin the next one.
@@ -162,7 +173,9 @@ def canAcceptData (writer : Writer dir) : Bool :=
match writer.state with
| .waitingHeaders => true
| .waitingForFlush => true
| .writingBody _ => !writer.userClosedBody
| .writingBodyFixed _
| .writingBodyChunked
| .writingBodyClosingFrame => !writer.userClosedBody
| _ => false
/--
@@ -185,6 +198,9 @@ def determineTransferMode (writer : Writer dir) : Body.Length :=
/--
Adds user data chunks to the writer's buffer if the writer can accept data.
Empty chunks (zero bytes of data) are accepted here but will be silently dropped
during the chunked-encoding write step — see `writeChunkedBody`.
-/
@[inline]
def addUserData (data : Array Chunk) (writer : Writer dir) : Writer dir :=
@@ -223,12 +239,14 @@ def writeFixedBody (writer : Writer dir) (limitSize : Nat) : Writer dir × Nat :
/--
Writes accumulated user data to output using chunked transfer encoding.
Empty chunks are silently discarded. See `Chunk.empty` for the protocol-level rationale.
-/
def writeChunkedBody (writer : Writer dir) : Writer dir :=
if writer.userData.size = 0 then
writer
else
let data := writer.userData
let data := writer.userData.filter (fun c => !c.data.isEmpty)
{ writer with userData := #[], userDataBytes := 0, outputData := data.foldl (Encode.encode .v11) writer.outputData }
/--
@@ -241,6 +259,15 @@ def writeFinalChunk (writer : Writer dir) : Writer dir :=
state := .complete
}
/--
Writes accumulated user data to output as raw bytes (HTTP/1.0 connection-close framing).
No chunk framing is added; the peer reads until the connection closes.
-/
def writeRawBody (writer : Writer dir) : Writer dir :=
{ writer with
outputData := writer.userData.foldl (fun buf c => buf.write c.data) writer.outputData,
userData := #[], userDataBytes := 0 }
/--
Extracts all accumulated output data and returns it with a cleared output buffer.
-/

View File
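To contrast `writeChunkedBody` with the new `writeRawBody`, a sketch (hypothetical helpers over `String`) of the same two-part body under both framings:

```lean
/-- Hypothetical sketch of the chunked wire bytes, including the terminal
last-chunk that `writeFinalChunk` appends. -/
def chunkedWire (parts : List String) : String :=
  parts.foldl (fun acc p =>
    acc ++ (Nat.toDigits 16 p.length).asString ++ "\r\n" ++ p ++ "\r\n") ""
    ++ "0\r\n\r\n"

/-- Hypothetical sketch of connection-close framing: raw bytes only;
the peer reads until the connection closes. -/
def closeFramedWire (parts : List String) : String :=
  String.join parts

#eval chunkedWire ["Hello, ", "world"]      -- "7\r\nHello, \r\n5\r\nworld\r\n0\r\n\r\n"
#eval closeFramedWire ["Hello, ", "world"]  -- "Hello, world"
```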

@@ -0,0 +1,188 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Internal.Async
public import Std.Internal.Async.TCP
public import Std.Sync.CancellationToken
public import Std.Sync.Semaphore
public import Std.Internal.Http.Server.Config
public import Std.Internal.Http.Server.Handler
public import Std.Internal.Http.Server.Connection
public section
/-!
# HTTP Server
This module defines a simple, asynchronous HTTP/1.1 server implementation.
It provides the `Std.Http.Server` structure, which encapsulates all server state, and functions for
starting, managing, and gracefully shutting down the server.
The server runs entirely using `Async` and uses a shared `CancellationContext` to signal shutdowns.
Each active client connection is tracked, and the server automatically resolves its shutdown
promise once all connections have closed.
-/
namespace Std.Http
open Std.Internal.IO.Async TCP
set_option linter.all true
/--
The `Server` structure holds all state required to manage the lifecycle of an HTTP server, including
connection tracking and shutdown coordination.
-/
structure Server where
/--
The context used for shutting down all connections and the server.
-/
context : Std.CancellationContext
/--
The number of currently active HTTP connections.
-/
activeConnections : Std.Mutex Nat
/--
Semaphore used to enforce the maximum number of simultaneous active connections.
`none` means no connection limit.
-/
connectionLimit : Option Std.Semaphore
/--
Indicates when the server has successfully shut down.
-/
shutdownPromise : Std.Channel Unit
/--
Configuration of the server
-/
config : Std.Http.Config
namespace Server
/--
Create a new `Server` structure with an optional configuration.
-/
def new (config : Std.Http.Config := {}) : IO Server := do
let context ← Std.CancellationContext.new
let activeConnections ← Std.Mutex.new 0
let connectionLimit ←
if config.maxConnections = 0 then
pure none
else
some <$> Std.Semaphore.new config.maxConnections
let shutdownPromise ← Std.Channel.new
return { context, activeConnections, connectionLimit, shutdownPromise, config }
/--
Triggers cancellation of all requests and the accept loop in the server. This function should be used
in conjunction with `waitShutdown` to properly coordinate the shutdown sequence.
-/
@[inline]
def shutdown (s : Server) : Async Unit :=
s.context.cancel .shutdown
/--
Waits for the server to shut down. Blocks until another task or async operation calls the `shutdown` function.
-/
@[inline]
def waitShutdown (s : Server) : Async Unit := do
Async.ofAsyncTask ((← s.shutdownPromise.recv).map Except.ok)
/--
Returns a `Selector` that waits for the server to shut down.
-/
@[inline]
def waitShutdownSelector (s : Server) : Selector Unit :=
s.shutdownPromise.recvSelector
/--
Triggers cancellation of all requests and the accept loop, then waits for the server to fully shut down.
This is a convenience function combining `shutdown` and then `waitShutdown`.
-/
@[inline]
def shutdownAndWait (s : Server) : Async Unit := do
s.context.cancel .shutdown
s.waitShutdown
@[inline]
private def frameCancellation (s : Server) (releaseConnectionPermit : Bool := false)
(action : ContextAsync α) : ContextAsync α := do
s.activeConnections.atomically (modify (· + 1))
try
action
finally
if releaseConnectionPermit then
if let some limit := s.connectionLimit then
limit.release
s.activeConnections.atomically do
modify (· - 1)
if (← get) = 0 ∧ (← s.context.isCancelled) then
discard <| s.shutdownPromise.send ()
/--
Start a new HTTP/1.1 server on the given socket address. This function uses `Async` to handle tasks
and TCP connections, and returns a `Server` structure that can be used to cancel the server.
-/
def serve {σ : Type} [Handler σ]
(addr : Net.SocketAddress)
(handler : σ)
(config : Config := {}) (backlog : UInt32 := 1024) : Async Server := do
let httpServer ← Server.new config
let server ← Socket.Server.mk
server.bind addr
server.listen backlog
server.noDelay
let runServer := do
frameCancellation httpServer (action := do
while true do
let permitAcquired ←
if let some limit := httpServer.connectionLimit then
let permit ← limit.acquire
await permit
pure true
else
pure false
let result ← Selectable.one #[
.case (server.acceptSelector) (fun x => pure <| some x),
.case (← ContextAsync.doneSelector) (fun _ => pure none)
]
match result with
| some client =>
let extensions ← do
match (← EIO.toBaseIO client.getPeerName) with
| .ok addr => pure <| Extensions.empty.insert (Server.RemoteAddr.mk addr)
| .error _ => pure Extensions.empty
ContextAsync.background
(frameCancellation httpServer (releaseConnectionPermit := permitAcquired)
(action := do
serveConnection client handler config extensions))
| none =>
if permitAcquired then
if let some limit := httpServer.connectionLimit then
limit.release
break
)
background (runServer httpServer.context)
return httpServer
end Std.Http.Server

View File
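A hypothetical usage sketch of the lifecycle API above. The handler type and its `Handler` instance are assumptions, not the real API; only `serve`, `shutdown`, and `waitShutdown` come from this module:

```lean
/-- Sketch only: start the server, then coordinate a graceful shutdown.
`σ`/`handler` stand in for a user-defined `Handler` instance. -/
def runUntilDone [Handler σ] (addr : Net.SocketAddress) (handler : σ) : Async Unit := do
  let server ← Server.serve addr handler { maxConnections := 256 }
  -- ... from some other task, trigger: server.shutdown
  -- Block here until every active connection has drained and closed.
  server.waitShutdown
```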

@@ -0,0 +1,196 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Time
public import Std.Internal.Http.Protocol.H1
public section
/-!
# Config
This module exposes `Config`, a structure that describes the timeout, request, and
header configuration of an HTTP server.
-/
namespace Std.Http
set_option linter.all true
/--
Configuration for an HTTP server: connection limits, timeouts, and parser bounds.
-/
structure Config where
/--
Maximum number of simultaneous active connections (default: 1024).
Setting this to `0` disables the limit entirely: the server will accept any number of
concurrent connections and no semaphore-based cap is enforced. Use with care — an
unconstrained server may exhaust file descriptors or memory under adversarial load.
-/
maxConnections : Nat := 1024
/--
Maximum number of requests per connection.
-/
maxRequests : Nat := 100
/--
Maximum number of headers allowed per request.
-/
maxHeaders : Nat := 50
/--
Maximum aggregate byte size of all header field lines in a single message
(name + value bytes plus 4 bytes per line for `: ` and `\r\n`). Default: 64 KiB.
-/
maxHeaderBytes : Nat := 65536
/--
Timeout (in milliseconds) for receiving additional data while a request is actively being
processed (e.g. reading the request body). Applies after the request headers have been parsed
and replaces the keep-alive timeout for the duration of the request.
-/
lingeringTimeout : Time.Millisecond.Offset := 10000
/--
Timeout (in milliseconds) for idle keep-alive connections. Must be positive.
-/
keepAliveTimeout : { x : Time.Millisecond.Offset // x > 0 } := ⟨12000, by decide⟩
/--
Maximum time (in milliseconds) allowed to receive the complete request headers after the first
byte of a new request arrives. This prevents Slowloris-style attacks where a client sends bytes
at a slow rate to hold a connection slot open without completing a request. Once a request starts,
each individual read must complete within this window. Default: 5 seconds.
-/
headerTimeout : Time.Millisecond.Offset := 5000
/--
Whether to enable keep-alive connections by default.
-/
enableKeepAlive : Bool := true
/--
The maximum size that the connection can receive in a single recv call.
-/
maximumRecvSize : Nat := 8192
/--
Default buffer size for the connection
-/
defaultPayloadBytes : Nat := 8192
/--
Whether to automatically generate the `Date` header in responses.
-/
generateDate : Bool := true
/--
The `Server` header value injected into outgoing responses.
`none` suppresses the header entirely.
-/
serverName : Option Header.Value := some (.mk "LeanHTTP/1.1")
/--
Maximum length of request URI (default: 8192 bytes)
-/
maxUriLength : Nat := 8192
/--
Maximum number of bytes consumed while parsing request start-lines (default: 8192 bytes).
-/
maxStartLineLength : Nat := 8192
/--
Maximum length of header field name (default: 256 bytes)
-/
maxHeaderNameLength : Nat := 256
/--
Maximum length of header field value (default: 8192 bytes)
-/
maxHeaderValueLength : Nat := 8192
/--
Maximum number of spaces in delimiter sequences (default: 16)
-/
maxSpaceSequence : Nat := 16
/--
Maximum number of leading empty lines (bare CRLF) to skip before a request-line
(RFC 9112 §2.2 robustness). Default: 8.
-/
maxLeadingEmptyLines : Nat := 8
/--
Maximum length of chunk extension name (default: 256 bytes)
-/
maxChunkExtNameLength : Nat := 256
/--
Maximum length of chunk extension value (default: 256 bytes)
-/
maxChunkExtValueLength : Nat := 256
/--
Maximum number of bytes consumed while parsing one chunk-size line with extensions (default: 8192 bytes).
-/
maxChunkLineLength : Nat := 8192
/--
Maximum allowed chunk payload size in bytes (default: 8 MiB).
-/
maxChunkSize : Nat := 8 * 1024 * 1024
/--
Maximum allowed total body size per request in bytes (default: 64 MiB).
-/
maxBodySize : Nat := 64 * 1024 * 1024
/--
Maximum length of reason phrase (default: 512 bytes)
-/
maxReasonPhraseLength : Nat := 512
/--
Maximum number of trailer headers (default: 20)
-/
maxTrailerHeaders : Nat := 20
/--
Maximum number of extensions on a single chunk-size line (default: 16).
-/
maxChunkExtensions : Nat := 16
namespace Config
/--
Converts to HTTP/1.1 config.
-/
def toH1Config (config : Config) : Protocol.H1.Config where
maxMessages := config.maxRequests
maxHeaders := config.maxHeaders
maxHeaderBytes := config.maxHeaderBytes
enableKeepAlive := config.enableKeepAlive
agentName := config.serverName
maxUriLength := config.maxUriLength
maxStartLineLength := config.maxStartLineLength
maxHeaderNameLength := config.maxHeaderNameLength
maxHeaderValueLength := config.maxHeaderValueLength
maxSpaceSequence := config.maxSpaceSequence
maxLeadingEmptyLines := config.maxLeadingEmptyLines
maxChunkExtensions := config.maxChunkExtensions
maxChunkExtNameLength := config.maxChunkExtNameLength
maxChunkExtValueLength := config.maxChunkExtValueLength
maxChunkLineLength := config.maxChunkLineLength
maxChunkSize := config.maxChunkSize
maxBodySize := config.maxBodySize
maxReasonPhraseLength := config.maxReasonPhraseLength
maxTrailerHeaders := config.maxTrailerHeaders
end Std.Http.Config

View File
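Overriding defaults of the structure above looks as follows; note that the subtype field `keepAliveTimeout` takes an anonymous constructor carrying its positivity proof (`maxConnections := 0` disables the cap, as documented above):

```lean
/-- Sketch: a customized server configuration. -/
def tightConfig : Std.Http.Config := {
  maxConnections := 0,             -- 0 disables the connection cap entirely
  maxBodySize := 1 * 1024 * 1024,  -- cap request bodies at 1 MiB
  keepAliveTimeout := ⟨5000, by decide⟩
}
```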

@@ -0,0 +1,560 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Internal.Async.TCP
public import Std.Internal.Async.ContextAsync
public import Std.Internal.Http.Transport
public import Std.Internal.Http.Protocol.H1
public import Std.Internal.Http.Server.Config
public import Std.Internal.Http.Server.Handler
public section
namespace Std
namespace Http
namespace Server
open Std Internal IO Async TCP Protocol
open Time
/-!
# Connection
This module defines `Server.Connection`, a structure used to handle a single HTTP connection with
possibly multiple requests.
-/
set_option linter.all true
/--
Represents the remote address of a client connection.
-/
structure RemoteAddr where
/--
The socket address of the remote client.
-/
addr : Net.SocketAddress
deriving TypeName
instance : ToString RemoteAddr where
toString addr := toString addr.addr
/--
A single HTTP connection.
-/
structure Connection (α : Type) where
/--
The client connection.
-/
socket : α
/--
The processing machine for HTTP/1.1.
-/
machine : H1.Machine .receiving
/--
Extensions to attach to each request (e.g., remote address).
-/
extensions : Extensions := .empty
namespace Connection
/--
Events produced by the async select loop in `receiveWithTimeout`.
Each variant corresponds to one possible outcome of waiting for I/O.
-/
private inductive Recv (β : Type)
| bytes (x : Option ByteArray)
| responseBody (x : Option Chunk)
| bodyInterest (x : Bool)
| response (x : (Except Error (Response β)))
| timeout
| shutdown
| close
/--
The set of I/O sources to wait on during a single poll iteration.
Each `Option` field is `none` when that source is not currently active.
-/
private structure PollSources (α β : Type) where
socket : Option α
expect : Option Nat
response : Option (Std.Channel (Except Error (Response β)))
responseBody : Option β
requestBody : Option Body.Stream
timeout : Millisecond.Offset
keepAliveTimeout : Option Millisecond.Offset
headerTimeout : Option Timestamp
connectionContext : CancellationContext
/--
Waits for the next I/O event across all active sources described by `sources`.
Computes the socket recv size from `config`, then races all active selectables.
Calls `Handler.onFailure` and returns `.close` on transport errors.
-/
private def pollNextEvent
{σ β : Type} [Transport α] [Handler σ] [Body β]
(config : Config) (handler : σ) (sources : PollSources α β)
: Async (Recv β) := do
let expectedBytes := sources.expect
|>.getD config.defaultPayloadBytes
|>.min config.maximumRecvSize
|>.toUInt64
let mut selectables : Array (Selectable (Recv β)) := #[
.case sources.connectionContext.doneSelector (fun _ => do
let reason ← sources.connectionContext.getCancellationReason
match reason with
| some .deadline => pure .timeout
| _ => pure .shutdown)
]
if let some socket := sources.socket then
selectables := selectables.push (.case (Transport.recvSelector socket expectedBytes) (Recv.bytes · |> pure))
if sources.keepAliveTimeout.isNone then
if let some timeout := sources.headerTimeout then
selectables := selectables.push (.case (← Selector.sleep (timeout - (← Timestamp.now)).toMilliseconds) (fun _ => pure .timeout))
else
selectables := selectables.push (.case (← Selector.sleep sources.timeout) (fun _ => pure .timeout))
if let some responseBody := sources.responseBody then
selectables := selectables.push (.case (Body.recvSelector responseBody) (Recv.responseBody · |> pure))
if let some requestBody := sources.requestBody then
selectables := selectables.push (.case (requestBody.interestSelector) (Recv.bodyInterest · |> pure))
if let some response := sources.response then
selectables := selectables.push (.case response.recvSelector (Recv.response · |> pure))
try Selectable.one selectables
catch e =>
Handler.onFailure handler e
pure .close
/--
Handles the `Expect: 100-continue` protocol for a pending request head.
Races between the handler's decision (`Handler.onContinue`), the connection being
cancelled, and a lingering timeout. Returns the updated machine and whether
`pendingHead` should be cleared (i.e. when the request is rejected).
-/
private def handleContinueEvent
{σ : Type} [Handler σ]
(handler : σ) (machine : H1.Machine .receiving) (head : Request.Head)
(config : Config) (connectionContext : CancellationContext)
: Async (H1.Machine .receiving × Bool) := do
let continueChannel : Std.Channel Bool ← Std.Channel.new
let continueTask ← Handler.onContinue handler head |>.asTask
BaseIO.chainTask continueTask fun
| .ok v => discard <| continueChannel.send v
| .error _ => discard <| continueChannel.send false
let canContinue ← Selectable.one #[
.case continueChannel.recvSelector pure,
.case connectionContext.doneSelector (fun _ => pure false),
.case (← Selector.sleep config.lingeringTimeout) (fun _ => pure false)
]
let status := if canContinue then Status.«continue» else Status.expectationFailed
return (machine.canContinue status, !canContinue)
/--
Injects a `Date` header into a response head if `Config.generateDate` is set
and the response does not already include one.
-/
private def prepareResponseHead (config : Config) (head : Response.Head) : Async Response.Head := do
if config.generateDate ∧ ¬head.headers.contains Header.Name.date then
let now ← Std.Time.DateTime.now (tz := .UTC)
return { head with headers := head.headers.insert Header.Name.date (Header.Value.ofString! now.toRFC822String) }
else
return head
/--
Applies a successful handler response to the machine.
Optionally injects a `Date` header, records the known body size, and sends the
response head. Returns the updated machine and the body stream to drain, or `none`
when the body should be omitted (e.g., for HEAD requests).
-/
private def applyResponse
{β : Type} [Body β]
(config : Config) (machine : H1.Machine .receiving) (res : Response β)
: Async (H1.Machine .receiving × Option β) := do
let size ← Body.getKnownSize res.body
let machineSized :=
if let some knownSize := size
then machine.setKnownSize knownSize
else machine
let responseHead ← prepareResponseHead config res.line
let machineWithHead := machineSized.send responseHead
if machineWithHead.writer.omitBody then
if ¬(← Body.isClosed res.body) then
Body.close res.body
return (machineWithHead, none)
else
return (machineWithHead, some res.body)
/--
All mutable state carried through the connection processing loop.
Bundled into a struct so it can be passed to and returned from helper functions.
-/
private structure ConnectionState (β : Type) where
machine : H1.Machine .receiving
requestStream : Body.Stream
keepAliveTimeout : Option Millisecond.Offset
currentTimeout : Millisecond.Offset
headerTimeout : Option Timestamp
response : Std.Channel (Except Error (Response β))
respStream : Option β
requiresData : Bool
expectData : Option Nat
handlerDispatched : Bool
pendingHead : Option Request.Head
/--
Processes all H1 events from a single machine step, updating the connection state.
Handles keep-alive resets, body-size tracking, `Expect: 100-continue`, and parse errors.
Returns the updated state; stops early on `.failed`.
-/
private def processH1Events
{σ β : Type} [Handler σ] [Body β]
(handler : σ) (config : Config) (connectionContext : CancellationContext)
(events : Array (H1.Event .receiving))
(state : ConnectionState β)
: Async (ConnectionState β) := do
let mut st := state
for event in events do
match event with
| .needMoreData expect =>
st := { st with requiresData := true, expectData := expect }
| .needAnswer => pure ()
| .endHeaders head =>
-- Sets the pending head and removes the KeepAlive or Header timeout.
st := { st with
currentTimeout := config.lingeringTimeout
keepAliveTimeout := none
headerTimeout := none
pendingHead := some head
}
if let some length := head.getSize true then
-- Sets the size of the body that is going out of the connection.
Body.setKnownSize st.requestStream (some length)
| .«continue» =>
if let some head := st.pendingHead then
let (newMachine, clearPending) ← handleContinueEvent handler st.machine head config connectionContext
st := { st with machine := newMachine }
if clearPending then
st := { st with pendingHead := none }
| .next =>
-- Reset all per-request state for the next pipelined request.
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
if let some res := st.respStream then
if ¬(← Body.isClosed res) then
Body.close res
let newStream ← Body.mkStream
st := { st with
requestStream := newStream
response := Std.Channel.new
respStream := none
keepAliveTimeout := some config.keepAliveTimeout.val
currentTimeout := config.keepAliveTimeout.val
headerTimeout := none
handlerDispatched := false
}
| .failed err =>
Handler.onFailure handler (toString err)
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
st := { st with requiresData := false, pendingHead := none }
break
| .closeBody =>
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
| .close => pure ()
return st
/--
Dispatches a pending request head to the handler if one is waiting.
Spawns the handler as an async task and routes its result back through `state.response`.
Returns the updated state with `pendingHead` cleared and `handlerDispatched` set.
-/
private def dispatchPendingRequest
{σ : Type} [Handler σ]
(handler : σ) (extensions : Extensions) (connectionContext : CancellationContext)
(state : ConnectionState (Handler.ResponseBody σ))
: Async (ConnectionState (Handler.ResponseBody σ)) := do
if let some line := state.pendingHead then
let task ← Handler.onRequest handler { line, body := state.requestStream, extensions } connectionContext
|>.asTask
BaseIO.chainTask task (discard ∘ state.response.send)
return { state with pendingHead := none, handlerDispatched := true }
else
return state
/--
Attempts a single non-blocking receive from the body and feeds any available chunk
into the machine, without going through the `Selectable.one` scheduler.
For fully-buffered bodies (e.g. `Body.Full`, `Body.Buffered`) this avoids one
`Selectable.one` round-trip when the chunk is already in memory. Streaming bodies
that have no producer waiting return `none` and fall through to the normal poll loop
unchanged.
Only one chunk is consumed here. Looping would introduce yield points between
`Body.tryRecv` calls, allowing a background producer to race ahead and close the
stream before `writeHead` runs — turning a streaming response with unknown size
into a fixed-length one.
-/
private def tryDrainBody [Body β]
(machine : H1.Machine .receiving) (body : β)
: Async (H1.Machine .receiving × Option β) := do
match Body.tryRecv body with
| none => pure (machine, some body)
| some (some chunk) => pure (machine.sendData #[chunk], some body)
| some none =>
if !(← Body.isClosed body) then Body.close body
pure (machine.userClosedBody, none)
/--
Processes a single async I/O event and updates the connection state.
Returns the updated state and `true` if the connection should be closed immediately.
-/
private def handleRecvEvent
{σ β : Type} [Handler σ] [Body β]
(handler : σ) (config : Config)
(event : Recv β) (state : ConnectionState β)
: Async (ConnectionState β × Bool) := do
match event with
| .bytes (some bs) =>
let mut st := state
-- After the first byte following an idle period, switch from the keep-alive timeout to the per-request header timeout.
if st.keepAliveTimeout.isSome then
st := { st with
keepAliveTimeout := none
headerTimeout := some <| (← Timestamp.now) + config.headerTimeout
}
return ({ st with machine := st.machine.feed bs }, false)
| .bytes none =>
return ({ state with machine := state.machine.noMoreInput }, false)
| .responseBody (some chunk) =>
return ({ state with machine := state.machine.sendData #[chunk] }, false)
| .responseBody none =>
if let some res := state.respStream then
if ¬(← Body.isClosed res) then Body.close res
return ({ state with machine := state.machine.userClosedBody, respStream := none }, false)
| .bodyInterest interested =>
if interested then
let (newMachine, pulledChunk) := state.machine.pullBody
let mut st := { state with machine := newMachine }
if let some pulled := pulledChunk then
try st.requestStream.send pulled.chunk pulled.incomplete
catch e => Handler.onFailure handler e
if pulled.final then
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
return (st, false)
else
return (state, false)
| .close => return (state, true)
| .timeout =>
Handler.onFailure handler "request header timeout"
return ({ state with machine := state.machine.closeWithError .requestTimeout, handlerDispatched := false }, false)
| .shutdown =>
return ({ state with machine := state.machine.closeWithError .serviceUnavailable, handlerDispatched := false }, false)
| .response (.error err) =>
Handler.onFailure handler err
return ({ state with machine := state.machine.closeWithError .internalServerError, handlerDispatched := false }, false)
| .response (.ok res) =>
if state.machine.failed then
if ¬(← Body.isClosed res.body) then Body.close res.body
return ({ state with handlerDispatched := false }, false)
else
let (newMachine, newRespStream) ← applyResponse config state.machine res
-- Eagerly consume one chunk if immediately available (avoids a Selectable.one round-trip).
let (drainedMachine, drainedRespStream) ←
match newRespStream with
| none => pure (newMachine, none)
| some body => tryDrainBody newMachine body
return ({ state with machine := drainedMachine, handlerDispatched := false, respStream := drainedRespStream }, false)
/--
Computes the active `PollSources` for the current connection state.
Determines which IO sources need attention and whether to include the socket.
-/
private def buildPollSources
{α β : Type} [Transport α]
(socket : α) (connectionContext : CancellationContext) (state : ConnectionState β)
: Async (PollSources α β) := do
let requestBodyOpen ←
if state.machine.canPullBody then pure !(← Body.isClosed state.requestStream)
else pure false
let requestBodyInterested ←
if state.machine.canPullBody ∧ requestBodyOpen then state.requestStream.hasInterest
else pure false
let requestBody ←
if state.machine.canPullBodyNow ∧ requestBodyOpen then pure (some state.requestStream)
else pure none
-- Include the socket only when there is more to do than waiting for the handler alone.
let pollSocket :=
state.requiresData ∨ !state.handlerDispatched ∨ state.respStream.isSome ∨
state.machine.writer.sentMessage ∨ (state.machine.canPullBody ∧ requestBodyInterested)
return {
socket := if pollSocket then some socket else none
expect := state.expectData
response := if state.handlerDispatched then some state.response else none
responseBody := state.respStream
requestBody := requestBody
timeout := state.currentTimeout
keepAliveTimeout := state.keepAliveTimeout
headerTimeout := state.headerTimeout
connectionContext := connectionContext
}
/--
Runs the main request/response processing loop for a single connection.
Drives the HTTP/1.1 state machine through four phases each iteration:
send buffered output, process H1 events, dispatch pending requests, poll for I/O.
-/
private def handle
{σ : Type} [Transport α] [h : Handler σ]
(connection : Connection α)
(config : Config)
(connectionContext : CancellationContext)
(handler : σ) : Async Unit := do
let _ : Body (Handler.ResponseBody σ) := Handler.responseBodyInstance
let socket := connection.socket
let initStream ← Body.mkStream
let mut state : ConnectionState (Handler.ResponseBody σ) := {
machine := connection.machine
requestStream := initStream
keepAliveTimeout := some config.keepAliveTimeout.val
currentTimeout := config.keepAliveTimeout.val
headerTimeout := none
response := ← Std.Channel.new
respStream := none
requiresData := false
expectData := none
handlerDispatched := false
pendingHead := none
}
while ¬state.machine.halted do
-- Phase 1: advance the state machine and flush any output.
let (newMachine, step) := state.machine.step
state := { state with machine := newMachine }
if step.output.size > 0 then
try Transport.sendAll socket step.output.data
catch e =>
Handler.onFailure handler e
break
-- Phase 2: process all events emitted by this step.
state ← processH1Events handler config connectionContext step.events state
-- Phase 3: dispatch any newly parsed request to the handler.
state ← dispatchPendingRequest handler connection.extensions connectionContext state
-- Phase 4: wait for the next IO event when any source needs attention.
if state.requiresData ∨ state.handlerDispatched ∨ state.respStream.isSome ∨ state.machine.canPullBody then
state := { state with requiresData := false }
let sources ← buildPollSources socket connectionContext state
let event ← pollNextEvent config handler sources
let (newState, shouldClose) ← handleRecvEvent handler config event state
state := newState
if shouldClose then break
-- Clean up: close all open streams and the socket.
if ¬(← Body.isClosed state.requestStream) then
Body.close state.requestStream
if let some res := state.respStream then
if ¬(← Body.isClosed res) then Body.close res
Transport.close socket
end Connection
/--
Handles request/response processing for a single connection using an `Async` handler.
The library-level entry point for running a server is `Server.serve`.
This function can be used with a `TCP.Socket` or any other type that implements
`Transport` to build custom server loops.
# Example
```lean
-- Create a TCP socket server instance
let server ← Socket.Server.mk
server.bind addr
server.listen backlog
-- Enter an infinite loop to handle incoming client connections
while true do
let client ← server.accept
background (serveConnection client handler config)
```
-/
def serveConnection
{σ : Type} [Transport t] [Handler σ]
(client : t) (handler : σ)
(config : Config) (extensions : Extensions := .empty) : ContextAsync Unit := do
(Connection.mk client { config := config.toH1Config } extensions)
|>.handle config (← ContextAsync.getContext) handler
end Std.Http.Server

View File

@@ -0,0 +1,126 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Internal.Async
public import Std.Internal.Http.Data
public import Std.Internal.Async.ContextAsync
public section
namespace Std.Http.Server
open Std.Internal.IO.Async
set_option linter.all true
/--
A type class for handling HTTP server requests. Implement this class to define how the server
responds to incoming requests, failures, and `Expect: 100-continue` headers.
-/
class Handler (σ : Type) where
/--
Concrete body type produced by `onRequest`.
Defaults to `Body.Any`, but handlers may override it with any reader/writer-compatible body.
-/
ResponseBody : Type := Body.Any
/--
Body instance required by the connection loop for receiving response chunks.
-/
[responseBodyInstance : Body ResponseBody]
/--
Called for each incoming HTTP request.
-/
onRequest (self : σ) (request : Request Body.Stream) : ContextAsync (Response ResponseBody)
/--
Called when an I/O or transport error occurs while processing a request (e.g. broken socket,
handler exception). This is a **notification only**: the connection will close regardless of
the handler's response. Use this for logging and metrics. The default implementation does nothing.
-/
onFailure (self : σ) (error : IO.Error) : Async Unit :=
pure ()
/--
Called when a request includes an `Expect: 100-continue` header. Return `true` to send a
`100 Continue` response and accept the body. If `false` is returned the server sends
`417 Expectation Failed`, disables keep-alive, and closes the request body reader.
This function is guarded by `Config.lingeringTimeout` and may be cancelled on server shutdown.
The default implementation always returns `true`.
-/
onContinue (self : σ) (request : Request.Head) : Async Bool :=
pure true
/--
A stateless HTTP handler.
-/
structure StatelessHandler where
/--
Function called for each incoming request.
-/
onRequest : Request Body.Stream → ContextAsync (Response Body.Any)
/--
Function called when an I/O or transport error occurs. The default does nothing.
-/
onFailure : IO.Error → Async Unit := fun _ => pure ()
/--
Function called when a request includes `Expect: 100-continue`. Return `true` to accept
the body or `false` to reject it with `417 Expectation Failed`. The default always accepts.
-/
onContinue : Request.Head → Async Bool := fun _ => pure true
instance : Handler StatelessHandler where
onRequest self request := self.onRequest request
onFailure self error := self.onFailure error
onContinue self request := self.onContinue request
namespace Handler
/--
Builds a `StatelessHandler` from a request-handling function.
-/
def ofFn
(f : Request Body.Stream → ContextAsync (Response Body.Any)) :
StatelessHandler :=
{ onRequest := f }
/--
Builds a `StatelessHandler` from all three callback functions.
-/
def ofFns
(onRequest : Request Body.Stream → ContextAsync (Response Body.Any))
(onFailure : IO.Error → Async Unit := fun _ => pure ())
(onContinue : Request.Head → Async Bool := fun _ => pure true) :
StatelessHandler :=
{ onRequest, onFailure, onContinue }
/--
Returns a copy of the handler with the given failure callback. Useful for
attaching error logging to a handler.
-/
def withFailure
(handler : StatelessHandler)
(onFailure : IO.Error → Async Unit) :
StatelessHandler :=
{ handler with onFailure }
/--
Returns a copy of the handler with the given `Expect: 100-continue` callback.
-/
def withContinue
(handler : StatelessHandler)
(onContinue : Request.Head → Async Bool) :
StatelessHandler :=
{ handler with onContinue }
end Handler
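As an illustrative sketch of how these builders compose (the handler body and log message are invented for this example, and it assumes `IO` actions lift into `Async`), a stateless handler with error logging could be assembled as:

```lean
-- Hypothetical: echo the request URI and log any transport failure.
def loggingEchoHandler : StatelessHandler :=
  Handler.withFailure
    (Handler.ofFn fun req => Response.ok |>.text (toString req.line.uri))
    (fun err => IO.println s!"request failed: {err}")
```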
end Std.Http.Server

View File

@@ -0,0 +1,243 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Internal.Http.Server
public import Std.Internal.Async
public import Std.Internal.Async.Timer
import Init.Data.String.Legacy
public section
open Std.Internal.IO Async
open Std Http
namespace Std.Http.Internal.Test
abbrev TestHandler := Request Body.Stream → ContextAsync (Response Body.Any)
instance : Std.Http.Server.Handler TestHandler where
onRequest handler request := handler request
/--
Default config for server tests. Short lingering timeout, no Date header.
-/
def defaultConfig : Config :=
{ lingeringTimeout := 1000, generateDate := false }
private def sendRaw
(client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config) : IO ByteArray :=
Async.block do
client.send raw
Std.Http.Server.serveConnection server handler config |>.run
let res ← client.recv?
pure (res.getD .empty)
private def sendClose
(client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config) : IO ByteArray :=
Async.block do
client.send raw
client.getSendChan.close
Std.Http.Server.serveConnection server handler config |>.run
let res ← client.recv?
pure (res.getD .empty)
-- Timeout wrapper
private def withTimeout {α : Type} (name : String) (ms : Nat) (action : IO α) : IO α := do
let task ← IO.asTask action
let ticks := (ms + 9) / 10
let rec loop : Nat → IO α
| 0 => do IO.cancel task; throw <| IO.userError s!"'{name}' timed out after {ms}ms"
| n + 1 => do
if (← IO.getTaskState task) == .finished then
match ← IO.wait task with
| .ok x => pure x
| .error e => throw e
else IO.sleep 10; loop n
loop ticks
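For illustration (the action and message below are invented), an action that overruns its deadline surfaces as a named `IO.userError`:

```lean
-- Hypothetical: sleep past the 50ms deadline and observe the timeout error.
#eval do
  try withTimeout "slow op" 50 (IO.sleep 200)
  catch e => IO.println s!"caught: {e}"
```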
-- Test grouping
/--
Run `tests` and wrap any failure message with the group name.
Use as `#eval runGroup "Topic" do ...`.
-/
def runGroup (name : String) (tests : IO Unit) : IO Unit :=
try tests
catch e => throw (IO.userError s!"[{name}]\n{e}")
-- Per-test runners
/--
Create a fresh mock connection, send `raw`, and run assertions.
-/
def check
(name : String)
(raw : String)
(handler : TestHandler)
(expect : ByteArray → IO Unit)
(config : Config := defaultConfig) : IO Unit := do
let (client, server) ← Mock.new
let response ← sendRaw client server raw.toUTF8 handler config
try expect response
catch e => throw (IO.userError s!"[{name}] {e}")
/--
Like `check` but closes the client channel before running the server.
Use for tests involving truncated input or silent-close (EOF-triggered behavior).
-/
def checkClose
(name : String)
(raw : String)
(handler : TestHandler)
(expect : ByteArray → IO Unit)
(config : Config := defaultConfig) : IO Unit := do
let (client, server) ← Mock.new
let response ← sendClose client server raw.toUTF8 handler config
try expect response
catch e => throw (IO.userError s!"[{name}] {e}")
/--
Like `check` wrapped in a wall-clock timeout.
Required when the test involves streaming, async timers, or keep-alive behavior.
-/
def checkTimed
(name : String)
(ms : Nat := 2000)
(raw : String)
(handler : TestHandler)
(expect : ByteArray → IO Unit)
(config : Config := defaultConfig) : IO Unit :=
withTimeout name ms <| check name raw handler expect config
-- Assertion helpers
/--
Assert the response starts with `prefix_` (e.g. `"HTTP/1.1 200"`).
-/
def assertStatus (response : ByteArray) (prefix_ : String) : IO Unit := do
let text := String.fromUTF8! response
unless text.startsWith prefix_ do
throw <| IO.userError s!"expected status {prefix_.quote}, got:\n{text.quote}"
/--
Assert the response is byte-for-byte equal to `expected`.
Use sparingly — prefer `assertStatus` + `assertContains` for 200 responses.
-/
def assertExact (response : ByteArray) (expected : String) : IO Unit := do
let text := String.fromUTF8! response
unless text == expected do
throw <| IO.userError s!"expected:\n{expected.quote}\ngot:\n{text.quote}"
/--
Assert `needle` appears anywhere in the response.
-/
def assertContains (response : ByteArray) (needle : String) : IO Unit := do
let text := String.fromUTF8! response
unless text.contains needle do
throw <| IO.userError s!"expected to contain {needle.quote}, got:\n{text.quote}"
/--
Assert `needle` does NOT appear in the response.
-/
def assertAbsent (response : ByteArray) (needle : String) : IO Unit := do
let text := String.fromUTF8! response
if text.contains needle then
throw <| IO.userError s!"expected NOT to contain {needle.quote}, got:\n{text.quote}"
/--
Assert the response contains exactly `n` occurrences of `"HTTP/1.1 "`.
-/
def assertResponseCount (response : ByteArray) (n : Nat) : IO Unit := do
let text := String.fromUTF8! response
let got := (text.splitOn "HTTP/1.1 ").length - 1
unless got == n do
throw <| IO.userError s!"expected {n} HTTP/1.1 responses, got {got}:\n{text.quote}"
-- Common fixed response strings
def r400 : String :=
"HTTP/1.1 400 Bad Request\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def r408 : String :=
"HTTP/1.1 408 Request Timeout\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def r413 : String :=
"HTTP/1.1 413 Content Too Large\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def r417 : String :=
"HTTP/1.1 417 Expectation Failed\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def r431 : String :=
"HTTP/1.1 431 Request Header Fields Too Large\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def r505 : String :=
"HTTP/1.1 505 HTTP Version Not Supported\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
-- Standard handlers
/--
Always respond 200 "ok" without reading the request body.
-/
def okHandler : TestHandler := fun _ => Response.ok |>.text "ok"
/--
Read the full request body and echo it back as text/plain.
-/
def echoHandler : TestHandler := fun req => do
Response.ok |>.text (← req.body.readAll)
/--
Respond 200 with the request URI as the body.
-/
def uriHandler : TestHandler := fun req =>
Response.ok |>.text (toString req.line.uri)
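A sketch of how these runners, handlers, and assertions fit together (the request string is written inline here since the builder helpers are defined below; group and test names are invented):

```lean
#eval runGroup "Smoke" do
  check "root returns 200" "GET / HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n" okHandler
    (fun response => do
      assertStatus response "HTTP/1.1 200"
      assertContains response "ok")
```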
-- Request builder helpers
/--
Minimal GET request. `extra` is appended as raw header lines (each ending with `\x0d\n`)
before the blank line.
-/
def mkGet (path : String := "/") (extra : String := "") : String :=
s!"GET {path} HTTP/1.1\x0d\nHost: example.com\x0d\n{extra}\x0d\n"
/--
GET with `Connection: close`.
-/
def mkGetClose (path : String := "/") : String :=
mkGet path "Connection: close\x0d\n"
/--
POST with a fixed Content-Length body. `extra` is appended before the blank line.
-/
def mkPost (path : String) (body : String) (extra : String := "") : String :=
s!"POST {path} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {body.toUTF8.size}\x0d\n{extra}\x0d\n{body}"
/--
POST with Transfer-Encoding: chunked. `chunkedBody` is the pre-formatted body
(use `chunk` + `chunkEnd` to build it).
-/
def mkChunked (path : String) (chunkedBody : String) (extra : String := "") : String :=
s!"POST {path} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n{extra}\x0d\n{chunkedBody}"
/--
Format a single chunk: `<hex-size>\x0d\n<data>\x0d\n`.
-/
def chunk (data : String) : String :=
let hexSize := Nat.toDigits 16 data.toUTF8.size |> String.ofList
s!"{hexSize}\x0d\n{data}\x0d\n"
/--
The terminal zero-chunk that ends a chunked body.
-/
def chunkEnd : String := "0\x0d\n\x0d\n"
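Putting the chunked builders together (path and payload are illustrative), a chunked POST carrying `"hello"` split across two chunks is:

```lean
-- Expands to "POST /upload HTTP/1.1\r\nHost: example.com\r\n
-- Transfer-Encoding: chunked\r\n\r\n3\r\nhel\r\n2\r\nlo\r\n0\r\n\r\n"
def chunkedExample : String :=
  mkChunked "/upload" (chunk "hel" ++ chunk "lo" ++ chunkEnd)
```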
end Std.Http.Internal.Test

View File

@@ -0,0 +1,253 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Internal.Http.Protocol.H1
public section
/-!
# Transport
This module exposes a `Transport` type class that is used to represent different transport mechanisms
that can be used with an HTTP connection.
-/
namespace Std.Http
open Std Internal IO Async TCP
set_option linter.all true
/--
Generic HTTP interface that abstracts over different transport mechanisms.
-/
class Transport (α : Type) where
/--
Receive data from the client connection, up to the expected size.
Returns `none` if the connection is closed or no data is available.
-/
recv : α → UInt64 → Async (Option ByteArray)
/--
Send all data through the client connection.
-/
sendAll : α → Array ByteArray → Async Unit
/--
Get a selector for receiving data asynchronously.
-/
recvSelector : α → UInt64 → Selector (Option ByteArray)
/--
Close the transport connection.
The default implementation is a no-op; override this for transports that require explicit teardown.
For `Socket.Client`, the runtime closes the file descriptor when the object is finalized.
-/
close : α → IO Unit := fun _ => pure ()
instance : Transport Socket.Client where
recv client expect := client.recv? expect
sendAll client data := client.sendAll data
recvSelector client expect := client.recvSelector expect
namespace Internal
open Internal.IO.Async in
/--
Shared state for a bidirectional mock connection.
-/
private structure Mock.SharedState where
/--
Client to server direction.
-/
clientToServer : Std.CloseableChannel ByteArray
/--
Server to client direction.
-/
serverToClient : Std.CloseableChannel ByteArray
/--
Mock client endpoint for testing.
-/
structure Mock.Client where
private shared : Mock.SharedState
/--
Mock server endpoint for testing.
-/
structure Mock.Server where
private shared : Mock.SharedState
namespace Mock
/--
Creates a mock server and client that are connected to each other and share the
same underlying state, enabling bidirectional communication.
-/
def new : BaseIO (Mock.Client × Mock.Server) := do
let first ← Std.CloseableChannel.new
let second ← Std.CloseableChannel.new
return (⟨⟨first, second⟩⟩, ⟨⟨first, second⟩⟩)
/--
Receives data from a channel, joining all available data up to the expected size. First does a
blocking recv, then greedily consumes available data with tryRecv until `expect` bytes are reached.
-/
def recvJoined (recvChan : Std.CloseableChannel ByteArray) (expect : Option UInt64) : Async (Option ByteArray) := do
match ← await (← recvChan.recv) with
| none => return none
| some first =>
let mut result := first
repeat
if let some expect := expect then
if result.size.toUInt64 ≥ expect then break
match ← recvChan.tryRecv with
| none => break
| some chunk => result := result ++ chunk
return some result
/--
Sends a single ByteArray through a channel.
-/
def send (sendChan : Std.CloseableChannel ByteArray) (data : ByteArray) : Async Unit := do
Async.ofAsyncTask ((← sendChan.send data) |>.map (Except.mapError (IO.userError ∘ toString)))
/--
Sends ByteArrays through a channel.
-/
def sendAll (sendChan : Std.CloseableChannel ByteArray) (data : Array ByteArray) : Async Unit := do
for chunk in data do
send sendChan chunk
/--
Creates a selector for receiving from a channel.
-/
def recvSelector (recvChan : Std.CloseableChannel ByteArray) : Selector (Option ByteArray) :=
recvChan.recvSelector
end Mock
namespace Mock.Client
/--
Gets the receive channel for a client (server to client direction).
-/
def getRecvChan (client : Mock.Client) : Std.CloseableChannel ByteArray :=
client.shared.serverToClient
/--
Gets the send channel for a client (client to server direction).
-/
def getSendChan (client : Mock.Client) : Std.CloseableChannel ByteArray :=
client.shared.clientToServer
/--
Sends a single ByteArray.
-/
def send (client : Mock.Client) (data : ByteArray) : Async Unit :=
Mock.send (getSendChan client) data
/--
Receives data, joining all available chunks.
-/
def recv? (client : Mock.Client) (expect : Option UInt64 := none) : Async (Option ByteArray) :=
Mock.recvJoined (getRecvChan client) expect
/--
Tries to receive data without blocking, joining all immediately available chunks.
Returns `none` if no data is available.
-/
def tryRecv? (client : Mock.Client) (_expect : UInt64 := 0) : BaseIO (Option ByteArray) := do
match ← (getRecvChan client).tryRecv with
| none => return none
| some first =>
let mut result := first
repeat
match ← (getRecvChan client).tryRecv with
| none => break
| some chunk => result := result ++ chunk
return some result
/--
Closes the mock server and client.
-/
def close (client : Mock.Client) : IO Unit := do
if !(← client.shared.clientToServer.isClosed) then client.shared.clientToServer.close
if !(← client.shared.serverToClient.isClosed) then client.shared.serverToClient.close
end Mock.Client
namespace Mock.Server
/--
Gets the receive channel for a server (client to server direction).
-/
def getRecvChan (server : Mock.Server) : Std.CloseableChannel ByteArray :=
server.shared.clientToServer
/--
Gets the send channel for a server (server to client direction).
-/
def getSendChan (server : Mock.Server) : Std.CloseableChannel ByteArray :=
server.shared.serverToClient
/--
Sends a single ByteArray.
-/
def send (server : Mock.Server) (data : ByteArray) : Async Unit :=
Mock.send (getSendChan server) data
/--
Receives data, joining all available chunks.
-/
def recv? (server : Mock.Server) (expect : Option UInt64 := none) : Async (Option ByteArray) :=
Mock.recvJoined (getRecvChan server) expect
/--
Tries to receive data without blocking, joining all immediately available chunks. Returns `none` if no
data is available.
-/
def tryRecv? (server : Mock.Server) (_expect : UInt64 := 0) : BaseIO (Option ByteArray) := do
match ← (getRecvChan server).tryRecv with
| none => return none
| some first =>
let mut result := first
repeat
match ← (getRecvChan server).tryRecv with
| none => break
| some chunk => result := result ++ chunk
return some result
/--
Closes the mock server and client.
-/
def close (server : Mock.Server) : IO Unit := do
if !(← server.shared.clientToServer.isClosed) then server.shared.clientToServer.close
if !(← server.shared.serverToClient.isClosed) then server.shared.serverToClient.close
end Mock.Server
instance : Transport Mock.Client where
recv client expect := Mock.recvJoined (Mock.Client.getRecvChan client) (some expect)
sendAll client data := Mock.sendAll (Mock.Client.getSendChan client) data
recvSelector client _ := Mock.recvSelector (Mock.Client.getRecvChan client)
close client := client.close
instance : Transport Mock.Server where
recv server expect := Mock.recvJoined (Mock.Server.getRecvChan server) (some expect)
sendAll server data := Mock.sendAll (Mock.Server.getSendChan server) data
recvSelector server _ := Mock.recvSelector (Mock.Server.getRecvChan server)
close server := server.close
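As a sketch of the mock pair in action (run inside `Async`; the payload is illustrative, and this assumes the unbounded channel lets `send` complete without a paired receiver):

```lean
-- Hypothetical round-trip through the in-memory client/server pair.
def mockRoundTrip : Async Unit := do
  let (client, server) ← Mock.new
  client.send "ping".toUTF8
  let got ← server.recv?
  assert! got == some "ping".toUTF8
```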
end Internal
end Std.Http

View File

@@ -124,6 +124,9 @@ end IPv4Addr
namespace SocketAddressV4
instance : ToString SocketAddressV4 where
toString sa := toString sa.addr ++ ":" ++ toString sa.port
instance : Coe SocketAddressV4 SocketAddress where
coe addr := .v4 addr
@@ -161,6 +164,9 @@ end IPv6Addr
namespace SocketAddressV6
instance : ToString SocketAddressV6 where
toString sa := "[" ++ toString sa.addr ++ "]:" ++ toString sa.port
instance : Coe SocketAddressV6 SocketAddress where
coe addr := .v6 addr
@@ -186,6 +192,11 @@ end IPAddr
namespace SocketAddress
instance : ToString SocketAddress where
toString
| .v4 sa => toString sa
| .v6 sa => toString sa
/--
Obtain the `AddressFamily` associated with a `SocketAddress`.
-/

View File

@@ -11,6 +11,7 @@ public import Std.Sync.Channel
public import Std.Sync.Mutex
public import Std.Sync.RecursiveMutex
public import Std.Sync.Barrier
public import Std.Sync.Semaphore
public import Std.Sync.SharedMutex
public import Std.Sync.Notify
public import Std.Sync.Broadcast

View File

@@ -0,0 +1,96 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Init.Data.Queue
public import Init.System.Promise
public import Std.Sync.Mutex
public section
namespace Std
private structure SemaphoreState where
permits : Nat
waiters : Std.Queue (IO.Promise Unit) := ∅
deriving Nonempty
/--
Counting semaphore.
`Semaphore.acquire` returns a promise that is resolved once a permit is available.
If a permit is currently available, the returned promise is already resolved.
`Semaphore.release` either resolves one waiting promise or increments the available permits.
-/
structure Semaphore where private mk ::
private lock : Mutex SemaphoreState
/--
Creates a resolved promise.
-/
private def mkResolvedPromise [Nonempty α] (a : α) : BaseIO (IO.Promise α) := do
let promise ← IO.Promise.new
promise.resolve a
return promise
/--
Creates a new semaphore with `permits` initially available permits.
-/
def Semaphore.new (permits : Nat) : BaseIO Semaphore := do
return { lock := Mutex.new { permits } }
/--
Requests one permit.
Returns a promise that resolves once the permit is acquired.
-/
def Semaphore.acquire (sem : Semaphore) : BaseIO (IO.Promise Unit) := do
sem.lock.atomically do
let st ← get
if st.permits > 0 then
set { st with permits := st.permits - 1 }
mkResolvedPromise ()
else
let promise ← IO.Promise.new
set { st with waiters := st.waiters.enqueue promise }
return promise
/--
Tries to acquire a permit without blocking. Returns `true` on success.
-/
def Semaphore.tryAcquire (sem : Semaphore) : BaseIO Bool := do
sem.lock.atomically do
let st ← get
if st.permits > 0 then
set { st with permits := st.permits - 1 }
return true
else
return false
/--
Releases one permit and resolves one waiting acquirer, if any.
-/
def Semaphore.release (sem : Semaphore) : BaseIO Unit := do
let waiter? ← sem.lock.atomically do
let st ← get
match st.waiters.dequeue? with
| some (waiter, waiters) =>
set { st with waiters }
return some waiter
| none =>
set { st with permits := st.permits + 1 }
return none
if let some waiter := waiter? then
waiter.resolve ()
/--
Returns the number of currently available permits.
-/
def Semaphore.availablePermits (sem : Semaphore) : BaseIO Nat :=
sem.lock.atomically do
return (← get).permits
end Std
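A minimal usage sketch of the non-blocking path (names are invented; the blocking path would instead await the promise returned by `acquire`):

```lean
-- Hypothetical: run an action only if a permit is immediately available.
def tryWork (sem : Std.Semaphore) (act : IO Unit) : IO Unit := do
  if ← sem.tryAcquire then
    try act finally sem.release
  else
    IO.println "no permit available"
```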

View File

@@ -22,7 +22,6 @@ options get_default_options() {
opts = opts.update({"quotPrecheck"}, true);
opts = opts.update({"pp", "rawOnError"}, true);
// bump stage0 to pick up new builtin `repeat`/`while` elaborators
// Temporary, core-only flags for editing (i.e. must be part of stage0/bin/lean). Must be synced
// with `LEAN_EXTRA_MAKE_OPTS` build flags in src/CMakeLists.txt.

Binary file not shown.


View File

@@ -23,6 +23,8 @@ echo ">"
echo "> Building $STAGE_NEXT..."
echo ">"
make -C "$BUILD_NEXT" clean-stdlib
LAKE_OVERRIDE_LEAN=true LEAN="$(realpath fake_root/bin/lean)" \
WRAPPER_PREFIX="$(realpath fake_root)" WRAPPER_OUT="$OUT" \
lakeprof record -- \

View File

@@ -10,7 +10,9 @@ namespace SimpBench
def getProofSize (r : Sym.Simp.Result) : MetaM Nat :=
match r with
| .rfl _ _ => return 0
| .step _ p _ _ => p.numObjs
| .step _ p _ _ =>
let p := ShareCommon.shareCommon' p
p.numObjs
def checkWithKernel (r : Sym.Simp.Result) : MetaM Float := do
match r with

View File

@@ -1,4 +1,3 @@
case h
α : Type ?u
inst✝ : DecidableEq α
i : α

View File

@@ -2,8 +2,7 @@
2 * a
/--
trace: case h
x : Nat
trace: x : Nat
⊢ 2 * x = x + x
-/
#guard_msgs in
@@ -19,8 +18,7 @@ by
| a => 2 * a
/--
trace: case h
x : Nat
trace: x : Nat
⊢ 2 * x = x + x
-/
#guard_msgs in

View File

@@ -0,0 +1,18 @@
/-!
Regression test: `case <ctor>` must keep working after `congr; ext _; cases _`
even though the subgoal tags produced by `apply`-family tactics now inherit
the parent tag (without appending the binder name) when only one subgoal is
left. The case-tag matcher erases macro scopes so that `case yield` still
matches tags like `e_a.yield._@._internal._hyg.0`.
-/
example {δ : Type} {m : Type → Type} [Monad m] [LawfulMonad m]
(x : m (ForInStep δ))
(f g : ForInStep δ → m (ForInStep δ))
(h : ∀ a, f a = g a) :
(x >>= f) = (x >>= g) := by
congr
ext step
cases step
case yield => apply h
case done => apply h

View File

@@ -73,6 +73,32 @@ def channelRecvAfterClose : Async Unit := do
#eval channelRecvAfterClose.block
-- Test Body.stream runs producer concurrently and transfers chunks
def bodyStreamSends : Async Unit := do
let incoming ← Body.stream fun outgoing => do
outgoing.send (Chunk.ofByteArray "x".toUTF8)
let first ← incoming.recv
assert! first.isSome
assert! first.get!.data == "x".toUTF8
let done ← incoming.recv
assert! done.isNone
#eval bodyStreamSends.block
-- Test Body.stream closes channel when generator throws
def bodyStreamThrowCloses : Async Unit := do
let incoming ← Body.stream fun _ => do
throw (.userError "boom")
let result incoming.recv
assert! result.isNone
#eval bodyStreamThrowCloses.block
-- Test for-in iteration collects chunks until close
def channelForIn : Async Unit := do
@@ -108,6 +134,34 @@ def channelExtensions : Async Unit := do
#eval channelExtensions.block
-- Test incomplete sends are collapsed before delivery
def channelCollapseIncompleteChunks : Async Unit := do
let stream ← Body.mkStream
let first : Chunk := { data := "aaaaaaaaaa".toUTF8, extensions := #[(.mk "part", some <| .ofString! "first")] }
let second : Chunk := { data := "bbbbbbbbbb".toUTF8, extensions := #[(.mk "part", some <| .ofString! "second")] }
let last : Chunk := { data := "cccccccccccccccccccc".toUTF8, extensions := #[(.mk "part", some <| .ofString! "last")] }
stream.send first (incomplete := true)
stream.send second (incomplete := true)
let noChunkYet ← stream.tryRecv
assert! noChunkYet.isNone
let sendFinal ← async (t := AsyncTask) <| stream.send last
let result ← stream.recv
assert! result.isSome
let merged := result.get!
assert! merged.data == "aaaaaaaaaabbbbbbbbbbcccccccccccccccccccc".toUTF8
assert! merged.data.size == 40
assert! merged.extensions == #[(.mk "part", some <| .ofString! "first")]
await sendFinal
#eval channelCollapseIncompleteChunks.block
-- Test known size metadata
def channelKnownSize : Async Unit := do
@@ -482,6 +536,20 @@ def anyFromEmpty : Async Unit := do
#eval anyFromEmpty.block
-- Test Any wrapping an Incoming channel receives chunks
def anyFromChannel : Async Unit := do
let outgoing ← Body.mkStream
let any := Body.Any.ofBody outgoing
let sendTask ← async (t := AsyncTask) <| outgoing.send (Chunk.ofByteArray "data".toUTF8)
let result ← any.recv
assert! result.isSome
assert! result.get!.data == "data".toUTF8
await sendTask
#eval anyFromChannel.block
-- Test Any.close closes the underlying body
def anyCloseForwards : Async Unit := do

View File

@@ -0,0 +1,320 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- RFC 9112 §6: Transfer-Encoding and Content-Length framing
#eval runGroup "RFC 9112 §6: chunked body baseline" do
check "CL body accepted and echoed"
(raw := mkPost "/echo" "hello" "Connection: close\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "chunked body accepted and echoed"
(raw := mkChunked "/" (chunk "hello" ++ chunkEnd) "Connection: close\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "chunk-size uppercase and leading zeros accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n000A\x0d\n0123456789\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "0123456789")
check "chunk extensions with token and quoted value accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext=val;quoted=\"ok\"\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
#eval runGroup "RFC 9112 §6: chunked parse errors" do
check "invalid chunk-size token → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\nG\x0d\nabc\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
checkClose "chunk terminator must be CRLF → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nxxx__1a\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
checkClose "missing terminal 0-chunk at EOF → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
check "chunk-size trailing whitespace → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5 \x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
#eval runGroup "RFC 9112 §6.1: Transfer-Encoding validation (Critical)" do
-- TE + CL → request smuggling prevention
check "TE + Content-Length → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
-- chunked must be the last coding
check "TE: chunked not last (chunked, gzip) → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked, gzip\x0d\nConnection: close\x0d\n\x0d\nbody")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: duplicate chunked → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked, chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: gzip alone (no chunked framing) → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: gzip\x0d\nConnection: close\x0d\n\x0d\nbody")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: deflate alone → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: deflate\x0d\nConnection: close\x0d\n\x0d\nbody")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: identity alone → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: identity\x0d\nConnection: close\x0d\n\x0d\nbody")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: malformed token list (leading comma) → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: ,chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
check "TE: duplicate header lines with unsupported coding → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nTransfer-Encoding: gzip\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
-- NUL and control chars in TE value
check "NUL byte in TE value → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunk\x00ed\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
check "control char (0x01) in TE value → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunk\x01ed\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
#eval runGroup "RFC 9112 §6.1: Transfer-Encoding accepted cases" do
check "gzip, chunked accepted — chunked is last"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: gzip, chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "br, chunked accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: br, chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "mixed-case Chunked accepted (codings are lowercased)"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: Chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "TE trailing OWS accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked \x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "gzip+chunked chain is visible to handler in TE header"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: gzip, chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := fun req => do
let te := match req.line.headers.getAll? Header.Name.transferEncoding with
| some vs => String.intercalate "|" (vs.map (·.value) |>.toList)
| none => "<none>"
Response.ok |>.text te)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "gzip, chunked")
-- RFC 9112 §6.3: Content-Length
#eval runGroup "RFC 9112 §6.3: Content-Length" do
check "Content-Length with leading zeros accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 005\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "very large Content-Length → 413"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 99999999999999999999\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 413")
check "duplicate Content-Length same value → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := echoHandler)
(expect := (assertExact · r400))
check "duplicate Content-Length different values → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 3\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nabc")
(handler := echoHandler)
(expect := (assertExact · r400))
check "non-numeric Content-Length → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: abc\x0d\nConnection: close\x0d\n\x0d\nbody")
(handler := okHandler)
(expect := (assertExact · r400))
check "negative Content-Length → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: -1\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := (assertExact · r400))
check "extra bytes beyond CL become next pipelined request"
(raw :=
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\n\x0d\nhello" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
let body : String ← req.body.readAll
Response.ok |>.text s!"{toString req.line.uri}:{body}")
(expect := fun r =>
assertResponseCount r 2 *>
assertContains r "/:hello" *>
assertContains r "/second:" *>
assertAbsent r "/second:hello")
-- Chunk extension limits
#eval runGroup "Chunk extension name length (default limit = 256)" do
check "ext name at 256 bytes → accepted"
(raw :=
let name := String.ofList (List.replicate 256 'a')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;{name}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "ext name at 257 bytes → 400"
(raw :=
let name := String.ofList (List.replicate 257 'a')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;{name}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
#eval runGroup "Chunk extension value length (default limit = 256)" do
check "ext token value at 256 bytes → accepted"
(raw :=
let v := String.ofList (List.replicate 256 'v')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext={v}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "ext token value at 257 bytes → 400"
(raw :=
let v := String.ofList (List.replicate 257 'v')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext={v}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
check "ext quoted value at 256 bytes → accepted"
(raw :=
let v := String.ofList (List.replicate 256 'v')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext=\"{v}\"\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "ext quoted value at 257 bytes → 400"
(raw :=
let v := String.ofList (List.replicate 257 'v')
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext=\"{v}\"\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
#eval runGroup "Chunk extension count (default limit = 16)" do
check "16 extensions → accepted"
(raw :=
let exts := (List.range 16).foldl (fun acc i => acc ++ s!";e{i}") ""
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5{exts}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "17 extensions → 400"
(raw :=
let exts := (List.range 17).foldl (fun acc i => acc ++ s!";e{i}") ""
s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5{exts}\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
#eval runGroup "Chunk extension count (custom limit)" do
let cfg := { defaultConfig with maxChunkExtensions := 1 }
check "1 extension with limit=1 → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext1\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(config := cfg)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "2 extensions with limit=1 → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext1;ext2\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(config := cfg)
(expect := (assertExact · r400))
check "0 extensions with limit=1 → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(config := cfg)
(expect := fun r => assertStatus r "HTTP/1.1 200")
#eval runGroup "Chunk extension misc" do
check "name-only extension (no value) → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;novalue\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "hello")
check "extension on terminal zero-chunk → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0;final-ext=done\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "extension with quoted-string value → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;ext=\"hello world\"\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
check "non-token character in ext name → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;bad@name\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := (assertExact · r400))
check "multiple name=value extensions → accepted"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5;a=1;b=2;c=3\x0d\nhello\x0d\n0\x0d\n\x0d\n")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")
-- RFC 9112 §7.1: zero-size chunk encoding
#eval runGroup "RFC 9112 §7.1: empty outgoing chunks must not be encoded as last-chunk" do
-- A zero-size chunk IS the last-chunk terminator; the encoder must skip empty non-final chunks
-- so that subsequent data remains visible to the client.
check "empty chunk with extensions before real data — real data still received"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun _ => do
let ext := (Chunk.ExtensionName.ofString! "test", some (Chunk.ExtensionValue.ofString! "val"))
Response.ok |>.stream fun s => do
s.send { data := .empty, extensions := #[ext] }
s.send (Chunk.ofByteArray "after\n".toUTF8))
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "after")
check "multiple empty chunks before real data — real data still received"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun _ => do
Response.ok |>.stream fun s => do
s.send Chunk.empty
s.send Chunk.empty
s.send (Chunk.ofByteArray "data\n".toUTF8))
(expect := fun r => assertContains r "data")
#eval runGroup "RFC 9110 §8.6: Content-Length strict decimal" do
check "underscore in Content-Length → 400"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 1_0\x0d\nConnection: close\x0d\n\x0d\n0123456789")
(handler := echoHandler)
(expect := fun r => assertExact r r400)
check "valid Content-Length → 200"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := echoHandler)
(expect := fun r => assertStatus r "HTTP/1.1 200")

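For reference, a standalone sketch (not part of the diff) of the RFC 9112 §7.1 chunk framing the tests above exercise: hex size, CRLF, data, CRLF, with a zero-size chunk as terminator. `encodeChunk` is a hypothetical helper, not part of the test suite.

```lean
-- Hypothetical helper illustrating chunk framing: hex size, CRLF, data, CRLF.
def encodeChunk (data : String) : String :=
  s!"{(Nat.toDigits 16 data.utf8ByteSize).asString}\x0d\n{data}\x0d\n"

-- "hello" frames as "5\x0d\nhello\x0d\n"; a message ends with "0\x0d\n\x0d\n".
#eval encodeChunk "hello" ++ "0\x0d\n\x0d\n"
```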
View File

@@ -0,0 +1,193 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Basic method dispatch and streaming responses
#eval runGroup "Basic dispatch" do
check "GET with Content-Length header → 200"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 7\x0d\nConnection: close\x0d\n\x0d\nsurvive")
(handler := fun req => do
if req.line.method == .get && req.line.headers.hasEntry (.mk "content-length") (.ofString! "7")
then Response.ok |>.text "ok"
else Response.badRequest |>.text "bad")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "ok")
check "GET → 200 with body"
(raw := mkGetClose "/api/users")
(handler := fun req => do
if req.line.method == .get then Response.ok |>.text "users list"
else Response.notFound |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "users list")
check "POST with JSON body → 201 Created"
(raw := mkPost "/api/users" "{\"name\":\"Alice\"}" "Content-Type: application/json\x0d\nConnection: close\x0d\n")
(handler := fun req => do
if req.line.headers.hasEntry (.mk "content-type") (.ofString! "application/json")
then Response.new |>.status .created |>.text "Created"
else Response.badRequest |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 201" *> assertContains r "Created")
check "DELETE → 204 No Content"
(raw := "DELETE /api/users/123 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
if req.line.method == .delete then Response.new |>.status .noContent |>.text ""
else Response.notFound |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 204")
check "HEAD → headers present, body absent"
(raw := "HEAD /api/users HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
if req.line.method == .head then Response.ok |>.text ""
else Response.notFound |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertAbsent r "\x0d\n\x0d\nX")
check "OPTIONS with Allow header"
(raw := "OPTIONS * HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
if req.line.method == .options
then Response.ok |>.header! "Allow" "GET, POST, DELETE, OPTIONS" |>.text ""
else Response.badRequest |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "Allow: GET")
check "multiple request headers preserved"
(raw := "GET /api/data HTTP/1.1\x0d\nHost: api.example.com\x0d\nAccept: application/json\x0d\nAuthorization: Bearer tok\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
if req.line.headers.hasEntry (.mk "authorization") (.ofString! "Bearer tok") &&
req.line.headers.hasEntry (.mk "accept") (.ofString! "application/json")
then Response.ok |>.text "authenticated"
else Response.new |>.status .unauthorized |>.text "unauthorized")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "authenticated")
check "query parameters preserved in URI"
(raw := mkGetClose "/api/search?q=test&limit=10")
(handler := fun req => do
if toString req.line.uri == "/api/search?q=test&limit=10"
then Response.ok |>.text "results"
else Response.notFound |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "results")
check "POST with empty body (CL: 0) → 202 Accepted"
(raw := "POST /api/trigger HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 0\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => do
if req.line.headers.hasEntry (.mk "content-length") (.ofString! "0")
then Response.new |>.status .accepted |>.text "triggered"
else Response.badRequest |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 202" *> assertContains r "triggered")
check "URI with percent-encoded characters"
(raw := mkGetClose "/api/users/%C3%A9")
(handler := fun req => do
if toString req.line.uri == "/api/users/%C3%A9"
then Response.ok |>.text "found"
else Response.notFound |>.text "")
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "found")
check "custom response headers preserved"
(raw := mkGetClose "/api/data")
(handler := fun _ =>
Response.ok
|>.header! "Cache-Control" "no-cache"
|>.header! "X-Custom-Header" "custom-value"
|>.text "data")
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Cache-Control: no-cache" *>
assertContains r "X-Custom-Header: custom-value")
check "custom status code (418 I'm a teapot)"
(raw := mkGetClose "/api/teapot")
(handler := fun _ => Response.new |>.status .imATeapot |>.text "I'm a teapot")
(expect := fun r => assertStatus r "HTTP/1.1 418" *> assertContains r "I'm a teapot")
check "large response body (1000 bytes)"
(raw := mkGetClose "/api/large")
(handler := fun _ => Response.ok |>.text (String.ofList (List.replicate 1000 'X')))
(expect := fun r => assertStatus r "HTTP/1.1 200" *> assertContains r "Content-Length: 1000")
-- Streaming responses
#eval runGroup "Streaming responses" do
check "streaming with explicit Content-Length"
(raw := mkGetClose "/stream")
(handler := fun _ => do
let stream ← Body.mkStream
background do
for i in [0:3] do
let sleep ← Sleep.mk 5
sleep.wait
stream.send <| Chunk.ofByteArray s!"chunk{i}\n".toUTF8
stream.close
return Response.ok
|>.header (.mk "content-length") (.mk "21")
|>.body stream)
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Content-Length: 21" *>
assertContains r "chunk0")
check "streaming with setKnownSize"
(raw := mkGetClose "/stream-sized")
(handler := fun _ => do
let stream ← Body.mkStream
stream.setKnownSize (some (.fixed 15))
background do
for i in [0:3] do
stream.send <| Chunk.ofByteArray s!"data{i}".toUTF8
stream.close
return Response.ok |>.body stream)
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Content-Length: 15" *>
assertContains r "data0")
check "streaming chunked encoding (unknown size)"
(raw := mkGetClose "/stream-chunked")
(handler := fun _ => do
let stream ← Body.mkStream
background do
stream.send <| Chunk.ofByteArray "hello".toUTF8
stream.send <| Chunk.ofByteArray "world".toUTF8
stream.close
return Response.ok |>.body stream)
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Transfer-Encoding: chunked" *>
assertContains r "hello" *>
assertContains r "world")
check "chunked request + streaming response"
(raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\ndata1\x0d\n5\x0d\ndata2\x0d\n0\x0d\n\x0d\n")
(handler := fun req => do
let stream ← Body.mkStream
let te := req.line.headers.get? (.mk "transfer-encoding")
if te.isSome then
background do
stream.send <| Chunk.ofByteArray "response0".toUTF8
stream.send <| Chunk.ofByteArray "response1".toUTF8
stream.close
return Response.ok
|>.header (.mk "content-length") (.mk "18")
|>.body stream
else
stream.close
return Response.badRequest |>.body stream)
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "response0" *>
assertContains r "response1")
check "fixed-length overflow: output stops at announced length"
(raw := mkGetClose "/overflow")
(handler := fun _ => do
let stream ← Body.mkStream
background do
stream.send <| Chunk.ofByteArray "abcdef".toUTF8
stream.close
return Response.ok
|>.header (.mk "content-length") (.mk "3")
|>.body stream)
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Content-Length: 3")

View File

@@ -348,21 +348,21 @@ info: "HTTP/1.1 418 I'm a teapot\x0d\n\x0d\n"
/-! ## Edge cases: Status encoding -/
-- Status.other 0: minimum possible value
-- Status.other 104: minimum valid non-known code (100–103 are all named)
/--
info: "999 Unknown"
-/
#guard_msgs in
#eval encodeStr (Status.other 999, "Unknown", by decide, by decide, by decide)
-- Status.other that overlaps with a named status (100 = Continue)
-- Status.other 209: non-named code between two known blocks
/--
info: "888 Unknown"
-/
#guard_msgs in
#eval encodeStr (Status.other 888, "Unknown", by decide, by decide, by decide)
-- Status.other max UInt16
-- Status.other 999: maximum valid code
/--
info: "999 Unknown"
-/
@@ -410,7 +410,7 @@ info: "f\x0d\n0123456789abcde\x0d\n"
#guard_msgs in
#eval encodeStr (Chunk.ofByteArray "0123456789abcde".toUTF8)
-- Chunk.ofByteArray with empty data (same as Chunk.empty)
-- Chunk.ofByteArray with empty data encodes as the last-chunk terminator.
/--
info: "0\x0d\n\x0d\n"
-/

View File

@@ -0,0 +1,169 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Handlers for Expect: 100-continue testing
private structure RejectContinueHandler where
onRequestCalls : IO.Ref Nat
instance : Std.Http.Server.Handler RejectContinueHandler where
onRequest self _ := do
self.onRequestCalls.modify (· + 1)
Response.ok |>.text "request-ran"
onContinue _ _ := pure false
private structure AcceptContinueHandler where
onRequestCalls : IO.Ref Nat
instance : Std.Http.Server.Handler AcceptContinueHandler where
onRequest self request := do
self.onRequestCalls.modify (· + 1)
let body : String ← request.body.readAll
Response.ok |>.text s!"accepted:{body}"
onContinue _ _ := pure true
-- Per-test runner for generic handlers
private def checkH {σ : Type} [Std.Http.Server.Handler σ]
(name : String)
(raw : String)
(handler : σ)
(expect : ByteArray → IO Unit)
(config : Config := defaultConfig) : IO Unit := do
let (client, server) ← Mock.new
let response ← Async.block do
client.send raw.toUTF8
Std.Http.Server.serveConnection server handler config |>.run
return (← client.recv?).getD .empty
try expect response
catch e => throw (IO.userError s!"[{name}] {e}")
private def assertCallCount (ref : IO.Ref Nat) (expected : Nat) : IO Unit := do
let got ← ref.get
unless got == expected do
throw <| IO.userError s!"expected {expected} onRequest calls, got {got}"
-- RFC 9110 §10.1.1: Expect: 100-continue
#eval runGroup "Expect: 100-continue — reject" do
let calls ← IO.mkRef 0
let handler : RejectContinueHandler := { onRequestCalls := calls }
checkH "rejected Expect → 417, handler not called"
(raw := "POST /upload HTTP/1.1\x0d\nHost: example.com\x0d\nExpect: 100-continue\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := handler)
(expect := fun r =>
assertContains r "HTTP/1.1 417 Expectation Failed" *>
assertAbsent r "100 Continue" *>
assertAbsent r "request-ran" *>
assertResponseCount r 1)
assertCallCount calls 0
#eval runGroup "Expect: 100-continue — reject blocks pipelining" do
let calls ← IO.mkRef 0
let handler : RejectContinueHandler := { onRequestCalls := calls }
checkH "rejected Expect closes exchange, blocks pipelined second request"
(raw :=
"POST /first HTTP/1.1\x0d\nHost: example.com\x0d\nExpect: 100-continue\x0d\nContent-Length: 5\x0d\n\x0d\nhello" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := handler)
(expect := fun r =>
assertContains r "HTTP/1.1 417 Expectation Failed" *>
assertAbsent r "/second")
assertCallCount calls 0
#eval runGroup "Expect: 100-continue — accept" do
let calls ← IO.mkRef 0
let handler : AcceptContinueHandler := { onRequestCalls := calls }
checkH "accepted Expect → 100 Continue then 200"
(raw := "POST /ok HTTP/1.1\x0d\nHost: example.com\x0d\nExpect: 100-continue\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := handler)
(expect := fun r =>
assertContains r "HTTP/1.1 100 Continue" *>
assertContains r "HTTP/1.1 200 OK" *>
assertContains r "accepted:hello" *>
assertResponseCount r 2) -- one interim + one final
assertCallCount calls 1
#eval runGroup "Expect: misc" do
let rejectCalls ← IO.mkRef 0
let rejectHandler : RejectContinueHandler := { onRequestCalls := rejectCalls }
checkH "non-100 Expect token → normal request, no interim"
(raw := "POST /odd HTTP/1.1\x0d\nHost: example.com\x0d\nExpect: something-else\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := rejectHandler)
(expect := fun r =>
assertContains r "HTTP/1.1 200 OK" *>
assertContains r "request-ran" *>
assertAbsent r "100 Continue")
assertCallCount rejectCalls 1
let acceptCalls ← IO.mkRef 0
let acceptHandler : AcceptContinueHandler := { onRequestCalls := acceptCalls }
checkH "Expect: 100-CONTINUE (case-insensitive) → 100 then 200"
(raw := "POST /case HTTP/1.1\x0d\nHost: example.com\x0d\nExpect: 100-CONTINUE\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := acceptHandler)
(expect := fun r =>
assertContains r "HTTP/1.1 100 Continue" *>
assertContains r "HTTP/1.1 200 OK")
assertCallCount acceptCalls 1
let noCalls ← IO.mkRef 0
let noExpectHandler : AcceptContinueHandler := { onRequestCalls := noCalls }
checkH "no Expect header → no 100 Continue emitted"
(raw := "POST /no-expect HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello")
(handler := noExpectHandler)
(expect := fun r =>
assertContains r "HTTP/1.1 200 OK" *>
assertContains r "accepted:hello" *>
assertAbsent r "100 Continue" *>
assertResponseCount r 1)
assertCallCount noCalls 1
-- Date header generation
#eval runGroup "Date header" do
check "generateDate: true adds Date header"
(raw := mkGetClose "/date")
(handler := fun _ => Response.ok |>.text "hello")
(config := { defaultConfig with generateDate := true })
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertContains r "Date: ")
check "generateDate: false omits Date header"
(raw := mkGetClose "/no-date")
(handler := fun _ => Response.ok |>.text "hello")
(config := { defaultConfig with generateDate := false })
(expect := fun r =>
assertStatus r "HTTP/1.1 200" *>
assertAbsent r "Date: ")
check "user-supplied Date header preserved and not duplicated"
(raw := mkGetClose "/custom-date")
(handler := fun _ =>
Response.ok
|>.header! "Date" "Mon, 01 Jan 2024 00:00:00 GMT"
|>.text "hello")
(config := { defaultConfig with generateDate := true })
(expect := fun r => do
assertContains r "Date: Mon, 01 Jan 2024 00:00:00 GMT"
let text := String.fromUTF8! r
let count := (text.splitOn "Date: ").length - 1
unless count == 1 do
throw <| IO.userError s!"expected 1 Date header, got {count}:\n{text.quote}")

View File

@@ -0,0 +1,630 @@
import Std.Internal.Http
import Std.Internal.Async
import Std.Internal.Async.Timer
open Std.Internal.IO Async
open Std Http Internal Test
def bad400 : String :=
"HTTP/1.1 400 Bad Request\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def runWithTimeout {α : Type} (name : String) (timeoutMs : Nat := 15000) (action : IO α) : IO α := do
let task ← IO.asTask action
let ticks := (timeoutMs + 9) / 10
let rec loop (remaining : Nat) : IO α := do
if (← IO.getTaskState task) == .finished then
match (← IO.wait task) with
| .ok x => pure x
| .error err => throw err
else
match remaining with
| 0 =>
IO.cancel task
throw <| IO.userError s!"Test '{name}' timed out after {timeoutMs}ms (possible hang/regression)"
| n + 1 =>
IO.sleep 10
loop n
loop ticks
def closeChannelIdempotent {α : Type} (ch : Std.CloseableChannel α) : IO Unit := do
match ← EIO.toBaseIO ch.close with
| .ok _ => pure ()
| .error .alreadyClosed => pure ()
| .error err => throw <| IO.userError (toString err)
def sendRaw
(client : Mock.Client)
(server : Mock.Server)
(raw : ByteArray)
(handler : TestHandler)
(config : Config := defaultConfig) : IO ByteArray := Async.block do
client.send raw
Std.Http.Server.serveConnection server handler config
|>.run
let res ← client.recv?
pure (res.getD .empty)
def sendRawAndClose
(client : Mock.Client)
(server : Mock.Server)
(raw : ByteArray)
(handler : TestHandler)
(config : Config := defaultConfig) : IO ByteArray := Async.block do
client.send raw
closeChannelIdempotent client.getSendChan
Std.Http.Server.serveConnection server handler config
|>.run
let res ← client.recv?
pure (res.getD .empty)
def sendFragmentedAndClose
(client : Mock.Client)
(server : Mock.Server)
(parts : Array ByteArray)
(handler : TestHandler)
(config : Config := defaultConfig) : IO ByteArray := Async.block do
let serverTask ← async (t := AsyncTask) do
Std.Http.Server.serveConnection server handler config
|>.run
for part in parts do
client.send part
closeChannelIdempotent client.getSendChan
await serverTask
let res ← client.recv?
pure (res.getD .empty)
def responseText (response : ByteArray) : String :=
String.fromUTF8! response
def responseBody (response : ByteArray) : String :=
let parts := (responseText response).splitOn "\x0d\n\x0d\n"
match parts.drop 1 with
| [] => ""
| body :: _ => body
def assertStatusPrefix (name : String) (response : ByteArray) (prefix_ : String) : IO Unit := do
let text := responseText response
unless text.startsWith prefix_ do
throw <| IO.userError s!"Test '{name}' failed:\nExpected status prefix {prefix_.quote}\nGot:\n{text.quote}"
def countOccurrences (s : String) (needle : String) : Nat :=
if needle.isEmpty then
0
else
(s.splitOn needle).length - 1
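-- Illustrative sanity checks (not part of the original test suite) for
-- `countOccurrences`: counting is non-overlapping because it is
-- `splitOn`-based, and the empty needle is guarded to 0.
example : countOccurrences "a,b,c" "," = 2 := by native_decide
example : countOccurrences "abc" "" = 0 := by native_decide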
def assertStatusCount (name : String) (response : ByteArray) (expected : Nat) : IO Unit := do
let text := responseText response
let got := countOccurrences text "HTTP/1.1 "
if got != expected then
throw <| IO.userError s!"Test '{name}' failed:\nExpected {expected} responses but saw {got}\n{text.quote}"
def nextSeed (seed : Nat) : Nat :=
(1664525 * seed + 1013904223) % 4294967296
def randBelow (seed : Nat) (maxExclusive : Nat) : Nat × Nat :=
let seed' := nextSeed seed
if maxExclusive == 0 then
(0, seed')
else
(seed' % maxExclusive, seed')
def randIn (seed : Nat) (low : Nat) (high : Nat) : Nat × Nat :=
if high < low then
(low, seed)
else
let (n, seed') := randBelow seed (high - low + 1)
(low + n, seed')
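-- Illustrative checks (not part of the original test suite) of the PRNG
-- helpers above: the LCG is deterministic and 32-bit, a singleton range
-- returns its only value, and `randIn` clamps an inverted range to `low`.
example : nextSeed 0 = 1013904223 := by native_decide
example : randIn 0 5 5 = (5, nextSeed 0) := by native_decide
example : randIn 7 10 3 = (10, 7) := by native_decide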
def randomAsciiBytes (seed : Nat) (len : Nat) : ByteArray × Nat := Id.run do
let mut s := seed
let mut out := ByteArray.empty
for _ in [0:len] do
let (r, s') := randBelow s 38
s := s'
let code :=
if r < 26 then
97 + r
else if r < 36 then
48 + (r - 26)
else if r == 36 then
45
else
95
out := out.push (UInt8.ofNat code)
(out, s)
def randomTokenBytes (seed : Nat) (len : Nat) : ByteArray × Nat := Id.run do
let mut s := seed
let mut out := ByteArray.empty
for _ in [0:len] do
let (r, s') := randBelow s 36
s := s'
let code :=
if r < 26 then
97 + r
else
48 + (r - 26)
out := out.push (UInt8.ofNat code)
(out, s)
def randomSplit (seed : Nat) (data : ByteArray) (maxPart : Nat := 17) : Array ByteArray × Nat := Id.run do
let mut s := seed
let mut out : Array ByteArray := #[]
let mut i := 0
while i < data.size do
let remaining := data.size - i
let upper := Nat.min maxPart remaining
let (partLen, s') := randIn s 1 upper
s := s'
out := out.push (data.extract i (i + partLen))
i := i + partLen
(out, s)
def randomChunkedPayload (seed : Nat) (body : ByteArray) : ByteArray × Nat := Id.run do
let mut s := seed
let mut out := ByteArray.empty
let mut i := 0
while i < body.size do
let remaining := body.size - i
let maxChunk := Nat.min 9 remaining
let (chunkLen, s') := randIn s 1 maxChunk
s := s'
out := out ++ s!"{chunkLen}\x0d\n".toUTF8
out := out ++ body.extract i (i + chunkLen)
out := out ++ "\x0d\n".toUTF8
i := i + chunkLen
out := out ++ "0\x0d\n\x0d\n".toUTF8
(out, s)
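-- For reference, the chunked wire format produced above is: hex size line,
-- CRLF, chunk bytes, CRLF, repeated, then a zero-size terminator chunk and
-- an empty trailer section. E.g. "hello" split 3+2 frames as
-- "3\r\nhel\r\n2\r\nlo\r\n0\r\n\r\n" (hand-written illustration, not a PRNG
-- output). Illustrative check: an empty body encodes to just the terminator.
example :
    String.fromUTF8! (randomChunkedPayload 0 ByteArray.empty).1 = "0\x0d\n\x0d\n" := by
  native_decide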
def mkContentLengthHead (path : String) (bodySize : Nat) : ByteArray :=
s!"POST {path} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {bodySize}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
def mkChunkedHead (path : String) : ByteArray :=
s!"POST {path} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
def randomChunkExtensionList (seed : Nat) (count : Nat) : String × Nat := Id.run do
let mut s := seed
let mut ext := ""
for _ in [0:count] do
let (nameLen, s1) := randIn s 1 3
s := s1
let (valueLen, s2) := randIn s 1 3
s := s2
let (nameBytes, s3) := randomTokenBytes s nameLen
s := s3
let (valueBytes, s4) := randomTokenBytes s valueLen
s := s4
let name := String.fromUTF8! nameBytes
let value := String.fromUTF8! valueBytes
ext := ext ++ s!";{name}={value}"
(ext, s)
def randomTrailerLines (seed : Nat) (count : Nat) : String × Nat := Id.run do
let mut s := seed
let mut lines := ""
for i in [0:count] do
let (nameLen, s1) := randIn s 1 4
s := s1
let (valueLen, s2) := randIn s 1 6
s := s2
let (nameBytes, s3) := randomTokenBytes s nameLen
s := s3
let (valueBytes, s4) := randomTokenBytes s valueLen
s := s4
let name := String.fromUTF8! nameBytes
let value := String.fromUTF8! valueBytes
lines := lines ++ s!"X{i}-{name}: {value}\x0d\n"
(lines, s)
def echoBodyHandler : TestHandler := fun req => do
let body : String ← req.body.readAll
Response.ok |>.text body
def runPipelinedReadAll
(raw : ByteArray)
(config : Config := defaultConfig) : IO (ByteArray × Array String) := Async.block do
let (client, server) ← Mock.new
let seenRef ← IO.mkRef (#[] : Array String)
let handler : TestHandler := fun req => do
let uri := toString req.line.uri
seenRef.modify (·.push uri)
let _body : String ← req.body.readAll
Response.ok |>.text uri
client.send raw
closeChannelIdempotent client.getSendChan
Std.Http.Server.serveConnection server handler config
|>.run
let response ← client.recv?
let seen ← seenRef.get
pure (response.getD .empty, seen)
def fuzzContentLengthEcho (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, seed1) := randIn seed 0 128
seed := seed1
let (body, seed2) := randomAsciiBytes seed len
seed := seed2
let head := mkContentLengthHead s!"/fuzz-cl-{i}" body.size
let (bodyParts, seed3) := randomSplit seed body
seed := seed3
let parts := #[head] ++ bodyParts
let (client, server) ← Mock.new
let response ← sendFragmentedAndClose client server parts echoBodyHandler
let expectedBody := String.fromUTF8! body
assertStatusPrefix s!"fuzzContentLengthEcho case={i} seed={caseSeed}" response "HTTP/1.1 200"
let gotBody := responseBody response
if gotBody != expectedBody then
throw <| IO.userError s!"fuzzContentLengthEcho case={i} seed={caseSeed} failed:\nExpected body {expectedBody.quote}\nGot body {gotBody.quote}\nFull response:\n{(responseText response).quote}"
def fuzzContentLengthLeadingZerosAccepted (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, seed1) := randIn seed 1 96
seed := seed1
let (leadingZeros, seed2) := randIn seed 1 5
seed := seed2
let (body, seed3) := randomAsciiBytes seed len
seed := seed3
let clToken := String.ofList (List.replicate leadingZeros '0') ++ toString len
let raw :=
s!"POST /cl-leading-zeros-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {clToken}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8 ++ body
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler
let expectedBody := String.fromUTF8! body
assertStatusPrefix s!"fuzzContentLengthLeadingZerosAccepted case={i} seed={caseSeed} len={len} zeros={leadingZeros}" response "HTTP/1.1 200"
let gotBody := responseBody response
if gotBody != expectedBody then
throw <| IO.userError s!"fuzzContentLengthLeadingZerosAccepted case={i} seed={caseSeed} failed:\nExpected body {expectedBody.quote}\nGot body {gotBody.quote}\nFull response:\n{(responseText response).quote}"
def fuzzChunkedEcho (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, seed1) := randIn seed 0 128
seed := seed1
let (body, seed2) := randomAsciiBytes seed len
seed := seed2
let (chunkedBody, seed3) := randomChunkedPayload seed body
seed := seed3
let head := mkChunkedHead s!"/fuzz-chunked-{i}"
let raw := head ++ chunkedBody
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler
let expectedBody := String.fromUTF8! body
assertStatusPrefix s!"fuzzChunkedEcho case={i} seed={caseSeed}" response "HTTP/1.1 200"
let gotBody := responseBody response
if gotBody != expectedBody then
throw <| IO.userError s!"fuzzChunkedEcho case={i} seed={caseSeed} failed:\nExpected body {expectedBody.quote}\nGot body {gotBody.quote}\nFull response:\n{(responseText response).quote}"
def fuzzMixedTransferEncodingAndContentLengthRejected (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, seed1) := randIn seed 0 96
seed := seed1
let (body, seed2) := randomAsciiBytes seed len
seed := seed2
let (chunkedBody, seed3) := randomChunkedPayload seed body
seed := seed3
let (declaredCl, seed4) := randIn seed 0 128
seed := seed4
let raw :=
s!"POST /te-cl-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nContent-Length: {declaredCl}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8 ++ chunkedBody
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler
assertExact response bad400
def fuzzInvalidChunkSizeRejected (iterations : Nat) (seed0 : Nat) : IO Unit := do
let badTokens : Array String := #["g", "G", "z", "Z", "@", "!", "x"]
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (idx, seed1) := randBelow seed badTokens.size
seed := seed1
let token := badTokens[idx]!
let raw :=
s!"POST /bad-size-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n{token}\x0d\nabc\x0d\n0\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler
assertExact response bad400
def fuzzDuplicateContentLengthRejected (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (cl1, seed1) := randIn seed 0 64
seed := seed1
let (same, seed2) := randBelow seed 2
seed := seed2
let (delta, seed3) := randIn seed 1 10
seed := seed3
let cl2 := if same == 0 then cl1 else cl1 + delta
let (body, seed4) := randomAsciiBytes seed cl1
seed := seed4
let raw :=
s!"POST /dup-cl-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {cl1}\x0d\nContent-Length: {cl2}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8 ++ body
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler
assertExact response bad400
def fuzzChunkExtensionLimits (iterations : Nat) (seed0 : Nat) : IO Unit := do
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxChunkExtNameLength := 4
maxChunkExtValueLength := 4
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (nameLen, seed1) := randIn seed 1 8
seed := seed1
let (valueLen, seed2) := randIn seed 1 8
seed := seed2
let (nameBytes, seed3) := randomTokenBytes seed nameLen
seed := seed3
let (valueBytes, seed4) := randomTokenBytes seed valueLen
seed := seed4
let name := String.fromUTF8! nameBytes
let value := String.fromUTF8! valueBytes
let raw :=
s!"POST /ext-limit-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n1;{name}={value}\x0d\nx\x0d\n0\x0d\n\x0d\n".toUTF8
let expectOk := nameLen ≤ 4 ∧ valueLen ≤ 4
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler (config := config)
if expectOk then
assertStatusPrefix s!"fuzzChunkExtensionLimits case={i} seed={caseSeed} nameLen={nameLen} valueLen={valueLen}" response "HTTP/1.1 200"
else
assertExact response bad400
def fuzzChunkExtensionCountLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxChunkExtensions := 2
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (extCount, seed1) := randIn seed 0 5
seed := seed1
let (extList, seed2) := randomChunkExtensionList seed extCount
seed := seed2
let raw :=
s!"POST /ext-count-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n1{extList}\x0d\nx\x0d\n0\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler (config := config)
if extCount ≤ 2 then
assertStatusPrefix s!"fuzzChunkExtensionCountLimit case={i} seed={caseSeed} extCount={extCount}" response "HTTP/1.1 200"
else
assertExact response bad400
def fuzzTrailerHeaderCountLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxTrailerHeaders := 2
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (trailerCount, seed1) := randIn seed 0 5
seed := seed1
let (trailers, seed2) := randomTrailerLines seed trailerCount
seed := seed2
let raw :=
s!"POST /trailers-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n1\x0d\na\x0d\n0\x0d\n{trailers}\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRaw client server raw echoBodyHandler (config := config)
if trailerCount ≤ 2 then
assertStatusPrefix s!"fuzzTrailerHeaderCountLimit case={i} seed={caseSeed} trailerCount={trailerCount}" response "HTTP/1.1 200"
else
assertExact response bad400
def fuzzCompleteFirstBodyAllowsPipeline (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, seed1) := randIn seed 0 32
seed := seed1
let (body, seed2) := randomAsciiBytes seed len
seed := seed2
let uri1 := s!"/first-complete-{i}"
let req1 :=
s!"POST {uri1} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {len}\x0d\n\x0d\n".toUTF8 ++ body
let req2 := "GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (response, seen) ← runPipelinedReadAll (req1 ++ req2)
let text := responseText response
assertStatusCount s!"fuzzCompleteFirstBodyAllowsPipeline case={i} seed={caseSeed}" response 2
unless text.contains uri1 do
throw <| IO.userError s!"fuzzCompleteFirstBodyAllowsPipeline case={i} seed={caseSeed} failed:\nMissing first URI {uri1.quote}\n{text.quote}"
unless text.contains "/second" do
throw <| IO.userError s!"fuzzCompleteFirstBodyAllowsPipeline case={i} seed={caseSeed} failed:\nMissing second response\n{text.quote}"
if seen.size != 2 ∨ seen[0]! != uri1 ∨ seen[1]! != "/second" then
throw <| IO.userError s!"fuzzCompleteFirstBodyAllowsPipeline case={i} seed={caseSeed} failed:\nExpected seen=[{uri1.quote}, \"/second\"] got {seen}"
-- Property: Content-Length framing is stable across random payloads and random transport splits.
#eval runWithTimeout "fuzz_content_length_echo" 20000 do
fuzzContentLengthEcho 40 0x00C0FFEE
-- Property: Content-Length with randomized leading zeros is accepted and decoded to exact body length.
#eval runWithTimeout "fuzz_content_length_leading_zeros_accepted" 20000 do
fuzzContentLengthLeadingZerosAccepted 30 0x00CAB005
-- Property: Chunked framing reconstructs exact bodies under random chunking and transport splits.
#eval runWithTimeout "fuzz_chunked_echo" 20000 do
fuzzChunkedEcho 40 0x00123456
-- Property: Mixing Transfer-Encoding with Content-Length is always rejected.
#eval runWithTimeout "fuzz_te_cl_mixed_rejected" 20000 do
fuzzMixedTransferEncodingAndContentLengthRejected 30 0x0010CE11
-- Property: Invalid chunk-size tokens are rejected deterministically with 400.
#eval runWithTimeout "fuzz_invalid_chunk_size_rejected" 20000 do
fuzzInvalidChunkSizeRejected 30 0x00BAD001
-- Property: Duplicate Content-Length headers are always rejected (same or different values).
#eval runWithTimeout "fuzz_duplicate_content_length_rejected" 20000 do
fuzzDuplicateContentLengthRejected 30 0x00D0C1A7
-- Property: Chunk extension name/value limits are enforced under randomized lengths.
#eval runWithTimeout "fuzz_chunk_extension_limits" 20000 do
fuzzChunkExtensionLimits 40 0x00A11CE5
-- Property: Chunk extension count limit is enforced under randomized extension lists.
#eval runWithTimeout "fuzz_chunk_extension_count_limit" 20000 do
fuzzChunkExtensionCountLimit 35 0x00E77E11
-- Property: Trailer header count limit is enforced under randomized trailer sections.
#eval runWithTimeout "fuzz_trailer_header_count_limit" 20000 do
fuzzTrailerHeaderCountLimit 35 0x00A71A12
-- Property: Complete first request body allows pipelined follow-up parsing.
#eval runWithTimeout "fuzz_complete_first_body_allows_pipeline" 20000 do
fuzzCompleteFirstBodyAllowsPipeline 30 0x00777777


@@ -0,0 +1,450 @@
import Std.Internal.Http
import Std.Internal.Async
open Std.Internal.IO Async
open Std Http Internal Test
/-!
# Limit-enforcement fuzzing for the HTTP/1.1 server.
Tests that every configurable limit in `H1.Config` and `Server.Config` is
correctly enforced under randomized inputs. Inspired by hyper's fuzzing of
size and count limits.
-/
def closeChannelIdempotent {α : Type} (ch : Std.CloseableChannel α) : IO Unit := do
match ← EIO.toBaseIO ch.close with
| .ok _ => pure ()
| .error .alreadyClosed => pure ()
| .error err => throw <| IO.userError (toString err)
def sendRaw
(client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config) : IO ByteArray := Async.block do
client.send raw
Std.Http.Server.serveConnection server handler config |>.run
let res ← client.recv?
pure (res.getD .empty)
def sendRawAndClose
(client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config) : IO ByteArray := Async.block do
client.send raw
closeChannelIdempotent client.getSendChan
Std.Http.Server.serveConnection server handler config |>.run
let res ← client.recv?
pure (res.getD .empty)
def runWithTimeout {α : Type} (name : String) (timeoutMs : Nat := 20000) (action : IO α) : IO α := do
let task ← IO.asTask action
let ticks := (timeoutMs + 9) / 10
let rec loop (remaining : Nat) : IO α := do
if (← IO.getTaskState task) == .finished then
match (← IO.wait task) with
| .ok x => pure x
| .error err => throw err
else
match remaining with
| 0 => IO.cancel task; throw <| IO.userError s!"Test '{name}' timed out"
| n + 1 => IO.sleep 10; loop n
loop ticks
-- PRNG
def nextSeed (seed : Nat) : Nat := (1664525 * seed + 1013904223) % 4294967296
def randBelow (seed : Nat) (n : Nat) : Nat × Nat :=
let s := nextSeed seed
(if n == 0 then 0 else s % n, s)
def randIn (seed : Nat) (lo hi : Nat) : Nat × Nat :=
if hi < lo then (lo, seed) else
let (r, s) := randBelow seed (hi - lo + 1)
(lo + r, s)
def randomTokenBytes (seed : Nat) (len : Nat) : ByteArray × Nat := Id.run do
let mut s := seed; let mut out := ByteArray.empty
for _ in [0:len] do
let (r, s') := randBelow s 36; s := s'
let code := if r < 26 then 97 + r else 48 + (r - 26)
out := out.push (UInt8.ofNat code)
(out, s)
def randomAsciiBytes (seed : Nat) (len : Nat) : ByteArray × Nat := Id.run do
let mut s := seed; let mut out := ByteArray.empty
for _ in [0:len] do
let (r, s') := randBelow s 26; s := s'
out := out.push (UInt8.ofNat (97 + r))
(out, s)
private def toHexAux : Nat → Nat → String → String
| 0, _, acc => acc
| fuel + 1, n, acc =>
if n == 0 then acc
else
let d := n % 16
let c : Char := if d < 10 then Char.ofNat (48 + d) else Char.ofNat (87 + d)
toHexAux fuel (n / 16) (String.ofList [c] ++ acc)
def natToHex (n : Nat) : String :=
if n == 0 then "0" else toHexAux 16 n ""
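-- Illustrative checks (not part of the original test suite) for `natToHex`:
-- lowercase digits, no "0x" prefix, and a fuel of 16 covers any 64-bit size.
example : natToHex 0 = "0" := by native_decide
example : natToHex 255 = "ff" := by native_decide
example : natToHex 4096 = "1000" := by native_decide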
def assertStatusPrefix (name : String) (response : ByteArray) (pfx : String) : IO Unit := do
let text := String.fromUTF8! response
unless text.startsWith pfx do
throw <| IO.userError s!"Test '{name}' failed:\nExpected {pfx.quote}\nGot:\n{text.quote}"
def countOccurrences (s needle : String) : Nat :=
if needle.isEmpty then 0 else (s.splitOn needle).length - 1
def bad400 : String :=
"HTTP/1.1 400 Bad Request\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
def echoBodyHandler : TestHandler := fun req => do
let body : String ← req.body.readAll
Response.ok |>.text body
-- ============================================================================
-- maxBodySize — Content-Length framing
-- ============================================================================
-- Property: Content-Length body exactly at or below maxBodySize → 200.
-- Content-Length body above maxBodySize → 413 (no body bytes needed).
def fuzzBodySizeLimitContentLength (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 64
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxBodySize := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (bodySize, s1) := randIn seed 0 (limit + 20); seed := s1
let (bodyBytes, s2) := randomAsciiBytes seed bodySize; seed := s2
let raw :=
s!"POST /bl-cl-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: {bodySize}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
++ bodyBytes
let (client, server) ← Mock.new
let response ← sendRaw client server raw (fun req => do
let _body : String ← req.body.readAll; Response.ok |>.text "ok") config
if bodySize ≤ limit then
assertStatusPrefix s!"fuzzBodySizeLimitCL iter={i} seed={caseSeed} size={bodySize}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzBodySizeLimitCL iter={i} seed={caseSeed} size={bodySize}" response "HTTP/1.1 413"
-- ============================================================================
-- maxBodySize — chunked framing
-- ============================================================================
-- Property: chunked bodies with total bytes at or below maxBodySize → 200.
-- Chunked bodies exceeding maxBodySize → 413.
def fuzzBodySizeLimitChunked (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 64
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxBodySize := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
-- Total body size across 1-4 chunks
let (totalSize, s1) := randIn seed 0 (limit + 16); seed := s1
let (numChunks, s2) := randIn seed 1 4; seed := s2
-- Build chunks that sum to totalSize
let chunkSize := (totalSize + numChunks - 1) / numChunks
let mut chunkedBody := ByteArray.empty
let mut remaining := totalSize
for _ in [0:numChunks] do
if remaining == 0 then break
let thisChunk := Nat.min chunkSize remaining
let (chunkBytes, s3) := randomAsciiBytes seed thisChunk; seed := s3
chunkedBody := chunkedBody ++ s!"{natToHex thisChunk}\x0d\n".toUTF8 ++ chunkBytes ++ "\x0d\n".toUTF8
remaining := remaining - thisChunk
chunkedBody := chunkedBody ++ "0\x0d\n\x0d\n".toUTF8
let raw :=
s!"POST /bl-ch-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
++ chunkedBody
let (client, server) ← Mock.new
let response ← sendRaw client server raw (fun req => do
let _body : String ← req.body.readAll; Response.ok |>.text "ok") config
if totalSize ≤ limit then
assertStatusPrefix s!"fuzzBodySizeLimitChunked iter={i} seed={caseSeed} total={totalSize}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzBodySizeLimitChunked iter={i} seed={caseSeed} total={totalSize}" response "HTTP/1.1 413"
-- ============================================================================
-- maxHeaders — header count limit
-- ============================================================================
-- Property: header count at or below maxHeaders → 200.
-- header count above maxHeaders → 431.
def fuzzHeaderCountLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 5
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxHeaders := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
-- Host counts as 1 header, Connection as 1, so extra headers = headerCount - 2
let (headerCount, s1) := randIn seed 2 (limit + 4); seed := s1
let extraCount := headerCount - 2 -- we always add Host + Connection
let mut extraHeaders := ""
let mut s := s1
for j in [0:extraCount] do
let (nameLen, s2) := randIn s 1 8; s := s2
let (nameBytes, s3) := randomTokenBytes s nameLen; s := s3
let name := String.fromUTF8! nameBytes
extraHeaders := extraHeaders ++ s!"X-Extra-{j}-{name}: value\x0d\n"
seed := s
let raw :=
s!"GET /hc-{i} HTTP/1.1\x0d\nHost: example.com\x0d\n{extraHeaders}Connection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
-- headerCount includes Host and Connection (always present)
if headerCount ≤ limit then
assertStatusPrefix s!"fuzzHeaderCount iter={i} seed={caseSeed} count={headerCount}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzHeaderCount iter={i} seed={caseSeed} count={headerCount}" response "HTTP/1.1 431"
-- ============================================================================
-- maxHeaderBytes — total header bytes limit
-- ============================================================================
-- Property: headers whose aggregate bytes (name + ": " + value + "\r\n") are at or
-- below maxHeaderBytes are accepted; above it they are rejected with 431.
def fuzzHeaderTotalBytesLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
-- Fixed baseline: "Host: example.com\r\n" + "Connection: close\r\n" = 20 + 20 = 40 bytes.
-- We'll add a single large X-Payload header to cross the boundary.
let limit : Nat := 200
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxHeaderBytes := limit
maxHeaderValueLength := limit + 100 -- allow value longer than total limit for testing
}
let baseline := ("Host: example.com\x0d\nConnection: close\x0d\n".toUTF8).size
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
-- Pick a value size that puts total bytes just under or over the limit
-- Each "X-Payload: " header = name(9) + ": "(2) + value + "\r\n"(2) = value + 13
let overhead := baseline + 13 -- baseline + "X-Payload" (9) + ": " (2) + "\r\n" (2)
-- We want value sizes that land on both sides of (limit - overhead)
let boundary := if limit > overhead then limit - overhead else 0
let (valueSize, s1) := randIn seed 0 (boundary + 20); seed := s1
let (valueBytes, s2) := randomAsciiBytes seed valueSize; seed := s2
let value := String.fromUTF8! valueBytes
let raw :=
s!"GET /hb-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nX-Payload: {value}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
-- Total bytes = baseline + "X-Payload: " + value + "\r\n"
let totalBytes := baseline + 9 + 2 + valueSize + 2
if totalBytes ≤ limit then
assertStatusPrefix s!"fuzzHeaderBytes iter={i} seed={caseSeed} total={totalBytes}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzHeaderBytes iter={i} seed={caseSeed} total={totalBytes}" response "HTTP/1.1 431"
-- ============================================================================
-- maxRequests — requests per connection
-- ============================================================================
-- Property: after maxRequests requests on a single connection, the server
-- closes the connection (disables keep-alive). All maxRequests
-- requests receive a valid response.
def fuzzMaxMessagesPerConnection (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 3
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxRequests := limit
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (reqCount, s1) := randIn seed 1 (limit + 3); seed := s1
-- Build reqCount keep-alive requests followed by close
let mut raw := ByteArray.empty
for j in [0:reqCount] do
let connHeader :=
if j + 1 == reqCount then "Connection: close\x0d\n" else "Connection: keep-alive\x0d\n"
raw := raw ++ s!"GET /msg-{i}-{j} HTTP/1.1\x0d\nHost: example.com\x0d\n{connHeader}\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
let text := String.fromUTF8! response
let seen := countOccurrences text "HTTP/1.1 200"
let expected := Nat.min reqCount limit
if seen != expected then
throw <| IO.userError
s!"fuzzMaxMessages iter={i} seed={caseSeed} reqCount={reqCount}: expected {expected} responses, got {seen}\n{text.quote}"
-- ============================================================================
-- maxLeadingEmptyLines — leading CRLF before request-line
-- ============================================================================
-- Property: at most maxLeadingEmptyLines bare CRLFs before the request-line are tolerated.
-- More than that → 400.
def fuzzLeadingEmptyLinesLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 4
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxLeadingEmptyLines := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (lineCount, s1) := randIn seed 0 (limit + 4); seed := s1
let leadingCRLF := (List.replicate lineCount "\x0d\n").foldl (· ++ ·) ""
let raw :=
(leadingCRLF ++ s!"GET /le-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
if lineCount ≤ limit then
assertStatusPrefix s!"fuzzLeadingEmptyLines iter={i} seed={caseSeed} count={lineCount}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzLeadingEmptyLines iter={i} seed={caseSeed} count={lineCount}" response "HTTP/1.1 400"
-- ============================================================================
-- maxSpaceSequence — OWS (optional whitespace) limit
-- ============================================================================
-- Property: OWS sequences at or below maxSpaceSequence are accepted.
-- OWS sequences exceeding the limit → 400.
def fuzzOWSSequenceLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 4
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxSpaceSequence := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (spaceCount, s1) := randIn seed 0 (limit + 4); seed := s1
let spaces := String.ofList (List.replicate spaceCount ' ')
-- OWS appears between ':' and value in header fields
let raw :=
s!"GET /ows-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nX-OWS:{spaces}value\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
if spaceCount ≤ limit then
assertStatusPrefix s!"fuzzOWSLimit iter={i} seed={caseSeed} spaces={spaceCount}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzOWSLimit iter={i} seed={caseSeed} spaces={spaceCount}" response "HTTP/1.1 400"
-- ============================================================================
-- maxStartLineLength — request-line length limit
-- ============================================================================
-- Property: request lines at or below maxStartLineLength → 200.
-- Request lines above maxStartLineLength → 414 (URI too long) or 400.
def fuzzStartLineLengthLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 64
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxStartLineLength := limit
maxUriLength := limit
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
-- "GET " (4) + path + " HTTP/1.1\r\n" (11) → path can be up to limit - 15;
-- the budget of limit - 16 leaves one byte of slack below the limit.
let pathBudget := if limit > 16 then limit - 16 else 1
let (pathLen, s1) := randIn seed 1 (pathBudget + 10); seed := s1
let path := "/" ++ String.ofList (List.replicate (pathLen - 1) 'a')
let raw :=
s!"GET {path} HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
-- start-line = "GET " + path + " HTTP/1.1\r\n"
let lineLen := 4 + pathLen + 11
if lineLen ≤ limit then
assertStatusPrefix s!"fuzzStartLineLen iter={i} seed={caseSeed} len={lineLen}" response "HTTP/1.1 200"
else
-- Either 414 (URI too long) or 400
let text := String.fromUTF8! response
unless text.startsWith "HTTP/1.1 414" || text.startsWith "HTTP/1.1 400" do
throw <| IO.userError
s!"fuzzStartLineLen iter={i} seed={caseSeed} len={lineLen}: expected 414 or 400, got {text.quote}"
-- ============================================================================
-- maxHeaderNameLength and maxHeaderValueLength
-- ============================================================================
-- Property: header names exceeding maxHeaderNameLength → 400.
def fuzzHeaderNameLengthLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 16
let config : Config := { lingeringTimeout := 1000, generateDate := false, maxHeaderNameLength := limit }
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (nameLen, s1) := randIn seed 1 (limit + 8); seed := s1
let (nameBytes, s2) := randomTokenBytes seed nameLen; seed := s2
let name := String.fromUTF8! nameBytes
let raw :=
s!"GET /hnl-{i} HTTP/1.1\x0d\nHost: example.com\x0d\n{name}: value\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
if nameLen ≤ limit then
assertStatusPrefix s!"fuzzHeaderNameLen iter={i} seed={caseSeed} len={nameLen}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzHeaderNameLen iter={i} seed={caseSeed} len={nameLen}" response "HTTP/1.1 400"
-- Property: header values exceeding maxHeaderValueLength → 400.
def fuzzHeaderValueLengthLimit (iterations : Nat) (seed0 : Nat) : IO Unit := do
let limit : Nat := 32
let config : Config := {
lingeringTimeout := 1000
generateDate := false
maxHeaderValueLength := limit
maxHeaderBytes := 65536 -- don't let total bytes limit interfere
}
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (valueLen, s1) := randIn seed 0 (limit + 8); seed := s1
let (valueBytes, s2) := randomAsciiBytes seed valueLen; seed := s2
let value := String.fromUTF8! valueBytes
let raw :=
s!"GET /hvl-{i} HTTP/1.1\x0d\nHost: example.com\x0d\nX-Long: {value}\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let (client, server) ← Mock.new
let response ← sendRawAndClose client server raw okHandler config
if valueLen ≤ limit then
assertStatusPrefix s!"fuzzHeaderValueLen iter={i} seed={caseSeed} len={valueLen}" response "HTTP/1.1 200"
else
assertStatusPrefix s!"fuzzHeaderValueLen iter={i} seed={caseSeed} len={valueLen}" response "HTTP/1.1 400"
-- ============================================================================
-- Run all properties
-- ============================================================================
-- Property: maxBodySize enforced for Content-Length framing.
#eval runWithTimeout "fuzz_body_limit_content_length" 30000 do
fuzzBodySizeLimitContentLength 50 0x00B0DEC0
-- Property: maxBodySize enforced for chunked framing.
#eval runWithTimeout "fuzz_body_limit_chunked" 30000 do
fuzzBodySizeLimitChunked 40 0x00C8BE11
-- Property: maxHeaders count limit is enforced.
#eval runWithTimeout "fuzz_header_count_limit" 30000 do
fuzzHeaderCountLimit 50 0x00AA55AA
-- Property: maxHeaderBytes aggregate limit is enforced.
#eval runWithTimeout "fuzz_header_bytes_limit" 30000 do
fuzzHeaderTotalBytesLimit 40 0x00FACE77
-- Property: maxMessages per connection closes keep-alive after the configured count.
#eval runWithTimeout "fuzz_max_messages_per_connection" 30000 do
fuzzMaxMessagesPerConnection 30 0x00123ABC
-- Property: maxLeadingEmptyLines limit is enforced.
#eval runWithTimeout "fuzz_leading_empty_lines_limit" 30000 do
fuzzLeadingEmptyLinesLimit 50 0x00EEF00D
-- Property: maxSpaceSequence (OWS) limit is enforced.
#eval runWithTimeout "fuzz_ows_sequence_limit" 30000 do
fuzzOWSSequenceLimit 50 0x00ABE5AB
-- Property: maxStartLineLength / maxUriLength is enforced, returning 414 or 400.
#eval runWithTimeout "fuzz_start_line_length_limit" 30000 do
fuzzStartLineLengthLimit 50 0x00C0FFEE
-- Property: maxHeaderNameLength is enforced.
#eval runWithTimeout "fuzz_header_name_length_limit" 30000 do
fuzzHeaderNameLengthLimit 50 0x00DEAD01
-- Property: maxHeaderValueLength is enforced.
#eval runWithTimeout "fuzz_header_value_length_limit" 30000 do
fuzzHeaderValueLengthLimit 50 0x00BEEF02


@@ -0,0 +1,286 @@
import Std.Internal.Http
import Std.Internal.Async
open Std.Internal.IO Async
open Std Http Internal Test
/-!
# Random-byte fuzzing for the HTTP/1.1 parser.
Inspired by hyper's `fuzz_h1_req` libFuzzer target. The core property: any byte
sequence fed to the server must be handled without panicking, hanging, or
producing a malformed response. The server must either:
- Send no bytes (connection closed before a complete request arrives), or
- Send a response that starts with "HTTP/1.1 ".
-/
def closeChannelIdempotent {α : Type} (ch : Std.CloseableChannel α) : IO Unit := do
match ← EIO.toBaseIO ch.close with
| .ok _ => pure ()
| .error .alreadyClosed => pure ()
| .error err => throw <| IO.userError (toString err)
def sendRawAndClose
(client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config := defaultConfig) : IO ByteArray := Async.block do
client.send raw
closeChannelIdempotent client.getSendChan
Std.Http.Server.serveConnection server handler config |>.run
let res ← client.recv?
pure (res.getD .empty)
def runWithTimeout {α : Type} (name : String) (timeoutMs : Nat := 20000) (action : IO α) : IO α := do
let task ← IO.asTask action
let ticks := (timeoutMs + 9) / 10
let rec loop (remaining : Nat) : IO α := do
if (← IO.getTaskState task) == .finished then
match (← IO.wait task) with
| .ok x => pure x
| .error err => throw err
else
match remaining with
| 0 => IO.cancel task; throw <| IO.userError s!"Test '{name}' timed out"
| n + 1 => IO.sleep 10; loop n
loop ticks
-- PRNG
def nextSeed (seed : Nat) : Nat := (1664525 * seed + 1013904223) % 4294967296
def randBelow (seed : Nat) (n : Nat) : Nat × Nat :=
let s := nextSeed seed
(if n == 0 then 0 else s % n, s)
def randIn (seed : Nat) (lo hi : Nat) : Nat × Nat :=
if hi < lo then (lo, seed) else
let (r, s) := randBelow seed (hi - lo + 1)
(lo + r, s)
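-- Quick sanity sketch for the PRNG helpers above (uses only the definitions
-- in this file): `randIn` must stay within its inclusive bounds for any seed,
-- and threading the returned seed keeps the sequence deterministic.
#eval show IO Unit from do
  let mut s := 0xDEADBEEF
  for _ in [0:1000] do
    let (r, s') := randIn s 3 9
    unless 3 ≤ r && r ≤ 9 do
      throw <| IO.userError s!"randIn out of range: {r}"
    s := s'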
-- All 256 byte values
def randomFullBytes (seed : Nat) (len : Nat) : ByteArray × Nat := Id.run do
let mut s := seed; let mut out := ByteArray.empty
for _ in [0:len] do
let (r, s') := randBelow s 256; s := s'
out := out.push (UInt8.ofNat r)
(out, s)
-- Server-generated responses are always valid ASCII. Verify the response is
-- either empty or starts with the HTTP/1.1 status-line prefix.
def assertValidHttpOrEmpty (name : String) (response : ByteArray) : IO Unit := do
if response.size == 0 then pure ()
else
let isValidPrefix (pfx : String) :=
let b := pfx.toUTF8
response.size >= b.size && response.extract 0 b.size == b
if isValidPrefix "HTTP/1.1 " || isValidPrefix "HTTP/1.0 " then pure ()
else
let display := match String.fromUTF8? (response.extract 0 (Nat.min 200 response.size)) with
| some s => s.quote | none => "(non-UTF-8 bytes)"
throw <| IO.userError
s!"Test '{name}' failed:\nResponse is neither empty nor a valid HTTP response:\n{display}"
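-- Usage sketch for the assertion above: an empty response and a well-formed
-- status line both pass; any other prefix raises a user error.
#eval show IO Unit from do
  assertValidHttpOrEmpty "sketch: empty" ByteArray.empty
  assertValidHttpOrEmpty "sketch: status" "HTTP/1.1 400 Bad Request\r\n\r\n".toUTF8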
-- Property: any fully-random byte sequence never causes a panic or malformed response.
-- Direct analogue of hyper's fuzz_h1_req libFuzzer target.
def fuzzRandomBytesNoPanic (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, s1) := randIn seed 0 96; seed := s1
let (bytes, s2) := randomFullBytes seed len; seed := s2
let (client, server) ← Mock.new
let response ← sendRawAndClose client server bytes okHandler
assertValidHttpOrEmpty s!"fuzzRandomBytesNoPanic iter={i} seed={caseSeed} len={len}" response
-- Property: flipping a single bit in any valid request must not cause a panic.
def fuzzBitFlipOnValidRequests (iterations : Nat) (seed0 : Nat) : IO Unit := do
let corpus : Array ByteArray := #[
"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
"POST /submit HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello".toUTF8,
"POST /chunked HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
"OPTIONS * HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
"CONNECT example.com:443 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
"DELETE /resource HTTP/1.1\x0d\nHost: api.example.com\x0d\nAuthorization: Bearer token123\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
]
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (idx, s1) := randBelow seed corpus.size; seed := s1
let req := corpus[idx]!
let (pos, s2) := randBelow seed req.size; seed := s2
let (bit, s3) := randBelow seed 8; seed := s3
let orig := req[pos]!
let mask : UInt8 := (1 : UInt8) <<< bit.toUInt8
let flipped := orig ^^^ mask
let mutated := (req.extract 0 pos).push flipped ++ req.extract (pos + 1) req.size
let (client, server) ← Mock.new
let response ← sendRawAndClose client server mutated okHandler
assertValidHttpOrEmpty
s!"fuzzBitFlip iter={i} seed={caseSeed} reqIdx={idx} pos={pos} bit={bit}" response
-- Property: truncating a valid request at any byte boundary must not cause a panic.
def fuzzTruncatedRequests (iterations : Nat) (seed0 : Nat) : IO Unit := do
let corpus : Array ByteArray := #[
"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
"POST /data HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 4\x0d\nConnection: close\x0d\n\x0d\ndata".toUTF8,
"POST /chunked HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n4\x0d\ndata\x0d\n0\x0d\n\x0d\n".toUTF8,
"HEAD /resource HTTP/1.1\x0d\nHost: example.com\x0d\nAccept: application/json\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
]
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (idx, s1) := randBelow seed corpus.size; seed := s1
let req := corpus[idx]!
let (truncLen, s2) := randBelow seed req.size; seed := s2
let truncated := req.extract 0 truncLen
let (client, server) ← Mock.new
let response ← sendRawAndClose client server truncated okHandler
assertValidHttpOrEmpty
s!"fuzzTruncated iter={i} seed={caseSeed} reqIdx={idx} truncLen={truncLen}" response
-- Property: a valid HTTP method prefix followed by garbage must not cause a panic.
def fuzzMethodPrefixWithGarbage (iterations : Nat) (seed0 : Nat) : IO Unit := do
let methods : Array ByteArray := #[
"GET ".toUTF8, "POST ".toUTF8, "PUT ".toUTF8, "DELETE ".toUTF8,
"HEAD ".toUTF8, "OPTIONS ".toUTF8, "PATCH ".toUTF8, "CONNECT ".toUTF8,
"HTTP/1.1 ".toUTF8,
ByteArray.empty,
]
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (mIdx, s1) := randBelow seed methods.size; seed := s1
let pfx := methods[mIdx]!
let (gLen, s2) := randIn seed 0 64; seed := s2
let (garbage, s3) := randomFullBytes seed gLen; seed := s3
let request := pfx ++ garbage
let (client, server) ← Mock.new
let response ← sendRawAndClose client server request okHandler
assertValidHttpOrEmpty
s!"fuzzMethodPrefix iter={i} seed={caseSeed} mIdx={mIdx} gLen={gLen}" response
-- Property: high-byte (0x80–0xFF, non-ASCII) sequences must not cause a panic.
def fuzzHighByteValues (iterations : Nat) (seed0 : Nat) : IO Unit := do
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (len, s1) := randIn seed 0 48; seed := s1
let mut out := ByteArray.empty
let mut s := s1
for _ in [0:len] do
let (r, s') := randBelow s 128; s := s'
out := out.push (UInt8.ofNat (r + 128))
seed := s
let (client, server) ← Mock.new
let response ← sendRawAndClose client server out okHandler
assertValidHttpOrEmpty s!"fuzzHighBytes iter={i} seed={caseSeed} len={len}" response
-- Property: garbage appended after a complete request must not cause a panic.
def fuzzGarbageAfterCompleteRequest (iterations : Nat) (seed0 : Nat) : IO Unit := do
let validReq :=
"GET /check HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let mut seed := seed0
for i in [0:iterations] do
let caseSeed := seed
let (gLen, s1) := randIn seed 0 32; seed := s1
let (garbage, s2) := randomFullBytes seed gLen; seed := s2
let request := validReq ++ garbage
let (client, server) ← Mock.new
let response ← sendRawAndClose client server request okHandler
assertValidHttpOrEmpty
s!"fuzzGarbageAfter iter={i} seed={caseSeed} gLen={gLen}" response
-- Property: every single-byte input is handled safely (all 256 values).
def fuzzSingleByteInputs : IO Unit := do
for b in List.range 256 do
let bytes := ByteArray.mk #[b.toUInt8]
let (client, server) ← Mock.new
let response ← sendRawAndClose client server bytes okHandler
assertValidHttpOrEmpty s!"fuzzSingleByte byte={b}" response
-- Property: known attack patterns and real-world malformed inputs are handled safely.
-- This is the Lean analogue of hyper's denial-of-service and smuggling corpus.
def fuzzKnownMaliciousPatterns : IO Unit := do
let patterns : Array ByteArray := #[
-- TLS 1.2 client hello prefix
ByteArray.mk #[0x16, 0x03, 0x01, 0x00, 0xa5, 0x01, 0x00, 0x00],
-- TLS 1.3 client hello prefix
ByteArray.mk #[0x16, 0x03, 0x03, 0x00, 0x7c, 0x01, 0x00, 0x00],
-- HTTP/2 connection preface
"PRI * HTTP/2.0\x0d\n\x0d\nSM\x0d\n\x0d\n".toUTF8,
-- Bare LF line endings
"GET / HTTP/1.1\nHost: example.com\n\n".toUTF8,
-- CR-only line endings
"GET / HTTP/1.1\x0dHost: example.com\x0d\x0d".toUTF8,
-- Null bytes everywhere
ByteArray.mk #[0x00, 0x00, 0x00, 0x00],
-- CRLF injection attempt in request-line
"GET /path%0d%0aInjected: header HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8,
-- Unicode in path (raw multibyte UTF-8)
"GET /caf\xc3\xa9 HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8,
-- HTTP response sent as a request (should fail, not panic)
"HTTP/1.1 200 OK\x0d\nContent-Length: 2\x0d\n\x0d\nok".toUTF8,
-- Request smuggling: CL.TE
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 6\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n0\x0d\n\x0d\nX".toUTF8,
-- Request smuggling: TE.CL
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nContent-Length: 3\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- Duplicate chunked coding
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked, chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- TE with tab (whitespace obfuscation)
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding:\x09chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- TE with null byte injection
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunk\x00ed\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- Extremely long method token
(String.ofList (List.replicate 8192 'A') ++ " / HTTP/1.1\x0d\nHost: h\x0d\n\x0d\n").toUTF8,
-- SSRF-like absolute-form URI targeting internal host
"GET http://169.254.169.254/latest/meta-data/ HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8,
-- Chunk size with non-hex chars
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\nGG\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- Chunk size overflow attempt (16+ hex digits)
"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\nffffffffffffffff1\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8,
-- Header with embedded CRLF in value
"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Inject: foo\x0d\nEvil: injected\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
-- Multiple Host headers (smuggling attempt)
"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nHost: evil.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
-- Absolute-form URI with bad host overrides Host header
"GET http://evil.internal/steal HTTP/1.1\x0d\nHost: public.example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
-- Folded header (obs-fold, rejected per RFC 9112)
"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Folded: value\x0d\n continuation\x0d\nConnection: close\x0d\n\x0d\n".toUTF8,
]
for i in [:patterns.size] do
let pattern := patterns[i]!
let (client, server) ← Mock.new
let response ← sendRawAndClose client server pattern okHandler
assertValidHttpOrEmpty s!"fuzzKnownMalicious pattern={i}" response
-- ============================================================================
-- Run all properties
-- ============================================================================
-- Property: any random byte sequence is handled safely (core libFuzzer equivalent).
#eval runWithTimeout "fuzz_random_bytes_no_panic" 30000 do
fuzzRandomBytesNoPanic 200 0x00FACADE
-- Property: single bit mutations on valid requests are handled safely.
#eval runWithTimeout "fuzz_bit_flip_valid_requests" 30000 do
fuzzBitFlipOnValidRequests 150 0x00B1FF10
-- Property: truncation at any byte boundary is handled safely.
#eval runWithTimeout "fuzz_truncated_requests" 30000 do
fuzzTruncatedRequests 150 0x00DEAD42
-- Property: HTTP method prefix followed by random garbage is handled safely.
#eval runWithTimeout "fuzz_method_prefix_with_garbage" 30000 do
fuzzMethodPrefixWithGarbage 100 0x00CA7500
-- Property: high-byte (non-ASCII) sequences are handled safely.
#eval runWithTimeout "fuzz_high_byte_values" 30000 do
fuzzHighByteValues 120 0x00FF8000
-- Property: garbage appended after a complete request is handled safely.
#eval runWithTimeout "fuzz_garbage_after_complete_request" 30000 do
fuzzGarbageAfterCompleteRequest 100 0x00A1B2C3
-- Property: every single-byte input is handled safely (all 256 cases).
#eval runWithTimeout "fuzz_single_byte_inputs" 30000 do
fuzzSingleByteInputs
-- Property: known attack patterns and malformed inputs are handled safely.
#eval runWithTimeout "fuzz_known_malicious_patterns" 30000 do
fuzzKnownMaliciousPatterns


@@ -27,6 +27,23 @@ private def hasCloseBodyEvent (events : Array (Event .receiving)) : Bool :=
| .closeBody => true
| _ => false
private def hasContinueEvent (events : Array (Event .receiving)) : Bool :=
events.any fun
| .«continue» => true
| _ => false
private def countNeedAnswerEvents (events : Array (Event .receiving)) : Nat :=
events.foldl (init := 0) fun n e =>
match e with
| .needAnswer => n + 1
| _ => n
private def countFailedEvents (events : Array (Event .receiving)) : Nat :=
events.foldl (init := 0) fun n e =>
match e with
| .failed _ => n + 1
| _ => n
private def pulledBodyBytes (chunks : Array PulledChunk) : ByteArray :=
chunks.foldl (fun acc c => acc ++ c.chunk.data) .empty
@@ -185,6 +202,23 @@ private def assertIncrementalSuccess
else
pure ()
private def runOneStepReceiving
(payload : ByteArray)
(config : Protocol.H1.Config := {}) :
Machine .receiving × StepResult .receiving :=
let machine0 : Machine .receiving := { config := config }
(machine0.feed payload).step
private def assertFailedWith
(name : String)
(payload : ByteArray)
(expected : Error)
(config : Protocol.H1.Config := {}) : IO Unit := do
let (machine, step) := runOneStepReceiving payload config
ensure name (hasFailedEvent step.events) s!"expected failure event, events:\n{repr step.events}"
ensure name (machine.error == some expected)
s!"expected error {repr expected}, got {repr machine.error}"
-- Deterministic: one-byte incremental content-length request.
#eval show IO Unit from do
let body := "hello".toUTF8


@@ -0,0 +1,202 @@
import Std.Internal.Http
import Std.Internal.Http.Protocol.H1.Parser
open Std Internal Parsec ByteArray
open Std.Http.Protocol.H1
private def ensure (name : String) (cond : Bool) (msg : String) : IO Unit := do
unless cond do
throw <| IO.userError s!"{name}: {msg}"
private def runParser (p : Parser α) (s : String) : Except String α :=
match (p <* eof).run s.toUTF8 with
| .ok x => .ok x
| .error e => .error e
private def randBelow (gen : StdGen) (maxExclusive : Nat) : Nat × StdGen :=
if maxExclusive = 0 then
(0, gen)
else
randNat gen 0 (maxExclusive - 1)
private def pick! [Inhabited α] (gen : StdGen) (xs : Array α) : α × StdGen :=
let (i, gen') := randBelow gen xs.size
(xs[i]!, gen')
private def randomToken (gen : StdGen) (len : Nat) : String × StdGen := Id.run do
let mut g := gen
let mut out := ""
for _ in [0:len] do
let (r, g') := randBelow g 38
g := g'
let c :=
if r < 26 then Char.ofNat (97 + r)
else if r < 36 then Char.ofNat (48 + (r - 26))
else if r = 36 then '-'
else '_'
out := out.push c
(out, g)
private def randomReason (gen : StdGen) (len : Nat) : String × StdGen := Id.run do
let mut g := gen
let mut out := ""
for _ in [0:len] do
let (r, g') := randBelow g 30
g := g'
let c := if r < 26 then Char.ofNat (65 + r) else ' '
out := out.push c
(out.trimAscii.toString, g)
private def pad3 (n : Nat) : String :=
if n < 10 then s!"00{n}" else if n < 100 then s!"0{n}" else s!"{n}"
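-- `pad3` zero-pads a status code to exactly three digits
-- (`pad3 7 = "007"`, `pad3 42 = "042"`, `pad3 204 = "204"`):
#eval show IO Unit from do
  unless pad3 7 == "007" && pad3 42 == "042" && pad3 204 == "204" do
    throw <| IO.userError "pad3 sketch failed"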
private def expectRequestOk (s : String) : IO Unit := do
match runParser (parseRequestLine {}) s with
| .ok _ => pure ()
| .error e => throw <| IO.userError s!"expected request-line success for {s.quote}, got: {e}"
private def expectRequestFail (s : String) : IO Unit := do
match runParser (parseRequestLine {}) s with
| .ok _ => throw <| IO.userError s!"expected request-line failure for {s.quote}"
| .error _ => pure ()
private def expectStatusOk (s : String) : IO Unit := do
match runParser (parseStatusLine {}) s with
| .ok _ => pure ()
| .error e => throw <| IO.userError s!"expected status-line success for {s.quote}, got: {e}"
private def expectStatusFail (s : String) : IO Unit := do
match runParser (parseStatusLine {}) s with
| .ok _ => throw <| IO.userError s!"expected status-line failure for {s.quote}"
| .error _ => pure ()
private def expectOk {α} (name : String) (p : Parser α) (s : String) : IO α := do
match runParser p s with
| .ok x => pure x
| .error e => throw <| IO.userError s!"{name}: expected success for {s.quote}, got {e}"
private def expectFail {α} (name : String) (p : Parser α) (s : String) : IO Unit := do
match runParser p s with
| .ok _ => throw <| IO.userError s!"{name}: expected failure for {s.quote}"
| .error _ => pure ()
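-- Usage sketch for the helpers above, mirroring the mutations exercised in
-- the fuzz loops below: a well-formed request-line parses, and dropping the
-- first space makes it fail.
#eval show IO Unit from do
  expectRequestOk "GET / HTTP/1.1\r\n"
  expectRequestFail "GET/ HTTP/1.1\r\n"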
#eval show IO Unit from do
let methods : Array String := #["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS", "HEAD", "CONNECT"]
let targets : Array String := #["/", "/a", "/a/b", "/a/b?q=1", "*", "http://example.com", "example.com:443"]
let versions : Array String := #["HTTP/1.1", "HTTP/1.0"]
let mut gen : StdGen := StdGen.mk 0x5eed1234 0x12345
for i in [0:400] do
let (m, g1) := pick! gen methods
let (t, g2) := pick! g1 targets
let (v, g3) := pick! g2 versions
gen := g3
let line := s!"{m} {t} {v}\r\n"
expectRequestOk line
-- Mutation 1: drop the first space
expectRequestFail s!"{m}{t} {v}\r\n"
-- Mutation 2: invalid version token
expectRequestFail s!"{m} {t} HTTP/2.0\r\n"
-- Mutation 3: bad method character
expectRequestFail s!"{m}! {t} {v}\r\n"
ensure "request fuzz progress" (i < 100000) "unreachable safety check"
#eval show IO Unit from do
let knownCodes : Array Nat := #[200, 201, 204, 301, 400, 404, 500, 503]
let mut gen : StdGen := StdGen.mk 0xabcde123 0x777
for _ in [0:400] do
let (code, g1) := pick! gen knownCodes
let (len, g2) := randBelow g1 20
let (reasonRaw, g3) := randomReason g2 (len + 1)
gen := g3
let reason := if reasonRaw.isEmpty then "OK" else reasonRaw
let line := s!"HTTP/1.1 {pad3 code} {reason}\r\n"
expectStatusOk line
-- Mutation 1: unsupported version
expectStatusFail s!"HTTP/2.0 {pad3 code} {reason}\r\n"
-- Mutation 2: non-digit in status code
expectStatusFail s!"HTTP/1.1 A{(pad3 code).drop 1} {reason}\r\n"
-- Mutation 3: illegal reason byte (DEL)
expectStatusFail s!"HTTP/1.1 {pad3 code} bad{Char.ofNat 127}\r\n"
#eval show IO Unit from do
-- Randomized malformed gibberish smoke: parser must simply return error or success,
-- but never crash/panic.
let mut gen : StdGen := StdGen.mk 0x31415926 0x27182818
for _ in [0:300] do
let (len, g1) := randBelow gen 80
let (tok, g2) := randomToken g1 (len + 1)
gen := g2
let _ := runParser (parseRequestLine {}) (tok ++ "\r\n")
let _ := runParser (parseStatusLine {}) (tok ++ "\r\n")
pure ()
-- Component tests for individual parser parts.
#eval show IO Unit from do
-- parseSingleHeader
let sh1 ← expectOk "parseSingleHeader some" (parseSingleHeader {} <* eof) "Host: x\r\n"
ensure "parseSingleHeader some present" sh1.isSome "expected some header"
let sh2 ← expectOk "parseSingleHeader none" (parseSingleHeader {} <* eof) "\r\n"
ensure "parseSingleHeader none present" sh2.isNone "expected header terminator"
-- parseChunkSize / parseChunkPartial
let (n1, ext1) ← expectOk "parseChunkSize bare" (parseChunkSize {} <* eof) "A\r\n"
ensure "parseChunkSize value" (n1 == 10) "chunk-size mismatch"
ensure "parseChunkSize ext empty" (ext1.isEmpty) "expected no extensions"
let (n2, ext2) ← expectOk "parseChunkSize ext" (parseChunkSize {} <* eof) "4;foo=bar;baz=\"qux\"\r\n"
ensure "parseChunkSize ext value" (n2 == 4) "chunk-size mismatch with ext"
ensure "parseChunkSize ext count" (ext2.size == 2) "expected 2 extensions"
let cp1 ← expectOk "parseChunkPartial some" (parseChunkPartial {} <* eof) "4\r\nWiki"
ensure "parseChunkPartial some isSome" cp1.isSome "expected chunk data"
ensure "parseChunkPartial some size" ((cp1.map (fun (n, _, _) => n)).getD 0 == 4) "size mismatch"
let cp0 ← expectOk "parseChunkPartial none" (parseChunkPartial {} <* eof) "0\r\n"
ensure "parseChunkPartial none isNone" cp0.isNone "expected last-chunk marker"
-- parseFixedSizeData / parseChunkSizedData
let fs1 ← expectOk "parseFixedSizeData complete" (parseFixedSizeData 4 <* eof) "Wiki"
ensure "parseFixedSizeData complete shape"
(match fs1 with | .complete _ => true | _ => false)
"expected complete result"
let fs2 ← expectOk "parseFixedSizeData incomplete" (parseFixedSizeData 4 <* eof) "Wi"
ensure "parseFixedSizeData incomplete shape"
(match fs2 with | .incomplete _ 2 => true | _ => false)
"expected incomplete result with remaining=2"
let cs1 ← expectOk "parseChunkSizedData complete" (parseChunkSizedData 4 <* eof) "Wiki\r\n"
ensure "parseChunkSizedData complete shape"
(match cs1 with | .complete _ => true | _ => false)
"expected complete chunk-sized result"
let cs2 ← expectOk "parseChunkSizedData incomplete" (parseChunkSizedData 4 <* eof) "Wi"
ensure "parseChunkSizedData incomplete shape"
(match cs2 with | .incomplete _ 2 => true | _ => false)
"expected incomplete chunk-sized result with remaining=2"
-- parseTrailers
let trailers ← expectOk "parseTrailers ok" (parseTrailers {} <* eof) "X-Test: a\r\nY-Test: b\r\n\r\n"
ensure "parseTrailers count" (trailers.size == 2) "expected 2 trailers"
expectFail "parseTrailers forbidden" (parseTrailers {} <* eof) "Content-Length: 1\r\n\r\n"
-- parseRequestLineRawVersion / parseStatusLineRawVersion
let (m1, _, v1) ← expectOk "parseRequestLineRawVersion" (parseRequestLineRawVersion {} <* eof) "GET / HTTP/1.1\r\n"
ensure "parseRequestLineRawVersion method" (m1 == Std.Http.Method.get) "method mismatch"
ensure "parseRequestLineRawVersion version" (v1 == some Std.Http.Version.v11) "expected recognized v11"
let (_, rv) ← expectOk "parseStatusLineRawVersion" (parseStatusLineRawVersion {} <* eof) "HTTP/1.1 204 No Content\r\n"
ensure "parseStatusLineRawVersion recognized" (rv == some Std.Http.Version.v11) "expected v11"
-- parseRequestLine / parseStatusLine failures
expectFail "parseRequestLine invalid version" (parseRequestLine {} <* eof) "GET / HTTP/2.0\r\n"
expectFail "parseStatusLine invalid version" (parseStatusLine {} <* eof) "HTTP/2.0 200 OK\r\n"


@@ -1,6 +1,7 @@
import Std.Internal.Http
open Std Http
open Std.Http.Internal.Test
open Std.Http.Protocol.H1
open Std.Http.Protocol.H1.Machine
@@ -58,15 +59,15 @@ private def withLastStep
| some step => k t step
/-- Create a new tester for the given direction. -/
def new (name : String) (config : Config := {}) : MachineTester dir :=
def new (name : String) (config : Protocol.H1.Config := {}) : MachineTester dir :=
{ name, machine := { config } }
/-- Create a new server-side tester (receives requests, sends responses). -/
def receiving (name : String) (config : Config := {}) : MachineTester .receiving :=
def receiving (name : String) (config : Protocol.H1.Config := {}) : MachineTester .receiving :=
{ name, machine := { config } }
/-- Create a new client-side tester (sends requests, receives responses). -/
def sending (name : String) (config : Config := {}) : MachineTester .sending :=
def sending (name : String) (config : Protocol.H1.Config := {}) : MachineTester .sending :=
{ name, machine := { config } }
/-- Feed a UTF-8 string to the machine. -/
@@ -312,20 +313,12 @@ def run (t : MachineTester dir) : IO Unit :=
end MachineTester
/-- Run a named group of tests; wraps any failure with the group name. -/
private def runGroup (name : String) (tests : IO Unit) : IO Unit := do
try tests
catch e => throw (IO.userError s!"[{name}]\n{toString e}")
----------------------------------------------------------------------------------------------------
private def mkGet (path : String := "/") (extra : String := "") : String :=
s!"GET {path} HTTP/1.1\r\nHost: example.com\r\n{extra}\r\n"
private def mkGet10 (path : String := "/") : String :=
s!"GET {path} HTTP/1.0\r\n\r\n"
private def mkPost (path : String) (bodyLen : Nat) (extra : String := "") : String :=
private def mkPostHead (path : String) (bodyLen : Nat) (extra : String := "") : String :=
s!"POST {path} HTTP/1.1\r\nHost: example.com\r\nContent-Length: {bodyLen}\r\n{extra}\r\n"
private def mkChunkedPost (path : String) (extra : String := "") : String :=
@@ -432,7 +425,7 @@ private def minimalGetRequest : Request.Head :=
let body := "Hello, World!".toUTF8
MachineTester.receiving "§6.3: Content-Length body delivered exactly"
|>.feed (mkPost "/" body.size ++ String.fromUTF8! body)
|>.feed (mkPostHead "/" body.size ++ String.fromUTF8! body)
|>.step |>.assertNoError |>.assertHasEndHeaders
|>.drainBody |>.assertPulledBody body |>.assertLastChunkFinal |>.run
@@ -461,12 +454,12 @@ private def minimalGetRequest : Request.Head :=
let partialBody := "hello".toUTF8
MachineTester.receiving "§6.3: noMoreInput mid fixed-length body → connectionClosed"
|>.feed (mkPost "/" 10 ++ String.fromUTF8! partialBody)
|>.feed (mkPostHead "/" 10 ++ String.fromUTF8! partialBody)
|>.step |>.assertHasEndHeaders |>.noMoreInput
|>.drainBody |>.assertFailedWith .connectionClosed |>.run
MachineTester.receiving "§6.3: fixed-length body stops at declared byte count"
|>.feed (mkPost "/" 5 ++ "helloworld")
|>.feed (mkPostHead "/" 5 ++ "helloworld")
|>.step |>.assertHasEndHeaders
|>.drainBody
|>.assertPulledBody "hello".toUTF8
@@ -475,7 +468,7 @@ private def minimalGetRequest : Request.Head :=
let part1 := "hel".toUTF8
let part2 := "lo".toUTF8
MachineTester.receiving "§6.3: fixed-length body assembled across incremental feeds"
|>.feed (mkPost "/" 5) |>.step |>.assertHasEndHeaders
|>.feed (mkPostHead "/" 5) |>.step |>.assertHasEndHeaders
|>.feedBytes part1 |>.pullBody
|>.feedBytes part2 |>.pullBody
|>.assertPulledBody (part1 ++ part2) |>.assertLastChunkFinal |>.run
@@ -642,7 +635,7 @@ private def minimalGetRequest : Request.Head :=
-- accepted: body becomes readable after canContinue .continue
let body := "hello".toUTF8
MachineTester.receiving "§15.5.14: Expect:100-continue accepted → body readable"
|>.feed (mkPost "/upload" body.size "Expect: 100-continue\r\n")
|>.feed (mkPostHead "/upload" body.size "Expect: 100-continue\r\n")
|>.step |>.assertHasEndHeaders |>.assertHasContinue
|>.canContinue .«continue»
|>.feedBytes body |>.step |>.assertCanPullBodyNow
@@ -650,7 +643,7 @@ private def minimalGetRequest : Request.Head :=
-- rejected: reader closed, body not delivered
MachineTester.receiving "§15.5.14: Expect:100-continue rejected → reader closed"
-|>.feed (mkPost "/upload" 5 "Expect: 100-continue\r\n")
+|>.feed (mkPostHead "/upload" 5 "Expect: 100-continue\r\n")
|>.step |>.assertHasContinue
|>.canContinue .expectationFailed
|>.assertReaderClosed
@@ -666,7 +659,7 @@ private def minimalGetRequest : Request.Head :=
-- §15.5.14: writer completing a non-1xx response while reader is in .continue state
-- must force-close the reader (body will never arrive after rejection)
MachineTester.receiving "§15.5.14: non-final send while reader awaits canContinue → .close emitted"
-|>.feed (mkPost "/upload" 5 "Expect: 100-continue\r\n")
+|>.feed (mkPostHead "/upload" 5 "Expect: 100-continue\r\n")
|>.step |>.assertHasContinue
|>.send { status := .expectationFailed } |>.setKnownSize (.fixed 0) |>.userClosedBody
|>.step
@@ -690,7 +683,7 @@ private def minimalGetRequest : Request.Head :=
-- §15.2: 1xx responses are valid while awaiting canContinue; writer must remain open
MachineTester.receiving "§15.2: 1xx interim response while reader awaits canContinue → writer stays open"
-|>.feed (mkPost "/upload" 5 "Expect: 100-continue\r\n")
+|>.feed (mkPostHead "/upload" 5 "Expect: 100-continue\r\n")
|>.step |>.assertHasContinue
|>.send { status := .processing } |>.step
|>.assertIsWaitingMessage |>.run


@@ -0,0 +1,351 @@
import Std.Internal.Http
import Std.Internal.Async
import Std.Internal.Async.Timer
open Std.Internal.IO Async
open Std Http Internal Test
def runWithTimeout {α : Type} (name : String) (timeoutMs : Nat := 2000) (action : IO α) : IO α := do
let task ← IO.asTask action
let ticks := (timeoutMs + 9) / 10
let rec loop (remaining : Nat) : IO α := do
if (← IO.getTaskState task) == .finished then
match (← IO.wait task) with
| .ok x => pure x
| .error err => throw err
else
match remaining with
| 0 =>
IO.cancel task
throw <| IO.userError s!"Test '{name}' timed out after {timeoutMs}ms (possible hang/loop)"
| n + 1 =>
IO.sleep 10
loop n
loop ticks
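The helper above polls the task state every 10 ms instead of blocking on `IO.wait`, so a hung action can still be cancelled once the tick budget runs out. A minimal sanity sketch (trivial action, no expected-output claim):

```lean
-- 25 ms rounds up to 3 ticks of 10 ms; the action finishes on the first poll.
#eval runWithTimeout "sanity-sketch" 25 do
  pure (2 + 2)
```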
def sendRaw (client : Mock.Client) (server : Mock.Server) (raw : ByteArray)
(handler : TestHandler) (config : Config := { lingeringTimeout := 500, generateDate := false }) : IO ByteArray := Async.block do
client.send raw
Std.Http.Server.serveConnection server handler config
|>.run
let res ← client.recv?
pure <| res.getD .empty
def sendRawTimed (name : String) (raw : ByteArray)
(handler : TestHandler) (config : Config := { lingeringTimeout := 500, generateDate := false }) : IO ByteArray :=
runWithTimeout name 2000 do
let (client, server) ← Mock.new
sendRaw client server raw handler config
def runClosedClientTimed (name : String) (raw : ByteArray)
(handler : TestHandler) (config : Config := { lingeringTimeout := 500, generateDate := false }) : IO Unit :=
runWithTimeout name 2000 do
Async.block do
let (client, server) ← Mock.new
client.send raw
client.close
Std.Http.Server.serveConnection server handler config
|>.run
def countOccurrences (s : String) (needle : String) : Nat :=
if needle.isEmpty then 0 else (s.splitOn needle).length - 1
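`countOccurrences` drives `assertStatusCount` below; its `splitOn` arithmetic can be sanity-checked directly (a sketch using `#guard`):

```lean
-- `splitOn` returns one more piece than there are separators.
#guard countOccurrences "HTTP/1.1 200 ... HTTP/1.1 404" "HTTP/1.1 " == 2
#guard countOccurrences "abc" "x" == 0
#guard countOccurrences "abc" "" == 0  -- empty needle is special-cased to 0
```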
def assertStatusPrefix (name : String) (response : ByteArray) (prefix_ : String) : IO Unit := do
let text := String.fromUTF8! response
unless text.startsWith prefix_ do
throw <| IO.userError s!"Test '{name}' failed:\nExpected prefix: {prefix_.quote}\nGot:\n{text.quote}"
def assertNotContains (name : String) (response : ByteArray) (needle : String) : IO Unit := do
let text := String.fromUTF8! response
if countOccurrences text needle > 0 then
throw <| IO.userError s!"Test '{name}' failed:\nDid not expect {needle.quote}\nGot:\n{text.quote}"
def assertEndsWith (name : String) (response : ByteArray) (suffix_ : String) : IO Unit := do
let text := String.fromUTF8! response
unless text.endsWith suffix_ do
throw <| IO.userError s!"Test '{name}' failed:\nExpected suffix: {suffix_.quote}\nGot:\n{text.quote}"
def assertStatusCount (name : String) (response : ByteArray) (expected : Nat) : IO Unit := do
let text := String.fromUTF8! response
let got := countOccurrences text "HTTP/1.1 "
unless got == expected do
throw <| IO.userError s!"Test '{name}' failed:\nExpected {expected} HTTP responses, got {got}\n{text.quote}"
def onesChunked (n : Nat) : String := Id.run do
let mut body := ""
for i in [0:n] do
body := body ++ s!"{toString i |>.length}\x0d\n{toString i}\x0d\n"
body ++ "0\x0d\n\x0d\n"
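Each iteration frames the decimal counter as a size line plus a data line, and the builder ends with the zero-length terminator chunk; the sizes stay single hex digits for the counts used in these tests. A quick check of the wire format:

```lean
-- onesChunked 2 = "1\r\n0\r\n" ++ "1\r\n1\r\n" ++ "0\r\n\r\n"
#guard onesChunked 2 == "1\x0d\n0\x0d\n1\x0d\n1\x0d\n0\x0d\n\x0d\n"
```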
def ignoreHandler : TestHandler := fun _ => Response.ok |>.text "ok"
def echoBodyHandler : TestHandler := fun req => do
let mut body := ByteArray.empty
for chunk in req.body do
body := body ++ chunk.data
Response.ok |>.text (String.fromUTF8! body)
def firstChunkHandler : TestHandler := fun req => do
let first ← req.body.recv
let text :=
match first with
| some chunk => String.fromUTF8! chunk.data
| none => "none"
Response.ok |>.text text
def streamPiecesHandler (n : Nat) : TestHandler := fun _ => do
let outgoing ← Body.mkStream
background do
for i in [0:n] do
outgoing.send <| Chunk.ofByteArray s!"piece-{i};".toUTF8
outgoing.close
return Response.ok
|>.body outgoing
def stressResponseHandler (n : Nat) : TestHandler := fun _ => do
let outgoing ← Body.mkStream
background do
for i in [0:n] do
outgoing.send <| Chunk.ofByteArray s!"x{i},".toUTF8
outgoing.close
return Response.ok
|>.body outgoing
-- 01: Ignore fixed-size request body and respond immediately.
#eval runWithTimeout "01_ignore_fixed_length_body" 2000 do
let raw := "POST /fixed HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 10\x0d\nConnection: close\x0d\n\x0d\n0123456789".toUTF8
let response ← sendRawTimed "01_ignore_fixed_length_body/send" raw ignoreHandler
assertStatusPrefix "01_ignore_fixed_length_body" response "HTTP/1.1 200"
-- 02: Ignore chunked request body and respond immediately.
#eval runWithTimeout "02_ignore_chunked_body" 2000 do
let raw := "POST /chunked HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n6\x0d\n world\x0d\n0\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "02_ignore_chunked_body/send" raw ignoreHandler
assertStatusPrefix "02_ignore_chunked_body" response "HTTP/1.1 200"
-- 03: Large fixed-size body ignored by handler (regression for stalled body transfer).
#eval runWithTimeout "03_ignore_large_fixed_body" 2000 do
let body := String.ofList (List.replicate 8192 'A')
let raw := s!"POST /large HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 8192\x0d\nConnection: close\x0d\n\x0d\n{body}".toUTF8
let response ← sendRawTimed "03_ignore_large_fixed_body/send" raw ignoreHandler
assertStatusPrefix "03_ignore_large_fixed_body" response "HTTP/1.1 200"
-- 04: Read full request body and echo it.
#eval runWithTimeout "04_echo_full_body" 2000 do
let raw := "POST /echo HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 11\x0d\nConnection: close\x0d\n\x0d\nhello world".toUTF8
let response ← sendRawTimed "04_echo_full_body/send" raw echoBodyHandler
assertContains response "hello world"
-- 05: Read only first chunk and reply (should not deadlock connection).
#eval runWithTimeout "05_read_first_chunk_only" 2000 do
let raw := "POST /first HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 11\x0d\nConnection: close\x0d\n\x0d\nhello world".toUTF8
let response ← sendRawTimed "05_read_first_chunk_only/send" raw firstChunkHandler
assertStatusPrefix "05_read_first_chunk_only" response "HTTP/1.1 200"
assertContains response "hello world"
-- 06: Stream many response chunks.
#eval runWithTimeout "06_stream_many_response_chunks" 2000 do
let raw := "GET /stream HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "06_stream_many_response_chunks/send" raw (streamPiecesHandler 40)
assertStatusPrefix "06_stream_many_response_chunks" response "HTTP/1.1 200"
assertContains response "piece-0;"
assertContains response "piece-39;"
-- 07: Stream response with known fixed size.
#eval runWithTimeout "07_stream_known_size" 2000 do
let raw := "GET /known HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "07_stream_known_size/send" raw (fun _ => do
let outgoing ← Body.mkStream
outgoing.setKnownSize (some (.fixed 8))
background do
outgoing.send <| Chunk.ofByteArray "abcd".toUTF8
outgoing.send <| Chunk.ofByteArray "efgh".toUTF8
outgoing.close
return Response.ok
|>.body outgoing)
assertStatusPrefix "07_stream_known_size" response "HTTP/1.1 200"
assertContains response "Content-Length: 8"
assertContains response "abcdefgh"
-- 08: Use interestSelector before sending response data.
#eval runWithTimeout "08_interest_selector_gating" 2000 do
let raw := "GET /interest HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "08_interest_selector_gating/send" raw (fun _ => do
let outgoing ← Body.mkStream
background do
let interested ← Selectable.one #[
.case outgoing.interestSelector pure
]
if interested then
outgoing.send <| Chunk.ofByteArray "interest-ok".toUTF8
outgoing.close
return Response.ok
|>.body outgoing)
assertStatusPrefix "08_interest_selector_gating" response "HTTP/1.1 200"
assertContains response "interest-ok"
-- 09: Incomplete sends collapse into one payload.
#eval runWithTimeout "09_incomplete_send_collapse" 2000 do
let raw := "GET /collapse HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "09_incomplete_send_collapse/send" raw (fun _ => do
let outgoing ← Body.mkStream
background do
outgoing.send ({ data := "hello ".toUTF8, extensions := #[] } : Chunk) (incomplete := true)
outgoing.send ({ data := "wor".toUTF8, extensions := #[] } : Chunk) (incomplete := true)
outgoing.send ({ data := "ld".toUTF8, extensions := #[] } : Chunk)
outgoing.close
return Response.ok
|>.body outgoing)
assertStatusPrefix "09_incomplete_send_collapse" response "HTTP/1.1 200"
assertContains response "hello world"
-- 10: Pipeline fixed-body POST then GET.
#eval runWithTimeout "10_pipeline_fixed_then_get" 2000 do
let raw := ("POST /one HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\n\x0d\nhello" ++
"GET /two HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let response ← sendRawTimed "10_pipeline_fixed_then_get/send" raw uriHandler
assertStatusCount "10_pipeline_fixed_then_get" response 2
assertContains response "/one"
assertContains response "/two"
-- 11: Pipeline chunked-body POST then GET.
#eval runWithTimeout "11_pipeline_chunked_then_get" 2000 do
let raw := ("POST /chunk HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n" ++
"GET /two HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let response ← sendRawTimed "11_pipeline_chunked_then_get/send" raw uriHandler
assertStatusCount "11_pipeline_chunked_then_get" response 2
assertContains response "/chunk"
assertContains response "/two"
-- 12: Malformed first request should not loop into second.
#eval runWithTimeout "12_malformed_closes_connection" 2000 do
let raw := ("GET / HTTP/1.1\x0d\nHost: example.com\x0d\nBadHeader value\x0d\n\x0d\n" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let response ← sendRawTimed "12_malformed_closes_connection/send" raw uriHandler
assertStatusPrefix "12_malformed_closes_connection" response "HTTP/1.1 400"
assertStatusCount "12_malformed_closes_connection" response 1
-- 13: Client closes while server is streaming response.
#eval runWithTimeout "13_client_close_while_streaming" 2000 do
let raw := "GET /close-stream HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
runClosedClientTimed "13_client_close_while_streaming/run" raw (stressResponseHandler 600)
-- 14: Client closes before sending full body.
#eval runWithTimeout "14_client_close_mid_body" 2000 do
let raw := "POST /mid-body HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 100\x0d\nConnection: close\x0d\n\x0d\nabcde".toUTF8
runClosedClientTimed "14_client_close_mid_body/run" raw ignoreHandler
-- 15: Handler throws while request body is present.
#eval runWithTimeout "15_handler_throw_unread_body" 2000 do
let raw := "POST /throw HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\nConnection: close\x0d\n\x0d\nhello".toUTF8
let response ← sendRawTimed "15_handler_throw_unread_body/send" raw (fun _ => throw <| IO.userError "boom")
assertStatusPrefix "15_handler_throw_unread_body" response "HTTP/1.1 500"
-- 16: Many tiny chunked request chunks ignored by handler.
#eval runWithTimeout "16_many_tiny_chunked_ignored" 2000 do
let body := onesChunked 80
let raw := s!"POST /tiny HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n{body}".toUTF8
let response ← sendRawTimed "16_many_tiny_chunked_ignored/send" raw ignoreHandler
assertStatusPrefix "16_many_tiny_chunked_ignored" response "HTTP/1.1 200"
-- 17: Many tiny chunked request chunks consumed and counted.
#eval runWithTimeout "17_many_tiny_chunked_consumed" 2000 do
let body := onesChunked 25
let raw := s!"POST /count HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n{body}".toUTF8
let response ← sendRawTimed "17_many_tiny_chunked_consumed/send" raw (fun req => do
let mut count := 0
for _ in req.body do
count := count + 1
Response.ok |>.text (toString count))
assertStatusPrefix "17_many_tiny_chunked_consumed" response "HTTP/1.1 200"
assertEndsWith "17_many_tiny_chunked_consumed" response "25"
pure ()
-- 18: Stress response streaming with many chunks and active client.
#eval runWithTimeout "18_stress_streaming_active_client" 2000 do
let raw := "GET /stress HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let response ← sendRawTimed "18_stress_streaming_active_client/send" raw (stressResponseHandler 120)
assertStatusPrefix "18_stress_streaming_active_client" response "HTTP/1.1 200"
assertContains response "x0,"
assertContains response "x119,"
-- 19: Pipeline with large unread first body still processes second request.
#eval runWithTimeout "19_pipeline_large_unread_then_get" 2000 do
let body := String.ofList (List.replicate 5000 'b')
let raw := (s!"POST /big HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5000\x0d\n\x0d\n{body}" ++
"GET /after HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let response ← sendRawTimed "19_pipeline_large_unread_then_get/send" raw uriHandler
assertStatusCount "19_pipeline_large_unread_then_get" response 2
assertContains response "/big"
assertContains response "/after"
-- 20: Triple pipeline mixed body styles.
#eval runWithTimeout "20_triple_pipeline_mixed" 2000 do
let raw := ("POST /a HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 4\x0d\n\x0d\ndata" ++
"POST /b HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n3\x0d\nhey\x0d\n0\x0d\n\x0d\n" ++
"GET /c HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n").toUTF8
let response ← sendRawTimed "20_triple_pipeline_mixed/send" raw uriHandler
assertStatusCount "20_triple_pipeline_mixed" response 3
assertContains response "/a"
assertContains response "/b"
assertContains response "/c"
-- 21: Slow/incomplete active body transfer must time out (no connection pinning).
#eval runWithTimeout "21_incomplete_slow_post_times_out" 2000 do
let raw := "POST /slow HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 100\x0d\nConnection: close\x0d\n\x0d\nabcde".toUTF8
let response ← sendRawTimed
"21_incomplete_slow_post_times_out/send"
raw
(fun req => do
let _s : String ← req.body.readAll
Response.ok |>.text "unreachable")
(config := { lingeringTimeout := 200, generateDate := false })
assertStatusPrefix "21_incomplete_slow_post_times_out" response "HTTP/1.1 408"
-- 22: Keep-alive + unknown-size stream flushes once first chunk is available.
#eval runWithTimeout "22_keepalive_unknown_size_flushes_early" 3000 do
Async.block do
let (client, server) ← Mock.new
let handler : TestHandler := fun _ => do
let outgoing ← Body.mkStream
background do
outgoing.send <| Chunk.ofByteArray "aaa".toUTF8
let sleep ← Sleep.mk 300
sleep.wait
outgoing.send <| Chunk.ofByteArray "bbb".toUTF8
outgoing.close
return Response.ok
|>.body outgoing
background <| (Std.Http.Server.serveConnection server handler {
lingeringTimeout := 800
keepAliveTimeout := ⟨1500, by decide⟩
generateDate := false
}).run
client.send "GET /stream HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8
let mut early : Option ByteArray := none
for _ in [0:5] do
if early.isNone then
let sleep ← Sleep.mk 40
sleep.wait
early ← client.tryRecv?
let earlyBytes := early.getD ByteArray.empty
if earlyBytes.isEmpty then
throw <| IO.userError "Test '22_keepalive_unknown_size_flushes_early' failed:\nExpected early streamed bytes before producer EOF"
assertContains earlyBytes "Transfer-Encoding: chunked"
assertContains earlyBytes "aaa"
assertNotContains "22_keepalive_unknown_size_flushes_early no second chunk yet" earlyBytes "bbb"
let sleep ← Sleep.mk 420
sleep.wait
let later := (← client.tryRecv?).getD ByteArray.empty
assertContains later "bbb"
client.close


@@ -0,0 +1,150 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Helper: run pipelined raw request string, closing the client after send.
-- Returns (response bytes, list of URIs seen by the handler).
private def runPipelined
(raw : String)
(readBody : Bool := true)
(config : Config := defaultConfig) : IO (ByteArray × Array String) := Async.block do
let (client, server) ← Mock.new
let seenRef ← IO.mkRef (#[] : Array String)
let handler : TestHandler := fun req => do
seenRef.modify (·.push (toString req.line.uri))
let body ←
if readBody then req.body.readAll
else pure "<ignored>"
Response.ok |>.text s!"{toString req.line.uri}:{body}"
client.send raw.toUTF8
client.getSendChan.close
Std.Http.Server.serveConnection server handler config |>.run
let response ← client.recv?
let seen ← seenRef.get
pure (response.getD .empty, seen)
private def assertSeenCount (seen : Array String) (expected : Nat) : IO Unit := do
unless seen.size == expected do
throw <| IO.userError s!"expected {expected} handler calls, got {seen.size}: {seen}"
-- HTTP/1.1 keep-alive behavior
#eval runGroup "Keep-alive: basic" do
check "two sequential keep-alive requests → 2 responses"
(raw :=
"GET /first HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(expect := fun r =>
assertResponseCount r 2 *>
assertContains r "/first" *>
assertContains r "/second")
check "Connection: close on first request blocks pipelined second"
(raw :=
"GET /first HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(expect := fun r =>
assertResponseCount r 1 *>
assertContains r "/first" *>
assertAbsent r "/second")
check "enableKeepAlive: false → one response only"
(raw :=
"GET /1 HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n" ++
"GET /2 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(config := { defaultConfig with enableKeepAlive := false, lingeringTimeout := 3000 })
(expect := fun r =>
assertResponseCount r 1 *>
assertContains r "/1" *>
assertAbsent r "/2")
check "maxRequests: 2 caps third request"
(raw :=
"GET /0 HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n" ++
"GET /1 HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n" ++
"GET /2 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(config := { defaultConfig with maxRequests := 2, lingeringTimeout := 3000 })
(expect := fun r =>
assertResponseCount r 2 *>
assertContains r "/0" *>
assertContains r "/1" *>
assertAbsent r "/2")
-- Body draining between keep-alive requests
#eval runGroup "Keep-alive: unread body draining" do
check "handler ignores fixed-size body → next keep-alive works"
(raw :=
"POST /ignore HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 5\x0d\n\x0d\nhello" ++
"GET /after HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(config := { defaultConfig with lingeringTimeout := 3000 })
(expect := fun r =>
assertResponseCount r 2 *>
assertContains r "/ignore" *>
assertContains r "/after")
check "handler ignores chunked body → next keep-alive works"
(raw :=
"POST /chunked HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n" ++
"GET /next HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun req => Response.ok |>.text (toString req.line.uri))
(config := { defaultConfig with lingeringTimeout := 3000 })
(expect := fun r =>
assertResponseCount r 2 *>
assertContains r "/chunked" *>
assertContains r "/next")
-- Pipelining after exact Content-Length
#eval runGroup "Keep-alive: pipelined requests after exact CL" do
let (response, seen) ← runPipelined
("POST /first HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 3\x0d\n\x0d\nabc" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
assertResponseCount response 2
assertContains response "/first"
assertContains response "/second"
assertSeenCount seen 2
#eval runGroup "Keep-alive: incomplete body blocks pipelining" do
let (response1, seen1) ← runPipelined
("POST /first HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 10\x0d\n\x0d\nabc" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
assertContains response1 "/first"
assertAbsent response1 "/second"
assertSeenCount seen1 1
let (response2, _) ← runPipelined
("POST /chunked-first HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\nF\x0d\nhel" ++
"GET /second HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
assertAbsent response2 "/second"
#eval runGroup "Keep-alive: CL=0 and complete chunked allow immediate next" do
let (resp1, seen1) ← runPipelined
("POST /empty HTTP/1.1\x0d\nHost: example.com\x0d\nContent-Length: 0\x0d\n\x0d\n" ++
"GET /tail HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
assertResponseCount resp1 2
assertContains resp1 "/empty"
assertContains resp1 "/tail"
assertSeenCount seen1 2
let (resp2, seen2) ← runPipelined
("POST /chunked HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n" ++
"GET /tail HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
assertResponseCount resp2 2
assertContains resp2 "/chunked"
assertContains resp2 "/tail"
assertSeenCount seen2 2


@@ -0,0 +1,126 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Shared fixtures
private def ok200 : String :=
"HTTP/1.1 200 OK\x0d\nContent-Type: text/plain; charset=utf-8\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 2\x0d\n\x0d\nok"
-- RFC 9112 §5: Header fields — syntax and byte-level validation
#eval runGroup "RFC 9112 §5: header field syntax" do
check "header without colon → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nBadHeader value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "leading whitespace (obs-fold) → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\n X-Bad: folded\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "space in header name → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nBad Header: value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "bare LF line endings → 400"
(raw := "GET / HTTP/1.1\nHost: example.com\nConnection: close\n\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "tab in header value → accepted"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Tab: value\twith\ttabs\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
check "additional colon in header value stays in value"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nBad:Name: value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
check "CRLF in header value parsed as two separate headers → 200"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Inject: value\x0d\nEvil: injected\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
-- Critical: NUL and control chars in header names/values
#eval runGroup "RFC 9110 §5.5: invalid bytes in header fields (Critical)" do
check "NUL in header name → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Bad\x00Header: value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "NUL in header value → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Header: bad\x00value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "control char (0x01) in header value → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Header: bad\x01value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
-- RFC 9112 §6.3 / §9110 §8.6: header size limits
#eval runGroup "Header size limits" do
check "header name > 256 bytes → 400"
(raw :=
let longName := String.ofList (List.replicate 257 'X')
s!"GET / HTTP/1.1\x0d\nHost: example.com\x0d\n{longName}: value\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "header value too long → 400"
(raw :=
let longVal := String.ofList (List.replicate 9000 'x')
s!"GET / HTTP/1.1\x0d\nHost: example.com\x0d\nX-Long: {longVal}\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "too many headers (101) → 431"
(raw := Id.run do
let mut raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n"
for i in [0:101] do
raw := raw ++ s!"X-Header-{i}: value{i}\x0d\n"
return raw ++ "\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r431)
check "total header bytes too large → 431"
(raw := Id.run do
let mut raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n"
let v := String.ofList (List.replicate 200 'a')
for i in [0:200] do
raw := raw ++ s!"X-Header-{i}: {v}\x0d\n"
return raw ++ "\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r431)
check "request-line too long → 400"
(raw :=
let longUri := "/" ++ String.ofList (List.replicate 2000 'a')
s!"GET {longUri} HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
#eval runGroup "maxStartLineLength config" do
let cfg : Config := { defaultConfig with maxStartLineLength := 16384 }
let segment := String.ofList (List.replicate 255 'a')
let maxUri := "/" ++ String.intercalate "/" (List.replicate 32 segment)
check "URI at maxStartLineLength limit → 200"
(raw := s!"GET {maxUri} HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(config := cfg)
(expect := fun r => assertExact r ok200)
check "URI one byte over limit → 414"
(raw := s!"GET {maxUri}/x HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(config := cfg)
(expect := fun r => assertStatus r "HTTP/1.1 414")


@@ -0,0 +1,166 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Shared fixtures
private def ok200 : String :=
"HTTP/1.1 200 OK\x0d\nContent-Type: text/plain; charset=utf-8\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 2\x0d\n\x0d\nok"
-- RFC 9112 §3: Request Line
#eval runGroup "RFC 9112 §3: request-line parse failures" do
check "missing version → 400"
(raw := "GET /\x0d\nHost: example.com\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "missing URI (double space) → 400"
(raw := "GET HTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "extra spaces in request-line → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "whitespace-only request-line → 400"
(raw := " \x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "no spaces in request-line → 400"
(raw := "GETHTTP/1.1\x0d\nHost: example.com\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "garbage after request-line version → 400"
(raw := "GET / HTTP/1.1 xxxxxx\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
-- Empty connection: no bytes → silent close, no response
checkClose "empty connection → silent close"
(raw := "")
(handler := okHandler)
(expect := fun r => assertExact r "")
#eval runGroup "RFC 9112 §2.2: leading CRLF before request-line" do
check "single leading CRLF accepted"
(raw := "\x0d\nGET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
-- RFC 9112 §9: HTTP version
#eval runGroup "RFC 9112 §9: HTTP version" do
check "HTTP/2.0 → 505"
(raw := "GET / HTTP/2.0\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r505)
-- RFC 9110 §9: Methods
#eval runGroup "RFC 9110 §9: method validation" do
check "unknown method FOOBAR → 400"
(raw := "FOOBAR / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "lowercase method → 400"
(raw := "get / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "non-ASCII method → 400"
(raw := "GÉT / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "very long unrecognized method → 400"
(raw :=
let m := String.ofList (List.replicate 20 'G')
s!"{m} / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "token method with hyphen (X-CUSTOM) → 400"
(raw := "X-CUSTOM / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
-- RFC 9112 §3.2: Request target forms
#eval runGroup "RFC 9112 §3.2: request target forms" do
check "GET authority-form → 400"
(raw := "GET example.com:443 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "CONNECT authority-form accepted"
(raw := "CONNECT example.com:443 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
check "CONNECT authority-form port mismatch → 400"
(raw := "CONNECT example.com:444 HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "GET asterisk-form → 400"
(raw := "GET * HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "OPTIONS * accepted"
(raw := "OPTIONS * HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
check "absolute-form URI accepted"
(raw := "GET http://example.com/path HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
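The target-form matrix exercised above can be summarized with a small illustrative inductive. This is a hypothetical sketch for orientation only; the server's actual request-target representation lives in the `Std.Http` URI module and may differ:

```lean
/-- The four request-target forms of RFC 9112 §3.2 (illustrative only). -/
inductive RequestTarget where
  /-- `origin-form`, e.g. `/path?q=1` — the common case for most methods. -/
  | origin (path : String) (query : Option String)
  /-- `absolute-form`, e.g. `http://example.com/path` — required when talking to proxies. -/
  | absolute (uri : String)
  /-- `authority-form`, e.g. `example.com:443` — only valid for CONNECT. -/
  | authority (host : String) (port : Nat)
  /-- `asterisk-form`, `*` — only valid for OPTIONS. -/
  | asterisk
```

The tests above pin down exactly the two method-restricted cases: authority-form is accepted only for CONNECT, and asterisk-form only for OPTIONS; everything else is a 400.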
-- RFC 9112 §3.3: Early invalid bytes
#eval runGroup "RFC 9112 §3: early invalid bytes" do
checkClose "NUL byte → 400"
(raw := "\x00")
(handler := okHandler)
(expect := fun r => assertExact r r400)
checkClose "SP byte → 400"
(raw := "\x20")
(handler := okHandler)
(expect := fun r => assertExact r r400)
checkClose "TLS client-hello prefix → 400"
(raw := "\x16\x03\x01\x00\xa5")
(handler := okHandler)
(expect := fun r => assertExact r r400)
-- RFC 7230 §5.4: Host header
#eval runGroup "RFC 7230 §5.4: Host header" do
check "missing Host header → 400"
(raw := "GET / HTTP/1.1\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "empty Host allowed for origin-form"
(raw := "GET / HTTP/1.1\x0d\nHost: \x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)
check "multiple Host headers → 400"
(raw := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nHost: other.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r r400)
check "absolute-form: URI authority takes precedence over Host"
(raw := "GET http://good.example/path HTTP/1.1\x0d\nHost: good.example\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200)


@@ -0,0 +1,197 @@
import Std.Internal.Http.Test.Helpers
open Std.Internal.IO Async
open Std Http Internal Test
-- Shared fixtures
private def ok200Head : String :=
"HTTP/1.1 200 OK\x0d\nContent-Type: text/plain; charset=utf-8\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 2\x0d\n\x0d\n"
-- RFC 9110 §9.3.2: HEAD
#eval runGroup "RFC 9110 §9.3.2: HEAD response framing" do
check "HEAD omits body bytes, preserves headers"
(raw := "HEAD / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := okHandler)
(expect := fun r => assertExact r ok200Head)
check "GET and HEAD produce identical header sections"
(raw := "GET /frame HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n")
(handler := fun _ => Response.ok |>.text "hello")
(expect := fun getResp => do
-- Run HEAD against the same handler
let (client2, server2) ← Mock.new
let headResp ← Async.block do
client2.send "HEAD /frame HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
Std.Http.Server.serveConnection server2 (show TestHandler from fun _ => Response.ok |>.text "hello") defaultConfig |>.run
return (← client2.recv?).getD .empty
let getHeaders := (String.fromUTF8! getResp).splitOn "\x0d\n\x0d\n" |>.headD ""
let headHeaders := (String.fromUTF8! headResp).splitOn "\x0d\n\x0d\n" |>.headD ""
unless getHeaders == headHeaders do
throw <| IO.userError s!"headers differ:\nGET: {getHeaders.quote}\nHEAD: {headHeaders.quote}"
assertContains getResp "hello" *>
assertAbsent headResp "hello")
-- RFC 9110 §15.4: 304 and 204 response framing
#eval runGroup "RFC 9110 §15.4: 304 Not Modified strips framing headers" do
-- Direct machine test: write a 304 head with Content-Length: 5 and verify it is stripped.
-- RFC 9110 §8.6 permits Content-Length in 304 as optional metadata, but we strip it to
-- avoid forwarding a stale or wrong value from a handler that did not intend to advertise
-- a body size.
let request := "GET /cache HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let machine0 : Protocol.H1.Machine .receiving := { config := {} }
let (machine1, _) := (machine0.feed request).step
let headers304 := Headers.empty.insert Header.Name.contentLength (Header.Value.ofString! "5")
let (_, step304) := (machine1.send ({ status := .notModified, headers := headers304 } : Response.Head)).step
let text304 := String.fromUTF8! step304.output.toByteArray
unless text304.contains "HTTP/1.1 304 Not Modified" do
throw <| IO.userError s!"expected 304 status in output:\n{text304.quote}"
if text304.contains "Content-Length:" || text304.contains "Transfer-Encoding:" then
throw <| IO.userError s!"unexpected framing headers in 304:\n{text304.quote}"
#eval runGroup "RFC 9110 §15.3.5: 204 No Content strips framing headers" do
let request := "GET /empty HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let machine0 : Protocol.H1.Machine .receiving := { config := {} }
let (machine1, _) := (machine0.feed request).step
let headers204 := Headers.empty.insert Header.Name.contentLength (Header.Value.ofString! "9")
let (_, step204) := (machine1.send ({ status := .noContent, headers := headers204 } : Response.Head)).step
let text204 := String.fromUTF8! step204.output.toByteArray
unless step204.output.size > 0 do
throw <| IO.userError "expected serialized response output"
unless text204.contains "HTTP/1.1 204 No Content" do
throw <| IO.userError s!"expected 204 status:\n{text204.quote}"
if text204.contains "Content-Length:" || text204.contains "Transfer-Encoding:" then
throw <| IO.userError s!"unexpected framing headers in 204:\n{text204.quote}"
-- RFC 9112 §9.6: Client-mode — parsing responses
#eval runGroup "RFC 9112 §9.6: client-mode response parsing" do
-- Parse a 200 response with headers
let machineA : Protocol.H1.Machine .sending := { config := {}, reader := { state := .needStartLine } }
let rawA := "HTTP/1.1 200 OK\x0d\nContent-Length: 0\x0d\nConnection: close\x0d\n\x0d\n"
let (machineA', stepA) := (machineA.feed rawA.toUTF8).step
if stepA.events.any (fun | .failed _ => true | _ => false) then
throw <| IO.userError s!"unexpected failure parsing 200 response: {repr stepA.events}"
unless stepA.events.any (fun | .endHeaders _ => true | _ => false) do
throw <| IO.userError s!"missing endHeaders event: {repr stepA.events}"
unless machineA'.reader.messageHead.status == .ok do
throw <| IO.userError s!"unexpected status: {repr machineA'.reader.messageHead.status}"
unless machineA'.reader.messageHead.headers.hasEntry Header.Name.contentLength (Header.Value.ofString! "0") do
throw <| IO.userError "missing Content-Length header in parsed response"
-- Parse headerless 204
let machineB : Protocol.H1.Machine .sending := { config := {}, reader := { state := .needStartLine } }
let rawB := "HTTP/1.1 204 No Content\x0d\n\x0d\n"
let (_, stepB) := (machineB.feed rawB.toUTF8).step
if stepB.events.any (fun | .failed _ => true | _ => false) then
throw <| IO.userError s!"unexpected failure parsing 204: {repr stepB.events}"
if stepB.events.any (fun | .needMoreData _ => true | _ => false) then
throw <| IO.userError s!"unexpected needMoreData for 204: {repr stepB.events}"
unless stepB.events.any (fun | .endHeaders _ => true | _ => false) do
throw <| IO.userError s!"missing endHeaders for 204: {repr stepB.events}"
-- 204 with Content-Length in response: body framing should be ignored
let machineC : Protocol.H1.Machine .sending := { config := {}, reader := { state := .needStartLine } }
let rawC := "HTTP/1.1 204 No Content\x0d\nContent-Length: 5\x0d\n\x0d\nHELLO"
let (machineC', stepC) := (machineC.feed rawC.toUTF8).step
if stepC.events.any (fun | .failed _ => true | _ => false) then
throw <| IO.userError s!"unexpected failure for 204 with framing: {repr stepC.events}"
-- The 5 bytes of "HELLO" should remain unread
unless machineC'.reader.input.remainingBytes == 5 do
throw <| IO.userError s!"expected 5 unread bytes, got {machineC'.reader.input.remainingBytes}"
-- RFC 9110 §15.2: 1xx informational responses MUST NOT carry framing headers
#eval runGroup "RFC 9110 §15.2: 1xx informational responses strip framing headers" do
let request := "GET / HTTP/1.1\x0d\nHost: example.com\x0d\nConnection: close\x0d\n\x0d\n".toUTF8
let machine0 : Protocol.H1.Machine .receiving := { config := {} }
let (machine1, _) := (machine0.feed request).step
-- 100 Continue: handler-set Content-Length must be stripped
let headers100 := Headers.empty
|>.insert Header.Name.contentLength (Header.Value.ofString! "5")
let (machine2, step100) := (machine1.send ({ status := .«continue», headers := headers100 } : Response.Head)).step
let text100 := String.fromUTF8! step100.output.toByteArray
unless text100.contains "HTTP/1.1 100 Continue" do
throw <| IO.userError s!"expected 100 status in output:\n{text100.quote}"
if text100.contains "Content-Length:" then
throw <| IO.userError s!"Content-Length must not appear in 1xx output:\n{text100.quote}"
-- 103 Early Hints: both Content-Length and Transfer-Encoding must be stripped
let headers103 := Headers.empty
|>.insert Header.Name.contentLength (Header.Value.ofString! "42")
|>.insert Header.Name.transferEncoding (Header.Value.ofString! "chunked")
let (machine3, step103) := (machine2.send ({ status := .earlyHints, headers := headers103 } : Response.Head)).step
let text103 := String.fromUTF8! step103.output.toByteArray
unless text103.contains "HTTP/1.1 103 Early Hints" do
throw <| IO.userError s!"expected 103 status in output:\n{text103.quote}"
if text103.contains "Content-Length:" || text103.contains "Transfer-Encoding:" then
throw <| IO.userError s!"framing headers must not appear in 1xx output:\n{text103.quote}"
-- Machine must remain in waitingHeaders after sending 1xx (interim does not advance writer)
unless machine3.writer.state == .waitingHeaders do
throw <| IO.userError s!"writer must stay in waitingHeaders after 1xx, got: {repr machine3.writer.state}"
-- Final 200 OK still works after chained 1xx responses
let headers200 := Headers.empty
|>.insert Header.Name.contentLength (Header.Value.ofString! "0")
let (_, step200) := (machine3.send ({ status := .ok, headers := headers200 } : Response.Head)).step
let text200 := String.fromUTF8! step200.output.toByteArray
unless text200.contains "HTTP/1.1 200 OK" do
throw <| IO.userError s!"expected 200 after 1xx chain:\n{text200.quote}"
unless text200.contains "Content-Length: 0" do
throw <| IO.userError s!"Content-Length must be preserved in final response:\n{text200.quote}"
-- RFC 7230 §3.3.1 / RFC 9112 §6.1: HTTP/1.0 connection-close framing.
-- When the handler does not set Content-Length for an HTTP/1.0 request the machine
-- must not emit Transfer-Encoding or Content-Length; it writes raw bytes and closes.
#eval runGroup "RFC 7230 §3.3.1: HTTP/1.0 connection-close — headers" do
let request10 := "GET / HTTP/1.0\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8
let machine0 : Protocol.H1.Machine .receiving := { config := {} }
let (machine1, _) := (machine0.feed request10).step
let (_, stepA) := (machine1.send ({ status := .ok, headers := .empty } : Response.Head)).step
let textA := String.fromUTF8! stepA.output.toByteArray
unless textA.contains "200 OK" do
throw <| IO.userError s!"expected 200 status line:\n{textA.quote}"
if textA.contains "Transfer-Encoding:" then
throw <| IO.userError s!"Transfer-Encoding must not appear in HTTP/1.0 response:\n{textA.quote}"
if textA.contains "Content-Length:" then
throw <| IO.userError s!"Content-Length must not appear when body length is unknown:\n{textA.quote}"
#eval runGroup "RFC 7230 §3.3.1: HTTP/1.0 connection-close — body framing" do
let request10 := "GET / HTTP/1.0\x0d\nHost: example.com\x0d\n\x0d\n".toUTF8
let machine0 : Protocol.H1.Machine .receiving := { config := {} }
let (machine1, _) := (machine0.feed request10).step
-- Non-empty body: raw bytes must appear in output without chunk framing.
let body := "hello world".toUTF8
let machine2 :=
machine1
|>.send ({ status := .ok, headers := .empty } : Response.Head)
|>.sendData #[{ data := body, extensions := #[] }]
|>.userClosedBody
let (machine3, step2) := machine2.step
let output2 := String.fromUTF8! step2.output.toByteArray
unless output2.contains "hello world" do
throw <| IO.userError s!"body bytes must appear in output:\n{output2.quote}"
-- Chunk framing would look like "b\r\nhello world\r\n0\r\n\r\n"
if output2.contains "0\x0d\x0a\x0d\x0a" then
throw <| IO.userError s!"body must not be chunk-framed (found final-chunk terminator):\n{output2.quote}"
unless step2.events.any (fun | .close => true | _ => false) do
throw <| IO.userError s!"expected .close event after connection-close body:\n{repr step2.events}"
unless !machine3.keepAlive do
throw <| IO.userError "keepAlive must be false for HTTP/1.0 connection-close response"
-- Empty body: userClosedBody with no data must still emit .close.
let (machine1b, _) := (machine0.feed request10).step
let machine4 :=
machine1b
|>.send ({ status := .ok, headers := .empty } : Response.Head)
|>.userClosedBody
let (_, step3) := machine4.step
unless step3.events.any (fun | .close => true | _ => false) do
throw <| IO.userError s!"expected .close event for empty HTTP/1.0 body:\n{repr step3.events}"
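The framing decision these HTTP/1.0 groups pin down (no Transfer-Encoding, no invented Content-Length, raw bytes terminated by closing the connection) can be sketched as follows. This is an illustrative model only; `BodyFraming` and `chooseFraming` are hypothetical names, not the actual `Protocol.H1` API:

```lean
/-- Hypothetical sketch of the response body framing choice. -/
inductive BodyFraming where
  | contentLength (n : Nat)  -- exact byte count announced up front
  | chunked                  -- HTTP/1.1 only: sized chunks, terminated by `0\r\n\r\n`
  | connectionClose          -- HTTP/1.0 fallback: raw bytes until the connection closes

def chooseFraming (isHttp10 : Bool) (declaredLength : Option Nat) : BodyFraming :=
  match declaredLength with
  | some n => .contentLength n
  | none   => if isHttp10 then .connectionClose else .chunked
```

Under this model, the tests assert the `connectionClose` branch: the body bytes appear verbatim, no final-chunk terminator is emitted, and the machine reports `.close` with `keepAlive = false`.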


@@ -0,0 +1,248 @@
import Std.Internal.Http
import Std.Internal.Async
open Std.Internal.IO Async
open Std Http Internal Test
open Std.Http.Internal
def sendRaw
(client : Mock.Client)
(server : Mock.Server)
(raw : ByteArray)
(handler : TestHandler)
(config : Config := { lingeringTimeout := 3000, generateDate := false }) : IO ByteArray := Async.block do
client.send raw
Std.Http.Server.serveConnection server handler config
|>.run
let res ← client.recv?
pure <| res.getD .empty
def sendRawAndClose
(client : Mock.Client)
(server : Mock.Server)
(raw : ByteArray)
(handler : TestHandler)
(config : Config := { lingeringTimeout := 1000, generateDate := false }) : IO ByteArray := Async.block do
client.send raw
client.close
Std.Http.Server.serveConnection server handler config
|>.run
let res ← client.recv?
pure <| res.getD .empty
def bodyHandler : TestHandler :=
fun req => do
let body : String ← req.body.readAll
Response.ok |>.text body
def bad400 : String :=
"HTTP/1.1 400 Bad Request\x0d\nServer: LeanHTTP/1.1\x0d\nConnection: close\x0d\nContent-Length: 0\x0d\n\x0d\n"
-- Chunked body without trailers.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\n\x0d\n".toUTF8
let response ← sendRaw client server raw bodyHandler
assertStatus response "HTTP/1.1 200"
assertContains response "hello"
-- Single trailer header.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nChecksum: abc123\x0d\n\x0d\n".toUTF8
let response ← sendRaw client server raw bodyHandler
assertStatus response "HTTP/1.1 200"
assertContains response "hello"
-- Multiple trailer headers.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nChecksum: abc123\x0d\nExpires: Thu, 01 Dec 1994 16:00:00 GMT\x0d\nX-Custom: value\x0d\n\x0d\n".toUTF8
let response ← sendRaw client server raw bodyHandler
assertStatus response "HTTP/1.1 200"
assertContains response "hello"
-- Terminal chunk extensions can precede trailers.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0;ext=val\x0d\nX-Trailer: yes\x0d\n\x0d\n".toUTF8
let response ← sendRaw client server raw bodyHandler
assertStatus response "HTTP/1.1 200"
assertContains response "hello"
-- Trailer name and value limits.
#eval show IO _ from do
let exactName := String.ofList (List.replicate 256 'X')
let longName := String.ofList (List.replicate 257 'X')
let exactValue := String.ofList (List.replicate 8192 'v')
let longValue := String.ofList (List.replicate 8193 'v')
let (clientA, serverA) ← Mock.new
let rawA := s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\n{exactName}: value\x0d\n\x0d\n".toUTF8
let responseA ← sendRaw clientA serverA rawA bodyHandler
assertStatus responseA "HTTP/1.1 200"
let (clientB, serverB) ← Mock.new
let rawB := s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\n{longName}: value\x0d\n\x0d\n".toUTF8
let responseB ← sendRaw clientB serverB rawB bodyHandler
assertExact responseB bad400
let (clientC, serverC) ← Mock.new
let rawC := s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Exact: {exactValue}\x0d\n\x0d\n".toUTF8
let responseC ← sendRaw clientC serverC rawC bodyHandler
assertStatus responseC "HTTP/1.1 200"
let (clientD, serverD) ← Mock.new
let rawD := s!"POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Too-Long: {longValue}\x0d\n\x0d\n".toUTF8
let responseD ← sendRaw clientD serverD rawD bodyHandler
assertExact responseD bad400
-- maxTrailerHeaders enforcement.
#eval show IO _ from do
let config2 : Config := { lingeringTimeout := 3000, maxTrailerHeaders := 2, generateDate := false }
let (clientA, serverA) ← Mock.new
let okRaw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nT1: a\x0d\nT2: b\x0d\n\x0d\n".toUTF8
let okResponse ← sendRaw clientA serverA okRaw bodyHandler (config := config2)
assertStatus okResponse "HTTP/1.1 200"
let (clientB, serverB) ← Mock.new
let badRaw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nT1: a\x0d\nT2: b\x0d\nT3: c\x0d\n\x0d\n".toUTF8
let badResponse ← sendRaw clientB serverB badRaw bodyHandler (config := config2)
assertExact badResponse bad400
let config0 : Config := { lingeringTimeout := 3000, maxTrailerHeaders := 0, generateDate := false }
let (clientC, serverC) ← Mock.new
let rejectAny := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Trailer: rejected\x0d\n\x0d\n".toUTF8
let responseC ← sendRaw clientC serverC rejectAny bodyHandler (config := config0)
assertExact responseC bad400
let (clientD, serverD) ← Mock.new
let noTrailer := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\n\x0d\n".toUTF8
let responseD ← sendRaw clientD serverD noTrailer bodyHandler (config := config0)
assertStatus responseD "HTTP/1.1 200"
-- Trailer syntax validation.
#eval show IO _ from do
let (clientA, serverA) ← Mock.new
let noColon := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nBadTrailer value\x0d\n\x0d\n".toUTF8
let responseA ← sendRaw clientA serverA noColon bodyHandler
assertExact responseA bad400
let (clientB, serverB) ← Mock.new
let leadingWS := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\n X-Bad: folded\x0d\n\x0d\n".toUTF8
let responseB ← sendRaw clientB serverB leadingWS bodyHandler
assertExact responseB bad400
let (clientC, serverC) ← Mock.new
let spaceName := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nBad Name: value\x0d\n\x0d\n".toUTF8
let responseC ← sendRaw clientC serverC spaceName bodyHandler
assertExact responseC bad400
-- Trailer byte-level validation.
#eval show IO _ from do
let (clientA, serverA) ← Mock.new
let beforeName := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Bad".toUTF8
let afterName := "Name: value\x0d\n\x0d\n".toUTF8
let responseA ← sendRaw clientA serverA (beforeName ++ ByteArray.mk #[0] ++ afterName) bodyHandler
assertExact responseA bad400
let (clientB, serverB) ← Mock.new
let beforeValue := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Header: bad".toUTF8
let afterValue := "value\x0d\n\x0d\n".toUTF8
let responseB ← sendRaw clientB serverB (beforeValue ++ ByteArray.mk #[0] ++ afterValue) bodyHandler
assertExact responseB bad400
let (clientC, serverC) ← Mock.new
let responseC ← sendRaw clientC serverC (beforeValue ++ ByteArray.mk #[0x01] ++ afterValue) bodyHandler
assertExact responseC bad400
-- Incomplete trailer section with client close yields no response bytes.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n3\x0d\nabc\x0d\n0\x0d\nX-Trailer: value\x0d\n".toUTF8
let response ← sendRawAndClose client server raw bodyHandler
assert! response.size == 0
-- Trailer encoding emits terminal chunk plus trailer headers.
#eval show IO _ from Async.block do
let trailer := Trailer.empty
|>.insert (.mk "checksum") (.mk "abc123")
|>.insert (.mk "expires") (.mk "Thu, 01 Dec 1994")
let encoded := (Encode.encode (v := .v11) ChunkedBuffer.empty trailer).toByteArray
let text := String.fromUTF8! encoded
assert! text.contains "0\x0d\n"
assert! text.contains "Checksum: abc123\x0d\n"
assert! text.contains "Expires: Thu, 01 Dec 1994\x0d\n"
-- Empty trailer encoding is exactly terminal chunk CRLF CRLF.
#eval show IO _ from Async.block do
let encoded := (Encode.encode (v := .v11) ChunkedBuffer.empty Trailer.empty).toByteArray
let text := String.fromUTF8! encoded
assert! text == "0\x0d\n\x0d\n"
-- Trailer injection: forbidden field names must be rejected (RFC 9112 §6.5).
-- A client injecting framing or routing fields via trailers could confuse proxies.
#eval show IO _ from do
-- content-length in trailer must be rejected
let (clientA, serverA) ← Mock.new
let rawA := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nContent-Length: 1000\x0d\n\x0d\n".toUTF8
let responseA ← sendRaw clientA serverA rawA bodyHandler
assertExact responseA bad400
-- transfer-encoding in trailer must be rejected
let (clientB, serverB) ← Mock.new
let rawB := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nTransfer-Encoding: chunked\x0d\n\x0d\n".toUTF8
let responseB ← sendRaw clientB serverB rawB bodyHandler
assertExact responseB bad400
-- host in trailer must be rejected
let (clientC, serverC) ← Mock.new
let rawC := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nHost: evil.example\x0d\n\x0d\n".toUTF8
let responseC ← sendRaw clientC serverC rawC bodyHandler
assertExact responseC bad400
-- connection in trailer must be rejected
let (clientD, serverD) ← Mock.new
let rawD := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nConnection: keep-alive\x0d\n\x0d\n".toUTF8
let responseD ← sendRaw clientD serverD rawD bodyHandler
assertExact responseD bad400
-- authorization in trailer must be rejected
let (clientE, serverE) ← Mock.new
let rawE := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nAuthorization: Bearer token\x0d\n\x0d\n".toUTF8
let responseE ← sendRaw clientE serverE rawE bodyHandler
assertExact responseE bad400
-- cache-control in trailer must be rejected
let (clientF, serverF) ← Mock.new
let rawF := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nCache-Control: no-cache\x0d\n\x0d\n".toUTF8
let responseF ← sendRaw clientF serverF rawF bodyHandler
assertExact responseF bad400
-- te in trailer must be rejected
let (clientG, serverG) ← Mock.new
let rawG := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nTE: trailers\x0d\n\x0d\n".toUTF8
let responseG ← sendRaw clientG serverG rawG bodyHandler
assertExact responseG bad400
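Conceptually, the rejections above amount to a case-insensitive blocklist over trailer field names (RFC 9112 §6.5). The helper below is a hypothetical sketch for illustration only, not the server's actual validation code:

```lean
/-- Hypothetical sketch of the forbidden-trailer check the tests above enforce. -/
def forbiddenInTrailer (name : String) : Bool :=
  [ "content-length", "transfer-encoding", "host", "connection",
    "authorization", "cache-control", "te" ].contains name.toLower

#eval forbiddenInTrailer "CONTENT-LENGTH"  -- true: rejected regardless of case
#eval forbiddenInTrailer "checksum"        -- false: custom trailers remain allowed
```

Framing and routing fields (`Content-Length`, `Transfer-Encoding`, `Host`, `Connection`) could otherwise be smuggled past proxies that have already acted on the header section, which is why they are rejected outright rather than merged.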
-- Forbidden trailer field names are rejected regardless of case.
#eval show IO _ from do
let (clientA, serverA) ← Mock.new
let rawA := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nCONTENT-LENGTH: 0\x0d\n\x0d\n".toUTF8
let responseA ← sendRaw clientA serverA rawA bodyHandler
assertExact responseA bad400
let (clientB, serverB) ← Mock.new
let rawB := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nContent-Length: 0\x0d\nChecksum: abc\x0d\n\x0d\n".toUTF8
let responseB ← sendRaw clientB serverB rawB bodyHandler
assertExact responseB bad400
-- Non-forbidden custom trailers are still allowed after the fix.
#eval show IO _ from do
let (client, server) ← Mock.new
let raw := "POST / HTTP/1.1\x0d\nHost: example.com\x0d\nTransfer-Encoding: chunked\x0d\nConnection: close\x0d\n\x0d\n5\x0d\nhello\x0d\n0\x0d\nChecksum: deadbeef\x0d\nX-Timing: 12ms\x0d\n\x0d\n".toUTF8
let response ← sendRaw client server raw bodyHandler
assertStatus response "HTTP/1.1 200"
assertContains response "hello"


@@ -212,6 +212,7 @@ info: some " "
#eval parseCheck "https://user:pass@secure.example.com/private"
#eval parseCheck "/double//slash//path"
#eval parseCheck "http://user%40example:pass%3Aword@host.com"
#eval parseCheck "http://[::ffff:192.168.1.1]/path"
#eval parseCheck "http://example.com:/"
#eval parseCheck "http://example.com:/?q=1"
#eval parseCheck "///////"
@@ -261,6 +262,8 @@ info: some " "
#eval parseCheckFail ""
#eval parseCheckFail "[::1"
#eval parseCheckFail "[:::1]:80"
#eval parseCheckFail "http://exa_mple.com/path"
#eval parseCheckFail "http://[fe80::1%25eth0]/path"
#eval parseCheckFail "#frag"
#eval parseCheckFail "/path/\n"
#eval parseCheckFail "/path/\u0000"


@@ -0,0 +1,90 @@
import Std.Internal.Async.Select
import Std.Sync.Channel
open Std Internal IO Async
/-!
Tests for the `gate` in `Selectable.one`.
-/
private def immediateSelector (value : α) : Selector α where
tryFn := pure none
registerFn waiter := waiter.race (lose := pure ()) (win := fun p => p.resolve (.ok value))
unregisterFn := pure ()
private def trackingSelector (unregCount : IO.Ref Nat) : Selector Unit where
tryFn := pure none
registerFn _ := pure ()
unregisterFn := unregCount.modify (· + 1)
def testImmediateWins : Async Bool := do
let cnt ← IO.mkRef 0
let result ← Selectable.one #[
.case (immediateSelector 42) pure,
.case (trackingSelector cnt) fun _ => pure 0,
]
return result == 42 && (← cnt.get) == 1
/-- info: true -/
#guard_msgs in
#eval testImmediateWins.block
def testImmediateWinsAfter : Async Bool := do
let cnt ← IO.mkRef 0
let result ← Selectable.one #[
.case (trackingSelector cnt) fun _ => pure 0,
.case (immediateSelector 99) pure,
]
return result == 99 && (← cnt.get) == 1
/-- info: true -/
#guard_msgs in
#eval testImmediateWinsAfter.block
-- Both selectors' unregisterFns are called across 500 iterations.
def testUnregisterCalledOnEveryIteration : Async Bool := do
let mut ok := true
for i in List.range 500 do
let cnt ← IO.mkRef 0
let result ← Selectable.one #[
.case (immediateSelector i) pure,
.case (trackingSelector cnt) fun _ => pure 0,
]
if result != i || (← cnt.get) != 1 then ok := false
return ok
/-- info: true -/
#guard_msgs in
#eval testUnregisterCalledOnEveryIteration.block
-- When there are N tracking selectors, each is unregistered exactly once.
def testAllTrackersUnregistered : Async Bool := do
let n := 5
let counters ← (List.range n).mapM fun _ => IO.mkRef 0
let trackers := counters.map (fun cnt => Selectable.case (trackingSelector cnt) fun _ => pure 0)
let _ ← Selectable.one (trackers.toArray ++ #[.case (immediateSelector 99) pure])
let counts ← counters.mapM fun c => c.get
return counts.all (· == 1)
/-- info: true -/
#guard_msgs in
#eval testAllTrackersUnregistered.block
-- After repeated wins, the channel selector can still deliver messages correctly.
def testChannelUsableAfterRepeatedImmediateWin : Async Bool := do
let ch : Std.Channel Nat ← Std.Channel.new none
let cnt ← IO.mkRef 0
let mut ok := true
for i in List.range 500 do
cnt.set 0
let result ← Selectable.one #[
.case (immediateSelector i) pure,
.case ch.recvSelector fun _ => pure 0,
]
if result != i then ok := false
ch.sync.send 99999
let last ← Selectable.one #[.case ch.recvSelector pure]
return ok && last == 99999
/-- info: true -/
#guard_msgs in
#eval testChannelUsableAfterRepeatedImmediateWin.block

View File

@@ -1,5 +1,4 @@
consumePPHint.lean:8:8-8:14: warning: declaration uses `sorry`
case a
⊢ q
(have x := 0;
x + 1)


@@ -10,7 +10,6 @@ y : Nat
| (fun x => x + y = 0) = fun x => False
y : Nat
| fun x => x + y = 0
case h
y x : Nat
| y + x = 0
y : Nat


@@ -10,7 +10,7 @@ axiom elimEx (motive : Nat → Nat → Sort u) (x y : Nat)
error: Invalid alternative name `lower2`: Expected `lower`
---
error: unsolved goals
case upper.h
case upper
q d : Nat
⊢ q + d.succ > q
---
@@ -62,7 +62,7 @@ theorem invalidWildCard (p: Nat) : p ≤ q ∨ p > q := by
error: Invalid alternative name `lower2`: There are no unhandled alternatives
---
error: unsolved goals
case lower.h
case lower
p delta✝ : Nat
⊢ p > p + delta✝.succ
-/


@@ -46,7 +46,7 @@ instCD2._aux_1
#guard_msgs in
#print instCD2
/--
info: @[implicit_reducible] private def instCD2._aux_1 : C D2 :=
info: private def instCD2._aux_1 : C D2 :=
instCI2
-/
#guard_msgs in


@@ -100,7 +100,6 @@ x y : Nat
h : x = y
⊢ y = x
-----
case h
x y : Nat
h : x = y
⊢ x = y


@@ -505,28 +505,24 @@ is not definitionally equal to the right-hand side
example : S true false := by with_reducible apply_rfl -- Error
/--
error: unsolved goals
case a
⊢ true = true
-/
#guard_msgs in
example : S true true := by apply_rfl -- Error (left-over goal)
/--
error: unsolved goals
case a
⊢ true = true
-/
#guard_msgs in
example : S true true := by with_reducible apply_rfl -- Error (left-over goal)
/--
error: unsolved goals
case a
⊢ false = true
-/
#guard_msgs in
example : S false false := by apply_rfl -- Error (left-over goal)
/--
error: unsolved goals
case a
⊢ false = true
-/
#guard_msgs in


@@ -0,0 +1,21 @@
example (p q : Prop) : p → q → p ∧ q := by
sym =>
intro hp hq
apply And.intro
apply hp
apply hq
register_sym_simp chainSimp where
post := ground >> rewrite [Nat.add_zero, Nat.zero_add]
example (x y : Nat) (h : x ≤ y) : (1 - 1) + x ≤ y + (1 + 0) := by
sym =>
simp chainSimp
-- In the following tactic the goal is closed while preprocessing the target
lia
example : ∃ x, x = a := by
sym =>
apply Exists.intro
apply Eq.refl


@@ -5,7 +5,7 @@ elab "sym_simp" "[" declNames:ident,* "]" : tactic => do
let rewrite ← Sym.mkSimprocFor (← declNames.getElems.mapM fun s => realizeGlobalConstNoOverload s.raw) Sym.Simp.dischargeSimpSelf
let methods : Sym.Simp.Methods := {
pre := Sym.Simp.simpControl
post := Sym.Simp.evalGround.andThen rewrite
post := Sym.Simp.evalGround >> rewrite
}
liftMetaTactic1 fun mvarId => Sym.SymM.run do
let mvarId ← Sym.preprocessMVar mvarId


@@ -9,14 +9,13 @@ x y : Nat
case x
x y : Nat
⊢ x + y = y.add x
case a
a b : Nat
| foo (0 + a) (b + 0)
case a.x
case x
a b : Nat
| 0 + a
case a.y
case y
a b : Nat
| b + 0
a b : Nat
@@ -55,13 +54,10 @@ x y : Nat
⊢ f x (x.add y) y = y + x
x y : Nat
| x + y
case h.h
a b : Nat
| 0 + a + b
case h.h
a b : Nat
| a + b
case h.h
a b : Nat
| 0 + a + b
p : Nat → Prop
@@ -83,7 +79,6 @@ x : Nat
p : Prop
x : Nat
⊢ (True → p) → p
case h
x : Nat
| 0 + x
p : Prop


@@ -19,7 +19,6 @@ n m : Nat
n m : Nat
⊢ Prod.Lex (fun a₁ a₂ => a₁ < a₂) (fun a₁ a₂ => a₁ < a₂) (dec1 n, 100) (n, m)
decreasing_by.lean:85:0-85:22: error: unsolved goals
case a
n m : Nat
⊢ Prod.Lex (fun a₁ a₂ => a₁ < a₂) (fun a₁ a₂ => a₁ < a₂) (n, dec2 m) (n, m)


@@ -1,9 +1,9 @@
inductionErrors.lean:11:12-11:27: error: unsolved goals
case lower.h
case lower
p d : Nat
⊢ p ≤ p + d.succ
inductionErrors.lean:12:12-12:27: error: unsolved goals
case upper.h
case upper
q d : Nat
⊢ q + d.succ > q
inductionErrors.lean:16:19-16:26: error(lean.unknownIdentifier): Unknown identifier `elimEx2`


@@ -1,5 +1,4 @@
mutwf1.lean:21:2-21:6: error: unsolved goals
case h.a
n : Nat
h : n ≠ 0
⊢ n.sub 0 ≠ 0
@@ -7,6 +6,5 @@ mutwf1.lean:31:22-31:29: error: failed to prove termination, possible solutions:
- Use `have`-expressions to prove the remaining goals
- Use `termination_by` to specify a different well-founded relation
- Use `decreasing_by` to specify your own tactic for discharging this kind of goal
case h
n : Nat
⊢ n + 1 < n

Some files were not shown because too many files have changed in this diff.