Compare commits

...

46 Commits

Author SHA1 Message Date
Sofia Rodrigues
980515f230 fix: tests 2026-04-23 00:31:10 -03:00
Sofia Rodrigues
ba1fffcc54 refactor: move Std.Internal.IO.Async to Std.Async and Std/Internal/Http to Std/Http 2026-04-23 00:01:16 -03:00
Sebastian Graf
cae4decead test: speed up bench/mvcgen/sym ctest entry (#13498)
This PR drops `LEAN_NUM_THREADS=1` from the `run_test.sh` of
`bench/mvcgen/sym`. The single-threaded restriction was originally there
to get reproducible benchmark timings, but `run_test.sh` runs as a test
rather than a benchmark, and we do not care about timing reproducibility
for tests. Allowing the default thread count cuts the wall time of what
was the slowest ctest entry from ~30s to ~20s.
2026-04-22 13:41:07 +00:00
Sebastian Graf
a1240f7b80 fix: correct alternative-fold base in do match (#13491)
This PR fixes the `ControlInfo` inference for a do-block `match`: the
fold over the match arms started from `ControlInfo.pure` (defaults to
`numRegularExits := 1`, `noFallthrough := false`), but `alternative`
sums `numRegularExits` and ANDs `noFallthrough`, so the fold identity is
`{ numRegularExits := 0, noFallthrough := true }`. With the wrong base,
a `match` whose arms all `break`/`continue`/`return` reported
`numRegularExits = 1` and `noFallthrough = false`, suppressing the
dead-code warning on the continuation after the match. The fix corrects
both the inference handler in `InferControlInfo.lean` and the fold in
`elabDoMatchCore`.
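The wrong-base scenario can be reproduced with a do-block of the shape described (an illustrative example, not taken from the PR's tests):

```lean4
def allArmsExit (b : Bool) : Id Nat := do
  match b with
  | true  => return 1
  | false => return 2
  pure 0  -- dead: every arm returns, but the wrong fold base hid the warning
```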
2026-04-22 13:25:30 +00:00
Sebastian Graf
2b99012545 feat: split ControlInfo.noFallthrough from syntactic numRegularExits (#13502)
This PR splits `ControlInfo`'s dead-code signal in two.
`numRegularExits` is now purely syntactic: how many times the block
wires its continuation into the elaborated expression, consumed by
`withDuplicableCont` as a join-point duplication trigger (`> 1`). The
new `noFallthrough : Bool` asserts that the next doElem in the enclosing
sequence is semantically irrelevant; `false` asserts nothing. Invariant:
`numRegularExits = 0 → noFallthrough`; the converse does not hold.
`sequence` derives `noFallthrough := a.noFallthrough || b.noFallthrough`
(and aggregates syntactic fields unconditionally); `alternative` derives
it as `a.noFallthrough && b.noFallthrough`. The dead-code warning gate
in `withDuplicableCont` and `ControlLifter.ofCont` now reads
`noFallthrough`.
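A minimal sketch of the two signals and the `alternative` derivation, using the field names from the description (this is not the compiler's actual `ControlInfo`):

```lean4
structure ControlInfo where
  numRegularExits : Nat  := 1
  noFallthrough   : Bool := false

def alternative (a b : ControlInfo) : ControlInfo where
  numRegularExits := a.numRegularExits + b.numRegularExits
  noFallthrough   := a.noFallthrough && b.noFallthrough

-- the identity element for folding `alternative` over match alternatives:
def altIdentity : ControlInfo := { numRegularExits := 0, noFallthrough := true }
```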
2026-04-22 12:32:11 +00:00
Mac Malone
b6f5892e22 fix: leantar architecture detection for Linux aarch64 (#13499)
This PR fixes the architecture detection for `leantar` on Linux aarch64,
ensuring it is properly bundled with Lean.
2026-04-22 05:20:59 +00:00
Mac Malone
3387404f10 chore: lake: rm autoParam in Lake.Load.Resolve (#13495)
This PR fixes a segfault in the stage2 build of `Lake.Load.Resolve`
caused by the presence of an `autoParam`.
2026-04-21 21:43:40 +00:00
Leonardo de Moura
e542810e79 test: grind homomorphism demo (#13497)
This PR adds an example for the Lean hackathon in Paris. It demonstrates
how users can implement https://hackmd.io/Qd0nkWdzQImVe7TDGSAGbA
2026-04-21 21:17:32 +00:00
Sebastian Graf
f32106283f fix: pin repeat's numRegularExits at 1 to match for (#13494)
This PR stops the `repeat` inference handler from reporting
`numRegularExits := 0` for break-less bodies. For break-less `repeat`
the loop never terminates normally, so `0` looks more accurate
semantically, but the loop expression still has type `m Unit` and the do
block's continuation after the loop is what carries that type. Reporting
`0` makes the elaborator flag that continuation as dead code, yet there
is no way for the user to remove it that is also type correct — unless
the enclosing do block's monadic result type happens to be `Unit`.
Pinning `numRegularExits` at `1` (matching `for ... in`) eliminates
those spurious warnings.
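The shape in question, as an illustrative sketch: with `numRegularExits := 0` the trailing `return 0` was flagged as dead, yet removing it would not typecheck unless the block's result type were `Unit`:

```lean4
def spin : Id Nat := do
  repeat
    pure ()
  return 0  -- carries the block's `Nat` result type; must not be flagged dead
```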
2026-04-21 16:15:19 +00:00
Lean stage0 autoupdater
eadf1404c5 chore: update stage0 2026-04-21 15:22:01 +00:00
Robert J. Simmons
bf269ce250 fix: preserve nesting level across empty doc snippet nesting (#13489)
This PR fixes a bug where the nesting level in Verso Docstrings is
forgotten when there's a doc comment with no headers.

It changes the `terminalNesting` of `VersoModuleDocs` to be recomputed
rather than stored in the structure; we never want it to be anything
besides the default value, and it's easy to accidentally break this
invariant.

Closes #13485
2026-04-21 12:58:52 +00:00
Marc Huisinga
25bab8bcc4 feat: server-side support for incremental diagnostics (#13260)
This PR adds server-side support for incremental diagnostics via a new
`isIncremental` field on `PublishDiagnosticsParams` that is only used by
the language server when clients set `incrementalDiagnosticSupport` in
`LeanClientCapabilities`.

### Context

The goal of this new feature is to avoid quadratic reporting of
diagnostics.

LSP has two means of reporting diagnostics; pull diagnostics (where the
client decides when to fetch the diagnostics of a project) and push
diagnostics (where the server decides when to update the set of
diagnostics of a file in the client).
Pull diagnostics have the inherent problem that clients need to
heuristically decide when the set of diagnostics should be updated, and
that diagnostics can only be incrementally reported per file, so the
Lean language server has always stuck with push diagnostics instead.
In principle, push diagnostics were also intended to only be reported
once for a full file, but all major language clients also support
replacing the old set of diagnostics for a file when a new set of
diagnostics is reported for the same version of the file, so we have
always reported diagnostics incrementally while the file is being
processed in this way.
However, this approach has a major limitation: all notifications must be
a full set of diagnostics, which means that we have to report a
quadratic amount of diagnostics while processing a file to the end.
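In numbers: if processing a file produces its diagnostics one at a time and each report must be a full set, the total number of diagnostics sent grows quadratically (illustrative arithmetic, not from the PR):

```lean4
-- publishing the full set after each of n new diagnostics sends
-- 1 + 2 + ⋯ + n items in total
def totalPublished (n : Nat) : Nat := n * (n + 1) / 2

#eval totalPublished 100  -- 5050 diagnostics sent for 100 reported
```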

### Semantics

When `LeanClientCapabilities.incrementalDiagnosticSupport` is set, the
language server will set `PublishDiagnosticsParams.isIncremental` when
it is reporting a set of diagnostics that should simply be appended to
the previously reported set of diagnostics instead of replacing it.
Specifically, clients implementing this new feature should implement the
following behaviour:
- If `PublishDiagnosticsParams.isIncremental` is `false` or the field is
missing, the current diagnostic report for a specific document should
replace the previous diagnostic report for that document instead of
appending to it. This is identical to the current behavior before this
PR.
- If `PublishDiagnosticsParams.isIncremental` is `true`, the current
diagnostic report for a specific document should append to the previous
diagnostic report for that document instead of replacing it.
- Versions should be ignored when deciding whether to replace or append
to a previous set of diagnostics. The language server ensures that the
`isIncremental` flag is set correctly.
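The behaviour in the list above can be sketched as a pure merge function (hypothetical names; plain strings stand in for diagnostic objects, and a real client would additionally key this state by document URI):

```lean4
def applyReport (prev next : Array String) (isIncremental : Bool) : Array String :=
  if isIncremental then prev ++ next  -- append to the previous report
  else next                           -- replace it (default LSP behavior)

#eval applyReport #["e1"] #["e2"] (isIncremental := true)   -- #["e1", "e2"]
#eval applyReport #["e1"] #["e2"] (isIncremental := false)  -- #["e2"]
```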

### Client-side implementation

A client-side implementation for the VS Code extension can be found at
[vscode-lean4#752](https://github.com/leanprover/vscode-lean4/pull/752).

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Wojciech Nawrocki <13901751+Vtec234@users.noreply.github.com>
2026-04-21 12:48:15 +00:00
Mac Malone
fcaebdad22 refactor: lake: fix setDepPkgs & further verify resolution (#13487)
This PR fixes an issue with `Workspace.setDepPkgs` (introduced in
#13445) where it did not properly update the workspace `packageMap`. In
addition, this PR further verifies aspects of the workspace construction
during dependency resolution.
2026-04-21 03:47:27 +00:00
Lean stage0 autoupdater
e2f9df6578 chore: update stage0 2026-04-20 22:03:37 +00:00
Sebastian Graf
a3cb98bb27 fix: aggregate ControlInfo past numRegularExits == 0 elements (#13486)
This PR fixes `inferControlInfoSeq` and `ControlInfo.sequence` to keep
aggregating `breaks`/`continues`/`returnsEarly`/`reassigns` past
elements whose `ControlInfo` reports `numRegularExits := 0`. Previously
the analysis short-circuited at such elements, so any trailing
`return`/`break`/`continue` was missing from the inferred info. The
elaboration framework only skips subsequent doElems syntactically for
top-level `return`/`break`/`continue`; for every other `numRegularExits
== 0` case (e.g. a `match`/`if`/`try` whose branches all terminate, or a
`repeat` without `break`) the elaborator keeps visiting the continuation
and the for/match elaborator then tripped its invariant check with
`Early returning ... but the info said there is no early return`. With
this change the inferred info matches what the elaborator actually sees,
which also removes the need for the `numRegularExits := 1` workaround on
`repeat` introduced in #13479.
2026-04-20 21:13:59 +00:00
Sofia Rodrigues
a0b2e1f302 feat: introduce HTTP/1.1 server (#12151)
This PR introduces the Server module, an Async HTTP/1.1 server.

This contains the same code as #10478, divided into separate pieces to
facilitate easier review.

The pieces of this feature are:
- Core data structures: #12126
- Headers: #12127
- URI: #12128
- Body: #12144
- H1: #12146
- Server: #12151
- Client:

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2026-04-20 16:25:45 +00:00
Sebastian Ullrich
10338ed1b0 fix: wrapInstance: do not leak via un-reducible instances (#13441)
This PR ensures that if wrapInstance encounters an instance that cannot
be reduced to a constructor, the wrapping definition is left at
semireducible transparency to avoid leakage.
2026-04-20 06:41:32 +00:00
Lean stage0 autoupdater
cc9a217df8 chore: update stage0 2026-04-19 21:43:33 +00:00
Sebastian Graf
81f559b0e4 chore: remove repeat/while macro_rules bootstrap from Init.While (#13479)
This PR removes the transitional `macro_rules` for `repeat`, `while`,
and `repeat ... until` from `Init.While`. After the latest stage0
update, the `@[builtin_macro]` and `@[builtin_doElem_elab]` definitions
in `Lean.Elab.BuiltinDo.Repeat` are picked up directly, so the bootstrap
duplicates in `Init.While` are no longer needed. `Init.While` now only
provides the `Loop` type and its `ForIn` instance.

This PR also adjusts `repeat`'s `ControlInfo` to match `for ... in`: its
`numRegularExits` is now unconditionally `1` rather than `if info.breaks
then 1 else 0`. Reporting `0` when the body has no `break` causes
`inferControlInfoSeq` (in any enclosing sequence whose `ControlInfo` is
inferred — e.g. a surrounding `for`/`if`/`match`/`try` body) to stop
aggregating after the `repeat` and miss any `return`/`break`/`continue`
that follows. The corresponding elaborator then sees the actual control
flow disagree with the inferred info and throws errors like `Early
returning ... but the info said there is no early return`. The new test
in `tests/elab/newdo.lean` pins down the regression. See
[#13437](https://github.com/leanprover/lean4/pull/13437) for further
discussion.
2026-04-19 21:01:14 +00:00
Joachim Breitner
7cc3a4cc0b perf: use .local asyncMode for eqnOptionsExt (#13477)
This PR fixes a benchmark regression introduced in #13475:
`eqnOptionsExt`
was using `.async .asyncEnv` asyncMode, which accumulates state in the
`checked` environment and can block. Switching to `.local` — consistent
with the neighbouring `eqnsExt` and the other declaration caches in
`src/Lean/Meta` — restores performance (the
`build/profile/blocked (unaccounted) wall-clock` bench moves from +33%
back to baseline). `.local` is safe here because
`saveEqnAffectingOptions`
is only called during top-level `def` elaboration and downstream readers
see the imported state; modifications on non-main branches are merged
into the main branch on completion.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:49:00 +00:00
Leonardo de Moura
e82cd9b62c fix: filter assigned metavariables before computing apply subgoal tags (#13476)
This PR refines how the `apply` tactic (and related tactics like
`rewrite`) name and tag the remaining subgoals. Assigned metavariables
are now filtered out *before* computing subgoal tags. As a consequence,
when only one unassigned subgoal remains, it inherits the tag of the
input goal instead of being given a fresh suffixed tag.

User-visible effect: proof states that previously displayed tags like
`case h`, `case a`, or `case upper.h` for a single remaining goal now
display the input goal's tag directly (e.g. no tag at all, or `case
upper`). This removes noise from `funext`, `rfl`-style, and
`induction`-alternative goals when the applied lemma introduces only one
non-assigned metavariable. Multi-goal applications are unaffected —
their subgoals continue to receive distinguishing suffixes.

This may affect users whose proofs rely on the previous tag names (for
example, `case h => ...` after `funext`). Such scripts need to be
updated to use the input goal's tag instead.
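In miniature, the kind of proof affected (an assumed minimal example, not from the PR): after `funext` the single remaining goal now keeps the input goal's tag, so a script selecting it with `case h => ...` must be updated, while tag-free proofs like this one keep working:

```lean4
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := by
  funext x
  exact h x
```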

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:31:49 +00:00
Joachim Breitner
1d2cfb47e7 feat: store eqn-affecting options at definition time instead of eager generation (#13475)
This PR replaces the eager equation realization that was triggered by
non-default values of equation-affecting options (like
`backward.eqns.nonrecursive`) with a `MapDeclarationExtension` that
stores non-default option values at definition time. These values are
then restored when equations are lazily realized, so the same equations
are produced regardless of when generation occurs.

Restoring the options is done via a new `withEqnOptions` helper in
`Lean.Meta.Eqns`. Because `realizeConst` overrides the caller's options
with the options saved in its `RealizationContext` — which are empty
for imported constants — the helper must also be applied inside the
`realizeConst` callbacks in `mkSimpleEqThm`, `mkEqns` (in
`Elab/PreDefinition/Eqns.lean`), `getConstUnfoldEqnFor?`, and
`Structural.mkUnfoldEq`. Without that, equation generation code that
reads eqn-affecting options inside the realize callback would see the
caller-independent defaults rather than the values stored in
`eqnOptionsExt` — so the store-at-definition-time behavior would not
carry across module boundaries.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 12:30:08 +00:00
Leonardo de Moura
439e6a85d3 fix: prune goals assigned by isDefEq in sym => mode (#13474)
This PR fixes a bug in `sym =>` interactive mode where goals whose
metavariable was assigned by `isDefEq` (e.g. via `apply Eq.refl`) were
not pruned. `pruneSolvedGoals` previously only filtered out goals
flagged as inconsistent, so an already-assigned goal would linger as an
unsolved goal. It now also removes goals whose metavariable is already
assigned.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 11:55:11 +00:00
Leonardo de Moura
2d38a70d1c fix: auto-introduce in sym => mode when goal closes during preprocessing (#13472)
This PR fixes a bug in `sym =>` interactive mode where satellite solvers
(`lia`, `ring`, `linarith`) would throw an internal error if their
automatic `intros + assertAll` preprocessing step already closed the
goal. Previously, `evalCheck` used `liftAction` which discarded the
closure result, so the subsequent `liftGoalM` call failed due to the
absence of a main goal. `liftAction` is now split so the caller can
distinguish the closed and subgoals cases and skip the solver body when
preprocessing already finished the job.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 08:32:49 +00:00
Sebastian Ullrich
80cbab1642 chore: don't fail on running build bench on built stage3 (#13467) 2026-04-18 22:07:21 +00:00
Leonardo de Moura
c0a53ffe97 chore: minor tweaks to Sym.simp test and benchmark (#13468)
This PR applies two minor tweaks:
- `tests/bench/sym/simp_1.lean`: share-common the proof term before
counting objects in `getProofSize`, so the reported size reflects the
shared representation.
- `tests/elab/sym_simp_3.lean`: use `>>` instead of `.andThen` when
composing `Sym.Simp` methods.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:11:30 +00:00
Lean stage0 autoupdater
91aa82da7f chore: update stage0 2026-04-18 11:10:08 +00:00
Kim Morrison
3e9ed3c29d chore: add AGENTS.md symlink to CLAUDE.md (#13461)
This PR adds an `AGENTS.md` symlink to `.claude/CLAUDE.md` so
Codex-style repository instructions can resolve to the same checked-in
guidance Claude Code already uses.

This keeps the repository's agent-facing instructions in one place
instead of maintaining separate copies for different tools. Previous
Codex runs did not automatically pick up the existing
`.claude/CLAUDE.md` guidance, which caused avoidable drift in PR
formatting and workflow behavior.
2026-04-18 06:48:05 +00:00
Mac Malone
43e1e8285b refactor: lake: introduce GitRev (#13456)
This PR adds a type abbreviation `GitRev` to Lake, which is used for
`String` values that signify Git revisions. Such revisions may be a SHA1
commit hash, a branch name, or one of Git's more complex specifiers.

The PR also adds a number of additional Git primitives which are useful
for #11662.
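A minimal sketch of the abbreviation and the kinds of values it covers (illustrative; the real definition and the new Git primitives live in Lake):

```lean4
abbrev GitRev := String

-- a revision may be a SHA1 hash, a branch name, or a more complex specifier
def revExamples : List GitRev := ["ba1fffcc54", "master", "HEAD~2"]
```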
2026-04-18 03:44:26 +00:00
Mac Malone
d3c069593b refactor: introduce Package.depPkgs (#13445)
This PR adds `Package.depPkgs` for internal use (namely in #11662). This
field contains the list of the package's direct dependencies (as package
objects). This is much more efficient than going through the old package
`deps` facet.

As part of this refactor, the Workspace `root` is now derived from
`packages` instead of being is own independent field.
2026-04-18 03:44:21 +00:00
Mac Malone
0fa749f71b refactor: lake: introduce lowerHexUInt64 (#13455)
This PR adds a `lowerHexUInt64` utility to Lake which is used to
optimize `Hash.hex`.
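A hedged sketch of what such a utility might look like (illustrative only; Lake's actual `lowerHexUInt64` may be implemented differently):

```lean4
def lowerHexUInt64 (n : UInt64) : String :=
  let s := (Nat.toDigits 16 n.toNat).asString
  -- pad to the fixed 16-digit width of a 64-bit value
  "".pushn '0' (16 - s.length) ++ s

#eval lowerHexUInt64 255  -- "00000000000000ff"
```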
2026-04-18 03:43:11 +00:00
Kyle Miller
592eb02bb2 feat: have level metavariable pretty printer instantiate level metavariables (#13438)
This PR makes the universe level pretty printer instantiate level
metavariables when `pp.instantiateMVars` is true.

Previously level metavariables were not instantiated.

The PR adjusts the tracing in the LevelDefEq module to create the trace
message using the original MetavarContext. It also adds
`Meta.isLevelDefEq.step` traces for when level metavariables are
assigned.
2026-04-18 01:07:22 +00:00
Leonardo de Moura
70df9742f4 fix: kernel error in grind order module for Nat casts to non-Int types (#13453)
This PR fixes a kernel error in `grind` when propagating a `Nat`
equality to an order structure whose carrier type is not `Int` (e.g.
`Rat`). The auxiliary `Lean.Grind.Order.of_nat_eq` lemma was specialized
to `Int`, so the kernel rejected the application when the cast
destination differed.

We add a polymorphic `of_natCast_eq` lemma over `{α : Type u} [NatCast
α]` and cache the cast destination type in `TermMapEntry`.
`processNewEq` now uses the original `of_nat_eq` when the destination is
`Int` (the common case) and the new lemma otherwise. The symmetric
`nat_eq` propagation (deriving `Nat` equality from a derived cast
equality) is now guarded to fire only when the destination is `Int`,
since the `nat_eq` lemma is still specialized to `Int`.

Closes #13265.
2026-04-17 23:51:21 +00:00
Leonardo de Moura
9c245d5531 test: add regression test for Sym.simp eta-reduction (#13416) (#13452)
This PR adds a direct regression test for issue #13416. It exercises
`Std.HashMap.getElem_insert`, whose `dom` argument is a lambda closing
over pattern variables, and checks that the discrimination tree lookup
finds the theorem once the target's `dom` lambda is eta-reduced.

The underlying fix landed in #13448; this test pins the specific MWE
from the original issue so a regression would surface immediately.

Closes #13416

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:47:53 +00:00
Mac Malone
3c4440d3bc feat: lake: JobAction.reuse & .unpack (#13423)
This PR adds `JobAction.reuse` and `JobAction.unpack`, which provide more
informative captions in the build monitor for what a job is doing.
`reuse` is set when using an artifact from the Lake cache, `unpack` is
set when unpacking module `.ltar` archives and release (Reservoir or
GitHub) archives.
2026-04-17 22:34:04 +00:00
Leonardo de Moura
2964193af8 fix: avoid assigning mvar when Sym.intros produces no binders (#13451)
This PR fixes a bug in `Sym.introCore.finalize` where the original
metavariable was unconditionally assigned via a delayed assignment, even
when no binders were introduced. As a result, `Sym.intros` would return
`.failed` while the goal metavariable had already been silently
assigned, confusing downstream code that relies on `isAssigned` (e.g. VC
filters in `mvcgen'`).

The test and fix were suggested by Sebastian Graf (@sgraf812).

Co-authored-by: Sebastian Graf <sgraf1337@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 21:47:47 +00:00
Eric Wieser
43e96b119d fix: prevent a hang in acLt (#13367)
This PR removes some cases where `simp` would significantly overrun a
timeout.

This is a little tricky to test cleanly. Using mathlib's
`#count_heartbeats` as in
```lean4
#count_heartbeats in
set_option maxHeartbeats 200000 in
example (k : Nat) (a : Fin (1 + k + 1) → Nat) :
    0 ≤ sumRange (1 + k + 1) (fun i =>
        if h : i < 1 + k + 1 then a ⟨i, h⟩ else 0) := by
  simp only [Nat.add_comm, sumRange_add]
```
I see 200010 heartbeats with this PR, and 1873870 (9x the requested
limit) without.

This type of failure is wasteful in AI systems which try tactics with a
short timeout.
2026-04-17 21:46:29 +00:00
Leonardo de Moura
615f45ad7a chore: fix Sym benchmarks using stale run' API (#13450)
This PR updates two Sym benchmarks (`add_sub_cancel.lean` and
`meta_simp_1.lean`) to use the current `SymM.run` API. Both files still
referenced `run'`, which no longer exists, so they failed to elaborate.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 21:31:57 +00:00
Sebastian Graf
8a9a5ebd5e refactor: promote repeat/while builtin parsers to default priority (#13447)
This PR removes the transitional `syntax` declarations for `repeat`,
`while`, and `repeat ... until` from `Init.While` and promotes the
corresponding `@[builtin_doElem_parser]` defs in `Lean.Parser.Do` from
`low` to default priority, making them the canonical parsers.

The `macro_rules` in `Init.While` are kept as a bootstrap: they expand
`repeat`/`while`/`until` directly to `for _ in Loop.mk do ...`, which is
what any `prelude` Init file needs. The `@[builtin_macro]` /
`@[builtin_doElem_elab]` in `Lean.Elab.BuiltinDo.Repeat` are only
visible once `Lean.Elab.*` is transitively imported, so they cannot
serve Init bootstrap. The duplication will be removed in a follow-up
after the next stage0 update.
2026-04-17 20:57:41 +00:00
Leonardo de Moura
1af697a44b fix: eta-reduce patterns containing loose pattern variables (#13448)
This PR fixes a regression in `Sym.simp` where rewrite rules whose LHS
contains a lambda over a pattern variable (e.g. `∃ x, a = x`) failed to
match targets with semantically equivalent structure.

`Sym.etaReduceAux` previously refused any eta-reduction whenever the
body had loose bound variables, but patterns produced by stripping outer
foralls always carry such loose bvars. The eta-reduction therefore
skipped patterns while still firing on the target, producing mismatched
discrimination tree keys and no match.

The fix narrows the check to loose bvars in the range `[0, n)` (those
that would actually refer to the peeled binders) and lowers any
remaining loose bvars by `n` so that pattern-variable references stay
consistent in the reduced expression. The discrimination tree now
classifies patterns like `exists_eq_True : (∃ x, a = x) = True` with
their full structure rather than falling back to `.other`.

Includes a regression test (`sym_simp_1.lean`) and Sebastian Graf's MWE
(`sym_eta_mwe.lean`).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 20:49:21 +00:00
Lean stage0 autoupdater
234267b08a chore: update stage0 2026-04-17 16:13:04 +00:00
Sebastian Graf
704df340cb feat: make repeat and while syntax builtin (#13442)
This PR promotes the `repeat`, `while`, and `repeat ... until` parsers
from `syntax` declarations in `Init.While` to `@[builtin_doElem_parser]`
definitions in `Lean.Parser.Do`, alongside the other do-element parsers.
The `while` variants and `repeat ... until` get `@[builtin_macro]`
expansions; `repeat` itself gets a `@[builtin_doElem_elab]` so a
follow-up can extend it with an option-driven choice between `Loop.mk`
and a well-founded `Repeat.mk`.

The new builtin parsers are registered at `low` priority so that the
bootstrapping `syntax` declarations in `Init.While` (still needed for
stage0 compatibility) take precedence during the transition. After the
next stage0 update, the `Init.While` syntax and macros can be removed.
2026-04-17 15:19:59 +00:00
Henrik Böving
f180c9ce17 fix: handling of EmitC for small hex string literals (#13435)
This PR fixes a bug in EmitC where working with the string literal
`"\x01abc"` in Lean produced C code that fails to compile.

The error is as follows:
```
run.c:29:189: error: hex escape sequence out of range
   29 | static const lean_string_object l_badString___closed__0_value = {.m_header = {.m_rc = 0, .m_cs_sz = 0, .m_other = 0, .m_tag = 249}, .m_size = 5, .m_capacity = 5, .m_length = 4, .m_data = "\x01abc"};
      |                                                                                                                                                                                             ^~~~~~~
1 error generated.
```
This happens because C hex escape sequences can be arbitrarily long,
while Lean expects them to cut off after two characters. The C compiler
therefore parses the string as one large hex escape sequence `01abc` and
subsequently notices this is too large.
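The encoding the fix switches to can be sketched standalone (helper names mirror the PR's diff; this is not the EmitC module itself):

```lean4
def toDigit (c : Nat) : String := String.singleton c.digitChar

-- encode a control character as a C octal escape, which consumes at
-- most three digits, so a following `abc` cannot be absorbed into it
def octalEscape (c : Char) : String :=
  let n := c.toNat
  "\\" ++ toDigit (n / 64) ++ toDigit ((n / 8) % 8) ++ toDigit (n % 8)

#eval octalEscape '\x01' ++ "abc"  -- the escape \001 followed by abc
```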

Discovered by @datokrat
2026-04-17 15:16:28 +00:00
Lean stage0 autoupdater
040376ebd0 chore: update stage0 2026-04-17 10:09:17 +00:00
Sebastian Graf
ce998700e6 feat: add ControlInfo handler for doRepeat (#13437)
This PR adds a builtin `doElem_control_info` handler for `doRepeat`. It
is ineffective as long as we have the macro for `repeat`.
2026-04-17 09:17:52 +00:00
Sebastian Ullrich
b09d39a766 chore: build bench: replace warmup with selective build (#13432) 2026-04-17 08:01:17 +00:00
902 changed files with 7488 additions and 811 deletions


@@ -279,7 +279,8 @@ jobs:
"os": large ? "nscloud-ubuntu-24.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": true,
"check-rebootstrap": level >= 1,
"check-stage3": level >= 2,
// Done as part of test-bench
//"check-stage3": level >= 2,
"test": true,
// NOTE: `test-bench` currently seems to be broken on `ubuntu-latest`
"test-bench": large && level >= 2,
@@ -291,7 +292,8 @@ jobs:
"os": large ? "nscloud-ubuntu-24.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": true,
"check-rebootstrap": level >= 1,
"check-stage3": level >= 2,
// Done as part of test-bench
//"check-stage3": level >= 2,
"test": true,
"secondary": true,
// NOTE: `test-bench` currently seems to be broken on `ubuntu-latest`

AGENTS.md Symbolic link

@@ -0,0 +1 @@
.claude/CLAUDE.md


@@ -86,7 +86,7 @@ if(NOT CMAKE_SYSTEM_NAME MATCHES "Emscripten")
set(LEANTAR_TARGET x86_64-pc-windows-msvc)
else()
set(LEANTAR_ARCHIVE_SUFFIX .tar.gz)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "arm64|aarch64")
set(LEANTAR_TARGET_ARCH aarch64)
else()
set(LEANTAR_TARGET_ARCH x86_64)


@@ -59,9 +59,9 @@ Examples:
* `Nat.repeat f 3 a = f <| f <| f <| a`
* `Nat.repeat (· ++ "!") 4 "Hello" = "Hello!!!!"`
-/
@[specialize, expose] def repeat {α : Type u} (f : α → α) : (n : Nat) → (a : α) → α
@[specialize, expose] def «repeat» {α : Type u} (f : α → α) : (n : Nat) → (a : α) → α
| 0, a => a
| succ n, a => f (repeat f n a)
| succ n, a => f («repeat» f n a)
/--
Applies a function to a starting value the specified number of times.
@@ -1221,9 +1221,9 @@ theorem not_lt_eq (a b : Nat) : (¬ (a < b)) = (b ≤ a) :=
theorem not_gt_eq (a b : Nat) : (¬ (a > b)) = (a ≤ b) :=
not_lt_eq b a
@[csimp] theorem repeat_eq_repeatTR : @repeat = @repeatTR :=
@[csimp] theorem repeat_eq_repeatTR : @«repeat» = @repeatTR :=
funext fun α => funext fun f => funext fun n => funext fun init =>
let rec go : ∀ m n, repeatTR.loop f m (repeat f n init) = repeat f (m + n) init
let rec go : ∀ m n, repeatTR.loop f m («repeat» f n init) = «repeat» f (m + n) init
| 0, n => by simp [repeatTR.loop]
| succ m, n => by rw [repeatTR.loop, succ_add]; exact go m (succ n)
(go n 0).symm


@@ -75,6 +75,9 @@ theorem nat_eq (a b : Nat) (x y : Int) : NatCast.natCast a = x → NatCast.natCa
theorem of_nat_eq (a b : Nat) (x y : Int) : NatCast.natCast a = x → NatCast.natCast b = y → a = b → x = y := by
intro _ _; subst x y; intro; simp [*]
theorem of_natCast_eq {α : Type u} [NatCast α] (a b : Nat) (x y : α) : NatCast.natCast a = x → NatCast.natCast b = y → a = b → x = y := by
intro h₁ h₂ h; subst h; exact h₁.symm.trans h₂
theorem le_of_not_le {α} [LE α] [Std.IsLinearPreorder α]
{a b : α} : ¬ a ≤ b → b ≤ a := by
intro h


@@ -10,13 +10,15 @@ public import Init.Core
public section
/-!
# Notation for `while` and `repeat` loops.
-/
namespace Lean
/-! # `repeat` and `while` notation -/
/-!
# `Loop` type backing `repeat`/`while`/`repeat ... until`
The parsers and elaborators for `repeat`, `while`, and `repeat ... until` live in
`Lean.Parser.Do` and `Lean.Elab.BuiltinDo.Repeat`. This module only provides the
`Loop` type (and `ForIn` instance) that those elaborators expand to.
-/
inductive Loop where
| mk
@@ -32,26 +34,4 @@ partial def Loop.forIn {β : Type u} {m : Type u → Type v} [Monad m] (_ : Loop
instance [Monad m] : ForIn m Loop Unit where
forIn := Loop.forIn
syntax (name := doRepeat) "repeat " doSeq : doElem
macro_rules
| `(doElem| repeat%$tk $seq) => `(doElem| for%$tk _ in Loop.mk do $seq)
syntax "while " ident " : " termBeforeDo " do " doSeq : doElem
macro_rules
| `(doElem| while%$tk $h : $cond do $seq) =>
`(doElem| repeat%$tk if $h:ident : $cond then $seq else break)
syntax "while " termBeforeDo " do " doSeq : doElem
macro_rules
| `(doElem| while%$tk $cond do $seq) => `(doElem| repeat%$tk if $cond then $seq else break)
syntax "repeat " doSeq ppDedent(ppLine) "until " term : doElem
macro_rules
| `(doElem| repeat%$tk $seq until $cond) =>
`(doElem| repeat%$tk do $seq:doSeq; if $cond then break)
end Lean


@@ -207,7 +207,7 @@ def emitLns [EmitToString α] (as : List α) : EmitM Unit :=
emitLn "}"
return ret
def toHexDigit (c : Nat) : String :=
def toDigit (c : Nat) : String :=
String.singleton c.digitChar
def quoteString (s : String) : String :=
@@ -221,7 +221,11 @@ def quoteString (s : String) : String :=
else if c == '\"' then "\\\""
else if c == '?' then "\\?" -- avoid trigraphs
else if c.toNat <= 31 then
"\\x" ++ toHexDigit (c.toNat / 16) ++ toHexDigit (c.toNat % 16)
-- Use octal escapes instead of hex escapes because C hex escapes are
-- greedy: "\x01abc" would be parsed as the single escape \x01abc rather
-- than \x01 followed by "abc". Octal escapes consume at most 3 digits.
let n := c.toNat
"\\" ++ toDigit (n / 64) ++ toDigit ((n / 8) % 8) ++ toDigit (n % 8)
-- TODO(Leo): we should use `\unnnn` for escaping unicode characters.
else String.singleton c)
q;


@@ -64,6 +64,12 @@ structure WorkspaceClientCapabilities where
deriving ToJson, FromJson
structure LeanClientCapabilities where
/--
Whether the client supports incremental `textDocument/publishDiagnostics` updates.
If `none` or `false`, the server will never set `PublishDiagnosticsParams.isIncremental?`
and always report full diagnostic updates that replace the previous one.
-/
incrementalDiagnosticSupport? : Option Bool := none
/--
Whether the client supports `DiagnosticWith.isSilent = true`.
If `none` or `false`, silent diagnostics will not be served to the client.
@@ -84,6 +90,13 @@ structure ClientCapabilities where
lean? : Option LeanClientCapabilities := none
deriving ToJson, FromJson
def ClientCapabilities.incrementalDiagnosticSupport (c : ClientCapabilities) : Bool := Id.run do
let some lean := c.lean?
| return false
let some incrementalDiagnosticSupport := lean.incrementalDiagnosticSupport?
| return false
return incrementalDiagnosticSupport
def ClientCapabilities.silentDiagnosticSupport (c : ClientCapabilities) : Bool := Id.run do
let some lean := c.lean?
| return false


@@ -159,6 +159,14 @@ abbrev Diagnostic := DiagnosticWith String
structure PublishDiagnosticsParams where
uri : DocumentUri
version? : Option Int := none
/--
Whether the client should append this set of diagnostics to the previous set
rather than replacing the previous set by this one (the default LSP behavior).
`false` means the client should replace.
`none` is equivalent to `false`.
This is a Lean-specific extension (see `LeanClientCapabilities`).
-/
isIncremental? : Option Bool := none
diagnostics : Array Diagnostic
deriving Inhabited, BEq, ToJson, FromJson


@@ -102,9 +102,32 @@ def normalizePublishDiagnosticsParams (p : PublishDiagnosticsParams) :
sorted.toArray
}
/--
Merges a new `textDocument/publishDiagnostics` notification into a previously accumulated one.
- If there is no previous notification, the new one is used as-is.
- If `isIncremental?` is `true`, the new diagnostics are appended.
- Otherwise the new notification replaces the previous one.
The returned params always have `isIncremental? := some false` since they represent the full
accumulated set.
-/
def mergePublishDiagnosticsParams (prev? : Option PublishDiagnosticsParams)
(next : PublishDiagnosticsParams) : PublishDiagnosticsParams := Id.run do
let replace := { next with isIncremental? := some false }
let some prev := prev?
| return replace
if next.isIncremental?.getD false then
return { next with
diagnostics := prev.diagnostics ++ next.diagnostics
isIncremental? := some false }
return replace
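A standalone model of the merge rule above (hypothetical simplified `Params`, keeping only the two fields the function inspects):

```lean
structure Params where
  isIncremental? : Option Bool := none
  diagnostics : Array String := #[]

def merge (prev? : Option Params) (next : Params) : Params :=
  let replace := { next with isIncremental? := some false }
  match prev? with
  | none => replace
  | some prev =>
    if next.isIncremental?.getD false then
      -- incremental: append to the previously accumulated set
      { next with
        diagnostics := prev.diagnostics ++ next.diagnostics
        isIncremental? := some false }
    else
      replace

#eval (merge (some { diagnostics := #["a"] })
             { diagnostics := #["b"], isIncremental? := some true }).diagnostics  -- #["a", "b"]
#eval (merge (some { diagnostics := #["a"] })
             { diagnostics := #["b"] }).diagnostics                               -- #["b"]
```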
/--
Waits for the worker to emit all diagnostic notifications for the current document version and
returns the last notification, if any.
returns the accumulated diagnostics, if any.
Incoming notifications are merged using `mergePublishDiagnosticsParams`.
We used to return all notifications, but with debouncing in the server this would no longer be
deterministic, as which messages are dropped depends on wall-clock timing.
@@ -112,22 +135,25 @@ deterministic anymore as what messages are dropped depends on wall-clock timing.
partial def collectDiagnostics (waitForDiagnosticsId : RequestID := 0) (target : DocumentUri) (version : Nat)
: IpcM (Option (Notification PublishDiagnosticsParams)) := do
writeRequest ⟨waitForDiagnosticsId, "textDocument/waitForDiagnostics", WaitForDiagnosticsParams.mk target version⟩
loop
loop none
where
loop := do
loop (accumulated? : Option PublishDiagnosticsParams) := do
match (← readMessage) with
| Message.response id _ =>
if id == waitForDiagnosticsId then return none
else loop
| Message.responseError id _ msg _ =>
if id == waitForDiagnosticsId then
return accumulated?.map fun p =>
⟨"textDocument/publishDiagnostics", normalizePublishDiagnosticsParams p⟩
else loop accumulated?
| Message.responseError id _ msg _ =>
if id == waitForDiagnosticsId then
throw $ userError s!"Waiting for diagnostics failed: {msg}"
else loop
else loop accumulated?
| Message.notification "textDocument/publishDiagnostics" (some param) =>
match fromJson? (toJson param) with
| Except.ok diagnosticParam => return (← loop).getD ⟨"textDocument/publishDiagnostics", normalizePublishDiagnosticsParams diagnosticParam⟩
| Except.ok (diagnosticParam : PublishDiagnosticsParams) =>
loop (some (mergePublishDiagnosticsParams accumulated? diagnosticParam))
| Except.error inner => throw $ userError s!"Cannot decode publishDiagnostics parameters\n{inner}"
| _ => loop
| _ => loop accumulated?
partial def waitForILeans (waitForILeansId : RequestID := 0) (target : DocumentUri) (version : Nat) : IpcM Unit := do
writeRequest ⟨waitForILeansId, "$/lean/waitForILeans", WaitForILeansParams.mk target version⟩


@@ -289,9 +289,11 @@ instance : ToMarkdown VersoModuleDocs.Snippet where
structure VersoModuleDocs where
snippets : PersistentArray VersoModuleDocs.Snippet := {}
terminalNesting : Option Nat := snippets.findSomeRev? (·.terminalNesting)
deriving Inhabited
def VersoModuleDocs.terminalNesting : VersoModuleDocs → Option Nat
| VersoModuleDocs.mk snippets => snippets.findSomeRev? (·.terminalNesting)
instance : Repr VersoModuleDocs where
reprPrec v _ :=
.group <| .nest 2 <|
@@ -314,10 +316,7 @@ def add (docs : VersoModuleDocs) (snippet : Snippet) : Except String VersoModule
unless docs.canAdd snippet do
throw "Can't nest this snippet here"
return { docs with
snippets := docs.snippets.push snippet,
terminalNesting := snippet.terminalNesting
}
return { docs with snippets := docs.snippets.push snippet }
def add! (docs : VersoModuleDocs) (snippet : Snippet) : VersoModuleDocs :=
let ok :=
@@ -327,10 +326,7 @@ def add! (docs : VersoModuleDocs) (snippet : Snippet) : VersoModuleDocs :=
if not ok then
panic! "Can't nest this snippet here"
else
{ docs with
snippets := docs.snippets.push snippet,
terminalNesting := snippet.terminalNesting
}
{ docs with snippets := docs.snippets.push snippet }
private structure DocFrame where


@@ -203,7 +203,8 @@ private def compileMatch (discrs : Array Term.Discr) (matchType : Expr) (lhss :
private def elabDoMatchCore (discrs : TSyntaxArray ``matchDiscr) (alts : Array DoMatchAltView)
(nondupDec : DoElemCont) : DoElabM Expr := do
let info ← alts.foldlM (fun info alt => info.alternative <$> inferControlInfoSeq alt.rhs) ControlInfo.pure
let info ← alts.foldlM (fun info alt => info.alternative <$> inferControlInfoSeq alt.rhs)
ControlInfo.empty
nondupDec.withDuplicableCont info fun dec => do
trace[Elab.do.match] "discrs: {discrs}, nondupDec.resultType: {nondupDec.resultType}, may postpone: {(← readThe Term.Context).mayPostpone}"
tryPostponeIfDiscrTypeIsMVar discrs


@@ -17,17 +17,28 @@ namespace Lean.Elab.Do
open Lean.Parser.Term
/--
Builtin do-element elaborator for `repeat` (syntax kind `Lean.doRepeat`).
Builtin do-element elaborator for `repeat` (syntax kind `Lean.Parser.Term.doRepeat`).
Expands to `for _ in Loop.mk do ...`. A follow-up PR will drop the fallback macro
in `Init.While` (which currently preempts this elaborator) and extend this
Expands to `for _ in Loop.mk do ...`. A follow-up change will extend this
elaborator to choose between `Loop.mk` and a well-founded `Repeat.mk` based on a
configuration option.
-/
@[builtin_doElem_elab Lean.doRepeat] def elabDoRepeat : DoElab := fun stx dec => do
let `(doElem| repeat $seq) := stx | throwUnsupportedSyntax
let expanded ← `(doElem| for _ in Loop.mk do $seq)
@[builtin_doElem_elab Lean.Parser.Term.doRepeat] def elabDoRepeat : DoElab := fun stx dec => do
let `(doElem| repeat%$tk $seq) := stx | throwUnsupportedSyntax
let expanded ← `(doElem| for%$tk _ in Loop.mk do $seq)
Term.withMacroExpansion stx expanded <|
withRef expanded <| elabDoElem expanded dec
@[builtin_macro Lean.Parser.Term.doWhileH] def expandDoWhileH : Macro
| `(doElem| while%$tk $h : $cond do $seq) => `(doElem| repeat%$tk if $h:ident : $cond then $seq else break)
| _ => Macro.throwUnsupported
@[builtin_macro Lean.Parser.Term.doWhile] def expandDoWhile : Macro
| `(doElem| while%$tk $cond do $seq) => `(doElem| repeat%$tk if $cond then $seq else break)
| _ => Macro.throwUnsupported
@[builtin_macro Lean.Parser.Term.doRepeatUntil] def expandDoRepeatUntil : Macro
| `(doElem| repeat%$tk $seq until $cond) => `(doElem| repeat%$tk do $seq:doSeq; if $cond then break)
| _ => Macro.throwUnsupported
end Lean.Elab.Do


@@ -587,7 +587,7 @@ def DoElemCont.withDuplicableCont (nondupDec : DoElemCont) (callerInfo : Control
withLocalDeclD nondupDec.resultName nondupDec.resultType fun r => do
withLocalDeclsDND (mutDecls.map fun (d : LocalDecl) => (d.userName, d.type)) fun muts => do
for (x, newX) in mutVars.zip muts do Term.addTermInfo' x newX
withDeadCode (if callerInfo.numRegularExits > 0 then .alive else .deadSemantically) do
withDeadCode (if callerInfo.noFallthrough then .deadSemantically else .alive) do
let e ← nondupDec.k
mkLambdaFVars (#[r] ++ muts) e
unless joinRhsMVar.mvarId!.checkedAssign joinRhs do


@@ -232,9 +232,8 @@ def ControlLifter.ofCont (info : ControlInfo) (dec : DoElemCont) : DoElabM Contr
breakBase?,
continueBase?,
pureBase := controlStack,
-- The success continuation `origCont` is dead code iff the `ControlInfo` says that there is no
-- regular exit.
pureDeadCode := if info.numRegularExits > 0 then .alive else .deadSemantically,
-- The success continuation `origCont` is dead code iff the `ControlInfo` says so semantically.
pureDeadCode := if info.noFallthrough then .deadSemantically else .alive,
liftedDoBlockResultType := (← controlStack.stM dec.resultType),
}


@@ -16,46 +16,81 @@ namespace Lean.Elab.Do
open Lean Meta Parser.Term
/-- Represents information about what control effects a `do` block has. -/
/--
Represents information about what control effects a `do` block has.
The fields split by flavor:
* `breaks`, `continues`, `returnsEarly`, and `reassigns` are **syntactic**: `true`/non-empty iff
the corresponding construct appears anywhere in the source text of the block, independent of
whether it is semantically reachable. Downstream elaborators must assume every such syntactic
effect may occur, because the elaborator visits every doElem (only top-level
`return`/`break`/`continue` short-circuit via `elabAsSyntacticallyDeadCode`).
* `numRegularExits` is also **syntactic**: the number of times the block wires the enclosing
continuation into its elaborated expression. `withDuplicableCont` reads it as a join-point
duplication trigger (`> 1`).
* `noFallthrough = true` asserts that the next doElem in the enclosing sequence is semantically
irrelevant (control never falls through to it). `noFallthrough = false` makes no such
assertion. The dead-code warning fires on the next element when this is `true`.
Invariant: `numRegularExits = 0 → noFallthrough`. The converse does not hold.
-/
structure ControlInfo where
/-- The `do` block may `break`. -/
/-- The `do` block syntactically contains a `break`. -/
breaks : Bool := false
/-- The `do` block may `continue`. -/
/-- The `do` block syntactically contains a `continue`. -/
continues : Bool := false
/-- The `do` block may `return` early. -/
/-- The `do` block syntactically contains an early `return`. -/
returnsEarly : Bool := false
/--
The number of regular exit paths the `do` block has.
Corresponds to the number of jumps to an ambient join point.
The number of times the block wires the enclosing continuation into its elaborated expression.
Consumed by `withDuplicableCont` to decide whether to introduce a join point (`> 1`).
-/
numRegularExits : Nat := 1
/-- The variables that are reassigned in the `do` block. -/
/--
When `true`, asserts that the next doElem in the enclosing sequence is semantically irrelevant
(control never falls through to it). `false` asserts nothing.
-/
noFallthrough : Bool := false
/-- The variables that are syntactically reassigned somewhere in the `do` block. -/
reassigns : NameSet := {}
deriving Inhabited
def ControlInfo.pure : ControlInfo := {}
def ControlInfo.sequence (a b : ControlInfo) : ControlInfo :=
if a.numRegularExits == 0 then a else {
/--
The identity of `ControlInfo.alternative`: a `ControlInfo` describing a block with no branches
at all (so no regular exits and the next element is trivially unreachable).
-/
def ControlInfo.empty : ControlInfo := { numRegularExits := 0, noFallthrough := true }
def ControlInfo.sequence (a b : ControlInfo) : ControlInfo := {
-- Syntactic fields aggregate unconditionally; the elaborator keeps visiting `b` unless `a` is
-- a syntactically-terminal element (only top-level `return`/`break`/`continue` are, via
-- `elabAsSyntacticallyDeadCode`).
breaks := a.breaks || b.breaks,
continues := a.continues || b.continues,
returnsEarly := a.returnsEarly || b.returnsEarly,
numRegularExits := b.numRegularExits,
reassigns := a.reassigns ++ b.reassigns,
numRegularExits := b.numRegularExits,
-- Semantic: the sequence is dead if either part is dead.
noFallthrough := a.noFallthrough || b.noFallthrough,
}
def ControlInfo.alternative (a b : ControlInfo) : ControlInfo := {
breaks := a.breaks || b.breaks,
continues := a.continues || b.continues,
returnsEarly := a.returnsEarly || b.returnsEarly,
numRegularExits := a.numRegularExits + b.numRegularExits,
reassigns := a.reassigns ++ b.reassigns,
numRegularExits := a.numRegularExits + b.numRegularExits,
-- Semantic: alternatives are dead only if all branches are dead.
noFallthrough := a.noFallthrough && b.noFallthrough,
}
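Why `empty` (not `pure`) is the alternative-fold base: under a minimal standalone model of the two fields involved (hypothetical `Info`, not the real `ControlInfo`), `empty` is the identity of `alternative`, while `pure` (the default) would contribute a phantom regular exit:

```lean
structure Info where
  numRegularExits : Nat := 1   -- default models `ControlInfo.pure`
  noFallthrough   : Bool := false

def Info.empty : Info := { numRegularExits := 0, noFallthrough := true }

def Info.alternative (a b : Info) : Info :=
  { numRegularExits := a.numRegularExits + b.numRegularExits,
    noFallthrough   := a.noFallthrough && b.noFallthrough }

-- `empty` is the identity (0 is the identity of `+`, `true` of `&&`) …
example (i : Info) : Info.alternative Info.empty i = i := by
  cases i; simp [Info.alternative, Info.empty]

-- … whereas folding from `pure` would over-count regular exits.
example : (Info.alternative {} Info.empty).numRegularExits = 1 := rfl
```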
instance : ToMessageData ControlInfo where
toMessageData info := m!"breaks: {info.breaks}, continues: {info.continues},
returnsEarly: {info.returnsEarly}, exitsRegularly: {info.numRegularExits},
reassigns: {info.reassigns.toList}"
returnsEarly: {info.returnsEarly}, numRegularExits: {info.numRegularExits},
noFallthrough: {info.noFallthrough}, reassigns: {info.reassigns.toList}"
/-- A handler for inferring `ControlInfo` from a `doElem` syntax. Register with `@[doElem_control_info parserName]`. -/
abbrev ControlInfoHandler := TSyntax `doElem → TermElabM ControlInfo
@@ -89,9 +124,9 @@ partial def ofElem (stx : TSyntax `doElem) : TermElabM ControlInfo := do
return ← ofElem stxNew
match stx with
| `(doElem| break) => return { breaks := true, numRegularExits := 0 }
| `(doElem| continue) => return { continues := true, numRegularExits := 0 }
| `(doElem| return $[$_]?) => return { returnsEarly := true, numRegularExits := 0 }
| `(doElem| break) => return { breaks := true, numRegularExits := 0, noFallthrough := true }
| `(doElem| continue) => return { continues := true, numRegularExits := 0, noFallthrough := true }
| `(doElem| return $[$_]?) => return { returnsEarly := true, numRegularExits := 0, noFallthrough := true }
| `(doExpr| $_:term) => return { numRegularExits := 1 }
| `(doElem| do $doSeq) => ofSeq doSeq
-- Let
@@ -112,7 +147,7 @@ partial def ofElem (stx : TSyntax `doElem) : TermElabM ControlInfo := do
ofLetOrReassignArrow true decl
-- match
| `(doElem| match $[(dependent := $_)]? $[(generalizing := $_)]? $(_)? $_,* with $alts:matchAlt*) =>
let mut info := {}
let mut info := ControlInfo.empty
for alt in alts do
let `(matchAltExpr| | $[$_,*]|* => $rhs) := alt | throwUnsupportedSyntax
let rhsInfo ← ofSeq rhs
@@ -129,12 +164,15 @@ partial def ofElem (stx : TSyntax `doElem) : TermElabM ControlInfo := do
return thenInfo.alternative info
| `(doElem| unless $_ do $elseSeq) =>
ControlInfo.alternative {} <$> ofSeq elseSeq
| `(doElem| for $[$[$_ :]? $_ in $_],* do $bodySeq) =>
-- For/Repeat
| `(doElem| for $[$[$_ :]? $_ in $_],* do $bodySeq)
| `(doRepeat| repeat $bodySeq) =>
let info ← ofSeq bodySeq
return { info with -- keep only reassigns and earlyReturn
numRegularExits := 1,
continues := false,
breaks := false
breaks := false,
noFallthrough := false,
}
-- Try
| `(doElem| try $trySeq:doSeq $[$catches]* $[finally $finSeq?]?) =>
@@ -204,17 +242,7 @@ partial def ofLetOrReassign (reassigned : Array Ident) (rhs? : Option (TSyntax `
partial def ofSeq (stx : TSyntax ``doSeq) : TermElabM ControlInfo := do
let mut info : ControlInfo := {}
for elem in getDoElems stx do
if info.numRegularExits == 0 then
break
let elemInfo ← ofElem elem
info := {
info with
breaks := info.breaks || elemInfo.breaks
continues := info.continues || elemInfo.continues
returnsEarly := info.returnsEarly || elemInfo.returnsEarly
numRegularExits := elemInfo.numRegularExits
reassigns := info.reassigns ++ elemInfo.reassigns
}
info := info.sequence (← ofElem elem)
return info
partial def ofOptionSeq (stx? : Option (TSyntax ``doSeq)) : TermElabM ControlInfo := do


@@ -1782,7 +1782,7 @@ mutual
doIfToCode doElem doElems
else if k == ``Parser.Term.doUnless then
doUnlessToCode doElem doElems
else if k == `Lean.doRepeat then
else if k == ``Parser.Term.doRepeat then
let seq := doElem[1]
let expanded ← `(doElem| for _ in Loop.mk do $seq)
doSeqToCode (expanded :: doElems)


@@ -8,7 +8,7 @@ module
prelude
public import Lean.Elab.Do.Basic
meta import Lean.Parser.Do
import Std.Internal.Async.TCP
import Std.Async.TCP
/-!
# Interactive Debug Expression Evaluator (`idbg`)
@@ -22,7 +22,7 @@ and client (compiled program side) compute a deterministic port from the source
-/
open Lean Lean.Elab Lean.Elab.Term Lean.Meta
open Std.Net Std.Internal.IO.Async
open Std.Net Std.Async
namespace Lean.Idbg


@@ -222,8 +222,8 @@ private def addNonRecAux (docCtx : LocalContext × LocalInstances) (preDef : Pre
if compile && shouldGenCodeFor preDef then
compileDecl decl
if applyAttrAfterCompilation then
saveEqnAffectingOptions preDef.declName
enableRealizationsForConst preDef.declName
generateEagerEqns preDef.declName
addPreDefDocs docCtx preDef
if applyAttrAfterCompilation then
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation


@@ -28,7 +28,7 @@ def getConstUnfoldEqnFor? (declName : Name) : MetaM (Option Name) := do
trace[ReservedNameAction] "getConstUnfoldEqnFor? {declName} failed, no unfold theorem available"
return none
let name := mkEqLikeNameFor ( getEnv) declName eqUnfoldThmSuffix
realizeConst declName name do
realizeConst declName name <| withEqnOptions declName do
-- we have to call `getUnfoldEqnFor?` again to make `unfoldEqnName` available in this context
let some unfoldEqnName ← getUnfoldEqnFor? (nonRec := true) declName | unreachable!
let info ← getConstInfo unfoldEqnName


@@ -367,7 +367,7 @@ def mkEqns (declName : Name) (declNames : Array Name) : MetaM (Array Name) := do
thmNames := thmNames.push name
-- determinism: `type` should be independent of the environment changes since `baseName` was
-- added
realizeConst declName name (doRealize name info type)
realizeConst declName name (withEqnOptions declName (doRealize name info type))
return thmNames
where
doRealize name info type := withOptions (tactic.hygienic.set · false) do


@@ -69,8 +69,10 @@ def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
a.name = `instance_reducible || a.name = `implicit_reducible do
setIrreducibleAttribute preDef.declName
for preDef in preDefs do
saveEqnAffectingOptions preDef.declName
/-
`enableRealizationsForConst` must happen before `generateEagerEqns`
It must happen in reverse order so that constants realized as part of the first decl
have realizations for the other ones enabled
-/
@@ -78,7 +80,6 @@ def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
enableRealizationsForConst preDef.declName
for preDef in preDefs do
generateEagerEqns preDef.declName
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation
end Lean.Elab.Mutual


@@ -163,7 +163,7 @@ public def registerEqnsInfo (preDef : PreDefinition) (declNames : Array Name) (r
/-- Generate the "unfold" lemma for `declName`. -/
def mkUnfoldEq (declName : Name) (info : EqnInfo) : MetaM Name := do
let name := mkEqLikeNameFor ( getEnv) info.declName unfoldThmSuffix
realizeConst info.declNames[0]! name (doRealize name)
realizeConst info.declNames[0]! name (withEqnOptions declName (doRealize name))
return name
where
doRealize name := withOptions (tactic.hygienic.set · false) do


@@ -208,11 +208,11 @@ def structuralRecursion
-/
registerEqnsInfo preDef (preDefs.map (·.declName)) recArgPos fixedParamPerms
addSmartUnfoldingDef docCtx preDef recArgPos
for preDef in preDefs do
saveEqnAffectingOptions preDef.declName
for preDef in preDefs do
-- must happen in separate loop so realizations can see eqnInfos of all other preDefs
enableRealizationsForConst preDef.declName
-- must happen after `enableRealizationsForConst`
generateEagerEqns preDef.declName
applyAttributesOf preDefsNonRec AttributeApplicationTime.afterCompilation


@@ -497,14 +497,21 @@ def forEachVar (hs : Array Syntax) (tac : MVarId → FVarId → MetaM MVarId) :
/--
Searches for a metavariable `g` s.t. `tag` is its exact name.
If none then searches for a metavariable `g` s.t. `tag` is a suffix of its name.
If none, then it searches for a metavariable `g` s.t. `tag` is a prefix of its name. -/
If none, then it searches for a metavariable `g` s.t. `tag` is a prefix of its name.
We erase macro scopes from the metavariable's user name before comparing, so that
user-written tags match even when a previous tactic left hygienic macro scopes at
the end of the tag (e.g. `e_a.yield._@._internal._hyg.0`, where `yield` is not the
literal last component of the name). Case tags written by the user are never
macro-scoped, so erasing scopes on the mvar side is sufficient.
-/
private def findTag? (mvarIds : List MVarId) (tag : Name) : TacticM (Option MVarId) := do
match (← mvarIds.findM? fun mvarId => return tag == (← mvarId.getDecl).userName) with
match (← mvarIds.findM? fun mvarId => return tag == (← mvarId.getDecl).userName.eraseMacroScopes) with
| some mvarId => return mvarId
| none =>
match (← mvarIds.findM? fun mvarId => return tag.isSuffixOf (← mvarId.getDecl).userName) with
match (← mvarIds.findM? fun mvarId => return tag.isSuffixOf (← mvarId.getDecl).userName.eraseMacroScopes) with
| some mvarId => return mvarId
| none => mvarIds.findM? fun mvarId => return tag.isPrefixOf (← mvarId.getDecl).userName
| none => mvarIds.findM? fun mvarId => return tag.isPrefixOf (← mvarId.getDecl).userName.eraseMacroScopes
private def getCaseGoals (tag : TSyntax ``binderIdent) : TacticM (MVarId × List MVarId) := do
let gs ← getUnsolvedGoals


@@ -68,7 +68,10 @@ def setGoals (goals : List Goal) : GrindTacticM Unit :=
def pruneSolvedGoals : GrindTacticM Unit := do
let gs ← getGoals
let gs := gs.filter fun g => !g.inconsistent
let gs ← gs.filterM fun g => do
if g.inconsistent then return false
-- The metavariable may have been assigned by `isDefEq`
return !(← g.mvarId.isAssigned)
setGoals gs
def getUnsolvedGoals : GrindTacticM (List Goal) := do
@@ -329,13 +332,19 @@ def liftGoalM (k : GoalM α) : GrindTacticM α := do
replaceMainGoal [goal]
return a
def liftAction (a : Action) : GrindTacticM Unit := do
inductive LiftActionCoreResult where
| closed | subgoals
def liftActionCore (a : Action) : GrindTacticM LiftActionCoreResult := do
let goal ← getMainGoal
let ka := fun _ => throwError "tactic is not applicable"
let kp := fun goal => return .stuck [goal]
match (← liftGrindM <| a goal ka kp) with
| .closed _ => replaceMainGoal []
| .stuck gs => replaceMainGoal gs
| .closed _ => replaceMainGoal []; return .closed
| .stuck gs => replaceMainGoal gs; return .subgoals
def liftAction (a : Action) : GrindTacticM Unit := do
discard <| liftActionCore a
def done : GrindTacticM Unit := do
pruneSolvedGoals


@@ -111,7 +111,9 @@ def evalCheck (tacticName : Name) (k : GoalM Bool)
This matches the behavior of these tactics in default tactic mode
where `lia` can close `x > 1 → x + y + z > 0` directly. -/
if (← read).sym then
liftAction <| Action.intros 0 >> Action.assertAll
match (← liftActionCore <| Action.intros 0 >> Action.assertAll) with
| .closed => return () -- closed the goal
| .subgoals => pure () -- continue
let recover := (← read).recover
liftGoalM do
let progress ← k


@@ -175,6 +175,7 @@ where
return !(← allChildrenLt a b)
lpo (a b : Expr) : MetaM Bool := do
checkSystem "Lean.Meta.acLt"
-- Case 1: `a < b` if for some child `b_i` of `b`, we have `b_i >= a`
if (← someChildGe b a) then
return true


@@ -37,12 +37,17 @@ register_builtin_option backward.eqns.deepRecursiveSplit : Bool := {
These options affect the generation of equational theorems in a significant way. For these, their
value at definition time, not realization time, should matter.
This is implemented by
* eagerly realizing the equations when they are set to a non-default value
* when realizing them lazily, reset the options to their default
This is implemented by storing their values at definition time (when non-default) in an environment
extension, and restoring them when the equations are lazily realized.
-/
def eqnAffectingOptions : Array (Lean.Option Bool) := #[backward.eqns.nonrecursive, backward.eqns.deepRecursiveSplit]
/-- Environment extension that stores the values of `eqnAffectingOptions` at definition time,
keyed by declaration name. Only populated when at least one option has a non-default value.
Stores an association list of (option name, value) pairs for options that differ from defaults. -/
builtin_initialize eqnOptionsExt : MapDeclarationExtension (Array (Name × DataValue)) ←
mkMapDeclarationExtension (asyncMode := .local)
def eqnThmSuffixBase := "eq"
def eqnThmSuffixBasePrefix := eqnThmSuffixBase ++ "_"
def eqn1ThmSuffix := eqnThmSuffixBasePrefix ++ "1"
@@ -153,12 +158,30 @@ structure EqnsExtState where
builtin_initialize eqnsExt : EnvExtension EqnsExtState ←
registerEnvExtension (pure {}) (asyncMode := .local)
/--
Runs `act` with the equation-affecting options restored to the values stored for `declName`
at definition time (or reset to their defaults if none were stored). Use this inside
`realizeConst` callbacks, which otherwise see the caller-independent `ctx.opts` rather than
the outer `getEqnsFor?` context. -/
def withEqnOptions (declName : Name) (act : MetaM α) : MetaM α := do
let env ← getEnv
let setOpts : Options → Options :=
if let some values := eqnOptionsExt.find? env declName then
fun os => Id.run do
let mut os := eqnAffectingOptions.foldl (fun os o => o.set os o.defValue) os
for (name, v) in values do
os := os.insert name v
return os
else
fun os => eqnAffectingOptions.foldl (fun os o => o.set os o.defValue) os
withOptions setOpts act
/--
Simple equation theorem for nonrecursive definitions.
-/
def mkSimpleEqThm (declName : Name) (name : Name) : MetaM (Option Name) := do
if let some (.defnInfo info) := (← getEnv).find? declName then
realizeConst declName name (doRealize name info)
realizeConst declName name (withEqnOptions declName (doRealize name info))
return some name
else
return none
@@ -229,19 +252,22 @@ private def getEqnsFor?Core (declName : Name) : MetaM (Option (Array Name)) := w
Returns equation theorems for the given declaration.
-/
def getEqnsFor? (declName : Name) : MetaM (Option (Array Name)) := withLCtx {} {} do
-- This is the entry point for lazy equation generation. Ignore the current value
-- of the options, and revert to the default.
withOptions (eqnAffectingOptions.foldl fun os o => o.set os o.defValue) do
withEqnOptions declName do
getEqnsFor?Core declName
/--
If any equation theorem affecting option is not the default value, create the equations now.
If any equation theorem affecting option is not the default value, store the option values
for later use during lazy equation generation.
-/
def generateEagerEqns (declName : Name) : MetaM Unit := do
def saveEqnAffectingOptions (declName : Name) : MetaM Unit := do
let opts ← getOptions
if eqnAffectingOptions.any fun o => o.get opts != o.defValue then
trace[Elab.definition.eqns] "generating eager equations for {declName}"
let _ ← getEqnsFor?Core declName
let mut nonDefaults : Array (Name × DataValue) := #[]
for o in eqnAffectingOptions do
if o.get opts != o.defValue then
nonDefaults := nonDefaults.push (o.name, KVMap.Value.toDataValue (o.get opts))
unless nonDefaults.isEmpty do
trace[Elab.definition.eqns] "saving equation-affecting options for {declName}"
modifyEnv (eqnOptionsExt.insert · declName nonDefaults)
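The save/restore discipline above can be sketched over plain association lists (hypothetical names and defaults, standalone): `save` records only non-default values at definition time; `restore` resets every option to its default and then replays the saved deltas, mirroring `saveEqnAffectingOptions` / `withEqnOptions`.

```lean
def defaults : List (String × Bool) :=
  [("eqns.nonrecursive", true), ("eqns.deepRecursiveSplit", true)]

-- Keep only the options whose current value differs from the default.
def save (current : String → Bool) : List (String × Bool) :=
  defaults.filterMap fun (n, d) =>
    if current n != d then some (n, current n) else none

-- Default everything, then replay the recorded deltas.
def restore (saved : List (String × Bool)) (n : String) : Bool :=
  match saved.lookup n with
  | some v => v
  | none   => (defaults.lookup n).getD true

#eval save (fun n => n != "eqns.nonrecursive")  -- [("eqns.nonrecursive", false)]
```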
@[expose] def GetUnfoldEqnFn := Name → MetaM (Option Name)


@@ -38,7 +38,9 @@ and assigning `?m := max ?n v`
private def solveSelfMax (mvarId : LMVarId) (v : Level) : MetaM Unit := do
assert! v.isMax
let n ← mkFreshLevelMVar
assignLevelMVar mvarId <| mkMaxArgsDiff mvarId v n
let v' := mkMaxArgsDiff mvarId v n
trace[Meta.isLevelDefEq.step] "solveSelfMax: {mkLevelMVar mvarId} := {v'}"
assignLevelMVar mvarId v'
/--
Returns true if `v` is `max u ?m` (or variant). That is, we solve `u =?= max u ?m` as `?m := u`.
@@ -53,6 +55,7 @@ private def tryApproxSelfMax (u v : Level) : MetaM Bool := do
where
solve (v' : Level) (mvarId : LMVarId) : MetaM Bool := do
if u == v' then
trace[Meta.isLevelDefEq.step] "tryApproxSelfMax {mkLevelMVar mvarId} := {u}"
assignLevelMVar mvarId u
return true
else
@@ -71,8 +74,14 @@ private def tryApproxMaxMax (u v : Level) : MetaM Bool := do
| _, _ => return false
where
solve (u₁ u₂ v' : Level) (mvarId : LMVarId) : MetaM Bool := do
if u₁ == v' then assignLevelMVar mvarId u₂; return true
else if u₂ == v' then assignLevelMVar mvarId u₁; return true
if u₁ == v' then
trace[Meta.isLevelDefEq.step] "tryApproxMaxMax {mkLevelMVar mvarId} := {u₂}"
assignLevelMVar mvarId u₂
return true
else if u₂ == v' then
trace[Meta.isLevelDefEq.step] "tryApproxMaxMax {mkLevelMVar mvarId} := {u₁}"
assignLevelMVar mvarId u₁
return true
else return false
private def postponeIsLevelDefEq (lhs : Level) (rhs : Level) : MetaM Unit := do
@@ -97,9 +106,11 @@ mutual
else if (← isMVarWithGreaterDepth v mvarId) then
-- If both `u` and `v` are both metavariables, but depth of v is greater, then we assign `v := u`.
-- This can only happen when levelAssignDepth is set to a smaller value than depth (e.g. during TC synthesis)
trace[Meta.isLevelDefEq.step] "{v} := {u}"
assignLevelMVar v.mvarId! u
return LBool.true
else if !u.occurs v then
trace[Meta.isLevelDefEq.step] "{u} := {v}"
assignLevelMVar u.mvarId! v
return LBool.true
else if v.isMax && !strictOccursMax u v then
@@ -133,8 +144,9 @@ mutual
@[export lean_is_level_def_eq]
partial def isLevelDefEqAuxImpl : Level → Level → MetaM Bool
| Level.succ lhs, Level.succ rhs => isLevelDefEqAux lhs rhs
| lhs, rhs =>
withTraceNode `Meta.isLevelDefEq (fun _ => return m!"{lhs} =?= {rhs}") do
| lhs, rhs => do
withTraceNodeBefore `Meta.isLevelDefEq (fun _ =>
withOptions (·.set `pp.instantiateMVars false) do addMessageContext m!"{lhs} =?= {rhs}") do
if lhs.getLevelOffset == rhs.getLevelOffset then
return lhs.getOffset == rhs.getOffset
else


@@ -9,19 +9,37 @@ public import Lean.Meta.Sym.ExprPtr
public import Lean.Meta.Basic
import Lean.Meta.Transform
namespace Lean.Meta.Sym
/--
Returns `true` if `e` contains a loose bound variable with index in `[0, n)`
This function assumes `n` is small. If this becomes a bottleneck, we should
implement a version of `lean_expr_has_loose_bvar` that checks the range in one traversal.
-/
def hasLooseBVarsInRange (e : Expr) (n : Nat) : Bool :=
e.hasLooseBVars && go n
where
go : Nat → Bool
| 0 => false
| i+1 => e.hasLooseBVar i || go i
/--
Checks if `body` is eta-expanded with `n` applications: `f (.bvar (n-1)) ... (.bvar 0)`.
Returns `f` if so and `f` has no loose bvars; otherwise returns `default`.
Returns `f` if so and `f` has no loose bvars with indices in the range `[0, n)`; otherwise returns `default`.
- `n`: number of remaining applications to check
- `i`: expected bvar index (starts at 0, increments with each application)
- `default`: returned when not eta-reducible (enables pointer equality check)
-/
def etaReduceAux (body : Expr) (n : Nat) (i : Nat) (default : Expr) : Expr := Id.run do
match n with
| 0 => if body.hasLooseBVars then default else body
| n+1 =>
let .app f (.bvar j) := body | default
if j == i then etaReduceAux f n (i+1) default else default
def etaReduceAux (body : Expr) (n : Nat) (i : Nat) (default : Expr) : Expr :=
go body n i
where
go (body : Expr) (m : Nat) (i : Nat) : Expr := Id.run do
match m with
| 0 =>
if hasLooseBVarsInRange body n then default
else body.lowerLooseBVars n n
| m+1 =>
let .app f (.bvar j) := body | default
if j == i then go f m (i+1) else default
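The unrolled range check used above can be modeled standalone (hypothetical `hasAnyInRange`, abstracting `Expr.hasLooseBVar` to a plain predicate):

```lean
-- Scans indices n-1, …, 0 one at a time, like `hasLooseBVarsInRange.go`;
-- linear in `n`, which the docstring assumes is small.
def hasAnyInRange (hasBVar : Nat → Bool) : Nat → Bool
  | 0     => false
  | i + 1 => hasBVar i || hasAnyInRange hasBVar i

#eval hasAnyInRange (· == 2) 3  -- true:  2 is in [0, 3)
#eval hasAnyInRange (· == 3) 3  -- false: 3 is outside [0, 3)
```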
/--
If `e` is of the form `(fun x₁ ... xₙ => f x₁ ... xₙ)` and `f` does not contain `x₁`, ..., `xₙ`,


@@ -48,6 +48,8 @@ def introCore (mvarId : MVarId) (max : Nat) (names : Array Name) : SymM (Array F
assignDelayedMVar auxMVar.mvarId! fvars mvarId'
mvarId.assign val
let finalize (lctx : LocalContext) (localInsts : LocalInstances) (fvars : Array Expr) (type : Expr) : SymM (Array Expr × MVarId) := do
if fvars.isEmpty then
return (#[], mvarId)
let type ← instantiateRevS type fvars
let mvar' ← mkFreshExprMVarAt lctx localInsts type .syntheticOpaque mvarDecl.userName
let mvarId' := mvar'.mvarId!

View File

@@ -128,7 +128,6 @@ def postprocessAppMVars (tacticName : Name) (mvarId : MVarId) (newMVars : Array
(synthAssignedInstances := true) (allowSynthFailures := false) : MetaM Unit := do
synthAppInstances tacticName mvarId newMVars binderInfos synthAssignedInstances allowSynthFailures
-- TODO: default and auto params
appendParentTag mvarId newMVars binderInfos
private def dependsOnOthers (mvar : Expr) (otherMVars : Array Expr) : MetaM Bool :=
otherMVars.anyM fun otherMVar => do
@@ -223,6 +222,7 @@ def _root_.Lean.MVarId.apply (mvarId : MVarId) (e : Expr) (cfg : ApplyConfig :=
let e ← instantiateMVars e
mvarId.assign (mkAppN e newMVars)
let newMVars ← newMVars.filterM fun mvar => not <$> mvar.mvarId!.isAssigned
appendParentTag mvarId newMVars binderInfos
let otherMVarIds ← getMVarsNoDelayed e
let newMVarIds ← reorderGoals newMVars cfg.newGoals
let otherMVarIds := otherMVarIds.filter fun mvarId => !newMVarIds.contains mvarId

View File

@@ -148,11 +148,14 @@ def propagatePending : OrderM Unit := do
- `h₁ : ↑ue' = ue`
- `h₂ : ↑ve' = ve`
- `h : ue = ve`
**Note**: We currently only support `Nat`. Thus `↑a` is actually
`NatCast.natCast a`. If we decide to support arbitrary semirings
in this module, we must adjust this code.
**Note**: We currently only support `Nat` originals. Thus `↑a` is actually
`NatCast.natCast a`. The lemma `nat_eq` is specialized to `Int`, so we
only invoke it when the cast destination is `Int`. For other types (e.g.
`Rat`), `pushEq ue ve h` above is sufficient and `grind` core can derive
the `Nat` equality via `norm_cast`/cast injectivity if needed.
-/
pushEq ue' ve' <| mkApp7 (mkConst ``Grind.Order.nat_eq) ue' ve' ue ve h₁ h₂ h
if (← inferType ue) == Int.mkType then
pushEq ue' ve' <| mkApp7 (mkConst ``Grind.Order.nat_eq) ue' ve' ue ve h₁ h₂ h
where
/--
If `e` is an auxiliary term used to represent some term `a`, returns
@@ -343,7 +346,7 @@ def getStructIdOf? (e : Expr) : GoalM (Option Nat) := do
return (← get').exprToStructId.find? { expr := e }
def propagateIneq (e : Expr) : GoalM Unit := do
if let some (e', he) := (← get').termMap.find? { expr := e } then
if let some { e := e', h := he, .. } := (← get').termMap.find? { expr := e } then
go e' (some he)
else
go e none
@@ -369,20 +372,27 @@ builtin_grind_propagator propagateLT ↓LT.lt := propagateIneq
public def processNewEq (a b : Expr) : GoalM Unit := do
unless isSameExpr a b do
let h ← mkEqProof a b
if let some (a', h₁) ← getAuxTerm? a then
let some (b', h₂) ← getAuxTerm? b | return ()
if let some { e := a', h := h₁, α } ← getAuxTerm? a then
let some { e := b', h := h₂, .. } ← getAuxTerm? b | return ()
/-
We have
- `h : a = b`
- `h₁ : ↑a = a'`
- `h₂ : ↑b = b'`
where `a'` and `b'` are `NatCast.natCast α inst _` for some type `α`.
-/
let h := mkApp7 (mkConst ``Grind.Order.of_nat_eq) a b a' b' h₁ h₂ h
go a' b' h
if α == Int.mkType then
let h := mkApp7 (mkConst ``Grind.Order.of_nat_eq) a b a' b' h₁ h₂ h
go a' b' h
else
let u ← getDecLevel α
let inst ← synthInstance (mkApp (mkConst ``NatCast [u]) α)
let h := mkApp9 (mkConst ``Grind.Order.of_natCast_eq [u]) α inst a b a' b' h₁ h₂ h
go a' b' h
else
go a b h
where
getAuxTerm? (e : Expr) : GoalM (Option (Expr × Expr)) := do
getAuxTerm? (e : Expr) : GoalM (Option TermMapEntry) := do
return (← get').termMap.find? { expr := e }
go (a b h : Expr) : GoalM Unit := do

View File

@@ -166,9 +166,9 @@ def setStructId (e : Expr) : OrderM Unit := do
exprToStructId := s.exprToStructId.insert { expr := e } structId
}
def updateTermMap (e eNew h : Expr) : GoalM Unit := do
def updateTermMap (e eNew h α : Expr) : GoalM Unit := do
modify' fun s => { s with
termMap := s.termMap.insert { expr := e } (eNew, h)
termMap := s.termMap.insert { expr := e } { e := eNew, h, α }
termMapInv := s.termMapInv.insert { expr := eNew } (e, h)
}
@@ -198,9 +198,9 @@ where
getOriginal? (e : Expr) : GoalM (Option Expr) := do
if let some (e', _) := (← get').termMapInv.find? { expr := e } then
return some e'
let_expr NatCast.natCast _ _ a := e | return none
let_expr NatCast.natCast α _ a := e | return none
if (← alreadyInternalized a) then
updateTermMap a e (← mkEqRefl e)
updateTermMap a e (← mkEqRefl e) α
return some a
else
return none
@@ -290,7 +290,7 @@ def internalizeTerm (e : Expr) : OrderM Unit := do
open Arith.Cutsat in
def adaptNat (e : Expr) : GoalM Expr := do
if let some (eNew, _) := (← get').termMap.find? { expr := e } then
if let some { e := eNew, .. } := (← get').termMap.find? { expr := e } then
return eNew
else match_expr e with
| LE.le _ _ lhs rhs => adaptCnstr lhs rhs (isLT := false)
@@ -307,12 +307,12 @@ where
let h := mkApp6
(mkConst (if isLT then ``Nat.ToInt.lt_eq else ``Nat.ToInt.le_eq))
lhs rhs lhs' rhs' h₁ h₂
updateTermMap e eNew h
updateTermMap e eNew h (← getIntExpr)
return eNew
adaptTerm : GoalM Expr := do
let (eNew, h) ← natToInt e
updateTermMap e eNew h
updateTermMap e eNew h (← getIntExpr)
return eNew
def adapt (α : Expr) (e : Expr) : GoalM (Expr × Expr) := do

View File

@@ -128,6 +128,13 @@ structure Struct where
propagate : List ToPropagate := []
deriving Inhabited
/-- Entry/Value for the map `termMap` in `State` -/
structure TermMapEntry where
e : Expr
h : Expr
α : Expr
deriving Inhabited
/-- State for all order types detected by `grind`. -/
structure State where
/-- Order structures detected. -/
@@ -143,7 +150,7 @@ structure State where
Example: given `x y : Nat`, `x ≤ y + 1` is mapped to `Int.ofNat x ≤ Int.ofNat y + 1`, and proof
of equivalence.
-/
termMap : PHashMap ExprPtr (Expr × Expr) := {}
termMap : PHashMap ExprPtr TermMapEntry := {}
/-- `termMap` inverse -/
termMapInv : PHashMap ExprPtr (Expr × Expr) := {}
deriving Inhabited

View File

@@ -82,6 +82,7 @@ def _root_.Lean.MVarId.rewrite (mvarId : MVarId) (e : Expr) (heq : Expr)
postprocessAppMVars `rewrite mvarId newMVars binderInfos
(synthAssignedInstances := !tactic.skipAssignedInstances.get (← getOptions))
let newMVarIds ← newMVars.map Expr.mvarId! |>.filterM fun mvarId => not <$> mvarId.isAssigned
appendParentTag mvarId newMVars binderInfos
let otherMVarIds ← getMVarsNoDelayed heqIn
let otherMVarIds := otherMVarIds.filter (!newMVarIds.contains ·)
let newMVarIds := newMVarIds ++ otherMVarIds

View File

@@ -145,7 +145,6 @@ public partial def wrapInstance (inst expectedType : Expr) (compile : Bool := tr
else
let name ← mkAuxDeclName
let wrapped ← mkAuxDefinition name expectedType inst (compile := false)
setReducibilityStatus name .implicitReducible
if isMeta then modifyEnv (markMeta · name)
if compile then
compileDecls (logErrors := logCompileErrors) #[name]

View File

@@ -273,6 +273,15 @@ with debug assertions enabled (see the `debugAssertions` option).
@[builtin_doElem_parser] def doDebugAssert := leading_parser:leadPrec
"debug_assert! " >> termParser
@[builtin_doElem_parser] def doRepeat := leading_parser
"repeat " >> doSeq
@[builtin_doElem_parser] def doWhileH := leading_parser
"while " >> ident >> " : " >> withForbidden "do" termParser >> " do " >> doSeq
@[builtin_doElem_parser] def doWhile := leading_parser
"while " >> withForbidden "do" termParser >> " do " >> doSeq
@[builtin_doElem_parser] def doRepeatUntil := leading_parser
"repeat " >> doSeq >> ppDedent ppLine >> "until " >> termParser
/-
We use `notFollowedBy` to avoid counterintuitive behavior.

View File

@@ -484,6 +484,7 @@ open SubExpr (Pos PosMap)
open Delaborator (OptionsPerPos topDownAnalyze DelabM getPPOption)
def delabLevel (l : Level) (prec : Nat) : MetaM Syntax.Level := do
let l ← if getPPInstantiateMVars (← getOptions) then instantiateLevelMVars l else pure l
let mvars := getPPMVarsLevels (← getOptions)
return Level.quote l prec (mvars := mvars) (lIndex? := (← getMCtx).findLevelIndex?)

View File

@@ -77,8 +77,6 @@ def OutputMessage.ofMsg (msg : JsonRpc.Message) : OutputMessage where
msg? := msg
serialized := toJson msg |>.compress
open Widget in
structure WorkerContext where
/-- Synchronized output channel for LSP messages. Notifications for outdated versions are
discarded on read. -/
@@ -89,10 +87,6 @@ structure WorkerContext where
-/
maxDocVersionRef : IO.Ref Int
freshRequestIdRef : IO.Ref Int
/--
Diagnostics that are included in every single `textDocument/publishDiagnostics` notification.
-/
stickyDiagnosticsRef : IO.Ref (Array InteractiveDiagnostic)
partialHandlersRef : IO.Ref (Std.TreeMap String PartialHandlerInfo)
pendingServerRequestsRef : IO.Ref (Std.TreeMap RequestID (IO.Promise (ServerRequestResponse Json)))
hLog : FS.Stream
@@ -208,19 +202,11 @@ This option can only be set on the command line, not in the lakefile or via `set
diags : Array Widget.InteractiveDiagnostic
deriving TypeName
/--
Sends a `textDocument/publishDiagnostics` notification to the client that contains the diagnostics
in `ctx.stickyDiagnosticsRef` and `doc.diagnosticsRef`.
-/
/-- Sends a `textDocument/publishDiagnostics` notification to the client. -/
private def publishDiagnostics (ctx : WorkerContext) (doc : EditableDocumentCore)
: BaseIO Unit := do
let stickyInteractiveDiagnostics ← ctx.stickyDiagnosticsRef.get
let docInteractiveDiagnostics ← doc.diagnosticsRef.get
let diagnostics :=
stickyInteractiveDiagnostics ++ docInteractiveDiagnostics
|>.map (·.toDiagnostic)
let notification := mkPublishDiagnosticsNotification doc.meta diagnostics
ctx.chanOut.sync.send <| .ofMsg notification
let supportsIncremental := ctx.initParams.capabilities.incrementalDiagnosticSupport
doc.publishDiagnostics supportsIncremental fun notif => ctx.chanOut.sync.send <| .ofMsg notif
open Language in
/--
@@ -321,7 +307,7 @@ This option can only be set on the command line, not in the lakefile or via `set
if let some cacheRef := node.element.diagnostics.interactiveDiagsRef? then
cacheRef.set <| some <| .mk { diags : MemorizedInteractiveDiagnostics }
pure diags
doc.diagnosticsRef.modify (· ++ diags)
doc.appendDiagnostics diags
if (← get).hasBlocked then
publishDiagnostics ctx doc
@@ -463,7 +449,7 @@ section Initialization
let clientHasWidgets := initParams.initializationOptions?.bind (·.hasWidgets?) |>.getD false
let maxDocVersionRef ← IO.mkRef 0
let freshRequestIdRef ← IO.mkRef (0 : Int)
let stickyDiagnosticsRef ← IO.mkRef ∅
let stickyDiagsRef ← IO.mkRef {}
let pendingServerRequestsRef ← IO.mkRef ∅
let chanOut ← mkLspOutputChannel maxDocVersionRef
let timestamp ← IO.monoMsNow
@@ -493,11 +479,10 @@ section Initialization
maxDocVersionRef
freshRequestIdRef
cmdlineOpts := opts
stickyDiagnosticsRef
}
let diagnosticsMutex ← Std.Mutex.new { stickyDiagsRef }
let doc : EditableDocumentCore := {
«meta» := doc, initSnap
diagnosticsRef := (← IO.mkRef ∅)
«meta» := doc, initSnap, diagnosticsMutex
}
let reporterCancelTk ← CancelToken.new
let reporter ← reportSnapshots ctx doc reporterCancelTk
@@ -578,14 +563,11 @@ section Updates
modify fun st => { st with pendingRequests := map st.pendingRequests }
/-- Given the new document, updates editable doc state. -/
def updateDocument (doc : DocumentMeta) : WorkerM Unit := do
def updateDocument («meta» : DocumentMeta) : WorkerM Unit := do
(← get).reporterCancelTk.set
let ctx ← read
let initSnap ← ctx.processor doc.mkInputContext
let doc : EditableDocumentCore := {
«meta» := doc, initSnap
diagnosticsRef := (← IO.mkRef ∅)
}
let initSnap ← ctx.processor «meta».mkInputContext
let doc ← (← get).doc.update «meta» initSnap
let reporterCancelTk ← CancelToken.new
let reporter ← reportSnapshots ctx doc reporterCancelTk
modify fun st => { st with doc := { doc with reporter }, reporterCancelTk }
@@ -637,18 +619,16 @@ section NotificationHandling
let ctx ← read
let s ← get
let text := s.doc.meta.text
let importOutOfDataMessage := .text s!"Imports are out of date and should be rebuilt; \
use the \"Restart File\" command in your editor."
let importOutOfDateMessage :=
.text s!"Imports are out of date and should be rebuilt; \
use the \"Restart File\" command in your editor."
let diagnostic := {
range := ⟨⟨0, 0⟩, ⟨1, 0⟩⟩
fullRange? := some ⟨⟨0, 0⟩, text.utf8PosToLspPos text.source.rawEndPos⟩
severity? := DiagnosticSeverity.information
message := importOutOfDataMessage
message := importOutOfDateMessage
}
ctx.stickyDiagnosticsRef.modify fun stickyDiagnostics =>
let stickyDiagnostics := stickyDiagnostics.filter
(·.message.stripTags != importOutOfDataMessage.stripTags)
stickyDiagnostics.push diagnostic
s.doc.appendStickyDiagnostic diagnostic
publishDiagnostics ctx s.doc.toEditableDocumentCore
def handleRpcRelease (p : Lsp.RpcReleaseParams) : WorkerM Unit := do
@@ -759,19 +739,17 @@ section MessageHandling
open Widget RequestM Language in
def handleGetInteractiveDiagnosticsRequest
(ctx : WorkerContext)
(doc : EditableDocument)
(params : GetInteractiveDiagnosticsParams)
: RequestM (Array InteractiveDiagnostic) := do
let doc ← readDoc
-- NOTE: always uses latest document (which is the only one we can retrieve diagnostics for);
-- any race should be temporary as the client should re-request interactive diagnostics when
-- they receive the non-interactive diagnostics for the new document
let stickyDiags ← ctx.stickyDiagnosticsRef.get
let diags ← doc.diagnosticsRef.get
let allDiags ← doc.collectCurrentDiagnostics
-- NOTE: does not wait for `lineRange?` to be fully elaborated, which would be problematic with
-- fine-grained incremental reporting anyway; instead, the client is obligated to resend the
-- request when the non-interactive diagnostics of this range have changed
return (stickyDiags ++ diags).filter fun diag =>
return PersistentArray.toArray <| allDiags.filter fun diag =>
let r := diag.fullRange
let diagStartLine := r.start.line
let diagEndLine :=
@@ -784,7 +762,7 @@ section MessageHandling
s ≤ diagStartLine ∧ diagStartLine < e ∨
diagStartLine ≤ s ∧ s < diagEndLine
def handlePreRequestSpecialCases? (ctx : WorkerContext) (st : WorkerState)
def handlePreRequestSpecialCases? (st : WorkerState)
(id : RequestID) (method : String) (params : Json)
: RequestM (Option (RequestTask SerializedLspResponse)) := do
match method with
@@ -795,7 +773,7 @@ section MessageHandling
let some seshRef := st.rpcSessions.get? params.sessionId
| throw RequestError.rpcNeedsReconnect
let params ← RequestM.parseRequestParams Widget.GetInteractiveDiagnosticsParams params.params
let resp ← handleGetInteractiveDiagnosticsRequest ctx params
let resp ← handleGetInteractiveDiagnosticsRequest st.doc params
let resp ← seshRef.modifyGet fun st =>
rpcEncode resp st.objects |>.map (·) ({st with objects := ·})
return some <| .pure { response? := resp, serialized := resp.compress, isComplete := true }
@@ -925,7 +903,7 @@ section MessageHandling
serverRequestEmitter := sendUntypedServerRequest ctx
}
let requestTask? ← EIO.toIO' <| RequestM.run (rc := rc) do
if let some response ← handlePreRequestSpecialCases? ctx st id method params then
if let some response ← handlePreRequestSpecialCases? st id method params then
return response
let task ← handleLspRequest method params
let task ← handlePostRequestSpecialCases id method params task

View File

@@ -10,6 +10,7 @@ prelude
public import Lean.Language.Lean.Types
public import Lean.Server.Snapshots
public import Lean.Server.AsyncList
public import Std.Sync.Mutex
public section
@@ -39,6 +40,26 @@ where
| some next => .delayed <| next.task.asServerTask.bindCheap go
| none => .nil)
/--
Tracks diagnostics and incremental diagnostic reporting state for a single document version.
The sticky diagnostics are shared across all document versions via an `IO.Ref`, while per-version
diagnostics are stored directly. The whole state is wrapped in a `Std.Mutex` on
`EditableDocumentCore` to ensure atomic updates.
-/
structure DiagnosticsState where
/--
Diagnostics that persist across document versions (e.g. stale dependency warnings).
Shared across all versions via an `IO.Ref`.
-/
stickyDiagsRef : IO.Ref (PersistentArray Widget.InteractiveDiagnostic)
/-- Diagnostics accumulated during snapshot reporting. -/
diags : PersistentArray Widget.InteractiveDiagnostic := {}
/-- Whether the next `publishDiagnostics` call should be incremental. -/
isIncremental : Bool := false
/-- Amount of diagnostics reported in `publishDiagnostics` so far. -/
publishedDiagsAmount : Nat := 0
/--
A document bundled with processing information. Turned into `EditableDocument` as soon as the
reporter task has been started.
@@ -50,11 +71,94 @@ structure EditableDocumentCore where
initSnap : Language.Lean.InitialSnapshot
/-- Old representation for backward compatibility. -/
cmdSnaps : AsyncList IO.Error Snapshot := private_decl% mkCmdSnaps initSnap
/--
Interactive versions of diagnostics reported so far. Filled by `reportSnapshots` and read by
`handleGetInteractiveDiagnosticsRequest`.
-/
diagnosticsRef : IO.Ref (Array Widget.InteractiveDiagnostic)
/-- Per-version diagnostics state, protected by a mutex. -/
diagnosticsMutex : Std.Mutex DiagnosticsState
namespace EditableDocumentCore
open Widget
/-- Appends new non-sticky diagnostics. -/
def appendDiagnostics (doc : EditableDocumentCore) (diags : Array InteractiveDiagnostic) :
BaseIO Unit :=
doc.diagnosticsMutex.atomically do
modify fun ds => { ds with diags := diags.foldl (init := ds.diags) fun acc d => acc.push d }
/--
Appends a sticky diagnostic and marks the next publish as non-incremental.
Removes any existing sticky diagnostic whose `message.stripTags` matches the new one.
-/
def appendStickyDiagnostic (doc : EditableDocumentCore) (diagnostic : InteractiveDiagnostic) :
BaseIO Unit :=
doc.diagnosticsMutex.atomically do
let ds ← get
ds.stickyDiagsRef.modify fun stickyDiags =>
let stickyDiags := stickyDiags.filter
(·.message.stripTags != diagnostic.message.stripTags)
stickyDiags.push diagnostic
set { ds with isIncremental := false }
/-- Returns all current diagnostics (sticky ++ doc). -/
def collectCurrentDiagnostics (doc : EditableDocumentCore) :
BaseIO (PersistentArray InteractiveDiagnostic) :=
doc.diagnosticsMutex.atomically do
let ds ← get
let stickyDiags ← ds.stickyDiagsRef.get
return stickyDiags ++ ds.diags
/--
Creates a new `EditableDocumentCore` for a new document version, sharing the same sticky
diagnostics with the previous version.
-/
def update (doc : EditableDocumentCore) (newMeta : DocumentMeta)
(newInitSnap : Language.Lean.InitialSnapshot) : BaseIO EditableDocumentCore := do
let stickyDiagsRef ← doc.diagnosticsMutex.atomically do
return (← get).stickyDiagsRef
let diagnosticsMutex ← Std.Mutex.new { stickyDiagsRef }
return { «meta» := newMeta, initSnap := newInitSnap, diagnosticsMutex }
/--
Collects diagnostics for a `textDocument/publishDiagnostics` notification, updates
the incremental tracking fields and writes the notification to the client.
When `incrementalDiagnosticSupport` is `true` and the state allows it, sends only
the newly added diagnostics with `isIncremental? := some true`. Otherwise, sends
all sticky and non-sticky diagnostics non-incrementally.
The state update and the write are performed atomically under the diagnostics mutex
to prevent reordering between concurrent publishers (the reporter task and the main thread).
-/
def publishDiagnostics (doc : EditableDocumentCore) (incrementalDiagnosticSupport : Bool)
(writeDiagnostics : JsonRpc.Notification Lsp.PublishDiagnosticsParams → BaseIO Unit) :
BaseIO Unit := do
-- The mutex must be held across both the state update and the write to ensure that concurrent
-- publishers (e.g. the reporter task and the main thread) cannot interleave their state reads
-- and writes, which would reorder incremental/non-incremental messages and corrupt client state.
doc.diagnosticsMutex.atomically do
let ds ← get
let useIncremental := incrementalDiagnosticSupport && ds.isIncremental
let stickyDiags ← ds.stickyDiagsRef.get
let diags := ds.diags
let publishedDiagsAmount := ds.publishedDiagsAmount
set <| { ds with publishedDiagsAmount := diags.size, isIncremental := true }
let (diagsToSend, isIncremental) :=
if useIncremental then
let newDiags := diags.foldl (init := #[]) (start := publishedDiagsAmount) fun acc d =>
acc.push d.toDiagnostic
(newDiags, true)
else
let allDiags := stickyDiags.foldl (init := #[]) fun acc d =>
acc.push d.toDiagnostic
let allDiags := diags.foldl (init := allDiags) fun acc d =>
acc.push d.toDiagnostic
(allDiags, false)
let isIncremental? :=
if incrementalDiagnosticSupport then
some isIncremental
else
none
writeDiagnostics <| mkPublishDiagnosticsNotification doc.meta diagsToSend isIncremental?
end EditableDocumentCore
/-- `EditableDocumentCore` with reporter task. -/
structure EditableDocument extends EditableDocumentCore where

View File

@@ -152,9 +152,9 @@ def protocolOverview : Array MessageOverview := #[
.notification {
method := "textDocument/publishDiagnostics"
direction := .serverToClient
kinds := #[.extendedParameterType #[``PublishDiagnosticsParams.diagnostics, ``DiagnosticWith.fullRange?, ``DiagnosticWith.isSilent?, ``DiagnosticWith.leanTags?]]
kinds := #[.extendedParameterType #[``PublishDiagnosticsParams.isIncremental?, ``PublishDiagnosticsParams.diagnostics, ``DiagnosticWith.fullRange?, ``DiagnosticWith.isSilent?, ``DiagnosticWith.leanTags?]]
parameterType := PublishDiagnosticsParams
description := "Emitted by the language server whenever a new set of diagnostics becomes available for a file. Unlike most language servers, the Lean language server emits this notification incrementally while processing the file, not only when the full file has been processed."
description := "Emitted by the language server whenever a new set of diagnostics becomes available for a file. Unlike most language servers, the Lean language server emits this notification incrementally while processing the file, not only when the full file has been processed. If the client sets `LeanClientCapabilities.incrementalDiagnosticSupport` and `isIncremental` is `true`, the diagnostics in the notification should be appended to the existing diagnostics for the same document version rather than replacing them."
},
.notification {
method := "$/lean/fileProgress"

View File

@@ -723,6 +723,7 @@ partial def main (args : List String) : IO Unit := do
}
}
lean? := some {
incrementalDiagnosticSupport? := some true
silentDiagnosticSupport? := some true
rpcWireFormat? := some .v1
}

View File

@@ -133,12 +133,14 @@ def foldDocumentChanges (changes : Array Lsp.TextDocumentContentChangeEvent) (ol
changes.foldl applyDocumentChange oldText
/-- Constructs a `textDocument/publishDiagnostics` notification. -/
def mkPublishDiagnosticsNotification (m : DocumentMeta) (diagnostics : Array Lsp.Diagnostic) :
def mkPublishDiagnosticsNotification (m : DocumentMeta) (diagnostics : Array Lsp.Diagnostic)
(isIncremental : Option Bool := none) :
JsonRpc.Notification Lsp.PublishDiagnosticsParams where
method := "textDocument/publishDiagnostics"
param := {
uri := m.uri
version? := some m.version
isIncremental? := isIncremental
diagnostics := diagnostics
}

19
src/Std/Async.lean Normal file
View File

@@ -0,0 +1,19 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Std.Async.Basic
public import Std.Async.ContextAsync
public import Std.Async.Timer
public import Std.Async.TCP
public import Std.Async.UDP
public import Std.Async.DNS
public import Std.Async.Select
public import Std.Async.Process
public import Std.Async.System
public import Std.Async.Signal
public import Std.Async.IO

View File

@@ -12,8 +12,6 @@ public import Init.While
public section
namespace Std
namespace Internal
namespace IO
namespace Async
/-!
@@ -997,6 +995,4 @@ def background [Monad m] [MonadAsync t m] (action : m α) (prio := Task.Priority
discard (async (t := t) (prio := prio) action)
end Async
end IO
end Internal
end Std

View File

@@ -7,7 +7,7 @@ module
prelude
public import Std.Internal.UV
public import Std.Internal.Async.Timer
public import Std.Async.Timer
public import Std.Sync.CancellationContext
public section
@@ -18,8 +18,6 @@ cooperative cancellation support that must be explicitly checked for and cancell
-/
namespace Std
namespace Internal
namespace IO
namespace Async
/--
@@ -266,6 +264,4 @@ def Selector.cancelled : ContextAsync (Selector Unit) := do
ContextAsync.doneSelector
end Async
end IO
end Internal
end Std

View File

@@ -8,14 +8,14 @@ module
prelude
public import Std.Time
public import Std.Internal.UV
public import Std.Internal.Async.Basic
public import Std.Async.Basic
public import Init.Data.Function
public section
open Std.Internal
namespace Std
namespace Internal
namespace IO
namespace Async
namespace DNS
@@ -59,6 +59,4 @@ def getNameInfo (host : @& SocketAddress) : Async NameInfo :=
end DNS
end Async
end IO
end Internal
end Std

View File

@@ -6,16 +6,15 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Async.Select
public import Std.Async.Select
public section
namespace Std
namespace Internal
namespace Async
namespace IO
open Std.Internal.IO.Async
open Std.Async
/-!
This module provides buffered asynchronous I/O operations for efficient reading and writing.
@@ -50,5 +49,4 @@ class AsyncStream (α : Type) (β : outParam Type) where
end IO
end Async
end Internal
end Std

View File

@@ -15,9 +15,9 @@ public section
open Std Time
open System
open Std.Internal
namespace Std
namespace Internal
namespace IO
namespace Process
@@ -239,5 +239,4 @@ def availableMemory : IO UInt64 :=
end Process
end IO
end Internal
end Std

View File

@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.Random
public import Std.Internal.Async.Basic
public import Std.Async.Basic
import Init.Data.ByteArray.Extra
import Init.Data.Array.Lemmas
import Init.Omega
@@ -21,8 +21,6 @@ The main entrypoint for users is `Selectable.one` and the various functions to p
-/
namespace Std
namespace Internal
namespace IO
namespace Async
/--
@@ -132,6 +130,8 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let gen := mkStdGen seed
let selectables := shuffleIt selectables gen
let gate ← IO.Promise.new
for selectable in selectables do
if let some val ← selectable.selector.tryFn then
let result ← selectable.cont val
@@ -141,11 +141,14 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let promise ← IO.Promise.new
for selectable in selectables do
if finished.get then
break
let waiterPromise ← IO.Promise.new
let waiter := Waiter.mk finished waiterPromise
selectable.selector.registerFn waiter
discard <| IO.bindTask (t := waiterPromise.result?) fun res? => do
discard <| IO.bindTask (t := waiterPromise.result?) (sync := true) fun res? => do
match res? with
| none =>
/-
@@ -157,18 +160,20 @@ partial def Selectable.one (selectables : Array (Selectable α)) : Async α := d
let async : Async _ :=
try
let res ← IO.ofExcept res
discard <| await gate.result?
for selectable in selectables do
selectable.selector.unregisterFn
let contRes ← selectable.cont res
promise.resolve (.ok contRes)
promise.resolve (.ok (← selectable.cont res))
catch e =>
promise.resolve (.error e)
async.toBaseIO
Async.ofPromise (pure promise)
gate.resolve ()
let result ← Async.ofPromise (pure promise)
return result
/--
Performs fair and data-loss free non-blocking multiplexing on the `Selectable`s in `selectables`.
@@ -224,6 +229,8 @@ def Selectable.combine (selectables : Array (Selectable α)) : IO (Selector α)
let derivedWaiter := Waiter.mk waiter.finished waiterPromise
selectable.selector.registerFn derivedWaiter
let barrier ← IO.Promise.new
discard <| IO.bindTask (t := waiterPromise.result?) fun res? => do
match res? with
| none => return (Task.pure (.ok ()))
@@ -231,6 +238,7 @@ def Selectable.combine (selectables : Array (Selectable α)) : IO (Selector α)
let async : Async _ := do
let mainPromise := waiter.promise
await barrier
for selectable in selectables do
selectable.selector.unregisterFn
@@ -248,6 +256,4 @@ def Selectable.combine (selectables : Array (Selectable α)) : IO (Selector α)
}
end Async
end IO
end Internal
end Std

View File

@@ -8,13 +8,11 @@ module
prelude
public import Std.Time
public import Std.Internal.UV.Signal
public import Std.Internal.Async.Select
public import Std.Async.Select
public section
namespace Std
namespace Internal
namespace IO
namespace Async
/--

View File

@@ -19,10 +19,9 @@ manipulation.
open Std Time
open System
open Std.Internal
namespace Std
namespace Internal
namespace IO
namespace Async
namespace System
@@ -316,6 +315,4 @@ def getGroup (groupId : GroupId) : IO (Option GroupInfo) := do
end System
end Async
end IO
end Internal
end Std

View File

@@ -8,13 +8,11 @@ module
prelude
public import Std.Time
public import Std.Internal.UV.TCP
public import Std.Internal.Async.Select
public import Std.Async.Select
public section
namespace Std
namespace Internal
namespace IO
namespace Async
namespace TCP
open Std.Net
@@ -258,6 +256,4 @@ end Client
end Socket
end TCP
end Async
end IO
end Internal
end Std

View File

@@ -8,14 +8,12 @@ module
prelude
public import Std.Time
public import Std.Internal.UV.Timer
public import Std.Internal.Async.Select
public import Std.Async.Select
public section
namespace Std
namespace Internal
namespace IO
namespace Async
/--
@@ -168,6 +166,4 @@ def stop (i : Interval) : IO Unit :=
end Interval
end Async
end IO
end Internal
end Std

View File

@@ -8,13 +8,11 @@ module
prelude
public import Std.Time
public import Std.Internal.UV.UDP
public import Std.Internal.Async.Select
public import Std.Async.Select
public section
namespace Std
namespace Internal
namespace IO
namespace Async
namespace UDP
@@ -192,6 +190,4 @@ end Socket
end UDP
end Async
end IO
end Internal
end Std

194
src/Std/Http.lean Normal file
View File

@@ -0,0 +1,194 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Http.Server
public import Std.Http.Test.Helpers
public section
/-!
# HTTP Library
A low-level HTTP/1.1 server implementation for Lean. This library provides a pure,
sans-I/O protocol implementation that can be used with the `Async` library or with
custom connection handlers.
## Overview
This module provides a complete HTTP/1.1 server implementation with support for:
- Request/response handling with directional streaming bodies
- Keep-alive connections
- Chunked transfer encoding
- Header validation and management
- Configurable timeouts and limits
**Sans I/O Architecture**: The core protocol logic doesn't perform any actual I/O itself -
it just defines how data should be processed. This separation allows the protocol implementation
to remain pure and testable, while different transports (TCP sockets, mock clients) handle
the actual reading and writing of bytes.
## Quick Start
The main entry point is `Server.serve`, which starts an HTTP/1.1 server. Implement the
`Server.Handler` type class to define how the server handles requests, errors, and
`Expect: 100-continue` headers:
```lean
import Std.Http
open Std Async
open Std Http Server
structure MyHandler
instance : Handler MyHandler where
onRequest _ req := do
Response.ok |>.text "Hello, World!"
def main : IO Unit := Async.block do
let addr : Net.SocketAddress := .v4 ⟨.ofParts 127 0 0 1, 8080⟩
let server ← Server.serve addr MyHandler.mk
server.waitShutdown
```
## Working with Requests
Incoming requests are represented by `Request Body.Stream`, which bundles the request
line, parsed headers, and a lazily-consumed body. Headers are available immediately,
while the body can be streamed or collected on demand, allowing handlers to process both
small and large payloads efficiently.
### Reading Headers
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
-- Access request method and URI
let method := req.head.method -- Method.get, Method.post, etc.
let uri := req.head.uri -- RequestTarget
-- Read a specific header
if let some contentType := req.head.headers.get? (.mk "content-type") then
IO.println s!"Content-Type: {contentType}"
Response.ok |>.text "OK"
```
### URI Query Semantics
`RequestTarget.query` is parsed using form-style key/value conventions (`k=v&...`), and `+` is decoded as a
space in query components. If you need RFC 3986 opaque query handling, use the raw request target string
(`toString req.head.uri`) and parse it with custom logic.
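For example, given a target like `/search?q=a+b`, form-style decoding reads the `q` parameter as `"a b"`, while the raw target string preserves the `+`. A sketch of falling back to the raw target (accessor spellings follow the examples in this module):
```lean
def rawTarget (req : Request Body.Stream) : String :=
  -- Byte-for-byte request target, e.g. "/search?q=a+b" with `+` preserved;
  -- parse this yourself for RFC 3986 opaque query semantics.
  toString req.head.uri
```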
### Reading the Request Body
The request body is exposed as `Body.Stream`, which can be consumed incrementally or
collected into memory. The `readAll` method reads the entire body, with an optional size
limit to protect against unbounded payloads.
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
-- Collect entire body as a String
let bodyStr : String ← req.body.readAll
-- Or with a maximum size limit
let bodyStr : String ← req.body.readAll (maximumSize := some 1024)
Response.ok |>.text s!"Received: {bodyStr}"
```
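For incremental consumption, the body can also be polled chunk by chunk via the `Body` typeclass's non-blocking `tryRecv`, which yields `none` when no data is ready yet, `some (some chunk)` for data, and `some none` at end-of-stream. A sketch (a real consumer would block on `recvSelector` rather than spin on `none`):
```lean
partial def drain [Http.Body α] (body : α) (acc : ByteArray := .empty) : Async ByteArray := do
  match (← Http.Body.tryRecv body) with
  | some (some chunk) => drain body (acc ++ chunk.data)  -- got a chunk, keep going
  | some none => return acc                              -- end-of-stream
  | none => drain body acc                               -- nothing ready yet (busy-wait, illustration only)
```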
## Building Responses
Responses are constructed using a builder API that starts from a status code and adds
headers and a body. Common helpers exist for text, HTML, JSON, and binary responses, while
still allowing full control over status codes and header values.
Response builders produce `Async (Response Body.Stream)`.
```lean
-- Text response
Response.ok |>.text "Hello!"
-- HTML response
Response.ok |>.html "<h1>Hello!</h1>"
-- JSON response
Response.ok |>.json "{\"key\": \"value\"}"
-- Binary response
Response.ok |>.bytes someByteArray
-- Custom status
Response.new |>.status .created |>.text "Resource created"
-- With custom headers
Response.ok
|>.header! "X-Custom-Header" "value"
|>.header! "Cache-Control" "no-cache"
|>.text "Response with headers"
```
### Streaming Responses
For large responses or server-sent events, use streaming:
```lean
def handler (req : Request Body.Stream) : ContextAsync (Response Body.Stream) := do
Response.ok
|>.header! "Content-Type" "text/plain"
|>.stream fun stream => do
for i in [0:10] do
stream.send { data := s!"chunk {i}\n".toUTF8 }
Async.sleep 1000
stream.close
```
## Server Configuration
Configure server behavior with `Config`:
```lean
def config : Config := {
maxRequests := 10000000,
lingeringTimeout := 5000,
}
let server ← Server.serve addr MyHandler.mk config
```
## Handler Type Class
Implement `Server.Handler` to define how the server processes events. The class has three
methods, all with default implementations:
- `onRequest` — called for each incoming request; returns a response inside `ContextAsync`
- `onFailure` — called when an error occurs while processing a request
- `onContinue` — called when a request includes an `Expect: 100-continue` header; return
`true` to accept the body or `false` to reject it
```lean
structure MyHandler where
greeting : String
instance : Handler MyHandler where
onRequest self req := do
Response.ok |>.text self.greeting
onFailure self err := do
IO.eprintln s!"Error: {err}"
```
The handler methods operate in the following monads:
- `onRequest` uses `ContextAsync` — an asynchronous monad (`ReaderT CancellationContext Async`) that provides:
- Full access to `Async` operations (spawning tasks, sleeping, concurrent I/O)
- A `CancellationContext` tied to the client connection — when the client disconnects, the
context is cancelled, allowing your handler to detect this and stop work early
- `onFailure` uses `Async`
- `onContinue` uses `Async`
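For instance, `onContinue` can reject an oversized upload before the client transmits the body. A sketch — the parameter shape mirrors `onRequest` above, and the header-value accessors are assumptions, not the exact API:
```lean
instance : Handler MyHandler where
  onRequest _ req := do
    Response.ok |>.text "accepted"
  onContinue _ req := do
    -- Reject the body up front when the declared length exceeds 1 MiB.
    -- Illustrative: accepts when Content-Length is absent or non-numeric.
    match req.head.headers.get? (.mk "content-length") with
    | some v => return v.value.toNat?.all (· ≤ 1024 * 1024)
    | none => return true
```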
-/

src/Std/Http/Data.lean Normal file

@@ -0,0 +1,24 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Http.Data.Method
public import Std.Http.Data.Version
public import Std.Http.Data.Request
public import Std.Http.Data.Response
public import Std.Http.Data.Status
public import Std.Http.Data.Chunk
public import Std.Http.Data.Headers
public import Std.Http.Data.URI
public import Std.Http.Data.Body
/-!
# HTTP Data Types
This module re-exports all HTTP data types including request/response structures,
methods, status codes, chunks, and extension metadata.
-/


@@ -6,12 +6,12 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Body.Basic
public import Std.Internal.Http.Data.Body.Length
public import Std.Internal.Http.Data.Body.Any
public import Std.Internal.Http.Data.Body.Stream
public import Std.Internal.Http.Data.Body.Empty
public import Std.Internal.Http.Data.Body.Full
public import Std.Http.Data.Body.Basic
public import Std.Http.Data.Body.Length
public import Std.Http.Data.Body.Any
public import Std.Http.Data.Body.Stream
public import Std.Http.Data.Body.Empty
public import Std.Http.Data.Body.Full
public section


@@ -6,7 +6,7 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Body.Basic
public import Std.Http.Data.Body.Basic
public section
@@ -18,7 +18,7 @@ type that also implements `Http.Body`. Used as the default handler response body
-/
namespace Std.Http.Body
open Std Internal IO Async
open Std Async
set_option linter.all true
@@ -48,6 +48,12 @@ structure Any where
-/
recvSelector : Selector (Option Chunk)
/--
Non-blocking receive attempt. Returns `none` if no chunk is immediately available,
`some (some chunk)` when a chunk is ready, or `some none` at end-of-stream.
-/
tryRecv : Async (Option (Option Chunk))
/--
Returns the declared size.
-/
@@ -67,6 +73,7 @@ def ofBody [Http.Body α] (body : α) : Any where
close := Http.Body.close body
isClosed := Http.Body.isClosed body
recvSelector := Http.Body.recvSelector body
tryRecv := Http.Body.tryRecv body
getKnownSize := Http.Body.getKnownSize body
setKnownSize := Http.Body.setKnownSize body
@@ -77,6 +84,7 @@ instance : Http.Body Any where
close := Any.close
isClosed := Any.isClosed
recvSelector := Any.recvSelector
tryRecv := Any.tryRecv
getKnownSize := Any.getKnownSize
setKnownSize := Any.setKnownSize


@@ -6,11 +6,11 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Async
public import Std.Internal.Async.ContextAsync
public import Std.Internal.Http.Data.Chunk
public import Std.Internal.Http.Data.Headers
public import Std.Internal.Http.Data.Body.Length
public import Std.Async
public import Std.Async.ContextAsync
public import Std.Http.Data.Chunk
public import Std.Http.Data.Headers
public import Std.Http.Data.Body.Length
public section
@@ -22,7 +22,7 @@ This module defines the `Body` typeclass for HTTP body streams, and shared conve
-/
namespace Std.Http
open Std Internal IO Async
open Std Async
set_option linter.all true
@@ -50,6 +50,12 @@ class Body (α : Type) where
-/
recvSelector : α → Selector (Option Chunk)
/--
Non-blocking receive attempt. Returns `none` if no chunk is immediately available,
`some (some chunk)` when a chunk is ready, or `some none` at end-of-stream.
-/
tryRecv (body : α) : Async (Option (Option Chunk))
/--
Gets the declared size of the body.
-/


@@ -6,9 +6,9 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Request
public import Std.Internal.Http.Data.Response
public import Std.Internal.Http.Data.Body.Any
public import Std.Http.Data.Request
public import Std.Http.Data.Response
public import Std.Http.Data.Body.Any
public section
@@ -19,7 +19,7 @@ Represents an always-empty, already-closed body handle.
-/
namespace Std.Http.Body
open Std Internal IO Async
open Std Async
set_option linter.all true
@@ -52,6 +52,13 @@ Empty bodies are always closed for reading.
def isClosed (_ : Empty) : Async Bool :=
pure true
/--
Non-blocking receive. Empty bodies are always at EOF.
-/
@[inline]
def tryRecv (_ : Empty) : Async (Option (Option Chunk)) :=
pure (some none)
/--
Selector that immediately resolves with end-of-stream for an empty body.
-/
@@ -72,6 +79,7 @@ instance : Http.Body Empty where
close := Empty.close
isClosed := Empty.isClosed
recvSelector := Empty.recvSelector
tryRecv := Empty.tryRecv
getKnownSize _ := pure (some <| .fixed 0)
setKnownSize _ _ := pure ()
@@ -94,7 +102,7 @@ instance : Coe (Async (Response Empty)) (ContextAsync (Response Any)) where
end Body
namespace Request.Builder
open Internal.IO.Async
open Async
/--
Builds a request with no body.
@@ -105,7 +113,7 @@ def empty (builder : Builder) : Async (Request Body.Empty) :=
end Request.Builder
namespace Response.Builder
open Internal.IO.Async
open Async
/--
Builds a response with no body.


@@ -7,9 +7,9 @@ module
prelude
public import Std.Sync
public import Std.Internal.Http.Data.Request
public import Std.Internal.Http.Data.Response
public import Std.Internal.Http.Data.Body.Any
public import Std.Http.Data.Request
public import Std.Http.Data.Response
public import Std.Http.Data.Body.Any
public import Init.Data.ByteArray
public section
@@ -25,7 +25,7 @@ Closing the body discards any unconsumed data.
-/
namespace Std.Http.Body
open Std Internal IO Async
open Std Async
set_option linter.all true
@@ -100,6 +100,14 @@ def getKnownSize (full : Full) : Async (Option Body.Length) :=
| none => pure (some (.fixed 0))
| some data => pure (some (.fixed data.size))
/--
Non-blocking receive. `Full` bodies are always in-memory, so data is always
immediately available. Returns `some (some chunk)` on first call, `some none` (EOF)
once consumed or closed.
-/
def tryRecv (full : Full) : Async (Option (Option Chunk)) := do
return some (← full.state.atomically takeChunk)
/--
Selector that immediately resolves to the remaining chunk (or EOF).
-/
@@ -128,6 +136,7 @@ instance : Http.Body Full where
close := Full.close
isClosed := Full.isClosed
recvSelector := Full.recvSelector
tryRecv := Full.tryRecv
getKnownSize := Full.getKnownSize
setKnownSize _ _ := pure ()
@@ -150,7 +159,7 @@ end Body
namespace Request.Builder
open Internal.IO.Async
open Async
/--
Builds a request body from raw bytes without setting any headers.
@@ -191,7 +200,7 @@ def html (builder : Builder) (content : String) : Async (Request Body.Full) :=
end Request.Builder
namespace Response.Builder
open Internal.IO.Async
open Async
/--
Builds a response body from raw bytes without setting any headers.


@@ -7,12 +7,12 @@ module
prelude
public import Std.Sync
public import Std.Internal.Async
public import Std.Internal.Http.Data.Request
public import Std.Internal.Http.Data.Response
public import Std.Internal.Http.Data.Chunk
public import Std.Internal.Http.Data.Body.Basic
public import Std.Internal.Http.Data.Body.Any
public import Std.Async
public import Std.Http.Data.Request
public import Std.Http.Data.Response
public import Std.Http.Data.Chunk
public import Std.Http.Data.Body.Basic
public import Std.Http.Data.Body.Any
public import Init.Data.ByteArray
public section
@@ -31,13 +31,13 @@ namespace Std.Http
namespace Body
open Std Internal IO Async
open Std Async
set_option linter.all true
namespace Channel
open Internal.IO.Async in
open Async in
private inductive Consumer where
| normal (promise : IO.Promise (Option Chunk))
| select (finished : Waiter (Option Chunk))
@@ -62,7 +62,7 @@ private structure Producer where
-/
done : IO.Promise Bool
open Internal.IO.Async in
open Async in
private def resolveInterestWaiter (waiter : Waiter Bool) (x : Bool) : BaseIO Bool := do
let lose := return false
let win promise := do
@@ -84,7 +84,7 @@ private structure State where
/--
A waiter for `Stream.interestSelector`.
-/
interestWaiter : Option (Internal.IO.Async.Waiter Bool)
interestWaiter : Option (Async.Waiter Bool)
/--
Whether the channel is closed.
@@ -227,6 +227,19 @@ def tryRecv (stream : Stream) : Async (Option Chunk) :=
Channel.pruneFinishedWaiters
Channel.tryRecv'
/--
Non-blocking receive for the `Body` typeclass. Returns `none` when no producer is
waiting and the channel is still open, `some (some chunk)` when data is ready,
or `some none` at end-of-stream (channel closed with no pending producer).
-/
def tryRecvBody (stream : Stream) : Async (Option (Option Chunk)) :=
stream.state.atomically do
Channel.pruneFinishedWaiters
if Channel.recvReady' then
return some (← Channel.tryRecv')
else
return none
private def recv' (stream : Stream) : BaseIO (AsyncTask (Option Chunk)) := do
stream.state.atomically do
Channel.pruneFinishedWaiters
@@ -286,7 +299,7 @@ def setKnownSize (stream : Stream) (size : Option Body.Length) : Async Unit :=
stream.state.atomically do
modify fun st => { st with knownSize := size }
open Internal.IO.Async in
open Async in
/--
Creates a selector that resolves when a producer is waiting (or the channel closes).
@@ -508,7 +521,7 @@ def hasInterest (stream : Stream) : Async Bool :=
Channel.pruneFinishedWaiters
Channel.hasInterest'
open Internal.IO.Async in
open Async in
/--
Creates a selector that resolves when consumer interest is present.
Returns `true` when a consumer is waiting, `false` when the channel closes first.
@@ -598,6 +611,7 @@ instance : Http.Body Stream where
close := Stream.close
isClosed := Stream.isClosed
recvSelector := Stream.recvSelector
tryRecv := Stream.tryRecvBody
getKnownSize := Stream.getKnownSize
setKnownSize := Stream.setKnownSize
@@ -620,7 +634,7 @@ end Body
namespace Request.Builder
open Internal.IO.Async
open Async
/--
Builds a request with a streaming body generator.
@@ -635,7 +649,7 @@ def stream
end Request.Builder
namespace Response.Builder
open Internal.IO.Async
open Async
/--
Builds a response with a streaming body generator.


@@ -6,9 +6,9 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Data.Headers
public meta import Std.Internal.Http.Internal.String
public import Std.Http.Internal
public import Std.Http.Data.Headers
public meta import Std.Http.Internal.String
public section
@@ -156,6 +156,17 @@ end Chunk.ExtensionValue
/--
Represents a chunk of data with optional extensions (key-value pairs).
The interpretation of a chunk depends on the protocol layer consuming it:
- HTTP/1.1: The zero-size wire encoding (`0\r\n\r\n`) is reserved
exclusively as the `last-chunk` terminator. The HTTP/1.1 writer silently discards
any empty chunk (including its extensions) rather than emitting a premature
end-of-body signal. `Encode.encode` on a `Chunk.empty` value does produce
`"0\r\n\r\n"`, but that path bypasses the writer's framing logic.
- HTTP/2 (not yet implemented): Chunked transfer encoding does not exist; HTTP/2 uses DATA
frames instead. This type is specific to the HTTP/1.1 wire format.
Reference: https://httpwg.org/specs/rfc9112.html#rfc.section.7.1
-/
structure Chunk where
@@ -201,7 +212,7 @@ def toString? (chunk : Chunk) : Option String :=
instance : Encode .v11 Chunk where
encode buffer chunk :=
let chunkLen := chunk.data.size
let exts := chunk.extensions.foldl (fun acc (name, value) =>
let exts := chunk.extensions.foldl (fun acc (name, value) =>
acc ++ ";" ++ name.value ++ (value.elim "" (fun x => "=" ++ x.quote))) ""
let size := Nat.toDigits 16 chunkLen |>.toArray |>.map Char.toUInt8 |> ByteArray.mk
buffer.append #[size, exts.toUTF8, "\r\n".toUTF8, chunk.data, "\r\n".toUTF8]


@@ -6,9 +6,9 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Headers.Basic
public import Std.Internal.Http.Data.Headers.Name
public import Std.Internal.Http.Data.Headers.Value
public import Std.Http.Data.Headers.Basic
public import Std.Http.Data.Headers.Name
public import Std.Http.Data.Headers.Value
public section


@@ -6,9 +6,9 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.URI
public import Std.Internal.Http.Data.Headers.Name
public import Std.Internal.Http.Data.Headers.Value
public import Std.Http.Data.URI
public import Std.Http.Data.Headers.Name
public import Std.Http.Data.Headers.Value
public import Std.Internal.Parsec.Basic
import Init.Data.String.Search
@@ -78,7 +78,9 @@ namespace ContentLength
Parses a content length header value.
-/
def parse (v : Value) : Option ContentLength :=
v.value.toNat?.map (.mk)
let s := v.value
if s.isEmpty || !s.all Char.isDigit then none
else s.toNat?.map (.mk)
/--
Serializes a content length header back to a name-value pair.


@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.ToString
public import Std.Internal.Http.Internal
public import Std.Http.Internal
import Init.Data.String.Search
import Init.Data.String.Iter


@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.ToString
public import Std.Internal.Http.Internal
public import Std.Http.Internal
public section


@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.ToString
public import Std.Internal.Http.Internal
public import Std.Http.Internal
public section


@@ -6,11 +6,11 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Extensions
public import Std.Internal.Http.Data.Method
public import Std.Internal.Http.Data.Version
public import Std.Internal.Http.Data.Headers
public import Std.Internal.Http.Data.URI
public import Std.Http.Data.Extensions
public import Std.Http.Data.Method
public import Std.Http.Data.Version
public import Std.Http.Data.Headers
public import Std.Http.Data.URI
public section


@@ -6,10 +6,10 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.Extensions
public import Std.Internal.Http.Data.Status
public import Std.Internal.Http.Data.Version
public import Std.Internal.Http.Data.Headers
public import Std.Http.Data.Extensions
public import Std.Http.Data.Status
public import Std.Http.Data.Version
public import Std.Http.Data.Headers
public section


@@ -6,7 +6,7 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Internal
public import Std.Http.Internal
public section


@@ -6,8 +6,8 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data.URI.Basic
public import Std.Internal.Http.Data.URI.Parser
public import Std.Http.Data.URI.Basic
public import Std.Http.Data.URI.Parser
public section


@@ -8,8 +8,8 @@ module
prelude
import Init.Data.ToString
public import Std.Net
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Data.URI.Encoding
public import Std.Http.Internal
public import Std.Http.Data.URI.Encoding
public import Init.Data.String.Search
public section


@@ -13,7 +13,7 @@ import Init.Data.UInt.Lemmas
import Init.Data.UInt.Bitwise
import Init.Data.Array.Lemmas
public import Init.Data.String.Basic
public import Std.Internal.Http.Internal.Char
public import Std.Http.Internal.Char
public section


@@ -10,8 +10,8 @@ import Init.While
public import Init.Data.String.Basic
public import Std.Internal.Parsec
public import Std.Internal.Parsec.ByteArray
public import Std.Internal.Http.Data.URI.Basic
public import Std.Internal.Http.Data.URI.Config
public import Std.Http.Data.URI.Basic
public import Std.Http.Data.URI.Config
import Init.Data.String.Search
public section


@@ -0,0 +1,22 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Http.Internal.Char
public import Std.Http.Internal.ChunkedBuffer
public import Std.Http.Internal.LowerCase
public import Std.Http.Internal.IndexMultiMap
public import Std.Http.Internal.Encode
public import Std.Http.Internal.String
/-!
# HTTP Internal Utilities
This module re-exports internal utilities used by the HTTP library including
data structures, encoding functions, and buffer management.
-/


@@ -6,8 +6,8 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Internal.ChunkedBuffer
public import Std.Internal.Http.Data.Version
public import Std.Http.Internal.ChunkedBuffer
public import Std.Http.Data.Version
public section


@@ -8,7 +8,7 @@ module
prelude
import Init.Grind
public import Init.Data.String.TakeDrop
public import Std.Internal.Http.Internal.Char
public import Std.Http.Internal.Char
public section


@@ -8,14 +8,14 @@ module
prelude
public import Std.Time
public import Init.Data.Array
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Protocol.H1.Parser
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Internal.Http.Protocol.H1.Message
public import Std.Internal.Http.Protocol.H1.Reader
public import Std.Internal.Http.Protocol.H1.Writer
public import Std.Internal.Http.Protocol.H1.Event
public import Std.Http.Data
public import Std.Http.Internal
public import Std.Http.Protocol.H1.Parser
public import Std.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Message
public import Std.Http.Protocol.H1.Reader
public import Std.Http.Protocol.H1.Writer
public import Std.Http.Protocol.H1.Event
public section
@@ -703,22 +703,37 @@ private def writeHead (messageHead : Message.Head dir.swap) (machine : Machine d
let machine := machine.updateKeepAlive shouldKeepAlive
let size := Writer.determineTransferMode machine.writer
-- RFC 7230 §3.3.1: HTTP/1.0 does not support Transfer-Encoding. When the
-- response body length is unknown (chunked mode), fall back to connection-close
-- framing: disable keep-alive and write raw bytes (no chunk encoding, no TE header).
let machine :=
match dir, machine.reader.messageHead.version, size with
| .receiving, Version.v10, .chunked => machine.disableKeepAlive
| _, _, _ => machine
let headers := messageHead.headers
-- Add identity header based on direction
-- Add identity header based on direction. The handler wins if it already set one.
let headers :=
let identityOpt := machine.config.agentName
match dir, identityOpt with
| .receiving, some server => headers.insert Header.Name.server server
| .sending, some userAgent => headers.insert Header.Name.userAgent userAgent
| .receiving, some server =>
if headers.contains Header.Name.server then headers
else headers.insert Header.Name.server server
| .sending, some userAgent =>
if headers.contains Header.Name.userAgent then headers
else headers.insert Header.Name.userAgent userAgent
| _, none => headers
-- Add Connection header based on keep-alive state and protocol version
-- Add Connection header based on keep-alive state and protocol version.
-- Erase any handler-supplied value first to avoid duplicate or conflicting
-- Connection headers on the wire.
let headers := headers.erase Header.Name.connection
let headers :=
if !machine.keepAlive && !headers.hasEntry Header.Name.connection (.mk "close") then
if !machine.keepAlive then
headers.insert Header.Name.connection (.mk "close")
else if machine.keepAlive && machine.reader.messageHead.version == .v10 &&
!headers.hasEntry Header.Name.connection (.mk "keep-alive") then
else if machine.reader.messageHead.version == .v10 then
-- RFC 2616 §19.7.1: HTTP/1.0 keep-alive responses must echo Connection: keep-alive
headers.insert Header.Name.connection (.mk "keep-alive")
else
@@ -729,18 +744,29 @@ private def writeHead (messageHead : Message.Head dir.swap) (machine : Machine d
let headers :=
match dir, messageHead with
| .receiving, messageHead =>
if responseForbidsFramingHeaders messageHead.status then
headers.erase Header.Name.contentLength |>.erase Header.Name.transferEncoding
else if messageHead.status == .notModified then
-- `304` carries no body; keep explicit Content-Length metadata if the
-- user supplied it, but never keep Transfer-Encoding.
headers.erase Header.Name.transferEncoding
if responseForbidsFramingHeaders messageHead.status || messageHead.status == .notModified then
headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
else if machine.reader.messageHead.version == .v10 && size == .chunked then
-- RFC 7230 §3.3.1: connection-close framing for HTTP/1.0 — strip all framing
-- headers so neither Content-Length nor Transfer-Encoding appears on the wire.
headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
else
normalizeFramingHeaders headers size
| .sending, _ =>
normalizeFramingHeaders headers size
let state := Writer.State.writingBody size
let state : Writer.State :=
match size with
| .fixed n => .writingBodyFixed n
| .chunked =>
-- RFC 7230 §3.3.1: HTTP/1.0 server-side uses connection-close framing (no chunk framing).
match dir, machine.reader.messageHead.version with
| .receiving, .v10 => .writingBodyClosingFrame
| _, _ => .writingBodyChunked
machine.modifyWriter (fun writer => {
writer with
@@ -891,6 +917,13 @@ def send (machine : Machine dir) (message : Message.Head dir.swap) : Machine dir
| .receiving => message.status.isInformational
| .sending => false
if isInterim then
-- RFC 9110 §15.2: 1xx responses MUST NOT carry a body, so framing headers
-- are meaningless and must not be forwarded even if the handler set them.
let message := Message.Head.setHeaders message
<| message.headers
|>.erase Header.Name.contentLength
|>.erase Header.Name.transferEncoding
machine.modifyWriter (fun w => {
w with outputData := Encode.encode (v := .v11) w.outputData message
})
@@ -1091,11 +1124,11 @@ private partial def processFixedBufferedBody (machine : Machine dir) (n : Nat) :
if writer.userClosedBody then
completeWriterMessage machine
else
machine.setWriterState (.writingBody (.fixed 0))
machine.setWriterState (.writingBodyFixed 0)
else
closeOnBadMessage machine
else
machine.setWriterState (.writingBody (.fixed remaining))
machine.setWriterState (.writingBodyFixed remaining)
/--
Handles fixed-length writer state when no user bytes are currently buffered.
@@ -1127,20 +1160,28 @@ private partial def processFixedBody (machine : Machine dir) (n : Nat) : Machine
processFixedIdleBody machine
/--
Processes chunked transfer-encoding output.
Writes buffered chunks when available, writes terminal `0\\r\\n\\r\\n` on
producer close, and supports omitted-body completion.
Processes chunked transfer-encoding output (HTTP/1.1).
-/
private partial def processChunkedBody (machine : Machine dir) : Machine dir :=
if machine.writer.omitBody then
completeOmittedBody machine
else if machine.writer.userClosedBody then
machine.modifyWriter Writer.writeFinalChunk
|> completeWriterMessage
machine.modifyWriter Writer.writeFinalChunk |> completeWriterMessage
else if machine.writer.userData.size > 0 then
machine.modifyWriter Writer.writeChunkedBody
|> processWrite
machine.modifyWriter Writer.writeChunkedBody |> processWrite
else
machine
/--
Processes connection-close body output (HTTP/1.0 server, unknown body length).
-/
private partial def processClosingFrameBody (machine : Machine dir) : Machine dir :=
if machine.writer.omitBody then
completeOmittedBody machine
else if machine.writer.userClosedBody then
machine.modifyWriter Writer.writeRawBody |> completeWriterMessage
else if machine.writer.userData.size > 0 then
machine.modifyWriter Writer.writeRawBody |> processWrite
else
machine
@@ -1174,10 +1215,12 @@ partial def processWrite (machine : Machine dir) : Machine dir :=
|> processWrite
else
machine
| .writingBody (.fixed n) =>
| .writingBodyFixed n =>
processFixedBody machine n
| .writingBody .chunked =>
| .writingBodyChunked =>
processChunkedBody machine
| .writingBodyClosingFrame =>
processClosingFrameBody machine
| .complete =>
processCompleteStep machine
| .closed =>


@@ -6,8 +6,8 @@ Authors: Sofia Rodrigues
module
prelude
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Http.Data
public import Std.Http.Internal
public section


@@ -7,11 +7,11 @@ module
prelude
public import Std.Time
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Protocol.H1.Parser
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Internal.Http.Protocol.H1.Message
public import Std.Http.Data
public import Std.Http.Internal
public import Std.Http.Protocol.H1.Parser
public import Std.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Message
public section


@@ -7,12 +7,12 @@ module
prelude
public import Std.Time
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Protocol.H1.Parser
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Internal.Http.Protocol.H1.Message
public import Std.Internal.Http.Protocol.H1.Error
public import Std.Http.Data
public import Std.Http.Internal
public import Std.Http.Protocol.H1.Parser
public import Std.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Message
public import Std.Http.Protocol.H1.Error
public section


@@ -7,7 +7,7 @@ module
prelude
import Init.Data.Array
public import Std.Internal.Http.Data
public import Std.Http.Data
public section
@@ -65,6 +65,14 @@ def Message.Head.headers (m : Message.Head dir) : Headers :=
| .receiving => Request.Head.headers m
| .sending => Response.Head.headers m
/--
Returns a copy of the message head with the headers replaced.
-/
def Message.Head.setHeaders (m : Message.Head dir) (headers : Headers) : Message.Head dir :=
match dir with
| .receiving => { (m : Request.Head) with headers }
| .sending => { (m : Response.Head) with headers }
/--
Gets the version of a `Message`.
-/
@@ -82,7 +90,7 @@ def Message.Head.getSize (message : Message.Head dir) (allowEOFBody : Bool) : Op
match message.headers.getAll? .transferEncoding with
| none =>
match contentLength with
| some #[cl] => .fixed <$> cl.value.toNat?
| some #[cl] => .fixed <$> (Header.ContentLength.parse cl |>.map (·.length))
| some _ => none -- To avoid request smuggling with malformed/multiple content-length headers.
| none => if allowEOFBody then some (.fixed 0) else none


@@ -7,9 +7,9 @@ module
prelude
public import Std.Internal.Parsec
public import Std.Internal.Http.Data
public import Std.Http.Data
public import Std.Internal.Parsec.ByteArray
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Config
/-!
This module defines parsers for HTTP/1.1 request and response lines, headers, and body framing. The


@@ -7,12 +7,12 @@ module
prelude
public import Std.Time
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Protocol.H1.Parser
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Internal.Http.Protocol.H1.Message
public import Std.Internal.Http.Protocol.H1.Error
public import Std.Http.Data
public import Std.Http.Internal
public import Std.Http.Protocol.H1.Parser
public import Std.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Message
public import Std.Http.Protocol.H1.Error
public section


@@ -7,12 +7,12 @@ module
prelude
public import Std.Time
public import Std.Internal.Http.Data
public import Std.Internal.Http.Internal
public import Std.Internal.Http.Protocol.H1.Parser
public import Std.Internal.Http.Protocol.H1.Config
public import Std.Internal.Http.Protocol.H1.Message
public import Std.Internal.Http.Protocol.H1.Error
public import Std.Http.Data
public import Std.Http.Internal
public import Std.Http.Protocol.H1.Parser
public import Std.Http.Protocol.H1.Config
public import Std.Http.Protocol.H1.Message
public import Std.Http.Protocol.H1.Error
public section
@@ -51,9 +51,20 @@ inductive Writer.State
| waitingForFlush
/--
Writing the body output (either fixed-length or chunked).
Writing a fixed-length body; `n` is the number of bytes still to be sent.
-/
| writingBody (mode : Body.Length)
| writingBodyFixed (n : Nat)
/--
Writing a chunked transfer-encoding body (HTTP/1.1).
-/
| writingBodyChunked
/--
Writing a connection-close body (HTTP/1.0 server, unknown length).
Raw bytes are written without chunk framing; the peer reads until the connection closes.
-/
| writingBodyClosingFrame
/--
Completed writing a single message and ready to begin the next one.
@@ -162,7 +173,9 @@ def canAcceptData (writer : Writer dir) : Bool :=
match writer.state with
| .waitingHeaders => true
| .waitingForFlush => true
| .writingBody _ => !writer.userClosedBody
| .writingBodyFixed _
| .writingBodyChunked
| .writingBodyClosingFrame => !writer.userClosedBody
| _ => false
/--
@@ -185,6 +198,9 @@ def determineTransferMode (writer : Writer dir) : Body.Length :=
/--
Adds user data chunks to the writer's buffer if the writer can accept data.
Empty chunks (zero bytes of data) are accepted here but will be silently dropped
during the chunked-encoding write step; see `writeChunkedBody`.
-/
@[inline]
def addUserData (data : Array Chunk) (writer : Writer dir) : Writer dir :=
@@ -223,12 +239,14 @@ def writeFixedBody (writer : Writer dir) (limitSize : Nat) : Writer dir × Nat :
/--
Writes accumulated user data to output using chunked transfer encoding.
Empty chunks are silently discarded. See `Chunk.empty` for the protocol-level rationale.
-/
def writeChunkedBody (writer : Writer dir) : Writer dir :=
if writer.userData.size = 0 then
writer
else
let data := writer.userData
let data := writer.userData.filter (fun c => !c.data.isEmpty)
{ writer with userData := #[], userDataBytes := 0, outputData := data.foldl (Encode.encode .v11) writer.outputData }
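The empty-chunk filter matters because, in chunked transfer encoding, each chunk is framed as its size in hex followed by the payload; a zero-length chunk would serialize as `0\r\n\r\n`, which is exactly the body terminator. A standalone sketch of the framing (the `frameChunk` helper below is illustrative, not the library's `Encode.encode`):

```lean
/-- Hypothetical sketch of HTTP/1.1 chunk framing over `String` payloads:
size in hex, CRLF, payload, CRLF. -/
def frameChunk (payload : String) : String :=
  let sizeHex := (Nat.toDigits 16 payload.utf8ByteSize).asString
  s!"{sizeHex}\r\n{payload}\r\n"

-- "hello" frames as "5\r\nhello\r\n"; an empty payload would frame as
-- "0\r\n\r\n", the terminator — hence empty chunks must be dropped.
```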
/--
@@ -241,6 +259,15 @@ def writeFinalChunk (writer : Writer dir) : Writer dir :=
state := .complete
}
/--
Writes accumulated user data to output as raw bytes (HTTP/1.0 connection-close framing).
No chunk framing is added; the peer reads until the connection closes.
-/
def writeRawBody (writer : Writer dir) : Writer dir :=
{ writer with
outputData := writer.userData.foldl (fun buf c => buf.write c.data) writer.outputData,
userData := #[], userDataBytes := 0 }
/--
Extracts all accumulated output data and returns it with a cleared output buffer.
-/

src/Std/Http/Server.lean Normal file
View File

@@ -0,0 +1,188 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Async
public import Std.Async.TCP
public import Std.Sync.CancellationToken
public import Std.Sync.Semaphore
public import Std.Http.Server.Config
public import Std.Http.Server.Handler
public import Std.Http.Server.Connection
public section
/-!
# HTTP Server
This module defines a simple, asynchronous HTTP/1.1 server implementation.
It provides the `Std.Http.Server` structure, which encapsulates all server state, and functions for
starting, managing, and gracefully shutting down the server.
The server runs entirely using `Async` and uses a shared `CancellationContext` to signal shutdowns.
Each active client connection is tracked, and the server automatically resolves its shutdown
promise once all connections have closed.
-/
namespace Std.Http
open Std.Async TCP
set_option linter.all true
/--
The `Server` structure holds all state required to manage the lifecycle of an HTTP server, including
connection tracking and shutdown coordination.
-/
structure Server where
/--
The context used for shutting down all connections and the server.
-/
context : Std.CancellationContext
/--
The number of active HTTP connections.
-/
activeConnections : Std.Mutex Nat
/--
Semaphore used to enforce the maximum number of simultaneous active connections.
`none` means no connection limit.
-/
connectionLimit : Option Std.Semaphore
/--
Indicates when the server has successfully shut down.
-/
shutdownPromise : Std.Channel Unit
/--
Configuration of the server
-/
config : Std.Http.Config
namespace Server
/--
Create a new `Server` structure with an optional configuration.
-/
def new (config : Std.Http.Config := {}) : IO Server := do
let context ← Std.CancellationContext.new
let activeConnections ← Std.Mutex.new 0
let connectionLimit ←
if config.maxConnections = 0 then
pure none
else
some <$> Std.Semaphore.new config.maxConnections
let shutdownPromise ← Std.Channel.new
return { context, activeConnections, connectionLimit, shutdownPromise, config }
/--
Triggers cancellation of all requests and the accept loop in the server. This function should be used
in conjunction with `waitShutdown` to properly coordinate the shutdown sequence.
-/
@[inline]
def shutdown (s : Server) : Async Unit :=
s.context.cancel .shutdown
/--
Waits for the server to shut down. Blocks until another task or async operation calls the `shutdown` function.
-/
@[inline]
def waitShutdown (s : Server) : Async Unit := do
Async.ofAsyncTask ((← s.shutdownPromise.recv).map Except.ok)
/--
Returns a `Selector` that waits for the server to shut down.
-/
@[inline]
def waitShutdownSelector (s : Server) : Selector Unit :=
s.shutdownPromise.recvSelector
/--
Triggers cancellation of all requests and the accept loop, then waits for the server to fully shut down.
This is a convenience function combining `shutdown` and then `waitShutdown`.
-/
@[inline]
def shutdownAndWait (s : Server) : Async Unit := do
s.context.cancel .shutdown
s.waitShutdown
@[inline]
private def frameCancellation (s : Server) (releaseConnectionPermit : Bool := false)
(action : ContextAsync α) : ContextAsync α := do
s.activeConnections.atomically (modify (· + 1))
try
action
finally
if releaseConnectionPermit then
if let some limit := s.connectionLimit then
limit.release
s.activeConnections.atomically do
modify (· - 1)
if (← get) = 0 ∧ (← s.context.isCancelled) then
discard <| s.shutdownPromise.send ()
/--
Start a new HTTP/1.1 server on the given socket address. This function uses `Async` to handle tasks
and TCP connections, and returns a `Server` structure that can be used to cancel the server.
-/
def serve {σ : Type} [Handler σ]
(addr : Net.SocketAddress)
(handler : σ)
(config : Config := {}) (backlog : UInt32 := 1024) : Async Server := do
let httpServer ← Server.new config
let server ← Socket.Server.mk
server.bind addr
server.listen backlog
server.noDelay
let runServer := do
frameCancellation httpServer (action := do
while true do
let permitAcquired ←
if let some limit := httpServer.connectionLimit then
let permit ← limit.acquire
await permit
pure true
else
pure false
let result ← Selectable.one #[
.case (server.acceptSelector) (fun x => pure <| some x),
.case (← ContextAsync.doneSelector) (fun _ => pure none)
]
match result with
| some client =>
let extensions ← do
match (← EIO.toBaseIO client.getPeerName) with
| .ok addr => pure <| Extensions.empty.insert (Server.RemoteAddr.mk addr)
| .error _ => pure Extensions.empty
ContextAsync.background
(frameCancellation httpServer (releaseConnectionPermit := permitAcquired)
(action := do
serveConnection client handler config extensions))
| none =>
if permitAcquired then
if let some limit := httpServer.connectionLimit then
limit.release
break
)
background (runServer httpServer.context)
return httpServer
end Std.Http.Server
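Putting the pieces above together, a hedged lifecycle sketch (the `runUntilShutdown` wrapper and the `config` override are illustrative; `serve` and `waitShutdown` follow the signatures shown in this file):

```lean
open Std.Http Std.Http.Server Std.Async in
/-- Illustrative sketch: start a server and block until another task
triggers shutdown. `handler` is any value with a `Handler` instance. -/
def runUntilShutdown [Handler σ] (addr : Std.Net.SocketAddress) (handler : σ) : Async Unit := do
  -- `serve` binds, listens, and spawns the accept loop in the background.
  let server ← serve addr handler (config := { maxConnections := 256 })
  -- Blocks until some other task calls `server.shutdown`.
  server.waitShutdown
```

Shutdown can be initiated from any task holding the `Server` value, e.g. a signal handler calling `server.shutdown`; `shutdownAndWait` combines both steps.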

View File

@@ -0,0 +1,196 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Time
public import Std.Http.Protocol.H1
public section
/-!
# Config
This module exposes `Config`, a structure that describes the timeout, request, and header
configuration of an HTTP server.
-/
namespace Std.Http
set_option linter.all true
/--
Server configuration: connection limits, timeouts, and protocol parsing bounds.
-/
structure Config where
/--
Maximum number of simultaneous active connections (default: 1024).
Setting this to `0` disables the limit entirely: the server will accept any number of
concurrent connections and no semaphore-based cap is enforced. Use with care — an
unconstrained server may exhaust file descriptors or memory under adversarial load.
-/
maxConnections : Nat := 1024
/--
Maximum number of requests per connection.
-/
maxRequests : Nat := 100
/--
Maximum number of headers allowed per request.
-/
maxHeaders : Nat := 50
/--
Maximum aggregate byte size of all header field lines in a single message
(name + value bytes plus 4 bytes per line for `: ` and `\r\n`). Default: 64 KiB.
-/
maxHeaderBytes : Nat := 65536
/--
Timeout (in milliseconds) for receiving additional data while a request is actively being
processed (e.g. reading the request body). Applies after the request headers have been parsed
and replaces the keep-alive timeout for the duration of the request.
-/
lingeringTimeout : Time.Millisecond.Offset := 10000
/--
Timeout for keep-alive connections
-/
keepAliveTimeout : { x : Time.Millisecond.Offset // x > 0 } := ⟨12000, by decide⟩
/--
Maximum time (in milliseconds) allowed to receive the complete request headers after the first
byte of a new request arrives. This prevents Slowloris-style attacks where a client sends bytes
at a slow rate to hold a connection slot open without completing a request. Once a request starts,
each individual read must complete within this window. Default: 5 seconds.
-/
headerTimeout : Time.Millisecond.Offset := 5000
/--
Whether to enable keep-alive connections by default.
-/
enableKeepAlive : Bool := true
/--
The maximum number of bytes that the connection can receive in a single `recv` call.
-/
maximumRecvSize : Nat := 8192
/--
Default buffer size for the connection
-/
defaultPayloadBytes : Nat := 8192
/--
Whether to automatically generate the `Date` header in responses.
-/
generateDate : Bool := true
/--
The `Server` header value injected into outgoing responses.
`none` suppresses the header entirely.
-/
serverName : Option Header.Value := some (.mk "LeanHTTP/1.1")
/--
Maximum length of request URI (default: 8192 bytes)
-/
maxUriLength : Nat := 8192
/--
Maximum number of bytes consumed while parsing request start-lines (default: 8192 bytes).
-/
maxStartLineLength : Nat := 8192
/--
Maximum length of header field name (default: 256 bytes)
-/
maxHeaderNameLength : Nat := 256
/--
Maximum length of header field value (default: 8192 bytes)
-/
maxHeaderValueLength : Nat := 8192
/--
Maximum number of spaces in delimiter sequences (default: 16)
-/
maxSpaceSequence : Nat := 16
/--
Maximum number of leading empty lines (bare CRLF) to skip before a request-line
(RFC 9112 §2.2 robustness). Default: 8.
-/
maxLeadingEmptyLines : Nat := 8
/--
Maximum length of chunk extension name (default: 256 bytes)
-/
maxChunkExtNameLength : Nat := 256
/--
Maximum length of chunk extension value (default: 256 bytes)
-/
maxChunkExtValueLength : Nat := 256
/--
Maximum number of bytes consumed while parsing one chunk-size line with extensions (default: 8192 bytes).
-/
maxChunkLineLength : Nat := 8192
/--
Maximum allowed chunk payload size in bytes (default: 8 MiB).
-/
maxChunkSize : Nat := 8 * 1024 * 1024
/--
Maximum allowed total body size per request in bytes (default: 64 MiB).
-/
maxBodySize : Nat := 64 * 1024 * 1024
/--
Maximum length of reason phrase (default: 512 bytes)
-/
maxReasonPhraseLength : Nat := 512
/--
Maximum number of trailer headers (default: 20)
-/
maxTrailerHeaders : Nat := 20
/--
Maximum number of extensions on a single chunk-size line (default: 16).
-/
maxChunkExtensions : Nat := 16
namespace Config
/--
Converts a server `Config` into the protocol-level `Protocol.H1.Config`.
-/
def toH1Config (config : Config) : Protocol.H1.Config where
maxMessages := config.maxRequests
maxHeaders := config.maxHeaders
maxHeaderBytes := config.maxHeaderBytes
enableKeepAlive := config.enableKeepAlive
agentName := config.serverName
maxUriLength := config.maxUriLength
maxStartLineLength := config.maxStartLineLength
maxHeaderNameLength := config.maxHeaderNameLength
maxHeaderValueLength := config.maxHeaderValueLength
maxSpaceSequence := config.maxSpaceSequence
maxLeadingEmptyLines := config.maxLeadingEmptyLines
maxChunkExtensions := config.maxChunkExtensions
maxChunkExtNameLength := config.maxChunkExtNameLength
maxChunkExtValueLength := config.maxChunkExtValueLength
maxChunkLineLength := config.maxChunkLineLength
maxChunkSize := config.maxChunkSize
maxBodySize := config.maxBodySize
maxReasonPhraseLength := config.maxReasonPhraseLength
maxTrailerHeaders := config.maxTrailerHeaders
end Std.Http.Config
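Since every field has a default, a `Config` can be built by overriding only what matters; note that `keepAliveTimeout` is a subtype carrying a positivity proof, so a literal needs `by decide`. A hedged usage sketch (the `smallServiceConfig` name and chosen values are illustrative):

```lean
open Std.Http in
/-- Illustrative: tighter limits for a small internal service. -/
def smallServiceConfig : Config :=
  { maxConnections := 128,                 -- cap concurrent connections
    maxHeaders := 32,                      -- reject header-heavy requests
    keepAliveTimeout := ⟨5000, by decide⟩, -- 5 s; subtype requires x > 0
    serverName := none }                   -- suppress the `Server` header
```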

View File

@@ -0,0 +1,560 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
module
prelude
public import Std.Async.TCP
public import Std.Async.ContextAsync
public import Std.Http.Transport
public import Std.Http.Protocol.H1
public import Std.Http.Server.Config
public import Std.Http.Server.Handler
public section
namespace Std
namespace Http
namespace Server
open Std Async TCP Protocol
open Time
/-!
# Connection
This module defines `Server.Connection`, a structure used to handle a single HTTP connection with
possibly multiple requests.
-/
set_option linter.all true
/--
Represents the remote address of a client connection.
-/
structure RemoteAddr where
/--
The socket address of the remote client.
-/
addr : Net.SocketAddress
deriving TypeName
instance : ToString RemoteAddr where
toString addr := toString addr.addr
/--
A single HTTP connection.
-/
structure Connection (α : Type) where
/--
The client connection.
-/
socket : α
/--
The processing machine for HTTP/1.1.
-/
machine : H1.Machine .receiving
/--
Extensions to attach to each request (e.g., remote address).
-/
extensions : Extensions := .empty
namespace Connection
/--
Events produced by the async select loop in `receiveWithTimeout`.
Each variant corresponds to one possible outcome of waiting for I/O.
-/
private inductive Recv (β : Type)
| bytes (x : Option ByteArray)
| responseBody (x : Option Chunk)
| bodyInterest (x : Bool)
| response (x : (Except IO.Error (Response β)))
| timeout
| shutdown
| close
/--
The set of I/O sources to wait on during a single poll iteration.
Each `Option` field is `none` when that source is not currently active.
-/
private structure PollSources (α β : Type) where
socket : Option α
expect : Option Nat
response : Option (Std.Channel (Except IO.Error (Response β)))
responseBody : Option β
requestBody : Option Body.Stream
timeout : Millisecond.Offset
keepAliveTimeout : Option Millisecond.Offset
headerTimeout : Option Timestamp
connectionContext : CancellationContext
/--
Waits for the next I/O event across all active sources described by `sources`.
Computes the socket recv size from `config`, then races all active selectables.
Calls `Handler.onFailure` and returns `.close` on transport errors.
-/
private def pollNextEvent
{σ β : Type} [Transport α] [Handler σ] [Body β]
(config : Config) (handler : σ) (sources : PollSources α β)
: Async (Recv β) := do
let expectedBytes := sources.expect
|>.getD config.defaultPayloadBytes
|>.min config.maximumRecvSize
|>.toUInt64
let mut selectables : Array (Selectable (Recv β)) := #[
.case sources.connectionContext.doneSelector (fun _ => do
let reason ← sources.connectionContext.getCancellationReason
match reason with
| some .deadline => pure .timeout
| _ => pure .shutdown)
]
if let some socket := sources.socket then
selectables := selectables.push (.case (Transport.recvSelector socket expectedBytes) (Recv.bytes · |> pure))
if sources.keepAliveTimeout.isNone then
if let some timeout := sources.headerTimeout then
selectables := selectables.push (.case (← Selector.sleep (timeout - (← Timestamp.now)).toMilliseconds) (fun _ => pure .timeout))
else
selectables := selectables.push (.case (← Selector.sleep sources.timeout) (fun _ => pure .timeout))
if let some responseBody := sources.responseBody then
selectables := selectables.push (.case (Body.recvSelector responseBody) (Recv.responseBody · |> pure))
if let some requestBody := sources.requestBody then
selectables := selectables.push (.case (requestBody.interestSelector) (Recv.bodyInterest · |> pure))
if let some response := sources.response then
selectables := selectables.push (.case response.recvSelector (Recv.response · |> pure))
try Selectable.one selectables
catch e =>
Handler.onFailure handler e
pure .close
/--
Handles the `Expect: 100-continue` protocol for a pending request head.
Races between the handler's decision (`Handler.onContinue`), the connection being
cancelled, and a lingering timeout. Returns the updated machine and whether
`pendingHead` should be cleared (i.e. when the request is rejected).
-/
private def handleContinueEvent
{σ : Type} [Handler σ]
(handler : σ) (machine : H1.Machine .receiving) (head : Request.Head)
(config : Config) (connectionContext : CancellationContext)
: Async (H1.Machine .receiving × Bool) := do
let continueChannel : Std.Channel Bool ← Std.Channel.new
let continueTask ← Handler.onContinue handler head |>.asTask
BaseIO.chainTask continueTask fun
| .ok v => discard <| continueChannel.send v
| .error _ => discard <| continueChannel.send false
let canContinue ← Selectable.one #[
.case continueChannel.recvSelector pure,
.case connectionContext.doneSelector (fun _ => pure false),
.case (← Selector.sleep config.lingeringTimeout) (fun _ => pure false)
]
let status := if canContinue then Status.«continue» else Status.expectationFailed
return (machine.canContinue status, !canContinue)
/--
Injects a `Date` header into a response head if `Config.generateDate` is set
and the response does not already include one.
-/
private def prepareResponseHead (config : Config) (head : Response.Head) : Async Response.Head := do
if config.generateDate ∧ ¬head.headers.contains Header.Name.date then
let now ← Std.Time.DateTime.now (tz := .UTC)
return { head with headers := head.headers.insert Header.Name.date (Header.Value.ofString! now.toRFC822String) }
else
return head
/--
Applies a successful handler response to the machine.
Optionally injects a `Date` header, records the known body size, and sends the
response head. Returns the updated machine and the body stream to drain, or `none`
when the body should be omitted (e.g., for HEAD requests).
-/
private def applyResponse
{β : Type} [Body β]
(config : Config) (machine : H1.Machine .receiving) (res : Response β)
: Async (H1.Machine .receiving × Option β) := do
let size ← Body.getKnownSize res.body
let machineSized :=
if let some knownSize := size
then machine.setKnownSize knownSize
else machine
let responseHead ← prepareResponseHead config res.line
let machineWithHead := machineSized.send responseHead
if machineWithHead.writer.omitBody then
if ¬(← Body.isClosed res.body) then
Body.close res.body
return (machineWithHead, none)
else
return (machineWithHead, some res.body)
/--
All mutable state carried through the connection processing loop.
Bundled into a struct so it can be passed to and returned from helper functions.
-/
private structure ConnectionState (β : Type) where
machine : H1.Machine .receiving
requestStream : Body.Stream
keepAliveTimeout : Option Millisecond.Offset
currentTimeout : Millisecond.Offset
headerTimeout : Option Timestamp
response : Std.Channel (Except IO.Error (Response β))
respStream : Option β
requiresData : Bool
expectData : Option Nat
handlerDispatched : Bool
pendingHead : Option Request.Head
/--
Processes all H1 events from a single machine step, updating the connection state.
Handles keep-alive resets, body-size tracking, `Expect: 100-continue`, and parse errors.
Returns the updated state; stops early on `.failed`.
-/
private def processH1Events
{σ β : Type} [Handler σ] [Body β]
(handler : σ) (config : Config) (connectionContext : CancellationContext)
(events : Array (H1.Event .receiving))
(state : ConnectionState β)
: Async (ConnectionState β) := do
let mut st := state
for event in events do
match event with
| .needMoreData expect =>
st := { st with requiresData := true, expectData := expect }
| .needAnswer => pure ()
| .endHeaders head =>
-- Sets the pending head and removes the KeepAlive or Header timeout.
st := { st with
currentTimeout := config.lingeringTimeout
keepAliveTimeout := none
headerTimeout := none
pendingHead := some head
}
if let some length := head.getSize true then
-- Sets the size of the body that is going out of the connection.
Body.setKnownSize st.requestStream (some length)
| .«continue» =>
if let some head := st.pendingHead then
let (newMachine, clearPending) ← handleContinueEvent handler st.machine head config connectionContext
st := { st with machine := newMachine }
if clearPending then
st := { st with pendingHead := none }
| .next =>
-- Reset all per-request state for the next pipelined request.
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
if let some res := st.respStream then
if ¬(← Body.isClosed res) then
Body.close res
let newStream ← Body.mkStream
st := { st with
requestStream := newStream
response := ← Std.Channel.new
respStream := none
keepAliveTimeout := some config.keepAliveTimeout.val
currentTimeout := config.keepAliveTimeout.val
headerTimeout := none
handlerDispatched := false
}
| .failed err =>
Handler.onFailure handler (toString err)
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
st := { st with requiresData := false, pendingHead := none }
break
| .closeBody =>
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
| .close => pure ()
return st
/--
Dispatches a pending request head to the handler if one is waiting.
Spawns the handler as an async task and routes its result back through `state.response`.
Returns the updated state with `pendingHead` cleared and `handlerDispatched` set.
-/
private def dispatchPendingRequest
{σ : Type} [Handler σ]
(handler : σ) (extensions : Extensions) (connectionContext : CancellationContext)
(state : ConnectionState (Handler.ResponseBody σ))
: Async (ConnectionState (Handler.ResponseBody σ)) := do
if let some line := state.pendingHead then
let task ← Handler.onRequest handler { line, body := state.requestStream, extensions } connectionContext
|>.asTask
BaseIO.chainTask task (discard state.response.send)
return { state with pendingHead := none, handlerDispatched := true }
else
return state
/--
Attempts a single non-blocking receive from the body and feeds any available chunk
into the machine, without going through the `Selectable.one` scheduler.
For fully-buffered bodies (e.g. `Body.Full`, `Body.Buffered`) this avoids one
`Selectable.one` round-trip when the chunk is already in memory. Streaming bodies
that have no producer waiting return `none` and fall through to the normal poll loop
unchanged.
Only one chunk is consumed here. Looping would introduce yield points between
`Body.tryRecv` calls, allowing a background producer to race ahead and close the
stream before `writeHead` runs — turning a streaming response with unknown size
into a fixed-length one.
-/
private def tryDrainBody [Body β]
(machine : H1.Machine .receiving) (body : β)
: Async (H1.Machine .receiving × Option β) := do
match ← Body.tryRecv body with
| none => pure (machine, some body)
| some (some chunk) => pure (machine.sendData #[chunk], some body)
| some none =>
if !(← Body.isClosed body) then Body.close body
pure (machine.userClosedBody, none)
/--
Processes a single async I/O event and updates the connection state.
Returns the updated state and `true` if the connection should be closed immediately.
-/
private def handleRecvEvent
{σ β : Type} [Handler σ] [Body β]
(handler : σ) (config : Config)
(event : Recv β) (state : ConnectionState β)
: Async (ConnectionState β × Bool) := do
match event with
| .bytes (some bs) =>
let mut st := state
-- After the first byte after idle we switch from keep-alive timeout to per-request header timeout.
if st.keepAliveTimeout.isSome then
st := { st with
keepAliveTimeout := none
headerTimeout := some <| (← Timestamp.now) + config.headerTimeout
}
return ({ st with machine := st.machine.feed bs }, false)
| .bytes none =>
return ({ state with machine := state.machine.noMoreInput }, false)
| .responseBody (some chunk) =>
return ({ state with machine := state.machine.sendData #[chunk] }, false)
| .responseBody none =>
if let some res := state.respStream then
if ¬(← Body.isClosed res) then Body.close res
return ({ state with machine := state.machine.userClosedBody, respStream := none }, false)
| .bodyInterest interested =>
if interested then
let (newMachine, pulledChunk) := state.machine.pullBody
let mut st := { state with machine := newMachine }
if let some pulled := pulledChunk then
try st.requestStream.send pulled.chunk pulled.incomplete
catch e => Handler.onFailure handler e
if pulled.final then
if ¬(← Body.isClosed st.requestStream) then
Body.close st.requestStream
return (st, false)
else
return (state, false)
| .close => return (state, true)
| .timeout =>
Handler.onFailure handler "request header timeout"
return ({ state with machine := state.machine.closeWithError .requestTimeout, handlerDispatched := false }, false)
| .shutdown =>
return ({ state with machine := state.machine.closeWithError .serviceUnavailable, handlerDispatched := false }, false)
| .response (.error err) =>
Handler.onFailure handler err
return ({ state with machine := state.machine.closeWithError .internalServerError, handlerDispatched := false }, false)
| .response (.ok res) =>
if state.machine.failed then
if ¬(← Body.isClosed res.body) then Body.close res.body
return ({ state with handlerDispatched := false }, false)
else
let (newMachine, newRespStream) ← applyResponse config state.machine res
-- Eagerly consume one chunk if immediately available (avoids a Selectable.one round-trip).
let (drainedMachine, drainedRespStream) ←
match newRespStream with
| none => pure (newMachine, none)
| some body => tryDrainBody newMachine body
return ({ state with machine := drainedMachine, handlerDispatched := false, respStream := drainedRespStream }, false)
/--
Computes the active `PollSources` for the current connection state.
Determines which IO sources need attention and whether to include the socket.
-/
private def buildPollSources
{α β : Type} [Transport α]
(socket : α) (connectionContext : CancellationContext) (state : ConnectionState β)
: Async (PollSources α β) := do
let requestBodyOpen ←
if state.machine.canPullBody then pure !(← Body.isClosed state.requestStream)
else pure false
let requestBodyInterested ←
if state.machine.canPullBody ∧ requestBodyOpen then state.requestStream.hasInterest
else pure false
let requestBody ←
if state.machine.canPullBodyNow ∧ requestBodyOpen then pure (some state.requestStream)
else pure none
-- Include the socket only when there is more to do than waiting for the handler alone.
let pollSocket :=
state.requiresData ∨ !state.handlerDispatched ∨ state.respStream.isSome ∨
state.machine.writer.sentMessage ∨ (state.machine.canPullBody ∧ requestBodyInterested)
return {
socket := if pollSocket then some socket else none
expect := state.expectData
response := if state.handlerDispatched then some state.response else none
responseBody := state.respStream
requestBody := requestBody
timeout := state.currentTimeout
keepAliveTimeout := state.keepAliveTimeout
headerTimeout := state.headerTimeout
connectionContext := connectionContext
}
/--
Runs the main request/response processing loop for a single connection.
Drives the HTTP/1.1 state machine through four phases each iteration:
send buffered output, process H1 events, dispatch pending requests, poll for I/O.
-/
private def handle
{σ : Type} [Transport α] [h : Handler σ]
(connection : Connection α)
(config : Config)
(connectionContext : CancellationContext)
(handler : σ) : Async Unit := do
let _ : Body (Handler.ResponseBody σ) := Handler.responseBodyInstance
let socket := connection.socket
let initStream ← Body.mkStream
let mut state : ConnectionState (Handler.ResponseBody σ) := {
machine := connection.machine
requestStream := initStream
keepAliveTimeout := some config.keepAliveTimeout.val
currentTimeout := config.keepAliveTimeout.val
headerTimeout := none
response := ← Std.Channel.new
respStream := none
requiresData := false
expectData := none
handlerDispatched := false
pendingHead := none
}
while ¬state.machine.halted do
-- Phase 1: advance the state machine and flush any output.
let (newMachine, step) := state.machine.step
state := { state with machine := newMachine }
if step.output.size > 0 then
try Transport.sendAll socket step.output.data
catch e =>
Handler.onFailure handler e
break
-- Phase 2: process all events emitted by this step.
state ← processH1Events handler config connectionContext step.events state
-- Phase 3: dispatch any newly parsed request to the handler.
state ← dispatchPendingRequest handler connection.extensions connectionContext state
-- Phase 4: wait for the next IO event when any source needs attention.
if state.requiresData ∨ state.handlerDispatched ∨ state.respStream.isSome ∨ state.machine.canPullBody then
state := { state with requiresData := false }
let sources ← buildPollSources socket connectionContext state
let event ← pollNextEvent config handler sources
let (newState, shouldClose) ← handleRecvEvent handler config event state
state := newState
if shouldClose then break
-- Clean up: close all open streams and the socket.
if ¬(← Body.isClosed state.requestStream) then
Body.close state.requestStream
if let some res := state.respStream then
if ¬(← Body.isClosed res) then Body.close res
Transport.close socket
end Connection
/--
Handles request/response processing for a single connection using an `Async` handler.
The library-level entry point for running a server is `Server.serve`.
This function can be used with a `TCP.Socket` or any other type that implements
`Transport` to build custom server loops.
# Example
```lean
-- Create a TCP socket server instance
let server ← Socket.Server.mk
server.bind addr
server.listen backlog
-- Enter an infinite loop to handle incoming client connections
while true do
let client ← server.accept
background (serveConnection client handler config)
```
-/
def serveConnection
{σ : Type} [Transport t] [Handler σ]
(client : t) (handler : σ)
(config : Config) (extensions : Extensions := .empty) : ContextAsync Unit := do
(Connection.mk client { config := config.toH1Config } extensions)
|>.handle config (← ContextAsync.getContext) handler
end Std.Http.Server

Some files were not shown because too many files have changed in this diff.