Compare commits

...

35 Commits

Author SHA1 Message Date
Joscha
978f81d363 chore: bump patch version to 1 2026-04-14 14:52:44 +02:00
Garmelon
76dea4d656 chore: stop using cached namespace.so checkout (#12714)
The namespace cache volumes were running out of space and preventing CI
from running.
2026-04-14 14:52:44 +02:00
Henrik Böving
1df9f3b862 fix: file read buffer overflow (#13392)
This PR fixes a heap buffer overflow in `lean_io_prim_handle_read` that
was triggered through an integer overflow in the size computation of an
allocation. In addition, it places checked arithmetic operations on all
relevant allocation paths so that potential future overflows are turned
into crashes. The offending code now throws an out-of-memory error
instead.

Closes: #13388
2026-04-14 14:11:30 +02:00
Kim Morrison
7e01a1bf5c doc: document release notes process and add guard check
This PR documents the release notes writing process in detail and adds a guard
check to release_checklist.py to ensure release notes are created for -rc1
releases before proceeding with downstream repository updates.

Changes:
- doc/dev/release_checklist.md: Expanded "Writing the release notes" section
  with detailed steps for generating, reviewing, and formatting release notes
  in Verso format
- script/release_checklist.py: Added check_release_notes_file_exists() to
  verify the release notes file exists in reference-manual repository
- .claude/commands/release.md: Added "Release Notes" section explaining the
  process for creating release notes during -rc1 releases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 18:11:39 +11:00
Kim Morrison
e18f78acfb chore: add plausible as verso dependency in release_repos.yml 2026-01-26 13:54:16 +11:00
Kim Morrison
3b0f286219 chore: set LEAN_VERSION_IS_RELEASE to 1 for v4.28.0 release 2026-01-26 11:40:44 +11:00
Kim Morrison
9e241a4087 fix: revert "split ngen on async elab" (#12148)
This PR reverts #12000, which introduced a regression where `simp`
incorrectly rejects valid rewrites for perm lemmas.

The issue is that `NameGenerator.mkChild` creates names that don't
maintain the ordering assumption used by `acLt` for perm lemma
decisions. For example, after the change:
- Child generator creates names like `_uniq.102.2`
- Parent continues with `_uniq.7`
- But `Name.lt (.num (.num `_uniq 102) 2) (.num `_uniq 7)` is true

This causes fvars created later (in async tasks) to compare as smaller
than fvars created earlier, breaking the assumption that later fvars
compare greater according to `Name.lt`.

Fixes #12136.
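
The inversion can be illustrated with a small toy model in Python. This is only an illustration consistent with the `Name.lt` example quoted above, NOT Lean's actual `Name` implementation: names are modeled as tuples of components, and the comparison looks at the last component first, which makes the child's `_uniq.102.2` compare smaller than the parent's later `_uniq.7`.

```python
# Toy model: a hierarchical name like `_uniq.102.2` becomes ("_uniq", 102, 2).
# Illustrative only -- matches the ordering stated in the commit message,
# not Lean's real Name.lt definition.

def name_lt(a, b):
    """Compare two names by their last component first, then recurse on the prefix."""
    if not a and not b:
        return False          # both anonymous: equal, not less-than
    if not a:
        return True           # anonymous sorts first
    if not b:
        return False
    if a[-1] != b[-1]:
        return a[-1] < b[-1]  # differing last components decide the order
    return name_lt(a[:-1], b[:-1])

child_name = ("_uniq", 102, 2)   # created later, by the child generator
parent_name = ("_uniq", 7)       # created earlier, by the parent

# The later-created fvar compares as *smaller*, breaking the assumption
# that later fvars compare greater.
print(name_lt(child_name, parent_name))   # True
```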

🤖 Prepared with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 03:18:24 +00:00
Leonardo de Moura
e90f6f77db test: local rewrite with Sym.simp (#12147)
This PR adds a new API for helping users write focused rewrites.
2026-01-25 01:32:50 +00:00
Leonardo de Moura
9deb9ab59d refactor: move commonly shared expressions to SymM (#12145)
This PR moves the pre-shared commonly used expressions from `GrindM` to
`SymM`.
2026-01-25 00:17:53 +00:00
Leonardo de Moura
6de7100f69 feat: add Goal API for SymM + grind (#12143)
This PR adds an API for building symbolic simulation engines and
verification condition generators that leverage `grind`. The API wraps
`Sym` operations to work with `grind`'s `Goal` type, enabling
lightweight symbolic execution while carrying `grind` state for
discharge steps.

New operations on `Goal`:
- `mkGoal`: create a `Goal` from an `MVarId`
- `introN`, `intros`: introduce binders
- `apply`: apply backward rules
- `simp`, `simpIgnoringNoProgress`: simplify using `Sym.Simp`
- `internalize`, `internalizeAll`: add hypotheses to the E-graph
- `grind`: attempt to close the goal using `grind`
- `assumption`: close by matching a hypothesis

A new test demonstrates the API on a stateful program with conditionals,
using `grind` to discharge arithmetic side conditions.
2026-01-24 20:30:08 +00:00
Mac Malone
6f409e0eea fix: lake: --no-build exit code w/ release fetch (#12142)
This PR fixes a bug introduced in #12086 where `lake build :release
--no-build` would exit with code 1 rather than the `--no-build` exit
code 3. Both the original bug from #12086 and this one are now fixed.
2026-01-24 17:03:07 +00:00
Sebastian Ullrich
3de1cc54c5 test: compiler test with big meta closure (#12141) 2026-01-24 15:18:33 +00:00
Kim Morrison
a3755fe0a5 fix: add syntax to recommended_spelling for inv (#12139)
This PR adds `«term_⁻¹»` to the `recommended_spelling` for `inv`,
matching the pattern used by all other operators, which include both the
function and the syntax in their spelling lists.

Reported at
https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/.60recommended_spelling.60.20for.20.60.C2.ABterm_.E2.81.BB.C2.B9.C2.BB.60

🤖 Prepared with Claude Code

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 11:04:36 +00:00
Leonardo de Moura
4c1e4a77b4 test: MetaM vs SymM on do notation (#12134)
This PR adds a new benchmark `shallow_add_sub_cancel.lean` that
demonstrates symbolic simulation using a shallow embedding into monadic
`do` notation, as opposed to the deep embedding approach in
`add_sub_cancel.lean`.

The shallow embedding approach:
- Uses Lean's `StateM` monad directly instead of a custom command language
- Defines `Exec s k post` as a simple predicate: `post (k s).1 (k s).2`
- Proves helper theorems for reasoning about monadic operations (`pure`,
  `bind`, `get`, `set`, `modify`, `ite`)
- Programs are written in actual `do`-notation rather than a custom AST

The benchmark solves goals using both the `MetaM` and `SymM` frameworks,
showing that the shallow embedding integrates well with the symbolic
simulation infrastructure. `SymM` is again way faster than `MetaM`.

### Symbolic simulation benchmark — tactic time only

Problem size `n` corresponds to a program with `4·n` monadic actions.

| n   | MetaM tactic (ms) | SymM tactic (ms) | Speedup |
|-----|-------------------|------------------|---------|
| 10  | 82.10  | 11.37 | ~7.2×  |
| 20  | 176.21 | 17.71 | ~9.9×  |
| 30  | 306.47 | 25.39 | ~12.1× |
| 40  | 509.52 | 34.53 | ~14.7× |
| 50  | 689.19 | 43.51 | ~15.8× |
| 60  | 905.86 | 52.47 | ~17.3× |
| 70  | 1172.31 | 62.50 | ~18.8× |
| 80  | 1448.48 | 70.65 | ~20.5× |
| 90  | 1787.15 | 80.89 | ~22.1× |
| 100 | 2128.12 | 90.77 | ~23.5× |

<img width="580" height="455" alt="image"
src="https://github.com/user-attachments/assets/3511aaab-4d53-4520-8302-65d2d100df4a"
/>
2026-01-24 03:38:02 +00:00
Kim Morrison
896da85304 fix: CI CMakeLists.txt version check extracts wrong value (#12131)
This PR fixes a bug in the CI version validation where `grep -oE
'[0-9]+'` matches multiple numbers from the comment on the same line:

```
set(LEAN_VERSION_IS_RELEASE 1)  # This number is 1 in the release revision, and 0 otherwise.
```

The grep extracts `1`, `1`, and `0`, causing the comparison to fail.
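
The failure mode is easy to reproduce. Here is a small Python sketch of the same extraction (using `re` in place of `grep`/`sed`) showing why an unanchored digit pattern picks up numbers from the trailing comment, while an anchored capture does not:

```python
import re

line = 'set(LEAN_VERSION_IS_RELEASE 1)  # This number is 1 in the release revision, and 0 otherwise.'

# Unanchored pattern, analogous to `grep -oE '[0-9]+'`:
# every number on the line matches, including those in the comment.
print(re.findall(r'[0-9]+', line))   # ['1', '1', '0']

# Anchored capture, analogous to the sed replacement:
# only the value inside set(...) is extracted.
m = re.search(r'^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\)', line)
print(m.group(1))                    # '1'
```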

🤖 Prepared with Claude Code
2026-01-24 00:34:31 +00:00
Kim Morrison
11cd55b4f1 chore: check reference-manual release notes title in release checklist (#12130)
This PR adds a check to the release checklist script that verifies the
reference-manual release notes title matches the release type:

- For RC releases (e.g., v4.27.0-rc1): verifies the title contains the
  exact RC suffix (e.g., "Lean 4.27.0-rc1")
- For final releases (e.g., v4.27.0): verifies the title does NOT
  contain any "-rc" suffix (e.g., "Lean 4.27.0")

The check looks at the PR branch (bump_to_vX.Y.Z) for the release notes
file at `Manual/Releases/v4_X_Y.lean` and parses the `#doc (Manual) "Lean
..."` line.

🤖 Prepared with Claude Code

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 00:33:53 +00:00
Kim Morrison
88823b27a6 doc: document nightly infrastructure in release command (#12129)
This PR adds documentation about the nightly build infrastructure to the
`/release` command to help future release managers understand the
relationship between branches and tags:

- `nightly` and `nightly-with-mathlib` are **branches** in `leanprover/lean4`
- Dated tags like `nightly-YYYY-MM-DD` are **tags** in `leanprover/lean4-nightly`
- When a nightly succeeds with mathlib, all three should point to the same commit

🤖 Prepared with Claude Code

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 00:06:44 +00:00
Henrik Böving
c9facc8102 fix: move allocation of execvp args before fork (#12123)
This PR fixes an issue that may sporadically cause ASAN to deadlock
when running a subprocess through the `IO.Process.spawn` framework.

The general issue here is that we run `fork()` and then perform an
allocation in the child before calling `execvp` (to allocate the
arguments for `execvp`). As it turns out, doing this can cause a race
condition in ASAN that ultimately causes a deadlock in the child. This
was fixed upstream but then rolled back (see
https://github.com/google/sanitizers/issues/774). Thus, we must avoid
allocating any memory between `fork` and `execvp`.
2026-01-23 23:12:23 +00:00
Lean stage0 autoupdater
63d1b530ba chore: update stage0 2026-01-23 18:35:59 +00:00
Sebastian Ullrich
3f09741fb9 chore: revert "fix: do not compile with -fwrapv" (#12125)
We are seeing (non-deterministically?) some ubsan reports from implicit
casting to `int`:
https://github.com/leanprover/lean4/actions/runs/21290374536/job/61282105493?pr=12082

Reverts leanprover/lean4#12098
2026-01-23 17:39:52 +00:00
Sebastian Ullrich
9f9531fa13 fix: getParentDeclName? inside where inside public def (#12119)
This PR fixes the call hierarchy for `where` declarations under the
module system

---------

Co-authored-by: mhuisi <mhuisi@protonmail.com>
2026-01-23 17:32:05 +00:00
David Thrane Christiansen
dae0d6fa05 fix: context for info trees and warning hints in Verso docstrings (#12121)
This PR wraps info trees produced by the `lean` Verso docstring
codeblock in a context info node.

Closes #12065.
2026-01-23 16:22:09 +00:00
David Thrane Christiansen
4a3401f69a fix: enable Verso docstrings in where-blocks (#12122)
This PR adds support for Verso docstrings in `where` clauses.

Closes #12066.
2026-01-23 14:02:11 +00:00
Paul Reichert
4526cdda5f fix: fix verso's +warning hint (#12116)
This PR fixes the verso hint that appears when using `sorry` in an
example block. It previously said: `` The `+error` flag indicates that
warnings are expected: +warning ``. This PR replaces `error` with
`warning`. Fixes #12064.
2026-01-23 13:31:02 +00:00
Eric Wieser
c4639150c1 fix: do not compile with -fwrapv (#12098)
This PR removes the requirement that libraries compiled against the lean
headers must use `-fwrapv`.

clang
[documents](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#:~:text=Note%20that%20checks%20are%20still%20added%20even%20when%20%2Dfwrapv%20is%20enabled)
that `-fwrapv` does not automatically turn off the integer overflow
sanitizer, so overflow should still be avoided in normal execution.
2026-01-23 12:14:17 +00:00
Kim Morrison
37870c168b chore: more updates to grind_indexmap test case (#12120) 2026-01-23 10:38:05 +00:00
Kim Morrison
57003e5c79 chore: updates to grind_indexmap test case (#12115)
This PR contains further updates to the `IndexMap` test case in
preparation for a demo at Lean Together.
2026-01-23 06:19:51 +00:00
Kim Morrison
b2f485e352 chore: grind test file demonstrating interactive use (#12114) 2026-01-23 04:55:09 +00:00
Mac Malone
5e29d7660a feat: lake: +mod target keys for modules in deps (#12112)
This PR revives the ability to specify modules in dependencies via the
basic `+mod` target key.

Implementation-wise, this removes deprecation of `BuildKey.module` and
once again uses it for `+mod` target keys. It also adds a test for
depending on a module of a dependency via `needs`.
2026-01-23 02:35:02 +00:00
Kim Morrison
567cf74f1b feat: updates to grind_indexmap test case (#12111)
This PR updates the `grind_indexmap.lean` tests, in preparation for
using them in a Lean Together talk.
2026-01-22 23:54:19 +00:00
Mac Malone
fa2ddf1c56 chore: lake: disable import all check in module disambiguation (#12104)
This PR disables an overlooked check (in #12045) of `import all` during
module disambiguation.
2026-01-22 18:12:14 +00:00
Mac Malone
f9af240bc4 fix: lake: query :deps output (#12105)
This PR fixes the `lake query` output for targets which produce an
`Array` or `List` of a value with a custom `QueryText` or `QueryJson`
instance (e.g., `deps` and `transDeps`).
2026-01-22 17:52:43 +00:00
Joachim Breitner
3bfeb0bc1f refactor: use isRecursiveDefinition when validating macro_inline (#12106)
This PR uses `isRecursiveDefinition` when validating `macro_inline`,
instead of rummaging in the internals of the definition.
2026-01-22 16:31:34 +00:00
Garmelon
8447586fea chore: make bench suite more similar to mathlib's (#12091)
The most important change is that all bench scripts now must always
output to `measurements.jsonl` instead of being allowed to output
results on stdout/err.
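
A bench script conforming to that convention could look like the following sketch. The file name `measurements.jsonl` comes from the message above; the measurement field names (`bench`, `ms`) and the helper itself are illustrative assumptions, not the actual suite's schema.

```python
import json
import time

def bench(name, fn, reps=5):
    """Run fn `reps` times, appending one JSON line per measurement
    to measurements.jsonl instead of printing to stdout/stderr."""
    with open("measurements.jsonl", "a") as out:
        for _ in range(reps):
            t0 = time.perf_counter()
            fn()
            elapsed_ms = (time.perf_counter() - t0) * 1000
            # One self-describing JSON object per line (JSON Lines format).
            out.write(json.dumps({"bench": name, "ms": elapsed_ms}) + "\n")

bench("noop", lambda: None, reps=2)
```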
2026-01-22 14:20:10 +00:00
Lean stage0 autoupdater
470e3b7fd0 chore: update stage0 2026-01-22 12:59:28 +00:00
155 changed files with 2533 additions and 517 deletions

View File

@@ -13,12 +13,54 @@ These comments explain the scripts' behavior, which repositories get special han
## Arguments
- `version`: The version to release (e.g., v4.24.0)
## Release Notes (Required for -rc1 releases)
For first release candidates (`-rc1`), you must create release notes BEFORE the reference-manual toolchain bump PR can be merged.
**Steps to create release notes:**
1. Generate the release notes:
```bash
cd /path/to/lean4
python3 script/release_notes.py --since <previous_version> > /tmp/release-notes-<version>.md
```
Replace `<previous_version>` with the last stable release (e.g., `v4.27.0` when releasing `v4.28.0-rc1`).
2. Review `/tmp/release-notes-<version>.md` for common issues:
- **Unterminated code blocks**: Look for code fences that aren't closed. Fetch original PR with `gh pr view <number>` to repair.
- **Truncated descriptions**: Some may end mid-sentence. Complete them from the original PR.
- **Markdown issues**: Other syntax problems that could cause parsing errors.
3. Create the release notes file in the reference-manual repository:
- File path: `Manual/Releases/v<version>.lean` (e.g., `v4_28_0.lean`)
- Use Verso format with proper imports and `#doc (Manual)` block
- **Use `#` for headers, not `##`** (Verso uses level 1 for subsections)
- **Use plain ` ``` ` not ` ```lean `** (the latter executes code)
- **Wrap underscore identifiers in backticks**: `` `bv_decide` `` not `bv_decide`
4. Update `Manual/Releases.lean`:
- Add import: `import Manual.Releases.«v4_28_0»`
- Add include: `{include 0 Manual.Releases.«v4_28_0»}`
5. Build to verify: `lake build Manual.Releases.v4_28_0`
6. Create a **separate PR** for release notes (not bundled with toolchain bump):
```bash
git checkout -b v<version>-release-notes
gh pr create --title "doc: add v<version> release notes"
```
For subsequent RCs (`-rc2`, etc.) and stable releases, just update the version number in the existing release notes file title.
See `doc/dev/release_checklist.md` section "Writing the release notes" for full details.
## Process
1. Run `script/release_checklist.py {version}` to check the current status
2. **CRITICAL: If preliminary lean4 checks fail, STOP immediately and alert the user**
- Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes exist
- Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes file exists
- **IMPORTANT**: The release page is created AUTOMATICALLY by CI after pushing the tag - DO NOT create it manually
- **IMPORTANT**: For -rc1 releases, release notes must be created before proceeding
- Do NOT create any PRs or proceed with repository updates if these checks fail
3. Create a todo list tracking all repositories that need updates
4. **CRITICAL RULE: You can ONLY run `release_steps.py` for a repository if `release_checklist.py` explicitly says to do so**
@@ -61,6 +103,15 @@ Every time you run `release_checklist.py`, you MUST:
This summary should be provided EVERY time you run the checklist, not just after creating new PRs.
The user needs to see the complete picture of what's waiting for review.
## Nightly Infrastructure
The nightly build system uses branches and tags across two repositories:
- `leanprover/lean4` has **branches** `nightly` and `nightly-with-mathlib` tracking the latest nightly builds
- `leanprover/lean4-nightly` has **dated tags** like `nightly-2026-01-23`
When a nightly succeeds with mathlib, all three should point to the same commit. Don't confuse these: branches are in the main lean4 repo, dated tags are in lean4-nightly.
## Error Handling
**CRITICAL**: If something goes wrong or a command fails:

View File

@@ -66,16 +66,10 @@ jobs:
brew install ccache tree zstd coreutils gmp libuv
if: runner.os == 'macOS'
- name: Checkout
if: (!endsWith(matrix.os, '-with-cache'))
uses: actions/checkout@v6
with:
# the default is to use a virtual merge commit between the PR and master: just use the PR
ref: ${{ github.event.pull_request.head.sha }}
- name: Namespace Checkout
if: endsWith(matrix.os, '-with-cache')
uses: namespacelabs/nscloud-checkout-action@v8
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Open Nix shell once
run: true
if: runner.os == 'Linux'

View File

@@ -115,7 +115,7 @@ jobs:
CMAKE_MAJOR=$(grep -E "^set\(LEAN_VERSION_MAJOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_MINOR=$(grep -E "^set\(LEAN_VERSION_MINOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_PATCH=$(grep -E "^set\(LEAN_VERSION_PATCH " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | sed -nE 's/^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\).*/\1/p')
# Expected values from tag parsing
TAG_MAJOR="${{ steps.set-release.outputs.LEAN_VERSION_MAJOR }}"

View File

@@ -218,6 +218,11 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.
# Writing the release notes
Release notes are only needed for the first release candidate (`-rc1`). For subsequent RCs and stable releases,
just update the version number in the title of the existing release notes file.
## Generating the release notes
Release notes are automatically generated from the commit history, using `script/release_notes.py`.
Run this as `script/release_notes.py --since v4.6.0`, where `v4.6.0` is the *previous* release version.
@@ -232,4 +237,93 @@ Some judgement is required here: ignore commits which look minor,
but manually add items to the release notes for significant PRs that were rebase-merged.
There can also be pre-written entries in `./releases_drafts`, which should be all incorporated in the release notes and then deleted from the branch.
## Reviewing and fixing the generated markdown
Before adding the release notes to the reference manual, carefully review the generated markdown for these common issues:
1. **Unterminated code blocks**: PR descriptions sometimes have unclosed code fences. Look for code blocks
that don't have a closing ` ``` `. If found, fetch the original PR description with `gh pr view <number>`
and repair the code block with the complete content.
2. **Truncated descriptions**: Some PR descriptions may end abruptly mid-sentence. Review these and complete
the descriptions based on the original PR.
3. **Markdown syntax issues**: Check for other markdown problems that could cause parsing errors.
## Creating the release notes file
The release notes go in `Manual/Releases/v4_7_0.lean` in the reference-manual repository.
The file structure must follow the Verso format:
```lean
/-
Copyright (c) 2025 Lean FRO LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: <Your Name>
-/
import VersoManual
import Manual.Meta
import Manual.Meta.Markdown
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
#doc (Manual) "Lean 4.7.0-rc1 (YYYY-MM-DD)" =>
%%%
tag := "release-v4.7.0"
file := "v4.7.0"
%%%
<release notes content here>
```
**Important formatting rules for Verso:**
- Use `#` for section headers inside the document, not `##` (Verso uses header level 1 for subsections)
- Use plain ` ``` ` for code blocks, not ` ```lean ` (the latter will cause Lean to execute the code)
- Identifiers with underscores like `bv_decide` should be wrapped in backticks: `` `bv_decide` ``
(otherwise the underscore may be interpreted as markdown emphasis)
## Updating Manual/Releases.lean
After creating the release notes file, update `Manual/Releases.lean` to include it:
1. Add the import near the top with other version imports:
```lean
import Manual.Releases.«v4_7_0»
```
2. Add the include statement after the other includes:
```lean
{include 0 Manual.Releases.«v4_7_0»}
```
## Building and verifying
Build the release notes to check for errors:
```bash
lake build Manual.Releases.v4_7_0
```
Common errors and fixes:
- "Wrong header nesting - got ## but expected at most #": Change `##` to `#`
- "Tactic 'X' failed" or similar: Code is being executed; change ` ```lean ` to ` ``` `
- "'_'" errors: Underscore in identifier being parsed as emphasis; wrap in backticks
## Creating the PR
Create a separate PR for the release notes (don't bundle with the toolchain bump PR):
```bash
git checkout -b v4.7.0-release-notes
git add Manual/Releases/v4_7_0.lean Manual/Releases.lean
git commit -m "doc: add v4.7.0 release notes"
git push -u origin v4.7.0-release-notes
gh pr create --title "doc: add v4.7.0 release notes" --body "This PR adds the release notes for Lean v4.7.0."
```
See `./releases_drafts/README.md` for more information about pre-written release note entries.
See `./releases_drafts/README.md` for more information.

View File

@@ -185,6 +185,30 @@ def get_release_notes(tag_name):
except Exception:
return None
def check_release_notes_file_exists(toolchain, github_token):
"""Check if the release notes file exists in the reference-manual repository.
For -rc1 releases, this checks that the release notes have been created.
For subsequent RCs and stable releases, release notes should already exist.
Returns tuple (exists: bool, is_rc1: bool) where is_rc1 indicates if this is
the first release candidate (when release notes need to be written).
"""
# Determine the release notes file path
# e.g., v4.28.0-rc1 -> Manual/Releases/v4_28_0.lean
base_version = strip_rc_suffix(toolchain.lstrip('v')) # "4.28.0"
file_name = f"v{base_version.replace('.', '_')}.lean" # "v4_28_0.lean"
file_path = f"Manual/Releases/{file_name}"
is_rc1 = toolchain.endswith("-rc1")
repo_url = "https://github.com/leanprover/reference-manual"
# Check if the file exists on main branch
content = get_branch_content(repo_url, "main", file_path, github_token)
return (content is not None, is_rc1)
def get_branch_content(repo_url, branch, file_path, github_token):
api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/contents/{file_path}?ref={branch}"
headers = {'Authorization': f'token {github_token}'} if github_token else {}
@@ -501,6 +525,76 @@ def check_proofwidgets4_release(repo_url, target_toolchain, github_token):
print(f" You will need to create and push a tag v0.0.{next_version}")
return False
def check_reference_manual_release_title(repo_url, toolchain, pr_branch, github_token):
"""Check if the reference-manual release notes title matches the release type.
For RC releases (e.g., v4.27.0-rc1), the title should contain the exact RC suffix.
For final releases (e.g., v4.27.0), the title should NOT contain any "-rc".
Returns True if check passes or is not applicable, False if title needs updating.
"""
is_rc = is_release_candidate(toolchain)
# For RC releases, get the base version and RC suffix
# e.g., "v4.27.0-rc1" -> version="4.27.0", rc_suffix="-rc1"
if is_rc:
parts = toolchain.lstrip('v').split('-', 1)
version = parts[0]
rc_suffix = '-' + parts[1] if len(parts) > 1 else ''
else:
version = toolchain.lstrip('v')
rc_suffix = ''
# Construct the release notes file path (e.g., Manual/Releases/v4_27_0.lean for v4.27.0)
file_name = f"v{version.replace('.', '_')}.lean" # "v4_27_0.lean"
file_path = f"Manual/Releases/{file_name}"
# Try to get the file from the PR branch first, then fall back to main branch
content = get_branch_content(repo_url, pr_branch, file_path, github_token)
if content is None:
# Try the default branch
content = get_branch_content(repo_url, "main", file_path, github_token)
if content is None:
print(f" ⚠️ Could not check release notes file: {file_path}")
return True # Don't block on this
# Look for the #doc line with the title
for line in content.splitlines():
if line.strip().startswith('#doc') and 'Manual' in line:
has_rc_in_title = '-rc' in line.lower()
if is_rc:
# For RC releases, title should contain the exact RC suffix (e.g., "-rc1")
# Use regex to match exact suffix followed by non-digit (to avoid -rc1 matching -rc10)
# Pattern matches the RC suffix followed by a non-digit or end-of-string context
# e.g., "-rc1" followed by space, quote, paren, or similar
exact_match = re.search(rf'{re.escape(rc_suffix)}(?![0-9])', line, re.IGNORECASE)
if exact_match:
print(f" ✅ Release notes title correctly shows {rc_suffix}")
return True
elif has_rc_in_title:
print(f" ❌ Release notes title shows wrong RC version (expected {rc_suffix})")
print(f" Update {file_path} to use '{rc_suffix}' in the title")
return False
else:
print(f" ❌ Release notes title missing RC suffix")
print(f" Update {file_path} to include '{rc_suffix}' in the title")
return False
else:
# For final releases, title should NOT contain -rc
if has_rc_in_title:
print(f" ❌ Release notes title still shows RC version")
print(f" Update {file_path} to remove '-rcN' from the title")
return False
else:
print(f" ✅ Release notes title is updated for final release")
return True
# If we didn't find the #doc line, don't block
print(f" ⚠️ Could not find release notes title in {file_path}")
return True
def run_mathlib_verify_version_tags(toolchain, verbose=False):
"""Run mathlib4's verify_version_tags.py script to validate the release tag.
@@ -644,6 +738,27 @@ def main():
else:
print(f" ✅ Release notes page title looks good ('{actual_title}').")
# Check if release notes file exists in reference-manual repository
# For -rc1 releases, this is when release notes need to be written
# For subsequent RCs and stable releases, they should already exist
release_notes_exists, is_rc1 = check_release_notes_file_exists(toolchain, github_token)
base_version = strip_rc_suffix(toolchain.lstrip('v'))
release_notes_file = f"Manual/Releases/v{base_version.replace('.', '_')}.lean"
if not release_notes_exists:
if is_rc1:
print(f" ❌ Release notes file not found: {release_notes_file}")
print(f" This is an -rc1 release, so release notes need to be written.")
print(f" Run `script/release_notes.py --since <previous_version>` to generate them.")
print(f" See doc/dev/release_checklist.md section 'Writing the release notes' for details.")
lean4_success = False
else:
print(f" ❌ Release notes file not found: {release_notes_file}")
print(f" Release notes should have been created for -rc1. Check the reference-manual repository.")
lean4_success = False
else:
print(f" ✅ Release notes file exists: {release_notes_file}")
repo_status["lean4"] = lean4_success
# If the release page doesn't exist, skip repository checks and master branch checks
@@ -709,6 +824,11 @@ def main():
print(f" ⚠️ CI: {ci_message}")
else:
print(f" ❓ CI: {ci_message}")
# For reference-manual, check that the release notes title has been updated
if name == "reference-manual":
pr_branch = f"bump_to_{toolchain}"
check_reference_manual_release_title(url, toolchain, pr_branch, github_token)
else:
print(f" ❌ PR with title '{pr_title}' does not exist")
print(f" Run `script/release_steps.py {toolchain} {name}` to create it")

View File

@@ -14,13 +14,6 @@ repositories:
bump-branch: true
dependencies: []
- name: verso
url: https://github.com/leanprover/verso
toolchain-tag: true
stable-branch: false
branch: main
dependencies: []
- name: lean4checker
url: https://github.com/leanprover/lean4checker
toolchain-tag: true
@@ -42,6 +35,14 @@ repositories:
branch: main
dependencies: []
- name: verso
url: https://github.com/leanprover/verso
toolchain-tag: true
stable-branch: false
branch: main
dependencies:
- plausible
- name: import-graph
url: https://github.com/leanprover-community/import-graph
toolchain-tag: true

View File

@@ -11,8 +11,8 @@ include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
set(LEAN_VERSION_MINOR 28)
set(LEAN_VERSION_PATCH 0)
set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_VERSION_PATCH 1)
set(LEAN_VERSION_IS_RELEASE 1) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")
set(LEAN_VERSION_STRING "${LEAN_VERSION_MAJOR}.${LEAN_VERSION_MINOR}.${LEAN_VERSION_PATCH}")
if (LEAN_SPECIAL_VERSION_DESC)

View File

@@ -360,7 +360,7 @@ recommended_spelling "smul" for "•" in [HSMul.hSMul, «term_•_»]
recommended_spelling "append" for "++" in [HAppend.hAppend, «term_++_»]
/-- when used as a unary operator -/
recommended_spelling "neg" for "-" in [Neg.neg, «term-_»]
recommended_spelling "inv" for "⁻¹" in [Inv.inv]
recommended_spelling "inv" for "⁻¹" in [Inv.inv, «term_⁻¹»]
recommended_spelling "dvd" for "∣" in [Dvd.dvd, «term_∣_»]
recommended_spelling "shiftLeft" for "<<<" in [HShiftLeft.hShiftLeft, «term_<<<_»]
recommended_spelling "shiftRight" for ">>>" in [HShiftRight.hShiftRight, «term_>>>_»]

View File

@@ -7,6 +7,7 @@ module
prelude
public import Lean.Attributes
import Lean.Meta.RecExt
public section
@@ -33,14 +34,8 @@ private def isValidMacroInline (declName : Name) : CoreM Bool := do
unless info.all.length = 1 do
-- We do not allow `[macro_inline]` attributes at mutual recursive definitions
return false
let env ← getEnv
let isRec (declName' : Name) : Bool :=
isBRecOnRecursor env declName' ||
declName' == ``WellFounded.fix ||
declName' == ``WellFounded.Nat.fix ||
declName' == declName ++ `_unary -- Auxiliary declaration created by `WF` module
if Option.isSome <| info.value.find? fun e => e.isConst && isRec e.constName! then
-- It contains a `brecOn` or `WellFounded.fix` application. So, it should be recursive
if (← Meta.isRecursiveDefinition declName) then
-- It is recursive
return false
return true

View File

@@ -543,12 +543,10 @@ def logSnapshotTask (task : Language.SnapshotTask Language.SnapshotTree) : CoreM
/-- Wraps the given action for use in `EIO.asTask` etc., discarding its final monadic state. -/
def wrapAsync {α : Type} (act : α → CoreM β) (cancelTk? : Option IO.CancelToken) :
CoreM (α → EIO Exception β) := do
let (childNGen, parentNGen) := (← getNGen).mkChild
setNGen parentNGen
let (childDeclNGen, parentDeclNGen) := (← getDeclNGen).mkChild
setDeclNGen parentDeclNGen
let (childNGen, parentNGen) := (← getDeclNGen).mkChild
setDeclNGen parentNGen
let st ← get
let st := { st with auxDeclNGen := childDeclNGen, ngen := childNGen }
let st := { st with auxDeclNGen := childNGen }
let ctx ← read
let ctx := { ctx with cancelTk? }
let heartbeats := (← IO.getNumHeartbeats) - ctx.initHeartbeats

View File

@@ -274,12 +274,10 @@ def wrapAsync {α β : Type} (act : α → CommandElabM β) (cancelTk? : Option
CommandElabM (α → EIO Exception β) := do
let ctx ← read
let ctx := { ctx with cancelTk? }
let (childNGen, parentNGen) := (← get).ngen.mkChild
modify fun s => { s with ngen := parentNGen }
let (childDeclNGen, parentDeclNGen) := (← getDeclNGen).mkChild
setDeclNGen parentDeclNGen
let (childNGen, parentNGen) := (← getDeclNGen).mkChild
setDeclNGen parentNGen
let st ← get
let st := { st with auxDeclNGen := childDeclNGen, ngen := childNGen }
let st := { st with auxDeclNGen := childNGen }
return (act · |>.run ctx |>.run' st)
open Language in


@@ -907,23 +907,26 @@ def lean (name : Option Ident := none) (error warning : flag false) («show» :
(endPos := endPos) (endPos_valid := by simp only [endPos]; split <;> simp [*])
let cctx : Command.Context := {fileName := ← getFileName, fileMap := text, snap? := none, cancelTk? := none}
let scopes := (← get).scopes
let mut cmdState : Command.State := { env, maxRecDepth := ← MonadRecDepth.getMaxRecDepth, scopes }
let mut pstate : Parser.ModuleParserState := {pos := pos, recovering := false}
let mut cmds := #[]
repeat
let scope := cmdState.scopes.head!
let pmctx := { env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace, openDecls := scope.openDecls }
let (cmd, ps', messages) := Parser.parseCommand ictx pmctx pstate cmdState.messages
cmds := cmds.push cmd
pstate := ps'
cmdState := { cmdState with messages := messages }
cmdState ← runCommand (Command.elabCommand cmd) cmd cctx cmdState
if Parser.isTerminalCommand cmd then break
setEnv cmdState.env
modify fun st => { st with scopes := cmdState.scopes }
let (cmds, cmdState, trees) ← withSaveInfoContext do
let mut cmdState : Command.State := { env, maxRecDepth := ← MonadRecDepth.getMaxRecDepth, scopes }
let mut pstate : Parser.ModuleParserState := {pos := pos, recovering := false}
let mut cmds := #[]
repeat
let scope := cmdState.scopes.head!
let pmctx := { env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace, openDecls := scope.openDecls }
let (cmd, ps', messages) := Parser.parseCommand ictx pmctx pstate cmdState.messages
cmds := cmds.push cmd
pstate := ps'
cmdState := { cmdState with messages := messages }
cmdState ← runCommand (Command.elabCommand cmd) cmd cctx cmdState
if Parser.isTerminalCommand cmd then break
setEnv cmdState.env
modify fun st => { st with scopes := cmdState.scopes }
for t in cmdState.infoState.trees do
pushInfoTree t
for t in cmdState.infoState.trees do
pushInfoTree t
let trees := (← getInfoTrees)
pure (cmds, cmdState, trees)
let mut output := #[]
for msg in cmdState.messages.toArray do
@@ -937,14 +940,13 @@ def lean (name : Option Ident := none) (error warning : flag false) («show» :
let hint ← flagHint m!"The `+error` flag indicates that errors are expected:" #[" +error"]
logErrorAt msgStx m!"Unexpected error:{indentD msg.data}{hint.getD m!""}"
if msg.severity == .warning && !warning then
let hint ← flagHint m!"The `+error` flag indicates that warnings are expected:" #[" +warning"]
let hint ← flagHint m!"The `+warning` flag indicates that warnings are expected:" #[" +warning"]
logErrorAt msgStx m!"Unexpected warning:{indentD msg.data}{hint.getD m!""}"
else
withRef msgStx <| log msg.data (severity := .information) (isSilent := true)
if let some x := name then
modifyEnv (leanOutputExt.modifyState · (·.insert x.getId output))
if «show» then
let trees := (← getInfoTrees)
if h : trees.size > 0 then
let hl := Data.LeanBlock.mk (← highlightSyntax trees (mkNullNode cmds))
return .other {name := ``Data.LeanBlock, val := .mk hl} #[.code code.getString]


@@ -20,10 +20,12 @@ structure LetRecDeclView where
declName : Name
parentName? : Option Name
binderIds : Array Syntax
binders : Syntax -- binder syntax for docstring elaboration
type : Expr
mvar : Expr -- auxiliary metavariable used to lift the 'let rec'
valStx : Syntax
termination : TerminationHints
docString? : Option (TSyntax ``Parser.Command.docComment × Bool) := none
structure LetRecView where
decls : Array LetRecDeclView
@@ -32,8 +34,9 @@ structure LetRecView where
/- group ("let " >> nonReservedSymbol "rec ") >> sepBy1 (group (optional «attributes» >> letDecl)) ", " >> "; " >> termParser -/
private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
let mut decls : Array LetRecDeclView := #[]
let isVerso := doc.verso.get (← getOptions)
for attrDeclStx in letRec[1][0].getSepArgs do
let docStr? := attrDeclStx[0].getOptional?.map TSyntax.mk
let docStr? := attrDeclStx[0].getOptional?.map (TSyntax.mk ·, isVerso)
let attrOptStx := attrDeclStx[1]
let attrs ← if attrOptStx.isNone then pure #[] else elabDeclAttrs attrOptStx[0]
let decl := attrDeclStx[2][0]
@@ -45,16 +48,21 @@ private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
throwErrorAt declId "'let rec' expressions must be named"
let shortDeclName := declId.getId
let parentName? ← getDeclName?
let declName := parentName?.getD Name.anonymous ++ shortDeclName
let mut declName := parentName?.getD Name.anonymous ++ shortDeclName
let env ← getEnv
if env.header.isModule && !env.isExporting then
declName := mkPrivateName env declName
if decls.any fun decl => decl.declName == declName then
withRef declId do
throwError "`{.ofConstName declName}` has already been declared"
let binders := decl[1]
let binderStx := decl[1]
checkNotAlreadyDeclared declName
applyAttributesAt declName attrs AttributeApplicationTime.beforeElaboration
addDocString' declName binders docStr?
-- Docstring processing is deferred until the declaration is added to the environment.
-- This is necessary for Verso docstrings to work correctly, as they may reference the
-- declaration being defined.
addDeclarationRangesFromSyntax declName decl declId
let binders := binders.getArgs
let binders := binderStx.getArgs
let typeStx := expandOptType declId decl[2]
let (type, binderIds) ← elabBindersEx binders fun xs => do
let type ← elabType typeStx
@@ -70,7 +78,7 @@ private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
let termination ← elabTerminationHints attrDeclStx[3]
decls := decls.push {
ref := declId, attrs, shortDeclName, declName, parentName?,
binderIds, type, mvar, valStx, termination
binderIds, binders := binderStx, type, mvar, valStx, termination, docString? := docStr?
}
else
throwUnsupportedSyntax
@@ -111,15 +119,12 @@ private def registerLetRecsToLift (views : Array LetRecDeclView) (fvars : Array
let toLift ← views.mapIdxM fun i view => do
let value := values[i]!
let termination := view.termination.rememberExtraParams view.binderIds.size value
let env ← getEnv
pure {
ref := view.ref
fvarId := fvars[i]!.fvarId!
attrs := view.attrs
shortDeclName := view.shortDeclName
declName :=
if env.isExporting || !env.header.isModule then view.declName
else mkPrivateName env view.declName
declName := view.declName
parentName? := view.parentName?
lctx
localInstances
@@ -127,6 +132,8 @@ private def registerLetRecsToLift (views : Array LetRecDeclView) (fvars : Array
val := value
mvarId := view.mvar.mvarId!
termination
binders := view.binders
docString? := view.docString?
}
modify fun s => { s with letRecsToLift := toLift.toList ++ s.letRecsToLift }
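For illustration, the surface feature these changes support is a doc comment attached to a `let rec`, now carried through `LetRecToLift` and elaborated only once the lifted declaration has been added to the environment. A sketch (hypothetical code, not from this diff):

```lean
def sumTo (n : Nat) : Nat :=
  let rec /-- Sums the numbers from `0` to `i`. -/
    go (i : Nat) : Nat :=
      match i with
      | 0 => 0
      | j + 1 => (j + 1) + go j
  go n
```

Deferring the docstring matters for Verso docstrings, which may reference `sumTo.go` itself and therefore need the declaration to exist first.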


@@ -1092,8 +1092,8 @@ def pushLetRecs (preDefs : Array PreDefinition) (letRecClosures : List LetRecClo
ref := c.ref
declName := c.toLift.declName
levelParams := [] -- we set it later
binders := mkNullNode -- No docstrings, so we don't need these
modifiers := { modifiers with attrs := c.toLift.attrs }
binders := c.toLift.binders
modifiers := { modifiers with attrs := c.toLift.attrs, docString? := c.toLift.docString? }
kind, type, value,
termination := c.toLift.termination
}


@@ -29,6 +29,10 @@ def addPreDefsFromUnary (docCtx : LocalContext × LocalInstances) (preDefs : Arr
let preDefNonRec := unaryPreDefNonRec.filterAttrs fun attr => attr.name != `implemented_by
let declNames := preDefs.toList.map (·.declName)
preDefs.forM fun preDef =>
unless preDef.kind.isTheorem do
markAsRecursive preDef.declName
-- Do not complain if the user sets @[semireducible], which usually is a noop,
-- we recognize that below and then do not set @[irreducible]
withOptions (allowUnsafeReducibility.set · true) do
@@ -53,8 +57,6 @@ def cleanPreDef (preDef : PreDefinition) (cacheProofs := true) : MetaM PreDefini
Assign final attributes to the definitions. Assumes the EqnInfos to be already present.
-/
def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
for preDef in preDefs do
markAsRecursive preDef.declName
for preDef in preDefs.reverse do
-- must happen before `generateEagerEqns`
-- must happen in reverse order so that constants realized as part of the first decl


@@ -140,6 +140,8 @@ def structuralRecursion
preDefsNonRec.forM fun preDefNonRec => do
let preDefNonRec ← eraseRecAppSyntax preDefNonRec
prependError m!"structural recursion failed, produced type incorrect term" do
unless preDefNonRec.kind.isTheorem do
markAsRecursive preDefNonRec.declName
-- We create the `_unsafe_rec` before we abstract nested proofs.
-- Reason: the nested proofs may be referring to the _unsafe_rec.
addNonRec docCtx preDefNonRec (applyAttrAfterCompilation := false) (all := names.toList)
@@ -157,7 +159,6 @@ def structuralRecursion
-/
registerEqnsInfo preDef (preDefs.map (·.declName)) recArgPos fixedParamPerms
addSmartUnfoldingDef docCtx preDef recArgPos
markAsRecursive preDef.declName
for preDef in preDefs do
-- must happen in separate loop so realizations can see eqnInfos of all other preDefs
enableRealizationsForConst preDef.declName


@@ -16,6 +16,7 @@ open Meta
structure Context extends Tactic.Context where
ctx : Meta.Grind.Context
sctx : Meta.Sym.Context
methods : Grind.Methods
params : Grind.Params
@@ -289,7 +290,7 @@ open Grind
def liftGrindM (k : GrindM α) : GrindTacticM α := do
let ctx ← read
let s ← get
let ((a, grindState), symState) ← liftMetaM <| StateRefT'.run ((Grind.withGTransparency k) ctx.methods.toMethodsRef ctx.ctx |>.run s.grindState) s.symState
let ((a, grindState), symState) ← liftMetaM <| StateRefT'.run (((Grind.withGTransparency k) ctx.methods.toMethodsRef ctx.ctx |>.run s.grindState) ctx.sctx) s.symState
modify fun s => { s with grindState, symState }
return a
@@ -358,12 +359,13 @@ def mkEvalTactic' (elaborator : Name) (params : Params) : TermElabM (Goal → TS
let eval (goal : Goal) (stx : TSyntax `grind) : GrindM (List Goal) := do
let methods ← getMethods
let grindCtx ← readThe Meta.Grind.Context
let symCtx ← readThe Meta.Sym.Context
let grindState ← get
let symState ← getThe Sym.State
-- **Note**: we discard changes to `Term.State`
let (subgoals, grindState', symState') ← Term.TermElabM.run' (ctx := termCtx) (s := termState) do
let (_, s) ← GrindTacticM.run
(ctx := { recover := false, methods, ctx := grindCtx, params, elaborator })
(ctx := { recover := false, methods, ctx := grindCtx, sctx := symCtx, params, elaborator })
(s := { grindState, symState, goals := [goal] }) do
evalGrindTactic stx.raw
pruneSolvedGoals
@@ -383,7 +385,7 @@ def GrindTacticM.runAtGoal (mvarId : MVarId) (params : Params) (k : GrindTacticM
Reconsider the option `useSorry`.
-/
let params' := { params with config.useSorry := false }
let (methods, ctx, state) ← liftMetaM <| GrindM.runAtGoal mvarId params' (evalTactic? := some evalTactic) fun goal => do
let (methods, ctx, sctx, state) ← liftMetaM <| GrindM.runAtGoal mvarId params' (evalTactic? := some evalTactic) fun goal => do
let a : Action := Action.intros 0 >> Action.assertAll
let goals ← match (← a.run goal) with
| .closed _ => pure []
@@ -392,10 +394,11 @@ def GrindTacticM.runAtGoal (mvarId : MVarId) (params : Params) (k : GrindTacticM
let ctx ← readThe Meta.Grind.Context
/- Restore original config -/
let ctx := { ctx with config := params.config }
let sctx ← readThe Meta.Sym.Context
let grindState ← get
let symState ← getThe Sym.State
return (methods, ctx, { grindState, symState, goals })
return (methods, ctx, sctx, { grindState, symState, goals })
let tctx ← read
k { tctx with methods, ctx, params } |>.run state
k { tctx with methods, ctx, sctx, params } |>.run state
end Lean.Elab.Tactic.Grind


@@ -167,6 +167,11 @@ structure LetRecToLift where
val : Expr
mvarId : MVarId
termination : TerminationHints
/-- The binders syntax for the declaration, used for docstring elaboration. -/
binders : Syntax := .missing
/-- The docstring, if present, and whether it's Verso. Docstring processing is deferred until the
declaration is added to the environment (needed for Verso docstrings to work). -/
docString? : Option (TSyntax ``Lean.Parser.Command.docComment × Bool) := none
deriving Inhabited
/--


@@ -8,6 +8,7 @@ module
prelude
public import Lean.Meta.Match.MatcherInfo
public import Lean.DefEqAttrib
public import Lean.Meta.RecExt
public import Lean.Meta.LetToHave
import Lean.Meta.AppBuilder
@@ -40,26 +41,6 @@ This is implemented by
-/
def eqnAffectingOptions : Array (Lean.Option Bool) := #[backward.eqns.nonrecursive, backward.eqns.deepRecursiveSplit]
/--
Environment extension for storing which declarations are recursive.
This information is populated by the `PreDefinition` module, but the simplifier
uses it when unfolding declarations.
-/
builtin_initialize recExt : TagDeclarationExtension ←
mkTagDeclarationExtension `recExt (asyncMode := .async .asyncEnv)
/--
Marks the given declaration as recursive.
-/
def markAsRecursive (declName : Name) : CoreM Unit :=
modifyEnv (recExt.tag · declName)
/--
Returns `true` if `declName` was defined using well-founded recursion, or structural recursion.
-/
def isRecursiveDefinition (declName : Name) : CoreM Bool :=
return recExt.isTagged (← getEnv) declName
def eqnThmSuffixBase := "eq"
def eqnThmSuffixBasePrefix := eqnThmSuffixBase ++ "_"
def eqn1ThmSuffix := eqnThmSuffixBasePrefix ++ "1"

src/Lean/Meta/RecExt.lean Normal file

@@ -0,0 +1,33 @@
/-
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Attributes
public section
namespace Lean.Meta
/--
Environment extension for storing which declarations are recursive.
This information is populated by the `PreDefinition` module, but the simplifier
uses it when unfolding declarations.
-/
builtin_initialize recExt : TagDeclarationExtension ←
mkTagDeclarationExtension `recExt (asyncMode := .async .asyncEnv)
/--
Marks the given declaration as recursive.
-/
def markAsRecursive (declName : Name) : CoreM Unit :=
modifyEnv (recExt.tag · declName)
/--
Returns `true` if `declName` was defined using well-founded recursion, or structural recursion.
-/
def isRecursiveDefinition (declName : Name) : CoreM Bool :=
return recExt.isTagged (← getEnv) declName
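A sketch of how downstream code consults this extension (illustrative usage; `markAsRecursive` and `isRecursiveDefinition` are the functions defined above, while `checkRecursive` is a hypothetical helper):

```lean
open Lean Meta in
def checkRecursive (declName : Name) : CoreM Unit := do
  -- The tag is set by the `PreDefinition` module after elaborating the
  -- definition via structural or well-founded recursion.
  if (← isRecursiveDefinition declName) then
    logInfo m!"{declName} is marked recursive"
```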


@@ -23,13 +23,14 @@ public import Lean.Meta.Sym.Apply
public import Lean.Meta.Sym.InferType
public import Lean.Meta.Sym.Simp
public import Lean.Meta.Sym.Util
public import Lean.Meta.Sym.Grind
/-!
# Symbolic simulation support.
# Symbolic computation support.
This module provides `SymM`, a monad for implementing symbolic simulators (e.g., verification condition generators)
using Lean. The monad addresses performance issues found in symbolic simulators built on top of user-facing
tactics (e.g., `apply` and `intros`).
This module provides `SymM`, a monad for implementing symbolic computation (e.g., decision procedures and
verification condition generators) using Lean. The monad addresses performance issues found in symbolic
computation engines built on top of user-facing tactics (e.g., `apply` and `intros`).
## Overview
@@ -65,14 +66,14 @@ whether `maxFVar[e]` is in `?m.lctx` — a single hash lookup, O(1).
**The problem:** The `isDefEq` predicate in `MetaM` is designed for elaboration and user-facing tactics.
It supports reduction, type-class resolution, and many other features that can be expensive or have
unpredictable running time. For symbolic simulation, where pattern matching is called frequently on
unpredictable running time. For symbolic computation, where pattern matching is called frequently on
large ground terms, these features become performance bottlenecks.
**The solution:** In `SymM`, pattern matching and definitional equality are restricted to a more syntactic,
predictable subset. Key design choices:
1. **Reducible declarations are abbreviations.** Reducible declarations are eagerly expanded when indexing
terms and when entering symbolic simulation mode. During matching, we assume abbreviations have already
terms and when entering symbolic computation mode. During matching, we assume abbreviations have already
been expanded.
**Why `MetaM` `simp` cannot make this assumption**: The simplifier in `MetaM` is designed for interactive use,
@@ -99,7 +100,7 @@ predictable subset. Key design choices:
4. **Types must be indexed.** Unlike proofs and instances, types cannot be ignored; without indexing them,
pattern matching produces too many candidates. Like other abbreviations, type abbreviations are expanded.
Note that given `def Foo : Type := Bla`, the terms `Foo` and `Bla` are *not* considered structurally
equal in the symbolic simulator framework.
equal in the symbolic computation framework.
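The note above can be illustrated with a hypothetical pair of definitions:

```lean
-- `abbrev` is reducible, so it is eagerly expanded when indexing terms;
-- after expansion, `FooAbbrev` and `Nat` match structurally.
abbrev FooAbbrev : Type := Nat

-- A plain `def` is not expanded, so `FooDef` and `Nat` are *not*
-- considered structurally equal in the symbolic computation framework.
def FooDef : Type := Nat
```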
### Skipping type checks on assignment
@@ -117,7 +118,7 @@ so the check is almost always skipped.
### `GrindM` state
**The problem:** In symbolic simulation, we often want to discharge many goals using proof automation such
**The problem:** In symbolic computation, we often want to discharge many goals using proof automation such
as `grind`. Many of these goals share very similar local contexts. If we invoke `grind` on each goal
independently, we repeatedly reprocess the same hypotheses.


@@ -100,8 +100,8 @@ def mkValue (expr : Expr) (pattern : Pattern) (result : MatchUnifyResult) : Expr
mkAppN (expr.instantiateLevelParams pattern.levelParams result.us) result.args
public inductive ApplyResult where
| notApplicable
| goals (mvarId : List MVarId)
| failed
| goals (mvarIds : List MVarId)
/--
Applies a backward rule to a goal, returning new subgoals.
@@ -119,7 +119,7 @@ public def BackwardRule.apply (mvarId : MVarId) (rule : BackwardRule) : SymM App
return .goals <| rule.resultPos.map fun i =>
result.args[i]!.mvarId!
else
return .notApplicable
return .failed
/--
Similar to `BackwardRule.apply`, but throws an error if unification fails.


@@ -0,0 +1,129 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Tactic.Grind.Types
public import Lean.Meta.Sym.Simp.SimpM
public import Lean.Meta.Sym.Apply
import Lean.Meta.Tactic.Grind.Main
import Lean.Meta.Sym.Simp.Goal
import Lean.Meta.Sym.Intro
import Lean.Meta.Sym.Util
import Lean.Meta.Tactic.Grind.Solve
import Lean.Meta.Tactic.Assumption
namespace Lean.Meta.Grind
/-!
# Grind Goal API for Symbolic Simulation
This module provides an API for building symbolic simulation engines and
verification condition generators on top of `grind`. It wraps `Sym` operations
to work with `grind`'s `Goal` type, enabling users to carry `grind` state
through symbolic execution while using lightweight `Sym` operations for
the main loop.
## Typical usage pattern
```
let goal ← mkGoal mvarId
let .goal xs goal ← goal.introN 2 | failure
let .goal goal ← goal.simp methods | failure
let goal ← goal.internalizeAll
-- ... symbolic execution loop using goal.apply ...
let .closed ← goal.grind | failure
```
## Design
Operations like `introN`, `apply`, and `simp` run in `SymM` for performance.
`internalize` and `grind` run in `GrindM` to access the E-graph.
-/
/--
Creates a `Goal` from an `MVarId`, applying `Sym` preprocessing.
Preprocessing ensures the goal is compatible with `Sym` operations.
-/
public def mkGoal (mvarId : MVarId) : GrindM Goal := do
let mvarId ← Sym.preprocessMVar mvarId
mkGoalCore mvarId
open Sym (SymM)
public inductive IntrosResult where
| failed
| goal (newDecls : Array FVarId) (goal : Goal)
/-- Introduces `num` binders from the goal's target. -/
public def Goal.introN (goal : Goal) (num : Nat) : SymM IntrosResult := do
let .goal xs mvarId ← Sym.introN goal.mvarId num | return .failed
return .goal xs { goal with mvarId }
/-- Introduces binders with the specified names. -/
public def Goal.intros (goal : Goal) (names : Array Name) : SymM IntrosResult := do
let .goal xs mvarId ← Sym.intros goal.mvarId names | return .failed
return .goal xs { goal with mvarId }
public inductive ApplyResult where
| failed
| goals (subgoals : List Goal)
/-- Applies a backward rule, returning subgoals on success. -/
public def Goal.apply (goal : Goal) (rule : Sym.BackwardRule) : SymM ApplyResult := do
let .goals mvarIds ← rule.apply goal.mvarId | return .failed
return .goals <| mvarIds.map fun mvarId => { goal with mvarId }
public inductive SimpGoalResult where
| noProgress
| closed
| goal (goal : Goal)
/-- Simplifies the goal using the given methods. -/
public def Goal.simp (goal : Goal) (methods : Sym.Simp.Methods := {}) (config : Sym.Simp.Config := {}) : SymM SimpGoalResult := do
match (← Sym.simpGoal goal.mvarId methods config) with
| .goal mvarId => return .goal { goal with mvarId }
| .noProgress => return .noProgress
| .closed => return .closed
/-- Like `simp`, but returns the original goal unchanged when no progress is made. -/
public def Goal.simpIgnoringNoProgress (goal : Goal) (methods : Sym.Simp.Methods := {}) (config : Sym.Simp.Config := {}) : SymM SimpGoalResult := do
match (← Sym.simpGoal goal.mvarId methods config) with
| .goal mvarId => return .goal { goal with mvarId }
| .noProgress => return .goal goal
| .closed => return .closed
/--
Internalizes the next `num` hypotheses from the local context into the `grind` state (e.g., its E-graph).
-/
public def Goal.internalize (goal : Goal) (num : Nat) : GrindM Goal := do
Grind.processHypotheses goal (some num)
/-- Internalizes all (un-internalized) hypotheses from the local context into the `grind` state. -/
public def Goal.internalizeAll (goal : Goal) : GrindM Goal := do
Grind.processHypotheses goal none
public inductive GrindResult where
| failed (goal : Goal)
| closed
/--
Attempts to close the goal using `grind`.
Returns `.closed` on success, or `.failed` with the first subgoal that failed to be closed.
-/
public def Goal.grind (goal : Goal) : GrindM GrindResult := do
if let some failure ← solve goal then
return .failed failure
else
return .closed
/--
Closes the goal if its target matches a hypothesis.
Returns `true` on success.
-/
public def Goal.assumption (goal : Goal) : MetaM Bool := do
-- **TODO**: add indexing
goal.mvarId.assumptionCore
end Lean.Meta.Grind


@@ -96,48 +96,39 @@ def introCore (mvarId : MVarId) (max : Nat) (names : Array Name) : SymM (Array F
def hugeNat := 1000000
public inductive IntrosResult where
| failed
| goal (newDecls : Array FVarId) (mvarId : MVarId)
/--
Introduces leading binders (universal quantifiers and let-expressions) from the goal's target type.
If `names` is non-empty, introduces (at most) `names.size` binders using the provided names.
If `names` is empty, introduces all leading binders using inaccessible names.
Returns the introduced free variable Ids and the updated goal.
Throws an error if the target type does not have a leading binder.
Returns `.goal newDecls mvarId` with the newly introduced free variable IDs and the updated goal.
Returns `.failed` if no new declaration was introduced.
-/
public def intros (mvarId : MVarId) (names : Array Name := #[]) : SymM (Array FVarId × MVarId) := do
public def intros (mvarId : MVarId) (names : Array Name := #[]) : SymM IntrosResult := do
let result ← if names.isEmpty then
introCore mvarId hugeNat #[]
else
introCore mvarId names.size names
if result.1.isEmpty then
throwError "`intros` failed, binder expected"
return result
/--
Introduces a single binder from the goal's target type with the given name.
Returns the introduced free variable ID and the updated goal.
Throws an error if the target type does not have a leading binder.
-/
public def intro (mvarId : MVarId) (name : Name) : SymM (FVarId × MVarId) := do
let (fvarIds, goal') ← introCore mvarId 1 #[name]
if h : 0 < fvarIds.size then
return (fvarIds[0], goal')
else
throwError "`intro` failed, binder expected"
return .failed
return .goal result.1 result.2
/--
Introduces exactly `num` binders from the goal's target type.
Returns the introduced free variable IDs and the updated goal.
Throws an error if the target type has fewer than `num` leading binders.
Returns `.goal newDecls mvarId` if successful, where `newDecls` are the introduced free variable IDs
and `mvarId` is the updated goal.
Returns `.failed` if it was not possible to introduce `num` new local declarations.
-/
public def introN (mvarId : MVarId) (num : Nat) : SymM (Array FVarId × MVarId) := do
public def introN (mvarId : MVarId) (num : Nat) : SymM IntrosResult := do
let result ← introCore mvarId num #[]
unless result.1.size == num do
throwError "`introN` failed, insufficient number of binders"
return result
return .failed
return .goal result.1 result.2
end Lean.Meta.Sym
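Callers of the reworked API now match on `IntrosResult` instead of handling exceptions. An illustrative sketch (the helper name is hypothetical):

```lean
open Lean Meta Sym in
def introTwoOrFail (mvarId : MVarId) : SymM (Array FVarId × MVarId) := do
  -- `introN` reports failure as a value rather than throwing.
  match (← introN mvarId 2) with
  | .goal newDecls mvarId' => return (newDecls, mvarId')
  | .failed => throwError "expected at least two leading binders"
```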


@@ -224,7 +224,7 @@ position. However, the type is only meaningful (non-`default`) when `Result` is
`.step`, since we only need types for constructing congruence proofs. This avoids
unnecessary type inference when no rewriting occurs.
-/
def simpFixedPrefix (e : Expr) (prefixSize : Nat) (suffixSize : Nat) : SimpM Result := do
public def simpFixedPrefix (e : Expr) (prefixSize : Nat) (suffixSize : Nat) : SimpM Result := do
let numArgs := e.getAppNumArgs
if numArgs ≤ prefixSize then
-- Nothing to be done
@@ -274,7 +274,7 @@ Uses `rewritable[i]` to determine whether argument `i` should be simplified.
For rewritable arguments, calls `simp` and uses `congrFun'`, `congrArg`, and `congr`; for fixed arguments,
uses `congrFun` to propagate changes from earlier arguments.
-/
def simpInterlaced (e : Expr) (rewritable : Array Bool) : SimpM Result := do
public def simpInterlaced (e : Expr) (rewritable : Array Bool) : SimpM Result := do
let numArgs := e.getAppNumArgs
if h : numArgs = 0 then
-- Nothing to be done


@@ -27,16 +27,16 @@ def simpIte : Simproc := fun e => do
let_expr f@ite α c _ a b := e | return .rfl
match (← simp c) with
| .rfl _ =>
if c.isTrue then
if isSameExpr c (← getTrueExpr) then
return .step a <| mkApp3 (mkConst ``ite_true f.constLevels!) α a b
else if c.isFalse then
else if isSameExpr c (← getFalseExpr) then
return .step b <| mkApp3 (mkConst ``ite_false f.constLevels!) α a b
else
return .rfl (done := true)
| .step c' h _ =>
if c'.isTrue then
if isSameExpr c' (← getTrueExpr) then
return .step a <| mkApp (e.replaceFn ``ite_cond_eq_true) h
else if c'.isFalse then
else if isSameExpr c' (← getFalseExpr) then
return .step b <| mkApp (e.replaceFn ``ite_cond_eq_false) h
else
let .some inst' ← trySynthInstance (mkApp (mkConst ``Decidable) c') | return .rfl
@@ -56,20 +56,20 @@ def simpDIte : Simproc := fun e => do
let_expr f@dite α c _ a b := e | return .rfl
match (← simp c) with
| .rfl _ =>
if c.isTrue then
if isSameExpr c (← getTrueExpr) then
let a' ← share <| a.betaRev #[mkConst ``True.intro]
return .step a' <| mkApp3 (mkConst ``dite_true f.constLevels!) α a b
else if c.isFalse then
else if isSameExpr c (← getFalseExpr) then
let b' ← share <| b.betaRev #[mkConst ``not_false]
return .step b' <| mkApp3 (mkConst ``dite_false f.constLevels!) α a b
else
return .rfl (done := true)
| .step c' h _ =>
if c'.isTrue then
if isSameExpr c' (← getTrueExpr) then
let h' ← shareCommon <| mkOfEqTrueCore c h
let a ← share <| a.betaRev #[h']
return .step a <| mkApp (e.replaceFn ``dite_cond_eq_true) h
else if c'.isFalse then
else if isSameExpr c' (← getFalseExpr) then
let h' ← shareCommon <| mkOfEqFalseCore c h
let b ← share <| b.betaRev #[h']
return .step b <| mkApp (e.replaceFn ``dite_cond_eq_false) h
@@ -94,16 +94,16 @@ def simpCond : Simproc := fun e => do
let_expr f@cond α c a b := e | return .rfl
match (← simp c) with
| .rfl _ =>
if c.isConstOf ``true then
if isSameExpr c (← getBoolTrueExpr) then
return .step a <| mkApp3 (mkConst ``cond_true f.constLevels!) α a b
else if c.isConstOf ``false then
else if isSameExpr c (← getBoolFalseExpr) then
return .step b <| mkApp3 (mkConst ``cond_false f.constLevels!) α a b
else
return .rfl (done := true)
| .step c' h _ =>
if c'.isConstOf ``true then
if isSameExpr c' (← getBoolTrueExpr) then
return .step a <| mkApp (e.replaceFn ``Sym.cond_cond_eq_true) h
else if c'.isConstOf ``false then
else if isSameExpr c' (← getBoolFalseExpr) then
return .step b <| mkApp (e.replaceFn ``Sym.cond_cond_eq_false) h
else
let e' := e.getBoundedAppFn 3


@@ -344,10 +344,10 @@ abbrev evalBinPred (toValue? : Expr → Option α) (trueThm falseThm : Expr) (op
let some va := toValue? a | return .rfl
let some vb := toValue? b | return .rfl
if op va vb then
let e ← share <| mkConst ``True
let e ← getTrueExpr
return .step e (mkApp3 trueThm a b eagerReflBoolTrue) (done := true)
else
let e ← share <| mkConst ``False
let e ← getFalseExpr
return .step e (mkApp3 falseThm a b eagerReflBoolFalse) (done := true)
def evalBitVecPred (n : Expr) (trueThm falseThm : Expr) (op : {n : Nat} → BitVec n → BitVec n → Bool) (a b : Expr) : SimpM Result := do
@@ -355,10 +355,10 @@ def evalBitVecPred (n : Expr) (trueThm falseThm : Expr) (op : {n : Nat} → BitV
let some vb := getBitVecValue? b | return .rfl
if h : va.n = vb.n then
if op va.val (h ▸ vb.val) then
let e ← share <| mkConst ``True
let e ← getTrueExpr
return .step e (mkApp4 trueThm n a b eagerReflBoolTrue) (done := true)
else
let e ← share <| mkConst ``False
let e ← getFalseExpr
return .step e (mkApp4 falseThm n a b eagerReflBoolFalse) (done := true)
else
return .rfl
@@ -368,10 +368,10 @@ def evalFinPred (n : Expr) (trueThm falseThm : Expr) (op : {n : Nat} → Fin n
let some vb := getFinValue? b | return .rfl
if h : va.n = vb.n then
if op va.val (h ▸ vb.val) then
let e ← share <| mkConst ``True
let e ← getTrueExpr
return .step e (mkApp4 trueThm n a b eagerReflBoolTrue) (done := true)
else
let e ← share <| mkConst ``False
let e ← getFalseExpr
return .step e (mkApp4 falseThm n a b eagerReflBoolFalse) (done := true)
else
return .rfl
@@ -418,7 +418,7 @@ def evalLE (α : Expr) (a b : Expr) : SimpM Result :=
def evalEq (α : Expr) (a b : Expr) : SimpM Result :=
if isSameExpr a b then do
let e ← share <| mkConst ``True
let e ← getTrueExpr
let u ← getLevel α
return .step e (mkApp2 (mkConst ``eq_self [u]) α a) (done := true)
else match_expr α with
@@ -512,8 +512,8 @@ def evalNot (a : Expr) : SimpM Result :=
**Note**: We added `evalNot` because some abbreviations expanded into `Not`s.
-/
match_expr a with
| True => return .step (← mkConstS ``False) (mkConst ``Sym.not_true_eq) (done := true)
| False => return .step (← mkConstS ``True) (mkConst ``Sym.not_false_eq) (done := true)
| True => return .step (← getFalseExpr) (mkConst ``Sym.not_true_eq) (done := true)
| False => return .step (← getTrueExpr) (mkConst ``Sym.not_false_eq) (done := true)
| _ => return .rfl
public structure EvalStepConfig where


@@ -72,7 +72,7 @@ public def simpGoal (mvarId : MVarId) (methods : Simp.Methods := {}) (config :
/--
Similar to `simpGoal`, but returns `.goal mvarId` if no progress was made.
-/
public def trySimpGoal (mvarId : MVarId) (methods : Simp.Methods := {}) (config : Simp.Config := {})
public def simpGoalIgnoringNoProgress (mvarId : MVarId) (methods : Simp.Methods := {}) (config : Simp.Config := {})
: SymM SimpGoalResult := do
match (← simpGoal mvarId methods config) with
| .noProgress => return .goal mvarId


@@ -22,4 +22,8 @@ public abbrev mkEqTransResult (e₁ : Expr) (e₂ : Expr) (h₁ : Expr) (r₂ :
| .rfl done => return .step e₂ h₁ done
| .step e₃ h₂ done => return .step e₃ (← mkEqTrans e₁ e₂ h₁ e₃ h₂) done
public def Result.markAsDone : Result → Result
| .rfl _ => .rfl true
| .step e h _ => .step e h true
end Lean.Meta.Sym.Simp


@@ -83,7 +83,21 @@ inductive CongrInfo where
-/
congrTheorem (thm : CongrTheorem)
/-- Mutable state for the symbolic simulator framework. -/
/-- Pre-shared expressions for commonly used terms. -/
structure SharedExprs where
trueExpr : Expr
falseExpr : Expr
natZExpr : Expr
btrueExpr : Expr
bfalseExpr : Expr
ordEqExpr : Expr
intExpr : Expr
/-- Readonly context for the symbolic computation framework. -/
structure Context where
sharedExprs : SharedExprs
/-- Mutable state for the symbolic computation framework. -/
structure State where
/-- `ShareCommon` (aka `Hash-consing`) state. -/
share : AlphaShareCommon.State := {}
@@ -120,11 +134,41 @@ structure State where
congrInfo : PHashMap ExprPtr CongrInfo := {}
debug : Bool := false
- abbrev SymM := StateRefT State MetaM
+ abbrev SymM := ReaderT Context <| StateRefT State MetaM
private def mkSharedExprs : AlphaShareCommonM SharedExprs := do
let falseExpr ← shareCommonAlphaInc <| mkConst ``False
let trueExpr ← shareCommonAlphaInc <| mkConst ``True
let bfalseExpr ← shareCommonAlphaInc <| mkConst ``Bool.false
let btrueExpr ← shareCommonAlphaInc <| mkConst ``Bool.true
let natZExpr ← shareCommonAlphaInc <| mkNatLit 0
let ordEqExpr ← shareCommonAlphaInc <| mkConst ``Ordering.eq
let intExpr ← shareCommonAlphaInc <| Int.mkType
return { falseExpr, trueExpr, bfalseExpr, btrueExpr, natZExpr, ordEqExpr, intExpr }
def SymM.run (x : SymM α) : MetaM α := do
let (sharedExprs, share) := mkSharedExprs |>.run {}
let debug := sym.debug.get (← getOptions)
- x |>.run' { debug }
+ x { sharedExprs } |>.run' { debug, share }
/-- Returns maximally shared commonly used terms -/
def getSharedExprs : SymM SharedExprs :=
return (← read).sharedExprs
/-- Returns the internalized `True` constant. -/
def getTrueExpr : SymM Expr := return (← getSharedExprs).trueExpr
/-- Returns the internalized `False` constant. -/
def getFalseExpr : SymM Expr := return (← getSharedExprs).falseExpr
/-- Returns the internalized `Bool.true`. -/
def getBoolTrueExpr : SymM Expr := return (← getSharedExprs).btrueExpr
/-- Returns the internalized `Bool.false`. -/
def getBoolFalseExpr : SymM Expr := return (← getSharedExprs).bfalseExpr
/-- Returns the internalized `0 : Nat` numeral. -/
def getNatZeroExpr : SymM Expr := return (← getSharedExprs).natZExpr
/-- Returns the internalized `Ordering.eq`. -/
def getOrderingEqExpr : SymM Expr := return (← getSharedExprs).ordEqExpr
/-- Returns the internalized `Int`. -/
def getIntExpr : SymM Expr := return (← getSharedExprs).intExpr
/--
Applies hash-consing to `e`. Recall that all expressions in a `grind` goal have


@@ -107,13 +107,6 @@ private def discharge? (e : Expr) : SimpM (Option Expr) := do
open Sym
def GrindM.run (x : GrindM α) (params : Params) (evalTactic? : Option EvalTactic := none) : MetaM α := Sym.SymM.run do
- let falseExpr ← share <| mkConst ``False
- let trueExpr ← share <| mkConst ``True
- let bfalseExpr ← share <| mkConst ``Bool.false
- let btrueExpr ← share <| mkConst ``Bool.true
- let natZExpr ← share <| mkNatLit 0
- let ordEqExpr ← share <| mkConst ``Ordering.eq
- let intExpr ← share <| Int.mkType
/- **Note**: Consider using `Sym.simp` in the future. -/
let simprocs := params.normProcs
let simpMethods := Simp.mkMethods simprocs discharge? (wellBehavedDischarge := true)
@@ -124,9 +117,7 @@ def GrindM.run (x : GrindM α) (params : Params) (evalTactic? : Option EvalTacti
let anchorRefs? := params.anchorRefs?
let debug := grind.debug.get (← getOptions)
x (← mkMethods evalTactic?).toMethodsRef
- { config, anchorRefs?, simpMethods, simp, extensions, symPrios
-   trueExpr, falseExpr, natZExpr, btrueExpr, bfalseExpr, ordEqExpr, intExpr
-   debug }
+ { config, anchorRefs?, simpMethods, simp, extensions, symPrios, debug }
|>.run' {}
private def mkCleanState (mvarId : MVarId) : GrindM Clean.State := mvarId.withContext do
@@ -155,7 +146,7 @@ private def initENodeCore (e : Expr) (interpreted ctor : Bool) : GoalM Unit := d
mkENodeCore e interpreted ctor (generation := 0) (funCC := false)
/-- Returns a new goal for the given metavariable. -/
- public def mkGoal (mvarId : MVarId) : GrindM Goal := do
+ public def mkGoalCore (mvarId : MVarId) : GrindM Goal := do
let config ← getConfig
let mvarId ← if config.clean then mvarId.exposeNames else pure mvarId
let trueExpr ← getTrueExpr
@@ -288,7 +279,7 @@ private def initCore (mvarId : MVarId) : GrindM Goal := do
let mvarId ← mvarId.unfoldReducible
let mvarId ← mvarId.betaReduce
appendTagSuffix mvarId `grind
- let goal ← mkGoal mvarId
+ let goal ← mkGoalCore mvarId
if config.revert then
return goal
else


@@ -160,15 +160,10 @@ structure Context where
/-- Symbol priorities for inferring E-matching patterns -/
symPrios : SymbolPriorities
extensions : ExtensionStateArray := #[]
- trueExpr : Expr
- falseExpr : Expr
- natZExpr : Expr
- btrueExpr : Expr
- bfalseExpr : Expr
- ordEqExpr : Expr -- `Ordering.eq`
- intExpr : Expr -- `Int`
debug : Bool -- Cached `grind.debug (← getOptions)`
+ export Sym (getTrueExpr getFalseExpr getBoolTrueExpr getBoolFalseExpr getNatZeroExpr getOrderingEqExpr getIntExpr)
/-- Key for the congruence theorem cache. -/
structure CongrTheoremCacheKey where
f : Expr
@@ -305,34 +300,6 @@ abbrev withGTransparency [MonadControlT MetaM n] [MonadLiftT GrindM n] [Monad n]
let m := if (← getConfig).reducible then .reducible else .default
withTransparency m k
- /-- Returns the internalized `True` constant. -/
- def getTrueExpr : GrindM Expr := do
-   return (← readThe Context).trueExpr
- /-- Returns the internalized `False` constant. -/
- def getFalseExpr : GrindM Expr := do
-   return (← readThe Context).falseExpr
- /-- Returns the internalized `Bool.true`. -/
- def getBoolTrueExpr : GrindM Expr := do
-   return (← readThe Context).btrueExpr
- /-- Returns the internalized `Bool.false`. -/
- def getBoolFalseExpr : GrindM Expr := do
-   return (← readThe Context).bfalseExpr
- /-- Returns the internalized `0 : Nat` numeral. -/
- def getNatZeroExpr : GrindM Expr := do
-   return (← readThe Context).natZExpr
- /-- Returns the internalized `Ordering.eq`. -/
- def getOrderingEqExpr : GrindM Expr := do
-   return (← readThe Context).ordEqExpr
- /-- Returns the internalized `Int`. -/
- def getIntExpr : GrindM Expr := do
-   return (← readThe Context).intExpr
/-- Returns the anchor references (if any) being used to restrict the search. -/
def getAnchorRefs : GrindM (Option (Array AnchorRef)) := do
return (← readThe Context).anchorRefs?


@@ -188,6 +188,12 @@ def applyEqLemma (e : Expr → EqResult) (lemmaName : Name) (args : Array Expr)
return .some (e (mkAppN (mkConst lemmaName) args))
def reduceNatEqExpr (x y : Expr) : SimpM (Option EqResult) := do
/-
**TODO**: These proofs rely too much on definitional equality.
Example:
`x + 1 + 1 + ... + 1 = x + 1 + ... + 1`
It will treat both sides as `x + n = x + n`.
-/
let some xno ← NatOffset.fromExpr? x | return none
let some yno ← NatOffset.fromExpr? y | return none
match xno, yno with


@@ -324,6 +324,55 @@ LEAN_EXPORT LEAN_NORETURN void lean_internal_panic(char const * msg);
LEAN_EXPORT LEAN_NORETURN void lean_internal_panic_out_of_memory(void);
LEAN_EXPORT LEAN_NORETURN void lean_internal_panic_unreachable(void);
LEAN_EXPORT LEAN_NORETURN void lean_internal_panic_rc_overflow(void);
LEAN_EXPORT LEAN_NORETURN void lean_internal_panic_overflow(void);
static inline bool lean_usize_mul_would_overflow(size_t a, size_t b) {
#if defined(__GNUC__) || defined(__clang__)
size_t r;
return __builtin_mul_overflow(a, b, &r);
#else
return a != 0 && b > SIZE_MAX / a;
#endif
}
static inline bool lean_usize_add_would_overflow(size_t a, size_t b) {
#if defined(__GNUC__) || defined(__clang__)
size_t r;
return __builtin_add_overflow(a, b, &r);
#else
return a > SIZE_MAX - b;
#endif
}
static inline size_t lean_usize_mul_checked(size_t a, size_t b) {
#if defined(__GNUC__) || defined(__clang__)
size_t r;
if (LEAN_UNLIKELY(__builtin_mul_overflow(a, b, &r))) {
lean_internal_panic_overflow();
}
return r;
#else
if (a != 0 && b > SIZE_MAX / a) {
lean_internal_panic_overflow();
}
return a * b;
#endif
}
static inline size_t lean_usize_add_checked(size_t a, size_t b) {
#if defined(__GNUC__) || defined(__clang__)
size_t r;
if (LEAN_UNLIKELY(__builtin_add_overflow(a, b, &r))) {
lean_internal_panic_overflow();
}
return r;
#else
if (a > SIZE_MAX - b) {
lean_internal_panic_overflow();
}
return a + b;
#endif
}
static inline size_t lean_align(size_t v, size_t a) {
return (v / a)*a + a * (v % a != 0);
@@ -609,7 +658,7 @@ static inline uint8_t * lean_ctor_scalar_cptr(lean_object * o) {
static inline lean_object * lean_alloc_ctor(unsigned tag, unsigned num_objs, unsigned scalar_sz) {
assert(tag <= LeanMaxCtorTag && num_objs < LEAN_MAX_CTOR_FIELDS && scalar_sz < LEAN_MAX_CTOR_SCALARS_SIZE);
- lean_object * o = lean_alloc_ctor_memory(sizeof(lean_ctor_object) + sizeof(void*)*num_objs + scalar_sz);
+ lean_object * o = lean_alloc_ctor_memory(lean_usize_add_checked(lean_usize_add_checked(sizeof(lean_ctor_object), lean_usize_mul_checked(sizeof(void*), num_objs)), scalar_sz));
lean_set_st_header(o, tag, num_objs);
return o;
}
@@ -715,7 +764,7 @@ static inline lean_object ** lean_closure_arg_cptr(lean_object * o) { return lea
static inline lean_obj_res lean_alloc_closure(void * fun, unsigned arity, unsigned num_fixed) {
assert(arity > 0);
assert(num_fixed < arity);
- lean_closure_object * o = (lean_closure_object*)lean_alloc_object(sizeof(lean_closure_object) + sizeof(void*)*num_fixed);
+ lean_closure_object * o = (lean_closure_object*)lean_alloc_object(lean_usize_add_checked(sizeof(lean_closure_object), lean_usize_mul_checked(sizeof(void*), num_fixed)));
lean_set_st_header((lean_object*)o, LeanClosure, 0);
o->m_fun = fun;
o->m_arity = arity;
@@ -761,7 +810,7 @@ LEAN_EXPORT lean_object* lean_apply_m(lean_object* f, unsigned n, lean_object**
/* Arrays of objects (low level API) */
static inline lean_obj_res lean_alloc_array(size_t size, size_t capacity) {
- lean_array_object * o = (lean_array_object*)lean_alloc_object(sizeof(lean_array_object) + sizeof(void*)*capacity);
+ lean_array_object * o = (lean_array_object*)lean_alloc_object(lean_usize_add_checked(sizeof(lean_array_object), lean_usize_mul_checked(sizeof(void*), capacity)));
lean_set_st_header((lean_object*)o, LeanArray, 0);
o->m_size = size;
o->m_capacity = capacity;
@@ -934,8 +983,18 @@ LEAN_EXPORT lean_object * lean_mk_array(lean_obj_arg n, lean_obj_arg v);
/* Array of scalars */
static inline bool lean_alloc_sarray_would_overflow(unsigned elem_size, size_t capacity) {
if (lean_usize_mul_would_overflow(elem_size, capacity)) {
return true;
}
if (lean_usize_add_would_overflow(sizeof(lean_sarray_object), elem_size * capacity)) {
return true;
}
return false;
}
static inline lean_obj_res lean_alloc_sarray(unsigned elem_size, size_t size, size_t capacity) {
- lean_sarray_object * o = (lean_sarray_object*)lean_alloc_object(sizeof(lean_sarray_object) + elem_size*capacity);
+ lean_sarray_object * o = (lean_sarray_object*)lean_alloc_object(lean_usize_add_checked(sizeof(lean_sarray_object), lean_usize_mul_checked(elem_size, capacity)));
lean_set_st_header((lean_object*)o, LeanScalarArray, elem_size);
o->m_size = size;
o->m_capacity = capacity;
@@ -1090,7 +1149,7 @@ static inline lean_obj_res lean_float_array_set(lean_obj_arg a, b_lean_obj_arg i
/* Strings */
static inline lean_obj_res lean_alloc_string(size_t size, size_t capacity, size_t len) {
- lean_string_object * o = (lean_string_object*)lean_alloc_object(sizeof(lean_string_object) + capacity);
+ lean_string_object * o = (lean_string_object*)lean_alloc_object(lean_usize_add_checked(sizeof(lean_string_object), capacity));
lean_set_st_header((lean_object*)o, LeanString, 0);
o->m_size = size;
o->m_capacity = capacity;


@@ -154,7 +154,6 @@ public instance [h : FamilyDef CustomOut (p, t) α] : FamilyDef (CustomData p) t
/-! ## Build Data -/
--------------------------------------------------------------------------------
- set_option linter.deprecated false in
/--
A mapping between a build key and its associated build data in the store.
It is a simple type function composed of the separate open type families for
@@ -178,7 +177,6 @@ public instance [FamilyOut DataType Module.facetKind α]
: FamilyDef BuildData (.packageModule p m) α where
fam_eq := by unfold BuildData; simp
- set_option linter.deprecated false in
public instance [FamilyOut DataType Module.facetKind α]
: FamilyDef BuildData (.module k) α where
fam_eq := by unfold BuildData; simp


@@ -77,7 +77,11 @@ where
else
return .package (stringToLegalOrSimpleName pkg.copy)
else if target.startsWith "+" then
- return .module (stringToLegalOrSimpleName (target.drop 1).copy)
+ let mod := target.drop 1
+ if mod.isEmpty then
+ throw "ill-formed target: expected module name after '+'"
+ else
+ return .module (stringToLegalOrSimpleName mod.copy)
else
parsePackageTarget .anonymous target
| [pkg, target] =>
@@ -241,5 +245,3 @@ public instance : Std.LawfulEqCmp quickCmp where
· simp only [quickCmp]
split <;> simp_all
· simp_all [quickCmp]
- attribute [deprecated packageModule (since := "2025-11-13")] module


@@ -357,6 +357,7 @@ private def Package.discriminant (self : Package) :=
else
s!"{self.prettyName}@{self.version}"
set_option linter.unusedVariables.funArgs false in
private def fetchImportInfo
(fileName : String) (pkgName modName : Name) (header : ModuleHeader)
: FetchM (Job ModuleImportInfo) := do
@@ -381,22 +382,23 @@ private def fetchImportInfo
let importJob ← mod.exportInfo.fetch
return s.zipWith (sync := true) (·.addImport nonModule imp ·) importJob
else
- let isImportable (mod) :=
-   mod.allowImportAll || pkgName == mod.pkg.keyName
- let allImportable :=
-   if imp.importAll then
-     mods.all isImportable
-   else true
- unless allImportable do
-   let msg := s!"{fileName}: cannot `import all` the module `{imp.module}` \
-     from the following packages:"
-   let msg := mods.foldl (init := msg) fun msg mod =>
-     if isImportable mod then
-       msg
-     else
-       s!"{msg}\n  {mod.pkg.discriminant}"
-   logError msg
-   return .error
+ -- Remark: We've decided to disable this check for now
+ -- let isImportable (mod) :=
+ --   mod.allowImportAll || pkgName == mod.pkg.keyName
+ -- let allImportable :=
+ --   if imp.importAll then
+ --     mods.all isImportable
+ --   else true
+ -- unless allImportable do
+ --   let msg := s!"{fileName}: cannot `import all` the module `{imp.module}` \
+ --     from the following packages:"
+ --   let msg := mods.foldl (init := msg) fun msg mod =>
+ --     if isImportable mod then
+ --       msg
+ --     else
+ --       s!"{msg}\n  {mod.pkg.discriminant}"
+ --   logError msg
+ --   return .error
let mods : Vector Module n := .mk mods rfl
let expInfosJob ← Job.collectVector <$> mods.mapM (·.exportInfo.fetch)
s.bindM (sync := true) fun impInfo => do


@@ -120,11 +120,10 @@ private def reportJob (job : OpaqueJob) : MonitorM PUnit := do
let {log, action, wantsRebuild, buildTime, ..} := task.get.state
let maxLv := log.maxLv
let failed := strictAnd log.hasEntries (maxLv ≥ failLv)
+ if wantsRebuild then
+ modify fun s => if s.wantsRebuild then s else {s with wantsRebuild := true}
if failed && !optional then
- modify fun s => {s with
-   failures := s.failures.push caption
-   wantsRebuild := s.wantsRebuild || wantsRebuild
- }
+ modify fun s => {s with failures := s.failures.push caption}
let hasOutput := failed || (log.hasEntries && maxLv ≥ outLv)
let showJob :=
(!optional || showOptional) &&
@@ -335,10 +334,14 @@ def Workspace.finalizeBuild
reportResult cfg ctx.out result
if let some outputsFile := cfg.outputsFile? then
ws.saveOutputs (logger := ctx.logger) ctx.out outputsFile (cfg.verbosity matches .verbose)
- if cfg.noBuild && result.wantsRebuild then
-   IO.Process.exit noBuildCode.toUInt8
- else
-   IO.ofExcept result.out
+ match result.out with
+ | .ok a =>
+   return a
+ | .error e =>
+   if cfg.noBuild && result.wantsRebuild then
+     IO.Process.exit noBuildCode.toUInt8
+   else
+     throw (IO.userError e)
/--
Run a build function in the Workspace's context using the provided configuration.


@@ -15,8 +15,7 @@ open Lean
namespace Lake
set_option linter.deprecated false in
- variable (defaultPkg : Package) (root : BuildKey) in
+ variable (defaultPkg : Package) (root : PartialBuildKey) in
private def PartialBuildKey.fetchInCoreAux
(self : PartialBuildKey) (facetless : Bool := false)
: FetchM ((key : BuildKey) × Job (BuildData key)) := do
@@ -24,7 +23,7 @@ private def PartialBuildKey.fetchInCoreAux
| .module modName =>
let some mod ← findModule? modName
| error s!"invalid target '{root}': module '{modName}' not found in workspace"
- return ⟨.module modName, cast (by simp) <| Job.pure mod⟩
+ return ⟨.packageModule mod.pkg.keyName modName, cast (by simp) <| Job.pure mod⟩
| .package pkgName =>
let pkg ← resolveTargetPackageD pkgName
return ⟨.package pkg.keyName, cast (by simp) <| Job.pure pkg⟩
@@ -98,7 +97,6 @@ Fetches the target specified by this key, resolving gaps as needed.
@[inline] public def PartialBuildKey.fetchIn (defaultPkg : Package) (self : PartialBuildKey) : FetchM OpaqueJob :=
(·.2.toOpaque) <$> fetchInCore defaultPkg self
- set_option linter.deprecated false in
variable (root : BuildKey) in
private def BuildKey.fetchCore
(self : BuildKey)
@@ -109,17 +107,17 @@ private def BuildKey.fetchCore
| error s!"invalid target '{root}': module '{modName}' not found in workspace"
return cast (by simp) <| Job.pure mod
| package pkgName =>
- let some pkg ← findPackage? pkgName
+ let some pkg ← findPackageByKey? pkgName
| error s!"invalid target '{root}': package '{pkgName}' not found in workspace"
return cast (by simp) <| Job.pure pkg.toPackage
| packageModule pkgName modName =>
- let some pkg ← findPackage? pkgName
+ let some pkg ← findPackageByKey? pkgName
| error s!"invalid target '{root}': package '{pkgName}' not found in workspace"
let some mod := pkg.findTargetModule? modName
- | error s!"invalid target '{root}': module '{modName}' not found in package '{pkg.name}'"
+ | error s!"invalid target '{root}': module '{modName}' not found in package '{pkg.prettyName}'"
return cast (by simp) <| Job.pure mod
| packageTarget pkgName target =>
- let some pkg ← findPackage? pkgName
+ let some pkg ← findPackageByKey? pkgName
| error s!"invalid target '{root}': package '{pkgName}' not found in workspace"
fetch <| pkg.target target
| facet target facetName =>


@@ -221,7 +221,6 @@ protected def PathPatDescr.toLean? (p : PathPatDescr) : Option Term :=
instance : ToLean? PathPatDescr := PathPatDescr.toLean?
- set_option linter.deprecated false in
@[inline] protected def PartialBuildKey.toLean (k : PartialBuildKey) : Term :=
go k []
where


@@ -25,9 +25,15 @@ export ToText (toText)
public instance (priority := 0) [ToString α] : ToText α := toString
@[inline] public def listToLines (as : List α) (f : α → String) : String :=
as.foldl (s!"{·}{f ·}\n") "" |>.dropEnd 1 |>.copy
@[inline] public def arrayToLines (as : Array α) (f : α → String) : String :=
as.foldl (s!"{·}{f ·}\n") "" |>.dropEnd 1 |>.copy
public instance : ToText Json := Json.compress
- public instance [ToText α] : ToText (List α) := (·.foldl (s!"{·}{toText ·}\n") "" |>.dropEnd 1 |>.copy)
- public instance [ToText α] : ToText (Array α) := (·.foldl (s!"{·}{toText ·}\n") "" |>.dropEnd 1 |>.copy)
+ public instance [ToText α] : ToText (List α) := (listToLines · toText)
+ public instance [ToText α] : ToText (Array α) := (arrayToLines · toText)
/-- Class used to format target output as text for `lake query`. -/
public class QueryText (α : Type u) where
@@ -38,6 +44,8 @@ export QueryText (queryText)
public instance (priority := 0) : QueryText α := fun _ => ""
public instance (priority := low) [ToText α] : QueryText α := toText
public instance [QueryText α] : QueryText (List α) := (listToLines · queryText)
public instance [QueryText α] : QueryText (Array α) := (arrayToLines · queryText)
public instance : QueryText Unit := fun _ => ""
attribute [deprecated QueryText (since := "2025-07-25")] ToText
@@ -51,6 +59,8 @@ export QueryJson (queryJson)
public instance (priority := 0) : QueryJson α := fun _ => .null
public instance (priority := low) [ToJson α] : QueryJson α := toJson
public instance [QueryJson α] : QueryJson (List α) := (Json.arr <| ·.toArray.map queryJson)
public instance [QueryJson α] : QueryJson (Array α) := (Json.arr <| ·.map queryJson)
public instance : QueryJson Unit := fun _ => .null
/-- Class used to format target output for `lake query`. -/


@@ -39,8 +39,7 @@ def expandModuleTargetKeyLit : Macro := fun stx => do
| Macro.throwUnsupported
withRef tk do
let modLit := Name.quoteFrom mod mod.getId
- let pkgLit := Name.quoteFrom tk Name.anonymous
- let tgt ← `(BuildKey.packageModule $pkgLit $modLit)
+ let tgt ← `(BuildKey.module $modLit)
let key ← expandFacets tgt facets
`(PartialBuildKey.mk $key)


@@ -143,7 +143,7 @@ object * object_compactor::copy_object(object * o) {
void object_compactor::insert_sarray(object * o) {
size_t sz = lean_sarray_size(o);
unsigned elem_sz = lean_sarray_elem_size(o);
- size_t obj_sz = sizeof(lean_sarray_object) + elem_sz*sz;
+ size_t obj_sz = lean_usize_add_checked(sizeof(lean_sarray_object), lean_usize_mul_checked(elem_sz, sz));
lean_sarray_object * new_o = (lean_sarray_object*)alloc(obj_sz);
lean_set_non_heap_header_for_big((lean_object*)new_o, LeanScalarArray, elem_sz);
new_o->m_size = sz;
@@ -155,7 +155,7 @@ void object_compactor::insert_sarray(object * o) {
void object_compactor::insert_string(object * o) {
size_t sz = lean_string_size(o);
size_t len = lean_string_len(o);
- size_t obj_sz = sizeof(lean_string_object) + sz;
+ size_t obj_sz = lean_usize_add_checked(sizeof(lean_string_object), sz);
lean_string_object * new_o = (lean_string_object*)alloc(obj_sz);
lean_set_non_heap_header_for_big((lean_object*)new_o, LeanString, 0);
new_o->m_size = sz;
@@ -214,7 +214,7 @@ bool object_compactor::insert_array(object * o) {
}
if (missing_children)
return false;
- size_t obj_sz = sizeof(lean_array_object) + sizeof(void*)*sz;
+ size_t obj_sz = lean_usize_add_checked(sizeof(lean_array_object), lean_usize_mul_checked(sizeof(void*), sz));
lean_array_object * new_o = (lean_array_object*)alloc(obj_sz);
lean_set_non_heap_header_for_big((lean_object*)new_o, LeanArray, 0);
new_o->m_size = sz;
@@ -274,8 +274,8 @@ bool object_compactor::insert_promise(object * o) {
void object_compactor::insert_mpz(object * o) {
#ifdef LEAN_USE_GMP
size_t nlimbs = mpz_size(to_mpz(o)->m_value.m_val);
- size_t data_sz = sizeof(mp_limb_t) * nlimbs;
- size_t sz = sizeof(mpz_object) + data_sz;
+ size_t data_sz = lean_usize_mul_checked(sizeof(mp_limb_t), nlimbs);
+ size_t sz = lean_usize_add_checked(sizeof(mpz_object), data_sz);
mpz_object * new_o = (mpz_object *)alloc(sz);
memcpy(new_o, to_mpz(o), sizeof(mpz_object));
lean_set_non_heap_header((lean_object*)new_o, sz, LeanMPZ, 0);
@@ -287,8 +287,8 @@ void object_compactor::insert_mpz(object * o) {
m._mp_alloc = nlimbs;
save(o, (lean_object*)new_o);
#else
- size_t data_sz = sizeof(mpn_digit) * to_mpz(o)->m_value.m_size;
- size_t sz = sizeof(mpz_object) + data_sz;
+ size_t data_sz = lean_usize_mul_checked(sizeof(mpn_digit), to_mpz(o)->m_value.m_size);
+ size_t sz = lean_usize_add_checked(sizeof(mpz_object), data_sz);
mpz_object * new_o = (mpz_object *)alloc(sz);
// Manually copy the `mpz_object` to ensure `mpz` struct padding is left as
// zero as prepared by `object_compactor::alloc`. `memcpy` would copy the


@@ -583,6 +583,9 @@ extern "C" LEAN_EXPORT obj_res lean_io_prim_handle_truncate(b_obj_arg h) {
/* Handle.read : (@& Handle) → USize → IO ByteArray */
extern "C" LEAN_EXPORT obj_res lean_io_prim_handle_read(b_obj_arg h, usize nbytes) {
FILE * fp = io_get_handle(h);
if (lean_alloc_sarray_would_overflow(1, nbytes)) {
return io_result_mk_error(decode_io_error(ENOMEM, NULL));
}
obj_res res = lean_alloc_sarray(1, 0, nbytes);
usize n = std::fread(lean_sarray_cptr(res), 1, nbytes, fp);
if (n > 0) {
@@ -861,6 +864,9 @@ extern "C" LEAN_EXPORT obj_res lean_io_get_random_bytes (size_t nbytes) {
}
#endif
if (lean_alloc_sarray_would_overflow(1, nbytes)) {
return io_result_mk_error(decode_io_error(ENOMEM, NULL));
}
obj_res res = lean_alloc_sarray(1, 0, nbytes);
size_t remain = nbytes;
uint8_t *dst = lean_sarray_cptr(res);


@@ -340,7 +340,7 @@ static void mpz_dealloc(void *ptr, size_t size) {
void mpz::allocate(size_t s) {
m_size = s;
- m_digits = static_cast<mpn_digit*>(mpz_alloc(s * sizeof(mpn_digit)));
+ m_digits = static_cast<mpn_digit*>(mpz_alloc(lean_usize_mul_checked(s, sizeof(mpn_digit))));
}
void mpz::init() {
@@ -409,8 +409,8 @@ void mpz::init_int64(int64 v) {
void mpz::init_mpz(mpz const & v) {
m_sign = v.m_sign;
m_size = v.m_size;
- m_digits = static_cast<mpn_digit*>(mpz_alloc(m_size * sizeof(mpn_digit)));
- memcpy(m_digits, v.m_digits, m_size * sizeof(mpn_digit));
+ m_digits = static_cast<mpn_digit*>(mpz_alloc(lean_usize_mul_checked(m_size, sizeof(mpn_digit))));
+ memcpy(m_digits, v.m_digits, lean_usize_mul_checked(m_size, sizeof(mpn_digit)));
}
mpz::mpz() {


@@ -99,6 +99,10 @@ extern "C" LEAN_EXPORT void lean_internal_panic_rc_overflow() {
lean_internal_panic("reference counter overflowed");
}
extern "C" LEAN_EXPORT void lean_internal_panic_overflow() {
lean_internal_panic("integer overflow in runtime computation");
}
bool g_exit_on_panic = false;
bool g_panic_messages = true;


@@ -442,6 +442,13 @@ static obj_res spawn(string_ref const & proc_name, array_ref<string_ref> const &
auto stdout_pipe = setup_stdio(stdout_mode);
auto stderr_pipe = setup_stdio(stderr_mode);
// It is crucial to not allocate between `fork` and `execvp` for ASAN to work.
buffer<char *> pargs;
pargs.push_back(strdup(proc_name.data()));
for (auto & arg : args)
pargs.push_back(strdup(arg.data()));
pargs.push_back(NULL);
int pid = fork();
if (pid == 0) {
@@ -495,12 +502,6 @@ static obj_res spawn(string_ref const & proc_name, array_ref<string_ref> const &
lean_always_assert(setsid() >= 0);
}
- buffer<char *> pargs;
- pargs.push_back(strdup(proc_name.data()));
- for (auto & arg : args)
-   pargs.push_back(strdup(arg.data()));
- pargs.push_back(NULL);
if (execvp(pargs[0], pargs.data()) < 0) {
std::cerr << "could not execute external process '" << pargs[0] << "'" << std::endl;
exit(-1);
@@ -509,6 +510,12 @@ static obj_res spawn(string_ref const & proc_name, array_ref<string_ref> const &
throw errno;
}
for (char* parg : pargs) {
if (parg != NULL) {
free(parg);
}
}
object * parent_stdin = box(0);
object * parent_stdout = box(0);
object * parent_stderr = box(0);


@@ -182,6 +182,9 @@ extern "C" LEAN_EXPORT lean_obj_res lean_uv_tcp_send(b_obj_arg socket, obj_arg d
}
// Allocate buffer array for uv_write
if (lean_usize_mul_would_overflow(array_len, sizeof(uv_buf_t))) {
return lean_io_result_mk_error(decode_io_error(ENOMEM, nullptr));
}
uv_buf_t* bufs = (uv_buf_t*)malloc(array_len * sizeof(uv_buf_t));
for (size_t i = 0; i < array_len; i++) {


@@ -140,6 +140,9 @@ extern "C" LEAN_EXPORT lean_obj_res lean_uv_udp_send(b_obj_arg socket, obj_arg d
return lean_io_result_mk_ok(promise);
}
if (lean_usize_mul_would_overflow(array_len, sizeof(uv_buf_t))) {
return lean_io_result_mk_error(decode_io_error(ENOMEM, nullptr));
}
uv_buf_t* bufs = (uv_buf_t*)malloc(array_len * sizeof(uv_buf_t));
for (size_t i = 0; i < array_len; i++) {

(Binary stage0/stdlib build artifacts were also updated; newly generated files include stage0/stdlib/Init/Data/Char/Ordinal.c, stage0/stdlib/Init/Data/Option/Function.c, and stage0/stdlib/Lean/Meta/RecExt.c.)

Some files were not shown because too many files have changed in this diff.