mirror of
https://github.com/leanprover/lean4.git
synced 2026-04-16 09:04:09 +00:00
Compare commits
125 Commits
sofia/asyn
...
v4.28.1
| SHA1 |
|---|
| 978f81d363 |
| 76dea4d656 |
| 1df9f3b862 |
| 7e01a1bf5c |
| e18f78acfb |
| 3b0f286219 |
| 9e241a4087 |
| e90f6f77db |
| 9deb9ab59d |
| 6de7100f69 |
| 6f409e0eea |
| 3de1cc54c5 |
| a3755fe0a5 |
| 4c1e4a77b4 |
| 896da85304 |
| 11cd55b4f1 |
| 88823b27a6 |
| c9facc8102 |
| 63d1b530ba |
| 3f09741fb9 |
| 9f9531fa13 |
| dae0d6fa05 |
| 4a3401f69a |
| 4526cdda5f |
| c4639150c1 |
| 37870c168b |
| 57003e5c79 |
| b2f485e352 |
| 5e29d7660a |
| 567cf74f1b |
| fa2ddf1c56 |
| f9af240bc4 |
| 3bfeb0bc1f |
| 8447586fea |
| 470e3b7fd0 |
| 0a0323734b |
| 69b058dc82 |
| 2c48ae7dfb |
| c81a8897a9 |
| 3bc63aefb7 |
| fa40491c78 |
| af438425d5 |
| 648e1b1877 |
| f84aa23d6d |
| 6bec8adf16 |
| 16873fb123 |
| 34d8eeb3be |
| f1cc85eb19 |
| 08e6f714ca |
| b8f8dde0b3 |
| b09e33f76b |
| a95227c7d7 |
| 8258cfe2a1 |
| 94e8fd4845 |
| 9063adbd51 |
| 3e16f5332f |
| 974fdd85c4 |
| e8a16dfcc8 |
| ad43266357 |
| 9efb2bf35c |
| 9fbbe6554d |
| db30cf3954 |
| e9a1c9ef63 |
| df8ff255cb |
| fdd30d9250 |
| 36eaa68744 |
| 99b26ce49e |
| aac353c6b9 |
| 9167b13afa |
| ea9c7cf2ae |
| c3726bdf05 |
| 30e23eae2b |
| d8fb702d73 |
| f63ddd67a2 |
| 5457a227ba |
| de6ff061ed |
| 6a87c0e530 |
| 86da5ae26e |
| 1b8dd80ed1 |
| 07b2913969 |
| 8f9fb4c5b2 |
| 12adfbf0e3 |
| f47dfe9e7f |
| 4af9cc0592 |
| 196cdb6039 |
| 3833984756 |
| 5433fe129d |
| fb3238d47c |
| 960c01fcae |
| 21cf5881f5 |
| 2d87d50e34 |
| 4b63048825 |
| 2f7f63243f |
| dc70d0cc43 |
| b994cb4497 |
| d0493e4c1e |
| c7d3401417 |
| 8435dea274 |
| 3dfd125337 |
| c24df9e8d6 |
| c2918b2701 |
| bd514319d6 |
| 4133dc06f4 |
| 38c6d9110d |
| abed967ded |
| 48a1b07516 |
| 1cd6db1579 |
| d68de2e018 |
| e2353689f2 |
| b81608d0d9 |
| aa4539750a |
| 94c45c3f00 |
| e56351da7a |
| 58e599f2f9 |
| c91a2c63c2 |
| d7cbdebf0b |
| 28a5e9f93c |
| 470498cc06 |
| d57f71c1c0 |
| eaf8cf15ff |
| cae739c27c |
| 9280a0ba9e |
| e42262e397 |
| a96ae4bb12 |
| 14039942f3 |
@@ -45,3 +45,7 @@ feat: add optional binder limit to `mkPatternFromTheorem`
This PR adds a `num?` parameter to `mkPatternFromTheorem` to control how many
leading quantifiers are stripped when creating a pattern.
```

## CI Log Retrieval

When CI jobs fail, investigate immediately - don't wait for other jobs to complete. Individual job logs are often available even while other jobs are still running. Try `gh run view <run-id> --log` or `gh run view <run-id> --log-failed`, or use `gh run view <run-id> --job=<job-id>` to target the specific failed job. Sleeping is fine when asked to monitor CI and no failures exist yet, but once any job fails, investigate that failure immediately.
@@ -13,12 +13,54 @@ These comments explain the scripts' behavior, which repositories get special handling

## Arguments
- `version`: The version to release (e.g., v4.24.0)

## Release Notes (Required for -rc1 releases)

For first release candidates (`-rc1`), you must create release notes BEFORE the reference-manual toolchain bump PR can be merged.

**Steps to create release notes:**

1. Generate the release notes:
   ```bash
   cd /path/to/lean4
   python3 script/release_notes.py --since <previous_version> > /tmp/release-notes-<version>.md
   ```
   Replace `<previous_version>` with the last stable release (e.g., `v4.27.0` when releasing `v4.28.0-rc1`).

2. Review `/tmp/release-notes-<version>.md` for common issues:
   - **Unterminated code blocks**: Look for code fences that aren't closed. Fetch the original PR with `gh pr view <number>` to repair.
   - **Truncated descriptions**: Some may end mid-sentence. Complete them from the original PR.
   - **Markdown issues**: Other syntax problems that could cause parsing errors.
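The unterminated-code-block check in step 2 can be mechanized. A minimal sketch, under our own assumptions (the helper name and the odd-fence-count heuristic are ours, not part of the release scripts):

```python
# Heuristic check for unterminated code fences in generated release notes.
# A well-formed markdown file opens and closes every ``` fence, so an odd
# number of fence lines signals a block that was never closed.
def has_unterminated_fence(markdown: str) -> bool:
    fences = [line for line in markdown.splitlines()
              if line.lstrip().startswith("```")]
    return len(fences) % 2 == 1

notes = "Some PR description:\n```\ndef foo := 1\n"  # fence never closed
print(has_unterminated_fence(notes))  # True
```

This only flags the file; repairing the block still means fetching the original PR description with `gh pr view <number>`.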
3. Create the release notes file in the reference-manual repository:
   - File path: `Manual/Releases/v<version>.lean` (e.g., `v4_28_0.lean`)
   - Use Verso format with proper imports and `#doc (Manual)` block
   - **Use `#` for headers, not `##`** (Verso uses level 1 for subsections)
   - **Use plain ` ``` ` not ` ```lean `** (the latter executes code)
   - **Wrap underscore identifiers in backticks**: `` `bv_decide` `` not `bv_decide`

4. Update `Manual/Releases.lean`:
   - Add import: `import Manual.Releases.«v4_28_0»`
   - Add include: `{include 0 Manual.Releases.«v4_28_0»}`

5. Build to verify: `lake build Manual.Releases.v4_28_0`

6. Create a **separate PR** for release notes (not bundled with toolchain bump):
   ```bash
   git checkout -b v<version>-release-notes
   gh pr create --title "doc: add v<version> release notes"
   ```

For subsequent RCs (`-rc2`, etc.) and stable releases, just update the version number in the existing release notes file title.

See `doc/dev/release_checklist.md` section "Writing the release notes" for full details.

## Process

1. Run `script/release_checklist.py {version}` to check the current status
2. **CRITICAL: If preliminary lean4 checks fail, STOP immediately and alert the user**
   - Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes exist
   - Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes file exists
   - **IMPORTANT**: The release page is created AUTOMATICALLY by CI after pushing the tag - DO NOT create it manually
   - **IMPORTANT**: For -rc1 releases, release notes must be created before proceeding
   - Do NOT create any PRs or proceed with repository updates if these checks fail
3. Create a todo list tracking all repositories that need updates
4. **CRITICAL RULE: You can ONLY run `release_steps.py` for a repository if `release_checklist.py` explicitly says to do so**
@@ -61,6 +103,15 @@ Every time you run `release_checklist.py`, you MUST:
This summary should be provided EVERY time you run the checklist, not just after creating new PRs.
The user needs to see the complete picture of what's waiting for review.

## Nightly Infrastructure

The nightly build system uses branches and tags across two repositories:

- `leanprover/lean4` has **branches** `nightly` and `nightly-with-mathlib` tracking the latest nightly builds
- `leanprover/lean4-nightly` has **dated tags** like `nightly-2026-01-23`

When a nightly succeeds with mathlib, all three should point to the same commit. Don't confuse these: branches are in the main lean4 repo, dated tags are in lean4-nightly.

## Error Handling

**CRITICAL**: If something goes wrong or a command fails:
6
.github/workflows/build-template.yml
vendored
@@ -66,16 +66,10 @@ jobs:
          brew install ccache tree zstd coreutils gmp libuv
        if: runner.os == 'macOS'
      - name: Checkout
        if: (!endsWith(matrix.os, '-with-cache'))
        uses: actions/checkout@v6
        with:
          # the default is to use a virtual merge commit between the PR and master: just use the PR
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Namespace Checkout
        if: endsWith(matrix.os, '-with-cache')
        uses: namespacelabs/nscloud-checkout-action@v8
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Open Nix shell once
        run: true
        if: runner.os == 'Linux'
21
.github/workflows/ci.yml
vendored
@@ -115,7 +115,7 @@ jobs:
          CMAKE_MAJOR=$(grep -E "^set\(LEAN_VERSION_MAJOR " src/CMakeLists.txt | grep -oE '[0-9]+')
          CMAKE_MINOR=$(grep -E "^set\(LEAN_VERSION_MINOR " src/CMakeLists.txt | grep -oE '[0-9]+')
          CMAKE_PATCH=$(grep -E "^set\(LEAN_VERSION_PATCH " src/CMakeLists.txt | grep -oE '[0-9]+')
-         CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | grep -oE '[0-9]+')
+         CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | sed -nE 's/^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\).*/\1/p')

          # Expected values from tag parsing
          TAG_MAJOR="${{ steps.set-release.outputs.LEAN_VERSION_MAJOR }}"
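The motivation for the `sed` replacement above can be seen on a sample line: the old `grep -oE '[0-9]+'` emits every digit run on the line, including digits inside the trailing comment, while the `sed` capture extracts only the value inside `set(...)`. A standalone sketch (the sample line mirrors `src/CMakeLists.txt`):

```shell
# Sample input line, as it appears in src/CMakeLists.txt (comment included).
line='set(LEAN_VERSION_IS_RELEASE 1) # This number is 1 in the release revision, and 0 otherwise.'

# Old extraction: every digit run matches, so the comment's digits leak in too
# (prints 1, 1 and 0 on separate lines).
printf '%s\n' "$line" | grep -oE '[0-9]+'

# New extraction: only the value captured inside set(...) is printed.
printf '%s\n' "$line" | sed -nE 's/^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\).*/\1/p'
```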
@@ -267,14 +267,17 @@ jobs:
          "test": true,
          // turn off custom allocator & symbolic functions to make LSAN do its magic
          "CMAKE_PRESET": "sanitize",
-         // `StackOverflow*` correctly triggers ubsan.
-         // `reverse-ffi` fails to link in sanitizers.
-         // `interactive` and `async_select_channel` fail nondeterministically, would need to
-         //   be investigated..
-         // 9366 is too close to timeout.
-         // `bv_` sometimes times out calling into cadical even though we should be using the
-         //   standard compile flags for it.
-         "CTEST_OPTIONS": "-E 'StackOverflow|reverse-ffi|interactive|async_select_channel|9366|run/bv_'"
+         // * `StackOverflow*` correctly triggers ubsan.
+         // * `reverse-ffi` fails to link in sanitizers.
+         // * `interactive` and `async_select_channel` fail nondeterministically, would need
+         //   to be investigated..
+         // * 9366 is too close to timeout.
+         // * `bv_` sometimes times out calling into cadical even though we should be using
+         //   the standard compile flags for it.
+         // * `grind_guide` always times out.
+         // * `pkg/|lake/` tests sometimes time out (likely even hang), related to Lake CI
+         //   failures?
+         "CTEST_OPTIONS": "-E 'StackOverflow|reverse-ffi|interactive|async_select_channel|9366|run/bv_|grind_guide|pkg/|lake/'"
        },
        {
          "name": "macOS",
@@ -218,6 +218,11 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.

# Writing the release notes

Release notes are only needed for the first release candidate (`-rc1`). For subsequent RCs and stable releases,
just update the version number in the title of the existing release notes file.

## Generating the release notes

Release notes are automatically generated from the commit history, using `script/release_notes.py`.

Run this as `script/release_notes.py --since v4.6.0`, where `v4.6.0` is the *previous* release version.
@@ -232,4 +237,93 @@ Some judgement is required here: ignore commits which look minor,
but manually add items to the release notes for significant PRs that were rebase-merged.

There can also be pre-written entries in `./releases_drafts`, which should all be incorporated in the release notes and then deleted from the branch.

## Reviewing and fixing the generated markdown

Before adding the release notes to the reference manual, carefully review the generated markdown for these common issues:

1. **Unterminated code blocks**: PR descriptions sometimes have unclosed code fences. Look for code blocks
   that don't have a closing ` ``` `. If found, fetch the original PR description with `gh pr view <number>`
   and repair the code block with the complete content.

2. **Truncated descriptions**: Some PR descriptions may end abruptly mid-sentence. Review these and complete
   the descriptions based on the original PR.

3. **Markdown syntax issues**: Check for other markdown problems that could cause parsing errors.
## Creating the release notes file

The release notes go in `Manual/Releases/v4_7_0.lean` in the reference-manual repository.

The file structure must follow the Verso format:

```lean
/-
Copyright (c) 2025 Lean FRO LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: <Your Name>
-/

import VersoManual
import Manual.Meta
import Manual.Meta.Markdown

open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean

#doc (Manual) "Lean 4.7.0-rc1 (YYYY-MM-DD)" =>
%%%
tag := "release-v4.7.0"
file := "v4.7.0"
%%%

<release notes content here>
```

**Important formatting rules for Verso:**
- Use `#` for section headers inside the document, not `##` (Verso uses header level 1 for subsections)
- Use plain ` ``` ` for code blocks, not ` ```lean ` (the latter will cause Lean to execute the code)
- Identifiers with underscores like `bv_decide` should be wrapped in backticks: `` `bv_decide` ``
  (otherwise the underscore may be interpreted as markdown emphasis)
## Updating Manual/Releases.lean

After creating the release notes file, update `Manual/Releases.lean` to include it:

1. Add the import near the top with other version imports:
   ```lean
   import Manual.Releases.«v4_7_0»
   ```

2. Add the include statement after the other includes:
   ```lean
   {include 0 Manual.Releases.«v4_7_0»}
   ```

## Building and verifying

Build the release notes to check for errors:
```bash
lake build Manual.Releases.v4_7_0
```
Common errors and fixes:
- "Wrong header nesting - got ## but expected at most #": Change `##` to `#`
- "Tactic 'X' failed" or similar: Code is being executed; change ` ```lean ` to ` ``` `
- "'_'" errors: Underscore in identifier being parsed as emphasis; wrap in backticks

## Creating the PR

Create a separate PR for the release notes (don't bundle with the toolchain bump PR):
```bash
git checkout -b v4.7.0-release-notes
git add Manual/Releases/v4_7_0.lean Manual/Releases.lean
git commit -m "doc: add v4.7.0 release notes"
git push -u origin v4.7.0-release-notes
gh pr create --title "doc: add v4.7.0 release notes" --body "This PR adds the release notes for Lean v4.7.0."
```

See `./releases_drafts/README.md` for more information about pre-written release note entries.
See `./releases_drafts/README.md` for more information.
@@ -29,7 +29,7 @@ def main (args : List String) : IO Unit := do
    if !msgs.toList.isEmpty then -- skip this file if there are parse errors
      msgs.forM fun msg => msg.toString >>= IO.println
      throw <| .userError "parse errors in file"
-   let `(header| $[module%$moduleTk?]? $imps:import*) := header
+   let `(header| $[module%$moduleTk?]? $[prelude%$preludeTk?]? $imps:import*) := header
      | throw <| .userError s!"unexpected header syntax of {path}"
    if moduleTk?.isSome then
      continue
@@ -38,11 +38,11 @@ def main (args : List String) : IO Unit := do
    let startPos := header.raw.getPos? |>.getD parserState.pos

    let dummyEnv ← mkEmptyEnvironment
-   let (initCmd, parserState', _) :=
+   let (initCmd, parserState', msgs') :=
      Parser.parseCommand inputCtx { env := dummyEnv, options := {} } parserState msgs

-   -- insert section if any trailing command
-   if !initCmd.isOfKind ``Parser.Command.eoi then
+   -- insert section if any trailing command (or error, which could be from an unknown command)
+   if !initCmd.isOfKind ``Parser.Command.eoi || msgs'.hasErrors then
      let insertPos? :=
        -- put below initial module docstring if any
        guard (initCmd.isOfKind ``Parser.Command.moduleDoc) *> initCmd.getTailPos? <|>
@@ -57,19 +57,21 @@ def main (args : List String) : IO Unit := do
      sec := "\n\n" ++ sec
    if insertPos?.isNone then
      sec := sec ++ "\n\n"
-   text := text.extract 0 insertPos ++ sec ++ text.extract insertPos text.rawEndPos
+   let insertPos := text.pos! insertPos
+   text := text.extract text.startPos insertPos ++ sec ++ text.extract insertPos text.endPos

    -- prepend each import with `public `
    for imp in imps.reverse do
      let insertPos := imp.raw.getPos?.get!
      let prfx := if doMeta then "public meta " else "public "
-     text := text.extract 0 insertPos ++ prfx ++ text.extract insertPos text.rawEndPos
+     let insertPos := text.pos! insertPos
+     text := text.extract text.startPos insertPos ++ prfx ++ text.extract insertPos text.endPos

    -- insert `module` header
-   let mut initText := text.extract 0 startPos
-   if !initText.trim.isEmpty then
+   let mut initText := text.extract text.startPos (text.pos! startPos)
+   if !initText.trimAscii.isEmpty then
      -- If there is a header comment, preserve it and put `module` in the line after
-     initText := initText.trimRight ++ "\n"
-     text := initText ++ "module\n\n" ++ text.extract startPos text.rawEndPos
+     initText := initText.trimAsciiEnd.toString ++ "\n"
+     text := initText ++ "module\n\n" ++ text.extract (text.pos! startPos) text.endPos

    IO.FS.writeFile path text
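The reverse iteration over `imps` in the hunk above matters: inserting at byte positions recorded from the original text invalidates every later position, so processing imports back-to-front keeps the recorded offsets valid. A language-neutral sketch of the same trick (the helper name and sample input are ours):

```python
# Insert a prefix before each recorded position in a string. Positions are taken
# from the ORIGINAL string, so we apply them from last to first: an edit near the
# end never shifts the offsets of edits we have not applied yet.
def insert_prefix_at(text: str, positions: list[int], prefix: str) -> str:
    for pos in sorted(positions, reverse=True):
        text = text[:pos] + prefix + text[pos:]
    return text

src = "import A\nimport B\n"
# Positions 0 and 9 are the starts of the two import lines in src.
out = insert_prefix_at(src, [0, 9], "public ")
print(out)  # public import A\npublic import B\n
```

Applying the positions in ascending order instead would prepend `public ` in the middle of the second import, which is exactly the bug the reverse loop avoids.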
@@ -3,9 +3,3 @@ name = "scripts"
[[lean_exe]]
name = "modulize"
root = "Modulize"
-
-[[lean_exe]]
-name = "shake"
-root = "Shake"
-# needed by `Lake.loadWorkspace`
-supportInterpreter = true
@@ -185,6 +185,30 @@ def get_release_notes(tag_name):
    except Exception:
        return None

def check_release_notes_file_exists(toolchain, github_token):
    """Check if the release notes file exists in the reference-manual repository.

    For -rc1 releases, this checks that the release notes have been created.
    For subsequent RCs and stable releases, release notes should already exist.

    Returns tuple (exists: bool, is_rc1: bool) where is_rc1 indicates if this is
    the first release candidate (when release notes need to be written).
    """
    # Determine the release notes file path
    # e.g., v4.28.0-rc1 -> Manual/Releases/v4_28_0.lean
    base_version = strip_rc_suffix(toolchain.lstrip('v'))  # "4.28.0"
    file_name = f"v{base_version.replace('.', '_')}.lean"  # "v4_28_0.lean"
    file_path = f"Manual/Releases/{file_name}"

    is_rc1 = toolchain.endswith("-rc1")

    repo_url = "https://github.com/leanprover/reference-manual"

    # Check if the file exists on main branch
    content = get_branch_content(repo_url, "main", file_path, github_token)

    return (content is not None, is_rc1)

def get_branch_content(repo_url, branch, file_path, github_token):
    api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/contents/{file_path}?ref={branch}"
    headers = {'Authorization': f'token {github_token}'} if github_token else {}
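The toolchain-to-file-path mapping used by `check_release_notes_file_exists` can be exercised in isolation. A minimal sketch with a stand-in for `strip_rc_suffix` (the real helper lives elsewhere in the script; this reimplementation just drops a trailing `-rcN` suffix):

```python
# Stand-in for the script's strip_rc_suffix helper.
def strip_rc_suffix(version: str) -> str:
    return version.split('-rc')[0]

# Same derivation as in check_release_notes_file_exists:
# toolchain "v4.28.0-rc1" -> file "Manual/Releases/v4_28_0.lean".
def release_notes_path(toolchain: str) -> str:
    base_version = strip_rc_suffix(toolchain.lstrip('v'))  # "4.28.0"
    file_name = f"v{base_version.replace('.', '_')}.lean"  # "v4_28_0.lean"
    return f"Manual/Releases/{file_name}"

print(release_notes_path("v4.28.0-rc1"))  # Manual/Releases/v4_28_0.lean
print(release_notes_path("v4.28.1"))      # Manual/Releases/v4_28_1.lean
```

Note that an `-rc1` toolchain and the final release map to the same file, which is why only the title inside the file changes for later RCs.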
@@ -501,6 +525,76 @@ def check_proofwidgets4_release(repo_url, target_toolchain, github_token):
        print(f" You will need to create and push a tag v0.0.{next_version}")
        return False

def check_reference_manual_release_title(repo_url, toolchain, pr_branch, github_token):
    """Check if the reference-manual release notes title matches the release type.

    For RC releases (e.g., v4.27.0-rc1), the title should contain the exact RC suffix.
    For final releases (e.g., v4.27.0), the title should NOT contain any "-rc".

    Returns True if check passes or is not applicable, False if title needs updating.
    """
    is_rc = is_release_candidate(toolchain)

    # For RC releases, get the base version and RC suffix
    # e.g., "v4.27.0-rc1" -> version="4.27.0", rc_suffix="-rc1"
    if is_rc:
        parts = toolchain.lstrip('v').split('-', 1)
        version = parts[0]
        rc_suffix = '-' + parts[1] if len(parts) > 1 else ''
    else:
        version = toolchain.lstrip('v')
        rc_suffix = ''

    # Construct the release notes file path (e.g., Manual/Releases/v4_27_0.lean for v4.27.0)
    file_name = f"v{version.replace('.', '_')}.lean"  # "v4_27_0.lean"
    file_path = f"Manual/Releases/{file_name}"

    # Try to get the file from the PR branch first, then fall back to main branch
    content = get_branch_content(repo_url, pr_branch, file_path, github_token)
    if content is None:
        # Try the default branch
        content = get_branch_content(repo_url, "main", file_path, github_token)

    if content is None:
        print(f" ⚠️ Could not check release notes file: {file_path}")
        return True  # Don't block on this

    # Look for the #doc line with the title
    for line in content.splitlines():
        if line.strip().startswith('#doc') and 'Manual' in line:
            has_rc_in_title = '-rc' in line.lower()

            if is_rc:
                # For RC releases, the title should contain the exact RC suffix (e.g., "-rc1").
                # The regex requires the suffix to be followed by a non-digit (to avoid -rc1
                # matching inside -rc10); in a title the suffix is typically followed by a
                # space, quote, paren, or similar.
                exact_match = re.search(rf'{re.escape(rc_suffix)}(?![0-9])', line, re.IGNORECASE)
                if exact_match:
                    print(f" ✅ Release notes title correctly shows {rc_suffix}")
                    return True
                elif has_rc_in_title:
                    print(f" ❌ Release notes title shows wrong RC version (expected {rc_suffix})")
                    print(f" Update {file_path} to use '{rc_suffix}' in the title")
                    return False
                else:
                    print(f" ❌ Release notes title missing RC suffix")
                    print(f" Update {file_path} to include '{rc_suffix}' in the title")
                    return False
            else:
                # For final releases, title should NOT contain -rc
                if has_rc_in_title:
                    print(f" ❌ Release notes title still shows RC version")
                    print(f" Update {file_path} to remove '-rcN' from the title")
                    return False
                else:
                    print(f" ✅ Release notes title is updated for final release")
                    return True

    # If we didn't find the #doc line, don't block
    print(f" ⚠️ Could not find release notes title in {file_path}")
    return True
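The negative-lookahead pattern in `check_reference_manual_release_title` is what prevents `-rc1` from matching inside `-rc10`. A standalone illustration (the sample titles are invented):

```python
import re

def title_has_exact_rc(line: str, rc_suffix: str) -> bool:
    # Negative lookahead: the suffix must not be followed by another digit,
    # so "-rc1" does not match inside "-rc10".
    return re.search(rf'{re.escape(rc_suffix)}(?![0-9])', line, re.IGNORECASE) is not None

print(title_has_exact_rc('#doc (Manual) "Lean 4.27.0-rc1 (2026-01-01)" =>', '-rc1'))   # True
print(title_has_exact_rc('#doc (Manual) "Lean 4.27.0-rc10 (2026-01-01)" =>', '-rc1'))  # False
```

`re.escape` also matters here: the suffix comes from user-controlled version strings, and escaping keeps characters like `.` from being interpreted as regex metacharacters.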
def run_mathlib_verify_version_tags(toolchain, verbose=False):
    """Run mathlib4's verify_version_tags.py script to validate the release tag.
@@ -644,6 +738,27 @@ def main():
            else:
                print(f" ✅ Release notes page title looks good ('{actual_title}').")

    # Check if release notes file exists in reference-manual repository
    # For -rc1 releases, this is when release notes need to be written
    # For subsequent RCs and stable releases, they should already exist
    release_notes_exists, is_rc1 = check_release_notes_file_exists(toolchain, github_token)
    base_version = strip_rc_suffix(toolchain.lstrip('v'))
    release_notes_file = f"Manual/Releases/v{base_version.replace('.', '_')}.lean"

    if not release_notes_exists:
        if is_rc1:
            print(f" ❌ Release notes file not found: {release_notes_file}")
            print(f" This is an -rc1 release, so release notes need to be written.")
            print(f" Run `script/release_notes.py --since <previous_version>` to generate them.")
            print(f" See doc/dev/release_checklist.md section 'Writing the release notes' for details.")
            lean4_success = False
        else:
            print(f" ❌ Release notes file not found: {release_notes_file}")
            print(f" Release notes should have been created for -rc1. Check the reference-manual repository.")
            lean4_success = False
    else:
        print(f" ✅ Release notes file exists: {release_notes_file}")

    repo_status["lean4"] = lean4_success

    # If the release page doesn't exist, skip repository checks and master branch checks

@@ -709,6 +824,11 @@ def main():
                print(f" ⚠️ CI: {ci_message}")
            else:
                print(f" ❓ CI: {ci_message}")

            # For reference-manual, check that the release notes title has been updated
            if name == "reference-manual":
                pr_branch = f"bump_to_{toolchain}"
                check_reference_manual_release_title(url, toolchain, pr_branch, github_token)
        else:
            print(f" ❌ PR with title '{pr_title}' does not exist")
            print(f" Run `script/release_steps.py {toolchain} {name}` to create it")
@@ -14,13 +14,6 @@ repositories:
    bump-branch: true
    dependencies: []

-  - name: verso
-    url: https://github.com/leanprover/verso
-    toolchain-tag: true
-    stable-branch: false
-    branch: main
-    dependencies: []
-
  - name: lean4checker
    url: https://github.com/leanprover/lean4checker
    toolchain-tag: true

@@ -42,6 +35,14 @@ repositories:
    branch: main
    dependencies: []

+  - name: verso
+    url: https://github.com/leanprover/verso
+    toolchain-tag: true
+    stable-branch: false
+    branch: main
+    dependencies:
+      - plausible
+
  - name: import-graph
    url: https://github.com/leanprover-community/import-graph
    toolchain-tag: true
@@ -11,8 +11,8 @@ include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
set(LEAN_VERSION_MINOR 28)
-set(LEAN_VERSION_PATCH 0)
-set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
+set(LEAN_VERSION_PATCH 1)
+set(LEAN_VERSION_IS_RELEASE 1) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")
set(LEAN_VERSION_STRING "${LEAN_VERSION_MAJOR}.${LEAN_VERSION_MINOR}.${LEAN_VERSION_PATCH}")
if (LEAN_SPECIAL_VERSION_DESC)
@@ -40,6 +40,10 @@ find_program(LLD_PATH lld)
if(LLD_PATH)
  string(APPEND LEAN_EXTRA_LINKER_FLAGS_DEFAULT " -fuse-ld=lld")
endif()
+if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
+  # Create space in install names so they can be patched later in Nix.
+  string(APPEND LEAN_EXTRA_LINKER_FLAGS_DEFAULT " -headerpad_max_install_names")
+endif()

set(LEAN_EXTRA_LINKER_FLAGS ${LEAN_EXTRA_LINKER_FLAGS_DEFAULT} CACHE STRING "Additional flags used by the linker")
set(LEAN_EXTRA_CXX_FLAGS "" CACHE STRING "Additional flags used by the C++ compiler. Unlike `CMAKE_CXX_FLAGS`, these will not be used to build e.g. cadical.")
@@ -452,11 +456,14 @@ if(LLVM AND ${STAGE} GREATER 0)
  message(VERBOSE "leanshared linker flags: '${LEANSHARED_LINKER_FLAGS}' | lean extra cxx flags '${CMAKE_CXX_FLAGS}'")
endif()

-# get rid of unused parts of C++ stdlib
+# We always strip away unused declarations to reduce binary sizes as the time cost is small and the
+# potential benefit can be huge, especially when stripping `meta import`s.
if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
  string(APPEND TOOLCHAIN_SHARED_LINKER_FLAGS " -Wl,-dead_strip")
+  string(APPEND LEANC_EXTRA_CC_FLAGS " -fdata-sections -ffunction-sections")
  string(APPEND LEAN_EXTRA_LINKER_FLAGS " -Wl,-dead_strip")
elseif(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Emscripten")
  string(APPEND TOOLCHAIN_SHARED_LINKER_FLAGS " -Wl,--gc-sections")
+  string(APPEND LEANC_EXTRA_CC_FLAGS " -fdata-sections -ffunction-sections")
  string(APPEND LEAN_EXTRA_LINKER_FLAGS " -Wl,--gc-sections")
endif()

if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")

@@ -631,6 +638,9 @@ if(${STAGE} GREATER 1)
    COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/lean/libleanrt.a" "${CMAKE_BINARY_DIR}/lib/lean/libleanrt.a"
    COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/lean/libleancpp.a" "${CMAKE_BINARY_DIR}/lib/lean/libleancpp.a"
    COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/temp/libleancpp_1.a" "${CMAKE_BINARY_DIR}/lib/temp/libleancpp_1.a")
  add_dependencies(leanrt_initial-exec copy-leancpp)
  add_dependencies(leanrt copy-leancpp)
  add_dependencies(leancpp_1 copy-leancpp)
  add_dependencies(leancpp copy-leancpp)
  if(LLVM)
    add_custom_target(copy-lean-h-bc
@@ -4,7 +4,6 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
-
prelude
public import Init.Prelude
public import Init.Notation

@@ -38,6 +37,7 @@ public import Init.Omega
public import Init.MacroTrace
public import Init.Grind
public import Init.GrindInstances
+public import Init.Sym
public import Init.While
public import Init.Syntax
public import Init.Internal
@@ -13,6 +13,10 @@ public import Init.SizeOf
public section
set_option linter.missingDocs true -- keep it documented

+-- BEq instance for Option defined here so it's available early in the import chain
+-- (before Init.Grind.Config and Init.MetaTypes which need BEq (Option Nat))
+deriving instance BEq for Option
+
@[expose] section

universe u v w
@@ -1561,6 +1565,10 @@ instance {p q : Prop} [d : Decidable (p ↔ q)] : Decidable (p = q) :=
  | isTrue h => isTrue (propext h)
  | isFalse h => isFalse fun heq => h (heq ▸ Iff.rfl)

+/-- Helper theorem for proving injectivity theorems -/
+theorem Lean.injEq_helper {P Q R : Prop} :
+    (P → Q → R) → (P ∧ Q → R) := by intro h ⟨h₁,h₂⟩; exact h h₁ h₂
+
gen_injective_theorems% Array
gen_injective_theorems% BitVec
gen_injective_theorems% ByteArray
@@ -159,4 +159,17 @@ theorem setWidth_neg_of_le {x : BitVec v} (h : w ≤ v) : BitVec.setWidth w (-x)
    omega
  omega

+@[induction_eliminator, elab_as_elim]
+theorem cons_induction {motive : (w : Nat) → BitVec w → Prop} (nil : motive 0 .nil)
+    (cons : ∀ {w : Nat} (b : Bool) (bv : BitVec w), motive w bv → motive (w + 1) (.cons b bv)) :
+    ∀ {w : Nat} (x : BitVec w), motive w x := by
+  intros w x
+  induction w
+  case zero =>
+    simp only [BitVec.eq_nil x, nil]
+  case succ wl ih =>
+    rw [← cons_msb_setWidth x]
+    apply cons
+    apply ih
+
end BitVec
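Because `cons_induction` is tagged `@[induction_eliminator]`, a plain `induction x` on a `BitVec` goal should pick it up by default, yielding `nil` and `cons` cases. A hypothetical usage sketch (the toy lemma and its proof are ours, not from this PR):

```lean
-- Toy statement proved by structural induction over a BitVec via cons_induction.
example {w : Nat} (x : BitVec w) : 0 ≤ x.toNat := by
  induction x with
  | nil => omega
  | cons b bv ih => omega
```

Without the attribute, callers would have to invoke the eliminator explicitly, e.g. `induction x using BitVec.cons_induction`.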
@@ -3362,6 +3362,26 @@ theorem extractLsb'_concat {x : BitVec (w + 1)} {y : Bool} :
|
||||
· simp
|
||||
· simp [show i - 1 < t by omega]
|
||||
|
||||
theorem concat_extractLsb'_getLsb {x : BitVec (w + 1)} :
|
||||
BitVec.concat (x.extractLsb' 1 w) (x.getLsb 0) = x := by
|
||||
ext i hw
|
||||
by_cases h : i = 0
|
||||
· simp [h]
|
||||
· simp [h, hw, show (1 + (i - 1)) = i by omega, getElem_concat]
|
||||
|
||||
@[elab_as_elim]
|
||||
theorem concat_induction {motive : (w : Nat) → BitVec w → Prop} (nil : motive 0 .nil)
|
||||
(concat : ∀ {w : Nat} (bv : BitVec w) (b : Bool), motive w bv → motive (w + 1) (bv.concat b)) :
|
||||
∀ {w : Nat} (x : BitVec w), motive w x := by
|
||||
intros w x
|
||||
induction w
|
||||
case zero =>
|
||||
simp only [BitVec.eq_nil x, nil]
|
||||
case succ wl ih =>
|
||||
rw [← concat_extractLsb'_getLsb (x := x)]
|
||||
apply concat
|
||||
apply ih
|
||||
|
||||
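Since `cons_induction` carries the `@[induction_eliminator]` attribute, a plain `induction x` on a `BitVec` now produces `nil` and `cons` cases by default. A deliberately trivial sketch of the resulting case structure (assuming the eliminator is registered as above):

```lean
-- `induction x` splits into the `nil` and `cons` cases of `cons_induction`.
example {w : Nat} (x : BitVec w) : True := by
  induction x with
  | nil => trivial
  | cons b bv _ih => trivial
```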
/-! ### shiftConcat -/

@[grind =]
@@ -6383,73 +6403,6 @@ theorem cpopNatRec_add {x : BitVec w} {acc n : Nat} :
    x.cpopNatRec n (acc + acc') = x.cpopNatRec n acc + acc' := by
  rw [cpopNatRec_eq (acc := acc + acc'), cpopNatRec_eq (acc := acc), Nat.add_assoc]

theorem cpopNatRec_le {x : BitVec w} (n : Nat) :
    x.cpopNatRec n acc ≤ acc + n := by
  induction n generalizing acc
  · case zero =>
    simp
  · case succ n ihn =>
    have : (x.getLsbD n).toNat ≤ 1 := by cases x.getLsbD n <;> simp
    specialize ihn (acc := acc + (x.getLsbD n).toNat)
    simp
    omega

@[simp]
theorem cpopNatRec_of_le {x : BitVec w} (k n : Nat) (hn : w ≤ n) :
    x.cpopNatRec (n + k) acc = x.cpopNatRec n acc := by
  induction k
  · case zero =>
    simp
  · case succ k ihk =>
    simp [show n + (k + 1) = (n + k) + 1 by omega, ihk, show w ≤ n + k by omega]

theorem cpopNatRec_zero_le (x : BitVec w) (n : Nat) :
    x.cpopNatRec n 0 ≤ w := by
  induction n
  · case zero =>
    simp
  · case succ n ihn =>
    by_cases hle : n ≤ w
    · by_cases hx : x.getLsbD n
      · have := cpopNatRec_le (x := x) (acc := 1) (by omega)
        have := lt_of_getLsbD hx
        simp [hx]
        omega
      · have := cpopNatRec_le (x := x) (acc := 0) (by omega)
        simp [hx]
        omega
    · simp [show w ≤ n by omega]
      omega

@[simp]
theorem cpopNatRec_allOnes (h : n ≤ w) :
    (allOnes w).cpopNatRec n acc = acc + n := by
  induction n
  · case zero =>
    simp
  · case succ n ihn =>
    specialize ihn (by omega)
    simp [show n < w by omega, ihn,
      cpopNatRec_add (acc := acc) (acc' := 1)]
    omega

@[simp]
theorem cpop_allOnes :
    (allOnes w).cpop = BitVec.ofNat w w := by
  simp [cpop, cpopNatRec_allOnes]

@[simp]
theorem cpop_zero :
    (0#w).cpop = 0#w := by
  simp [cpop]

theorem toNat_cpop_le (x : BitVec w) :
    x.cpop.toNat ≤ w := by
  have hlt := Nat.lt_two_pow_self (n := w)
  have hle := cpopNatRec_zero_le (x := x) (n := w)
  simp only [cpop, toNat_ofNat, ge_iff_le]
  rw [Nat.mod_eq_of_lt (by omega)]
  exact hle

@[simp]
theorem cpopNatRec_cons_of_le {x : BitVec w} {b : Bool} (hn : n ≤ w) :
@@ -6475,6 +6428,68 @@ theorem cpopNatRec_cons_of_lt {x : BitVec w} {b : Bool} (hn : w < n) :
    · simp [show w = n by omega, getElem_cons,
        cpopNatRec_add (acc := acc) (acc' := b.toNat), Nat.add_comm]

theorem cpopNatRec_le {x : BitVec w} (n : Nat) :
    x.cpopNatRec n acc ≤ acc + n := by
  induction n generalizing acc
  · case zero =>
    simp
  · case succ n ihn =>
    have : (x.getLsbD n).toNat ≤ 1 := by cases x.getLsbD n <;> simp
    specialize ihn (acc := acc + (x.getLsbD n).toNat)
    simp
    omega

@[simp]
theorem cpopNatRec_of_le {x : BitVec w} (k n : Nat) (hn : w ≤ n) :
    x.cpopNatRec (n + k) acc = x.cpopNatRec n acc := by
  induction k
  · case zero =>
    simp
  · case succ k ihk =>
    simp [show n + (k + 1) = (n + k) + 1 by omega, ihk, show w ≤ n + k by omega]

@[simp]
theorem cpopNatRec_allOnes (h : n ≤ w) :
    (allOnes w).cpopNatRec n acc = acc + n := by
  induction n
  · case zero =>
    simp
  · case succ n ihn =>
    specialize ihn (by omega)
    simp [show n < w by omega, ihn,
      cpopNatRec_add (acc := acc) (acc' := 1)]
    omega

@[simp]
theorem cpop_allOnes :
    (allOnes w).cpop = BitVec.ofNat w w := by
  simp [cpop, cpopNatRec_allOnes]

@[simp]
theorem cpop_zero :
    (0#w).cpop = 0#w := by
  simp [cpop]

theorem cpopNatRec_zero_le (x : BitVec w) (n : Nat) :
    x.cpopNatRec n 0 ≤ w := by
  induction x
  · case nil => simp
  · case cons w b bv ih =>
    by_cases hle : n ≤ w
    · have := cpopNatRec_cons_of_le (b := b) (x := bv) (n := n) (acc := 0) hle
      omega
    · rw [cpopNatRec_cons_of_lt (by omega)]
      have : b.toNat ≤ 1 := by cases b <;> simp
      omega

theorem toNat_cpop_le (x : BitVec w) :
    x.cpop.toNat ≤ w := by
  have hlt := Nat.lt_two_pow_self (n := w)
  have hle := cpopNatRec_zero_le (x := x) (n := w)
  simp only [cpop, toNat_ofNat, ge_iff_le]
  rw [Nat.mod_eq_of_lt (by omega)]
  exact hle

theorem cpopNatRec_concat_of_lt {x : BitVec w} {b : Bool} (hn : 0 < n) :
    (concat x b).cpopNatRec n acc = b.toNat + x.cpopNatRec (n - 1) acc := by
  induction n generalizing acc
@@ -6572,12 +6587,12 @@ theorem cpop_cast (x : BitVec w) (h : w = v) :
@[simp]
theorem toNat_cpop_append {x : BitVec w} {y : BitVec u} :
    (x ++ y).cpop.toNat = x.cpop.toNat + y.cpop.toNat := by
  induction w generalizing u
  · case zero =>
    simp [cpop]
  · case succ w ihw =>
    rw [← cons_msb_setWidth x, toNat_cpop_cons, cons_append, cpop_cast, toNat_cast,
      toNat_cpop_cons, ihw, ← Nat.add_assoc]
  induction x generalizing y
  · case nil =>
    simp
  · case cons w b bv ih =>
    simp [cons_append, ih]
    omega

theorem cpop_append {x : BitVec w} {y : BitVec u} :
    (x ++ y).cpop = x.cpop.setWidth (w + u) + y.cpop.setWidth (w + u) := by
@@ -6588,4 +6603,14 @@ theorem cpop_append {x : BitVec w} {y : BitVec u} :
  simp only [toNat_cpop_append, toNat_add, toNat_setWidth, Nat.add_mod_mod, Nat.mod_add_mod]
  rw [Nat.mod_eq_of_lt (by omega)]

theorem toNat_cpop_not {x : BitVec w} :
    (~~~x).cpop.toNat = w - x.cpop.toNat := by
  induction x
  · case nil =>
    simp
  · case cons b x ih =>
    have := toNat_cpop_le x
    cases b
    <;> (simp [ih]; omega)

end BitVec

@@ -9,3 +9,4 @@ prelude
public import Init.Data.Char.Basic
public import Init.Data.Char.Lemmas
public import Init.Data.Char.Order
public import Init.Data.Char.Ordinal

src/Init/Data/Char/Ordinal.lean (new file, 242 lines)
@@ -0,0 +1,242 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module

prelude
public import Init.Data.Fin.OverflowAware
public import Init.Data.UInt.Basic
public import Init.Data.Function
import Init.Data.Char.Lemmas
import Init.Data.Char.Order
import Init.Grind

/-!
# Bijection between `Char` and `Fin Char.numCodePoints`

In this file, we construct a bijection between `Char` and `Fin Char.numCodePoints` and show that
it is compatible with various operations. Since `Fin` is simpler than `Char` due to being based
on natural numbers instead of `UInt32` and not having a hole in the middle (surrogate code points),
this is sometimes useful to simplify reasoning about `Char`.

We use these declarations in the construction of `Char` ranges, see the module
`Init.Data.Range.Polymorphic.Char`.
-/

set_option doc.verso true

public section

namespace Char

/-- The number of surrogate code points. -/
abbrev numSurrogates : Nat :=
  -- 0xe000 - 0xd800
  2048

/-- The size of the {name}`Char` type. -/
abbrev numCodePoints : Nat :=
  -- 0x110000 - numSurrogates
  1112064

/--
Packs {name}`Char` bijectively into {lean}`Fin Char.numCodePoints` by shifting code points which are
greater than the surrogate code points by the number of surrogate code points.

The inverse of this function is called {name (scope := "Init.Data.Char.Ordinal")}`Char.ofOrdinal`.
-/
def ordinal (c : Char) : Fin Char.numCodePoints :=
  if h : c.val < 0xd800 then
    ⟨c.val.toNat, by grind [UInt32.lt_iff_toNat_lt]⟩
  else
    ⟨c.val.toNat - Char.numSurrogates, by grind [UInt32.lt_iff_toNat_lt]⟩

/--
Unpacks {lean}`Fin Char.numCodePoints` bijectively to {name}`Char` by shifting code points which are
greater than the surrogate code points by the number of surrogate code points.

The inverse of this function is called {name}`Char.ordinal`.
-/
def ofOrdinal (f : Fin Char.numCodePoints) : Char :=
  if h : (f : Nat) < 0xd800 then
    ⟨UInt32.ofNatLT f (by grind), by grind [UInt32.toNat_ofNatLT]⟩
  else
    ⟨UInt32.ofNatLT (f + Char.numSurrogates) (by grind), by grind [UInt32.toNat_ofNatLT]⟩

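As a sanity check on the packing (not part of the diff): by the definition above, code points below `0xd800` map to themselves, and `'\uE000'`, the first scalar value after the surrogate gap, lands at `0xd800`:

```lean
#eval 'a'.ordinal.val       -- 97: below the surrogate range, unchanged
#eval '\uE000'.ordinal.val  -- 55296 (0xd800): shifted down by the 2048 surrogates
#eval Char.ofOrdinal ⟨97, by decide⟩  -- 'a'
```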
/--
Computes the next {name}`Char`, skipping over surrogate code points (which are not valid
{name}`Char`s) as necessary.

This function is specified by its interaction with {name}`Char.ordinal`, see
{name (scope := "Init.Data.Char.Ordinal")}`Char.succ?_eq`.
-/
def succ? (c : Char) : Option Char :=
  if h₀ : c.val < 0xd7ff then
    some ⟨c.val + 1, by grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_add]⟩
  else if h₁ : c.val = 0xd7ff then
    some ⟨0xe000, by decide⟩
  else if h₂ : c.val < 0x10ffff then
    some ⟨c.val + 1, by
      simp only [UInt32.lt_iff_toNat_lt, UInt32.reduceToNat, Nat.not_lt, ← UInt32.toNat_inj,
        UInt32.isValidChar, Nat.isValidChar, UInt32.toNat_add, Nat.reducePow] at *
      grind⟩
  else none

/--
Computes the {name}`m`-th next {name}`Char`, skipping over surrogate code points (which are not
valid {name}`Char`s) as necessary.

This function is specified by its interaction with {name}`Char.ordinal`, see
{name (scope := "Init.Data.Char.Ordinal")}`Char.succMany?_eq`.
-/
def succMany? (m : Nat) (c : Char) : Option Char :=
  c.ordinal.addNat? m |>.map Char.ofOrdinal

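The interesting case of `succ?` is the jump over the surrogate gap; the expected results below follow directly from the branches of the definition (a sketch, not part of the diff):

```lean
#eval 'a'.succ?             -- some 'b'
#eval '\uD7FF'.succ?        -- some '\uE000': skips the 2048 surrogate code points
#eval Char.succMany? 2 'a'  -- some 'c'
```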
@[grind =]
theorem coe_ordinal {c : Char} :
    (c.ordinal : Nat) =
      if c.val < 0xd800 then
        c.val.toNat
      else
        c.val.toNat - Char.numSurrogates := by
  grind [Char.ordinal]

@[simp]
theorem ordinal_zero : '\x00'.ordinal = 0 := by
  ext
  simp [coe_ordinal]

@[grind =]
theorem val_ofOrdinal {f : Fin Char.numCodePoints} :
    (Char.ofOrdinal f).val =
      if h : (f : Nat) < 0xd800 then
        UInt32.ofNatLT f (by grind)
      else
        UInt32.ofNatLT (f + Char.numSurrogates) (by grind) := by
  grind [Char.ofOrdinal]

@[simp]
theorem ofOrdinal_ordinal {c : Char} : Char.ofOrdinal c.ordinal = c := by
  ext
  simp only [val_ofOrdinal, coe_ordinal, UInt32.ofNatLT_add]
  split
  · grind [UInt32.lt_iff_toNat_lt, UInt32.ofNatLT_toNat]
  · rw [dif_neg]
    · simp only [← UInt32.toNat_inj, UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
      grind [UInt32.toNat_lt, UInt32.lt_iff_toNat_lt]
    · grind [UInt32.lt_iff_toNat_lt]

@[simp]
theorem ordinal_ofOrdinal {f : Fin Char.numCodePoints} : (Char.ofOrdinal f).ordinal = f := by
  ext
  simp [coe_ordinal, val_ofOrdinal]
  split
  · rw [if_pos, UInt32.toNat_ofNatLT]
    simpa [UInt32.lt_iff_toNat_lt]
  · rw [if_neg, UInt32.toNat_add, UInt32.toNat_ofNatLT, UInt32.toNat_ofNatLT, Nat.mod_eq_of_lt,
      Nat.add_sub_cancel]
    · grind
    · simp only [UInt32.lt_iff_toNat_lt, UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow,
        UInt32.reduceToNat, Nat.not_lt]
      grind

@[simp]
theorem ordinal_comp_ofOrdinal : Char.ordinal ∘ Char.ofOrdinal = id := by
  ext; simp

@[simp]
theorem ofOrdinal_comp_ordinal : Char.ofOrdinal ∘ Char.ordinal = id := by
  ext; simp

@[simp]
theorem ordinal_inj {c d : Char} : c.ordinal = d.ordinal ↔ c = d :=
  ⟨fun h => by simpa using congrArg Char.ofOrdinal h, (· ▸ rfl)⟩

theorem ordinal_injective : Function.Injective Char.ordinal :=
  fun _ _ => ordinal_inj.1

@[simp]
theorem ofOrdinal_inj {f g : Fin Char.numCodePoints} :
    Char.ofOrdinal f = Char.ofOrdinal g ↔ f = g :=
  ⟨fun h => by simpa using congrArg Char.ordinal h, (· ▸ rfl)⟩

theorem ofOrdinal_injective : Function.Injective Char.ofOrdinal :=
  fun _ _ => ofOrdinal_inj.1

theorem ordinal_le_of_le {c d : Char} (h : c ≤ d) : c.ordinal ≤ d.ordinal := by
  simp only [le_def, UInt32.le_iff_toNat_le] at h
  simp only [Fin.le_def, coe_ordinal, UInt32.lt_iff_toNat_lt, UInt32.reduceToNat]
  grind

theorem ofOrdinal_le_of_le {f g : Fin Char.numCodePoints} (h : f ≤ g) :
    Char.ofOrdinal f ≤ Char.ofOrdinal g := by
  simp only [Fin.le_def] at h
  simp only [le_def, val_ofOrdinal, UInt32.ofNatLT_add, UInt32.le_iff_toNat_le]
  split
  · simp only [UInt32.toNat_ofNatLT]
    split
    · simpa
    · simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
      grind
  · simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
    rw [dif_neg (by grind)]
    simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
    grind

theorem le_iff_ordinal_le {c d : Char} : c ≤ d ↔ c.ordinal ≤ d.ordinal :=
  ⟨ordinal_le_of_le, fun h => by simpa using ofOrdinal_le_of_le h⟩

theorem le_iff_ofOrdinal_le {f g : Fin Char.numCodePoints} :
    f ≤ g ↔ Char.ofOrdinal f ≤ Char.ofOrdinal g :=
  ⟨ofOrdinal_le_of_le, fun h => by simpa using ordinal_le_of_le h⟩

theorem lt_iff_ordinal_lt {c d : Char} : c < d ↔ c.ordinal < d.ordinal := by
  simp only [Std.lt_iff_le_and_not_ge, le_iff_ordinal_le]

theorem lt_iff_ofOrdinal_lt {f g : Fin Char.numCodePoints} :
    f < g ↔ Char.ofOrdinal f < Char.ofOrdinal g := by
  simp only [Std.lt_iff_le_and_not_ge, le_iff_ofOrdinal_le]

theorem succ?_eq {c : Char} : c.succ? = (c.ordinal.addNat? 1).map Char.ofOrdinal := by
  fun_cases Char.succ? with
  | case1 h =>
    rw [Fin.addNat?_eq_some]
    · simp only [coe_ordinal, Option.map_some, Option.some.injEq, Char.ext_iff, val_ofOrdinal,
        UInt32.ofNatLT_add, UInt32.reduceOfNatLT]
      split
      · simp only [UInt32.ofNatLT_toNat, dite_eq_ite, left_eq_ite_iff, Nat.not_lt,
          Nat.reduceLeDiff, UInt32.left_eq_add]
        grind [UInt32.lt_iff_toNat_lt]
      · grind
    · simp [coe_ordinal]
      grind [UInt32.lt_iff_toNat_lt]
  | case2 =>
    rw [Fin.addNat?_eq_some]
    · simp [coe_ordinal, *, Char.ext_iff, val_ofOrdinal, numSurrogates]
    · simp [coe_ordinal, *, numCodePoints]
  | case3 =>
    rw [Fin.addNat?_eq_some]
    · simp only [coe_ordinal, Option.map_some, Option.some.injEq, Char.ext_iff, val_ofOrdinal,
        UInt32.ofNatLT_add, UInt32.reduceOfNatLT]
      split
      · grind
      · rw [dif_neg]
        · simp only [← UInt32.toNat_inj, UInt32.toNat_add, UInt32.reduceToNat, Nat.reducePow,
            UInt32.toNat_ofNatLT, Nat.mod_add_mod]
          grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_inj]
        · grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_inj]
    · grind [UInt32.lt_iff_toNat_lt]
  | case4 =>
    rw [eq_comm]
    grind [UInt32.lt_iff_toNat_lt]

theorem map_ordinal_succ? {c : Char} : c.succ?.map ordinal = c.ordinal.addNat? 1 := by
  simp [succ?_eq]

theorem succMany?_eq {m : Nat} {c : Char} :
    c.succMany? m = (c.ordinal.addNat? m).map Char.ofOrdinal := by
  rfl

end Char
@@ -11,3 +11,4 @@ public import Init.Data.Fin.Log2
public import Init.Data.Fin.Iterate
public import Init.Data.Fin.Fold
public import Init.Data.Fin.Lemmas
public import Init.Data.Fin.OverflowAware

src/Init/Data/Fin/OverflowAware.lean (new file, 51 lines)
@@ -0,0 +1,51 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module

prelude
public import Init.Data.Fin.Basic
import Init.Data.Fin.Lemmas

set_option doc.verso true

public section

namespace Fin

/--
Overflow-aware addition of a natural number to an element of {lean}`Fin n`.

Examples:
* {lean}`(2 : Fin 3).addNat? 1 = (none : Option (Fin 3))`
* {lean}`(2 : Fin 4).addNat? 1 = (some 3 : Option (Fin 4))`
-/
@[inline]
protected def addNat? (i : Fin n) (m : Nat) : Option (Fin n) :=
  if h : i + m < n then some ⟨i + m, h⟩ else none

theorem addNat?_eq_some {i : Fin n} (h : i + m < n) : i.addNat? m = some ⟨i + m, h⟩ := by
  simp [Fin.addNat?, h]

theorem addNat?_eq_some_iff {i : Fin n} :
    i.addNat? m = some j ↔ i + m < n ∧ j = i + m := by
  simp only [Fin.addNat?]
  split <;> simp [Fin.ext_iff, eq_comm, *]

@[simp]
theorem addNat?_eq_none_iff {i : Fin n} : i.addNat? m = none ↔ n ≤ i + m := by
  simp only [Fin.addNat?]
  split <;> simp_all [Nat.not_lt]

@[simp]
theorem addNat?_zero {i : Fin n} : i.addNat? 0 = some i := by
  simp [addNat?_eq_some_iff]

@[grind =]
theorem addNat?_eq_dif {i : Fin n} :
    i.addNat? m = if h : i + m < n then some ⟨i + m, h⟩ else none := by
  rfl

end Fin
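The docstring examples above can be checked directly; `addNat?` returns `none` exactly when the sum would leave the bound:

```lean
#eval (2 : Fin 4).addNat? 1  -- some 3
#eval (2 : Fin 3).addNat? 1  -- none: 2 + 1 would overflow `Fin 3`
```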
@@ -15,3 +15,4 @@ public import Init.Data.Option.Attach
public import Init.Data.Option.List
public import Init.Data.Option.Monadic
public import Init.Data.Option.Array
public import Init.Data.Option.Function

src/Init/Data/Option/Function.lean (new file, 26 lines)
@@ -0,0 +1,26 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
module

prelude
public import Init.Data.Function
import Init.Data.Option.Lemmas

public section

namespace Option

theorem map_injective {f : α → β} (hf : Function.Injective f) :
    Function.Injective (Option.map f) := by
  intros a b hab
  cases a <;> cases b
  · simp
  · simp at hab
  · simp at hab
  · simp only [map_some, some.injEq] at hab
    simpa using hf hab

end Option
@@ -307,12 +307,20 @@ theorem map_id' {x : Option α} : (x.map fun a => a) = x := congrFun map_id x

theorem map_id_apply' {α : Type u} {x : Option α} : Option.map (fun (a : α) => a) x = x := by simp

/-- See `Option.apply_get` for a version that can be rewritten in the reverse direction. -/
@[simp, grind =] theorem get_map {f : α → β} {o : Option α} {h : (o.map f).isSome} :
    (o.map f).get h = f (o.get (by simpa using h)) := by
  cases o with
  | none => simp at h
  | some a => simp

/-- See `Option.get_map` for a version that can be rewritten in the reverse direction. -/
theorem apply_get {f : α → β} {o : Option α} {h} :
    f (o.get h) = (o.map f).get (by simp [h]) := by
  cases o
  · simp at h
  · simp

@[simp] theorem map_map (h : β → γ) (g : α → β) (x : Option α) :
    (x.map g).map h = x.map (h ∘ g) := by
  cases x <;> simp only [map_none, map_some, ·∘·]
@@ -732,6 +740,11 @@ theorem get_merge {o o' : Option α} {f : α → α → α} {i : α} [Std.Lawful
theorem elim_guard : (guard p a).elim b f = if p a then f a else b := by
  cases h : p a <;> simp [*, guard]

@[simp]
theorem Option.elim_map {f : α → β} {g' : γ} {g : β → γ} (o : Option α) :
    (o.map f).elim g' g = o.elim g' (g ∘ f) := by
  cases o <;> simp

-- I don't see how to construct a good grind pattern to instantiate this.
@[simp] theorem getD_map (f : α → β) (x : α) (o : Option α) :
    (o.map f).getD (f x) = f (getD o x) := by cases o <;> rfl

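A quick illustration of the `get_map`/`apply_get` pair on a concrete value (the `rfl` proof of `isSome` is specific to this literal example and not part of the diff):

```lean
-- `get` commutes with `map`: extracting after mapping equals mapping the extracted value.
example : ((some 2).map (· + 1)).get rfl = 3 := rfl
```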
@@ -10,7 +10,10 @@ public import Init.Data.Range.Polymorphic.Basic
public import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Range.Polymorphic.Stream
public import Init.Data.Range.Polymorphic.Lemmas
public import Init.Data.Range.Polymorphic.Map

public import Init.Data.Range.Polymorphic.Fin
public import Init.Data.Range.Polymorphic.Char
public import Init.Data.Range.Polymorphic.Nat
public import Init.Data.Range.Polymorphic.Int
public import Init.Data.Range.Polymorphic.BitVec

src/Init/Data/Range/Polymorphic/Char.lean (new file, 79 lines)
@@ -0,0 +1,79 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module

prelude
public import Init.Data.Char.Ordinal
public import Init.Data.Range.Polymorphic.Fin
import Init.Data.Range.Polymorphic.Lemmas
import Init.Data.Range.Polymorphic.Map
import Init.Data.Char.Order

open Std Std.PRange Std.PRange.UpwardEnumerable

namespace Char

public instance : UpwardEnumerable Char where
  succ?
  succMany?

@[simp]
public theorem pRangeSucc?_eq : PRange.succ? (α := Char) = Char.succ? := rfl

@[simp]
public theorem pRangeSuccMany?_eq : PRange.succMany? (α := Char) = Char.succMany? := rfl

public instance : Rxc.HasSize Char where
  size lo hi := Rxc.HasSize.size lo.ordinal hi.ordinal

public instance : Rxo.HasSize Char where
  size lo hi := Rxo.HasSize.size lo.ordinal hi.ordinal

public instance : Rxi.HasSize Char where
  size hi := Rxi.HasSize.size hi.ordinal

public instance : Least? Char where
  least? := some '\x00'

@[simp]
public theorem least?_eq : Least?.least? (α := Char) = some '\x00' := rfl

def map : Map Char (Fin Char.numCodePoints) where
  toFun := Char.ordinal
  injective := ordinal_injective
  succ?_toFun := by simp [succ?_eq]
  succMany?_toFun := by simp [succMany?_eq]

@[simp]
theorem toFun_map : map.toFun = Char.ordinal := rfl

instance : Map.PreservesLE map where
  le_iff := by simp [le_iff_ordinal_le]

instance : Map.PreservesRxcSize map where
  size_eq := rfl

instance : Map.PreservesRxoSize map where
  size_eq := rfl

instance : Map.PreservesRxiSize map where
  size_eq := rfl

instance : Map.PreservesLeast? map where
  map_least? := by simp

public instance : LawfulUpwardEnumerable Char := .ofMap map
public instance : LawfulUpwardEnumerableLE Char := .ofMap map
public instance : LawfulUpwardEnumerableLT Char := .ofMap map
public instance : LawfulUpwardEnumerableLeast? Char := .ofMap map
public instance : Rxc.LawfulHasSize Char := .ofMap map
public instance : Rxc.IsAlwaysFinite Char := .ofMap map
public instance : Rxo.LawfulHasSize Char := .ofMap map
public instance : Rxo.IsAlwaysFinite Char := .ofMap map
public instance : Rxi.LawfulHasSize Char := .ofMap map
public instance : Rxi.IsAlwaysFinite Char := .ofMap map

end Char
src/Init/Data/Range/Polymorphic/Fin.lean (new file, 92 lines)
@@ -0,0 +1,92 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
module

prelude
public import Init.Data.Range.Polymorphic.Instances
public import Init.Data.Fin.OverflowAware
import Init.Grind

public section

open Std Std.PRange

namespace Fin

instance : UpwardEnumerable (Fin n) where
  succ? i := i.addNat? 1
  succMany? m i := i.addNat? m

@[simp, grind =]
theorem pRangeSucc?_eq : PRange.succ? (α := Fin n) = (·.addNat? 1) := rfl

@[simp, grind =]
theorem pRangeSuccMany?_eq : PRange.succMany? m (α := Fin n) = (·.addNat? m) :=
  rfl

instance : LawfulUpwardEnumerable (Fin n) where
  ne_of_lt a b := by grind [UpwardEnumerable.LT]
  succMany?_zero a := by simp
  succMany?_add_one m a := by grind

instance : LawfulUpwardEnumerableLE (Fin n) where
  le_iff x y := by
    simp only [le_def, UpwardEnumerable.LE, pRangeSuccMany?_eq, Fin.addNat?_eq_dif,
      Option.dite_none_right_eq_some, Option.some.injEq, ← val_inj, exists_prop]
    exact ⟨fun h => ⟨y - x, by grind⟩, by grind⟩

instance : Least? (Fin 0) where
  least? := none

instance : LawfulUpwardEnumerableLeast? (Fin 0) where
  least?_le a := False.elim (Nat.not_lt_zero _ a.isLt)

@[simp]
theorem least?_eq_of_zero : Least?.least? (α := Fin 0) = none := rfl

instance [NeZero n] : Least? (Fin n) where
  least? := some 0

instance [NeZero n] : LawfulUpwardEnumerableLeast? (Fin n) where
  least?_le a := ⟨0, rfl, (LawfulUpwardEnumerableLE.le_iff 0 a).1 (Fin.zero_le _)⟩

@[simp]
theorem least?_eq [NeZero n] : Least?.least? (α := Fin n) = some 0 := rfl

instance : LawfulUpwardEnumerableLT (Fin n) := inferInstance

instance : Rxc.HasSize (Fin n) where
  size lo hi := hi + 1 - lo

@[grind =]
theorem rxcHasSize_eq :
    Rxc.HasSize.size (α := Fin n) = fun (lo hi : Fin n) => (hi + 1 - lo : Nat) := rfl

instance : Rxc.LawfulHasSize (Fin n) where
  size_eq_zero_of_not_le bound x := by grind
  size_eq_one_of_succ?_eq_none lo hi := by grind
  size_eq_succ_of_succ?_eq_some lo hi x := by grind

instance : Rxc.IsAlwaysFinite (Fin n) := inferInstance

instance : Rxo.HasSize (Fin n) := .ofClosed
instance : Rxo.LawfulHasSize (Fin n) := inferInstance
instance : Rxo.IsAlwaysFinite (Fin n) := inferInstance

instance : Rxi.HasSize (Fin n) where
  size lo := n - lo

@[grind =]
theorem rxiHasSize_eq :
    Rxi.HasSize.size (α := Fin n) = fun (lo : Fin n) => (n - lo : Nat) := rfl

instance : Rxi.LawfulHasSize (Fin n) where
  size_eq_one_of_succ?_eq_none x := by grind
  size_eq_succ_of_succ?_eq_some lo lo' := by grind

instance : Rxi.IsAlwaysFinite (Fin n) := inferInstance

end Fin
195
src/Init/Data/Range/Polymorphic/Map.lean
Normal file
195
src/Init/Data/Range/Polymorphic/Map.lean
Normal file
@@ -0,0 +1,195 @@
|
||||
/-
|
||||
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Markus Himmel
|
||||
-/
|
||||
module
|
||||
|
||||
prelude
|
||||
public import Init.Data.Range.Polymorphic.Instances
|
||||
public import Init.Data.Function
|
||||
import Init.Data.Order.Lemmas
|
||||
import Init.Data.Option.Function
|
||||
|
||||
public section
|
||||
|
||||
/-!
|
||||
# Mappings between `UpwardEnumerable` types
|
||||
|
||||
In this file we build machinery for pulling back lawfulness properties for `UpwardEnumerable` along
|
||||
injective functions that commute with the relevant operations.
|
||||
-/
|
||||
|
||||
namespace Std
|
||||
|
||||
namespace PRange
|
||||
|
||||
namespace UpwardEnumerable
|
||||
|
||||
/--
|
||||
An injective mapping between two types implementing `UpwardEnumerable` that commutes with `succ?`
|
||||
and `succMany?`.
|
||||
|
||||
Having such a mapping means that all of the `Prop`-valued lawfulness classes around
|
||||
`UpwardEnumerable` can be pulled back.
|
||||
-/
|
||||
structure Map (α : Type u) (β : Type v) [UpwardEnumerable α] [UpwardEnumerable β] where
|
||||
toFun : α → β
|
||||
injective : Function.Injective toFun
|
||||
succ?_toFun (a : α) : succ? (toFun a) = (succ? a).map toFun
|
||||
succMany?_toFun (n : Nat) (a : α) : succMany? n (toFun a) = (succMany? n a).map toFun
|
||||
|
||||
namespace Map
|
||||
|
||||
variable [UpwardEnumerable α] [UpwardEnumerable β]
|
||||
|
||||
theorem succ?_eq_none_iff (f : Map α β) {a : α} :
|
||||
succ? a = none ↔ succ? (f.toFun a) = none := by
|
||||
rw [← (Option.map_injective f.injective).eq_iff, Option.map_none, ← f.succ?_toFun]
|
||||
|
||||
theorem succ?_eq_some_iff (f : Map α β) {a b : α} :
|
||||
succ? a = some b ↔ succ? (f.toFun a) = some (f.toFun b) := by
|
||||
rw [← (Option.map_injective f.injective).eq_iff, Option.map_some, ← f.succ?_toFun]
|
||||
|
||||
theorem le_iff (f : Map α β) {a b : α} :
|
||||
UpwardEnumerable.LE a b ↔ UpwardEnumerable.LE (f.toFun a) (f.toFun b) := by
|
||||
simp only [UpwardEnumerable.LE, f.succMany?_toFun, Option.map_eq_some_iff]
|
||||
refine ⟨fun ⟨n, hn⟩ => ⟨n, b, by simp [hn]⟩, fun ⟨n, c, hn⟩ => ⟨n, ?_⟩⟩
|
||||
rw [hn.1, Option.some_inj, f.injective hn.2]
|
||||
|
||||
theorem lt_iff (f : Map α β) {a b : α} :
|
||||
UpwardEnumerable.LT a b ↔ UpwardEnumerable.LT (f.toFun a) (f.toFun b) := by
|
||||
simp only [UpwardEnumerable.LT, f.succMany?_toFun, Option.map_eq_some_iff]
|
||||
refine ⟨fun ⟨n, hn⟩ => ⟨n, b, by simp [hn]⟩, fun ⟨n, c, hn⟩ => ⟨n, ?_⟩⟩
|
||||
rw [hn.1, Option.some_inj, f.injective hn.2]
|
||||
|
||||
theorem succ?_toFun' (f : Map α β) : succ? ∘ f.toFun = Option.map f.toFun ∘ succ? := by
|
||||
ext
|
||||
simp [f.succ?_toFun]
|
||||
|
||||
/-- Compatibility class for `Map` and `≤`. -/
|
||||
class PreservesLE [LE α] [LE β] (f : Map α β) where
|
||||
le_iff : a ≤ b ↔ f.toFun a ≤ f.toFun b
|
||||
|
||||
/-- Compatibility class for `Map` and `<`. -/
|
||||
class PreservesLT [LT α] [LT β] (f : Map α β) where
|
||||
lt_iff : a < b ↔ f.toFun a < f.toFun b
|
||||
|
||||
/-- Compatibility class for `Map` and `Rxc.HasSize`. -/
|
||||
class PreservesRxcSize [Rxc.HasSize α] [Rxc.HasSize β] (f : Map α β) where
|
||||
size_eq : Rxc.HasSize.size a b = Rxc.HasSize.size (f.toFun a) (f.toFun b)
|
||||
|
||||
/-- Compatibility class for `Map` and `Rxo.HasSize`. -/
|
||||
class PreservesRxoSize [Rxo.HasSize α] [Rxo.HasSize β] (f : Map α β) where
|
||||
size_eq : Rxo.HasSize.size a b = Rxo.HasSize.size (f.toFun a) (f.toFun b)
|
||||
|
||||
/-- Compatibility class for `Map` and `Rxi.HasSize`. -/
|
||||
class PreservesRxiSize [Rxi.HasSize α] [Rxi.HasSize β] (f : Map α β) where
|
||||
size_eq : Rxi.HasSize.size b = Rxi.HasSize.size (f.toFun b)
|
||||
|
||||
/-- Compatibility class for `Map` and `Least?`. -/
|
||||
class PreservesLeast? [Least? α] [Least? β] (f : Map α β) where
|
||||
map_least? : Least?.least?.map f.toFun = Least?.least?
|
||||
|
||||
end UpwardEnumerable.Map
|
||||
|
||||
open UpwardEnumerable

variable [UpwardEnumerable α] [UpwardEnumerable β]

theorem LawfulUpwardEnumerable.ofMap [LawfulUpwardEnumerable β] (f : Map α β) :
    LawfulUpwardEnumerable α where
  ne_of_lt a b := by
    simpa only [f.lt_iff, ← f.injective.ne_iff] using LawfulUpwardEnumerable.ne_of_lt _ _
  succMany?_zero a := by
    apply Option.map_injective f.injective
    simpa [← f.succMany?_toFun] using LawfulUpwardEnumerable.succMany?_zero _
  succMany?_add_one n a := by
    apply Option.map_injective f.injective
    rw [← f.succMany?_toFun, LawfulUpwardEnumerable.succMany?_add_one,
      f.succMany?_toFun, Option.bind_map, Map.succ?_toFun', Option.map_bind]

instance [LE α] [LT α] [LawfulOrderLT α] [LE β] [LT β] [LawfulOrderLT β] (f : Map α β)
    [f.PreservesLE] : f.PreservesLT where
  lt_iff := by simp [lt_iff_le_and_not_ge, Map.PreservesLE.le_iff (f := f)]

theorem LawfulUpwardEnumerableLE.ofMap [LE α] [LE β] [LawfulUpwardEnumerableLE β] (f : Map α β)
    [f.PreservesLE] : LawfulUpwardEnumerableLE α where
  le_iff := by simp [Map.PreservesLE.le_iff (f := f), f.le_iff, LawfulUpwardEnumerableLE.le_iff]

theorem LawfulUpwardEnumerableLT.ofMap [LT α] [LT β] [LawfulUpwardEnumerableLT β] (f : Map α β)
    [f.PreservesLT] : LawfulUpwardEnumerableLT α where
  lt_iff := by simp [Map.PreservesLT.lt_iff (f := f), f.lt_iff, LawfulUpwardEnumerableLT.lt_iff]

theorem LawfulUpwardEnumerableLeast?.ofMap [Least? α] [Least? β] [LawfulUpwardEnumerableLeast? β]
    (f : Map α β) [f.PreservesLeast?] : LawfulUpwardEnumerableLeast? α where
  least?_le a := by
    obtain ⟨l, hl, hl'⟩ := LawfulUpwardEnumerableLeast?.least?_le (f.toFun a)
    have : (Least?.least? (α := α)).isSome := by
      rw [← Option.isSome_map (f := f.toFun), Map.PreservesLeast?.map_least?,
        hl, Option.isSome_some]
    refine ⟨Option.get _ this, by simp, ?_⟩
    rw [f.le_iff, Option.apply_get (f := f.toFun)]
    simpa [Map.PreservesLeast?.map_least?, hl] using hl'

end PRange

open PRange PRange.UpwardEnumerable

variable [UpwardEnumerable α] [UpwardEnumerable β]

theorem Rxc.LawfulHasSize.ofMap [LE α] [LE β] [Rxc.HasSize α] [Rxc.HasSize β] [Rxc.LawfulHasSize β]
    (f : Map α β) [f.PreservesLE] [f.PreservesRxcSize] : Rxc.LawfulHasSize α where
  size_eq_zero_of_not_le a b := by
    simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f)] using
      Rxc.LawfulHasSize.size_eq_zero_of_not_le _ _
  size_eq_one_of_succ?_eq_none lo hi := by
    simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f),
        f.succ?_eq_none_iff] using
      Rxc.LawfulHasSize.size_eq_one_of_succ?_eq_none _ _
  size_eq_succ_of_succ?_eq_some lo hi lo' := by
    simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f),
        f.succ?_eq_some_iff] using
      Rxc.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _ _

theorem Rxo.LawfulHasSize.ofMap [LT α] [LT β] [Rxo.HasSize α] [Rxo.HasSize β] [Rxo.LawfulHasSize β]
    (f : Map α β) [f.PreservesLT] [f.PreservesRxoSize] : Rxo.LawfulHasSize α where
  size_eq_zero_of_not_le a b := by
    simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f)] using
      Rxo.LawfulHasSize.size_eq_zero_of_not_le _ _
  size_eq_one_of_succ?_eq_none lo hi := by
    simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f),
        f.succ?_eq_none_iff] using
      Rxo.LawfulHasSize.size_eq_one_of_succ?_eq_none _ _
  size_eq_succ_of_succ?_eq_some lo hi lo' := by
    simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f),
        f.succ?_eq_some_iff] using
      Rxo.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _ _

theorem Rxi.LawfulHasSize.ofMap [Rxi.HasSize α] [Rxi.HasSize β] [Rxi.LawfulHasSize β]
    (f : Map α β) [f.PreservesRxiSize] : Rxi.LawfulHasSize α where
  size_eq_one_of_succ?_eq_none lo := by
    simpa [Map.PreservesRxiSize.size_eq (f := f), f.succ?_eq_none_iff] using
      Rxi.LawfulHasSize.size_eq_one_of_succ?_eq_none _
  size_eq_succ_of_succ?_eq_some lo lo' := by
    simpa [Map.PreservesRxiSize.size_eq (f := f), f.succ?_eq_some_iff] using
      Rxi.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _

theorem Rxc.IsAlwaysFinite.ofMap [LE α] [LE β] [Rxc.IsAlwaysFinite β] (f : Map α β)
    [f.PreservesLE] : Rxc.IsAlwaysFinite α where
  finite init hi := by
    obtain ⟨n, hn⟩ := Rxc.IsAlwaysFinite.finite (f.toFun init) (f.toFun hi)
    exact ⟨n, by simpa [f.succMany?_toFun, Map.PreservesLE.le_iff (f := f)] using hn⟩

theorem Rxo.IsAlwaysFinite.ofMap [LT α] [LT β] [Rxo.IsAlwaysFinite β] (f : Map α β)
    [f.PreservesLT] : Rxo.IsAlwaysFinite α where
  finite init hi := by
    obtain ⟨n, hn⟩ := Rxo.IsAlwaysFinite.finite (f.toFun init) (f.toFun hi)
    exact ⟨n, by simpa [f.succMany?_toFun, Map.PreservesLT.lt_iff (f := f)] using hn⟩

theorem Rxi.IsAlwaysFinite.ofMap [Rxi.IsAlwaysFinite β] (f : Map α β) : Rxi.IsAlwaysFinite α where
  finite init := by
    obtain ⟨n, hn⟩ := Rxi.IsAlwaysFinite.finite (f.toFun init)
    exact ⟨n, by simpa [f.succMany?_toFun] using hn⟩

end Std
@@ -157,7 +157,7 @@ Converts an 8-bit signed integer to a natural number, mapping all negative numbe

Use `Int8.toBitVec` to obtain the two's complement representation.
-/
-@[inline] def Int8.toNatClampNeg (i : Int8) : Nat := i.toInt.toNat
+@[suggest_for Int8.toNat, inline] def Int8.toNatClampNeg (i : Int8) : Nat := i.toInt.toNat

/-- Obtains the `Int8` whose 2's complement representation is the given `BitVec 8`. -/
@[inline] def Int8.ofBitVec (b : BitVec 8) : Int8 := ⟨⟨b⟩⟩
@@ -510,7 +510,7 @@ Converts a 16-bit signed integer to a natural number, mapping all negative numbe

Use `Int16.toBitVec` to obtain the two's complement representation.
-/
-@[inline] def Int16.toNatClampNeg (i : Int16) : Nat := i.toInt.toNat
+@[suggest_for Int16.toNat, inline] def Int16.toNatClampNeg (i : Int16) : Nat := i.toInt.toNat

/-- Obtains the `Int16` whose 2's complement representation is the given `BitVec 16`. -/
@[inline] def Int16.ofBitVec (b : BitVec 16) : Int16 := ⟨⟨b⟩⟩
@@ -880,7 +880,7 @@ Converts a 32-bit signed integer to a natural number, mapping all negative numbe

Use `Int32.toBitVec` to obtain the two's complement representation.
-/
-@[inline] def Int32.toNatClampNeg (i : Int32) : Nat := i.toInt.toNat
+@[suggest_for Int32.toNat, inline] def Int32.toNatClampNeg (i : Int32) : Nat := i.toInt.toNat

/-- Obtains the `Int32` whose 2's complement representation is the given `BitVec 32`. -/
@[inline] def Int32.ofBitVec (b : BitVec 32) : Int32 := ⟨⟨b⟩⟩
@@ -1270,7 +1270,7 @@ Converts a 64-bit signed integer to a natural number, mapping all negative numbe

Use `Int64.toBitVec` to obtain the two's complement representation.
-/
-@[inline] def Int64.toNatClampNeg (i : Int64) : Nat := i.toInt.toNat
+@[suggest_for Int64.toNat, inline] def Int64.toNatClampNeg (i : Int64) : Nat := i.toInt.toNat

/-- Obtains the `Int64` whose 2's complement representation is the given `BitVec 64`. -/
@[inline] def Int64.ofBitVec (b : BitVec 64) : Int64 := ⟨⟨b⟩⟩
@@ -1637,7 +1637,7 @@ Converts a word-sized signed integer to a natural number, mapping all negative n

Use `ISize.toBitVec` to obtain the two's complement representation.
-/
-@[inline] def ISize.toNatClampNeg (i : ISize) : Nat := i.toInt.toNat
+@[suggest_for ISize.toNat, inline] def ISize.toNatClampNeg (i : ISize) : Nat := i.toInt.toNat

/-- Obtains the `ISize` whose 2's complement representation is the given `BitVec`. -/
@[inline] def ISize.ofBitVec (b : BitVec System.Platform.numBits) : ISize := ⟨⟨b⟩⟩

@@ -148,6 +148,7 @@ theorem Subarray.copy_eq_toArray {s : Subarray α} :
    s.copy = s.toArray :=
  (rfl)

+@[grind =]
theorem Subarray.sliceToArray_eq_toArray {s : Subarray α} :
    Slice.toArray s = s.toArray :=
  (rfl)

@@ -119,6 +119,13 @@ public theorem forIn_toList {α : Type u} {s : Subarray α}
    ForIn.forIn s.toList init f = ForIn.forIn s init f :=
  Slice.forIn_toList

+@[grind =]
+public theorem forIn_eq_forIn_toList {α : Type u} {s : Subarray α}
+    {m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
+    {f : α → γ → m (ForInStep γ)} :
+    ForIn.forIn s init f = ForIn.forIn s.toList init f :=
+  forIn_toList.symm

@[simp]
public theorem forIn_toArray {α : Type u} {s : Subarray α}
    {m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
@@ -167,22 +174,22 @@ public theorem Array.toSubarray_eq_min {xs : Array α} {lo hi : Nat} :
  simp only [Array.toSubarray]
  split <;> split <;> simp [Nat.min_eq_right (Nat.le_of_not_ge _), *]

-@[simp]
+@[simp, grind =]
public theorem Array.array_toSubarray {xs : Array α} {lo hi : Nat} :
    (xs.toSubarray lo hi).array = xs := by
  simp [toSubarray_eq_min, Subarray.array]

-@[simp]
+@[simp, grind =]
public theorem Array.start_toSubarray {xs : Array α} {lo hi : Nat} :
    (xs.toSubarray lo hi).start = min lo (min hi xs.size) := by
  simp [toSubarray_eq_min, Subarray.start]

-@[simp]
+@[simp, grind =]
public theorem Array.stop_toSubarray {xs : Array α} {lo hi : Nat} :
    (xs.toSubarray lo hi).stop = min hi xs.size := by
  simp [toSubarray_eq_min, Subarray.stop]

-theorem Subarray.toList_eq {xs : Subarray α} :
+public theorem Subarray.toList_eq {xs : Subarray α} :
    xs.toList = (xs.array.extract xs.start xs.stop).toList := by
  let aslice := xs
  obtain ⟨⟨array, start, stop, h₁, h₂⟩⟩ := xs
@@ -199,45 +206,46 @@ theorem Subarray.toList_eq {xs : Subarray α} :
  simp [Subarray.array, Subarray.start, Subarray.stop]
  simp [this, ListSlice.toList_eq, lslice]

+@[grind =]
public theorem Subarray.size_eq {xs : Subarray α} :
    xs.size = xs.stop - xs.start := by
  simp [Subarray.size]

-@[simp]
+@[simp, grind =]
public theorem Subarray.toArray_toList {xs : Subarray α} :
    xs.toList.toArray = xs.toArray := by
  simp [Std.Slice.toList, Subarray.toArray, Std.Slice.toArray]

-@[simp]
+@[simp, grind =]
public theorem Subarray.toList_toArray {xs : Subarray α} :
    xs.toArray.toList = xs.toList := by
  simp [Std.Slice.toList, Subarray.toArray, Std.Slice.toArray]

-@[simp]
+@[simp, grind =]
public theorem Subarray.length_toList {xs : Subarray α} :
    xs.toList.length = xs.size := by
  have : xs.start ≤ xs.stop := xs.internalRepresentation.start_le_stop
  have : xs.stop ≤ xs.array.size := xs.internalRepresentation.stop_le_array_size
  simp [Subarray.toList_eq, Subarray.size]; omega

-@[simp]
+@[simp, grind =]
public theorem Subarray.size_toArray {xs : Subarray α} :
    xs.toArray.size = xs.size := by
  simp [← Subarray.toArray_toList, Subarray.size, Slice.size, SliceSize.size, start, stop]

namespace Array

-@[simp]
+@[simp, grind =]
public theorem array_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].array = xs := by
  simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]

-@[simp]
+@[simp, grind =]
public theorem start_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].start = min lo (min hi xs.size) := by
  simp [Std.Rco.Sliceable.mkSlice]

-@[simp]
+@[simp, grind =]
public theorem stop_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].stop = min hi xs.size := by
  simp [Std.Rco.Sliceable.mkSlice]
@@ -246,14 +254,14 @@ public theorem mkSlice_rco_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
    xs[lo...hi] = xs[(min lo (min hi xs.size))...(min hi xs.size)] := by
  simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray_eq_min]

-@[simp]
+@[simp, grind =]
public theorem toList_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
  rw [List.take_eq_take_min, List.drop_eq_drop_min]
  simp [Std.Rco.Sliceable.mkSlice, Subarray.toList_eq, List.take_drop,
    Nat.add_sub_of_le (Nat.min_le_right _ _)]

-@[simp]
+@[simp, grind =]
public theorem toArray_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].toArray = xs.extract lo hi := by
  simp only [← Subarray.toArray_toList, toList_mkSlice_rco]
@@ -266,12 +274,12 @@ public theorem toArray_mkSlice_rco {xs : Array α} {lo hi : Nat} :
  · simp; omega
  · simp; omega

-@[simp]
+@[simp, grind =]
public theorem size_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...hi].size = min hi xs.size - lo := by
  simp [← Subarray.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo...=hi] = xs[lo...(hi + 1)] := by
  simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -280,7 +288,7 @@ public theorem mkSlice_rcc_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
    xs[lo...=hi] = xs[(min lo (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
  simp [mkSlice_rco_eq_mkSlice_rco_min]

-@[simp]
+@[simp, grind =]
public theorem array_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
    xs[lo...=hi].array = xs := by
  simp [Std.Rcc.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@@ -325,7 +333,7 @@ public theorem stop_mkSlice_rci {xs : Array α} {lo : Nat} :
    xs[lo...*].stop = xs.size := by
  simp [Std.Rci.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Array α} {lo : Nat} :
    xs[lo...*] = xs[lo...xs.size] := by
  simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
@@ -344,7 +352,7 @@ public theorem toArray_mkSlice_rci {xs : Array α} {lo : Nat} :
    xs[lo...*].toArray = xs.extract lo := by
  simp

-@[simp]
+@[simp, grind =]
public theorem size_mkSlice_rci {xs : Array α} {lo : Nat} :
    xs[lo...*].size = xs.size - lo := by
  simp [← Subarray.length_toList]
@@ -364,7 +372,7 @@ public theorem stop_mkSlice_roo {xs : Array α} {lo hi : Nat} :
    xs[lo<...hi].stop = min hi xs.size := by
  simp [Std.Roo.Sliceable.mkSlice]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
    xs[lo<...hi] = xs[(lo + 1)...hi] := by
  simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -408,6 +416,11 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : Array α} {lo hi : Nat} :
    xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
  simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]

+@[grind =]
+public theorem mkSlice_roc_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
+    xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
+  simp

public theorem mkSlice_roc_eq_mkSlice_roo_min {xs : Array α} {lo hi : Nat} :
    xs[lo<...=hi] = xs[(min (lo + 1) (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
  simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -452,6 +465,11 @@ public theorem mkSlice_roi_eq_mkSlice_roo {xs : Array α} {lo : Nat} :
    xs[lo<...*] = xs[lo<...xs.size] := by
  simp [mkSlice_rci_eq_mkSlice_rco]

+@[grind =]
+public theorem mkSlice_roi_eq_mkSlice_rco {xs : Array α} {lo : Nat} :
+    xs[lo<...*] = xs[(lo + 1)...xs.size] := by
+  simp [mkSlice_rci_eq_mkSlice_rco]

public theorem mkSlice_roi_eq_mkSlice_roo_min {xs : Array α} {lo : Nat} :
    xs[lo<...*] = xs[(min (lo + 1) xs.size)...xs.size] := by
  simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -476,7 +494,7 @@ public theorem array_mkSlice_rio {xs : Array α} {hi : Nat} :
    xs[*...hi].array = xs := by
  simp [Std.Rio.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]

-@[simp]
+@[simp, grind =]
public theorem start_mkSlice_rio {xs : Array α} {hi : Nat} :
    xs[*...hi].start = 0 := by
  simp [Std.Rio.Sliceable.mkSlice]
@@ -486,7 +504,7 @@ public theorem stop_mkSlice_rio {xs : Array α} {hi : Nat} :
    xs[*...hi].stop = min hi xs.size := by
  simp [Std.Rio.Sliceable.mkSlice]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Array α} {hi : Nat} :
    xs[*...hi] = xs[0...hi] := by
  simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -515,7 +533,7 @@ public theorem array_mkSlice_ric {xs : Array α} {hi : Nat} :
    xs[*...=hi].array = xs := by
  simp [Std.Ric.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]

-@[simp]
+@[simp, grind =]
public theorem start_mkSlice_ric {xs : Array α} {hi : Nat} :
    xs[*...=hi].start = 0 := by
  simp [Std.Ric.Sliceable.mkSlice]
@@ -530,6 +548,11 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : Array α} {hi : Nat} :
    xs[*...=hi] = xs[*...(hi + 1)] := by
  simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]

+@[grind =]
+public theorem mkSlice_ric_eq_mkSlice_rco {xs : Array α} {hi : Nat} :
+    xs[*...=hi] = xs[0...(hi + 1)] := by
+  simp

public theorem mkSlice_ric_eq_mkSlice_rio_min {xs : Array α} {hi : Nat} :
    xs[*...=hi] = xs[*...(min (hi + 1) xs.size)] := by
  simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -559,11 +582,16 @@ public theorem mkSlice_rii_eq_mkSlice_rio {xs : Array α} :
    xs[*...*] = xs[*...xs.size] := by
  simp [mkSlice_rci_eq_mkSlice_rco]

+@[grind =]
+public theorem mkSlice_rii_eq_mkSlice_rco {xs : Array α} :
+    xs[*...*] = xs[0...xs.size] := by
+  simp

public theorem mkSlice_rii_eq_mkSlice_rio_min {xs : Array α} :
    xs[*...*] = xs[*...xs.size] := by
  simp [mkSlice_rco_eq_mkSlice_rco_min]

-@[simp]
+@[simp, grind =]
public theorem toList_mkSlice_rii {xs : Array α} :
    xs[*...*].toList = xs.toList := by
  rw [mkSlice_rii_eq_mkSlice_rci, toList_mkSlice_rci, List.drop_zero]
@@ -573,7 +601,7 @@ public theorem toArray_mkSlice_rii {xs : Array α} :
    xs[*...*].toArray = xs := by
  simp

-@[simp]
+@[simp, grind =]
public theorem size_mkSlice_rii {xs : Array α} :
    xs[*...*].size = xs.size := by
  simp [← Subarray.length_toList]
@@ -583,12 +611,12 @@ public theorem array_mkSlice_rii {xs : Array α} :
    xs[*...*].array = xs := by
  simp

-@[simp]
+@[simp, grind =]
public theorem start_mkSlice_rii {xs : Array α} :
    xs[*...*].start = 0 := by
  simp

-@[simp]
+@[simp, grind =]
public theorem stop_mkSlice_rii {xs : Array α} :
    xs[*...*].stop = xs.size := by
  simp [Std.Rii.Sliceable.mkSlice]
@@ -599,7 +627,7 @@ section SubarraySlices

namespace Subarray

-@[simp]
+@[simp, grind =]
public theorem toList_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
    xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
  simp only [Std.Rco.Sliceable.mkSlice, Std.Rco.HasRcoIntersection.intersection, toList_eq,
@@ -608,12 +636,12 @@ public theorem toList_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
  rw [Nat.add_sub_cancel' (by omega)]
  simp [Subarray.size, ← Array.length_toList, ← List.take_eq_take_min, Nat.add_comm xs.start]

-@[simp]
+@[simp, grind =]
public theorem toArray_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
    xs[lo...hi].toArray = xs.toArray.extract lo hi := by
  simp [← Subarray.toArray_toList, List.drop_take]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
    xs[lo...=hi] = xs[lo...(hi + 1)] := by
  simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
@@ -629,7 +657,7 @@ public theorem toArray_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
    xs[lo...=hi].toArray = xs.toArray.extract lo (hi + 1) := by
  simp

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Subarray α} {lo : Nat} :
    xs[lo...*] = xs[lo...xs.size] := by
  simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
@@ -651,12 +679,17 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
  simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice,
    Std.Roc.HasRcoIntersection.intersection, Std.Roo.HasRcoIntersection.intersection]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
    xs[lo<...hi] = xs[(lo + 1)...hi] := by
  simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
    Std.Roo.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]

+@[grind =]
+public theorem mkSlice_roc_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
+    xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
+  simp

@[simp]
public theorem toList_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
    xs[lo<...hi].toList = (xs.toList.take hi).drop (lo + 1) := by
@@ -670,8 +703,7 @@ public theorem toArray_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
@[simp]
public theorem mkSlice_roc_eq_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
    xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
-  simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
-    Std.Roc.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
+  simp

@[simp]
public theorem toList_mkSlice_roc {xs : Subarray α} {lo hi : Nat} :
@@ -689,6 +721,11 @@ public theorem mkSlice_roi_eq_mkSlice_rci {xs : Subarray α} {lo : Nat} :
  simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice,
    Std.Roi.HasRcoIntersection.intersection, Std.Rci.HasRcoIntersection.intersection]

+@[grind =]
+public theorem mkSlice_roi_eq_mkSlice_rco {xs : Subarray α} {lo : Nat} :
+    xs[lo<...*] = xs[(lo + 1)...xs.size] := by
+  simp

@[simp]
public theorem toList_mkSlice_roi {xs : Subarray α} {lo : Nat} :
    xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
@@ -705,12 +742,17 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : Subarray α} {hi : Nat} :
  simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice,
    Std.Ric.HasRcoIntersection.intersection, Std.Rio.HasRcoIntersection.intersection]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Subarray α} {hi : Nat} :
    xs[*...hi] = xs[0...hi] := by
  simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
    Std.Rio.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]

+@[grind =]
+public theorem mkSlice_ric_eq_mkSlice_rco {xs : Subarray α} {hi : Nat} :
+    xs[*...=hi] = xs[0...(hi + 1)] := by
+  simp

@[simp]
public theorem toList_mkSlice_rio {xs : Subarray α} {hi : Nat} :
    xs[*...hi].toList = xs.toList.take hi := by
@@ -737,7 +779,7 @@ public theorem toArray_mkSlice_ric {xs : Subarray α} {hi : Nat} :
    xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
  simp

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rii {xs : Subarray α} :
    xs[*...*] = xs := by
  simp [Std.Rii.Sliceable.mkSlice]

@@ -47,21 +47,28 @@ public theorem toList_eq {xs : ListSlice α} :
  simp only [Std.Slice.toList, toList_internalIter]
  rfl

@[simp, grind =]
public theorem toArray_toList {xs : ListSlice α} :
    xs.toList.toArray = xs.toArray := by
  simp [Std.Slice.toArray, Std.Slice.toList]

@[simp, grind =]
public theorem toList_toArray {xs : ListSlice α} :
    xs.toArray.toList = xs.toList := by
  simp [Std.Slice.toArray, Std.Slice.toList]

-@[simp]
+@[simp, grind =]
public theorem length_toList {xs : ListSlice α} :
    xs.toList.length = xs.size := by
  simp [ListSlice.toList_eq, Std.Slice.size, Std.Slice.SliceSize.size, ← Iter.length_toList_eq_count,
    toList_internalIter]; rfl

-@[simp]
+@[grind =]
public theorem size_eq_length_toList {xs : ListSlice α} :
    xs.size = xs.toList.length :=
  length_toList.symm

@[simp, grind =]
public theorem size_toArray {xs : ListSlice α} :
    xs.toArray.size = xs.size := by
  simp [← ListSlice.toArray_toList]
@@ -70,7 +77,7 @@ end ListSlice

namespace List

-@[simp]
+@[simp, grind =]
public theorem toList_mkSlice_rco {xs : List α} {lo hi : Nat} :
    xs[lo...hi].toList = (xs.take hi).drop lo := by
  rw [List.take_eq_take_min, List.drop_eq_drop_min]
@@ -81,17 +88,17 @@ public theorem toList_mkSlice_rco {xs : List α} {lo hi : Nat} :
  · have : min hi xs.length ≤ lo := by omega
    simp [h, Nat.min_eq_right this]

-@[simp]
+@[simp, grind =]
public theorem toArray_mkSlice_rco {xs : List α} {lo hi : Nat} :
    xs[lo...hi].toArray = ((xs.take hi).drop lo).toArray := by
  simp [← ListSlice.toArray_toList]

-@[simp]
+@[simp, grind =]
public theorem size_mkSlice_rco {xs : List α} {lo hi : Nat} :
    xs[lo...hi].size = min hi xs.length - lo := by
  simp [← ListSlice.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
    xs[lo...=hi] = xs[lo...(hi + 1)] := by
  simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -122,12 +129,22 @@ public theorem toArray_mkSlice_rci {xs : List α} {lo : Nat} :
    xs[lo...*].toArray = (xs.drop lo).toArray := by
  simp [← ListSlice.toArray_toList]

+@[grind =]
+public theorem toList_mkSlice_rci_eq_toList_mkSlice_rco {xs : List α} {lo : Nat} :
+    xs[lo...*].toList = xs[lo...xs.length].toList := by
+  simp
+
+@[grind =]
+public theorem toArray_mkSlice_rci_eq_toArray_mkSlice_rco {xs : List α} {lo : Nat} :
+    xs[lo...*].toArray = xs[lo...xs.length].toArray := by
+  simp

@[simp]
public theorem size_mkSlice_rci {xs : List α} {lo : Nat} :
    xs[lo...*].size = xs.length - lo := by
  simp [← ListSlice.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
    xs[lo<...hi] = xs[(lo + 1)...hi] := by
  simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -152,6 +169,11 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : List α} {lo hi : Nat} :
    xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
  simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]

+@[simp, grind =]
+public theorem mkSlice_roc_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
+    xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
+  simp

@[simp]
public theorem toList_mkSlice_roc {xs : List α} {lo hi : Nat} :
    xs[lo<...=hi].toList = (xs.take (hi + 1)).drop (lo + 1) := by
@@ -167,11 +189,27 @@ public theorem size_mkSlice_roc {xs : List α} {lo hi : Nat} :
    xs[lo<...=hi].size = min (hi + 1) xs.length - (lo + 1) := by
  simp [← ListSlice.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : List α} {lo : Nat} :
    xs[lo<...*] = xs[(lo + 1)...*] := by
  simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]

+public theorem toList_mkSlice_roi_eq_toList_mkSlice_roo {xs : List α} {lo : Nat} :
+    xs[lo<...*].toList = xs[lo<...xs.length].toList := by
+  simp
+
+public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_roo {xs : List α} {lo : Nat} :
+    xs[lo<...*].toArray = xs[lo<...xs.length].toArray := by
+  simp
+
+public theorem toList_mkSlice_roi_eq_toList_mkSlice_rco {xs : List α} {lo : Nat} :
+    xs[lo<...*].toList = xs[(lo + 1)...xs.length].toList := by
+  simp
+
+public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_rco {xs : List α} {lo : Nat} :
+    xs[lo<...*].toArray = xs[(lo + 1)...xs.length].toArray := by
+  simp

@[simp]
public theorem toList_mkSlice_roi {xs : List α} {lo : Nat} :
    xs[lo<...*].toList = xs.drop (lo + 1) := by
@@ -187,7 +225,7 @@ public theorem size_mkSlice_roi {xs : List α} {lo : Nat} :
    xs[lo<...*].size = xs.length - (lo + 1) := by
  simp [← ListSlice.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : List α} {hi : Nat} :
    xs[*...hi] = xs[0...hi] := by
  simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -212,6 +250,11 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : List α} {hi : Nat} :
    xs[*...=hi] = xs[*...(hi + 1)] := by
  simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]

+@[grind =]
+public theorem mkSlice_ric_eq_mkSlice_rco {xs : List α} {hi : Nat} :
+    xs[*...=hi] = xs[0...(hi + 1)] := by
+  simp

@[simp]
public theorem toList_mkSlice_ric {xs : List α} {hi : Nat} :
    xs[*...=hi].toList = xs.take (hi + 1) := by
@@ -227,11 +270,19 @@ public theorem size_mkSlice_ric {xs : List α} {hi : Nat} :
    xs[*...=hi].size = min (hi + 1) xs.length := by
  simp [← ListSlice.length_toList]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rii_eq_mkSlice_rci {xs : List α} :
    xs[*...*] = xs[0...*] := by
  simp [Std.Rii.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]

+public theorem toList_mkSlice_rii_eq_toList_mkSlice_rco {xs : List α} :
+    xs[*...*].toList = xs[0...xs.length].toList := by
+  simp
+
+public theorem toArray_mkSlice_rii_eq_toArray_mkSlice_rco {xs : List α} :
+    xs[*...*].toArray = xs[0...xs.length].toArray := by
+  simp

@[simp]
public theorem toList_mkSlice_rii {xs : List α} :
    xs[*...*].toList = xs := by
@@ -253,7 +304,7 @@ section ListSubslices

namespace ListSlice

-@[simp]
+@[simp, grind =]
public theorem toList_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
    xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
  simp only [instSliceableListSliceNat_1, List.toList_mkSlice_rco, ListSlice.toList_eq (xs := xs)]
@@ -262,12 +313,12 @@ public theorem toList_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
  · simp
  · simp [List.take_take, Nat.min_comm]

-@[simp]
+@[simp, grind =]
public theorem toArray_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
    xs[lo...hi].toArray = xs.toArray.extract lo hi := by
  simp [← toArray_toList, List.drop_take]

-@[simp]
+@[simp, grind =]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
    xs[lo...=hi] = xs[lo...(hi + 1)] := by
  simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -295,9 +346,19 @@ public theorem toArray_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
    xs[lo...*].toArray = xs.toArray.extract lo := by
  simp only [← toArray_toList, toList_mkSlice_rci]
  rw (occs := [1]) [← List.take_length (l := List.drop lo xs.toList)]
  simp [- toArray_toList]

+@[grind =]
+public theorem toList_mkSlice_rci_eq_toList_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
+    xs[lo...*].toList = xs[lo...xs.size].toList := by
+  simp [← length_toList, - Slice.length_toList_eq_size]
+
+@[grind =]
+public theorem toArray_mkSlice_rci_eq_toArray_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
+    xs[lo...*].toArray = xs[lo...xs.size].toArray := by
+  simp

-@[simp]
+@[simp, grind =]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
    xs[lo<...hi] = xs[(lo + 1)...hi] := by
  simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -322,6 +383,11 @@ public theorem mkSlice_roc_eq_mkSlice_rcc {xs : ListSlice α} {lo hi : Nat} :
    xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
  simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]

+@[simp, grind =]
+public theorem mkSlice_roc_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
+    xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
+  simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]

@[simp]
public theorem toList_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
    xs[lo<...=hi].toList = (xs.toList.take (hi + 1)).drop (lo + 1) := by
@@ -332,11 +398,28 @@ public theorem toArray_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
|
||||
xs[lo<...=hi].toArray = xs.toArray.extract (lo + 1) (hi + 1) := by
|
||||
simp [← toArray_toList, List.drop_take]
|
||||
|
||||
@[simp]
|
||||
@[simp, grind =]
|
||||
public theorem mkSlice_roi_eq_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*] = xs[(lo + 1)...*] := by
|
||||
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
|
||||
|
||||
public theorem toList_mkSlice_roi_eq_toList_mkSlice_roo {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toList = xs[lo<...xs.size].toList := by
|
||||
simp [← length_toList, - Slice.length_toList_eq_size]
|
||||
|
||||
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_roo {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toArray = xs[lo<...xs.size].toArray := by
|
||||
simp only [mkSlice_roi_eq_mkSlice_rci, toArray_mkSlice_rci, size_toArray_eq_size,
|
||||
mkSlice_roo_eq_mkSlice_rco, toArray_mkSlice_rco]
|
||||
|
||||
public theorem toList_mkSlice_roi_eq_toList_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toList = xs[(lo + 1)...xs.size].toList := by
|
||||
simp [← length_toList, - Slice.length_toList_eq_size]
|
||||
|
||||
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toArray = xs[(lo + 1)...xs.size].toArray := by
|
||||
simp
|
||||
|
||||
@[simp]
|
||||
public theorem toList_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
|
||||
@@ -347,9 +430,9 @@ public theorem toArray_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
|
||||
xs[lo<...*].toArray = xs.toArray.extract (lo + 1) := by
|
||||
simp only [← toArray_toList, toList_mkSlice_roi]
|
||||
rw (occs := [1]) [← List.take_length (l := List.drop (lo + 1) xs.toList)]
|
||||
simp
|
||||
simp [- toArray_toList]
|
||||
|
||||
@[simp]
|
||||
@[simp, grind =]
|
||||
public theorem mkSlice_rio_eq_mkSlice_rco {xs : ListSlice α} {hi : Nat} :
|
||||
xs[*...hi] = xs[0...hi] := by
|
||||
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
|
||||
@@ -374,6 +457,11 @@ public theorem mkSlice_ric_eq_mkSlice_rcc {xs : ListSlice α} {hi : Nat} :
|
||||
xs[*...=hi] = xs[0...=hi] := by
|
||||
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
|
||||
|
||||
@[grind =]
|
||||
public theorem mkSlice_ric_eq_mkSlice_rco {xs : ListSlice α} {hi : Nat} :
|
||||
xs[*...=hi] = xs[0...(hi + 1)] := by
|
||||
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
|
||||
|
||||
@[simp]
|
||||
public theorem toList_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
|
||||
xs[*...=hi].toList = xs.toList.take (hi + 1) := by
|
||||
@@ -384,7 +472,7 @@ public theorem toArray_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
|
||||
xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
|
||||
simp [← toArray_toList]
|
||||
|
||||
@[simp]
|
||||
@[simp, grind =]
|
||||
public theorem mkSlice_rii {xs : ListSlice α} :
|
||||
xs[*...*] = xs := by
|
||||
simp [Std.Rii.Sliceable.mkSlice]
|
||||
|
||||
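Taken together, the rewrite lemmas above let `simp` normalize any of the range-slice notations down to the half-open form `xs[lo...hi]`. A small sanity check of that normalization chain (a sketch assuming the lemmas above are in scope; not part of the diff):

```lean
-- `xs[*...=2]` (unbounded below, closed above) normalizes to `xs[0...3]`
-- via `mkSlice_ric_eq_mkSlice_rio` and `mkSlice_rio_eq_mkSlice_rco`.
example (xs : List Nat) : xs[*...=2] = xs[0...3] := by
  simp
```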
@@ -123,18 +123,6 @@ opaque getUTF8Byte (s : @& String) (n : Nat) (h : n < s.utf8ByteSize) : UInt8

end String.Internal

/--
Creates a string that contains the characters in a list, in order.

Examples:
* `['L', '∃', '∀', 'N'].asString = "L∃∀N"`
* `[].asString = ""`
* `['a', 'a', 'a'].asString = "aaa"`
-/
@[extern "lean_string_mk", expose]
def String.ofList (data : List Char) : String :=
  ⟨List.utf8Encode data, .intro data rfl⟩

@[extern "lean_string_mk", expose, deprecated String.ofList (since := "2025-10-30")]
def String.mk (data : List Char) : String :=
  ⟨List.utf8Encode data, .intro data rfl⟩

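The docstring's examples can be exercised directly; a hypothetical `#eval` session (output taken from the docstring examples above):

```lean
#eval String.ofList ['L', '∃', '∀', 'N']  -- "L∃∀N"
#eval String.ofList ['a', 'a', 'a']       -- "aaa"
```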
@@ -4,12 +4,9 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module

prelude
public import Init.Classical

public section

namespace Lean.Grind

/-- A helper gadget for annotating nested proofs in goals. -/

@@ -143,6 +143,7 @@ end DSimp

namespace Simp

@[inline]
def defaultMaxSteps := 100000

/--

@@ -360,7 +360,7 @@ recommended_spelling "smul" for "•" in [HSMul.hSMul, «term_•_»]
recommended_spelling "append" for "++" in [HAppend.hAppend, «term_++_»]
/-- when used as a unary operator -/
recommended_spelling "neg" for "-" in [Neg.neg, «term-_»]
recommended_spelling "inv" for "⁻¹" in [Inv.inv]
recommended_spelling "inv" for "⁻¹" in [Inv.inv, «term_⁻¹»]
recommended_spelling "dvd" for "∣" in [Dvd.dvd, «term_∣_»]
recommended_spelling "shiftLeft" for "<<<" in [HShiftLeft.hShiftLeft, «term_<<<_»]
recommended_spelling "shiftRight" for ">>>" in [HShiftRight.hShiftRight, «term_>>>_»]

@@ -2810,6 +2810,8 @@ structure Char where
  /-- The value must be a legal scalar value. -/
  valid : val.isValidChar

grind_pattern Char.valid => self.val

private theorem isValidChar_UInt32 {n : Nat} (h : n.isValidChar) : LT.lt n UInt32.size :=
  match h with
  | Or.inl h => Nat.lt_trans h (of_decide_eq_true rfl)
@@ -3192,7 +3194,7 @@ Constructs a new empty array with initial capacity `0`.

Use `Array.emptyWithCapacity` to create an array with a greater initial capacity.
-/
@[expose]
@[expose, inline]
def Array.empty {α : Type u} : Array α := emptyWithCapacity 0

/--
@@ -3481,6 +3483,18 @@ structure String where ofByteArray ::
attribute [extern "lean_string_to_utf8"] String.toByteArray
attribute [extern "lean_string_from_utf8_unchecked"] String.ofByteArray

/--
Creates a string that contains the characters in a list, in order.

Examples:
* `String.ofList ['L', '∃', '∀', 'N'] = "L∃∀N"`
* `String.ofList [] = ""`
* `String.ofList ['a', 'a', 'a'] = "aaa"`
-/
@[extern "lean_string_mk"]
def String.ofList (data : List Char) : String :=
  ⟨List.utf8Encode data, .intro data rfl⟩

/--
Decides whether two strings are equal. Normally used via the `DecidableEq String` instance and the
`=` operator.

src/Init/Sym.lean (new file, 8 lines)
@@ -0,0 +1,8 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Sym.Lemmas
src/Init/Sym/Lemmas.lean (new file, 140 lines)
@@ -0,0 +1,140 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Data.Nat.Basic
public import Init.Data.Rat.Basic
public import Init.Data.Int.Basic
public import Init.Data.UInt.Basic
public import Init.Data.SInt.Basic
public section
namespace Lean.Sym

theorem ne_self (a : α) : (a ≠ a) = False := by simp
theorem not_true_eq : (¬ True) = False := by simp
theorem not_false_eq : (¬ False) = True := by simp

theorem ite_cond_congr {α : Sort u} (c : Prop) {inst : Decidable c} (a b : α)
    (c' : Prop) {inst' : Decidable c'} (h : c = c') : @ite α c inst a b = @ite α c' inst' a b := by
  simp [*]

theorem dite_cond_congr {α : Sort u} (c : Prop) {inst : Decidable c} (a : c → α) (b : ¬ c → α)
    (c' : Prop) {inst' : Decidable c'} (h : c = c')
    : @dite α c inst a b = @dite α c' inst' (fun h' => a (h.mpr_prop h')) (fun h' => b (h.mpr_not h')) := by
  simp [*]

theorem cond_cond_eq_true {α : Sort u} (c : Bool) (a b : α) (h : c = true) : cond c a b = a := by
  simp [*]

theorem cond_cond_eq_false {α : Sort u} (c : Bool) (a b : α) (h : c = false) : cond c a b = b := by
  simp [*]

theorem cond_cond_congr {α : Sort u} (c : Bool) (a b : α) (c' : Bool) (h : c = c') : cond c a b = cond c' a b := by
  simp [*]

theorem Nat.lt_eq_true (a b : Nat) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int.lt_eq_true (a b : Int) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Rat.lt_eq_true (a b : Rat) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int8.lt_eq_true (a b : Int8) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int16.lt_eq_true (a b : Int16) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int32.lt_eq_true (a b : Int32) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int64.lt_eq_true (a b : Int64) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt8.lt_eq_true (a b : UInt8) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt16.lt_eq_true (a b : UInt16) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt32.lt_eq_true (a b : UInt32) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt64.lt_eq_true (a b : UInt64) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Fin.lt_eq_true (a b : Fin n) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem BitVec.lt_eq_true (a b : BitVec n) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem String.lt_eq_true (a b : String) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Char.lt_eq_true (a b : Char) (h : decide (a < b) = true) : (a < b) = True := by simp_all

theorem Nat.lt_eq_false (a b : Nat) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int.lt_eq_false (a b : Int) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Rat.lt_eq_false (a b : Rat) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int8.lt_eq_false (a b : Int8) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int16.lt_eq_false (a b : Int16) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int32.lt_eq_false (a b : Int32) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int64.lt_eq_false (a b : Int64) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt8.lt_eq_false (a b : UInt8) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt16.lt_eq_false (a b : UInt16) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt32.lt_eq_false (a b : UInt32) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt64.lt_eq_false (a b : UInt64) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Fin.lt_eq_false (a b : Fin n) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem BitVec.lt_eq_false (a b : BitVec n) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem String.lt_eq_false (a b : String) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Char.lt_eq_false (a b : Char) (h : decide (a < b) = false) : (a < b) = False := by simp_all

theorem Nat.le_eq_true (a b : Nat) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Int.le_eq_true (a b : Int) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Rat.le_eq_true (a b : Rat) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Int8.le_eq_true (a b : Int8) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Int16.le_eq_true (a b : Int16) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Int32.le_eq_true (a b : Int32) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Int64.le_eq_true (a b : Int64) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem UInt8.le_eq_true (a b : UInt8) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem UInt16.le_eq_true (a b : UInt16) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem UInt32.le_eq_true (a b : UInt32) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem UInt64.le_eq_true (a b : UInt64) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Fin.le_eq_true (a b : Fin n) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem BitVec.le_eq_true (a b : BitVec n) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem String.le_eq_true (a b : String) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all
theorem Char.le_eq_true (a b : Char) (h : decide (a ≤ b) = true) : (a ≤ b) = True := by simp_all

theorem Nat.le_eq_false (a b : Nat) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Int.le_eq_false (a b : Int) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Rat.le_eq_false (a b : Rat) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Int8.le_eq_false (a b : Int8) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Int16.le_eq_false (a b : Int16) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Int32.le_eq_false (a b : Int32) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Int64.le_eq_false (a b : Int64) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem UInt8.le_eq_false (a b : UInt8) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem UInt16.le_eq_false (a b : UInt16) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem UInt32.le_eq_false (a b : UInt32) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem UInt64.le_eq_false (a b : UInt64) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Fin.le_eq_false (a b : Fin n) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem BitVec.le_eq_false (a b : BitVec n) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem String.le_eq_false (a b : String) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all
theorem Char.le_eq_false (a b : Char) (h : decide (a ≤ b) = false) : (a ≤ b) = False := by simp_all

theorem Nat.eq_eq_true (a b : Nat) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int.eq_eq_true (a b : Int) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Rat.eq_eq_true (a b : Rat) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int8.eq_eq_true (a b : Int8) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int16.eq_eq_true (a b : Int16) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int32.eq_eq_true (a b : Int32) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int64.eq_eq_true (a b : Int64) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt8.eq_eq_true (a b : UInt8) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt16.eq_eq_true (a b : UInt16) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt32.eq_eq_true (a b : UInt32) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt64.eq_eq_true (a b : UInt64) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Fin.eq_eq_true (a b : Fin n) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem BitVec.eq_eq_true (a b : BitVec n) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem String.eq_eq_true (a b : String) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Char.eq_eq_true (a b : Char) (h : decide (a = b) = true) : (a = b) = True := by simp_all

theorem Nat.eq_eq_false (a b : Nat) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int.eq_eq_false (a b : Int) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Rat.eq_eq_false (a b : Rat) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int8.eq_eq_false (a b : Int8) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int16.eq_eq_false (a b : Int16) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int32.eq_eq_false (a b : Int32) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int64.eq_eq_false (a b : Int64) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt8.eq_eq_false (a b : UInt8) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt16.eq_eq_false (a b : UInt16) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt32.eq_eq_false (a b : UInt32) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt64.eq_eq_false (a b : UInt64) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Fin.eq_eq_false (a b : Fin n) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem BitVec.eq_eq_false (a b : BitVec n) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem String.eq_eq_false (a b : String) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Char.eq_eq_false (a b : Char) (h : decide (a = b) = false) : (a = b) = False := by simp_all

theorem Nat.dvd_eq_true (a b : Nat) (h : decide (a ∣ b) = true) : (a ∣ b) = True := by simp_all
theorem Int.dvd_eq_true (a b : Int) (h : decide (a ∣ b) = true) : (a ∣ b) = True := by simp_all

theorem Nat.dvd_eq_false (a b : Nat) (h : decide (a ∣ b) = false) : (a ∣ b) = False := by simp_all
theorem Int.dvd_eq_false (a b : Int) (h : decide (a ∣ b) = false) : (a ∣ b) = False := by simp_all

end Lean.Sym
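Each of these lemmas turns a `decide`d comparison into the propositional constant `True` or `False`, so a symbolic-evaluation pass can discharge the condition by kernel reduction. A minimal illustration (a sketch assuming the declarations above are imported):

```lean
-- `decide (2 < 5)` reduces to `true`, so `rfl` closes the hypothesis.
example : (2 < 5) = True := Lean.Sym.Nat.lt_eq_true 2 5 rfl
example : (3 ≤ 1) = False := Lean.Sym.Nat.le_eq_false 3 1 rfl
```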
@@ -518,14 +518,13 @@ syntax location := withPosition(ppGroup(" at" (locationWildcard <|> locationHyp)
  assuming these are definitionally equal.
* `change t' at h` will change hypothesis `h : t` to have type `t'`, assuming
  `t` and `t'` are definitionally equal.
-/
syntax (name := change) "change " term (location)? : tactic

/--
* `change a with b` will change occurrences of `a` to `b` in the goal,
  assuming `a` and `b` are definitionally equal.
* `change a with b at h` similarly changes `a` to `b` in the type of hypothesis `h`.
-/
syntax (name := change) "change " term (location)? : tactic

@[tactic_alt change]
syntax (name := changeWith) "change " term " with " term (location)? : tactic

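In use, `change` swaps a goal (or hypothesis) for a definitionally equal statement; a minimal sketch:

```lean
-- `1 + 1` and `2` are definitionally equal, so `change` succeeds
-- and the remaining goal closes by `rfl`.
example : 1 + 1 = 2 := by
  change 2 = 2
  rfl
```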
/--
@@ -546,7 +545,7 @@ introducing new local definitions.

For example, given a local hypothesis of the form `h : let x := v; b x`, then `extract_lets z at h`
introduces a new local definition `z := v` and changes `h` to be `h : b z`.
-/
syntax (name := extractLets) "extract_lets " optConfig (ppSpace colGt (ident <|> hole))* (location)? : tactic
syntax (name := extractLets) "extract_lets" ppSpace optConfig (ppSpace colGt (ident <|> hole))* (location)? : tactic

/--
Lifts `let` and `have` expressions within a term as far out as possible.

@@ -905,8 +904,13 @@ The tactic supports all the same syntax variants and options as the `let` term.
-/
macro "let" c:letConfig d:letDecl : tactic => `(tactic| refine_lift let $c:letConfig $d:letDecl; ?_)

/-- `let rec f : t := e` adds a recursive definition `f` to the current goal.
The syntax is the same as term-mode `let rec`. -/
/--
`let rec f : t := e` adds a recursive definition `f` to the current goal.
The syntax is the same as term-mode `let rec`.

The tactic supports all the same syntax variants and options as the `let` term.
-/
@[tactic_name "let rec"]
syntax (name := letrec) withPosition(atomic("let " &"rec ") letRecDecls) : tactic
macro_rules
  | `(tactic| let rec $d) => `(tactic| refine_lift let rec $d; ?_)
@@ -1212,22 +1216,6 @@ while `congr 2` produces the intended `⊢ x + y = y + x`.
syntax (name := congr) "congr" (ppSpace num)? : tactic


/--
In tactic mode, `if h : t then tac1 else tac2` can be used as alternative syntax for:
```
by_cases h : t
· tac1
· tac2
```
It performs case distinction on `h : t` or `h : ¬t` and `tac1` and `tac2` are the subproofs.

You can use `?_` or `_` for either subproof to delay the goal to after the tactic, but
if a tactic sequence is provided for `tac1` or `tac2` then it will require the goal to be closed
by the end of the block.
-/
syntax (name := tacDepIfThenElse)
  ppRealGroup(ppRealFill(ppIndent("if " binderIdent " : " term " then") ppSpace matchRhsTacticSeq)
    ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic

/--
In tactic mode, `if t then tac1 else tac2` is alternative syntax for:
@@ -1236,16 +1224,34 @@ by_cases t
· tac1
· tac2
```
It performs case distinction on `h† : t` or `h† : ¬t`, where `h†` is an anonymous
hypothesis, and `tac1` and `tac2` are the subproofs. (It doesn't actually use
nondependent `if`, since this wouldn't add anything to the context and hence would be
useless for proving theorems. To actually insert an `ite` application use
`refine if t then ?_ else ?_`.)
It performs case distinction on `h† : t` or `h† : ¬t`, where `h†` is an anonymous hypothesis, and
`tac1` and `tac2` are the subproofs. (It doesn't actually use nondependent `if`, since this wouldn't
add anything to the context and hence would be useless for proving theorems. To actually insert an
`ite` application use `refine if t then ?_ else ?_`.)

The assumptions in each subgoal can be named. `if h : t then tac1 else tac2` can be used as
alternative syntax for:
```
by_cases h : t
· tac1
· tac2
```
It performs case distinction on `h : t` or `h : ¬t`.

You can use `?_` or `_` for either subproof to delay the goal to after the tactic, but
if a tactic sequence is provided for `tac1` or `tac2` then it will require the goal to be closed
by the end of the block.
-/
syntax (name := tacIfThenElse)
  ppRealGroup(ppRealFill(ppIndent("if " term " then") ppSpace matchRhsTacticSeq)
    ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic


@[tactic_alt tacIfThenElse]
syntax (name := tacDepIfThenElse)
  ppRealGroup(ppRealFill(ppIndent("if " binderIdent " : " term " then") ppSpace matchRhsTacticSeq)
    ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic

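Both forms are `by_cases` in disguise, as the docstrings explain; a small example of the dependent (named-hypothesis) variant:

```lean
-- Case split on a decidable proposition; each branch gets `h : p` or `h : ¬p`.
example (p : Prop) [Decidable p] : p ∨ ¬p := by
  if h : p then exact Or.inl h else exact Or.inr h
```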
/--
The tactic `nofun` is shorthand for `exact nofun`: it introduces the assumptions, then performs an
empty pattern match, closing the goal if the introduced pattern is impossible.

@@ -28,7 +28,8 @@ builtin_initialize closedTermCacheExt : EnvExtension ClosedTermCache ←
    { s with map := s.map.insert e c, constNames := s.constNames.insert c, revExprs := e :: s.revExprs })

def cacheClosedTermName (env : Environment) (e : Expr) (n : Name) : Environment :=
  closedTermCacheExt.modifyState env fun s => { s with map := s.map.insert e n, constNames := s.constNames.insert n }
  closedTermCacheExt.modifyState env fun s =>
    { s with map := s.map.insert e n, constNames := s.constNames.insert n, revExprs := e :: s.revExprs }

def getClosedTermName? (env : Environment) (e : Expr) : Option Name :=
  (closedTermCacheExt.getState env).map.find? e

@@ -44,7 +44,7 @@ def log (entry : LogEntry) : CompilerM Unit :=
def tracePrefixOptionName := `trace.compiler.ir

private def isLogEnabledFor (opts : Options) (optName : Name) : Bool :=
  match opts.find optName with
  match opts.get? optName with
  | some (DataValue.ofBool v) => v
  | _ => opts.getBool tracePrefixOptionName


@@ -7,6 +7,7 @@ module

prelude
public import Lean.Attributes
import Lean.Meta.RecExt

public section

@@ -33,14 +34,8 @@ private def isValidMacroInline (declName : Name) : CoreM Bool := do
  unless info.all.length = 1 do
    -- We do not allow `[macro_inline]` attributes on mutually recursive definitions
    return false
  let env ← getEnv
  let isRec (declName' : Name) : Bool :=
    isBRecOnRecursor env declName' ||
    declName' == ``WellFounded.fix ||
    declName' == ``WellFounded.Nat.fix ||
    declName' == declName ++ `_unary -- Auxiliary declaration created by `WF` module
  if Option.isSome <| info.value.find? fun e => e.isConst && isRec e.constName! then
    -- It contains a `brecOn` or `WellFounded.fix` application. So, it should be recursive
  if (← Meta.isRecursiveDefinition declName) then
    -- It is recursive
    return false
  return true


@@ -45,3 +45,4 @@ public import Lean.Compiler.LCNF.LambdaLifting
public import Lean.Compiler.LCNF.ReduceArity
public import Lean.Compiler.LCNF.Probing
public import Lean.Compiler.LCNF.Irrelevant
public import Lean.Compiler.LCNF.SplitSCC

@@ -258,45 +258,4 @@ end Check
def Decl.check (decl : Decl) : CompilerM Unit := do
  Check.run do decl.value.forCodeM (Check.checkFunDeclCore decl.name decl.params decl.type)

/--
Check whether every local declaration in the local context is used in one of the given `decls`.
-/
partial def checkDeadLocalDecls (decls : Array Decl) : CompilerM Unit := do
  let (_, s) := visitDecls decls |>.run {}
  let usesFVar (binderName : Name) (fvarId : FVarId) :=
    unless s.contains fvarId do
      throwError "LCNF local context contains unused local variable declaration `{binderName}`"
  let lctx := (← get).lctx
  lctx.params.forM fun fvarId decl => usesFVar decl.binderName fvarId
  lctx.letDecls.forM fun fvarId decl => usesFVar decl.binderName fvarId
  lctx.funDecls.forM fun fvarId decl => usesFVar decl.binderName fvarId
where
  visitFVar (fvarId : FVarId) : StateM FVarIdHashSet Unit :=
    modify (·.insert fvarId)

  visitParam (param : Param) : StateM FVarIdHashSet Unit := do
    visitFVar param.fvarId

  visitParams (params : Array Param) : StateM FVarIdHashSet Unit := do
    params.forM visitParam

  visitCode (code : Code) : StateM FVarIdHashSet Unit := do
    match code with
    | .jmp .. | .return .. | .unreach .. => return ()
    | .let decl k => visitFVar decl.fvarId; visitCode k
    | .fun decl k | .jp decl k =>
      visitFVar decl.fvarId; visitParams decl.params; visitCode decl.value
      visitCode k
    | .cases c => c.alts.forM fun alt => do
      match alt with
      | .default k => visitCode k
      | .alt _ ps k => visitParams ps; visitCode k

  visitDecl (decl : Decl) : StateM FVarIdHashSet Unit := do
    visitParams decl.params
    decl.value.forCodeM visitCode

  visitDecls (decls : Array Decl) : StateM FVarIdHashSet Unit :=
    decls.forM visitDecl

end Lean.Compiler.LCNF

@@ -156,7 +156,8 @@ mutual

/-- Collect dependencies of the given expression. -/
partial def collectType (type : Expr) : ClosureM Unit := do
  type.forEachWhere Expr.isFVar fun e => collectFVar e.fvarId!
  if type.hasFVar then
    type.forEachWhere Expr.isFVar fun e => collectFVar e.fvarId!

end


@@ -52,6 +52,10 @@ structure Context where

structure State where
  decls : Array Decl := {}
  /--
  Cache for `shouldExtractFVar` in order to avoid superlinear behavior.
  -/
  fvarDecisionCache : Std.HashMap FVarId Bool := {}

abbrev M := ReaderT Context $ StateRefT State CompilerM

@@ -78,6 +82,10 @@ partial def shouldExtractLetValue (isRoot : Bool) (v : LetValue) : M Bool := do
      | _ => true
    if !shouldExtract then
      return false
    if let some decl ← LCNF.getMonoDecl? name then
      -- We don't want to extract constants as root terms
      if decl.getArity == 0 then
        return false
    args.allM shouldExtractArg
  | .fvar fnVar args => return (← shouldExtractFVar fnVar) && (← args.allM shouldExtractArg)
  | .proj _ _ baseVar => shouldExtractFVar baseVar
@@ -88,10 +96,18 @@ partial def shouldExtractArg (arg : Arg) : M Bool := do
  | .type _ | .erased => return true

partial def shouldExtractFVar (fvarId : FVarId) : M Bool := do
  if let some letDecl ← findLetDecl? fvarId then
    shouldExtractLetValue false letDecl.value
  if let some result := (← get).fvarDecisionCache[fvarId]? then
    return result
  else
    return false
  let result ← go
  modify fun s => { s with fvarDecisionCache := s.fvarDecisionCache.insert fvarId result }
  return result
where
  go : M Bool := do
    if let some letDecl ← findLetDecl? fvarId then
      shouldExtractLetValue false letDecl.value
    else
      return false

end


@@ -8,6 +8,7 @@ module
prelude
public import Lean.Compiler.LCNF.FVarUtil
public import Lean.Compiler.LCNF.PassManager
import Lean.Compiler.IR.CompilerM

public section

@@ -19,30 +20,27 @@ namespace FloatLetIn
The decision of the float mechanism.
-/
inductive Decision where
  |
  /--
  Push into the arm with name `name`.
  -/
  arm (name : Name)
  | /--
  | arm (name : Name)
  /--
  Push into the default arm.
  -/
  default
  |
  | default
  /--
  Don't move this declaration, it is needed where it is right now.
  -/
  dont
  |
  | dont
  /--
  No decision has been made yet.
  -/
  unknown
  | unknown
  deriving Hashable, BEq, Inhabited, Repr

def Decision.ofAlt : Alt → Decision
  | .alt name _ _ => .arm name
  | .default _ => .default
  | .alt name _ _ => .arm name
  | .default _ => .default

/--
The context for `BaseFloatM`.
@@ -112,6 +110,7 @@ def ignore? (decl : LetDecl) : BaseFloatM Bool := do
Compute the initial decision for all declarations that `BaseFloatM` collected
up to this point, with respect to `cs`. The initial decisions are:
- `dont` if the declaration is detected by `ignore?`
- `dont` if a variable used by the declaration is later used as a potentially owned parameter
- `dont` if the declaration is the discriminant of `cs` since we obviously need
  the discriminant to be computed before the match.
- `dont` if we see the declaration being used in more than one `cases` arm
@@ -120,20 +119,55 @@ up to this point, with respect to `cs`. The initial decisions are:
-/
def initialDecisions (cs : Cases) : BaseFloatM (Std.HashMap FVarId Decision) := do
  let mut map := Std.HashMap.emptyWithCapacity (← read).decls.length
  map ← (← read).decls.foldrM (init := map) fun val acc => do
  let owned : Std.HashSet FVarId := ∅
  (map, _) ← (← read).decls.foldlM (init := (map, owned)) fun (acc, owned) val => do
    if let .let decl := val then
      if (← ignore? decl) then
        return acc.insert decl.fvarId .dont
    return acc.insert val.fvarId .unknown
        return (acc.insert decl.fvarId .dont, owned)
      let (dont, owned) := (visitDecl (← getEnv) val).run owned
      if dont then
        return (acc.insert val.fvarId .dont, owned)
      else
        return (acc.insert val.fvarId .unknown, owned)

  if map.contains cs.discr then
    map := map.insert cs.discr .dont
  (_, map) ← goCases cs |>.run map
  return map
where
  visitDecl (env : Environment) (value : CodeDecl) : StateM (Std.HashSet FVarId) Bool := do
    match value with
    | .let decl => visitLetValue env decl.value
    | _ => return false -- will need to investigate whether that can be a problem

  visitLetValue (env : Environment) (value : LetValue) : StateM (Std.HashSet FVarId) Bool := do
    match value with
    | .proj _ _ x => visitArg (.fvar x) true
    | .const nm _ args =>
      let decl? := IR.findEnvDecl env nm
      match decl? with
      | none => args.foldlM (fun b arg => visitArg arg false <||> pure b) false
      | some decl =>
        let mut res := false
        for h : i in *...args.size do
          if ← visitArg args[i] (decl.params[i]?.any (·.borrow)) then
            res := true
        return res
    | .fvar x args =>
      args.foldlM (fun b arg => visitArg arg false <||> pure b)
        (← visitArg (.fvar x) false)
    | .erased | .lit _ => return false

  visitArg (var : Arg) (borrowed : Bool) : StateM (Std.HashSet FVarId) Bool := do
    let .fvar v := var | return false
    let res := (← get).contains v
    unless borrowed do
      modify (·.insert v)
    return res

  goFVar (plannedDecision : Decision) (var : FVarId) : StateRefT (Std.HashMap FVarId Decision) BaseFloatM Unit := do
    if let some decision := (← get)[var]? then
      if decision == .unknown then
      if decision matches .unknown then
        modify fun s => s.insert var plannedDecision
      else if decision != plannedDecision then
        modify fun s => s.insert var .dont

@@ -11,6 +11,7 @@ public import Lean.Compiler.LCNF.Passes
public import Lean.Compiler.LCNF.ToDecl
public import Lean.Compiler.LCNF.Check
import Lean.Meta.Match.MatcherInfo
import Lean.Compiler.LCNF.SplitSCC
public section
namespace Lean.Compiler.LCNF
/--
@@ -50,14 +51,12 @@ The trace can be viewed with `set_option trace.Compiler.step true`.
def checkpoint (stepName : Name) (decls : Array Decl) (shouldCheck : Bool) : CompilerM Unit := do
  for decl in decls do
    trace[Compiler.stat] "{decl.name} : {decl.size}"
    withOptions (fun opts => opts.setBool `pp.motives.pi false) do
    withOptions (fun opts => opts.set `pp.motives.pi false) do
      let clsName := `Compiler ++ stepName
      if (← Lean.isTracingEnabledFor clsName) then
        Lean.addTrace clsName m!"size: {decl.size}\n{← ppDecl' decl}"
    if shouldCheck then
      decl.check
  if shouldCheck then
    checkDeadLocalDecls decls

def isValidMainType (type : Expr) : Bool :=
  let isValidResultName (name : Name) : Bool :=
@@ -74,7 +73,7 @@ def isValidMainType (type : Expr) : Bool :=

namespace PassManager

def run (declNames : Array Name) : CompilerM (Array IR.Decl) := withAtLeastMaxRecDepth 8192 do
def run (declNames : Array Name) : CompilerM (Array (Array IR.Decl)) := withAtLeastMaxRecDepth 8192 do
  /-
  Note: we need to increase the recursion depth because we currently do not save phase1
  declarations in .olean files. Then, we have to recursively compile all dependencies,
@@ -100,31 +99,33 @@ def run (declNames : Array Name) : CompilerM (Array IR.Decl) := withAtLeastMaxRe
  let decls := markRecDecls decls
  let manager ← getPassManager
  let isCheckEnabled := compiler.check.get (← getOptions)
  let decls ← profileitM Exception "compilation (LCNF base)" (← getOptions) do
    let mut decls := decls
    for pass in manager.basePasses do
      decls ← withTraceNode `Compiler (fun _ => return m!"compiler phase: {pass.phase}, pass: {pass.name}") do
        withPhase pass.phase <| pass.run decls
      withPhase pass.phaseOut <| checkpoint pass.name decls (isCheckEnabled || pass.shouldAlwaysRunCheck)
    return decls
  let decls ← profileitM Exception "compilation (LCNF mono)" (← getOptions) do
    let mut decls := decls
    for pass in manager.monoPasses do
      decls ← withTraceNode `Compiler (fun _ => return m!"compiler phase: {pass.phase}, pass: {pass.name}") do
        withPhase pass.phase <| pass.run decls
      withPhase pass.phaseOut <| checkpoint pass.name decls (isCheckEnabled || pass.shouldAlwaysRunCheck)
    return decls
  if (← Lean.isTracingEnabledFor `Compiler.result) then
    for decl in decls do
      let decl ← normalizeFVarIds decl
      Lean.addTrace `Compiler.result m!"size: {decl.size}\n{← ppDecl' decl}"
  profileitM Exception "compilation (IR)" (← getOptions) do
    let irDecls ← IR.toIR decls
    IR.compile irDecls
  let decls ← runPassManagerPart "compilation (LCNF base)" manager.basePasses decls isCheckEnabled
  let decls ← runPassManagerPart "compilation (LCNF mono)" manager.monoPasses decls isCheckEnabled
  let sccs ← withTraceNode `Compiler.splitSCC (fun _ => return m!"Splitting up SCC") do
    splitScc decls
  sccs.mapM fun decls => do
    let decls ← runPassManagerPart "compilation (LCNF mono)" manager.monoPassesNoLambda decls isCheckEnabled
    if (← Lean.isTracingEnabledFor `Compiler.result) then
      for decl in decls do
        let decl ← normalizeFVarIds decl
        Lean.addTrace `Compiler.result m!"size: {decl.size}\n{← ppDecl' decl}"
    profileitM Exception "compilation (IR)" (← getOptions) do
      let irDecls ← IR.toIR decls
      IR.compile irDecls
where
  runPassManagerPart (profilerName : String) (passes : Array Pass) (decls : Array Decl)
      (isCheckEnabled : Bool) : CompilerM (Array Decl) := do
    profileitM Exception profilerName (← getOptions) do
      let mut decls := decls
      for pass in passes do
        decls ← withTraceNode `Compiler (fun _ => return m!"compiler phase: {pass.phase}, pass: {pass.name}") do
          withPhase pass.phase <| pass.run decls
        withPhase pass.phaseOut <| checkpoint pass.name decls (isCheckEnabled || pass.shouldAlwaysRunCheck)
      return decls

end PassManager

def compile (declNames : Array Name) : CoreM (Array IR.Decl) :=
def compile (declNames : Array Name) : CoreM (Array (Array IR.Decl)) :=
  CompilerM.run <| PassManager.run declNames

def showDecl (phase : Phase) (declName : Name) : CoreM Format := do

@@ -87,6 +87,7 @@ pipeline.
structure PassManager where
  basePasses : Array Pass
  monoPasses : Array Pass
  monoPassesNoLambda : Array Pass
  deriving Inhabited

instance : ToString Phase where
@@ -114,6 +115,7 @@ private def validatePasses (phase : Phase) (passes : Array Pass) : CoreM Unit :=
def validate (manager : PassManager) : CoreM Unit := do
  validatePasses .base manager.basePasses
  validatePasses .mono manager.monoPasses
  validatePasses .mono manager.monoPassesNoLambda

def findOccurrenceBounds (targetName : Name) (passes : Array Pass) : CoreM (Nat × Nat) := do
  let mut lowest := none

@@ -115,6 +115,8 @@ def builtinPassManager : PassManager := {
    simp (occurrence := 4) (phase := .mono),
    floatLetIn (phase := .mono) (occurrence := 2),
    lambdaLifting,
  ]
  monoPassesNoLambda := #[
    extendJoinPointContext (phase := .mono) (occurrence := 1),
    simp (occurrence := 5) (phase := .mono),
    elimDeadBranches,

@@ -213,13 +213,17 @@ def Folder.mkBinary [Literal α] [Literal β] [Literal γ] (folder : α → β
  mkLit <| folder arg₁ arg₂

def Folder.mkBinaryDecisionProcedure [Literal α] [Literal β] {r : α → β → Prop} (folder : (a : α) → (b : β) → Decidable (r a b)) : Folder := fun args => do
  if (← getPhase) < .mono then
    return none
  let #[.fvar fvarId₁, .fvar fvarId₂] := args | return none
  let some arg₁ ← getLit fvarId₁ | return none
  let some arg₂ ← getLit fvarId₂ | return none
  let boolLit := folder arg₁ arg₂ |>.decide
  mkLit boolLit
  let result := folder arg₁ arg₂ |>.decide
  if (← getPhase) < .mono then
    if result then
      return some <| .const ``Decidable.isTrue [] #[.erased, .erased]
    else
      return some <| .const ``Decidable.isFalse [] #[.erased, .erased]
  else
    mkLit result

/--
Provide a folder for an operation with a left neutral element.

src/Lean/Compiler/LCNF/SplitSCC.lean (new file, 52 lines)
@@ -0,0 +1,52 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module

prelude
public import Lean.Compiler.LCNF.CompilerM
import Lean.Util.SCC

namespace Lean.Compiler.LCNF

namespace SplitScc

partial def findSccCalls (scc : Std.HashMap Name Decl) (decl : Decl) : BaseIO (Std.HashSet Name) := do
  match decl.value with
  | .code code =>
    let (_, calls) ← goCode code |>.run {}
    return calls
  | .extern .. => return {}
where
  goCode (c : Code) : StateRefT (Std.HashSet Name) BaseIO Unit := do
    match c with
    | .let decl k =>
      if let .const name .. := decl.value then
        if scc.contains name then
          modify fun s => s.insert name
      goCode k
    | .fun decl k | .jp decl k =>
      goCode decl.value
      goCode k
    | .cases cases => cases.alts.forM (·.forCodeM goCode)
    | .jmp .. | .return .. | .unreach .. => return ()

end SplitScc

public def splitScc (scc : Array Decl) : CompilerM (Array (Array Decl)) := do
  if scc.size == 1 then
    return #[scc]
  let declMap := Std.HashMap.ofArray <| scc.map fun decl => (decl.name, decl)
  let callers := Std.HashMap.ofArray <| ← scc.mapM fun decl => do
    let calls ← SplitScc.findSccCalls declMap decl
    return (decl.name, calls.toList)
  let newSccs := Lean.SCC.scc (scc.toList.map (·.name)) (callers.getD · [])
  trace[Compiler.splitSCC] m!"Split SCC into {newSccs}"
  return newSccs.toArray.map (fun scc => scc.toArray.map declMap.get!)

builtin_initialize
  registerTraceClass `Compiler.splitSCC (inherited := true)

end Lean.Compiler.LCNF
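
The grouping step of the new pass boils down to `Lean.SCC.scc`, which takes a vertex list and a successor function (the same shape as the `callers.getD · []` call above) and returns the strongly connected components, which `splitScc` then maps back to `Decl`s. A minimal sketch with a hypothetical call graph:

```lean
import Lean.Util.SCC

-- Hypothetical call graph inside one recursive group of declarations:
-- `a` and `b` call each other, `c` only calls `a`.
-- The mutual pair {a, b} stays together; {c} becomes its own component.
#eval Lean.SCC.scc ["a", "b", "c"] fun v =>
  match v with
  | "a" => ["b"]
  | "b" => ["a"]
  | "c" => ["a"]
  | _   => []
```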

@@ -56,9 +56,9 @@ public def Environment.getModulePackageByIdx? (env : Environment) (idx : ModuleI
Returns the standard base of the native symbol for the compiled constant {lean}`declName`.

For many constants, this is the full symbol. However, initializers have an additional prefix
(i.e., {lit}`_init_`) and boxed functions have an additional suffix (i.e., {lit}`___boxed`).
Furthermore, some constants do not use this stem at all (e.g., {lit}`main` and definitions
with {lit}`@[export]`).
(i.e., {lit}`_init_`) and boxed functions have an additional suffix
(see {name}`mkMangledBoxedName`). Furthermore, some constants do not use this stem at all
(e.g., {lit}`main` and definitions with {lit}`@[export]`).
-/
@[export lean_get_symbol_stem]
public def getSymbolStem (env : Environment) (declName : Name) : String :=

@@ -7,7 +7,7 @@ module

prelude
public import Lean.Setup
import Init.Data.String.Termination
import Init.Data.String.TakeDrop

namespace String

@@ -133,6 +133,18 @@ def Name.mangleAux : Name → String
public def Name.mangle (n : Name) (pre : String := "l_") : String :=
  pre ++ Name.mangleAux n

/--
Given `s = nm.mangle pre` for some `nm : Name` and `pre : String` with `nm != Name.anonymous`,
returns `(mkBoxedName nm).mangle pre`. This is used in the interpreter to find names of boxed
IR declarations.
-/
@[export lean_mk_mangled_boxed_name]
public def mkMangledBoxedName (s : String) : String :=
  if s.endsWith "__" then
    s ++ "_00__boxed"
  else
    s ++ "___boxed"
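
As a quick illustration of the two branches (the input strings are made-up mangled names, not symbols from a real build):

```lean
-- No trailing "__": the plain suffix is appended.
#eval mkMangledBoxedName "l_Foo_bar"  -- "l_Foo_bar___boxed"
-- A trailing "__" gets an escape first so the result stays unambiguous.
#eval mkMangledBoxedName "l_Foo__"    -- "l_Foo___00__boxed"
```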

/--
The mangled name of the name used to create the module initialization function.

@@ -226,7 +226,13 @@ def opt [ToJson α] (k : String) : Option α → List (String × Json)
  | none => []
  | some o => [⟨k, toJson o⟩]

/-- Parses a JSON-encoded `structure` or `inductive` constructor. Used mostly by `deriving FromJson`. -/
/-- Returns the string value or single key name, if any. -/
def getTag? : Json → Option String
  | .str tag => some tag
  | .obj kvs => guard (kvs.size == 1) *> kvs.minKey?
  | _ => none
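
The three cases can be seen on small inputs; `Json.mkObj` is used here just to build illustrative values:

```lean
-- A bare string is its own tag.
#eval Json.getTag? (Json.str "circle")  -- some "circle"
-- A single-key object is tagged by its only key.
#eval Json.getTag? (Json.mkObj [("circle", Json.str "r")])  -- some "circle"
-- Anything else (e.g. a two-key object) has no tag.
#eval Json.getTag? (Json.mkObj [("a", Json.str "x"), ("b", Json.str "y")])  -- none
```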

-- TODO: delete after rebootstrap
def parseTagged
  (json : Json)
  (tag : String)
@@ -259,5 +265,28 @@ def parseTagged
    | Except.error err => Except.error err
  | Except.error err => Except.error err

/--
Parses a JSON-encoded `structure` or `inductive` constructor, assuming the tag has already been
checked and `nFields` is nonzero. Used mostly by `deriving FromJson`.
-/
def parseCtorFields
  (json : Json)
  (tag : String)
  (nFields : Nat)
  (fieldNames? : Option (Array Name)) : Except String (Array Json) := do
  let payload ← getObjVal? json tag
  match fieldNames? with
  | some fieldNames =>
    fieldNames.mapM (getObjVal? payload ·.getString!)
  | none =>
    if nFields == 1 then
      Except.ok #[payload]
    else
      let fields ← getArr? payload
      if fields.size == nFields then
        Except.ok fields
      else
        Except.error s!"incorrect number of fields: {fields.size} ≟ {nFields}"
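
For intuition, the encoding that `parseCtorFields` expects (and that the reworked `deriving FromJson` path consumes) looks like this; the constructor names are hypothetical:

```lean
-- Arity 1: the payload sits directly under the tag.
#eval Json.parseCtorFields (Json.mkObj [("circle", Json.str "r")]) "circle" 1 none
-- Arity 2 with no field names: the payload is an array of that size.
#eval Json.parseCtorFields
  (Json.mkObj [("rect", Json.arr #[Json.str "w", Json.str "h"])]) "rect" 2 none
```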

end Json
end Lean

@@ -14,14 +14,72 @@ public section

namespace Lean

@[expose] def Options := KVMap
structure Options where
  private map : NameMap DataValue
  /--
  Whether any option with prefix `trace` is set. This does *not* imply that any such option is
  set to `true`, but it does capture the most common case that no such option has ever been touched.
  -/
  hasTrace : Bool

namespace Options

def empty : Options where
  map := {}
  hasTrace := false

@[export lean_options_get_empty]
private def getEmpty (_ : Unit) : Options := .empty

def Options.empty : Options := {}
instance : Inhabited Options where
  default := {}
instance : ToString Options := inferInstanceAs (ToString KVMap)
instance [Monad m] : ForIn m Options (Name × DataValue) := inferInstanceAs (ForIn _ KVMap _)
instance : BEq Options := inferInstanceAs (BEq KVMap)
  default := .empty
instance : ToString Options where
  toString o := private toString o.map.toList
instance [Monad m] : ForIn m Options (Name × DataValue) where
  forIn o init f := private forIn o.map init f
instance : BEq Options where
  beq o1 o2 := private o1.map.beq o2.map
instance : EmptyCollection Options where
  emptyCollection := .empty

@[inline] def find? (o : Options) (k : Name) : Option DataValue :=
  o.map.find? k

@[deprecated find? (since := "2026-01-15")]
def find := find?

@[inline] def get? {α : Type} [KVMap.Value α] (o : Options) (k : Name) : Option α :=
  o.map.find? k |>.bind KVMap.Value.ofDataValue?

@[inline] def get {α : Type} [KVMap.Value α] (o : Options) (k : Name) (defVal : α) : α :=
  o.get? k |>.getD defVal

@[inline] def getBool (o : Options) (k : Name) (defVal : Bool := false) : Bool :=
  o.get k defVal

@[inline] def contains (o : Options) (k : Name) : Bool :=
  o.map.contains k

@[inline] def insert (o : Options) (k : Name) (v : DataValue) : Options where
  map := o.map.insert k v
  hasTrace := o.hasTrace || (`trace).isPrefixOf k

def set {α : Type} [KVMap.Value α] (o : Options) (k : Name) (v : α) : Options :=
  o.insert k (KVMap.Value.toDataValue v)

@[inline] def setBool (o : Options) (k : Name) (v : Bool) : Options :=
  o.set k v

def erase (o : Options) (k : Name) : Options where
  map := o.map.erase k
  -- `erase` is expected to be used even more rarely than `set` so O(n) is fine
  hasTrace := o.map.keys.any (`trace).isPrefixOf

def mergeBy (f : Name → DataValue → DataValue → DataValue) (o1 o2 : Options) : Options where
  map := o1.map.mergeWith f o2.map
  hasTrace := o1.hasTrace || o2.hasTrace

end Options
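
A small sketch of the invariant the new `hasTrace` field maintains (this assumes the field is readable from outside the defining module; the option names are illustrative):

```lean
-- Options built through `set`/`insert` track whether a `trace.*` key was ever touched.
def plain := Lean.Options.empty.set `pp.all true
def traced := plain.set `trace.Meta.synthInstance true
#eval plain.hasTrace   -- false
#eval traced.hasTrace  -- true
```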

structure OptionDecl where
  name : Name
@@ -90,11 +148,11 @@ variable [Monad m] [MonadOptions m]

def getBoolOption (k : Name) (defValue := false) : m Bool := do
  let opts ← getOptions
  return opts.getBool k defValue
  return opts.get k defValue

def getNatOption (k : Name) (defValue := 0) : m Nat := do
  let opts ← getOptions
  return opts.getNat k defValue
  return opts.get k defValue

class MonadWithOptions (m : Type → Type) where
  withOptions (f : Options → Options) (x : m α) : m α
@@ -108,10 +166,10 @@ instance [MonadFunctor m n] [MonadWithOptions m] : MonadWithOptions n where
the term being delaborated should be treated as a pattern. -/

def withInPattern [MonadWithOptions m] (x : m α) : m α :=
  withOptions (fun o => o.setBool `_inPattern true) x
  withOptions (fun o => o.set `_inPattern true) x

def Options.getInPattern (o : Options) : Bool :=
  o.getBool `_inPattern
  o.get `_inPattern false

/-- A strongly-typed reference to an option. -/
protected structure Option (α : Type) where
@@ -131,12 +189,20 @@ protected def get? [KVMap.Value α] (opts : Options) (opt : Lean.Option α) : Op
protected def get [KVMap.Value α] (opts : Options) (opt : Lean.Option α) : α :=
  opts.get opt.name opt.defValue

@[export lean_options_get_bool]
private def getBool (opts : Options) (name : Name) (defValue : Bool) : Bool :=
  opts.get name defValue

protected def getM [Monad m] [MonadOptions m] [KVMap.Value α] (opt : Lean.Option α) : m α :=
  return opt.get (← getOptions)

protected def set [KVMap.Value α] (opts : Options) (opt : Lean.Option α) (val : α) : Options :=
  opts.set opt.name val

@[export lean_options_update_bool]
private def updateBool (opts : Options) (name : Name) (val : Bool) : Options :=
  opts.set name val

/-- Similar to `set`, but updates `opts` only if it doesn't already contain a setting for `opt.name` -/
protected def setIfNotSet [KVMap.Value α] (opts : Options) (opt : Lean.Option α) (val : α) : Options :=
  if opts.contains opt.name then opts else opt.set opts val

@@ -125,7 +125,7 @@ Parses and elaborates a Verso module docstring.
def versoModDocString
    (range : DeclarationRange) (doc : TSyntax ``document) :
    TermElabM VersoModuleDocs.Snippet := do
  let level := getVersoModuleDocs (← getEnv) |>.terminalNesting |>.map (· + 1)
  let level := getMainVersoModuleDocs (← getEnv) |>.terminalNesting |>.map (· + 1)
  Doc.elabModSnippet range (doc.raw.getArgs.map (⟨·⟩)) (level.getD 0) |>.execForModule

@@ -409,11 +409,29 @@ private builtin_initialize versoModuleDocExt :
  }

def getVersoModuleDocs (env : Environment) : VersoModuleDocs :=
/--
Returns the Verso module docs for the current main module.

During elaboration, this will return the module docs that have been added thus far, rather than
those for the entire module.
-/
def getMainVersoModuleDocs (env : Environment) : VersoModuleDocs :=
  versoModuleDocExt.getState env

@[deprecated getMainVersoModuleDocs (since := "2026-01-21")]
def getVersoModuleDocs := @getMainVersoModuleDocs

/--
Returns all snippets of the Verso module docs from the indicated module, if they exist.
-/
def getVersoModuleDoc? (env : Environment) (moduleName : Name) :
    Option (Array VersoModuleDocs.Snippet) :=
  env.getModuleIdx? moduleName |>.map fun modIdx =>
    versoModuleDocExt.getModuleEntries (level := .server) env modIdx

def addVersoModuleDocSnippet (env : Environment) (snippet : VersoModuleDocs.Snippet) : Except String Environment :=
  let docs := getVersoModuleDocs env
  let docs := getMainVersoModuleDocs env
  if docs.canAdd snippet then
    pure <| versoModuleDocExt.addEntry env snippet
  else throw s!"Can't add - incorrect nesting {docs.terminalNesting.map (s!"(expected at most {·})") |>.getD ""})"

@@ -1220,7 +1220,7 @@ Disables the option `doc.verso` while running a parser.
public def withoutVersoSyntax (p : Parser) : Parser where
  fn :=
    adaptUncacheableContextFn
      (fun c => { c with options := c.options.setBool `doc.verso false })
      (fun c => { c with options := c.options.set `doc.verso false })
      p.fn
  info := p.info

@@ -21,7 +21,7 @@ namespace Lean.Elab.Command

match stx[1] with
| Syntax.atom _ val =>
  if getVersoModuleDocs (← getEnv) |>.isEmpty then
  if getMainVersoModuleDocs (← getEnv) |>.isEmpty then
    let doc := String.Pos.Raw.extract val 0 (val.rawEndPos.unoffsetBy ⟨2⟩)
    modifyEnv fun env => addMainModuleDoc env ⟨doc, range⟩
  else
@@ -456,7 +456,7 @@ where
  withRef tk <| Meta.check e
  let e ← Term.levelMVarToParam (← instantiateMVars e)
  -- TODO: add options or notation for setting the following parameters
  withTheReader Core.Context (fun ctx => { ctx with options := ctx.options.setBool `smartUnfolding false }) do
  withTheReader Core.Context (fun ctx => { ctx with options := ctx.options.set `smartUnfolding false }) do
    let e ← withTransparency (mode := TransparencyMode.all) <| reduce e (skipProofs := skipProofs) (skipTypes := skipTypes)
    logInfoAt tk e

@@ -232,7 +232,7 @@ def applyDerivingHandlers (className : Name) (typeNames : Array Name) (setExpose
  withScope (fun sc => { sc with
    attrs := if setExpose then Unhygienic.run `(Parser.Term.attrInstance| expose) :: sc.attrs else sc.attrs
    -- Deactivate some linting options that only make writing deriving handlers more painful.
    opts := sc.opts.setBool `warn.exposeOnPrivate false
    opts := sc.opts.set `warn.exposeOnPrivate false
    -- When any of the types are private, the deriving handler will need access to the private scope
    -- and should create private instances.
    isPublic := !typeNames.any isPrivateName }) do

@@ -111,14 +111,18 @@ def mkFromJsonBodyForStruct (indName : Name) : TermElabM Term := do

def mkFromJsonBodyForInduct (ctx : Context) (indName : Name) : TermElabM Term := do
  let indVal ← getConstInfoInduct indName
  let alts ← mkAlts indVal
  let auxTerm ← alts.foldrM (fun xs x => `(Except.orElseLazy $xs (fun _ => $x))) (← `(Except.error "no inductive constructor matched"))
  `($auxTerm)
  let (ctors, alts) := (← mkAlts indVal).unzip
  `(match Json.getTag? json with
    | some tag => match tag with
      $[| $(ctors.map Syntax.mkStrLit) => $(alts)]*
      | _ => Except.error "no inductive constructor matched"
    | none => Except.error "no inductive tag found")
where
  mkAlts (indVal : InductiveVal) : TermElabM (Array Term) := do
  mkAlts (indVal : InductiveVal) : TermElabM (Array (String × Term)) := do
    let mut alts := #[]
    for ctorName in indVal.ctors do
      let ctorInfo ← getConstInfoCtor ctorName
      let ctorStr := ctorName.eraseMacroScopes.getString!
      let alt ← do forallTelescopeReducing ctorInfo.type fun xs _ => do
        let mut binders := #[]
        let mut userNames := #[]
@@ -142,11 +146,14 @@ where
        else
          ``(none)
        let stx ←
          `((Json.parseTagged json $(quote ctorName.eraseMacroScopes.getString!) $(quote ctorInfo.numFields) $(quote userNamesOpt)).bind
            (fun jsons => do
              $[let $identNames:ident ← $fromJsons:doExpr]*
              return $(mkIdent ctorName):ident $identNames*))
        pure (stx, ctorInfo.numFields)
          if ctorInfo.numFields == 0 then
            `(return $(mkIdent ctorName):ident $identNames*)
          else
            `((Json.parseCtorFields json $(quote ctorStr) $(quote ctorInfo.numFields) $(quote userNamesOpt)).bind
              (fun jsons => do
                $[let $identNames:ident ← $fromJsons:doExpr]*
                return $(mkIdent ctorName):ident $identNames*))
        pure ((ctorStr, stx), ctorInfo.numFields)
        alts := alts.push alt
      -- the smaller cases, especially the ones without fields are likely faster
      let alts' := alts.qsort (fun (_, x) (_, y) => x < y)

@@ -907,23 +907,26 @@ def lean (name : Option Ident := none) (error warning : flag false) («show» :
|
||||
(endPos := endPos) (endPos_valid := by simp only [endPos]; split <;> simp [*])
|
||||
let cctx : Command.Context := {fileName := ← getFileName, fileMap := text, snap? := none, cancelTk? := none}
|
||||
let scopes := (← get).scopes
|
||||
let mut cmdState : Command.State := { env, maxRecDepth := ← MonadRecDepth.getMaxRecDepth, scopes }
|
||||
let mut pstate : Parser.ModuleParserState := {pos := pos, recovering := false}
|
||||
let mut cmds := #[]
|
||||
repeat
|
||||
let scope := cmdState.scopes.head!
|
||||
let pmctx := { env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace, openDecls := scope.openDecls }
|
||||
let (cmd, ps', messages) := Parser.parseCommand ictx pmctx pstate cmdState.messages
|
||||
cmds := cmds.push cmd
|
||||
pstate := ps'
|
||||
cmdState := { cmdState with messages := messages }
|
||||
cmdState ← runCommand (Command.elabCommand cmd) cmd cctx cmdState
|
||||
if Parser.isTerminalCommand cmd then break
|
||||
setEnv cmdState.env
|
||||
modify fun st => { st with scopes := cmdState.scopes }
|
||||
let (cmds, cmdState, trees) ← withSaveInfoContext do
|
||||
let mut cmdState : Command.State := { env, maxRecDepth := ← MonadRecDepth.getMaxRecDepth, scopes }
|
||||
let mut pstate : Parser.ModuleParserState := {pos := pos, recovering := false}
|
||||
let mut cmds := #[]
|
||||
repeat
|
||||
let scope := cmdState.scopes.head!
|
||||
let pmctx := { env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace, openDecls := scope.openDecls }
|
||||
let (cmd, ps', messages) := Parser.parseCommand ictx pmctx pstate cmdState.messages
|
||||
cmds := cmds.push cmd
|
||||
pstate := ps'
|
||||
cmdState := { cmdState with messages := messages }
|
||||
cmdState ← runCommand (Command.elabCommand cmd) cmd cctx cmdState
|
||||
if Parser.isTerminalCommand cmd then break
|
||||
setEnv cmdState.env
|
||||
modify fun st => { st with scopes := cmdState.scopes }
|
||||
|
||||
for t in cmdState.infoState.trees do
|
||||
pushInfoTree t
|
||||
for t in cmdState.infoState.trees do
|
||||
pushInfoTree t
|
||||
let trees := (← getInfoTrees)
|
||||
pure (cmds, cmdState, trees)
|
||||
|
||||
let mut output := #[]
|
||||
for msg in cmdState.messages.toArray do
|
||||
@@ -937,14 +940,13 @@ def lean (name : Option Ident := none) (error warning : flag false) («show» :
|
||||
let hint ← flagHint m!"The `+error` flag indicates that errors are expected:" #[" +error"]
|
||||
logErrorAt msgStx m!"Unexpected error:{indentD msg.data}{hint.getD m!""}"
|
||||
if msg.severity == .warning && !warning then
|
||||
let hint ← flagHint m!"The `+error` flag indicates that warnings are expected:" #[" +warning"]
|
||||
let hint ← flagHint m!"The `+warning` flag indicates that warnings are expected:" #[" +warning"]
|
||||
logErrorAt msgStx m!"Unexpected warning:{indentD msg.data}{hint.getD m!""}"
|
||||
else
|
||||
withRef msgStx <| log msg.data (severity := .information) (isSilent := true)
|
||||
if let some x := name then
|
||||
modifyEnv (leanOutputExt.modifyState · (·.insert x.getId output))
|
||||
if «show» then
|
||||
let trees := (← getInfoTrees)
|
||||
if h : trees.size > 0 then
|
||||
let hl := Data.LeanBlock.mk (← highlightSyntax trees (mkNullNode cmds))
|
||||
return .other {name := ``Data.LeanBlock, val := .mk hl} #[.code code.getString]
|
||||
@@ -1267,7 +1269,7 @@ def «set_option» (option : Ident) (value : DataValue) : DocM (Block ElabInline
pushInfoLeaf <| .ofOptionInfo { stx := option, optionName, declName := decl.declName }
validateOptionValue optionName decl value
let o ← getOptions
modify fun s => { s with options := o.insert optionName value }
modify fun s => { s with options := o.set optionName value }
return .empty
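A recurring change throughout this diff renames `Options.insert` to `Options.set` at call sites. A minimal sketch of the renamed call, assuming `Options.set` keeps the same key/value signature the old `insert` had:

```lean
-- Sketch (assumes `Options.set` takes the same key/value arguments
-- that the old `Options.insert` took).
open Lean in
def withVerboseUniverses (opts : Options) : Options :=
  -- Previously written as: opts.insert `pp.universes true
  opts.set `pp.universes true
```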

/--

@@ -20,10 +20,12 @@ structure LetRecDeclView where
declName : Name
parentName? : Option Name
binderIds : Array Syntax
binders : Syntax -- binder syntax for docstring elaboration
type : Expr
mvar : Expr -- auxiliary metavariable used to lift the 'let rec'
valStx : Syntax
termination : TerminationHints
docString? : Option (TSyntax ``Parser.Command.docComment × Bool) := none

structure LetRecView where
decls : Array LetRecDeclView
@@ -32,8 +34,9 @@ structure LetRecView where
/- group ("let " >> nonReservedSymbol "rec ") >> sepBy1 (group (optional «attributes» >> letDecl)) ", " >> "; " >> termParser -/
private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
let mut decls : Array LetRecDeclView := #[]
let isVerso := doc.verso.get (← getOptions)
for attrDeclStx in letRec[1][0].getSepArgs do
let docStr? := attrDeclStx[0].getOptional?.map TSyntax.mk
let docStr? := attrDeclStx[0].getOptional?.map (TSyntax.mk ·, isVerso)
let attrOptStx := attrDeclStx[1]
let attrs ← if attrOptStx.isNone then pure #[] else elabDeclAttrs attrOptStx[0]
let decl := attrDeclStx[2][0]
@@ -45,16 +48,21 @@ private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
throwErrorAt declId "'let rec' expressions must be named"
let shortDeclName := declId.getId
let parentName? ← getDeclName?
let declName := parentName?.getD Name.anonymous ++ shortDeclName
let mut declName := parentName?.getD Name.anonymous ++ shortDeclName
let env ← getEnv
if env.header.isModule && !env.isExporting then
declName := mkPrivateName env declName
if decls.any fun decl => decl.declName == declName then
withRef declId do
throwError "`{.ofConstName declName}` has already been declared"
let binders := decl[1]
let binderStx := decl[1]
checkNotAlreadyDeclared declName
applyAttributesAt declName attrs AttributeApplicationTime.beforeElaboration
addDocString' declName binders docStr?
-- Docstring processing is deferred until the declaration is added to the environment.
-- This is necessary for Verso docstrings to work correctly, as they may reference the
-- declaration being defined.
addDeclarationRangesFromSyntax declName decl declId
let binders := binders.getArgs
let binders := binderStx.getArgs
let typeStx := expandOptType declId decl[2]
let (type, binderIds) ← elabBindersEx binders fun xs => do
let type ← elabType typeStx
@@ -70,7 +78,7 @@ private def mkLetRecDeclView (letRec : Syntax) : TermElabM LetRecView := do
let termination ← elabTerminationHints ⟨attrDeclStx[3]⟩
decls := decls.push {
ref := declId, attrs, shortDeclName, declName, parentName?,
binderIds, type, mvar, valStx, termination
binderIds, binders := binderStx, type, mvar, valStx, termination, docString? := docStr?
}
else
throwUnsupportedSyntax
@@ -111,15 +119,12 @@ private def registerLetRecsToLift (views : Array LetRecDeclView) (fvars : Array
let toLift ← views.mapIdxM fun i view => do
let value := values[i]!
let termination := view.termination.rememberExtraParams view.binderIds.size value
let env ← getEnv
pure {
ref := view.ref
fvarId := fvars[i]!.fvarId!
attrs := view.attrs
shortDeclName := view.shortDeclName
declName :=
if env.isExporting || !env.header.isModule then view.declName
else mkPrivateName env view.declName
declName := view.declName
parentName? := view.parentName?
lctx
localInstances
@@ -127,6 +132,8 @@ private def registerLetRecsToLift (views : Array LetRecDeclView) (fvars : Array
val := value
mvarId := view.mvar.mvarId!
termination
binders := view.binders
docString? := view.docString?
}
modify fun s => { s with letRecsToLift := toLift.toList ++ s.letRecsToLift }

@@ -1092,8 +1092,8 @@ def pushLetRecs (preDefs : Array PreDefinition) (letRecClosures : List LetRecClo
ref := c.ref
declName := c.toLift.declName
levelParams := [] -- we set it later
binders := mkNullNode -- No docstrings, so we don't need these
modifiers := { modifiers with attrs := c.toLift.attrs }
binders := c.toLift.binders
modifiers := { modifiers with attrs := c.toLift.attrs, docString? := c.toLift.docString? }
kind, type, value,
termination := c.toLift.termination
}

@@ -1210,8 +1210,8 @@ private def applyComputedFields (indViews : Array InductiveView) : CommandElabM
computedFields := computedFields.push (declName, computedFieldNames)
withScope (fun scope => { scope with
opts := scope.opts
|>.setBool `bootstrap.genMatcherCode false
|>.setBool `elaboratingComputedFields true}) <|
|>.set `bootstrap.genMatcherCode false
|>.set `elaboratingComputedFields true}) <|
elabCommand <| ← `(mutual $computedFieldDefs* end)

liftTermElabM do Term.withDeclName indViews[0]!.declName do

@@ -29,6 +29,10 @@ def addPreDefsFromUnary (docCtx : LocalContext × LocalInstances) (preDefs : Arr
let preDefNonRec := unaryPreDefNonRec.filterAttrs fun attr => attr.name != `implemented_by
let declNames := preDefs.toList.map (·.declName)

preDefs.forM fun preDef =>
unless preDef.kind.isTheorem do
markAsRecursive preDef.declName

-- Do not complain if the user sets @[semireducible], which usually is a noop,
-- we recognize that below and then do not set @[irreducible]
withOptions (allowUnsafeReducibility.set · true) do
@@ -53,8 +57,6 @@ def cleanPreDef (preDef : PreDefinition) (cacheProofs := true) : MetaM PreDefini
Assign final attributes to the definitions. Assumes the EqnInfos to be already present.
-/
def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
for preDef in preDefs do
markAsRecursive preDef.declName
for preDef in preDefs.reverse do
-- must happen before `generateEagerEqns`
-- must happen in reverse order so that constants realized as part of the first decl

@@ -140,6 +140,8 @@ def structuralRecursion
preDefsNonRec.forM fun preDefNonRec => do
let preDefNonRec ← eraseRecAppSyntax preDefNonRec
prependError m!"structural recursion failed, produced type incorrect term" do
unless preDefNonRec.kind.isTheorem do
markAsRecursive preDefNonRec.declName
-- We create the `_unsafe_rec` before we abstract nested proofs.
-- Reason: the nested proofs may be referring to the _unsafe_rec.
addNonRec docCtx preDefNonRec (applyAttrAfterCompilation := false) (all := names.toList)
@@ -157,7 +159,6 @@ def structuralRecursion
-/
registerEqnsInfo preDef (preDefs.map (·.declName)) recArgPos fixedParamPerms
addSmartUnfoldingDef docCtx preDef recArgPos
markAsRecursive preDef.declName
for preDef in preDefs do
-- must happen in separate loop so realizations can see eqnInfos of all other preDefs
enableRealizationsForConst preDef.declName

@@ -52,7 +52,7 @@ def elabSetOption (id : Syntax) (val : Syntax) : m Options := do
pushInfoLeaf <| .ofOptionInfo { stx := id, optionName, declName := decl.declName }
let rec setOption (val : DataValue) : m Options := do
validateOptionValue optionName decl val
return (← getOptions).insert optionName val
return (← getOptions).set optionName val
match val.isStrLit? with
| some str => setOption (DataValue.ofString str)
| none =>

@@ -290,7 +290,7 @@ private def declareSyntaxCatQuotParser (catName : Name) : CommandElabM Unit := d
let quotSymbol := "`(" ++ suffix ++ "| "
let name := catName ++ `quot
let cmd ← `(
@[term_parser] meta def $(mkIdent name) : Lean.ParserDescr :=
@[term_parser] public meta def $(mkIdent name) : Lean.ParserDescr :=
Lean.ParserDescr.node `Lean.Parser.Term.quot $(quote Lean.Parser.maxPrec)
(Lean.ParserDescr.node $(quote name) $(quote Lean.Parser.maxPrec)
(Lean.ParserDescr.binary `andthen (Lean.ParserDescr.symbol $(quote quotSymbol))
@@ -312,7 +312,7 @@ private def declareSyntaxCatQuotParser (catName : Name) : CommandElabM Unit := d
let attrName := catName.appendAfter "_parser"
let catDeclName := ``Lean.Parser.Category ++ catName
setEnv (← Parser.registerParserCategory (← getEnv) attrName catName catBehavior catDeclName)
let cmd ← `($[$docString?]? meta def $(mkIdentFrom stx[2] (`_root_ ++ catDeclName) (canonical := true)) : Lean.Parser.Category := {})
let cmd ← `($[$docString?]? public meta def $(mkIdentFrom stx[2] (`_root_ ++ catDeclName) (canonical := true)) : Lean.Parser.Category := {})
declareSyntaxCatQuotParser catName
elabCommand cmd

@@ -309,7 +309,7 @@ where
Add an auxiliary declaration. Only used to create constants that appear in our reflection proof.
-/
mkAuxDecl (name : Name) (value type : Expr) : CoreM Unit :=
withOptions (fun opt => opt.setBool `compiler.extract_closed false) do
withOptions (fun opt => opt.set `compiler.extract_closed false) do
addAndCompile <| .defnDecl {
name := name,
levelParams := [],

@@ -41,8 +41,8 @@ public def findSpec (database : SpecTheorems) (wp : Expr) : MetaM SpecTheorem :=
-- information why the defeq check failed, so we do it again.
withOptions (fun o =>
if o.getBool `trace.Elab.Tactic.Do.spec then
o |>.setBool `pp.universes true
|>.setBool `trace.Meta.isDefEq true
o |>.set `pp.universes true
|>.set `trace.Meta.isDefEq true
else
o) do
withTraceNode `Elab.Tactic.Do.spec (fun _ => return m!"Defeq check for {type} failed.") do

@@ -47,10 +47,10 @@ partial def genVCs (goal : MVarId) (ctx : Context) (fuel : Fuel) : MetaM Result
mvar.withContext <| withReducible do
let (prf, state) ← StateRefT'.run (ReaderT.run (onGoal goal (← mvar.getTag)) ctx) { fuel }
mvar.assign prf
for h : idx in [:state.invariants.size] do
for h : idx in *...state.invariants.size do
let mv := state.invariants[idx]
mv.setTag (Name.mkSimple ("inv" ++ toString (idx + 1)))
for h : idx in [:state.vcs.size] do
for h : idx in *...state.vcs.size do
let mv := state.vcs[idx]
mv.setTag (Name.mkSimple ("vc" ++ toString (idx + 1)) ++ (← mv.getTag).eraseMacroScopes)
return { invariants := state.invariants, vcs := state.vcs }
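This hunk migrates the older `Std.Range` slice syntax `[:n]` to the newer `*...n` notation. A minimal sketch of the two spellings, assuming `*...n` denotes the same half-open range `0 ≤ i < n` that `[:n]` did:

```lean
-- Sketch: the loop visits 0, 1, ..., n-1, assuming `*...n` is the newer
-- spelling of the half-open range previously written `[:n]`.
def sumBelow (n : Nat) : Nat := Id.run do
  let mut s := 0
  for i in *...n do
    s := s + i
  return s
```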

@@ -94,14 +94,15 @@ def ifOutOfFuel (x : VCGenM α) (k : VCGenM α) : VCGenM α := do
def addSubGoalAsVC (goal : MVarId) : VCGenM PUnit := do
goal.freshenLCtxUserNamesSinceIdx (← read).initialCtxSize
let ty ← goal.getType
if ty.isAppOf ``Std.Do.PostCond || ty.isAppOf ``Std.Do.SPred then
-- Here we make `mvar` a synthetic opaque goal upon discharge failure.
-- This is the right call for (previously natural) holes such as loop invariants, which
-- would otherwise lead to spurious instantiations and unwanted renamings (when leaving the
-- scope of a local).
-- But it's wrong for, e.g., schematic variables. The latter should never be PostConds,
-- Invariants or SPreds, hence the condition.
goal.setKind .syntheticOpaque
-- Here we make `mvar` a synthetic opaque goal upon discharge failure.
-- This is the right call for (previously natural) holes such as loop invariants, which
-- would otherwise lead to spurious instantiations and unwanted renamings (when leaving the
-- scope of a local).
-- We also do this for, e.g. schematic variables. One reason is that at this point, we have
-- already tried to assign them by unification. Another reason is that we want to display the
-- VC to the user as-is, without abstracting any variables in the local context.
-- This only makes sense for synthetic opaque metavariables.
goal.setKind .syntheticOpaque
if ty.isAppOf ``Std.Do.Invariant then
modify fun s => { s with invariants := s.invariants.push goal }
else

@@ -8,6 +8,7 @@ module
prelude
import Lean.DocString
public import Lean.Elab.Command
public import Lean.Parser.Tactic.Doc

public section

@@ -38,30 +39,42 @@ open Lean.Parser.Command
| _ => throwError "Malformed 'register_tactic_tag' command"

/--
Gets the first string token in a parser description. For example, for a declaration like
`syntax "squish " term " with " term : tactic`, it returns `some "squish "`, and for a declaration
like `syntax tactic " <;;;> " tactic : tactic`, it returns `some " <;;;> "`.

Returns `none` for syntax declarations that don't contain a string constant.
Computes a table that heuristically maps parser syntax kinds to their first tokens by inspecting the
Pratt parsing tables for the `tactic` syntax kind. If a custom name is provided for the tactic, then
it is returned instead.
-/
private partial def getFirstTk (e : Expr) : MetaM (Option String) := do
match (← Meta.whnf e).getAppFnArgs with
| (``ParserDescr.node, #[_, _, p]) => getFirstTk p
| (``ParserDescr.trailingNode, #[_, _, _, p]) => getFirstTk p
| (``ParserDescr.unary, #[.app _ (.lit (.strVal "withPosition")), p]) => getFirstTk p
| (``ParserDescr.unary, #[.app _ (.lit (.strVal "atomic")), p]) => getFirstTk p
| (``ParserDescr.binary, #[.app _ (.lit (.strVal "andthen")), p, _]) => getFirstTk p
| (``ParserDescr.nonReservedSymbol, #[.lit (.strVal tk), _]) => pure (some tk)
| (``ParserDescr.symbol, #[.lit (.strVal tk)]) => pure (some tk)
| (``Parser.withAntiquot, #[_, p]) => getFirstTk p
| (``Parser.leadingNode, #[_, _, p]) => getFirstTk p
| (``HAndThen.hAndThen, #[_, _, _, _, p1, p2]) =>
if let some tk ← getFirstTk p1 then pure (some tk)
else getFirstTk (.app p2 (.const ``Unit.unit []))
| (``Parser.nonReservedSymbol, #[.lit (.strVal tk), _]) => pure (some tk)
| (``Parser.symbol, #[.lit (.strVal tk)]) => pure (some tk)
| _ => pure none
def firstTacticTokens [Monad m] [MonadEnv m] : m (NameMap String) := do
let env ← getEnv

let some tactics := (Lean.Parser.parserExtension.getState env).categories.find? `tactic
| return {}

let mut firstTokens : NameMap String :=
tacticNameExt.toEnvExtension.getState env
|>.importedEntries
|>.push (tacticNameExt.exportEntriesFn env (tacticNameExt.getState env) .exported)
|>.foldl (init := {}) fun names inMods =>
inMods.foldl (init := names) fun names (k, n) =>
names.insert k n

firstTokens := addFirstTokens tactics tactics.tables.leadingTable firstTokens
firstTokens := addFirstTokens tactics tactics.tables.trailingTable firstTokens

return firstTokens
where
addFirstTokens tactics table firsts : NameMap String := Id.run do
let mut firsts := firsts
for (tok, ps) in table do
-- Skip antiquotes
if tok == `«$» then continue
for (p, _) in ps do
for (k, ()) in p.info.collectKinds {} do
if tactics.kinds.contains k then
let tok := tok.toString (escape := false)
-- It's important here that the already-existing mapping is preserved, because it will
-- contain any user-provided custom name, and these shouldn't be overridden.
firsts := firsts.alter k (·.getD tok)
return firsts

/--
Creates some `MessageData` for a parser name.
@@ -71,18 +84,14 @@ identifiable leading token, then that token is shown. Otherwise, the underlying
without an `@`. The name includes metadata that makes infoview hovers and the like work. This
only works for global constants, as the local context is not included.
-/
private def showParserName (n : Name) : MetaM MessageData := do
private def showParserName [Monad m] [MonadEnv m] (firsts : NameMap String) (n : Name) : m MessageData := do
let env ← getEnv
let params :=
env.constants.find?' n |>.map (·.levelParams.map Level.param) |>.getD []
let tok ←
if let some descr := env.find? n |>.bind (·.value?) then
if let some tk ← getFirstTk descr then
pure <| Std.Format.text tk.trimAscii.copy
else pure <| format n
else pure <| format n

let tok := ((← customTacticName n) <|> firsts.get? n).map Std.Format.text |>.getD (format n)
pure <| .ofFormatWithInfos {
fmt := "'" ++ .tag 0 tok ++ "'",
fmt := "`" ++ .tag 0 tok ++ "`",
infos :=
.ofList [(0, .ofTermInfo {
lctx := .empty,
@@ -93,7 +102,6 @@ private def showParserName (n : Name) : MetaM MessageData := do
})] _
}

/--
Displays all available tactic tags, with documentation.
-/
@@ -106,20 +114,22 @@ Displays all available tactic tags, with documentation.
for (tac, tag) in arr do
mapping := mapping.insert tag (mapping.getD tag {} |>.insert tac)

let firsts ← firstTacticTokens

let showDocs : Option String → MessageData
| none => .nil
| some d => Format.line ++ MessageData.joinSep ((d.split '\n').map (toMessageData ∘ String.Slice.copy)).toList Format.line

let showTactics (tag : Name) : MetaM MessageData := do
let showTactics (tag : Name) : CommandElabM MessageData := do
match mapping.find? tag with
| none => pure .nil
| some tacs =>
if tacs.isEmpty then pure .nil
else
let tacs := tacs.toArray.qsort (·.toString < ·.toString) |>.toList
pure (Format.line ++ MessageData.joinSep (← tacs.mapM showParserName) ", ")
pure (Format.line ++ MessageData.joinSep (← tacs.mapM (showParserName firsts)) ", ")

let tagDescrs ← liftTermElabM <| (← allTagsWithInfo).mapM fun (name, userName, docs) => do
let tagDescrs ← (← allTagsWithInfo).mapM fun (name, userName, docs) => do
pure <| m!"• " ++
MessageData.nestD (m!"`{name}`" ++
(if name.toString != userName then m!" — \"{userName}\"" else MessageData.nil) ++
@@ -146,13 +156,13 @@ structure TacticDoc where
/-- Any docstring extensions that have been specified -/
extensionDocs : Array String

def allTacticDocs : MetaM (Array TacticDoc) := do
def allTacticDocs (includeUnnamed : Bool := true) : MetaM (Array TacticDoc) := do
let env ← getEnv
let all :=
tacticTagExt.toEnvExtension.getState (← getEnv)
|>.importedEntries |>.push (tacticTagExt.exportEntriesFn (← getEnv) (tacticTagExt.getState (← getEnv)) .exported)
let allTags :=
tacticTagExt.toEnvExtension.getState env |>.importedEntries
|>.push (tacticTagExt.exportEntriesFn env (tacticTagExt.getState env) .exported)
let mut tacTags : NameMap NameSet := {}
for arr in all do
for arr in allTags do
for (tac, tag) in arr do
tacTags := tacTags.insert tac (tacTags.getD tac {} |>.insert tag)

@@ -160,15 +170,18 @@ def allTacticDocs : MetaM (Array TacticDoc) := do

let some tactics := (Lean.Parser.parserExtension.getState env).categories.find? `tactic
| return #[]

let firstTokens ← firstTacticTokens

for (tac, _) in tactics.kinds do
-- Skip noncanonical tactics
if let some _ := alternativeOfTactic env tac then continue
let userName : String ←
if let some descr := env.find? tac |>.bind (·.value?) then
if let some tk ← getFirstTk descr then
pure tk.trimAscii.copy
else pure tac.toString
else pure tac.toString

let userName? : Option String := firstTokens.get? tac
let userName ←
if let some n := userName? then pure n
else if includeUnnamed then pure tac.toString
else continue

docs := docs.push {
internalName := tac,

@@ -16,6 +16,7 @@ open Meta

structure Context extends Tactic.Context where
ctx : Meta.Grind.Context
sctx : Meta.Sym.Context
methods : Grind.Methods
params : Grind.Params

@@ -289,7 +290,7 @@ open Grind
def liftGrindM (k : GrindM α) : GrindTacticM α := do
let ctx ← read
let s ← get
let ((a, grindState), symState) ← liftMetaM <| StateRefT'.run ((Grind.withGTransparency k) ctx.methods.toMethodsRef ctx.ctx |>.run s.grindState) s.symState
let ((a, grindState), symState) ← liftMetaM <| StateRefT'.run (((Grind.withGTransparency k) ctx.methods.toMethodsRef ctx.ctx |>.run s.grindState) ctx.sctx) s.symState
modify fun s => { s with grindState, symState }
return a

@@ -358,12 +359,13 @@ def mkEvalTactic' (elaborator : Name) (params : Params) : TermElabM (Goal → TS
let eval (goal : Goal) (stx : TSyntax `grind) : GrindM (List Goal) := do
let methods ← getMethods
let grindCtx ← readThe Meta.Grind.Context
let symCtx ← readThe Meta.Sym.Context
let grindState ← get
let symState ← getThe Sym.State
-- **Note**: we discard changes to `Term.State`
let (subgoals, grindState', symState') ← Term.TermElabM.run' (ctx := termCtx) (s := termState) do
let (_, s) ← GrindTacticM.run
(ctx := { recover := false, methods, ctx := grindCtx, params, elaborator })
(ctx := { recover := false, methods, ctx := grindCtx, sctx := symCtx, params, elaborator })
(s := { grindState, symState, goals := [goal] }) do
evalGrindTactic stx.raw
pruneSolvedGoals
@@ -383,7 +385,7 @@ def GrindTacticM.runAtGoal (mvarId : MVarId) (params : Params) (k : GrindTacticM
Reconsider the option `useSorry`.
-/
let params' := { params with config.useSorry := false }
let (methods, ctx, state) ← liftMetaM <| GrindM.runAtGoal mvarId params' (evalTactic? := some evalTactic) fun goal => do
let (methods, ctx, sctx, state) ← liftMetaM <| GrindM.runAtGoal mvarId params' (evalTactic? := some evalTactic) fun goal => do
let a : Action := Action.intros 0 >> Action.assertAll
let goals ← match (← a.run goal) with
| .closed _ => pure []
@@ -392,10 +394,11 @@ def GrindTacticM.runAtGoal (mvarId : MVarId) (params : Params) (k : GrindTacticM
let ctx ← readThe Meta.Grind.Context
/- Restore original config -/
let ctx := { ctx with config := params.config }
let sctx ← readThe Meta.Sym.Context
let grindState ← get
let symState ← getThe Sym.State
return (methods, ctx, { grindState, symState, goals })
return (methods, ctx, sctx, { grindState, symState, goals })
let tctx ← read
k { tctx with methods, ctx, params } |>.run state
k { tctx with methods, ctx, sctx, params } |>.run state

end Lean.Elab.Tactic.Grind

@@ -167,6 +167,11 @@ structure LetRecToLift where
val : Expr
mvarId : MVarId
termination : TerminationHints
/-- The binders syntax for the declaration, used for docstring elaboration. -/
binders : Syntax := .missing
/-- The docstring, if present, and whether it's Verso. Docstring processing is deferred until the
declaration is added to the environment (needed for Verso docstrings to work). -/
docString? : Option (TSyntax ``Lean.Parser.Command.docComment × Bool) := none
deriving Inhabited

/--

@@ -179,6 +179,13 @@ structure EnvironmentHeader where
`ModuleIdx` for the same module.
-/
modules : Array EffectiveImport := #[]
/-- For `getModuleIdx?` -/
private moduleName2Idx : Std.HashMap Name ModuleIdx := Id.run do
let mut m := {}
for _h : idx in [0:modules.size] do
let mod := modules[idx]
m := m.insert mod.module idx
return m
/--
Subset of `modules` for which `importAll` is `true`. This is assumed to be a much smaller set so
we precompute it instead of iterating over all of `modules` multiple times. However, note that
@@ -267,7 +274,7 @@ structure Environment where
-/
private irBaseExts : Array EnvExtensionState
/-- The header contains additional information that is set at import time. -/
header : EnvironmentHeader := {}
header : EnvironmentHeader := private_decl% {}
deriving Nonempty

/-- Exceptions that can be raised by the kernel when type checking new declarations. -/
@@ -1174,7 +1181,7 @@ def isSafeDefinition (env : Environment) (declName : Name) : Bool :=
| _ => false

def getModuleIdx? (env : Environment) (moduleName : Name) : Option ModuleIdx :=
env.header.modules.findIdx? (·.module == moduleName)
env.header.moduleName2Idx[moduleName]?

end Environment
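The hunk above turns `getModuleIdx?` from a linear `findIdx?` scan over `modules` into a constant-time hash-map lookup, by precomputing a name-to-index table as a structure field default. A generic sketch of that pattern, with illustrative names that are not the actual `Environment` API:

```lean
-- Sketch of the precomputed-lookup-table pattern (illustrative names only).
open Std in
structure ModuleTable where
  names : Array Lean.Name
  -- Computed once at construction from the earlier field: name → index,
  -- so lookups are O(1) instead of a linear scan over `names`.
  nameToIdx : HashMap Lean.Name Nat := Id.run do
    let mut m := {}
    for h : i in [0:names.size] do
      m := m.insert names[i] i
    return m

def ModuleTable.idxOf? (t : ModuleTable) (n : Lean.Name) : Option Nat :=
  t.nameToIdx[n]?
```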

@@ -2386,4 +2386,27 @@ def eagerReflBoolTrue : Expr :=
def eagerReflBoolFalse : Expr :=
mkApp2 (mkConst ``eagerReduce [0]) (mkApp3 (mkConst ``Eq [1]) (mkConst ``Bool) (mkConst ``Bool.false) (mkConst ``Bool.false)) reflBoolFalse

/--
Replaces the head constant in a function application chain with a different constant.

Given an expression that is either a constant or a function application chain,
replaces the head constant with `declName` while preserving all arguments and universe levels.

**Examples**:
- `f.replaceFn g` → `g` (where `f` is a constant)
- `(f a b c).replaceFn g` → `g a b c`
- `(@f.{u, v} a b).replaceFn g` → `@g.{u, v} a b`

**Panics**: If the expression is neither a constant nor a function application.

**Use case**: Useful for substituting one function for another while maintaining
the same application structure, such as replacing a theorem with a related theorem
that has the same type and universe parameters.
-/
def Expr.replaceFn (e : Expr) (declName : Name) : Expr :=
match e with
| .app f a => mkApp (f.replaceFn declName) a
| .const _ us => mkConst declName us
| _ => panic! "function application or constant expected"
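A small usage sketch of the `Expr.replaceFn` added above; the concrete expression and constants are illustrative:

```lean
-- Sketch: replace the head constant of `Nat.add 1 2` with `Nat.mul`,
-- keeping the arguments intact. Illustrative example only.
open Lean in
example : Expr :=
  let e := mkApp2 (mkConst ``Nat.add) (mkNatLit 1) (mkNatLit 2)
  e.replaceFn ``Nat.mul  -- the `Expr` for `Nat.mul 1 2`
```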

end Lean

@@ -308,8 +308,8 @@ def setOption (opts : Options) (decl : OptionDecl) (name : Name) (val : String)
match decl.defValue with
| .ofBool _ =>
match val with
| "true" => return opts.insert name true
| "false" => return opts.insert name false
| "true" => return opts.set name true
| "false" => return opts.set name false
| _ =>
throw <| .userError s!"invalid -D parameter, invalid configuration option '{val}' value, \
it must be true/false"
@@ -317,8 +317,8 @@ def setOption (opts : Options) (decl : OptionDecl) (name : Name) (val : String)
let some val := val.toNat?
| throw <| .userError s!"invalid -D parameter, invalid configuration option '{val}' value, \
it must be a natural number"
return opts.insert name val
| .ofString _ => return opts.insert name val
return opts.set name val
| .ofString _ => return opts.set name val
| _ => throw <| .userError s!"invalid -D parameter, configuration option '{name}' \
cannot be set in the command line, use set_option command"

@@ -342,7 +342,7 @@ def reparseOptions (opts : Options) : IO Options := do
If the option is defined in a library, use '-D{`weak ++ name}' to set it conditionally"

let .ofString val := val
| opts' := opts'.insert name val -- Already parsed
| opts' := opts'.set name val -- Already parsed

opts' ← setOption opts' decl name val

@@ -316,9 +316,10 @@ builtin_initialize typePrefixDenyListExt : SimplePersistentEnvExtension Name (Li
def isDeniedModule (env : Environment) (moduleName : Name) : Bool :=
(moduleDenyListExt.getState env).any fun p => moduleName.anyS (· == p)

def isDeniedPremise (env : Environment) (name : Name) : Bool := Id.run do
def isDeniedPremise (env : Environment) (name : Name) (allowPrivate : Bool := false) : Bool := Id.run do
if name == ``sorryAx then return true
if name.isInternalDetail then return true
-- Allow private names through if allowPrivate is set (e.g., for currentFile selector)
if name.isInternalDetail && !(allowPrivate && isPrivateName name) then return true
if Lean.Meta.isInstanceCore env name then return true
if Lean.Linter.isDeprecated env name then return true
if (nameDenyListExt.getState env).any (fun p => name.anyS (· == p)) then return true
@@ -358,14 +359,14 @@ def currentFile : Selector := fun _ cfg => do
let max := cfg.maxSuggestions
-- Use map₂ from the staged map, which contains locally defined constants
let mut suggestions := #[]
for (name, ci) in env.constants.map₂.toList do
for (name, _) in env.constants.map₂ do
if suggestions.size >= max then
break
if isDeniedPremise env name then
-- Allow private names since they're accessible from the current module
if isDeniedPremise env name (allowPrivate := true) then
continue
match ci with
| .thmInfo _ => suggestions := suggestions.push { name := name, score := 1.0 }
| _ => continue
if wasOriginallyTheorem env name then
suggestions := suggestions.push { name := name, score := 1.0 }
return suggestions

builtin_initialize librarySuggestionsExt : SimplePersistentEnvExtension Name (Option Name) ←

@@ -74,7 +74,7 @@ def prepareTriggers (names : Array Name) (maxTolerance : Float := 3.0) : MetaM (
|
||||
let mut map := {}
|
||||
let env ← getEnv
|
||||
let names := names.filter fun n =>
|
||||
!isDeniedPremise env n && Lean.wasOriginallyTheorem env n
|
||||
!isDeniedPremise env n && wasOriginallyTheorem env n
|
||||
for name in names do
|
||||
let triggers ← triggerSymbols (← getConstInfo name) maxTolerance
|
||||
for (trigger, tolerance) in triggers do
|
||||
|
||||
@@ -28,7 +28,7 @@ skipping instance arguments and proofs.
|
||||
public def localSymbolFrequencyMap : MetaM (NameMap Nat) := do
|
||||
let env := (← getEnv)
|
||||
env.constants.map₂.foldlM (init := ∅) (fun acc m ci => do
|
||||
if isDeniedPremise env m || !Lean.wasOriginallyTheorem env m then
|
||||
if isDeniedPremise env m || !wasOriginallyTheorem env m then
|
||||
pure acc
|
||||
else
|
||||
ci.type.foldRelevantConstants (init := acc) fun n' acc => return acc.alter n' fun i? => some (i?.getD 0 + 1))
|
||||
|
||||
@@ -247,7 +247,7 @@ def ofConstName (constName : Name) (fullNames : Bool := false) : MessageData :=
     let msg ← ofFormatWithInfos <$> match ctx? with
       | .none => pure (format constName)
       | .some ctx =>
-        let ctx := if fullNames then { ctx with opts := ctx.opts.insert `pp.fullNames fullNames } else ctx
+        let ctx := if fullNames then { ctx with opts := ctx.opts.set `pp.fullNames fullNames } else ctx
         ppConstNameWithInfos ctx constName
     return Dynamic.mk msg)
   (fun _ => false)
@@ -8,6 +8,7 @@ module
 prelude
 public import Lean.Meta.Match.MatcherInfo
 public import Lean.DefEqAttrib
+public import Lean.Meta.RecExt
 public import Lean.Meta.LetToHave
 import Lean.Meta.AppBuilder
@@ -40,26 +41,6 @@ This is implemented by
 -/
 def eqnAffectingOptions : Array (Lean.Option Bool) := #[backward.eqns.nonrecursive, backward.eqns.deepRecursiveSplit]

-/--
-Environment extension for storing which declarations are recursive.
-This information is populated by the `PreDefinition` module, but the simplifier
-uses it when unfolding declarations.
--/
-builtin_initialize recExt : TagDeclarationExtension ←
-  mkTagDeclarationExtension `recExt (asyncMode := .async .asyncEnv)
-
-/--
-Marks the given declaration as recursive.
--/
-def markAsRecursive (declName : Name) : CoreM Unit :=
-  modifyEnv (recExt.tag · declName)
-
-/--
-Returns `true` if `declName` was defined using well-founded or structural recursion.
--/
-def isRecursiveDefinition (declName : Name) : CoreM Bool :=
-  return recExt.isTagged (← getEnv) declName
-
 def eqnThmSuffixBase := "eq"
 def eqnThmSuffixBasePrefix := eqnThmSuffixBase ++ "_"
 def eqn1ThmSuffix := eqnThmSuffixBasePrefix ++ "1"
@@ -124,17 +124,41 @@ def mkInjectiveEqTheoremNameFor (ctorName : Name) : Name :=
 private def mkInjectiveEqTheoremType? (ctorVal : ConstructorVal) : MetaM (Option Expr) :=
   mkInjectiveTheoremTypeCore? ctorVal true

+/--
+Collects all components of a nested `And`, as projections.
+(Avoids the binders that `MVarId.casesAnd` would introduce.)
+-/
+private partial def andProjections (e : Expr) : MetaM (Array Expr) := do
+  let rec go (e : Expr) (t : Expr) (acc : Array Expr) : MetaM (Array Expr) := do
+    match_expr t with
+    | And t1 t2 =>
+      let acc ← go (mkProj ``And 0 e) t1 acc
+      let acc ← go (mkProj ``And 1 e) t2 acc
+      return acc
+    | _ =>
+      return acc.push e
+  go e (← inferType e) #[]
+
 private def mkInjectiveEqTheoremValue (ctorName : Name) (targetType : Expr) : MetaM Expr := do
   forallTelescopeReducing targetType fun xs type => do
     let mvar ← mkFreshExprSyntheticOpaqueMVar type
     let [mvarId₁, mvarId₂] ← mvar.mvarId!.apply (mkConst ``Eq.propIntro)
       | throwError "unexpected number of subgoals when proving injective theorem for constructor `{ctorName}`"
     let (h, mvarId₁) ← mvarId₁.intro1
     let (_, mvarId₂) ← mvarId₂.intro1
     solveEqOfCtorEq ctorName mvarId₁ h
-    let mvarId₂ ← mvarId₂.casesAnd
-    if let some mvarId₂ ← mvarId₂.substEqs then
-      try mvarId₂.refl catch _ => throwError (injTheoremFailureHeader ctorName)
+    let mut mvarId₂ := mvarId₂
+    while true do
+      let t ← mvarId₂.getType
+      let some (conj, body) := t.arrow? | break
+      match_expr conj with
+      | And lhs rhs =>
+        let [mvarId₂'] ← mvarId₂.applyN (mkApp3 (mkConst `Lean.injEq_helper) lhs rhs body) 1
+          | throwError "unexpected number of goals after applying `Lean.injEq_helper`"
+        mvarId₂ := mvarId₂'
+      | _ => pure ()
+      let (h, mvarId₂') ← mvarId₂.intro1
+      (_, mvarId₂) ← substEq mvarId₂' h
+    try mvarId₂.refl catch _ => throwError (injTheoremFailureHeader ctorName)
     mkLambdaFVars xs mvar

 private def mkInjectiveEqTheorem (ctorVal : ConstructorVal) : MetaM Unit := do
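For intuition, the nested-`And` traversal in `andProjections` corresponds to the projections one would write by hand. A standalone sketch (not part of the diff), assuming the component-`1` projection for the right conjunct:

```lean
-- For `h : a ∧ b ∧ c`, the collected components are `h.1`, `h.2.1`, and `h.2.2`,
-- i.e. `And` projections 0 and 1 applied recursively down the right spine.
example (a b c : Prop) (h : a ∧ b ∧ c) : b := h.2.1
```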
src/Lean/Meta/RecExt.lean (new file, 33 lines)
@@ -0,0 +1,33 @@
|
||||
/-
|
||||
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
module
|
||||
|
||||
prelude
|
||||
public import Lean.Attributes
|
||||
|
||||
public section
|
||||
|
||||
namespace Lean.Meta
|
||||
|
||||
/--
|
||||
Environment extension for storing which declarations are recursive.
|
||||
This information is populated by the `PreDefinition` module, but the simplifier
|
||||
uses when unfolding declarations.
|
||||
-/
|
||||
builtin_initialize recExt : TagDeclarationExtension ←
|
||||
mkTagDeclarationExtension `recExt (asyncMode := .async .asyncEnv)
|
||||
|
||||
/--
|
||||
Marks the given declaration as recursive.
|
||||
-/
|
||||
def markAsRecursive (declName : Name) : CoreM Unit :=
|
||||
modifyEnv (recExt.tag · declName)
|
||||
|
||||
/--
|
||||
Returns `true` if `declName` was defined using well-founded recursion, or structural recursion.
|
||||
-/
|
||||
def isRecursiveDefinition (declName : Name) : CoreM Bool :=
|
||||
return recExt.isTagged (← getEnv) declName
|
||||
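A hedged usage sketch of the extension API above; `shouldUnfoldEagerly` is a hypothetical caller name, while `isRecursiveDefinition` comes from the new file:

```lean
open Lean Lean.Meta in
/-- Hypothetical caller: only unfold eagerly when the definition is not recursive. -/
def shouldUnfoldEagerly (declName : Name) : CoreM Bool :=
  return !(← isRecursiveDefinition declName)
```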
@@ -23,13 +23,14 @@ public import Lean.Meta.Sym.Apply
 public import Lean.Meta.Sym.InferType
 public import Lean.Meta.Sym.Simp
 public import Lean.Meta.Sym.Util
+public import Lean.Meta.Sym.Grind

 /-!
-# Symbolic simulation support.
+# Symbolic computation support.

-This module provides `SymM`, a monad for implementing symbolic simulators (e.g., verification condition generators)
-using Lean. The monad addresses performance issues found in symbolic simulators built on top of user-facing
-tactics (e.g., `apply` and `intros`).
+This module provides `SymM`, a monad for implementing symbolic computation (e.g., decision procedures and
+verification condition generators) using Lean. The monad addresses performance issues found in symbolic
+computation engines built on top of user-facing tactics (e.g., `apply` and `intros`).

 ## Overview

@@ -65,14 +66,14 @@ whether `maxFVar[e]` is in `?m.lctx` — a single hash lookup, O(1).

 **The problem:** The `isDefEq` predicate in `MetaM` is designed for elaboration and user-facing tactics.
 It supports reduction, type-class resolution, and many other features that can be expensive or have
-unpredictable running time. For symbolic simulation, where pattern matching is called frequently on
+unpredictable running time. For symbolic computation, where pattern matching is called frequently on
 large ground terms, these features become performance bottlenecks.

 **The solution:** In `SymM`, pattern matching and definitional equality are restricted to a more syntactic,
 predictable subset. Key design choices:

 1. **Reducible declarations are abbreviations.** Reducible declarations are eagerly expanded when indexing
-   terms and when entering symbolic simulation mode. During matching, we assume abbreviations have already
+   terms and when entering symbolic computation mode. During matching, we assume abbreviations have already
    been expanded.

 **Why `MetaM` `simp` cannot make this assumption**: The simplifier in `MetaM` is designed for interactive use,

@@ -99,7 +100,7 @@ predictable subset. Key design choices:
 4. **Types must be indexed.** Unlike proofs and instances, types cannot be ignored: without indexing them,
    pattern matching produces too many candidates. Like other abbreviations, type abbreviations are expanded.
    Note that given `def Foo : Type := Bla`, the terms `Foo` and `Bla` are *not* considered structurally
-   equal in the symbolic simulator framework.
+   equal in the symbolic computation framework.

 ### Skipping type checks on assignment

@@ -117,7 +118,7 @@ so the check is almost always skipped.

 ### `GrindM` state

-**The problem:** In symbolic simulation, we often want to discharge many goals using proof automation such
+**The problem:** In symbolic computation, we often want to discharge many goals using proof automation such
 as `grind`. Many of these goals share very similar local contexts. If we invoke `grind` on each goal
 independently, we repeatedly reprocess the same hypotheses.

@@ -177,4 +177,16 @@ def mkHaveS (x : Name) (t : Expr) (v : Expr) (b : Expr) : m Expr := do
   else
     mkLetS n newType newVal newBody nondep

+def mkAppS₂ (f a₁ a₂ : Expr) : m Expr := do
+  mkAppS (← mkAppS f a₁) a₂
+
+def mkAppS₃ (f a₁ a₂ a₃ : Expr) : m Expr := do
+  mkAppS (← mkAppS₂ f a₁ a₂) a₃
+
+def mkAppS₄ (f a₁ a₂ a₃ a₄ : Expr) : m Expr := do
+  mkAppS (← mkAppS₃ f a₁ a₂ a₃) a₄
+
+def mkAppS₅ (f a₁ a₂ a₃ a₄ a₅ : Expr) : m Expr := do
+  mkAppS (← mkAppS₄ f a₁ a₂ a₃ a₄) a₅
+
 end Lean.Meta.Sym.Internal
@@ -44,8 +44,11 @@ first because solving it often solves `?w`.
 def mkResultPos (pattern : Pattern) : List Nat := Id.run do
   let auxPrefix := `_sym_pre
   -- Initialize the "found" mask with arguments that can be synthesized by type-class resolution.
-  let mut found := pattern.isInstance
+  let numArgs := pattern.varTypes.size
+  let mut found := if let some varInfos := pattern.varInfos? then
+    varInfos.argsInfo.map fun info : ProofInstArgInfo => info.isInstance
+  else
+    Array.replicate numArgs false
   let auxVars := pattern.varTypes.mapIdx fun i _ => mkFVar ⟨.num auxPrefix i⟩
   -- Collect arguments that occur in the pattern
   for fvarId in collectFVars {} (pattern.pattern.instantiateRev auxVars) |>.fvarIds do

@@ -96,6 +99,10 @@ def mkValue (expr : Expr) (pattern : Pattern) (result : MatchUnifyResult) : Expr
   else
     mkAppN (expr.instantiateLevelParams pattern.levelParams result.us) result.args

+public inductive ApplyResult where
+  | failed
+  | goals (mvarIds : List MVarId)
+
 /--
 Applies a backward rule to a goal, returning new subgoals.

@@ -103,15 +110,23 @@ Applies a backward rule to a goal, returning new subgoals.
 2. Assigns the goal metavariable to the theorem application
 3. Returns new goals for unassigned arguments (per `resultPos`)

-Throws an error if unification fails.
+Returns `.failed` if unification fails.
 -/
-public def BackwardRule.apply (mvarId : MVarId) (rule : BackwardRule) : SymM (List MVarId) := mvarId.withContext do
+public def BackwardRule.apply (mvarId : MVarId) (rule : BackwardRule) : SymM ApplyResult := mvarId.withContext do
   let decl ← mvarId.getDecl
   if let some result ← rule.pattern.unify? decl.type then
     mvarId.assign (mkValue rule.expr rule.pattern result)
-    return rule.resultPos.map fun i =>
+    return .goals <| rule.resultPos.map fun i =>
       result.args[i]!.mvarId!
   else
-    throwError "rule is not applicable to goal{mvarId}rule:{indentExpr rule.expr}"
+    return .failed

+/--
+Similar to `BackwardRule.apply`, but throws an error if unification fails.
+-/
+public def BackwardRule.apply' (mvarId : MVarId) (rule : BackwardRule) : SymM (List MVarId) := do
+  let .goals mvarIds ← rule.apply mvarId
+    | throwError "rule is not applicable to goal{mvarId}rule:{indentExpr rule.expr}"
+  return mvarIds
+
 end Lean.Meta.Sym
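The new `ApplyResult` type makes "try rules until one fits" loops cheap, since failure no longer raises an exception. A sketch assuming the names from this hunk (`applyFirst` itself is hypothetical):

```lean
open Lean Lean.Meta.Sym in
/-- Hypothetical driver: apply the first rule whose pattern unifies with the goal. -/
def applyFirst (mvarId : MVarId) (rules : List BackwardRule) : SymM (List MVarId) := do
  for rule in rules do
    if let .goals mvarIds ← rule.apply mvarId then
      return mvarIds
  throwError "no applicable rule"
```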
src/Lean/Meta/Sym/Grind.lean (new file, 129 lines)
@@ -0,0 +1,129 @@
|
||||
/-
|
||||
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
module
|
||||
prelude
|
||||
public import Lean.Meta.Tactic.Grind.Types
|
||||
public import Lean.Meta.Sym.Simp.SimpM
|
||||
public import Lean.Meta.Sym.Apply
|
||||
import Lean.Meta.Tactic.Grind.Main
|
||||
import Lean.Meta.Sym.Simp.Goal
|
||||
import Lean.Meta.Sym.Intro
|
||||
import Lean.Meta.Sym.Util
|
||||
import Lean.Meta.Tactic.Grind.Solve
|
||||
import Lean.Meta.Tactic.Assumption
|
||||
namespace Lean.Meta.Grind
|
||||
|
||||
/-!
|
||||
# Grind Goal API for Symbolic Simulation
|
||||
|
||||
This module provides an API for building symbolic simulation engines and
|
||||
verification condition generators on top of `grind`. It wraps `Sym` operations
|
||||
to work with `grind`'s `Goal` type, enabling users to carry `grind` state
|
||||
through symbolic execution while using lightweight `Sym` operations for
|
||||
the main loop.
|
||||
|
||||
## Typical usage pattern
|
||||
```
|
||||
let goal ← mkGoal mvarId
|
||||
let .goal xs goal ← goal.introN 2 | failure
|
||||
let .goal goal ← goal.simp methods | failure
|
||||
let goal ← goal.internalizeAll
|
||||
-- ... symbolic execution loop using goal.apply ...
|
||||
let .closed ← goal.grind | failure
|
||||
```
|
||||
|
||||
## Design
|
||||
|
||||
Operations like `introN`, `apply`, and `simp` run in `SymM` for performance.
|
||||
`internalize` and `grind` run in `GrindM` to access the E-graph.
|
||||
-/
|
||||
|
||||
|
||||
/--
|
||||
Creates a `Goal` from an `MVarId`, applying `Sym` preprocessing.
|
||||
Preprocessing ensures the goal is compatible with `Sym` operations.
|
||||
-/
|
||||
public def mkGoal (mvarId : MVarId) : GrindM Goal := do
|
||||
let mvarId ← Sym.preprocessMVar mvarId
|
||||
mkGoalCore mvarId
|
||||
|
||||
open Sym (SymM)
|
||||
|
||||
public inductive IntrosResult where
|
||||
| failed
|
||||
| goal (newDecls : Array FVarId) (goal : Goal)
|
||||
|
||||
/-- Introduces `num` binders from the goal's target. -/
|
||||
public def Goal.introN (goal : Goal) (num : Nat) : SymM IntrosResult := do
|
||||
let .goal xs mvarId ← Sym.introN goal.mvarId num | return .failed
|
||||
return .goal xs { goal with mvarId }
|
||||
|
||||
/-- Introduces binders with the specified names. -/
|
||||
public def Goal.intros (goal : Goal) (names : Array Name) : SymM IntrosResult := do
|
||||
let .goal xs mvarId ← Sym.intros goal.mvarId names | return .failed
|
||||
return .goal xs { goal with mvarId }
|
||||
|
||||
public inductive ApplyResult where
|
||||
| failed
|
||||
| goals (subgoals : List Goal)
|
||||
|
||||
/-- Applies a backward rule, returning subgoals on success. -/
|
||||
public def Goal.apply (goal : Goal) (rule : Sym.BackwardRule) : SymM ApplyResult := do
|
||||
let .goals mvarIds ← rule.apply goal.mvarId | return .failed
|
||||
return .goals <| mvarIds.map fun mvarId => { goal with mvarId }
|
||||
|
||||
public inductive SimpGoalResult where
|
||||
| noProgress
|
||||
| closed
|
||||
| goal (goal : Goal)
|
||||
|
||||
/-- Simplifies the goal using the given methods. -/
|
||||
public def Goal.simp (goal : Goal) (methods : Sym.Simp.Methods := {}) (config : Sym.Simp.Config := {}) : SymM SimpGoalResult := do
|
||||
match (← Sym.simpGoal goal.mvarId methods config) with
|
||||
| .goal mvarId => return .goal { goal with mvarId }
|
||||
| .noProgress => return .noProgress
|
||||
| .closed => return .closed
|
||||
|
||||
/-- Like `simp`, but returns the original goal unchanged when no progress is made. -/
|
||||
public def Goal.simpIgnoringNoProgress (goal : Goal) (methods : Sym.Simp.Methods := {}) (config : Sym.Simp.Config := {}) : SymM SimpGoalResult := do
|
||||
match (← Sym.simpGoal goal.mvarId methods config) with
|
||||
| .goal mvarId => return .goal { goal with mvarId }
|
||||
| .noProgress => return .goal goal
|
||||
| .closed => return .closed
|
||||
|
||||
/--
|
||||
Internalizes the next `num` hypotheses from the local context into the `grind` state (e.g., its E-graph).
|
||||
-/
|
||||
public def Goal.internalize (goal : Goal) (num : Nat) : GrindM Goal := do
|
||||
Grind.processHypotheses goal (some num)
|
||||
|
||||
/-- Internalizes all (un-internalized) hypotheses from the local context into the `grind` state. -/
|
||||
public def Goal.internalizeAll (goal : Goal) : GrindM Goal := do
|
||||
Grind.processHypotheses goal none
|
||||
|
||||
public inductive GrindResult where
|
||||
| failed (goal : Goal)
|
||||
| closed
|
||||
|
||||
/--
|
||||
Attempts to close the goal using `grind`.
|
||||
Returns `.closed` on success, or `.failed` with the first subgoal that failed to be closed.
|
||||
-/
|
||||
public def Goal.grind (goal : Goal) : GrindM GrindResult := do
|
||||
if let some failure ← solve goal then
|
||||
return .failed failure
|
||||
else
|
||||
return .closed
|
||||
|
||||
/--
|
||||
Closes the goal if its target matches a hypothesis.
|
||||
Returns `true` on success.
|
||||
-/
|
||||
public def Goal.assumption (goal : Goal) : MetaM Bool := do
|
||||
-- **TODO**: add indexing
|
||||
goal.mvarId.assumptionCore
|
||||
|
||||
end Lean.Meta.Grind
|
||||
@@ -96,48 +96,39 @@ def introCore (mvarId : MVarId) (max : Nat) (names : Array Name) : SymM (Array F

 def hugeNat := 1000000

+public inductive IntrosResult where
+  | failed
+  | goal (newDecls : Array FVarId) (mvarId : MVarId)
+
 /--
 Introduces leading binders (universal quantifiers and let-expressions) from the goal's target type.

 If `names` is non-empty, introduces (at most) `names.size` binders using the provided names.
 If `names` is empty, introduces all leading binders using inaccessible names.

-Returns the introduced free variable IDs and the updated goal.
-
-Throws an error if the target type does not have a leading binder.
+Returns `.goal newDecls mvarId` with the newly introduced free variable IDs and the updated goal.
+Returns `.failed` if no new declaration was introduced.
 -/
-public def intros (mvarId : MVarId) (names : Array Name := #[]) : SymM (Array FVarId × MVarId) := do
+public def intros (mvarId : MVarId) (names : Array Name := #[]) : SymM IntrosResult := do
   let result ← if names.isEmpty then
     introCore mvarId hugeNat #[]
   else
     introCore mvarId names.size names
   if result.1.isEmpty then
-    throwError "`intros` failed, binder expected"
-  return result
-
-/--
-Introduces a single binder from the goal's target type with the given name.
-
-Returns the introduced free variable ID and the updated goal.
-Throws an error if the target type does not have a leading binder.
--/
-public def intro (mvarId : MVarId) (name : Name) : SymM (FVarId × MVarId) := do
-  let (fvarIds, goal') ← introCore mvarId 1 #[name]
-  if h : 0 < fvarIds.size then
-    return (fvarIds[0], goal')
-  else
-    throwError "`intro` failed, binder expected"
+    return .failed
+  return .goal result.1 result.2

 /--
 Introduces exactly `num` binders from the goal's target type.

-Returns the introduced free variable IDs and the updated goal.
-Throws an error if the target type has fewer than `num` leading binders.
+Returns `.goal newDecls mvarId` if successful, where `newDecls` are the introduced free variable IDs
+and `mvarId` is the updated goal.
+Returns `.failed` if it was not possible to introduce `num` new local declarations.
 -/
-public def introN (mvarId : MVarId) (num : Nat) : SymM (Array FVarId × MVarId) := do
+public def introN (mvarId : MVarId) (num : Nat) : SymM IntrosResult := do
   let result ← introCore mvarId num #[]
   unless result.1.size == num do
-    throwError "`introN` failed, insufficient number of binders"
-  return result
+    return .failed
+  return .goal result.1 result.2

 end Lean.Meta.Sym
src/Lean/Meta/Sym/LitValues.lean (new file, 86 lines)
@@ -0,0 +1,86 @@
|
||||
/-
|
||||
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
module
|
||||
prelude
|
||||
public import Lean.Expr
|
||||
public import Init.Data.Rat
|
||||
public section
|
||||
namespace Lean.Meta.Sym
|
||||
/-!
|
||||
Pure functions for extracting values. They are pure (`OptionT Id`) rather than monadic (`MetaM`).
|
||||
This is possible because `Sym` assumes terms are in canonical form, no `whnf` or
|
||||
reduction is needed to recognize literals.
|
||||
-/
|
||||
def getNatValue? (e : Expr) : OptionT Id Nat := do
|
||||
let_expr OfNat.ofNat _ n _ := e | failure
|
||||
let .lit (.natVal n) := n | failure
|
||||
return n
|
||||
|
||||
def getIntValue? (e : Expr) : OptionT Id Int := do
|
||||
let_expr Neg.neg _ _ a := e | getNatValue? e
|
||||
let v : Int ← getNatValue? a
|
||||
return -v
|
||||
|
||||
def getRatValue? (e : Expr) : OptionT Id Rat := do
|
||||
let_expr HDiv.hDiv _ _ _ _ n d := e | getIntValue? e
|
||||
let n : Rat ← getIntValue? n
|
||||
let d : Rat ← getNatValue? d
|
||||
return n / d
|
||||
|
||||
structure BitVecValue where
|
||||
n : Nat
|
||||
val : BitVec n
|
||||
|
||||
def getBitVecValue? (e : Expr) : OptionT Id BitVecValue :=
|
||||
match_expr e with
|
||||
| BitVec.ofNat nExpr vExpr => do
|
||||
let n ← getNatValue? nExpr
|
||||
let v ← getNatValue? vExpr
|
||||
return ⟨n, BitVec.ofNat n v⟩
|
||||
| BitVec.ofNatLT nExpr vExpr _ => do
|
||||
let n ← getNatValue? nExpr
|
||||
let v ← getNatValue? vExpr
|
||||
return ⟨n, BitVec.ofNat n v⟩
|
||||
| OfNat.ofNat α v _ => do
|
||||
let_expr BitVec n := α | failure
|
||||
let n ← getNatValue? n
|
||||
let .lit (.natVal v) := v | failure
|
||||
return ⟨n, BitVec.ofNat n v⟩
|
||||
| _ => failure
|
||||
|
||||
def getUInt8Value? (e : Expr) : OptionT Id UInt8 := return UInt8.ofNat (← getNatValue? e)
|
||||
def getUInt16Value? (e : Expr) : OptionT Id UInt16 := return UInt16.ofNat (← getNatValue? e)
|
||||
def getUInt32Value? (e : Expr) : OptionT Id UInt32 := return UInt32.ofNat (← getNatValue? e)
|
||||
def getUInt64Value? (e : Expr) : OptionT Id UInt64 := return UInt64.ofNat (← getNatValue? e)
|
||||
def getInt8Value? (e : Expr) : OptionT Id Int8 := return Int8.ofInt (← getIntValue? e)
|
||||
def getInt16Value? (e : Expr) : OptionT Id Int16 := return Int16.ofInt (← getIntValue? e)
|
||||
def getInt32Value? (e : Expr) : OptionT Id Int32 := return Int32.ofInt (← getIntValue? e)
|
||||
def getInt64Value? (e : Expr) : OptionT Id Int64 := return Int64.ofInt (← getIntValue? e)
|
||||
|
||||
structure FinValue where
|
||||
n : Nat
|
||||
val : Fin n
|
||||
|
||||
def getFinValue? (e : Expr) : OptionT Id FinValue := do
|
||||
let_expr OfNat.ofNat α v _ := e | failure
|
||||
let_expr Fin n := α | failure
|
||||
let n ← getNatValue? n
|
||||
let .lit (.natVal v) := v | failure
|
||||
if h : n = 0 then failure else
|
||||
let : NeZero n := ⟨h⟩
|
||||
return { n, val := Fin.ofNat n v }
|
||||
|
||||
def getCharValue? (e : Expr) : OptionT Id Char := do
|
||||
let_expr Char.ofNat n := e | failure
|
||||
let .lit (.natVal n) := n | failure
|
||||
return Char.ofNat n
|
||||
|
||||
def getStringValue? (e : Expr) : Option String :=
|
||||
match e with
|
||||
| .lit (.strVal s) => some s
|
||||
| _ => none
|
||||
|
||||
end Lean.Meta.Sym
|
||||
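Because these extractors only inspect the syntactic shape of an `Expr`, the simplest of them can even be evaluated by `rfl`. An illustrative check (not part of the diff), using `getStringValue?` from the new file:

```lean
open Lean Lean.Meta.Sym in
-- A raw string literal is recognized directly: `getStringValue?` matches on
-- `Expr.lit` with no reduction or elaboration machinery involved.
example : getStringValue? (.lit (.strVal "hi")) = some "hi" := rfl
```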
src/Lean/Meta/Sym/Offset.lean (new file, 93 lines)
@@ -0,0 +1,93 @@
|
||||
/-
|
||||
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
module
|
||||
prelude
|
||||
public import Lean.Meta.Sym.LitValues
|
||||
public section
|
||||
namespace Lean.Meta.Sym
|
||||
/-!
|
||||
# Offset representation for natural number expressions
|
||||
|
||||
This module provides utilities for representing `Nat` expressions in the form `e + k`,
|
||||
where `e` is an arbitrary expression and `k` is a constant.
|
||||
This normalization is used during pattern matching and unification.
|
||||
-/
|
||||
|
||||
/--
|
||||
Represents a natural number expression as a base plus a constant offset.
|
||||
- `.num k` represents the literal `k`
|
||||
- `.add e k` represents `e + k`
|
||||
|
||||
Used for pattern matching and unification.
|
||||
-/
|
||||
inductive Offset where
|
||||
| num (k : Nat)
|
||||
| add (e : Expr) (k : Nat)
|
||||
deriving Inhabited
|
||||
|
||||
/-- Increments the constant part of the offset by `k'`. -/
|
||||
def Offset.inc : Offset → Nat → Offset
|
||||
| .num k, k' => .num (k+k')
|
||||
| .add e k, k' => .add e (k+k')
|
||||
|
||||
/--
|
||||
Returns `some offset` if `e` is an offset term. That is, it is of the form
|
||||
- `Nat.succ a`, OR
|
||||
- `a + k` where `k` is a numeral.
|
||||
|
||||
Assumption: standard instances are used for `OfNat Nat n` and `HAdd Nat Nat Nat`
|
||||
-/
|
||||
partial def isOffset? (e : Expr) : OptionT Id Offset :=
|
||||
match_expr e with
|
||||
| Nat.succ a => do
|
||||
return get a |>.inc 1
|
||||
| HAdd.hAdd α _ _ _ a b => do
|
||||
guard (α.isConstOf ``Nat)
|
||||
let n ← getNatValue? b
|
||||
return get a |>.inc n
|
||||
| _ => failure
|
||||
where
|
||||
get (e : Expr) : Offset :=
|
||||
isOffset? e |>.getD (.add e 0)
|
||||
|
||||
/-- Variant of `isOffset?` that first checks if `declName` is `Nat.succ` or `HAdd.hAdd`. -/
|
||||
def isOffset?' (declName : Name) (p : Expr) : OptionT Id Offset := do
|
||||
guard (declName == ``Nat.succ || declName == ``HAdd.hAdd)
|
||||
isOffset? p
|
||||
|
||||
/-- Returns `true` if `e` is an offset term.-/
|
||||
partial def isOffset (e : Expr) : Bool :=
|
||||
match_expr e with
|
||||
| Nat.succ _ => true
|
||||
| HAdd.hAdd α _ _ _ _ b =>
|
||||
α.isConstOf ``Nat &&
|
||||
match_expr b with
|
||||
| OfNat.ofNat _ n _ => (n matches .lit (.natVal _))
|
||||
| _ => false
|
||||
| _ => false
|
||||
|
||||
/-- Variant of `isOffset?` that first checks if `declName` is `Nat.succ` or `HAdd.hAdd`. -/
|
||||
def isOffset' (declName : Name) (p : Expr) : Bool :=
|
||||
(declName == ``Nat.succ || declName == ``HAdd.hAdd) && isOffset p
|
||||
|
||||
/--
|
||||
Converts the given expression into an offset.
|
||||
Assumptions:
|
||||
- `e` has type `Nat`.
|
||||
- standard instances are used for `OfNat Nat n` and `HAdd Nat Nat Nat`.
|
||||
-/
|
||||
partial def toOffset (e : Expr) : Offset :=
|
||||
match_expr e with
|
||||
| Nat.succ a => toOffset a |>.inc 1
|
||||
| HAdd.hAdd _ _ _ _ a b => Id.run do
|
||||
let some n := getNatValue? b | .add e 0
|
||||
toOffset a |>.inc n
|
||||
| OfNat.ofNat _ n _ => Id.run do
|
||||
let .lit (.natVal n) := n | .add e 0
|
||||
.num n
|
||||
| _ => .add e 0
|
||||
|
||||
end Lean.Meta.Sym
|
||||
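The intended normalization accumulates nested offsets, e.g. `x + 2 + 1` becomes `.add x 3` and `Nat.succ x` becomes `.add x 1`. A hedged sketch of a consumer (`constPart` is a hypothetical helper name; `toOffset` is from the new file):

```lean
open Lean Lean.Meta.Sym in
/-- Hypothetical helper: the accumulated constant part of a normalized `Nat` expression. -/
def constPart (e : Expr) : Nat :=
  match toOffset e with
  | .num k   => k
  | .add _ k => k
```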
@@ -16,6 +16,8 @@ import Lean.Meta.Sym.IsClass
 import Lean.Meta.Sym.MaxFVar
 import Lean.Meta.Sym.ProofInstInfo
 import Lean.Meta.Sym.AlphaShareBuilder
+import Lean.Meta.Sym.LitValues
+import Lean.Meta.Sym.Offset
 namespace Lean.Meta.Sym
 open Internal
@@ -42,6 +44,10 @@ framework (`Sym`). The design prioritizes performance by using a two-phase appro
 - `instantiateRevS` ensures maximal sharing of result expressions
 -/

+/-- Helper function for checking whether types `α` and `β` are definitionally equal during unification/matching. -/
+def isDefEqTypes (α β : Expr) : MetaM Bool := do
+  withReducible <| isDefEq α β
+
 /--
 Collects `ProofInstInfo` for all function symbols occurring in `pattern`.

@@ -56,15 +62,36 @@ def mkProofInstInfoMapFor (pattern : Expr) : MetaM (AssocList Name ProofInstInfo
   return fnInfos

 public structure Pattern where
-  levelParams : List Name
-  varTypes : Array Expr
-  isInstance : Array Bool
-  pattern : Expr
-  fnInfos : AssocList Name ProofInstInfo
+  levelParams : List Name
+  varTypes : Array Expr
+  /--
+  If `some argsInfo`, `argsInfo` stores whether the pattern variables are instances/proofs.
+  It is `none` if no pattern variable is an instance/proof.
+  -/
+  varInfos? : Option ProofInstInfo
+  pattern : Expr
+  fnInfos : AssocList Name ProofInstInfo
+  /--
+  If `checkTypeMask? = some mask`, then we must check the type of pattern variable `i`
+  if `mask[i]` is true.
+  Moreover, `mask.size == varTypes.size`.
+  See `mkCheckTypeMask`.
+  -/
+  checkTypeMask? : Option (Array Bool)
   deriving Inhabited

 def uvarPrefix : Name := `_uvar

+/-- Returns `true` if the `i`th argument / pattern variable is an instance. -/
+def Pattern.isInstance (p : Pattern) (i : Nat) : Bool := Id.run do
+  let some varInfos := p.varInfos? | return false
+  varInfos.argsInfo[i]!.isInstance
+
+/-- Returns `true` if the `i`th argument / pattern variable is a proof. -/
+def Pattern.isProof (p : Pattern) (i : Nat) : Bool := Id.run do
+  let some varInfos := p.varInfos? | return false
+  varInfos.argsInfo[i]!.isProof
+
 def isUVar? (n : Name) : Option Nat := Id.run do
   let .num p idx := n | return none
   unless p == uvarPrefix do return none
@@ -79,6 +106,66 @@ def preprocessPattern (declName : Name) : MetaM (List Name × Expr) := do
|
||||
let type ← preprocessType type
|
||||
return (levelParams, type)
|
||||
|
||||
/--
|
||||
Creates a mask indicating which pattern variables require type checking during matching.
|
||||
|
||||
When matching a pattern against a target expression, we must ensure that pattern variable
|
||||
assignments are type-correct. However, checking types for every variable is expensive.
|
||||
This function identifies which variables actually need type checking.
|
||||
|
||||
**Key insight**: A pattern variable appearing as an argument to a function application
does not need its type checked separately, because the type information is already
encoded in the application structure, and we assume the input is type correct.

**Variables that need type checking**:
- Variables in function position: `f x` where `f` is a pattern variable
- Variables in binder domains or bodies: `∀ x : α, β` or `fun x : α => b`
- Variables appearing alone (not as part of any application)

**Variables that skip type checking**:
- Variables appearing only as arguments to applications: in `f x`, the variable `x`
  does not need checking because the type of `f` constrains the type of `x`

**Examples**:
- `bv0_eq (x : BitVec 0) : x = 0`: pattern is just `x`, must check type to ensure `BitVec 0`
- `forall_true : (∀ _ : α, True) = True`: `α` appears in binder domain, must check
- `Nat.add_zero (x : Nat) : x + 0 = x`: `x` is argument to `HAdd.hAdd`, no check needed

**Note**: This analysis is conservative. It may mark some variables for checking even when
the type information is redundant (already determined by other constraints). This is
harmless: just extra work, not incorrect behavior.

Returns an array of booleans parallel to the pattern's `varTypes`, where `true` indicates
the variable's type must be checked against the matched subterm's type.
-/
def mkCheckTypeMask (pattern : Expr) (numPatternVars : Nat) : Array Bool :=
  let mask := Array.replicate numPatternVars false
  go pattern 0 false mask
where
  go (e : Expr) (offset : Nat) (isArg : Bool) : Array Bool → Array Bool :=
    match e with
    | .app f a => go f offset isArg ∘ go a offset true
    | .letE .. => unreachable! -- We zeta-reduce at `preprocessType`
    | .const .. | .fvar _ | .sort _ | .mvar _ | .lit _ => id
    | .mdata _ b => go b offset isArg
    | .proj .. => id -- Should not occur in patterns
    | .forallE _ d b _
    | .lam _ d b _ => go d offset false ∘ go b (offset+1) false
    | .bvar idx => fun mask =>
      if idx >= offset && !isArg then
        let idx := idx - offset
        mask.set! (mask.size - idx - 1) true
      else
        mask

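For intuition, here is how the mask comes out for two of the example patterns above. This is an illustrative sketch only; the `Expr` literals are hand-written (using `Nat.add` in place of the full `HAdd.hAdd` elaboration) and are not part of the change:

```lean
-- Pattern of `bv0_eq`: the variable occurs bare (`isArg = false`), so it is marked.
--   mkCheckTypeMask (.bvar 0) 1 = #[true]
-- Pattern of `Nat.add_zero`: `x` occurs only as an argument of the application,
-- so the traversal reaches it with `isArg = true` and the mask stays false.
--   mkCheckTypeMask (mkApp2 (.const `Nat.add []) (.bvar 0) (mkNatLit 0)) 1 = #[false]
```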
def mkPatternCore (type : Expr) (levelParams : List Name) (varTypes : Array Expr) (pattern : Expr) : MetaM Pattern := do
  let fnInfos ← mkProofInstInfoMapFor pattern
  let checkTypeMask := mkCheckTypeMask pattern varTypes.size
  let checkTypeMask? := if checkTypeMask.all (· == false) then none else some checkTypeMask
  let varInfos? ← forallBoundedTelescope type varTypes.size fun xs _ =>
    mkProofInstArgInfo? xs
  return { levelParams, varTypes, pattern, fnInfos, varInfos?, checkTypeMask? }

/--
Creates a `Pattern` from the type of a theorem.

@@ -96,14 +183,12 @@ public def mkPatternFromDecl (declName : Name) (num? : Option Nat := none) : Met
  let (levelParams, type) ← preprocessPattern declName
  let hugeNumber := 10000000
  let num := num?.getD hugeNumber
-  let rec go (i : Nat) (type : Expr) (varTypes : Array Expr) (isInstance : Array Bool) : MetaM Pattern := do
+  let rec go (i : Nat) (pattern : Expr) (varTypes : Array Expr) : MetaM Pattern := do
    if i < num then
-      if let .forallE _ d b _ := type then
-        return (← go (i+1) b (varTypes.push d) (isInstance.push (isClass? (← getEnv) d).isSome))
-    let pattern := type
-    let fnInfos ← mkProofInstInfoMapFor pattern
-    return { levelParams, varTypes, isInstance, pattern, fnInfos }
-  go 0 type #[] #[]
+      if let .forallE _ d b _ := pattern then
+        return (← go (i+1) b (varTypes.push d))
+    mkPatternCore type levelParams varTypes pattern
+  go 0 type #[]

/--
Creates a `Pattern` from an equational theorem, using the left-hand side of the equation.
@@ -118,15 +203,14 @@ Throws an error if the theorem's conclusion is not an equality.
-/
public def mkEqPatternFromDecl (declName : Name) : MetaM (Pattern × Expr) := do
  let (levelParams, type) ← preprocessPattern declName
-  let rec go (type : Expr) (varTypes : Array Expr) (isInstance : Array Bool) : MetaM (Pattern × Expr) := do
-    if let .forallE _ d b _ := type then
-      return (← go b (varTypes.push d) (isInstance.push (isClass? (← getEnv) d).isSome))
+  let rec go (pattern : Expr) (varTypes : Array Expr) : MetaM (Pattern × Expr) := do
+    if let .forallE _ d b _ := pattern then
+      return (← go b (varTypes.push d))
    else
-      let_expr Eq _ lhs rhs := type | throwError "resulting type for `{.ofConstName declName}` is not an equality"
-      let pattern := lhs
-      let fnInfos ← mkProofInstInfoMapFor pattern
-      return ({ levelParams, varTypes, isInstance, pattern, fnInfos }, rhs)
-  go type #[] #[]
+      let_expr Eq _ lhs rhs := pattern | throwError "resulting type for `{.ofConstName declName}` is not an equality"
+      let pattern ← mkPatternCore type levelParams varTypes lhs
+      return (pattern, rhs)
+  go type #[]

structure UnifyM.Context where
  pattern : Pattern
@@ -139,6 +223,11 @@ structure UnifyM.State where
  ePending : Array (Expr × Expr) := #[]
  uPending : Array (Level × Level) := #[]
  iPending : Array (Expr × Expr) := #[]
+  /--
+  Contains the indices of the pattern variables whose types must be checked
+  against the types of the values assigned to them.
+  -/
+  tPending : Array Nat := #[]
  us : List Level := []
  args : Array Expr := #[]

@@ -153,6 +242,14 @@ def pushLevelPending (u : Level) (v : Level) : UnifyM Unit :=
def pushInstPending (p : Expr) (e : Expr) : UnifyM Unit :=
  modify fun s => { s with iPending := s.iPending.push (p, e) }

+/--
+Marks pattern variable `i` for type checking. That is, at the end of phase 1
+we must check whether the type of this pattern variable is compatible with the type of
+the value assigned to it.
+-/
+def pushCheckTypePending (i : Nat) : UnifyM Unit :=
+  modify fun s => { s with tPending := s.tPending.push i }
+
def assignExprIfUnassigned (bidx : Nat) (e : Expr) : UnifyM Unit := do
  let s ← get
  let i := s.eAssignment.size - bidx - 1
@@ -169,6 +266,8 @@ def assignExpr (bidx : Nat) (e : Expr) : UnifyM Bool := do
    return true
  else
    modify fun s => { s with eAssignment := s.eAssignment.set! i (some e) }
+    if (← read).pattern.checkTypeMask?.isSome then
+      pushCheckTypePending i
    return true

def assignLevel (uidx : Nat) (u : Level) : UnifyM Bool := do
@@ -265,13 +364,43 @@ where
    let some value ← fvarId.getValue? | return false
    process p value

-  processApp (p : Expr) (e : Expr) : UnifyM Bool := do
-    let f := p.getAppFn
-    let .const declName _ := f | processAppDefault p e
+  processOffset (p : Offset) (e : Offset) : UnifyM Bool := do
+    -- **Note** Recall that we don't assume patterns are maximally shared terms.
+    match p, e with
+    | .num _, .num _ => unreachable!
+    | .num k₁, .add e k₂ =>
+      if k₁ < k₂ then return false
+      process (mkNatLit (k₁ - k₂)) e
+    | .add p k₁, .num k₂ =>
+      if k₂ < k₁ then return false
+      process p (← share (mkNatLit (k₂ - k₁)))
+    | .add p k₁, .add e k₂ =>
+      if k₁ == k₂ then
+        process p e
+      else if k₁ < k₂ then
+        if k₁ == 0 then return false
+        process p (← share (mkNatAdd e (mkNatLit (k₂ - k₁))))
+      else
+        if k₂ == 0 then return false
+        process (mkNatAdd p (mkNatLit (k₁ - k₂))) e
+
+  processConstApp (declName : Name) (p : Expr) (e : Expr) : UnifyM Bool := do
+    let some info := (← read).pattern.fnInfos.find? declName | process.processAppDefault p e
+    let numArgs := p.getAppNumArgs
+    processAppWithInfo p e (numArgs - 1) info
+
+  processApp (p : Expr) (e : Expr) : UnifyM Bool := withIncRecDepth do
+    let f := p.getAppFn
+    let .const declName _ := f | processAppDefault p e
+    if (← processConstApp declName p e) then
+      return true
+    else if let some p' := isOffset?' declName p then
+      processOffset p' (toOffset e)
+    else if let some e' := isOffset? e then
+      processOffset (toOffset p) e'
+    else
+      return false

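To illustrate the offset cases above, an informal sketch of how `processOffset` cancels the common numeric offset and recurses with `process` (`p` is the pattern side, `e` the term side):

```lean
--   p + 2  vs  e + 5   ↦   process p (e + 3)     -- k₁ < k₂
--   p + 5  vs  e + 2   ↦   process (p + 3) e     -- k₂ < k₁
--   7      vs  e + 3   ↦   process 4 e           -- numeral vs add
--   p + 2  vs  1       ↦   fail, since 1 < 2     -- term offset too small
```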
  processAppWithInfo (p : Expr) (e : Expr) (i : Nat) (info : ProofInstInfo) : UnifyM Bool := do
    let .app fp ap := p | if e.isApp then return false else process p e
    let .app fe ae := e | checkLetVar p e
@@ -369,6 +498,11 @@ structure DefEqM.Context where
  If `unify` is `false`, it contains which variables can be assigned.
  -/
  mvarsNew : Array MVarId := #[]
+  /--
+  If a metavariable is in this collection, then when we perform the assignment `?m := v`,
+  we must check whether the types of `?m` and `v` are compatible.
+  -/
+  mvarsToCheckType : Array MVarId := #[]

abbrev DefEqM := ReaderT DefEqM.Context SymM

@@ -481,6 +615,12 @@ def mayAssign (t s : Expr) : SymM Bool := do
  let tMaxFVarDecl ← tMaxFVarId.getDecl
  return tMaxFVarDecl.index ≥ sMaxFVarDecl.index

+@[inline] def whenUndefDo (x : DefEqM LBool) (k : DefEqM Bool) : DefEqM Bool := do
+  match (← x) with
+  | .true => return true
+  | .false => return false
+  | .undef => k
+
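A quick sketch of how `whenUndefDo` is meant to be chained (illustrative only; `fallbackCase` is a placeholder name): each check returns `.true` or `.false` to decide immediately, and `.undef` falls through to the continuation, so candidates are tried in order:

```lean
--   whenUndefDo check₁ do
--   whenUndefDo check₂ do
--   fallbackCase
```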
/--
Attempts to solve a unification constraint `t =?= s` where `t` has the form `?m a₁ ... aₙ`
and satisfies the Miller pattern condition (all `aᵢ` are distinct, newly-introduced free variables).
@@ -495,17 +635,20 @@ The `tFn` parameter must equal `t.getAppFn` (enforced by the proof argument).

Remark: `t` may be of the form `?m`.
-/
-def tryAssignMillerPattern (tFn : Expr) (t : Expr) (s : Expr) (_ : tFn = t.getAppFn) : DefEqM Bool := do
-  let .mvar mvarId := tFn | return false
-  if !(← isAssignableMVar mvarId) then return false
-  if !(← isMillerPatternArgs t) then return false
+def tryAssignMillerPattern (tFn : Expr) (t : Expr) (s : Expr) (_ : tFn = t.getAppFn) : DefEqM LBool := do
+  let .mvar mvarId := tFn | return .undef
+  if !(← isAssignableMVar mvarId) then return .undef
+  if !(← isMillerPatternArgs t) then return .undef
  let s ← if t.isApp then
    mkLambdaFVarsS t.getAppArgs s
  else
    pure s
-  if !(← mayAssign tFn s) then return false
+  if !(← mayAssign tFn s) then return .undef
+  if (← read).mvarsToCheckType.contains mvarId then
+    unless (← Sym.isDefEqTypes (← mvarId.getDecl).type (← inferType s)) do
+      return .false
  mvarId.assign s
-  return true
+  return .true

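For intuition, a classic Miller pattern constraint and its solution (informal sketch): a metavariable applied to distinct, newly-introduced free variables can be solved by abstracting those variables on the other side:

```lean
--   ?m x y =?= f y x        -- x, y distinct fvars
--   ?m := fun x y => f y x
```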
/--
Structural definitional equality for applications without `ProofInstInfo`.
@@ -531,6 +674,11 @@ where
    if (← mvarId.isAssigned) then return false
    if !(← isAssignableMVar mvarId) then return false
    if !(← mayAssign t s) then return false
+    /-
+    **Note**: we don't need to check the type of `mvarId` here, even if the variable is
+    marked for checking, because `tryAssignUnassigned` is invoked only from contexts
+    where `t` and `s` are arguments of function applications.
+    -/
    mvarId.assign s
    return true

@@ -619,11 +767,10 @@ def isDefEqMainImpl (t : Expr) (s : Expr) : DefEqM Bool := do
    isDefEqMain (← instantiateMVarsS t) s
  else if (← isAssignedMVar sFn) then
    isDefEqMain t (← instantiateMVarsS s)
-  else if (← tryAssignMillerPattern tFn t s rfl) then
-    return true
-  else if (← tryAssignMillerPattern sFn s t rfl) then
-    return true
-  else if let .fvar fvarId₁ := t then
+  else
+    whenUndefDo (tryAssignMillerPattern tFn t s rfl) do
+    whenUndefDo (tryAssignMillerPattern sFn s t rfl) do
+    if let .fvar fvarId₁ := t then
      unless (← read).zetaDelta do return false
      let some val₁ ← fvarId₁.getValue? | return false
      isDefEqMain val₁ s
@@ -634,17 +781,19 @@ def isDefEqMainImpl (t : Expr) (s : Expr) : DefEqM Bool := do
  else
    isDefEqApp tFn t s rfl

-abbrev DefEqM.run (unify := true) (zetaDelta := true) (mvarsNew : Array MVarId := #[]) (x : DefEqM α) : SymM α := do
+abbrev DefEqM.run (unify := true) (zetaDelta := true) (mvarsNew : Array MVarId := #[])
+    (mvarsToCheckType : Array MVarId := #[]) (x : DefEqM α) : SymM α := do
  let lctx ← getLCtx
  let lctxInitialNextIndex := lctx.decls.size
-  x { zetaDelta, lctxInitialNextIndex, unify, mvarsNew }
+  x { zetaDelta, lctxInitialNextIndex, unify, mvarsNew, mvarsToCheckType }

/--
A lightweight structural definitional equality for the symbolic simulation framework.
Unlike the full `isDefEq`, it avoids expensive operations while still supporting Miller pattern unification.
-/
-public def isDefEqS (t : Expr) (s : Expr) (unify := true) (zetaDelta := true) (mvarsNew : Array MVarId := #[]) : SymM Bool := do
-  DefEqM.run (unify := unify) (zetaDelta := zetaDelta) (mvarsNew := mvarsNew) do
+public def isDefEqS (t : Expr) (s : Expr) (unify := true) (zetaDelta := true)
+    (mvarsNew : Array MVarId := #[]) (mvarsToCheckType : Array MVarId := #[]) : SymM Bool := do
+  DefEqM.run (unify := unify) (zetaDelta := zetaDelta) (mvarsNew := mvarsNew) (mvarsToCheckType := mvarsToCheckType) do
  isDefEqMain t s

def noPending : UnifyM Bool := do
@@ -655,30 +804,48 @@ def instantiateLevelParamsS (e : Expr) (paramNames : List Name) (us : List Level
  -- We do not assume `e` is maximally shared
  shareCommon (e.instantiateLevelParams paramNames us)

-def mkPreResult : UnifyM Unit := do
+inductive MkPreResultResult where
+  | failed
+  | success (mvarsToCheckType : Array MVarId)
+
+def mkPreResult : UnifyM MkPreResultResult := do
  let us ← (← get).uAssignment.toList.mapM fun
    | some val => pure val
    | none => mkFreshLevelMVar
  let pattern := (← read).pattern
  let varTypes := pattern.varTypes
-  let isInstance := pattern.isInstance
  let eAssignment := (← get).eAssignment
+  let tPending := (← get).tPending
  let mut args := #[]
+  let mut mvarsToCheckType := #[]
  for h : i in *...eAssignment.size do
    if let .some val := eAssignment[i] then
+      if tPending.contains i then
+        let type := varTypes[i]!
+        let type ← instantiateLevelParamsS type pattern.levelParams us
+        let type ← instantiateRevBetaS type args
+        let valType ← inferType val
+        -- **Note**: we have to use the default `isDefEq` because the type of `val`
+        -- is not necessarily normalized.
+        unless (← isDefEqTypes type valType) do
+          return .failed
      args := args.push val
    else
      let type := varTypes[i]!
      let type ← instantiateLevelParamsS type pattern.levelParams us
      let type ← instantiateRevBetaS type args
-      if isInstance[i]! then
+      if pattern.isInstance i then
        if let .some val ← trySynthInstance type then
          args := args.push (← shareCommon val)
          continue
      let mvar ← mkFreshExprMVar type
      let mvar ← shareCommon mvar
+      if let some mask := (← read).pattern.checkTypeMask? then
+        if mask[i]! then
+          mvarsToCheckType := mvarsToCheckType.push mvar.mvarId!
      args := args.push mvar
  modify fun s => { s with args, us }
+  return .success mvarsToCheckType

def processPendingLevel : UnifyM Bool := do
  let uPending := (← get).uPending
@@ -704,7 +871,7 @@ def processPendingInst : UnifyM Bool := do
      return false
  return true

-def processPendingExpr : UnifyM Bool := do
+def processPendingExpr (mvarsToCheckType : Array MVarId) : UnifyM Bool := do
  let ePending := (← get).ePending
  if ePending.isEmpty then return true
  let pattern := (← read).pattern
@@ -715,7 +882,7 @@ def processPendingExpr : UnifyM Bool := do
  let mvarsNew := if unify then #[] else args.filterMap fun
    | .mvar mvarId => some mvarId
    | _ => none
-  DefEqM.run unify zetaDelta mvarsNew do
+  DefEqM.run unify zetaDelta mvarsNew mvarsToCheckType do
  for (t, s) in ePending do
    let t ← instantiateLevelParamsS t pattern.levelParams us
    let t ← instantiateRevBetaS t args
@@ -723,11 +890,11 @@ def processPendingExpr : UnifyM Bool := do
      return false
  return true

-def processPending : UnifyM Bool := do
+def processPending (mvarsToCheckType : Array MVarId) : UnifyM Bool := do
  if (← noPending) then
    return true
  else
-    processPendingLevel <&&> processPendingInst <&&> processPendingExpr
+    processPendingLevel <&&> processPendingInst <&&> processPendingExpr mvarsToCheckType

abbrev UnifyM.run (pattern : Pattern) (unify : Bool) (zetaDelta : Bool) (k : UnifyM α) : SymM α := do
  let eAssignment := pattern.varTypes.map fun _ => none
@@ -745,9 +912,11 @@ def mkResult : UnifyM MatchUnifyResult := do
def main (p : Pattern) (e : Expr) (unify : Bool) (zetaDelta : Bool) : SymM (Option (MatchUnifyResult)) :=
  UnifyM.run p unify zetaDelta do
    unless (← process p.pattern e) do return none
-    mkPreResult
-    unless (← processPending) do return none
-    return some (← mkResult)
+    match (← mkPreResult) with
+    | .failed => return none
+    | .success mvarsToCheckType =>
+      unless (← processPending mvarsToCheckType) do return none
+      return some (← mkResult)

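Informally, the matching entry point `main` now runs a three-stage pipeline (a sketch, summarizing the code above):

```lean
--   1. process p.pattern e              -- phase 1: structural match, queue pending work
--   2. mkPreResult                      -- build `args`/`us`; type-check `tPending` vars;
--                                       -- collect `mvarsToCheckType` for fresh mvars
--   3. processPending mvarsToCheckType  -- discharge level/instance/expr constraints
```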
/--
Attempts to match expression `e` against pattern `p` using purely syntactic matching.

@@ -7,17 +7,38 @@ module
prelude
public import Lean.Meta.Sym.SymM
import Lean.Meta.Sym.IsClass
-import Lean.Meta.Tactic.Grind.Util
+import Lean.Meta.Sym.Util
import Lean.Meta.Transform
namespace Lean.Meta.Sym

/--
Preprocesses types that are used for pattern matching and unification.
-/
public def preprocessType (type : Expr) : MetaM Expr := do
-  let type ← Grind.unfoldReducible type
+  let type ← Sym.unfoldReducible type
  let type ← Core.betaReduce type
  zetaReduce type

+/--
+Analyzes whether the given free variables (aka arguments) are proofs or instances.
+Returns `none` if no arguments are proofs or instances.
+-/
+public def mkProofInstArgInfo? (xs : Array Expr) : MetaM (Option ProofInstInfo) := do
+  let env ← getEnv
+  let mut argsInfo := #[]
+  let mut found := false
+  for x in xs do
+    let type ← Meta.inferType x
+    let isInstance := isClass? env type |>.isSome
+    let isProof ← isProp type
+    if isInstance || isProof then
+      found := true
+    argsInfo := argsInfo.push { isInstance, isProof }
+  if found then
+    return some { argsInfo }
+  else
+    return none
+
/--
Analyzes the type signature of `declName` and returns information about which arguments
are proofs or instances. Returns `none` if no arguments are proofs or instances.
@@ -25,21 +46,7 @@ are proofs or instances. Returns `none` if no arguments are proofs or instances.
public def mkProofInstInfo? (declName : Name) : MetaM (Option ProofInstInfo) := do
  let info ← getConstInfo declName
  let type ← preprocessType info.type
-  forallTelescopeReducing type fun xs _ => do
-    let env ← getEnv
-    let mut argsInfo := #[]
-    let mut found := false
-    for x in xs do
-      let type ← Meta.inferType x
-      let isInstance := isClass? env type |>.isSome
-      let isProof ← isProp type
-      if isInstance || isProof then
-        found := true
-      argsInfo := argsInfo.push { isInstance, isProof }
-    if found then
-      return some { argsInfo }
-    else
-      return none
+  forallTelescopeReducing type fun xs _ => mkProofInstArgInfo? xs

/--
Returns information about the type signature of `declName`. It contains information about which arguments

@@ -5,7 +5,7 @@ Authors: Leonardo de Moura
-/
module
prelude
-public import Lean.Meta.Sym.Simp.Congr
+public import Lean.Meta.Sym.Simp.App
public import Lean.Meta.Sym.Simp.CongrInfo
public import Lean.Meta.Sym.Simp.DiscrTree
public import Lean.Meta.Sym.Simp.Main
@@ -14,3 +14,11 @@ public import Lean.Meta.Sym.Simp.Rewrite
public import Lean.Meta.Sym.Simp.SimpM
public import Lean.Meta.Sym.Simp.Simproc
public import Lean.Meta.Sym.Simp.Theorems
+public import Lean.Meta.Sym.Simp.Have
+public import Lean.Meta.Sym.Simp.Lambda
+public import Lean.Meta.Sym.Simp.Forall
+public import Lean.Meta.Sym.Simp.Debug
+public import Lean.Meta.Sym.Simp.EvalGround
+public import Lean.Meta.Sym.Simp.Discharger
+public import Lean.Meta.Sym.Simp.ControlFlow
+public import Lean.Meta.Sym.Simp.Goal

src/Lean/Meta/Sym/Simp/App.lean (new file, 481 lines)
@@ -0,0 +1,481 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Sym.Simp.SimpM
import Lean.Meta.SynthInstance
import Lean.Meta.Tactic.Simp.Types
import Lean.Meta.Sym.AlphaShareBuilder
import Lean.Meta.Sym.InferType
import Lean.Meta.Sym.Simp.Result
import Lean.Meta.Sym.Simp.CongrInfo
namespace Lean.Meta.Sym.Simp
open Internal

/-!
# Simplifying Application Arguments and Congruence Lemma Application

This module provides functions for building congruence proofs during simplification.
Given a function application `f a₁ ... aₙ` where some arguments are rewritable,
we recursively simplify those arguments (via `simp`) and construct a proof that the
original expression equals the simplified one.

The key challenge is efficiency: we want to avoid repeatedly inferring types or destroying
sharing. The `CongrInfo` type (see `SymM.lean`) categorizes functions
by their argument structure, allowing us to choose the most efficient proof strategy:

- `fixedPrefix`: Use simple `congrArg`/`congrFun'`/`congr` for trailing arguments. We exploit
  the fact that there are no dependent arguments in the suffix and use the cheaper `congrFun'`
  instead of `congrFun`.
- `interlaced`: Mix rewritable and fixed arguments. It may have to use `congrFun` for fixed
  dependent arguments.
- `congrTheorem`: Apply a pre-generated congruence theorem for dependent arguments.

**Design principle**: Never infer the type of proofs. This avoids expensive type
inference on proof terms, which can be arbitrarily complex, and often destroys sharing.
-/

/--
Helper function for constructing a congruence proof using `congrFun'`, `congrArg`, and `congr`.
For the dependent case, use `mkCongrFun`.
-/
public def mkCongr (e : Expr) (f a : Expr) (fr : Result) (ar : Result) (_ : e = .app f a) : SymM Result := do
  let mkCongrPrefix (declName : Name) : SymM Expr := do
    let α ← inferType a
    let u ← getLevel α
    let β ← inferType e
    let v ← getLevel β
    return mkApp2 (mkConst declName [u, v]) α β
  match fr, ar with
  | .rfl _, .rfl _ => return .rfl
  | .step f' hf _, .rfl _ =>
    let e' ← mkAppS f' a
    let h := mkApp4 (← mkCongrPrefix ``congrFun') f f' hf a
    return .step e' h
  | .rfl _, .step a' ha _ =>
    let e' ← mkAppS f a'
    let h := mkApp4 (← mkCongrPrefix ``congrArg) a a' f ha
    return .step e' h
  | .step f' hf _, .step a' ha _ =>
    let e' ← mkAppS f' a'
    let h := mkApp6 (← mkCongrPrefix ``congr) f f' a a' hf ha
    return .step e' h

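For reference, the three basic lemmas can be exercised directly. A small self-contained sketch using the standard-library `congrFun` (the non-dependent `congrFun'` is what this change introduces):

```lean
-- How the basic congruence lemmas assemble equality proofs for applications:
example (f : Nat → Nat) (a a' : Nat) (ha : a = a') : f a = f a' :=
  congrArg f ha
example (f f' : Nat → Nat) (a : Nat) (hf : f = f') : f a = f' a :=
  congrFun hf a
example (f f' : Nat → Nat) (a a' : Nat) (hf : f = f') (ha : a = a') : f a = f' a' :=
  congr hf ha
```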
/--
Returns a proof using `congrFun`:
```
congrFun.{u, v} {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x} (h : f = g) (a : α) : f a = g a
```
-/
def mkCongrFun (e : Expr) (f a : Expr) (f' : Expr) (hf : Expr) (_ : e = .app f a) (done := false) : SymM Result := do
  let .forallE x _ βx _ ← whnfD (← inferType f)
    | throwError "failed to build congruence proof, function expected{indentExpr f}"
  let α ← inferType a
  let u ← getLevel α
  let v ← getLevel (← inferType e)
  let β := Lean.mkLambda x .default α βx
  let e' ← mkAppS f' a
  let h := mkApp6 (mkConst ``congrFun [u, v]) α β f f' hf a
  return .step e' h done

/--
Handles simplification of over-applied function terms.

When a function has more arguments than expected by its `CongrInfo`, we need to handle
the "extra" arguments separately. This function peels off `numArgs` trailing applications,
simplifies the remaining function using `simpFn`, then rebuilds the term by simplifying
and re-applying the trailing arguments.

**Over-application** occurs when:
- A function with `fixedPrefix prefixSize suffixSize` is applied to more than `prefixSize + suffixSize` arguments
- A function with an `interlaced` rewritable mask is applied to more than `mask.size` arguments
- A function with a congruence theorem is applied to more than the theorem expects

**Example**: If `f` has `CongrInfo.fixedPrefix 2 3` (expects 5 arguments) but we see `f a₁ a₂ a₃ a₄ a₅ b₁ b₂`,
then `numArgs = 2` (the extra arguments) and we:
1. Recursively simplify `f a₁ a₂ a₃ a₄ a₅` using the fixed prefix strategy (via `simpFn`)
2. Simplify each extra argument `b₁` and `b₂`
3. Rebuild the term using either `mkCongr` (for non-dependent arrows) or `mkCongrFun` (for dependent functions)

**Parameters**:
- `e`: The over-applied expression to simplify
- `numArgs`: Number of excess arguments to peel off
- `simpFn`: Strategy for simplifying the function after peeling (e.g., `simpFixedPrefix`, `simpInterlaced`, or `simpUsingCongrThm`)

**Note**: This is a fallback path without specialized optimizations. The common case (correct number of arguments)
is handled more efficiently by the specific strategies.
-/
public def simpOverApplied (e : Expr) (numArgs : Nat) (simpFn : Expr → SimpM Result) : SimpM Result := do
  let rec visit (e : Expr) (i : Nat) : SimpM Result := do
    if i == 0 then
      simpFn e
    else
      let i := i - 1
      match h : e with
      | .app f a =>
        let fr ← visit f i
        let .forallE _ α β _ ← whnfD (← inferType f) | unreachable!
        if !β.hasLooseBVars then
          if (← isProp α) then
            mkCongr e f a fr .rfl h
          else
            mkCongr e f a fr (← simp a) h
        else match fr with
          | .rfl _ => return .rfl
          | .step f' hf _ => mkCongrFun e f a f' hf h
      | _ => unreachable!
  visit e numArgs

/--
Handles over-applied function expressions by simplifying only the base function and
propagating changes through extra arguments WITHOUT simplifying them.

Unlike `simpOverApplied`, this function does not simplify the extra arguments themselves.
It only uses congruence (`mkCongrFun`) to propagate changes when the base function is simplified.

**Algorithm**:
1. Peel off `numArgs` extra arguments from `e`
2. Apply `simpFn` to simplify the base function
3. If the base changed, propagate the change through each extra argument using `mkCongrFun`
4. Return `.rfl` if the base function was not simplified

**Parameters**:
- `e`: The over-applied expression
- `numArgs`: Number of excess arguments to peel off
- `simpFn`: Strategy for simplifying the base function after peeling

**Contrast with `simpOverApplied`**:
- `simpOverApplied`: Fully simplifies both base and extra arguments
- `propagateOverApplied`: Only simplifies base, appends extra arguments unchanged
-/
public def propagateOverApplied (e : Expr) (numArgs : Nat) (simpFn : Expr → SimpM Result) : SimpM Result := do
  let rec visit (e : Expr) (i : Nat) : SimpM Result := do
    if i == 0 then
      simpFn e
    else
      let i := i - 1
      match h : e with
      | .app f a =>
        let r ← visit f i
        match r with
        | .rfl _ => return r
        | .step f' hf done => mkCongrFun e f a f' hf h done
      | _ => unreachable!
  visit e numArgs

/--
Reduces `type` to weak head normal form and verifies it is a `forall` expression.
If `type` is already a `forall`, returns it unchanged (avoiding unnecessary work).
The result is shared via `share` to maintain maximal sharing invariants.
-/
def whnfToForall (type : Expr) : SymM Expr := do
  if type.isForall then return type
  let type ← whnfD type
  unless type.isForall do throwError "function type expected{indentD type}"
  share type

/--
Returns the type of an expression `e`. If `n > 0`, then `e` is an application
with at least `n` arguments. This function assumes the `n` trailing arguments are non-dependent.
Given `e` of the form `f a₁ a₂ ... aₙ`, the type of `e` is computed by
inferring the type of `f` and traversing the forall telescope.

We use this function to implement `congrFixedPrefix`. Recall that `inferType` is cached.
This function tries to maximize the likelihood of a cache hit. For example,
suppose `e` is `@HAdd.hAdd Nat Nat Nat instAdd 5` and `n = 1`. It is much more likely that
`@HAdd.hAdd Nat Nat Nat instAdd` is already in the cache than
`@HAdd.hAdd Nat Nat Nat instAdd 5`.
-/
def getFnType (e : Expr) (n : Nat) : SymM Expr := do
  match n with
  | 0 => inferType e
  | n+1 =>
    let type ← getFnType e.appFn! n
    let .forallE _ _ β _ ← whnfToForall type | unreachable!
    return β

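Concretely, for the example in the docstring, an informal trace (not part of the file):

```lean
-- getFnType (@HAdd.hAdd Nat Nat Nat instAdd 5) 1
--   = codomain of (← inferType (@HAdd.hAdd Nat Nat Nat instAdd))  -- cache-friendly key
--   = codomain of (Nat → Nat → Nat)
--   = Nat → Nat    -- the type of the whole application, via the cached prefix
```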
/--
Simplifies arguments of a function application with a fixed prefix structure.
Recursively simplifies the trailing `suffixSize` arguments, leaving the first
`prefixSize` arguments unchanged.

For a function with `CongrInfo.fixedPrefix prefixSize suffixSize`, the arguments
are structured as:
```
f a₁ ... aₚ b₁ ... bₛ
  └───────┘ └───────┘
    prefix    suffix (rewritable)
```

The prefix arguments (types, instances) should
not be rewritten directly. Only the suffix arguments are recursively simplified.

**Performance optimization**: We avoid calling `inferType` on applied expressions
like `f a₁ ... aₚ b₁` or `f a₁ ... aₚ b₁ ... bₛ`, which would have poor cache hit rates.
Instead, we infer the type of the function prefix `f a₁ ... aₚ`
(e.g., `@HAdd.hAdd Nat Nat Nat instAdd`), which is probably shared across many applications,
then traverse the forall telescope to extract argument and result types as needed.

The helper `go` returns `Result × Expr` where the `Expr` is the function type at that
position. However, the type is only meaningful (non-`default`) when `Result` is
`.step`, since we only need types for constructing congruence proofs. This avoids
unnecessary type inference when no rewriting occurs.
-/
public def simpFixedPrefix (e : Expr) (prefixSize : Nat) (suffixSize : Nat) : SimpM Result := do
  let numArgs := e.getAppNumArgs
  if numArgs ≤ prefixSize then
    -- Nothing to be done
    return .rfl
  else if numArgs > prefixSize + suffixSize then
    simpOverApplied e (numArgs - prefixSize - suffixSize) (main suffixSize)
  else
    main (numArgs - prefixSize) e
where
  main (n : Nat) (e : Expr) : SimpM Result := do
    return (← go n e).1

  go (i : Nat) (e : Expr) : SimpM (Result × Expr) := do
    if i == 0 then
      return (.rfl, default)
    else
      let .app f a := e | unreachable!
      let (hf, fType) ← go (i-1) f
      match hf, (← simp a) with
      | .rfl _, .rfl _ => return (.rfl, default)
      | .step f' hf _, .rfl _ =>
        let .forallE _ α β _ ← whnfToForall fType | unreachable!
        let e' ← mkAppS f' a
        let u ← getLevel α
        let v ← getLevel β
        let h := mkApp6 (mkConst ``congrFun' [u, v]) α β f f' hf a
        return (.step e' h, β)
      | .rfl _, .step a' ha _ =>
        let fType ← getFnType f (i-1)
        let .forallE _ α β _ ← whnfToForall fType | unreachable!
        let e' ← mkAppS f a'
        let u ← getLevel α
        let v ← getLevel β
        let h := mkApp6 (mkConst ``congrArg [u, v]) α β a a' f ha
        return (.step e' h, β)
      | .step f' hf _, .step a' ha _ =>
        let .forallE _ α β _ ← whnfToForall fType | unreachable!
        let e' ← mkAppS f' a'
        let u ← getLevel α
        let v ← getLevel β
        let h := mkApp8 (mkConst ``congr [u, v]) α β f f' a a' hf ha
        return (.step e' h, β)

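A concrete instance of the fixed-prefix shape, as an informal sketch (the `CongrInfo` values are assumed for illustration):

```lean
-- `@HAdd.hAdd Nat Nat Nat instAdd x y` would get `CongrInfo.fixedPrefix 4 2`:
-- the four leading arguments (the types and the instance) are fixed, only `x`
-- and `y` are simplified, and the proof is assembled from the cheap
-- `congrFun'`/`congrArg`/`congr` lemmas.
```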
/--
Simplifies arguments of a function application with interlaced rewritable/fixed arguments.
Uses `rewritable[i]` to determine whether argument `i` should be simplified.
For rewritable arguments, calls `simp` and uses `congrFun'`, `congrArg`, and `congr`; for fixed arguments,
uses `congrFun` to propagate changes from earlier arguments.
-/
public def simpInterlaced (e : Expr) (rewritable : Array Bool) : SimpM Result := do
  let numArgs := e.getAppNumArgs
  if h : numArgs = 0 then
    -- Nothing to be done
    return .rfl
  else if h : numArgs > rewritable.size then
    simpOverApplied e (numArgs - rewritable.size) (go rewritable.size · (Nat.le_refl _))
  else
    go numArgs e (by omega)
where
  go (i : Nat) (e : Expr) (h : i ≤ rewritable.size) : SimpM Result := do
    if h : i = 0 then
      return .rfl
    else
      match h : e with
      | .app f a =>
        let fr ← go (i - 1) f (by omega)
        if rewritable[i - 1] then
          mkCongr e f a fr (← simp a) h
        else match fr with
          | .rfl _ => return .rfl
          | .step f' hf _ => mkCongrFun e f a f' hf h
      | _ => unreachable!

/--
Helper function used by `simpUsingCongrThm`. The idea is to initialize `argResults` lazily
when we get the first non-`.rfl` result.
-/
def pushResult (argResults : Array Result) (numEqs : Nat) (result : Result) : Array Result :=
  match result with
  | .rfl .. => if argResults.size > 0 then argResults.push result else argResults
  | .step .. =>
    if argResults.size < numEqs then
      Array.replicate numEqs .rfl |>.push result
    else
      argResults.push result

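The lazy-initialization behavior can be illustrated with a standalone model; `R` and `pushResultModel` below are simplified stand-ins for `Result` and `pushResult`, not the actual types:

```lean
inductive R where
  | rfl | step
deriving BEq, Repr

def pushResultModel (acc : Array R) (numSoFar : Nat) (r : R) : Array R :=
  match r with
  | .rfl  => if acc.size > 0 then acc.push r else acc
  | .step =>
    if acc.size < numSoFar then
      -- First non-`rfl` result: backfill the `.rfl` slots we skipped so far
      Array.replicate numSoFar .rfl |>.push r
    else
      acc.push r

-- While every result is `.rfl`, the array stays empty (no allocation):
#guard pushResultModel #[] 0 .rfl == #[]
#guard pushResultModel #[] 1 .rfl == #[]
-- The first `.step` backfills the preceding `.rfl` slots:
#guard pushResultModel #[] 2 .step == #[.rfl, .rfl, .step]
-- After that, `.rfl` results are recorded normally:
#guard pushResultModel #[.rfl, .step] 2 .rfl == #[.rfl, .step, .rfl]
```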
/--
Simplifies arguments of a function application using a pre-generated congruence theorem.

This strategy is used for functions that have complex argument dependencies, particularly
those with proof arguments or `Decidable` instances. Unlike `simpFixedPrefix` and
`simpInterlaced`, which construct proofs on the fly using basic congruence lemmas
(`congrArg`, `congrFun`, `congrFun'`, `congr`), this function applies a specialized
congruence theorem that was pre-generated for the specific function being simplified.

See type `CongrArgKind`.

**Algorithm**:
1. Recursively simplify all `.eq` arguments (via `simpEqArgs`)
2. If all simplifications return `.rfl`, the overall result is `.rfl`
3. Otherwise, construct the final proof by:
   - Starting with the congruence theorem's proof term
   - Applying original arguments and their simplification results
   - Re-synthesizing subsingleton instances when their dependencies change
   - Removing unnecessary casts from the result

**Key examples**:

1. `ite`: Has type `{α : Sort u} → (c : Prop) → [Decidable c] → α → α → α`
   - Argument kinds: `[.fixed, .eq, .subsingletonInst, .eq, .eq]`
   - When simplifying `ite (x > 0) a b`, if `x > 0` simplifies to `True`, we must
     re-synthesize `[Decidable True]` because the original `[Decidable (x > 0)]`
     instance is no longer type-correct

2. `GetElem.getElem`: Has type
   ```
   {coll : Type u} → {idx : Type v} → {elem : Type w} → {valid : coll → idx → Prop} →
   [GetElem coll idx elem valid] → (xs : coll) → (i : idx) → valid xs i → elem
   ```
   - The proof argument `valid xs i` depends on the earlier arguments `xs` and `i`
   - When `xs` or `i` is simplified, the proof is adjusted in the `rhs` of the
     auto-generated theorem.
-/
def simpUsingCongrThm (e : Expr) (thm : CongrTheorem) : SimpM Result := do
  let argKinds := thm.argKinds
  /-
  Constructs the non-`rfl` result. `argResults` contains the result for arguments with kind `.eq`.
  There is at least one non-`rfl` result in `argResults`.
  -/
  let mkNonRflResult (argResults : Array Result) : SimpM Result := do
    let mut proof := thm.proof
    let mut type := thm.type
    let mut j := 0 -- index at argResults
    let mut subst := #[]
    let args := e.getAppArgs
    for arg in args, kind in argKinds do
      proof := mkApp proof arg
      type := type.bindingBody!
      match kind with
      | .fixed => subst := subst.push arg
      | .cast => subst := subst.push arg
      | .subsingletonInst =>
        subst := subst.push arg
        let clsNew := type.bindingDomain!.instantiateRev subst
        let instNew ← if (← isDefEqI (← inferType arg) clsNew) then
          pure arg
        else
          let .some val ← trySynthInstance clsNew | return .rfl
          pure val
        proof := mkApp proof instNew
        subst := subst.push instNew
        type := type.bindingBody!
      | .eq =>
        subst := subst.push arg
        match argResults[j]! with
        | .rfl _ =>
          let h ← mkEqRefl arg
          proof := mkApp2 proof arg h
          subst := subst.push arg |>.push h
        | .step arg' h _ =>
          proof := mkApp2 proof arg' h
          subst := subst.push arg' |>.push h
        type := type.bindingBody!.bindingBody!
        j := j + 1
      | _ => unreachable!
    let_expr Eq _ _ rhs := type | unreachable!
    let rhs := rhs.instantiateRev subst
    let hasCast := argKinds.any (· matches .cast)
    let rhs ← if hasCast then Simp.removeUnnecessaryCasts rhs else pure rhs
    let rhs ← share rhs
    return .step rhs proof
  /-
  Recursively simplifies arguments of kind `.eq`. The array `argResults` is initialized lazily
  as soon as the simplifier returns a non-`rfl` result for some argument.
  `numEqs` is the number of `.eq` arguments found so far.
  -/
  let rec simpEqArgs (e : Expr) (i : Nat) (numEqs : Nat) (argResults : Array Result) : SimpM Result := do
    match e with
    | .app f a =>
      match argKinds[i]! with
      | .subsingletonInst
      | .fixed => simpEqArgs f (i-1) numEqs argResults
      | .cast => simpEqArgs f (i-1) numEqs argResults
      | .eq => simpEqArgs f (i-1) (numEqs+1) (pushResult argResults numEqs (← simp a))
      | _ => unreachable!
    | _ =>
      if argResults.isEmpty then
        return .rfl
      else
        mkNonRflResult argResults.reverse
  let numArgs := e.getAppNumArgs
  if numArgs > argKinds.size then
    simpOverApplied e (numArgs - argKinds.size) (simpEqArgs · (argKinds.size - 1) 0 #[])
  else if numArgs < argKinds.size then
    /-
    **Note**: under-applied case. This could be optimized, but the case is so
    rare that it is not worth the effort; we just reuse `simpOverApplied`.
    -/
    simpOverApplied e e.getAppNumArgs (fun _ => return .rfl)
  else
    simpEqArgs e (argKinds.size - 1) 0 #[]

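The `Decidable` re-synthesis described in the docstring above can be illustrated with a self-contained sketch: once the condition changes, the old instance no longer type-checks against the new condition, and the synthesized replacement is reconciled using core's `Subsingleton (Decidable _)` instance. This is an illustrative example, not the generated congruence theorem itself:

```lean
universe u

-- After the condition `c` is rewritten to `c'`, `inst : Decidable c` cannot be
-- reused for `ite α c' …`, so a fresh `inst' : Decidable c'` must be synthesized;
-- the two instances agree propositionally because `Decidable` is a subsingleton.
example {α : Sort u} (c c' : Prop) [inst : Decidable c] [inst' : Decidable c']
    (a b : α) (hc : c = c') :
    @ite α c inst a b = @ite α c' inst' a b := by
  subst hc
  rw [Subsingleton.elim inst inst']
```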
/--
Main entry point for simplifying function application arguments.
Dispatches to the appropriate strategy based on the function's `CongrInfo`.
-/
public def simpAppArgs (e : Expr) : SimpM Result := do
  let f := e.getAppFn
  match (← getCongrInfo f) with
  | .none => return .rfl
  | .fixedPrefix prefixSize suffixSize => simpFixedPrefix e prefixSize suffixSize
  | .interlaced rewritable => simpInterlaced e rewritable
  | .congrTheorem thm => simpUsingCongrThm e thm

/--
Simplifies arguments in a specified range `[start, stop)` of a function application.

Given an expression `f a₀ a₁ ... aₙ`, this function simplifies only the arguments
at positions `start ≤ i < stop`, leaving arguments outside this range unchanged.
Changes are propagated using congruence lemmas.

**Use case**: Useful for control-flow simplification where we want to simplify only
discriminants of a `match` expression without touching the branches.
-/
public def simpAppArgRange (e : Expr) (start stop : Nat) : SimpM Result := do
  let numArgs := e.getAppNumArgs
  assert! start < stop
  if numArgs < start then return .rfl
  let numArgs := numArgs - start
  let stop := stop - start
  let rec visit (e : Expr) (i : Nat) : SimpM Result := do
    if i == 0 then
      return .rfl
    let i := i - 1
    match h : e with
    | .app f a =>
      let fr ← visit f i
      let skip : SimpM Result := do
        match fr with
        | .rfl _ => return .rfl
        | .step f' hf _ => mkCongrFun e f a f' hf h
      if i < stop then
        let .forallE _ α β _ ← whnfD (← inferType f) | unreachable!
        if !β.hasLooseBVars then
          if (← isProp α) then
            mkCongr e f a fr .rfl h
          else
            mkCongr e f a fr (← simp a) h
        else skip
      else skip
    | _ => unreachable!
  visit e numArgs

end Lean.Meta.Sym.Simp
@@ -1,157 +0,0 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Sym.Simp.SimpM
import Lean.Meta.Sym.AlphaShareBuilder
import Lean.Meta.Sym.InferType
import Lean.Meta.Sym.Simp.Result
import Lean.Meta.Sym.Simp.CongrInfo
namespace Lean.Meta.Sym.Simp
open Internal

/-!
# Simplifying Application Arguments and Congruence Lemma Application

This module provides functions for building congruence proofs during simplification.
Given a function application `f a₁ ... aₙ` where some arguments are rewritable,
we recursively simplify those arguments (via `simp`) and construct a proof that the
original expression equals the simplified one.

The key challenge is efficiency: we want to avoid repeatedly inferring types or
destroying sharing. The `CongrInfo` type (see `SymM.lean`) categorizes functions
by their argument structure, allowing us to choose the most efficient proof strategy:

- `fixedPrefix`: Use simple `congrArg`/`congrFun'`/`congr` for trailing arguments. We exploit
  the fact that there are no dependent arguments in the suffix and use the cheaper `congrFun'`
  instead of `congrFun`.
- `interlaced`: Mix rewritable and fixed arguments. It may have to use `congrFun` for fixed
  dependent arguments.
- `congrTheorem`: Apply a pre-generated congruence theorem for dependent arguments.

**Design principle**: Never infer the type of proofs. This avoids expensive type
inference on proof terms, which can be arbitrarily complex, and often destroys sharing.
-/

/--
Helper function for constructing a congruence proof using `congrFun'`, `congrArg`, `congr`.
For the dependent case, use `mkCongrFun`.
-/
def mkCongr (e : Expr) (f a : Expr) (fr : Result) (ar : Result) (_ : e = .app f a) : SymM Result := do
  let mkCongrPrefix (declName : Name) : SymM Expr := do
    let α ← inferType a
    let u ← getLevel α
    let β ← inferType e
    let v ← getLevel β
    return mkApp2 (mkConst declName [u, v]) α β
  match fr, ar with
  | .rfl _, .rfl _ => return .rfl
  | .step f' hf _, .rfl _ =>
    let e' ← mkAppS f' a
    let h := mkApp4 (← mkCongrPrefix ``congrFun') f f' hf a
    return .step e' h
  | .rfl _, .step a' ha _ =>
    let e' ← mkAppS f a'
    let h := mkApp4 (← mkCongrPrefix ``congrArg) a a' f ha
    return .step e' h
  | .step f' hf _, .step a' ha _ =>
    let e' ← mkAppS f' a'
    let h := mkApp6 (← mkCongrPrefix ``congr) f f' a a' hf ha
    return .step e' h

/--
Returns a proof using `congrFun`
```
congrFun.{u, v} {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x} (h : f = g) (a : α) : f a = g a
```
-/
def mkCongrFun (e : Expr) (f a : Expr) (f' : Expr) (hf : Expr) (_ : e = .app f a) : SymM Result := do
  let .forallE x _ βx _ ← whnfD (← inferType f)
    | throwError "failed to build congruence proof, function expected{indentExpr f}"
  let α ← inferType a
  let u ← getLevel α
  let v ← getLevel (← inferType e)
  let β := Lean.mkLambda x .default α βx
  let e' ← mkAppS f' a
  let h := mkApp6 (mkConst ``congrFun [u, v]) α β f f' hf a
  return .step e' h

/--
Simplify arguments of a function application with a fixed prefix structure.
Recursively simplifies the trailing `suffixSize` arguments, leaving the first
`prefixSize` arguments unchanged.
-/
def congrFixedPrefix (e : Expr) (prefixSize : Nat) (suffixSize : Nat) : SimpM Result := do
  let numArgs := e.getAppNumArgs
  if numArgs ≤ prefixSize then
    -- Nothing to be done
    return .rfl
  else if numArgs > prefixSize + suffixSize then
    -- **TODO**: over-applied case
    return .rfl
  else
    go numArgs e
where
  go (i : Nat) (e : Expr) : SimpM Result := do
    if i == prefixSize then
      return .rfl
    else
      match h : e with
      | .app f a => mkCongr e f a (← go (i - 1) f) (← simp a) h
      | _ => unreachable!

/--
Simplify arguments of a function application with interlaced rewritable/fixed arguments.
Uses `rewritable[i]` to determine whether argument `i` should be simplified.
For rewritable arguments, calls `simp` and uses `congrFun'`, `congrArg`, and `congr`; for fixed arguments,
uses `congrFun` to propagate changes from earlier arguments.
-/
def congrInterlaced (e : Expr) (rewritable : Array Bool) : SimpM Result := do
  let numArgs := e.getAppNumArgs
  if h : numArgs = 0 then
    -- Nothing to be done
    return .rfl
  else if h : numArgs > rewritable.size then
    -- **TODO**: over-applied case
    return .rfl
  else
    go numArgs e (by omega)
where
  go (i : Nat) (e : Expr) (h : i ≤ rewritable.size) : SimpM Result := do
    if h : i = 0 then
      return .rfl
    else
      match h : e with
      | .app f a =>
        let fr ← go (i - 1) f (by omega)
        if rewritable[i - 1] then
          mkCongr e f a fr (← simp a) h
        else match fr with
          | .rfl _ => return .rfl
          | .step f' hf _ => mkCongrFun e f a f' hf h
      | _ => unreachable!

/--
Simplify arguments using a pre-generated congruence theorem.
Used for functions with proof or `Decidable` arguments.
-/
def congrThm (_e : Expr) (_ : CongrTheorem) : SimpM Result := do
  -- **TODO**
  return .rfl

/--
Main entry point for simplifying function application arguments.
Dispatches to the appropriate strategy based on the function's `CongrInfo`.
-/
public def congrArgs (e : Expr) : SimpM Result := do
  let f := e.getAppFn
  match (← getCongrInfo f) with
  | .none => return .rfl
  | .fixedPrefix prefixSize suffixSize => congrFixedPrefix e prefixSize suffixSize
  | .interlaced rewritable => congrInterlaced e rewritable
  | .congrTheorem thm => congrThm e thm

end Lean.Meta.Sym.Simp
src/Lean/Meta/Sym/Simp/ControlFlow.lean (new file, 146 lines)
@@ -0,0 +1,146 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Sym.Simp.SimpM
import Lean.Meta.Sym.AlphaShareBuilder
import Lean.Meta.Sym.InstantiateS
import Lean.Meta.Sym.InferType
import Lean.Meta.Sym.Simp.App
import Lean.Meta.SynthInstance
import Lean.Meta.WHNF
import Lean.Meta.AppBuilder
import Init.Sym.Lemmas
namespace Lean.Meta.Sym.Simp
open Internal

/--
Simplifies a non-dependent `if-then-else` expression.
-/
def simpIte : Simproc := fun e => do
  let numArgs := e.getAppNumArgs
  if numArgs < 5 then return .rfl (done := true)
  propagateOverApplied e (numArgs - 5) fun e => do
    let_expr f@ite α c _ a b := e | return .rfl
    match (← simp c) with
    | .rfl _ =>
      if isSameExpr c (← getTrueExpr) then
        return .step a <| mkApp3 (mkConst ``ite_true f.constLevels!) α a b
      else if isSameExpr c (← getFalseExpr) then
        return .step b <| mkApp3 (mkConst ``ite_false f.constLevels!) α a b
      else
        return .rfl (done := true)
    | .step c' h _ =>
      if isSameExpr c' (← getTrueExpr) then
        return .step a <| mkApp (e.replaceFn ``ite_cond_eq_true) h
      else if isSameExpr c' (← getFalseExpr) then
        return .step b <| mkApp (e.replaceFn ``ite_cond_eq_false) h
      else
        let .some inst' ← trySynthInstance (mkApp (mkConst ``Decidable) c') | return .rfl
        let inst' ← shareCommon inst'
        let e' := e.getBoundedAppFn 4
        let e' ← mkAppS₄ e' c' inst' a b
        let h' := mkApp3 (e.replaceFn ``Sym.ite_cond_congr) c' inst' h
        return .step e' h' (done := true)

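As a quick sanity check, the two `.rfl`-branch reductions in `simpIte` correspond to the core `ite_true`/`ite_false` simp lemmas; with a concrete (`True`/`False`) condition the reduction even holds definitionally:

```lean
#guard (if True then 1 else 2) = 1
#guard (if False then 1 else 2) = 2

-- Definitional once the `Decidable` instance is concrete:
example (a b : Nat) : (if True then a else b) = a := rfl
example (a b : Nat) : (if False then a else b) = b := rfl
```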
/--
Simplifies a dependent `if-then-else` expression.
-/
def simpDIte : Simproc := fun e => do
  let numArgs := e.getAppNumArgs
  if numArgs < 5 then return .rfl (done := true)
  propagateOverApplied e (numArgs - 5) fun e => do
    let_expr f@dite α c _ a b := e | return .rfl
    match (← simp c) with
    | .rfl _ =>
      if isSameExpr c (← getTrueExpr) then
        let a' ← share <| a.betaRev #[mkConst ``True.intro]
        return .step a' <| mkApp3 (mkConst ``dite_true f.constLevels!) α a b
      else if isSameExpr c (← getFalseExpr) then
        let b' ← share <| b.betaRev #[mkConst ``not_false]
        return .step b' <| mkApp3 (mkConst ``dite_false f.constLevels!) α a b
      else
        return .rfl (done := true)
    | .step c' h _ =>
      if isSameExpr c' (← getTrueExpr) then
        let h' ← shareCommon <| mkOfEqTrueCore c h
        let a ← share <| a.betaRev #[h']
        return .step a <| mkApp (e.replaceFn ``dite_cond_eq_true) h
      else if isSameExpr c' (← getFalseExpr) then
        let h' ← shareCommon <| mkOfEqFalseCore c h
        let b ← share <| b.betaRev #[h']
        return .step b <| mkApp (e.replaceFn ``dite_cond_eq_false) h
      else
        let .some inst' ← trySynthInstance (mkApp (mkConst ``Decidable) c') | return .rfl
        let inst' ← shareCommon inst'
        let e' := e.getBoundedAppFn 4
        let h ← shareCommon h
        let a ← share <| mkLambda `h .default c' (a.betaRev #[mkApp4 (mkConst ``Eq.mpr_prop) c c' h (mkBVar 0)])
        let b ← share <| mkLambda `h .default (mkNot c') (b.betaRev #[mkApp4 (mkConst ``Eq.mpr_not) c c' h (mkBVar 0)])
        let e' ← mkAppS₄ e' c' inst' a b
        let h' := mkApp3 (e.replaceFn ``Sym.dite_cond_congr) c' inst' h
        return .step e' h' (done := true)

/--
Simplifies a `cond` expression (aka Boolean `if-then-else`).
-/
def simpCond : Simproc := fun e => do
  let numArgs := e.getAppNumArgs
  if numArgs < 4 then return .rfl (done := true)
  propagateOverApplied e (numArgs - 4) fun e => do
    let_expr f@cond α c a b := e | return .rfl
    match (← simp c) with
    | .rfl _ =>
      if isSameExpr c (← getBoolTrueExpr) then
        return .step a <| mkApp3 (mkConst ``cond_true f.constLevels!) α a b
      else if isSameExpr c (← getBoolFalseExpr) then
        return .step b <| mkApp3 (mkConst ``cond_false f.constLevels!) α a b
      else
        return .rfl (done := true)
    | .step c' h _ =>
      if isSameExpr c' (← getBoolTrueExpr) then
        return .step a <| mkApp (e.replaceFn ``Sym.cond_cond_eq_true) h
      else if isSameExpr c' (← getBoolFalseExpr) then
        return .step b <| mkApp (e.replaceFn ``Sym.cond_cond_eq_false) h
      else
        let e' := e.getBoundedAppFn 3
        let e' ← mkAppS₃ e' c' a b
        let h' := mkApp2 (e.replaceFn ``Sym.cond_cond_congr) c' h
        return .step e' h' (done := true)

/--
Simplifies a `match`-expression.
-/
def simpMatch (declName : Name) : Simproc := fun e => do
  if let some e' ← reduceRecMatcher? e then
    return .step e' (← mkEqRefl e')
  let some info ← getMatcherInfo? declName
    | return .rfl
  -- **Note**: Simplify only the discriminants
  let start := info.numParams + 1
  let stop := start + info.numDiscrs
  let r ← simpAppArgRange e start stop
  match r with
  | .step .. => return r
  | _ => return .rfl (done := true)

/--
Simplifies control-flow expressions such as `if-then-else` and `match` expressions.
It visits only the conditions and discriminants.
-/
public def simpControl : Simproc := fun e => do
  if !e.isApp then return .rfl
  let .const declName _ := e.getAppFn | return .rfl
  if declName == ``ite then
    simpIte e
  else if declName == ``cond then
    simpCond e
  else if declName == ``dite then
    simpDIte e
  else
    simpMatch declName e

end Lean.Meta.Sym.Simp
src/Lean/Meta/Sym/Simp/Debug.lean (new file, 36 lines)
@@ -0,0 +1,36 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Sym.Simp.SimpM
public import Lean.Meta.Sym.Simp.Discharger
import Lean.Meta.Sym.Simp.Theorems
import Lean.Meta.Sym.Simp.Rewrite
import Lean.Meta.Sym.Simp.Goal
import Lean.Meta.Sym.Util
import Lean.Meta.Tactic.Util
import Lean.Meta.AppBuilder
namespace Lean.Meta.Sym
open Simp
/-!
Helper functions for debugging purposes and creating tests.
-/

public def mkSimprocFor (declNames : Array Name) (d : Discharger := dischargeNone) : MetaM Simproc := do
  let mut thms : Theorems := {}
  for declName in declNames do
    thms := thms.insert (← mkTheoremFromDecl declName)
  return thms.rewrite d

public def mkMethods (declNames : Array Name) : MetaM Methods := do
  return { post := (← mkSimprocFor declNames) }

public def simpGoalUsing (declNames : Array Name) (mvarId : MVarId) : MetaM (Option MVarId) := SymM.run do
  let methods ← mkMethods declNames
  let mvarId ← preprocessMVar mvarId
  (← simpGoal mvarId methods).toOption

end Lean.Meta.Sym
src/Lean/Meta/Sym/Simp/Discharger.lean (new file, 121 lines)
@@ -0,0 +1,121 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Lean.Meta.Sym.Simp.SimpM
import Lean.Meta.AppBuilder
namespace Lean.Meta.Sym.Simp

/-!
# Dischargers for Conditional Rewrite Rules

This module provides dischargers for handling conditional rewrite rules in `Sym.simp`.
A discharger attempts to prove side conditions that arise during rewriting.

## Overview

When applying a conditional rewrite rule `h : P → a = b`, the simplifier must prove
the precondition `P` before using the rule. A `Discharger` is a function that attempts
to construct such proofs.

**Example**: Consider the rewrite rule:
```
theorem div_self (n : Nat) (h : n ≠ 0) : n / n = 1
```
When simplifying `x / x`, the discharger must prove `x ≠ 0` to apply this rule.

## Design

Dischargers work by:
1. Attempting to simplify the side condition to `True`
2. If successful, extracting a proof from the simplification result
3. Returning `none` if the condition cannot be discharged

This integrates naturally with `Simproc`-based simplification.

## Important

When using dischargers that access new local declarations introduced when
visiting binders, it is the user's responsibility to set `wellBehavedMethods := false`.
This setting will instruct `simp` to discard the cache after visiting the binder's body.
-/

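The `div_self` scenario from the overview can also be written out directly. Here the "discharged" side condition `x ≠ 0` is converted by hand into the positivity hypothesis that core's `Nat.div_self` expects; this illustrates the proof obligation, not the discharger machinery itself:

```lean
-- Core's `Nat.div_self` takes `0 < n`; a discharged `x ≠ 0` is adapted via
-- `Nat.pos_of_ne_zero`.
example (x : Nat) (h : x ≠ 0) : x / x = 1 :=
  Nat.div_self (Nat.pos_of_ne_zero h)
```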
/--
A discharger attempts to prove propositions that arise as side conditions during rewriting.

Given a proposition `e : Prop`, returns:
- `some proof` if `e` can be proven
- `none` if `e` cannot be discharged

**Usage**: Dischargers are used by the simplifier when applying conditional rewrite rules.
-/
public abbrev Discharger := Expr → SimpM (Option Expr)

def resultToOptionProof (e : Expr) (result : Result) : Option Expr :=
  match result with
  | .rfl _ => none
  | .step e' h _ =>
    if e'.isTrue then
      some <| mkOfEqTrueCore e h
    else
      none

/--
Converts a simplification procedure into a discharger.

A `Simproc` can be used as a discharger by simplifying the side condition and
checking if it reduces to `True`. If so, the equality proof is converted to
a proof of the original proposition.

**Algorithm**:
1. Apply the simproc to the side condition `e`
2. If `e` simplifies to `True` (via proof `h : e = True`), return `ofEqTrue h : e`
3. Otherwise, return `none` (cannot discharge)

**Parameters**:
- `p`: A simplification procedure to use for discharging conditions

**Example**: If `p` simplifies `5 < 10` to `True` via proof `h : (5 < 10) = True`,
then `mkDischargerFromSimproc p` returns `ofEqTrue h : 5 < 10`.
-/
public def mkDischargerFromSimproc (p : Simproc) : Discharger := fun e => do
  return resultToOptionProof e (← p e)

/--
The default discharger uses the simplifier itself to discharge side conditions.

This creates a natural recursive behavior: when applying conditional rules,
the simplifier is invoked to prove their preconditions. This is effective because:

1. **Ground terms**: Conditions like `5 ≠ 0` are evaluated by simprocs
2. **Recursive simplification**: Complex conditions are reduced to simpler ones
3. **Lemma application**: The simplifier can apply other rewrite rules to conditions

It ensures the cached results are discarded, and increases the discharge depth to avoid
infinite recursion.
-/
public def dischargeSimpSelf : Discharger := fun e => do
  if (← readThe Context).dischargeDepth > (← getConfig).maxDischargeDepth then
    return none
  withoutModifyingCache do
    withTheReader Context (fun ctx => { ctx with dischargeDepth := ctx.dischargeDepth + 1 }) do
      return resultToOptionProof e (← simp e)

/--
A discharger that fails to prove any side condition.

This is used when conditional rewrite rules should not be applied. It immediately
returns `none` for all propositions, effectively disabling conditional rewriting.

**Use cases**:
- Testing: Isolating unconditional rewriting behavior
- Performance: Avoiding expensive discharge attempts when conditions are unlikely to hold
- Controlled rewriting: Explicitly disabling conditional rules in specific contexts
-/
public def dischargeNone : Discharger := fun _ =>
  return none

end Lean.Meta.Sym.Simp
Some files were not shown because too many files have changed in this diff.