mirror of
https://github.com/leanprover/lean4.git
synced 2026-03-18 10:54:09 +00:00
Commit 32adce928c
@@ -49,9 +49,8 @@ In the case of `@[extern]` all *irrelevant* types are removed first; see next se
is represented by the representation of that parameter's type.

For example, `{ x : α // p }`, the `Subtype` structure of a value of type `α` and an irrelevant proof, is represented by the representation of `α`.

Similarly, the signed integer types `Int8`, ..., `Int64`, `ISize` are also represented by the unsigned C types `uint8_t`, ..., `uint64_t`, `size_t`, respectively, because they have a trivial structure.

* `Nat` and `Int` are represented by `lean_object *`.
  Their runtime values are either a pointer to an opaque bignum object or, if the lowest bit of the "pointer" is 1 (`lean_is_scalar`), an encoded unboxed natural number or integer (`lean_box`/`lean_unbox`).
* `Nat` is represented by `lean_object *`.
  Its runtime value is either a pointer to an opaque bignum object or, if the lowest bit of the "pointer" is 1 (`lean_is_scalar`), an encoded unboxed natural number (`lean_box`/`lean_unbox`).
* A universe `Sort u`, type constructor `... → Sort u`, or proposition `p : Prop` is *irrelevant* and is either statically erased (see above) or represented as a `lean_object *` with the runtime value `lean_box(0)`.
* Any other type is represented by `lean_object *`.
  Its runtime value is a pointer to an object of a subtype of `lean_object` (see the "Inductive types" section below) or the unboxed value `lean_box(cidx)` for the `cidx`th constructor of an inductive type if this constructor does not have any relevant parameters.
@@ -5,6 +5,11 @@ See below for the checklist for release candidates.

We'll use `v4.6.0` as the intended release version as a running example.

- One week before the planned release, ensure that
  (1) someone has written the release notes and
  (2) someone has written the first draft of the release blog post.
  If there is any material in `./releases_drafts/` on the `releases/v4.6.0` branch, then the release notes are not done.
  (See the section "Writing the release notes".)
- `git checkout releases/v4.6.0`
  (This branch should already exist, from the release candidates.)
- `git pull`
@@ -37,32 +42,16 @@ We'll use `v4.6.0` as the intended release version as a running example.

- Create the tag `v4.6.0` from `master`/`main` and push it.
- Merge the tag `v4.6.0` into the `stable` branch and push it.
- We do this for the repositories:
  - [Batteries](https://github.com/leanprover-community/batteries)
    - No dependencies
    - Toolchain bump PR
    - Create and push the tag
    - Merge the tag into `stable`
  - [lean4checker](https://github.com/leanprover/lean4checker)
    - No dependencies
    - Toolchain bump PR
    - Create and push the tag
    - Merge the tag into `stable`
  - [doc-gen4](https://github.com/leanprover/doc-gen4)
    - Dependencies: exist, but they're not part of the release workflow
    - Toolchain bump PR including updated Lake manifest
    - Create and push the tag
    - There is no `stable` branch; skip this step
  - [Verso](https://github.com/leanprover/verso)
    - Dependencies: exist, but they're not part of the release workflow
    - The `SubVerso` dependency should be compatible with _every_ Lean release simultaneously, rather than following this workflow
    - Toolchain bump PR including updated Lake manifest
    - Create and push the tag
    - There is no `stable` branch; skip this step
  - [Cli](https://github.com/leanprover/lean4-cli)
  - [Batteries](https://github.com/leanprover-community/batteries)
    - No dependencies
    - Toolchain bump PR
    - Create and push the tag
    - There is no `stable` branch; skip this step
    - Merge the tag into `stable`
  - [ProofWidgets4](https://github.com/leanprover-community/ProofWidgets4)
    - Dependencies: `Batteries`
    - Note on versions and branches:
@@ -77,11 +66,18 @@ We'll use `v4.6.0` as the intended release version as a running example.

  - Toolchain bump PR including updated Lake manifest
  - Create and push the tag
  - Merge the tag into `stable`
  - [import-graph](https://github.com/leanprover-community/import-graph)
  - [doc-gen4](https://github.com/leanprover/doc-gen4)
    - Dependencies: exist, but they're not part of the release workflow
    - Toolchain bump PR including updated Lake manifest
    - Create and push the tag
    - There is no `stable` branch; skip this step
  - [plausible](https://github.com/leanprover-community/plausible)
  - [Verso](https://github.com/leanprover/verso)
    - Dependencies: exist, but they're not part of the release workflow
    - The `SubVerso` dependency should be compatible with _every_ Lean release simultaneously, rather than following this workflow
    - Toolchain bump PR including updated Lake manifest
    - Create and push the tag
    - There is no `stable` branch; skip this step
  - [import-graph](https://github.com/leanprover-community/import-graph)
    - Toolchain bump PR including updated Lake manifest
    - Create and push the tag
    - There is no `stable` branch; skip this step
@@ -90,7 +86,7 @@ We'll use `v4.6.0` as the intended release version as a running example.

- Toolchain bump PR notes:
  - In addition to updating the `lean-toolchain` and `lakefile.lean`,
    in `.github/workflows/lean4checker.yml` update the line
    `git checkout v4.6.0` to the appropriate tag.
  - Push the PR branch to the main Mathlib repository rather than a fork, or CI may not work reliably
- Create and push the tag
- Create a new branch from the tag, push it, and open a pull request against `stable`.
@@ -102,7 +98,6 @@ We'll use `v4.6.0` as the intended release version as a running example.

- Toolchain bump PR including updated Lake manifest
- Create and push the tag
- Merge the tag into `stable`
- Run `scripts/release_checklist.py v4.6.0` to check that everything is in order.
- The `v4.6.0` section of `RELEASES.md` is out of sync between
  `releases/v4.6.0` and `master`. This should be reconciled:
  - Replace the `v4.6.0` section on `master` with the `v4.6.0` section on `releases/v4.6.0`
@@ -144,13 +139,16 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.

  git checkout -b releases/v4.7.0
  ```
- In `RELEASES.md` replace `Development in progress` in the `v4.7.0` section with `Release notes to be written.`
- It is essential to choose the nightly that will become the release candidate as early as possible, to avoid confusion.
- We will rely on automatically generated release notes for release candidates,
  and the written release notes will be used for stable versions only.
  It is essential to choose the nightly that will become the release candidate as early as possible, to avoid confusion.
- In `src/CMakeLists.txt`,
  - verify that you see `set(LEAN_VERSION_MINOR 7)` (for whichever `7` is appropriate); this should already have been updated when the development cycle began.
  - `set(LEAN_VERSION_IS_RELEASE 1)` (this should be a change; on `master` and nightly releases it is always `0`).
- Commit your changes to `src/CMakeLists.txt`, and push.
- `git tag v4.7.0-rc1`
- `git push origin v4.7.0-rc1`
- Ping the FRO Zulip that release notes need to be written. The release notes do not block completing the rest of this checklist.
- Now wait, while CI runs.
  - You can monitor this at `https://github.com/leanprover/lean4/actions/workflows/ci.yml`, looking for the `v4.7.0-rc1` tag.
  - This step can take up to an hour.
@@ -250,12 +248,15 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.

# Writing the release notes

Release notes are automatically generated from the commit history, using `script/release_notes.py`.
We are currently trying a system where release notes are compiled all at once by someone looking through the commit history.
The exact steps are a work in progress.
Here is the general idea:

Run this as `script/release_notes.py v4.6.0`, where `v4.6.0` is the *previous* release version. This will generate output
for all commits since that tag. Note that there is output on both stderr, which should be manually reviewed,
and on stdout, which should be manually copied to `RELEASES.md`.

There can also be pre-written entries in `./releases_drafts`, which should all be incorporated in the release notes and then deleted from the branch.
* The work is done right on the `releases/v4.6.0` branch sometime after it is created but before the stable release is made.
  The release notes for `v4.6.0` will later be copied to `master` when we begin a new development cycle.
* There can be material for release notes entries in commit messages.
* There can also be pre-written entries in `./releases_drafts`, which should all be incorporated in the release notes and then deleted from the branch.
  See `./releases_drafts/README.md` for more information.
* The release notes should be written from a downstream expert user's point of view.

This section will be updated when the next release notes are written (for `v4.10.0`).
releases_drafts/list_lex.md (new file, 16 lines)
@@ -0,0 +1,16 @@

We replace the inductive predicate `List.lt` with an upstreamed version of `List.Lex` from Mathlib.
(Previously `Lex.lt` was defined in terms of `<`; now it is generalized to take an arbitrary relation.)
This subtly changes the notion of ordering on `List α`.

`List.lt` was a weaker relation: in particular if `l₁ < l₂`, then
`a :: l₁ < b :: l₂` may hold according to `List.lt` even if `a` and `b` are merely incomparable
(neither `a < b` nor `b < a`), whereas according to `List.Lex` this would require `a = b`.

When `<` is total, in the sense that `¬ · < ·` is antisymmetric, the two relations coincide.

Mathlib was already overriding the order instances for `List α`,
so this change should not be noticed by anyone already using Mathlib.

We simultaneously add the boolean-valued `List.lex` function, parameterised by a `BEq` typeclass
and an arbitrary `lt` function. This will support the flexibility previously provided for `List.lt`,
via a `==` function which is weaker than strict equality.
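The difference between the two orders can be sketched concretely. This is an illustrative Python model (the helper names are hypothetical; the real definitions live in Lean core and Mathlib), using an `lt` that compares only the first component of a pair, so `(1, 0)` and `(1, 1)` are incomparable yet unequal:

```python
# Old List.lt: recursing past the heads only needs "not lt(y, x)";
# new List.Lex: recursing past the heads requires x == y exactly.

def list_lt(lt, xs, ys):
    """Model of the old List.lt inductive predicate."""
    if not ys:
        return False
    if not xs:
        return True
    x, y = xs[0], ys[0]
    if lt(x, y):
        return True
    if lt(y, x):
        return False
    return list_lt(lt, xs[1:], ys[1:])  # heads merely incomparable

def list_lex(lt, xs, ys):
    """Model of List.Lex with an arbitrary relation lt."""
    if not ys:
        return False
    if not xs:
        return True
    x, y = xs[0], ys[0]
    return lt(x, y) or (x == y and list_lex(lt, xs[1:], ys[1:]))

lt = lambda p, q: p[0] < q[0]        # ignores the second component
a, b = (1, 0), (1, 1)                # incomparable, but a != b
assert list_lt(lt, [a, (0,)], [b, (1,)])       # old order: holds
assert not list_lex(lt, [a, (0,)], [b, (1,)])  # Lex: would need a == b
```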
@@ -63,8 +63,8 @@ else

fi
# use `-nostdinc` to make sure headers are not visible by default (in particular, not to `#include_next` in the clang headers),
# but do not change sysroot so users can still link against system libs
echo -n " -DLEANC_INTERNAL_FLAGS='--sysroot ROOT -nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='--sysroot ROOT -L ROOT/lib -L ROOT/lib/glibc ROOT/lib/glibc/libc_nonshared.a ROOT/lib/glibc/libpthread_nonshared.a -Wl,--as-needed -Wl,-Bstatic -lgmp -lunwind -luv -Wl,-Bdynamic -Wl,--no-as-needed -fuse-ld=lld'"
echo -n " -DLEANC_INTERNAL_FLAGS='-nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -L ROOT/lib/glibc ROOT/lib/glibc/libc_nonshared.a ROOT/lib/glibc/libpthread_nonshared.a -Wl,--as-needed -Wl,-Bstatic -lgmp -lunwind -luv -Wl,-Bdynamic -Wl,--no-as-needed -fuse-ld=lld'"
# when not using the above flags, link GMP dynamically/as usual
echo -n " -DLEAN_EXTRA_LINKER_FLAGS='-Wl,--as-needed -lgmp -luv -lpthread -ldl -lrt -Wl,--no-as-needed'"
# do not set `LEAN_CC` for tests
@@ -48,11 +48,12 @@ if [[ -L llvm-host ]]; then

echo -n " -DCMAKE_C_COMPILER=$PWD/stage1/bin/clang"
gcp $GMP/lib/libgmp.a stage1/lib/
gcp $LIBUV/lib/libuv.a stage1/lib/
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -L ROOT/lib/libc -fuse-ld=lld'"
echo -n " -DLEAN_EXTRA_LINKER_FLAGS='-lgmp -luv'"
else
echo -n " -DCMAKE_C_COMPILER=$PWD/llvm-host/bin/clang -DLEANC_OPTS='--sysroot $PWD/stage1 -resource-dir $PWD/stage1/lib/clang/15.0.1 ${EXTRA_FLAGS:-}'"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -L ROOT/lib/libc -fuse-ld=lld'"
fi
echo -n " -DLEANC_INTERNAL_FLAGS='--sysroot ROOT -nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='--sysroot ROOT -L ROOT/lib -L ROOT/lib/libc -fuse-ld=lld'"
echo -n " -DLEANC_INTERNAL_FLAGS='-nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang"
# do not set `LEAN_CC` for tests
echo -n " -DLEAN_TEST_VARS=''"
@@ -43,7 +43,7 @@ echo -n " -DCMAKE_C_COMPILER=$PWD/stage1/bin/clang.exe -DCMAKE_C_COMPILER_WORKS=

echo -n " -DSTAGE0_CMAKE_C_COMPILER=clang -DSTAGE0_CMAKE_CXX_COMPILER=clang++"
echo -n " -DLEAN_EXTRA_CXX_FLAGS='--sysroot $PWD/llvm -idirafter /clang64/include/'"
echo -n " -DLEANC_INTERNAL_FLAGS='--sysroot ROOT -nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang.exe"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='--sysroot ROOT -L ROOT/lib -Wl,-Bstatic -lgmp $(pkg-config --static --libs libuv) -lunwind -Wl,-Bdynamic -fuse-ld=lld'"
echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -static-libgcc -Wl,-Bstatic -lgmp $(pkg-config --static --libs libuv) -lunwind -Wl,-Bdynamic -fuse-ld=lld'"
# when not using the above flags, link GMP dynamically/as usual. Always link ICU dynamically.
echo -n " -DLEAN_EXTRA_LINKER_FLAGS='-lgmp $(pkg-config --libs libuv) -lucrtbase'"
# do not set `LEAN_CC` for tests
@@ -1,132 +0,0 @@

#!/usr/bin/env python3

import argparse
import yaml
import requests
import base64
import subprocess
import sys
import os

def parse_repos_config(file_path):
    with open(file_path, "r") as f:
        return yaml.safe_load(f)["repositories"]

def get_github_token():
    try:
        result = subprocess.run(['gh', 'auth', 'token'], capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
    except FileNotFoundError:
        print("Warning: 'gh' CLI not found. Some API calls may be rate-limited.")
    return None

def get_branch_content(repo_url, branch, file_path, github_token):
    api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/contents/{file_path}?ref={branch}"
    headers = {'Authorization': f'token {github_token}'} if github_token else {}
    response = requests.get(api_url, headers=headers)
    if response.status_code == 200:
        content = response.json().get("content", "")
        content = content.replace("\n", "")
        try:
            return base64.b64decode(content).decode('utf-8').strip()
        except Exception:
            return None
    return None
def tag_exists(repo_url, tag_name, github_token):
    api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/git/refs/tags/{tag_name}"
    headers = {'Authorization': f'token {github_token}'} if github_token else {}
    response = requests.get(api_url, headers=headers)
    return response.status_code == 200

def is_merged_into_stable(repo_url, tag_name, stable_branch, github_token):
    # First get the commit SHA for the tag
    api_base = repo_url.replace("https://github.com/", "https://api.github.com/repos/")
    headers = {'Authorization': f'token {github_token}'} if github_token else {}

    # Get tag's commit SHA
    tag_response = requests.get(f"{api_base}/git/refs/tags/{tag_name}", headers=headers)
    if tag_response.status_code != 200:
        return False
    tag_sha = tag_response.json()['object']['sha']

    # Get commits on stable branch containing this SHA
    commits_response = requests.get(
        f"{api_base}/commits?sha={stable_branch}&per_page=100",
        headers=headers
    )
    if commits_response.status_code != 200:
        return False

    # Check if any commit in stable's history matches our tag's SHA
    stable_commits = [commit['sha'] for commit in commits_response.json()]
    return tag_sha in stable_commits

def parse_version(version_str):
    # Remove 'v' prefix and split into components
    # Handle Lean toolchain format (leanprover/lean4:v4.x.y)
    if ':' in version_str:
        version_str = version_str.split(':')[1]
    version = version_str.lstrip('v')
    # Handle release candidates by removing -rc part for comparison
    version = version.split('-')[0]
    return tuple(map(int, version.split('.')))

def is_version_gte(version1, version2):
    """Check if version1 >= version2"""
    return parse_version(version1) >= parse_version(version2)

def is_release_candidate(version):
    return "-rc" in version
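The version-parsing logic above is worth seeing in isolation: strip an optional `leanprover/lean4:` toolchain prefix and the `v`, drop any `-rcN` suffix, and compare the numeric components as tuples. A standalone sketch:

```python
# Standalone sketch of release_checklist.py's version parsing.

def parse_version(version_str: str) -> tuple:
    # Handle the Lean toolchain format (leanprover/lean4:v4.x.y).
    if ':' in version_str:
        version_str = version_str.split(':')[1]
    # Strip the 'v' prefix and any '-rcN' suffix before comparing.
    version = version_str.lstrip('v').split('-')[0]
    return tuple(map(int, version.split('.')))

assert parse_version("leanprover/lean4:v4.6.0-rc1") == (4, 6, 0)
assert parse_version("v4.10.0") >= parse_version("v4.9.1")
```

Note that because the `-rc` suffix is discarded, `v4.6.0-rc1` compares equal to `v4.6.0`; release candidates are instead handled separately via `is_release_candidate`.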
def main():
    github_token = get_github_token()

    if len(sys.argv) != 2:
        print("Usage: python3 release_checklist.py <toolchain>")
        sys.exit(1)

    toolchain = sys.argv[1]

    with open(os.path.join(os.path.dirname(__file__), "release_repos.yml")) as f:
        repos = yaml.safe_load(f)["repositories"]

    for repo in repos:
        name = repo["name"]
        url = repo["url"]
        branch = repo["branch"]
        check_stable = repo["stable-branch"]
        check_tag = repo.get("toolchain-tag", True)

        print(f"\nRepository: {name}")

        # Check if branch is on at least the target toolchain
        lean_toolchain_content = get_branch_content(url, branch, "lean-toolchain", github_token)
        if lean_toolchain_content is None:
            print(f"  ❌ No lean-toolchain file found in {branch} branch")
            continue

        on_target_toolchain = is_version_gte(lean_toolchain_content.strip(), toolchain)
        if not on_target_toolchain:
            print(f"  ❌ Not on target toolchain (needs ≥ {toolchain}, but {branch} is on {lean_toolchain_content.strip()})")
            continue
        print(f"  ✅ On compatible toolchain (>= {toolchain})")

        # Only check for tag if toolchain-tag is true
        if check_tag:
            if not tag_exists(url, toolchain, github_token):
                print(f"  ❌ Tag {toolchain} does not exist")
                continue
            print(f"  ✅ Tag {toolchain} exists")

        # Only check merging into stable if stable-branch is true and not a release candidate
        if check_stable and not is_release_candidate(toolchain):
            if not is_merged_into_stable(url, toolchain, "stable", github_token):
                print(f"  ❌ Tag {toolchain} is not merged into stable")
                continue
            print(f"  ✅ Tag {toolchain} is merged into stable")

if __name__ == "__main__":
    main()
@@ -1,145 +0,0 @@

#!/usr/bin/env python3

import sys
import re
import json
import requests
import subprocess
from collections import defaultdict
from git import Repo

def get_commits_since_tag(repo, tag):
    try:
        tag_commit = repo.commit(tag)
        commits = list(repo.iter_commits(f"{tag_commit.hexsha}..HEAD"))
        return [
            (commit.hexsha, commit.message.splitlines()[0], commit.message)
            for commit in commits
        ]
    except Exception as e:
        sys.stderr.write(f"Error retrieving commits: {e}\n")
        sys.exit(1)

def check_pr_number(first_line):
    match = re.search(r"\(\#(\d+)\)$", first_line)
    if match:
        return int(match.group(1))
    return None

def fetch_pr_labels(pr_number):
    try:
        # Use gh CLI to fetch PR details
        result = subprocess.run([
            "gh", "api", f"repos/leanprover/lean4/pulls/{pr_number}"
        ], capture_output=True, text=True, check=True)
        pr_data = result.stdout
        pr_json = json.loads(pr_data)
        return [label["name"] for label in pr_json.get("labels", [])]
    except subprocess.CalledProcessError as e:
        sys.stderr.write(f"Failed to fetch PR #{pr_number} using gh: {e.stderr}\n")
        return []

def format_section_title(label):
    title = label.replace("changelog-", "").capitalize()
    if title == "Doc":
        return "Documentation"
    elif title == "Pp":
        return "Pretty Printing"
    return title

def sort_sections_order():
    return [
        "Language",
        "Library",
        "Compiler",
        "Pretty Printing",
        "Documentation",
        "Server",
        "Lake",
        "Other",
        "Uncategorised"
    ]

def format_markdown_description(pr_number, description):
    link = f"[#{pr_number}](https://github.com/leanprover/lean4/pull/{pr_number})"
    return f"{link} {description}"
def main():
    if len(sys.argv) != 2:
        sys.stderr.write("Usage: script.py <git-tag>\n")
        sys.exit(1)

    tag = sys.argv[1]
    try:
        repo = Repo(".")
    except Exception as e:
        sys.stderr.write(f"Error opening Git repository: {e}\n")
        sys.exit(1)

    commits = get_commits_since_tag(repo, tag)

    sys.stderr.write(f"Found {len(commits)} commits since tag {tag}:\n")
    for commit_hash, first_line, _ in commits:
        sys.stderr.write(f"- {commit_hash}: {first_line}\n")

    changelog = defaultdict(list)

    for commit_hash, first_line, full_message in commits:
        # Skip commits with the specific first lines
        if first_line == "chore: update stage0" or first_line.startswith("chore: CI: bump "):
            continue

        pr_number = check_pr_number(first_line)

        if not pr_number:
            sys.stderr.write(f"No PR number found in {first_line}\n")
            continue

        # Remove the first line from the full_message for further processing
        body = full_message[len(first_line):].strip()

        paragraphs = body.split('\n\n')
        second_paragraph = paragraphs[0] if len(paragraphs) > 0 else ""

        labels = fetch_pr_labels(pr_number)

        # Skip entries with the "changelog-no" label
        if "changelog-no" in labels:
            continue

        report_errors = first_line.startswith("feat:") or first_line.startswith("fix:")

        if not second_paragraph.startswith("This PR "):
            if report_errors:
                sys.stderr.write(f"No PR description found in commit:\n{commit_hash}\n{first_line}\n{body}\n\n")
                fallback_description = re.sub(r":$", "", first_line.split(" ", 1)[1]).rsplit(" (#", 1)[0]
                markdown_description = format_markdown_description(pr_number, fallback_description)
            else:
                continue
        else:
            markdown_description = format_markdown_description(pr_number, second_paragraph.replace("This PR ", ""))

        changelog_labels = [label for label in labels if label.startswith("changelog-")]
        if len(changelog_labels) > 1:
            sys.stderr.write(f"Warning: Multiple changelog-* labels found for PR #{pr_number}: {changelog_labels}\n")

        if not changelog_labels:
            if report_errors:
                sys.stderr.write(f"Warning: No changelog-* label found for PR #{pr_number}\n")
            else:
                continue

        for label in changelog_labels:
            changelog[label].append((pr_number, markdown_description))

    section_order = sort_sections_order()
    sorted_changelog = sorted(changelog.items(), key=lambda item: section_order.index(format_section_title(item[0])) if format_section_title(item[0]) in section_order else len(section_order))

    for label, entries in sorted_changelog:
        section_title = format_section_title(label) if label != "Uncategorised" else "Uncategorised"
        print(f"## {section_title}\n")
        for _, entry in sorted(entries, key=lambda x: x[0]):
            print(f"* {entry}\n")

if __name__ == "__main__":
    main()
@@ -1,86 +0,0 @@

repositories:
  - name: Batteries
    url: https://github.com/leanprover-community/batteries
    toolchain-tag: true
    stable-branch: true
    branch: main
    dependencies: []

  - name: lean4checker
    url: https://github.com/leanprover/lean4checker
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies: []

  - name: doc-gen4
    url: https://github.com/leanprover/doc-gen4
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: Verso
    url: https://github.com/leanprover/verso
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: Cli
    url: https://github.com/leanprover/lean4-cli
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: ProofWidgets4
    url: https://github.com/leanprover-community/ProofWidgets4
    toolchain-tag: false
    stable-branch: false
    branch: main
    dependencies:
      - Batteries

  - name: Aesop
    url: https://github.com/leanprover-community/aesop
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Batteries

  - name: import-graph
    url: https://github.com/leanprover-community/import-graph
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: plausible
    url: https://github.com/leanprover-community/plausible
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: Mathlib
    url: https://github.com/leanprover-community/mathlib4
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Aesop
      - ProofWidgets4
      - lean4checker
      - Batteries
      - doc-gen4
      - import-graph

  - name: REPL
    url: https://github.com/leanprover-community/repl
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Mathlib
@@ -37,4 +37,3 @@ import Init.MacroTrace

import Init.Grind
import Init.While
import Init.Syntax
import Init.Internal
@@ -150,10 +150,6 @@ See the `simp` tactic for more information. -/

syntax (name := simp) "simp" optConfig (discharger)? (&" only")?
  (" [" withoutPosition((simpStar <|> simpErase <|> simpLemma),*) "]")? : conv

/-- `simp?` takes the same arguments as `simp`, but reports an equivalent call to `simp only`
that would be sufficient to close the goal. See the `simp?` tactic for more information. -/
syntax (name := simpTrace) "simp?" optConfig (discharger)? (&" only")? (simpArgs)? : conv

/--
`dsimp` is the definitional simplifier in `conv`-mode. It differs from `simp` in that it only
applies theorems that hold by reflexivity.

@@ -171,9 +167,6 @@ example (a : Nat): (0 + 0) = a - a := by

syntax (name := dsimp) "dsimp" optConfig (discharger)? (&" only")?
  (" [" withoutPosition((simpErase <|> simpLemma),*) "]")? : conv

@[inherit_doc simpTrace]
syntax (name := dsimpTrace) "dsimp?" optConfig (&" only")? (dsimpArgs)? : conv

/-- `simp_match` simplifies match expressions. For example,
```
match [a, b] with
@@ -244,7 +244,8 @@ def ofFn {n} (f : Fin n → α) : Array α := go 0 (mkEmpty n) where

def range (n : Nat) : Array Nat :=
  ofFn fun (i : Fin n) => i

@[inline] protected def singleton (v : α) : Array α := #[v]
def singleton (v : α) : Array α :=
  mkArray 1 v

def back! [Inhabited α] (a : Array α) : α :=
  a[a.size - 1]!
@@ -9,9 +9,7 @@ import Init.Data.Bool

import Init.Data.BitVec.Basic
import Init.Data.Fin.Lemmas
import Init.Data.Nat.Lemmas
import Init.Data.Nat.Div.Lemmas
import Init.Data.Nat.Mod
import Init.Data.Nat.Div.Lemmas
import Init.Data.Int.Bitwise.Lemmas
import Init.Data.Int.Pow
@@ -100,12 +98,6 @@ theorem ofFin_eq_ofNat : @BitVec.ofFin w (Fin.mk x lt) = BitVec.ofNat w x := by

theorem eq_of_toNat_eq {n} : ∀ {x y : BitVec n}, x.toNat = y.toNat → x = y
  | ⟨_, _⟩, ⟨_, _⟩, rfl => rfl

/-- Prove nonequality of bitvectors in terms of nat operations. -/
theorem toNat_ne_iff_ne {n} {x y : BitVec n} : x.toNat ≠ y.toNat ↔ x ≠ y := by
  constructor
  · rintro h rfl; apply h rfl
  · intro h h_eq; apply h <| eq_of_toNat_eq h_eq

@[simp] theorem val_toFin (x : BitVec w) : x.toFin.val = x.toNat := rfl

@[bv_toNat] theorem toNat_eq {x y : BitVec n} : x = y ↔ x.toNat = y.toNat :=
@@ -450,10 +442,6 @@ theorem toInt_eq_toNat_cond (x : BitVec n) :

    (x.toNat : Int) - (2^n : Nat) :=
  rfl

theorem toInt_eq_toNat_of_lt {x : BitVec n} (h : 2 * x.toNat < 2^n) :
    x.toInt = x.toNat := by
  simp [toInt_eq_toNat_cond, h]

theorem msb_eq_false_iff_two_mul_lt {x : BitVec w} : x.msb = false ↔ 2 * x.toNat < 2^w := by
  cases w <;> simp [Nat.pow_succ, Nat.mul_comm _ 2, msb_eq_decide, toNat_of_zero_length]

@@ -466,9 +454,6 @@ theorem toInt_eq_msb_cond (x : BitVec w) :

  simp only [BitVec.toInt, ← msb_eq_false_iff_two_mul_lt]
  cases x.msb <;> rfl

theorem toInt_eq_toNat_of_msb {x : BitVec w} (h : x.msb = false) :
    x.toInt = x.toNat := by
  simp [toInt_eq_msb_cond, h]

theorem toInt_eq_toNat_bmod (x : BitVec n) : x.toInt = Int.bmod x.toNat (2^n) := by
  simp only [toInt_eq_toNat_cond]
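The conversion these `toInt` lemmas describe is ordinary two's complement. A quick Python model of the condition in `toInt_eq_toNat_cond` (the function name here is illustrative, not Lean API):

```python
# A width-n bitvector's integer value is its Nat value, reinterpreted as
# negative once the most significant bit is set, i.e. once 2 * toNat >= 2^n.

def to_int(to_nat: int, n: int) -> int:
    return to_nat if 2 * to_nat < 2 ** n else to_nat - 2 ** n

assert to_int(5, 8) == 5        # msb clear: value unchanged
assert to_int(255, 8) == -1     # allOnes at width 8: toInt = -1
assert to_int(128, 8) == -128   # msb set: wraps into the negative range
```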
@@ -800,19 +785,6 @@ theorem extractLsb'_eq_extractLsb {w : Nat} (x : BitVec w) (start len : Nat) (h

  unfold allOnes
  simp

@[simp] theorem toInt_allOnes : (allOnes w).toInt = if 0 < w then -1 else 0 := by
  norm_cast
  by_cases h : w = 0
  · subst h
    simp
  · have : 1 < 2 ^ w := by simp [h]
    simp [BitVec.toInt]
    omega

@[simp] theorem toFin_allOnes : (allOnes w).toFin = Fin.ofNat' (2^w) (2^w - 1) := by
  ext
  simp

@[simp] theorem getLsbD_allOnes : (allOnes v).getLsbD i = decide (i < v) := by
  simp [allOnes]
@@ -1170,16 +1142,11 @@ theorem getMsb_not {x : BitVec w} :

/-! ### shiftLeft -/

@[simp, bv_toNat] theorem toNat_shiftLeft {x : BitVec v} :
    (x <<< n).toNat = x.toNat <<< n % 2^v :=
    BitVec.toNat (x <<< n) = BitVec.toNat x <<< n % 2^v :=
  BitVec.toNat_ofNat _ _

@[simp] theorem toInt_shiftLeft {x : BitVec w} :
    (x <<< n).toInt = (x.toNat <<< n : Int).bmod (2^w) := by
  rw [toInt_eq_toNat_bmod, toNat_shiftLeft, Nat.shiftLeft_eq]
  simp

@[simp] theorem toFin_shiftLeft {n : Nat} (x : BitVec w) :
    (x <<< n).toFin = Fin.ofNat' (2^w) (x.toNat <<< n) := rfl
    BitVec.toFin (x <<< n) = Fin.ofNat' (2^w) (x.toNat <<< n) := rfl

@[simp]
theorem shiftLeft_zero (x : BitVec w) : x <<< 0 = x := by
@@ -2315,12 +2282,6 @@ theorem ofNat_sub_ofNat {n} (x y : Nat) : BitVec.ofNat n x - BitVec.ofNat n y =
|
||||
@[simp, bv_toNat] theorem toNat_neg (x : BitVec n) : (- x).toNat = (2^n - x.toNat) % 2^n := by
|
||||
simp [Neg.neg, BitVec.neg]
|
||||
|
||||
theorem toNat_neg_of_pos {x : BitVec n} (h : 0#n < x) :
|
||||
(- x).toNat = 2^n - x.toNat := by
|
||||
change 0 < x.toNat at h
|
||||
rw [toNat_neg, Nat.mod_eq_of_lt]
|
||||
omega
|
||||
|
||||
theorem toInt_neg {x : BitVec w} :
|
||||
(-x).toInt = (-x.toInt).bmod (2 ^ w) := by
|
||||
rw [← BitVec.zero_sub, toInt_sub]
|
||||
@@ -2416,54 +2377,6 @@ theorem not_neg (x : BitVec w) : ~~~(-x) = x + -1#w := by
|
||||
show (_ - x.toNat) % _ = _ by rw [Nat.mod_eq_of_lt (by omega)]]
|
||||
omega
|
||||
|
||||
/-! ### fill -/
|
||||
|
||||
@[simp]
|
||||
theorem getLsbD_fill {w i : Nat} {v : Bool} :
|
||||
(fill w v).getLsbD i = (v && decide (i < w)) := by
|
||||
by_cases h : v
|
||||
<;> simp [h, BitVec.fill, BitVec.negOne_eq_allOnes]
|
||||
|
||||
@[simp]
|
||||
theorem getMsbD_fill {w i : Nat} {v : Bool} :
|
||||
(fill w v).getMsbD i = (v && decide (i < w)) := by
|
||||
by_cases h : v
|
||||
<;> simp [h, BitVec.fill, BitVec.negOne_eq_allOnes]
|
||||
|
||||
@[simp]
|
||||
theorem getElem_fill {w i : Nat} {v : Bool} (h : i < w) :
|
||||
(fill w v)[i] = v := by
|
||||
by_cases h : v
|
||||
<;> simp [h, BitVec.fill, BitVec.negOne_eq_allOnes]
|
||||
|
||||
@[simp]
|
||||
theorem msb_fill {w : Nat} {v : Bool} :
|
||||
(fill w v).msb = (v && decide (0 < w)) := by
|
||||
simp [BitVec.msb]
|
||||
|
||||
theorem fill_eq {w : Nat} {v : Bool} : fill w v = if v = true then allOnes w else 0#w := by
|
||||
by_cases h : v <;> (simp only [h] ; ext ; simp)
|
||||
|
||||
@[simp]
|
||||
theorem fill_true {w : Nat} : fill w true = allOnes w := by
|
||||
simp [fill_eq]
|
||||
|
||||
@[simp]
|
||||
theorem fill_false {w : Nat} : fill w false = 0#w := by
|
||||
simp [fill_eq]
|
||||
|
||||
@[simp] theorem fill_toNat {w : Nat} {v : Bool} :
|
||||
(fill w v).toNat = if v = true then 2^w - 1 else 0 := by
|
||||
by_cases h : v <;> simp [h]
|
||||
|
||||
@[simp] theorem fill_toInt {w : Nat} {v : Bool} :
|
||||
(fill w v).toInt = if v = true && 0 < w then -1 else 0 := by
|
||||
by_cases h : v <;> simp [h]
|
||||
|
||||
@[simp] theorem fill_toFin {w : Nat} {v : Bool} :
|
||||
(fill w v).toFin = if v = true then (allOnes w).toFin else Fin.ofNat' (2 ^ w) 0 := by
|
||||
by_cases h : v <;> simp [h]
|
||||
|
||||
/-! ### mul -/
|
||||
|
||||
theorem mul_def {n} {x y : BitVec n} : x * y = (ofFin <| x.toFin * y.toFin) := by rfl
|
||||
@@ -2607,13 +2520,13 @@ theorem udiv_def {x y : BitVec n} : x / y = BitVec.ofNat n (x.toNat / y.toNat) :
|
||||
rw [← udiv_eq]
|
||||
simp [udiv, bv_toNat, h, Nat.mod_eq_of_lt]
|
||||
|
||||
@[simp]
|
||||
theorem toFin_udiv {x y : BitVec n} : (x / y).toFin = x.toFin / y.toFin := by
|
||||
rfl
|
||||
|
||||
@[simp, bv_toNat]
|
||||
theorem toNat_udiv {x y : BitVec n} : (x / y).toNat = x.toNat / y.toNat := by
|
||||
rfl
|
||||
rw [udiv_def]
|
||||
by_cases h : y = 0
|
||||
· simp [h]
|
||||
· rw [toNat_ofNat, Nat.mod_eq_of_lt]
|
||||
exact Nat.lt_of_le_of_lt (Nat.div_le_self ..) (by omega)
|
||||
|
||||
@[simp]
|
||||
theorem zero_udiv {x : BitVec w} : (0#w) / x = 0#w := by
|
||||
@@ -2649,45 +2562,6 @@ theorem udiv_self {x : BitVec w} :
|
||||
↓reduceIte, toNat_udiv]
|
||||
rw [Nat.div_self (by omega), Nat.mod_eq_of_lt (by omega)]
|
||||
|
||||
theorem msb_udiv (x y : BitVec w) :
|
||||
(x / y).msb = (x.msb && y == 1#w) := by
|
||||
cases msb_x : x.msb
|
||||
· suffices x.toNat / y.toNat < 2 ^ (w - 1) by simpa [msb_eq_decide]
|
||||
calc
|
||||
x.toNat / y.toNat ≤ x.toNat := by apply Nat.div_le_self
|
||||
_ < 2 ^ (w - 1) := by simpa [msb_eq_decide] using msb_x
|
||||
. rcases w with _|w
|
||||
· contradiction
|
||||
· have : (y == 1#_) = decide (y.toNat = 1) := by
|
||||
simp [(· == ·), toNat_eq]
|
||||
simp only [this, Bool.true_and]
|
||||
match hy : y.toNat with
|
||||
| 0 =>
|
||||
obtain rfl : y = 0#_ := eq_of_toNat_eq hy
|
||||
simp
|
||||
| 1 =>
|
||||
obtain rfl : y = 1#_ := eq_of_toNat_eq (by simp [hy])
|
||||
simpa using msb_x
|
||||
| y + 2 =>
|
||||
suffices x.toNat / (y + 2) < 2 ^ w by
|
||||
simp_all [msb_eq_decide, hy]
|
||||
calc
|
||||
x.toNat / (y + 2)
|
||||
≤ x.toNat / 2 := by apply Nat.div_add_le_right (by omega)
|
||||
_ < 2 ^ w := by omega
|
||||
|
||||
theorem msb_udiv_eq_false_of {x : BitVec w} (h : x.msb = false) (y : BitVec w) :
|
||||
(x / y).msb = false := by
|
||||
simp [msb_udiv, h]
|
||||
|
||||
/--
|
||||
If `x` is nonnegative (i.e., does not have its msb set),
|
||||
then `x / y` is nonnegative, thus `toInt` and `toNat` coincide.
|
||||
-/
|
||||
theorem toInt_udiv_of_msb {x : BitVec w} (h : x.msb = false) (y : BitVec w) :
|
||||
(x / y).toInt = x.toNat / y.toNat := by
|
||||
simp [toInt_eq_msb_cond, msb_udiv_eq_false_of h]
|
||||
|
||||
/-! ### umod -/
|
||||
|
||||
theorem umod_def {x y : BitVec n} :
|
||||
@@ -2700,10 +2574,6 @@ theorem umod_def {x y : BitVec n} :
|
||||
theorem toNat_umod {x y : BitVec n} :
|
||||
(x % y).toNat = x.toNat % y.toNat := rfl
|
||||
|
||||
@[simp]
|
||||
theorem toFin_umod {x y : BitVec w} :
|
||||
(x % y).toFin = x.toFin % y.toFin := rfl
|
||||
|
||||
@[simp]
|
||||
theorem umod_zero {x : BitVec n} : x % 0#n = x := by
|
||||
simp [umod_def]
|
||||
@@ -2731,55 +2601,6 @@ theorem umod_eq_and {x y : BitVec 1} : x % y = x &&& (~~~y) := by
|
||||
rcases hy with rfl | rfl <;>
|
||||
rfl
|
||||
|
||||
theorem umod_eq_of_lt {x y : BitVec w} (h : x < y) :
|
||||
x % y = x := by
|
||||
apply eq_of_toNat_eq
|
||||
simp [Nat.mod_eq_of_lt h]
|
||||
|
||||
@[simp]
|
||||
theorem msb_umod {x y : BitVec w} :
|
||||
(x % y).msb = (x.msb && (x < y || y == 0#w)) := by
|
||||
rw [msb_eq_decide, toNat_umod]
|
||||
cases msb_x : x.msb
|
||||
· suffices x.toNat % y.toNat < 2 ^ (w - 1) by simpa
|
||||
calc
|
||||
x.toNat % y.toNat ≤ x.toNat := by apply Nat.mod_le
|
||||
_ < 2 ^ (w - 1) := by simpa [msb_eq_decide] using msb_x
|
||||
. by_cases hy : y = 0
|
||||
· simp_all [msb_eq_decide]
|
||||
· suffices 2 ^ (w - 1) ≤ x.toNat % y.toNat ↔ x < y by simp_all
|
||||
by_cases x_lt_y : x < y
|
||||
. simp_all [Nat.mod_eq_of_lt x_lt_y, msb_eq_decide]
|
||||
· suffices x.toNat % y.toNat < 2 ^ (w - 1) by
|
||||
simpa [x_lt_y]
|
||||
have y_le_x : y.toNat ≤ x.toNat := by
|
||||
simpa using x_lt_y
|
||||
replace hy : y.toNat ≠ 0 :=
|
||||
toNat_ne_iff_ne.mpr hy
|
||||
by_cases msb_y : y.toNat < 2 ^ (w - 1)
|
||||
· have : x.toNat % y.toNat < y.toNat := Nat.mod_lt _ (by omega)
|
||||
omega
|
||||
· rcases w with _|w
|
||||
· contradiction
|
||||
simp only [Nat.add_one_sub_one]
|
||||
replace msb_y : 2 ^ w ≤ y.toNat := by
|
||||
simpa using msb_y
|
||||
have : y.toNat ≤ y.toNat * (x.toNat / y.toNat) := by
|
||||
apply Nat.le_mul_of_pos_right
|
||||
apply Nat.div_pos y_le_x
|
||||
omega
|
||||
have : x.toNat % y.toNat ≤ x.toNat - y.toNat := by
|
||||
rw [Nat.mod_eq_sub]; omega
|
||||
omega
|
||||
|
||||
theorem toInt_umod {x y : BitVec w} :
|
||||
(x % y).toInt = (x.toNat % y.toNat : Int).bmod (2 ^ w) := by
|
||||
simp [toInt_eq_toNat_bmod]
|
||||
|
||||
theorem toInt_umod_of_msb {x y : BitVec w} (h : x.msb = false) :
|
||||
(x % y).toInt = x.toInt % y.toNat := by
|
||||
simp [toInt_eq_msb_cond, h]
|
||||
|
||||
/-! ### smtUDiv -/
|
||||
|
||||
theorem smtUDiv_eq (x y : BitVec w) : smtUDiv x y = if y = 0#w then allOnes w else x / y := by
|
||||
@@ -2936,12 +2757,7 @@ theorem smod_zero {x : BitVec n} : x.smod 0#n = x := by
|
||||
|
||||
/-! # Rotate Left -/
|
||||
|
||||
/--`rotateLeft` is defined in terms of left and right shifts. -/
|
||||
theorem rotateLeft_def {x : BitVec w} {r : Nat} :
|
||||
x.rotateLeft r = (x <<< (r % w)) ||| (x >>> (w - r % w)) := by
|
||||
simp only [rotateLeft, rotateLeftAux]
|
||||
|
||||
/-- `rotateLeft` is invariant under `mod` by the bitwidth. -/
|
||||
/-- rotateLeft is invariant under `mod` by the bitwidth. -/
|
||||
@[simp]
|
||||
theorem rotateLeft_mod_eq_rotateLeft {x : BitVec w} {r : Nat} :
|
||||
x.rotateLeft (r % w) = x.rotateLeft r := by
|
||||
@@ -3085,18 +2901,8 @@ theorem msb_rotateLeft {m w : Nat} {x : BitVec w} :
|
||||
· simp
|
||||
omega
|
||||
|
||||
@[simp]
|
||||
theorem toNat_rotateLeft {x : BitVec w} {r : Nat} :
|
||||
(x.rotateLeft r).toNat = (x.toNat <<< (r % w)) % (2^w) ||| x.toNat >>> (w - r % w) := by
|
||||
simp only [rotateLeft_def, toNat_shiftLeft, toNat_ushiftRight, toNat_or]
|
||||
|
||||
/-! ## Rotate Right -/
|
||||
|
||||
/-- `rotateRight` is defined in terms of left and right shifts. -/
|
||||
theorem rotateRight_def {x : BitVec w} {r : Nat} :
|
||||
x.rotateRight r = (x >>> (r % w)) ||| (x <<< (w - r % w)) := by
|
||||
simp only [rotateRight, rotateRightAux]
|
||||
|
||||
/--
|
||||
Accessing bits in `x.rotateRight r` the range `[0, w-r)` is equal to
|
||||
accessing bits `x` in the range `[r, w)`.
|
||||
@@ -3232,11 +3038,6 @@ theorem msb_rotateRight {r w : Nat} {x : BitVec w} :
|
||||
simp [h₁]
|
||||
· simp [show w = 0 by omega]
|
||||
|
||||
@[simp]
|
||||
theorem toNat_rotateRight {x : BitVec w} {r : Nat} :
|
||||
(x.rotateRight r).toNat = (x.toNat >>> (r % w)) ||| x.toNat <<< (w - r % w) % (2^w) := by
|
||||
simp only [rotateRight_def, toNat_shiftLeft, toNat_ushiftRight, toNat_or]
|
||||
|
||||
/- ## twoPow -/
|
||||
|
||||
theorem twoPow_eq (w : Nat) (i : Nat) : twoPow w i = 1#w <<< i := by
|
||||
|
||||
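As a quick illustration (editorial note, not part of the diff above), the `fill` lemmas can be checked on concrete bitvectors. This sketch assumes the `BitVec.fill` and `BitVec.allOnes` API from `Init.Data.BitVec` as shown in the hunks:

```lean
-- `fill w v` is all-ones when v = true and zero when v = false,
-- matching `fill_true` / `fill_false` above. Closed terms, so `decide` suffices.
example : BitVec.fill 4 true = BitVec.allOnes 4 := by decide
example : BitVec.fill 4 false = 0#4 := by decide
-- `fill_toNat`: the all-ones pattern has value 2^w - 1.
example : (BitVec.fill 4 true).toNat = 15 := by decide
```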
@@ -534,13 +534,6 @@ theorem mul_emod (a b n : Int) : (a * b) % n = (a % n) * (b % n) % n := by
@[simp] theorem emod_emod (a b : Int) : (a % b) % b = a % b := by
  conv => rhs; rw [← emod_add_ediv a b, add_mul_emod_self_left]

@[simp] theorem emod_sub_emod (m n k : Int) : (m % n - k) % n = (m - k) % n :=
  Int.emod_add_emod m n (-k)

@[simp] theorem sub_emod_emod (m n k : Int) : (m - n % k) % k = (m - n) % k := by
  apply (emod_add_cancel_right (n % k)).mp
  rw [Int.sub_add_cancel, Int.add_emod_emod, Int.sub_add_cancel]

theorem sub_emod (a b n : Int) : (a - b) % n = (a % n - b % n) % n := by
  apply (emod_add_cancel_right b).mp
  rw [Int.sub_add_cancel, ← Int.add_emod_emod, Int.sub_add_cancel, emod_emod]
@@ -1105,32 +1098,6 @@ theorem bmod_def (x : Int) (m : Nat) : bmod x m =
    (x % m) - m :=
  rfl

theorem bdiv_add_bmod (x : Int) (m : Nat) : m * bdiv x m + bmod x m = x := by
  unfold bdiv bmod
  split
  · simp_all only [Nat.cast_ofNat_Int, Int.mul_zero, emod_zero, Int.zero_add, Int.sub_zero,
      ite_self]
  · dsimp only
    split
    · exact ediv_add_emod x m
    · rw [Int.mul_add, Int.mul_one, Int.add_assoc, Int.add_comm m, Int.sub_add_cancel]
      exact ediv_add_emod x m

theorem bmod_add_bdiv (x : Int) (m : Nat) : bmod x m + m * bdiv x m = x := by
  rw [Int.add_comm]; exact bdiv_add_bmod x m

theorem bdiv_add_bmod' (x : Int) (m : Nat) : bdiv x m * m + bmod x m = x := by
  rw [Int.mul_comm]; exact bdiv_add_bmod x m

theorem bmod_add_bdiv' (x : Int) (m : Nat) : bmod x m + bdiv x m * m = x := by
  rw [Int.add_comm]; exact bdiv_add_bmod' x m

theorem bmod_eq_self_sub_mul_bdiv (x : Int) (m : Nat) : bmod x m = x - m * bdiv x m := by
  rw [← Int.add_sub_cancel (bmod x m), bmod_add_bdiv]

theorem bmod_eq_self_sub_bdiv_mul (x : Int) (m : Nat) : bmod x m = x - bdiv x m * m := by
  rw [← Int.add_sub_cancel (bmod x m), bmod_add_bdiv']

theorem bmod_pos (x : Int) (m : Nat) (p : x % m < (m + 1) / 2) : bmod x m = x % m := by
  simp [bmod_def, p]

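As a concrete check (editorial note, not part of the diff), `bdiv_add_bmod` above is the division identity for balanced division: `m * bdiv x m + bmod x m = x`, where `bmod` returns a remainder in `[-m/2, m/2)`. A small worked instance, using only `Int.bdiv` and `Int.bmod` from `Init.Data.Int`:

```lean
-- 7 = 2 * 4 + (-1): balanced division rounds 7/4 up to 2,
-- leaving the balanced remainder -1 rather than 3.
example : Int.bmod 7 4 = -1 := by decide
example : Int.bdiv 7 4 = 2 := by decide
example : (4 : Int) * Int.bdiv 7 4 + Int.bmod 7 4 = 7 := by decide
```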
@@ -1,8 +1,7 @@
|
||||
/-
|
||||
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro,
|
||||
Kim Morrison
|
||||
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro
|
||||
-/
|
||||
prelude
|
||||
import Init.Data.Bool
|
||||
@@ -758,6 +757,207 @@ theorem length_eq_of_beq [BEq α] {l₁ l₂ : List α} (h : l₁ == l₂) : l
|
||||
| nil => simp
|
||||
| cons b l₂ => simp [isEqv, ih]
|
||||
|
||||
/-! ### foldlM and foldrM -/
|
||||
|
||||
@[simp] theorem foldlM_reverse [Monad m] (l : List α) (f : β → α → m β) (b) :
|
||||
l.reverse.foldlM f b = l.foldrM (fun x y => f y x) b := rfl
|
||||
|
||||
@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : List α) :
|
||||
(l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
|
||||
induction l generalizing b <;> simp [*]
|
||||
|
||||
@[simp] theorem foldrM_cons [Monad m] [LawfulMonad m] (a : α) (l) (f : α → β → m β) (b) :
|
||||
(a :: l).foldrM f b = l.foldrM f b >>= f a := by
|
||||
simp only [foldrM]
|
||||
induction l <;> simp_all
|
||||
|
||||
theorem foldl_eq_foldlM (f : β → α → β) (b) (l : List α) :
|
||||
l.foldl f b = l.foldlM (m := Id) f b := by
|
||||
induction l generalizing b <;> simp [*, foldl]
|
||||
|
||||
theorem foldr_eq_foldrM (f : α → β → β) (b) (l : List α) :
|
||||
l.foldr f b = l.foldrM (m := Id) f b := by
|
||||
induction l <;> simp [*, foldr]
|
||||
|
||||
@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm
|
||||
|
||||
@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
|
||||
|
||||
/-! ### foldl and foldr -/
|
||||
|
||||
@[simp] theorem foldr_cons_eq_append (l : List α) : l.foldr cons l' = l ++ l' := by
|
||||
induction l <;> simp [*]
|
||||
|
||||
@[deprecated foldr_cons_eq_append (since := "2024-08-22")] abbrev foldr_self_append := @foldr_cons_eq_append
|
||||
|
||||
@[simp] theorem foldl_flip_cons_eq_append (l : List α) : l.foldl (fun x y => y :: x) l' = l.reverse ++ l' := by
|
||||
induction l generalizing l' <;> simp [*]
|
||||
|
||||
theorem foldr_cons_nil (l : List α) : l.foldr cons [] = l := by simp
|
||||
|
||||
@[deprecated foldr_cons_nil (since := "2024-09-04")] abbrev foldr_self := @foldr_cons_nil
|
||||
|
||||
theorem foldl_map (f : β₁ → β₂) (g : α → β₂ → α) (l : List β₁) (init : α) :
|
||||
(l.map f).foldl g init = l.foldl (fun x y => g x (f y)) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldr_map (f : α₁ → α₂) (g : α₂ → β → β) (l : List α₁) (init : β) :
|
||||
(l.map f).foldr g init = l.foldr (fun x y => g (f x) y) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldl_filterMap (f : α → Option β) (g : γ → β → γ) (l : List α) (init : γ) :
|
||||
(l.filterMap f).foldl g init = l.foldl (fun x y => match f y with | some b => g x b | none => x) init := by
|
||||
induction l generalizing init with
|
||||
| nil => rfl
|
||||
| cons a l ih =>
|
||||
simp only [filterMap_cons, foldl_cons]
|
||||
cases f a <;> simp [ih]
|
||||
|
||||
theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : List α) (init : γ) :
|
||||
(l.filterMap f).foldr g init = l.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
|
||||
induction l generalizing init with
|
||||
| nil => rfl
|
||||
| cons a l ih =>
|
||||
simp only [filterMap_cons, foldr_cons]
|
||||
cases f a <;> simp [ih]
|
||||
|
||||
theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
|
||||
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
|
||||
(l.map g).foldl f' (g a) = g (l.foldl f a) := by
|
||||
induction l generalizing a
|
||||
· simp
|
||||
· simp [*, h]
|
||||
|
||||
theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
|
||||
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
|
||||
(l.map g).foldr f' (g a) = g (l.foldr f a) := by
|
||||
induction l generalizing a
|
||||
· simp
|
||||
· simp [*, h]
|
||||
|
||||
theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] :
|
||||
∀ {l : List α} {a₁ a₂}, l.foldl op (op a₁ a₂) = op a₁ (l.foldl op a₂)
|
||||
| [], a₁, a₂ => rfl
|
||||
| a :: l, a₁, a₂ => by
|
||||
simp only [foldl_cons, ha.assoc]
|
||||
rw [foldl_assoc]
|
||||
|
||||
theorem foldr_assoc {op : α → α → α} [ha : Std.Associative op] :
|
||||
∀ {l : List α} {a₁ a₂}, l.foldr op (op a₁ a₂) = op (l.foldr op a₁) a₂
|
||||
| [], a₁, a₂ => rfl
|
||||
| a :: l, a₁, a₂ => by
|
||||
simp only [foldr_cons, ha.assoc]
|
||||
rw [foldr_assoc]
|
||||
|
||||
theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : List β) (init : α₁)
|
||||
(H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
|
||||
induction l generalizing init <;> simp [*, H]
|
||||
|
||||
theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : List α) (init : β₁)
|
||||
(H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
|
||||
induction l <;> simp [*, H]
|
||||
|
||||
/--
|
||||
Prove a proposition about the result of `List.foldl`,
|
||||
by proving it for the initial data,
|
||||
and the implication that the operation applied to any element of the list preserves the property.
|
||||
|
||||
The motive can take values in `Sort _`, so this may be used to construct data,
|
||||
as well as to prove propositions.
|
||||
-/
|
||||
def foldlRecOn {motive : β → Sort _} : ∀ (l : List α) (op : β → α → β) (b : β) (_ : motive b)
|
||||
(_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op b a)), motive (List.foldl op b l)
|
||||
| [], _, _, hb, _ => hb
|
||||
| hd :: tl, op, b, hb, hl =>
|
||||
foldlRecOn tl op (op b hd) (hl b hb hd (mem_cons_self hd tl))
|
||||
fun y hy x hx => hl y hy x (mem_cons_of_mem hd hx)
|
||||
|
||||
@[simp] theorem foldlRecOn_nil {motive : β → Sort _} (hb : motive b)
|
||||
(hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op b a)) :
|
||||
foldlRecOn [] op b hb hl = hb := rfl
|
||||
|
||||
@[simp] theorem foldlRecOn_cons {motive : β → Sort _} (hb : motive b)
|
||||
(hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op b a)) :
|
||||
foldlRecOn (x :: l) op b hb hl =
|
||||
foldlRecOn l op (op b x) (hl b hb x (mem_cons_self x l))
|
||||
(fun b c a m => hl b c a (mem_cons_of_mem x m)) :=
|
||||
rfl
|
||||
|
||||
/--
|
||||
Prove a proposition about the result of `List.foldr`,
|
||||
by proving it for the initial data,
|
||||
and the implication that the operation applied to any element of the list preserves the property.
|
||||
|
||||
The motive can take values in `Sort _`, so this may be used to construct data,
|
||||
as well as to prove propositions.
|
||||
-/
|
||||
def foldrRecOn {motive : β → Sort _} : ∀ (l : List α) (op : α → β → β) (b : β) (_ : motive b)
|
||||
(_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op a b)), motive (List.foldr op b l)
|
||||
| nil, _, _, hb, _ => hb
|
||||
| x :: l, op, b, hb, hl =>
|
||||
hl (foldr op b l)
|
||||
(foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m)) x (mem_cons_self x l)
|
||||
|
||||
@[simp] theorem foldrRecOn_nil {motive : β → Sort _} (hb : motive b)
|
||||
(hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op a b)) :
|
||||
foldrRecOn [] op b hb hl = hb := rfl
|
||||
|
||||
@[simp] theorem foldrRecOn_cons {motive : β → Sort _} (hb : motive b)
|
||||
(hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op a b)) :
|
||||
foldrRecOn (x :: l) op b hb hl =
|
||||
hl _ (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m))
|
||||
x (mem_cons_self x l) :=
|
||||
rfl
|
||||
|
||||
/--
|
||||
We can prove that two folds over the same list are related (by some arbitrary relation)
|
||||
if we know that the initial elements are related and the folding function, for each element of the list,
|
||||
preserves the relation.
|
||||
-/
|
||||
theorem foldl_rel {l : List α} {f g : β → α → β} {a b : β} (r : β → β → Prop)
|
||||
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
|
||||
r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
|
||||
induction l generalizing a b with
|
||||
| nil => simp_all
|
||||
| cons a l ih =>
|
||||
simp only [foldl_cons]
|
||||
apply ih
|
||||
· simp_all
|
||||
· exact fun a m c c' h => h' _ (by simp_all) _ _ h
|
||||
|
||||
/--
|
||||
We can prove that two folds over the same list are related (by some arbitrary relation)
|
||||
if we know that the initial elements are related and the folding function, for each element of the list,
|
||||
preserves the relation.
|
||||
-/
|
||||
theorem foldr_rel {l : List α} {f g : α → β → β} {a b : β} (r : β → β → Prop)
|
||||
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
|
||||
r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
|
||||
induction l generalizing a b with
|
||||
| nil => simp_all
|
||||
| cons a l ih =>
|
||||
simp only [foldr_cons]
|
||||
apply h'
|
||||
· simp
|
||||
· exact ih h fun a m c c' h => h' _ (by simp_all) _ _ h
|
||||
|
||||
@[simp] theorem foldl_add_const (l : List α) (a b : Nat) :
|
||||
l.foldl (fun x _ => x + a) b = b + a * l.length := by
|
||||
induction l generalizing b with
|
||||
| nil => simp
|
||||
| cons y l ih =>
|
||||
simp only [foldl_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc,
|
||||
Nat.add_comm a]
|
||||
|
||||
@[simp] theorem foldr_add_const (l : List α) (a b : Nat) :
|
||||
l.foldr (fun _ x => x + a) b = b + a * l.length := by
|
||||
induction l generalizing b with
|
||||
| nil => simp
|
||||
| cons y l ih =>
|
||||
simp only [foldr_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc]
|
||||
|
||||
/-! ### getLast -/
|
||||
|
||||
theorem getLast_eq_getElem : ∀ (l : List α) (h : l ≠ []),
|
||||
@@ -1016,6 +1216,27 @@ theorem getLast?_tail (l : List α) : (tail l).getLast? = if l.length = 1 then n
|
||||
|
||||
/-! ### map -/
|
||||
|
||||
@[simp] theorem map_id_fun : map (id : α → α) = id := by
|
||||
funext l
|
||||
induction l <;> simp_all
|
||||
|
||||
/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
|
||||
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun
|
||||
|
||||
-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
|
||||
theorem map_id (l : List α) : map (id : α → α) l = l := by
|
||||
induction l <;> simp_all
|
||||
|
||||
/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
|
||||
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
|
||||
theorem map_id' (l : List α) : map (fun (a : α) => a) l = l := map_id l
|
||||
|
||||
/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
|
||||
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : List α) : map f l = l := by
|
||||
simp [show f = id from funext h]
|
||||
|
||||
theorem map_singleton (f : α → β) (a : α) : map f [a] = [f a] := rfl
|
||||
|
||||
@[simp] theorem length_map (as : List α) (f : α → β) : (as.map f).length = as.length := by
|
||||
induction as with
|
||||
| nil => simp [List.map]
|
||||
@@ -1041,27 +1262,6 @@ theorem get_map (f : α → β) {l i} :
|
||||
get (map f l) i = f (get l ⟨i, length_map l f ▸ i.2⟩) := by
|
||||
simp
|
||||
|
||||
@[simp] theorem map_id_fun : map (id : α → α) = id := by
|
||||
funext l
|
||||
induction l <;> simp_all
|
||||
|
||||
/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
|
||||
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun
|
||||
|
||||
-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
|
||||
theorem map_id (l : List α) : map (id : α → α) l = l := by
|
||||
induction l <;> simp_all
|
||||
|
||||
/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
|
||||
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
|
||||
theorem map_id' (l : List α) : map (fun (a : α) => a) l = l := map_id l
|
||||
|
||||
/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
|
||||
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : List α) : map f l = l := by
|
||||
simp [show f = id from funext h]
|
||||
|
||||
theorem map_singleton (f : α → β) (a : α) : map f [a] = [f a] := rfl
|
||||
|
||||
@[simp] theorem mem_map {f : α → β} : ∀ {l : List α}, b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b
|
||||
| [] => by simp
|
||||
| _ :: l => by simp [mem_map (l := l), eq_comm (a := b)]
|
||||
@@ -1115,10 +1315,6 @@ theorem map_eq_cons_iff' {f : α → β} {l : List α} :
|
||||
|
||||
@[deprecated map_eq_cons' (since := "2024-09-05")] abbrev map_eq_cons' := @map_eq_cons_iff'
|
||||
|
||||
@[simp] theorem map_eq_singleton_iff {f : α → β} {l : List α} {b : β} :
|
||||
map f l = [b] ↔ ∃ a, l = [a] ∧ f a = b := by
|
||||
simp [map_eq_cons_iff]
|
||||
|
||||
theorem map_eq_map_iff : map f l = map g l ↔ ∀ a ∈ l, f a = g a := by
|
||||
induction l <;> simp
|
||||
|
||||
@@ -1285,7 +1481,7 @@ theorem map_filter_eq_foldr (f : α → β) (p : α → Bool) (as : List α) :
|
||||
@[simp] theorem filter_append {p : α → Bool} :
|
||||
∀ (l₁ l₂ : List α), filter p (l₁ ++ l₂) = filter p l₁ ++ filter p l₂
|
||||
| [], _ => rfl
|
||||
| a :: l₁, l₂ => by simp only [cons_append, filter]; split <;> simp [filter_append l₁]
|
||||
| a :: l₁, l₂ => by simp [filter]; split <;> simp [filter_append l₁]
|
||||
|
||||
theorem filter_eq_cons_iff {l} {a} {as} :
|
||||
filter p l = a :: as ↔
|
||||
@@ -1765,6 +1961,16 @@ theorem set_append {s t : List α} :
|
||||
(s ++ t).set i x = s ++ t.set (i - s.length) x := by
|
||||
rw [set_append, if_neg (by simp_all)]
|
||||
|
||||
@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l l' : List α) :
|
||||
(l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
|
||||
induction l <;> simp [*]
|
||||
|
||||
@[simp] theorem foldl_append {β : Type _} (f : β → α → β) (b) (l l' : List α) :
|
||||
(l ++ l').foldl f b = l'.foldl f (l.foldl f b) := by simp [foldl_eq_foldlM]
|
||||
|
||||
@[simp] theorem foldr_append (f : α → β → β) (b) (l l' : List α) :
|
||||
(l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM]
|
||||
|
||||
theorem filterMap_eq_append_iff {f : α → Option β} :
|
||||
filterMap f l = L₁ ++ L₂ ↔ ∃ l₁ l₂, l = l₁ ++ l₂ ∧ filterMap f l₁ = L₁ ∧ filterMap f l₂ = L₂ := by
|
||||
constructor
|
||||
@@ -1913,6 +2119,14 @@ theorem head?_flatten {L : List (List α)} : (flatten L).head? = L.findSome? fun
|
||||
-- `getLast?_flatten` is proved later, after the `reverse` section.
|
||||
-- `head_flatten` and `getLast_flatten` are proved in `Init.Data.List.Find`.
|
||||
|
||||
theorem foldl_flatten (f : β → α → β) (b : β) (L : List (List α)) :
|
||||
(flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
|
||||
induction L generalizing b <;> simp_all
|
||||
|
||||
theorem foldr_flatten (f : α → β → β) (b : β) (L : List (List α)) :
|
||||
(flatten L).foldr f b = L.foldr (fun l b => l.foldr f b) b := by
|
||||
induction L <;> simp_all
|
||||
|
||||
@[simp] theorem map_flatten (f : α → β) (L : List (List α)) : map f (flatten L) = flatten (map (map f) L) := by
|
||||
induction L <;> simp_all
|
||||
|
||||
@@ -2485,114 +2699,10 @@ theorem flatMap_reverse {β} (l : List α) (f : α → List β) : (l.reverse.fla
|
||||
@[simp] theorem reverseAux_eq (as bs : List α) : reverseAux as bs = reverse as ++ bs :=
|
||||
reverseAux_eq_append ..
|
||||
|
||||
@[simp] theorem reverse_replicate (n) (a : α) : reverse (replicate n a) = replicate n a :=
|
||||
eq_replicate_iff.2
|
||||
⟨by rw [length_reverse, length_replicate],
|
||||
fun _ h => eq_of_mem_replicate (mem_reverse.1 h)⟩
|
||||
|
||||
|
||||
/-! ### foldlM and foldrM -/
|
||||
|
||||
@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : List α) :
|
||||
(l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
|
||||
induction l generalizing b <;> simp [*]
|
||||
|
||||
@[simp] theorem foldrM_cons [Monad m] [LawfulMonad m] (a : α) (l) (f : α → β → m β) (b) :
|
||||
(a :: l).foldrM f b = l.foldrM f b >>= f a := by
|
||||
simp only [foldrM]
|
||||
induction l <;> simp_all
|
||||
|
||||
theorem foldl_eq_foldlM (f : β → α → β) (b) (l : List α) :
|
||||
l.foldl f b = l.foldlM (m := Id) f b := by
|
||||
induction l generalizing b <;> simp [*, foldl]
|
||||
|
||||
theorem foldr_eq_foldrM (f : α → β → β) (b) (l : List α) :
|
||||
l.foldr f b = l.foldrM (m := Id) f b := by
|
||||
induction l <;> simp [*, foldr]
|
||||
|
||||
@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm
|
||||
|
||||
@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
|
||||
|
||||
@[simp] theorem foldlM_reverse [Monad m] (l : List α) (f : β → α → m β) (b) :
|
||||
l.reverse.foldlM f b = l.foldrM (fun x y => f y x) b := rfl
|
||||
|
||||
@[simp] theorem foldrM_reverse [Monad m] (l : List α) (f : α → β → m β) (b) :
|
||||
l.reverse.foldrM f b = l.foldlM (fun x y => f y x) b :=
|
||||
(foldlM_reverse ..).symm.trans <| by simp
|
||||
|
||||
/-! ### foldl and foldr -/
|
||||
|
||||
@[simp] theorem foldr_cons_eq_append (l : List α) : l.foldr cons l' = l ++ l' := by
|
||||
induction l <;> simp [*]
|
||||
|
||||
@[deprecated foldr_cons_eq_append (since := "2024-08-22")] abbrev foldr_self_append := @foldr_cons_eq_append
|
||||
|
||||
@[simp] theorem foldl_flip_cons_eq_append (l : List α) : l.foldl (fun x y => y :: x) l' = l.reverse ++ l' := by
|
||||
induction l generalizing l' <;> simp [*]
|
||||
|
||||
theorem foldr_cons_nil (l : List α) : l.foldr cons [] = l := by simp
|
||||
|
||||
@[deprecated foldr_cons_nil (since := "2024-09-04")] abbrev foldr_self := @foldr_cons_nil
|
||||
|
||||
theorem foldl_map (f : β₁ → β₂) (g : α → β₂ → α) (l : List β₁) (init : α) :
|
||||
(l.map f).foldl g init = l.foldl (fun x y => g x (f y)) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldr_map (f : α₁ → α₂) (g : α₂ → β → β) (l : List α₁) (init : β) :
|
||||
(l.map f).foldr g init = l.foldr (fun x y => g (f x) y) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldl_filterMap (f : α → Option β) (g : γ → β → γ) (l : List α) (init : γ) :
    (l.filterMap f).foldl g init = l.foldl (fun x y => match f y with | some b => g x b | none => x) init := by
  induction l generalizing init with
  | nil => rfl
  | cons a l ih =>
    simp only [filterMap_cons, foldl_cons]
    cases f a <;> simp [ih]

theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : List α) (init : γ) :
    (l.filterMap f).foldr g init = l.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
  induction l generalizing init with
  | nil => rfl
  | cons a l ih =>
    simp only [filterMap_cons, foldr_cons]
    cases f a <;> simp [ih]

theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldl f' (g a) = g (l.foldl f a) := by
  induction l generalizing a
  · simp
  · simp [*, h]

theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldr f' (g a) = g (l.foldr f a) := by
  induction l generalizing a
  · simp
  · simp [*, h]

@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l l' : List α) :
    (l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
  induction l <;> simp [*]

@[simp] theorem foldl_append {β : Type _} (f : β → α → β) (b) (l l' : List α) :
    (l ++ l').foldl f b = l'.foldl f (l.foldl f b) := by simp [foldl_eq_foldlM]

@[simp] theorem foldr_append (f : α → β → β) (b) (l l' : List α) :
    (l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM]

theorem foldl_flatten (f : β → α → β) (b : β) (L : List (List α)) :
    (flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
  induction L generalizing b <;> simp_all

theorem foldr_flatten (f : α → β → β) (b : β) (L : List (List α)) :
    (flatten L).foldr f b = L.foldr (fun l b => l.foldr f b) b := by
  induction L <;> simp_all

@[simp] theorem foldl_reverse (l : List α) (f : β → α → β) (b) :
    l.reverse.foldl f b = l.foldr (fun x y => f y x) b := by simp [foldl_eq_foldlM, foldr_eq_foldrM]

@@ -2606,127 +2716,10 @@ theorem foldl_eq_foldr_reverse (l : List α) (f : β → α → β) (b) :
theorem foldr_eq_foldl_reverse (l : List α) (f : α → β → β) (b) :
    l.foldr f b = l.reverse.foldl (fun x y => f y x) b := by simp

theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldl op (op a₁ a₂) = op a₁ (l.foldl op a₂)
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldl_cons, ha.assoc]
    rw [foldl_assoc]

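`foldl_assoc` lets part of the initial accumulator of an associative fold be pulled out in front. A minimal sketch with `Nat` addition (the choice of operation is an illustrative assumption; it relies on the `Std.Associative` instance for `Nat` addition being in scope):

```lean
-- Pull `a` out of the initial accumulator of an associative fold.
example (l : List Nat) (a b : Nat) :
    l.foldl (· + ·) (a + b) = a + l.foldl (· + ·) b :=
  List.foldl_assoc
```
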
theorem foldr_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldr op (op a₁ a₂) = op (l.foldr op a₁) a₂
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldr_cons, ha.assoc]
    rw [foldr_assoc]

theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : List β) (init : α₁)
    (H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
  induction l generalizing init <;> simp [*, H]

theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : List α) (init : β₁)
    (H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
  induction l <;> simp [*, H]

/--
Prove a proposition about the result of `List.foldl`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldlRecOn {motive : β → Sort _} : ∀ (l : List α) (op : β → α → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op b a)), motive (List.foldl op b l)
  | [], _, _, hb, _ => hb
  | hd :: tl, op, b, hb, hl =>
    foldlRecOn tl op (op b hd) (hl b hb hd (mem_cons_self hd tl))
      fun y hy x hx => hl y hy x (mem_cons_of_mem hd hx)

@[simp] theorem foldlRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op b a)) :
    foldlRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldlRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op b a)) :
    foldlRecOn (x :: l) op b hb hl =
      foldlRecOn l op (op b x) (hl b hb x (mem_cons_self x l))
        (fun b c a m => hl b c a (mem_cons_of_mem x m)) :=
  rfl

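`foldlRecOn` can carry an invariant through a fold step by step; a minimal sketch (the positivity invariant and the `Nat` lemmas used are illustrative assumptions):

```lean
-- The invariant `0 < ·` holds for the initial accumulator and is preserved
-- by each step of the fold, so it holds for the final accumulator.
example (l : List Nat) : 0 < l.foldl (· + ·) 1 :=
  List.foldlRecOn l (· + ·) 1 Nat.one_pos
    fun b hb a _ => Nat.lt_of_lt_of_le hb (Nat.le_add_right b a)
```
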
/--
Prove a proposition about the result of `List.foldr`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldrRecOn {motive : β → Sort _} : ∀ (l : List α) (op : α → β → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op a b)), motive (List.foldr op b l)
  | nil, _, _, hb, _ => hb
  | x :: l, op, b, hb, hl =>
    hl (foldr op b l)
      (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m)) x (mem_cons_self x l)

@[simp] theorem foldrRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op a b)) :
    foldrRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldrRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op a b)) :
    foldrRecOn (x :: l) op b hb hl =
      hl _ (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m))
        x (mem_cons_self x l) :=
  rfl

/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldl_rel {l : List α} {f g : β → α → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
    r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldl_cons]
    apply ih
    · simp_all
    · exact fun a m c c' h => h' _ (by simp_all) _ _ h

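`foldl_rel` relates two folds whose steps preserve a relation; a minimal sketch with `≤` on `Nat` (the relation and the `Nat` lemmas chosen are illustrative assumptions):

```lean
-- If the initial accumulators satisfy `a ≤ b`, folding addition over the
-- same list preserves the inequality.
example (l : List Nat) (a b : Nat) (h : a ≤ b) :
    l.foldl (· + ·) a ≤ l.foldl (· + ·) b :=
  List.foldl_rel (· ≤ ·) h fun x _ c c' hcc => Nat.add_le_add hcc (Nat.le_refl x)
```
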
/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldr_rel {l : List α} {f g : α → β → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
    r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldr_cons]
    apply h'
    · simp
    · exact ih h fun a m c c' h => h' _ (by simp_all) _ _ h

@[simp] theorem foldl_add_const (l : List α) (a b : Nat) :
    l.foldl (fun x _ => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldl_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc,
      Nat.add_comm a]

@[simp] theorem foldr_add_const (l : List α) (a b : Nat) :
    l.foldr (fun _ x => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldr_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc]

@[simp] theorem reverse_replicate (n) (a : α) : reverse (replicate n a) = replicate n a :=
  eq_replicate_iff.2
    ⟨by rw [length_reverse, length_replicate],
      fun _ h => eq_of_mem_replicate (mem_reverse.1 h)⟩

/-! #### Further results about `getLast` and `getLast?` -/

@@ -510,18 +510,4 @@ theorem Perm.eraseP (f : α → Bool) {l₁ l₂ : List α}
  refine (IH₁ H).trans (IH₂ ((p₁.pairwise_iff ?_).1 H))
  exact fun h h₁ h₂ => h h₂ h₁

theorem perm_insertIdx {α} (x : α) (l : List α) {n} (h : n ≤ l.length) :
    insertIdx n x l ~ x :: l := by
  induction l generalizing n with
  | nil =>
    cases n with
    | zero => rfl
    | succ => cases h
  | cons _ _ ih =>
    cases n with
    | zero => simp [insertIdx]
    | succ =>
      simp only [insertIdx, modifyTailIdx]
      refine .trans (.cons _ (ih (Nat.le_of_succ_le_succ h))) (.swap ..)

end List

@@ -253,10 +253,6 @@ theorem merge_perm_append : ∀ {xs ys : List α}, merge xs ys le ~ xs ++ ys
|
||||
· exact (merge_perm_append.cons y).trans
|
||||
((Perm.swap x y _).trans (perm_middle.symm.cons x))
|
||||
|
||||
theorem Perm.merge (s₁ s₂ : α → α → Bool) (hl : l₁ ~ l₂) (hr : r₁ ~ r₂) :
|
||||
merge l₁ r₁ s₁ ~ merge l₂ r₂ s₂ :=
|
||||
Perm.trans (merge_perm_append ..) <| Perm.trans (Perm.append hl hr) <| Perm.symm (merge_perm_append ..)
|
||||
|
||||
/-! ### mergeSort -/
|
||||
|
||||
@[simp] theorem mergeSort_nil : [].mergeSort r = [] := by rw [List.mergeSort]
|
||||
|
||||
@@ -46,7 +46,7 @@ theorem toArray_cons (a : α) (l : List α) : (a :: l).toArray = #[a] ++ l.toArr
|
||||
@[simp] theorem isEmpty_toArray (l : List α) : l.toArray.isEmpty = l.isEmpty := by
|
||||
cases l <;> simp [Array.isEmpty]
|
||||
|
||||
@[simp] theorem toArray_singleton (a : α) : (List.singleton a).toArray = Array.singleton a := rfl
|
||||
@[simp] theorem toArray_singleton (a : α) : (List.singleton a).toArray = singleton a := rfl
|
||||
|
||||
@[simp] theorem back!_toArray [Inhabited α] (l : List α) : l.toArray.back! = l.getLast! := by
|
||||
simp only [back!, size_toArray, Array.get!_eq_getElem!, getElem!_toArray, getLast!_eq_getElem!]
|
||||
|
||||
@@ -49,17 +49,4 @@ theorem lt_div_mul_self (h : 0 < k) (w : k ≤ x) : x - k < x / k * k := by
  have : x % k < k := mod_lt x h
  omega

theorem div_pos (hba : b ≤ a) (hb : 0 < b) : 0 < a / b := by
  cases b
  · contradiction
  · simp [Nat.pos_iff_ne_zero, div_eq_zero_iff_lt, hba]

theorem div_le_div_left (hcb : c ≤ b) (hc : 0 < c) : a / b ≤ a / c :=
  (Nat.le_div_iff_mul_le hc).2 <|
    Nat.le_trans (Nat.mul_le_mul_left _ hcb) (Nat.div_mul_le_self a b)

theorem div_add_le_right {z : Nat} (h : 0 < z) (x y : Nat) :
    x / (y + z) ≤ x / z :=
  div_le_div_left (Nat.le_add_left z y) h

end Nat

@@ -159,8 +159,6 @@ def UInt32.xor (a b : UInt32) : UInt32 := ⟨a.toBitVec ^^^ b.toBitVec⟩
|
||||
def UInt32.shiftLeft (a b : UInt32) : UInt32 := ⟨a.toBitVec <<< (mod b 32).toBitVec⟩
|
||||
@[extern "lean_uint32_shift_right"]
|
||||
def UInt32.shiftRight (a b : UInt32) : UInt32 := ⟨a.toBitVec >>> (mod b 32).toBitVec⟩
|
||||
def UInt32.lt (a b : UInt32) : Prop := a.toBitVec < b.toBitVec
|
||||
def UInt32.le (a b : UInt32) : Prop := a.toBitVec ≤ b.toBitVec
|
||||
|
||||
instance : Add UInt32 := ⟨UInt32.add⟩
|
||||
instance : Sub UInt32 := ⟨UInt32.sub⟩
|
||||
@@ -171,8 +169,6 @@ set_option linter.deprecated false in
|
||||
instance : HMod UInt32 Nat UInt32 := ⟨UInt32.modn⟩
|
||||
|
||||
instance : Div UInt32 := ⟨UInt32.div⟩
|
||||
instance : LT UInt32 := ⟨UInt32.lt⟩
|
||||
instance : LE UInt32 := ⟨UInt32.le⟩
|
||||
|
||||
@[extern "lean_uint32_complement"]
|
||||
def UInt32.complement (a : UInt32) : UInt32 := ⟨~~~a.toBitVec⟩
|
||||
|
||||
@@ -103,7 +103,7 @@ of bounds.
@[inline] def head [NeZero n] (v : Vector α n) := v[0]'(Nat.pos_of_neZero n)

/-- Push an element `x` to the end of a vector. -/
@[inline] def push (v : Vector α n) (x : α) : Vector α (n + 1) :=
@[inline] def push (x : α) (v : Vector α n) : Vector α (n + 1) :=
  ⟨v.toArray.push x, by simp⟩

/-- Remove the last element of a vector. -/
@@ -136,18 +136,6 @@ This will perform the update destructively provided that the vector has a refere
@[inline] def set! (v : Vector α n) (i : Nat) (x : α) : Vector α n :=
  ⟨v.toArray.set! i x, by simp⟩

@[inline] def foldlM [Monad m] (f : β → α → m β) (b : β) (v : Vector α n) : m β :=
  v.toArray.foldlM f b

@[inline] def foldrM [Monad m] (f : α → β → m β) (b : β) (v : Vector α n) : m β :=
  v.toArray.foldrM f b

@[inline] def foldl (f : β → α → β) (b : β) (v : Vector α n) : β :=
  v.toArray.foldl f b

@[inline] def foldr (f : α → β → β) (b : β) (v : Vector α n) : β :=
  v.toArray.foldr f b

/-- Append two vectors. -/
@[inline] def append (v : Vector α n) (w : Vector α m) : Vector α (n + m) :=
  ⟨v.toArray ++ w.toArray, by simp⟩

@@ -1,7 +1,7 @@
/-
Copyright (c) 2024 Shreyas Srinivas. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Shreyas Srinivas, Francois Dorais, Kim Morrison
Authors: Shreyas Srinivas, Francois Dorais
-/
prelude
import Init.Data.Vector.Basic

@@ -66,18 +66,6 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem back?_mk (a : Array α) (h : a.size = n) :
    (Vector.mk a h).back? = a.back? := rfl

@[simp] theorem foldlM_mk [Monad m] (f : β → α → m β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldlM f b = a.foldlM f b := rfl

@[simp] theorem foldrM_mk [Monad m] (f : α → β → m β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldrM f b = a.foldrM f b := rfl

@[simp] theorem foldl_mk (f : β → α → β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldl f b = a.foldl f b := rfl

@[simp] theorem foldr_mk (f : α → β → β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldr f b = a.foldr f b := rfl

@[simp] theorem drop_mk (a : Array α) (h : a.size = n) (m) :
    (Vector.mk a h).drop m = Vector.mk (a.extract m a.size) (by simp [h]) := rfl

@@ -153,14 +141,6 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem all_mk (p : α → Bool) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).all p = a.all p := rfl

@[simp] theorem eq_mk : v = Vector.mk a h ↔ v.toArray = a := by
  cases v
  simp

@[simp] theorem mk_eq : Vector.mk a h = v ↔ a = v.toArray := by
  cases v
  simp

/-! ### toArray lemmas -/

@[simp] theorem getElem_toArray {α n} (xs : Vector α n) (i : Nat) (h : i < xs.toArray.size) :

@@ -1043,12 +1023,11 @@ theorem mem_setIfInBounds (v : Vector α n) (i : Nat) (hi : i < n) (a : α) :
  cases l₂
  simp

/-! ### map -/
/-! Content below this point has not yet been aligned with `List` and `Array`. -/

@[simp] theorem getElem_map (f : α → β) (a : Vector α n) (i : Nat) (hi : i < n) :
    (a.map f)[i] = f a[i] := by
  cases a
  simp
@[simp] theorem getElem_ofFn {α n} (f : Fin n → α) (i : Nat) (h : i < n) :
    (Vector.ofFn f)[i] = f ⟨i, by simpa using h⟩ := by
  simp [ofFn]

/-- The empty vector maps to the empty vector. -/
@[simp]

@@ -1056,123 +1035,6 @@ theorem map_empty (f : α → β) : map f #v[] = #v[] := by
  rw [map, mk.injEq]
  exact Array.map_empty f

@[simp] theorem map_push {f : α → β} {as : Vector α n} {x : α} :
    (as.push x).map f = (as.map f).push (f x) := by
  cases as
  simp

@[simp] theorem map_id_fun : map (n := n) (id : α → α) = id := by
  funext l
  induction l <;> simp_all

/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
@[simp] theorem map_id_fun' : map (n := n) (fun (a : α) => a) = id := map_id_fun

-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
theorem map_id (l : Vector α n) : map (id : α → α) l = l := by
  cases l <;> simp_all

/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
theorem map_id' (l : Vector α n) : map (fun (a : α) => a) l = l := map_id l

/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : Vector α n) : map f l = l := by
  simp [show f = id from funext h]

theorem map_singleton (f : α → β) (a : α) : map f #v[a] = #v[f a] := rfl

@[simp] theorem mem_map {f : α → β} {l : Vector α n} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by
  cases l
  simp

theorem exists_of_mem_map (h : b ∈ map f l) : ∃ a, a ∈ l ∧ f a = b := mem_map.1 h

theorem mem_map_of_mem (f : α → β) (h : a ∈ l) : f a ∈ map f l := mem_map.2 ⟨_, h, rfl⟩

theorem forall_mem_map {f : α → β} {l : Vector α n} {P : β → Prop} :
    (∀ (i) (_ : i ∈ l.map f), P i) ↔ ∀ (j) (_ : j ∈ l), P (f j) := by
  simp

@[simp] theorem map_inj_left {f g : α → β} : map f l = map g l ↔ ∀ a ∈ l, f a = g a := by
  cases l <;> simp_all

theorem map_congr_left (h : ∀ a ∈ l, f a = g a) : map f l = map g l :=
  map_inj_left.2 h

theorem map_inj [NeZero n] : map (n := n) f = map g ↔ f = g := by
  constructor
  · intro h
    ext a
    replace h := congrFun h (mkVector n a)
    simp only [mkVector, map_mk, mk.injEq, Array.map_inj_left, Array.mem_mkArray, and_imp,
      forall_eq_apply_imp_iff] at h
    exact h (NeZero.ne n)
  · intro h; subst h; rfl

theorem map_eq_push_iff {f : α → β} {l : Vector α (n + 1)} {l₂ : Vector β n} {b : β} :
    map f l = l₂.push b ↔ ∃ l₁ a, l = l₁.push a ∧ map f l₁ = l₂ ∧ f a = b := by
  rcases l with ⟨l, h⟩
  rcases l₂ with ⟨l₂, rfl⟩
  simp only [map_mk, push_mk, mk.injEq, Array.map_eq_push_iff]
  constructor
  · rintro ⟨l₁, a, rfl, rfl, rfl⟩
    refine ⟨⟨l₁, by simp⟩, a, by simp⟩
  · rintro ⟨l₁, a, h₁, h₂, rfl⟩
    refine ⟨l₁.toArray, a, by simp_all⟩

@[simp] theorem map_eq_singleton_iff {f : α → β} {l : Vector α 1} {b : β} :
    map f l = #v[b] ↔ ∃ a, l = #v[a] ∧ f a = b := by
  cases l
  simp

theorem map_eq_map_iff {f g : α → β} {l : Vector α n} :
    map f l = map g l ↔ ∀ a ∈ l, f a = g a := by
  cases l <;> simp_all

theorem map_eq_iff {f : α → β} {l : Vector α n} {l' : Vector β n} :
    map f l = l' ↔ ∀ i (h : i < n), l'[i] = f l[i] := by
  rcases l with ⟨l, rfl⟩
  rcases l' with ⟨l', h'⟩
  simp only [map_mk, eq_mk, Array.map_eq_iff, getElem_mk]
  constructor
  · intro w i h
    simpa [h, h'] using w i
  · intro w i
    if h : i < l.size then
      simpa [h, h'] using w i h
    else
      rw [getElem?_neg, getElem?_neg, Option.map_none'] <;> omega

@[simp] theorem map_set {f : α → β} {l : Vector α n} {i : Nat} {h : i < n} {a : α} :
    (l.set i a).map f = (l.map f).set i (f a) (by simpa using h) := by
  cases l
  simp

@[simp] theorem map_setIfInBounds {f : α → β} {l : Vector α n} {i : Nat} {a : α} :
    (l.setIfInBounds i a).map f = (l.map f).setIfInBounds i (f a) := by
  cases l
  simp

@[simp] theorem map_pop {f : α → β} {l : Vector α n} : l.pop.map f = (l.map f).pop := by
  cases l
  simp

@[simp] theorem back?_map {f : α → β} {l : Vector α n} : (l.map f).back? = l.back?.map f := by
  cases l
  simp

@[simp] theorem map_map {f : α → β} {g : β → γ} {as : Vector α n} :
    (as.map f).map g = as.map (g ∘ f) := by
  cases as
  simp

/-! Content below this point has not yet been aligned with `List` and `Array`. -/

@[simp] theorem getElem_ofFn {α n} (f : Fin n → α) (i : Nat) (h : i < n) :
    (Vector.ofFn f)[i] = f ⟨i, by simpa using h⟩ := by
  simp [ofFn]

@[simp] theorem getElem_push_last {v : Vector α n} {x : α} : (v.push x)[n] = x := by
  rcases v with ⟨data, rfl⟩
  simp
@@ -1226,6 +1088,13 @@ theorem getElem_append_right {a : Vector α n} {b : Vector α m} {i : Nat} (h :
  cases a
  simp

/-! ### map -/

@[simp] theorem getElem_map (f : α → β) (a : Vector α n) (i : Nat) (hi : i < n) :
    (a.map f)[i] = f a[i] := by
  cases a
  simp

/-! ### zipWith -/

@[simp] theorem getElem_zipWith (f : α → β → γ) (a : Vector α n) (b : Vector β n) (i : Nat)
@@ -1234,37 +1103,6 @@ theorem getElem_append_right {a : Vector α n} {b : Vector α m} {i : Nat} (h :
  cases b
  simp

/-! ### foldlM and foldrM -/

@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l : Vector α n) (l' : Vector α n') :
    (l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
  cases l
  cases l'
  simp

@[simp] theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (l : Vector α n) (a : α) :
    (l.push a).foldrM f init = f a init >>= l.foldrM f := by
  cases l
  simp

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Vector α n) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  cases l
  simp [Array.foldl_eq_foldlM]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Vector α n) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  cases l
  simp [Array.foldr_eq_foldrM]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Vector α n) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Vector α n) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

/-! ### foldl and foldr -/

/-! ### take -/

@[simp] theorem take_size (a : Vector α n) : a.take n = a.cast (by simp) := by

@@ -10,4 +10,3 @@ import Init.Grind.Lemmas
import Init.Grind.Cases
import Init.Grind.Propagator
import Init.Grind.Util
import Init.Grind.Offset

@@ -8,7 +8,6 @@ import Init.Core
import Init.SimpLemmas
import Init.Classical
import Init.ByCases
import Init.Grind.Util

namespace Lean.Grind

@@ -25,9 +24,6 @@ theorem and_eq_of_eq_false_right {a b : Prop} (h : b = False) : (a ∧ b) = Fals
theorem eq_true_of_and_eq_true_left {a b : Prop} (h : (a ∧ b) = True) : a = True := by simp_all
theorem eq_true_of_and_eq_true_right {a b : Prop} (h : (a ∧ b) = True) : b = True := by simp_all

theorem or_of_and_eq_false {a b : Prop} (h : (a ∧ b) = False) : (¬a ∨ ¬b) := by
  by_cases a <;> by_cases b <;> simp_all

/-! Or -/

theorem or_eq_of_eq_true_left {a b : Prop} (h : a = True) : (a ∨ b) = True := by simp [h]

@@ -38,15 +34,6 @@ theorem or_eq_of_eq_false_right {a b : Prop} (h : b = False) : (a ∨ b) = a :=
theorem eq_false_of_or_eq_false_left {a b : Prop} (h : (a ∨ b) = False) : a = False := by simp_all
theorem eq_false_of_or_eq_false_right {a b : Prop} (h : (a ∨ b) = False) : b = False := by simp_all

/-! Implies -/

theorem imp_eq_of_eq_false_left {a b : Prop} (h : a = False) : (a → b) = True := by simp [h]
theorem imp_eq_of_eq_true_right {a b : Prop} (h : b = True) : (a → b) = True := by simp [h]
theorem imp_eq_of_eq_true_left {a b : Prop} (h : a = True) : (a → b) = b := by simp [h]

theorem eq_true_of_imp_eq_false {a b : Prop} (h : (a → b) = False) : a = True := by simp_all
theorem eq_false_of_imp_eq_false {a b : Prop} (h : (a → b) = False) : b = False := by simp_all

/-! Not -/

theorem not_eq_of_eq_true {a : Prop} (h : a = True) : (Not a) = False := by simp [h]
@@ -63,38 +50,4 @@ theorem false_of_not_eq_self {a : Prop} (h : (Not a) = a) : False := by
theorem eq_eq_of_eq_true_left {a b : Prop} (h : a = True) : (a = b) = b := by simp [h]
theorem eq_eq_of_eq_true_right {a b : Prop} (h : b = True) : (a = b) = a := by simp [h]

theorem eq_congr {α : Sort u} {a₁ b₁ a₂ b₂ : α} (h₁ : a₁ = a₂) (h₂ : b₁ = b₂) : (a₁ = b₁) = (a₂ = b₂) := by simp [*]
theorem eq_congr' {α : Sort u} {a₁ b₁ a₂ b₂ : α} (h₁ : a₁ = b₂) (h₂ : b₁ = a₂) : (a₁ = b₁) = (a₂ = b₂) := by rw [h₁, h₂, Eq.comm (a := a₂)]

/-! Forall -/

theorem forall_propagator (p : Prop) (q : p → Prop) (q' : Prop) (h₁ : p = True) (h₂ : q (of_eq_true h₁) = q') : (∀ hp : p, q hp) = q' := by
  apply propext; apply Iff.intro
  · intro h'; exact Eq.mp h₂ (h' (of_eq_true h₁))
  · intro h'; intros; exact Eq.mpr h₂ h'

theorem of_forall_eq_false (α : Sort u) (p : α → Prop) (h : (∀ x : α, p x) = False) : ∃ x : α, ¬ p x := by simp_all

/-! dite -/

theorem dite_cond_eq_true' {α : Sort u} {c : Prop} {_ : Decidable c} {a : c → α} {b : ¬ c → α} {r : α} (h₁ : c = True) (h₂ : a (of_eq_true h₁) = r) : (dite c a b) = r := by simp [h₁, h₂]
theorem dite_cond_eq_false' {α : Sort u} {c : Prop} {_ : Decidable c} {a : c → α} {b : ¬ c → α} {r : α} (h₁ : c = False) (h₂ : b (of_eq_false h₁) = r) : (dite c a b) = r := by simp [h₁, h₂]

/-! Casts -/

theorem eqRec_heq.{u_1, u_2} {α : Sort u_2} {a : α}
    {motive : (x : α) → a = x → Sort u_1} (v : motive a (Eq.refl a)) {b : α} (h : a = b)
    : HEq (@Eq.rec α a motive v b h) v := by
  subst h; rfl

theorem eqRecOn_heq.{u_1, u_2} {α : Sort u_2} {a : α}
    {motive : (x : α) → a = x → Sort u_1} {b : α} (h : a = b) (v : motive a (Eq.refl a))
    : HEq (@Eq.recOn α a motive b h v) v := by
  subst h; rfl

theorem eqNDRec_heq.{u_1, u_2} {α : Sort u_2} {a : α}
    {motive : α → Sort u_1} (v : motive a) {b : α} (h : a = b)
    : HEq (@Eq.ndrec α a motive v b h) v := by
  subst h; rfl

end Lean.Grind

@@ -5,7 +5,6 @@ Authors: Leonardo de Moura
-/
prelude
import Init.SimpLemmas
import Init.PropLemmas
import Init.Classical
import Init.ByCases

@@ -41,9 +40,8 @@ attribute [grind_norm] not_true
-- False
attribute [grind_norm] not_false_eq_true

-- Remark: we disabled the following normalization rule because we want this information when implementing splitting heuristics
-- Implication as a clause
theorem imp_eq (p q : Prop) : (p → q) = (¬ p ∨ q) := by
@[grind_norm↓] theorem imp_eq (p q : Prop) : (p → q) = (¬ p ∨ q) := by
  by_cases p <;> by_cases q <;> simp [*]

-- And
@@ -60,19 +58,13 @@ attribute [grind_norm] ite_true ite_false
@[grind_norm↓] theorem not_ite {_ : Decidable p} (q r : Prop) : (¬ite p q r) = ite p (¬q) (¬r) := by
  by_cases p <;> simp [*]

@[grind_norm] theorem ite_true_false {_ : Decidable p} : (ite p True False) = p := by
  by_cases p <;> simp

@[grind_norm] theorem ite_false_true {_ : Decidable p} : (ite p False True) = ¬p := by
  by_cases p <;> simp

-- Forall
@[grind_norm↓] theorem not_forall (p : α → Prop) : (¬∀ x, p x) = ∃ x, ¬p x := by simp
attribute [grind_norm] forall_and

-- Exists
@[grind_norm↓] theorem not_exists (p : α → Prop) : (¬∃ x, p x) = ∀ x, ¬p x := by simp
attribute [grind_norm] exists_const exists_or exists_prop exists_and_left exists_and_right
attribute [grind_norm] exists_const exists_or

-- Bool cond
@[grind_norm] theorem cond_eq_ite (c : Bool) (a b : α) : cond c a b = ite c a b := by
@@ -115,7 +107,4 @@ attribute [grind_norm] Nat.le_zero_eq
-- GT GE
attribute [grind_norm] GT.gt GE.ge

-- Succ
attribute [grind_norm] Nat.succ_eq_add_one

end Lean.Grind

@@ -1,165 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Core
import Init.Omega

namespace Lean.Grind.Offset

abbrev Var := Nat
abbrev Context := Lean.RArray Nat

def fixedVar := 100000000 -- Any big number should work here

def Var.denote (ctx : Context) (v : Var) : Nat :=
  bif v == fixedVar then 1 else ctx.get v

structure Cnstr where
  x : Var
  y : Var
  k : Nat := 0
  l : Bool := true
  deriving Repr, DecidableEq, Inhabited

def Cnstr.denote (c : Cnstr) (ctx : Context) : Prop :=
  if c.l then
    c.x.denote ctx + c.k ≤ c.y.denote ctx
  else
    c.x.denote ctx ≤ c.y.denote ctx + c.k

def trivialCnstr : Cnstr := { x := 0, y := 0, k := 0, l := true }

@[simp] theorem denote_trivial (ctx : Context) : trivialCnstr.denote ctx := by
  simp [Cnstr.denote, trivialCnstr]

def Cnstr.trans (c₁ c₂ : Cnstr) : Cnstr :=
  if c₁.y = c₂.x then
    let { x, k := k₁, l := l₁, .. } := c₁
    let { y, k := k₂, l := l₂, .. } := c₂
    match l₁, l₂ with
    | false, false =>
      { x, y, k := k₁ + k₂, l := false }
    | false, true =>
      if k₁ < k₂ then
        { x, y, k := k₂ - k₁, l := true }
      else
        { x, y, k := k₁ - k₂, l := false }
    | true, false =>
      if k₁ < k₂ then
        { x, y, k := k₂ - k₁, l := false }
      else
        { x, y, k := k₁ - k₂, l := true }
    | true, true =>
      { x, y, k := k₁ + k₂, l := true }
  else
    trivialCnstr

@[simp] theorem Cnstr.denote_trans_easy (ctx : Context) (c₁ c₂ : Cnstr) (h : c₁.y ≠ c₂.x) : (c₁.trans c₂).denote ctx := by
  simp [*, Cnstr.trans]

@[simp] theorem Cnstr.denote_trans (ctx : Context) (c₁ c₂ : Cnstr) : c₁.denote ctx → c₂.denote ctx → (c₁.trans c₂).denote ctx := by
  by_cases c₁.y = c₂.x
  case neg => simp [*]
  simp [trans, *]
  let { x, k := k₁, l := l₁, .. } := c₁
  let { y, k := k₂, l := l₂, .. } := c₂
  simp_all; split
  · simp [denote]; omega
  · split <;> simp [denote] <;> omega
  · split <;> simp [denote] <;> omega
  · simp [denote]; omega

def Cnstr.isTrivial (c : Cnstr) : Bool := c.x == c.y && c.k == 0

theorem Cnstr.of_isTrivial (ctx : Context) (c : Cnstr) : c.isTrivial = true → c.denote ctx := by
  cases c; simp [isTrivial]; intros; simp [*, denote]

def Cnstr.isFalse (c : Cnstr) : Bool := c.x == c.y && c.k != 0 && c.l == true

theorem Cnstr.of_isFalse (ctx : Context) {c : Cnstr} : c.isFalse = true → ¬c.denote ctx := by
  cases c; simp [isFalse]; intros; simp [*, denote]; omega

def Cnstrs := List Cnstr

def Cnstrs.denoteAnd' (ctx : Context) (c₁ : Cnstr) (c₂ : Cnstrs) : Prop :=
  match c₂ with
  | [] => c₁.denote ctx
  | c::cs => c₁.denote ctx ∧ Cnstrs.denoteAnd' ctx c cs

theorem Cnstrs.denote'_trans (ctx : Context) (c₁ c : Cnstr) (cs : Cnstrs) : c₁.denote ctx → denoteAnd' ctx c cs → denoteAnd' ctx (c₁.trans c) cs := by
  induction cs
  next => simp [denoteAnd', *]; apply Cnstr.denote_trans
  next c cs ih => simp [denoteAnd']; intros; simp [*]

def Cnstrs.trans' (c₁ : Cnstr) (c₂ : Cnstrs) : Cnstr :=
  match c₂ with
  | [] => c₁
  | c::c₂ => trans' (c₁.trans c) c₂

@[simp] theorem Cnstrs.denote'_trans' (ctx : Context) (c₁ : Cnstr) (c₂ : Cnstrs) : denoteAnd' ctx c₁ c₂ → (trans' c₁ c₂).denote ctx := by
  induction c₂ generalizing c₁
  next => intros; simp_all [trans', denoteAnd']
  next c cs ih => simp [denoteAnd']; intros; simp [trans']; apply ih; apply denote'_trans <;> assumption

def Cnstrs.denoteAnd (ctx : Context) (c : Cnstrs) : Prop :=
  match c with
  | [] => True
  | c::cs => denoteAnd' ctx c cs

def Cnstrs.trans (c : Cnstrs) : Cnstr :=
  match c with
  | [] => trivialCnstr
  | c::cs => trans' c cs

theorem Cnstrs.of_denoteAnd_trans {ctx : Context} {c : Cnstrs} : c.denoteAnd ctx → c.trans.denote ctx := by
  cases c <;> simp [*, trans, denoteAnd] <;> intros <;> simp [*]

def Cnstrs.isFalse (c : Cnstrs) : Bool :=
  c.trans.isFalse

theorem Cnstrs.unsat' (ctx : Context) (c : Cnstrs) : c.isFalse = true → ¬ c.denoteAnd ctx := by
  simp [isFalse]; intro h₁ h₂
  have := of_denoteAnd_trans h₂
  have := Cnstr.of_isFalse ctx h₁
  contradiction

/-- `denote ctx [c_1, ..., c_n] C` is `c_1.denote ctx → ... → c_n.denote ctx → C` -/
def Cnstrs.denote (ctx : Context) (cs : Cnstrs) (C : Prop) : Prop :=
  match cs with
  | [] => C
  | c::cs => c.denote ctx → denote ctx cs C

theorem Cnstrs.not_denoteAnd'_eq (ctx : Context) (c : Cnstr) (cs : Cnstrs) (C : Prop) : (denoteAnd' ctx c cs → C) = denote ctx (c::cs) C := by
  simp [denote]
  induction cs generalizing c
  next => simp [denoteAnd', denote]
  next c' cs ih =>
    simp [denoteAnd', denote, *]

theorem Cnstrs.not_denoteAnd_eq (ctx : Context) (cs : Cnstrs) (C : Prop) : (denoteAnd ctx cs → C) = denote ctx cs C := by
  cases cs
  next => simp [denoteAnd, denote]
  next c cs => apply not_denoteAnd'_eq

def Cnstr.isImpliedBy (cs : Cnstrs) (c : Cnstr) : Bool :=
  cs.trans == c

/-! Main theorems used by `grind`. -/

/-- Auxiliary theorem used by `grind` to prove that a system of offset inequalities is unsatisfiable. -/
|
||||
theorem Cnstrs.unsat (ctx : Context) (cs : Cnstrs) : cs.isFalse = true → cs.denote ctx False := by
|
||||
intro h
|
||||
rw [← not_denoteAnd_eq]
|
||||
apply unsat'
|
||||
assumption
|
||||
|
||||
/-- Auxiliary theorem used by `grind` to prove an implied offset inequality. -/
|
||||
theorem Cnstrs.imp (ctx : Context) (cs : Cnstrs) (c : Cnstr) (h : c.isImpliedBy cs = true) : cs.denote ctx (c.denote ctx) := by
|
||||
rw [← eq_of_beq h]
|
||||
rw [← not_denoteAnd_eq]
|
||||
apply of_denoteAnd_trans
|
||||
|
||||
end Lean.Grind.Offset
|
||||
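As a small illustration (a sketch, assuming the `Cnstr` and `Context` definitions from earlier in this file), `Cnstrs.denote` turns a constraint list into a chain of implications, which is why the two auxiliary theorems above produce goals in exactly the shape `grind` needs:

```lean
-- `Cnstrs.denote ctx [c₁, c₂] C` unfolds, by its two match equations,
-- to `c₁.denote ctx → c₂.denote ctx → C`; the equality is definitional.
example (ctx : Context) (c₁ c₂ : Cnstr) (C : Prop) :
    Cnstrs.denote ctx ([c₁, c₂] : List Cnstr) C
      = (c₁.denote ctx → c₂.denote ctx → C) :=
  rfl
```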
@@ -6,45 +6,17 @@ Authors: Leonardo de Moura
prelude
import Init.Tactics

namespace Lean.Parser.Attr

syntax grindEq := "="
syntax grindEqBoth := atomic("_" "=" "_")
syntax grindEqRhs := atomic("=" "_")
syntax grindBwd := "←"
syntax grindFwd := "→"

syntax (name := grind) "grind" (grindEqBoth <|> grindEqRhs <|> grindEq <|> grindBwd <|> grindFwd)? : attr

end Lean.Parser.Attr

namespace Lean.Grind
/--
The configuration for `grind`.
Passed to `grind` using, for example, the `grind (config := { matchEqs := true })` syntax.
Passed to `grind` using, for example, the `grind (config := { eager := true })` syntax.
-/
structure Config where
  /-- Maximum number of case-splits in a proof search branch. It does not include splits performed during normalization. -/
  splits : Nat := 5
  /-- Maximum number of E-matching (aka heuristic theorem instantiation) rounds before each case split. -/
  ematch : Nat := 5
  /--
  Maximum term generation.
  The input goal terms have generation 0. When we instantiate a theorem using a term from generation `n`,
  the new terms have generation `n+1`. Thus, this parameter limits the length of an instantiation chain. -/
  gen : Nat := 5
  /-- Maximum number of theorem instances generated using E-matching in a proof search tree branch. -/
  instances : Nat := 1000
  /-- If `matchEqs` is `true`, `grind` uses `match`-equations as E-matching theorems. -/
  matchEqs : Bool := true
  /-- If `splitMatch` is `true`, `grind` performs case-splitting on `match`-expressions during the search. -/
  splitMatch : Bool := true
  /-- If `splitIte` is `true`, `grind` performs case-splitting on `if-then-else` expressions during the search. -/
  splitIte : Bool := true
  /--
  If `splitIndPred` is `true`, `grind` performs case-splitting on inductive predicates.
  Otherwise, it performs case-splitting only on types marked with the `[grind_split]` attribute. -/
  splitIndPred : Bool := true
  /--
  When `eager` is true (default: `false`), `grind` eagerly splits `if-then-else` and `match`
  expressions.
  -/
  eager : Bool := false
  deriving Inhabited, BEq

end Lean.Grind

@@ -55,7 +27,7 @@ namespace Lean.Parser.Tactic
`grind` tactic and related tactics.
-/

-- TODO: parameters
syntax (name := grind) "grind" optConfig ("on_failure " term)? : tactic
-- TODO: configuration option, parameters
syntax (name := grind) "grind" : tactic

end Lean.Parser.Tactic
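As a hedged usage sketch (assuming the `grind` tactic syntax with `optConfig` declared above is available, and that the goal is within the tactic's reach), the `Config` fields are passed inline at the call site:

```lean
-- Hypothetical example: tighten grind's search budget for a simple goal.
-- The field names `splits` and `ematch` come from the `Config` structure above.
example (a b : Nat) (h : a = b) : b = a := by
  grind (config := { splits := 3, ematch := 2 })
```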
@@ -11,22 +11,7 @@ namespace Lean.Grind
/-- A helper gadget for annotating nested proofs in goals. -/
def nestedProof (p : Prop) (h : p) : p := h

/--
Gadget for marking terms that should not be normalized by `grind`'s simplifier.
`grind` uses a simproc to implement this feature.
We use it when adding instances of `match`-equations to prevent them from being simplified to `True`.
-/
def doNotSimp {α : Sort u} (a : α) : α := a

/-- Gadget for representing offsets `t+k` in patterns. -/
def offset (a b : Nat) : Nat := a + b

/--
Gadget for annotating the equalities in `match`-equation conclusions.
`_origin` is the term used to instantiate the `match`-equation using E-matching.
When `EqMatch a b origin` is `True`, we mark `origin` as a resolved case-split.
-/
def EqMatch (a b : α) {_origin : α} : Prop := a = b
set_option pp.proofs true

theorem nestedProof_congr (p q : Prop) (h : p = q) (hp : p) (hq : q) : HEq (nestedProof p hp) (nestedProof q hq) := by
  subst h; apply HEq.refl
@@ -1,13 +0,0 @@
|
||||
/-
|
||||
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Joachim Breitner
|
||||
-/
|
||||
prelude
|
||||
import Init.Internal.Order
|
||||
|
||||
/-!
|
||||
This directory is used for components of the standard library that are either considered
|
||||
implementation details or not yet ready for public consumption, and that should be available
|
||||
without explicit import (in contrast to `Std.Internal`)
|
||||
-/
|
||||
@@ -1,8 +0,0 @@
|
||||
/-
|
||||
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Joachim Breitner
|
||||
-/
|
||||
prelude
|
||||
import Init.Internal.Order.Basic
|
||||
import Init.Internal.Order.Tactic
|
||||
@@ -1,693 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude

import Init.ByCases
import Init.RCases

/-!
This module contains some basic definitions and results from domain theory, intended to be used as
the underlying construction of the `partial_fixpoint` feature. It is not meant to be used as a
general purpose library for domain theory, but can be of interest to users who want to extend
the `partial_fixpoint` machinery (e.g. mark more functions as monotone or register more monads).

This follows the corresponding
[Isabelle development](https://isabelle.in.tum.de/library/HOL/HOL/Partial_Function.html), as also
described in [Alexander Krauss: Recursive Definitions of Monadic Functions](https://www21.in.tum.de/~krauss/papers/mrec.pdf).
-/

universe u v w

namespace Lean.Order

/--
A partial order is a reflexive, transitive and antisymmetric relation.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class PartialOrder (α : Sort u) where
  /--
  A “less-or-equal-to” or “approximates” relation.

  This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
  -/
  rel : α → α → Prop
  rel_refl : ∀ {x}, rel x x
  rel_trans : ∀ {x y z}, rel x y → rel y z → rel x z
  rel_antisymm : ∀ {x y}, rel x y → rel y x → x = y

@[inherit_doc] scoped infix:50 " ⊑ " => PartialOrder.rel

section PartialOrder

variable {α : Sort u} [PartialOrder α]

theorem PartialOrder.rel_of_eq {x y : α} (h : x = y) : x ⊑ y := by cases h; apply rel_refl

/--
A chain is a totally ordered set (representing a set as a predicate).

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def chain (c : α → Prop) : Prop := ∀ x y, c x → c y → x ⊑ y ∨ y ⊑ x

end PartialOrder
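To make the interface concrete, here is a sketch (not part of the library, and assuming we are inside the `Lean.Order` namespace) instantiating `PartialOrder` with the usual order on `Nat`:

```lean
-- Sketch: `Nat` with `≤` satisfies the three `PartialOrder` axioms,
-- discharged by the corresponding standard-library lemmas.
instance : PartialOrder Nat where
  rel := (· ≤ ·)
  rel_refl := Nat.le_refl _
  rel_trans := Nat.le_trans
  rel_antisymm := Nat.le_antisymm
```

Note that for `partial_fixpoint` itself the relevant orders are "approximation" orders such as `FlatOrder` below, not numeric ones; this instance only illustrates the axioms.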
section CCPO

/--
A chain-complete partial order (CCPO) is a partial order where every chain has a least upper bound.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class CCPO (α : Sort u) extends PartialOrder α where
  /--
  The least upper bound of a chain.

  This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
  -/
  csup : (α → Prop) → α
  csup_spec {c : α → Prop} (hc : chain c) : csup c ⊑ x ↔ (∀ y, c y → y ⊑ x)

open PartialOrder CCPO

variable {α : Sort u} [CCPO α]

theorem csup_le {c : α → Prop} (hchain : chain c) : (∀ y, c y → y ⊑ x) → csup c ⊑ x :=
  (csup_spec hchain).mpr

theorem le_csup {c : α → Prop} (hchain : chain c) {y : α} (hy : c y) : y ⊑ csup c :=
  (csup_spec hchain).mp rel_refl y hy

/--
The bottom element is the least upper bound of the empty chain.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def bot : α := csup (fun _ => False)

scoped notation "⊥" => bot

theorem bot_le (x : α) : ⊥ ⊑ x := by
  apply csup_le
  · intro x y hx hy; contradiction
  · intro x hx; contradiction

end CCPO

section monotone

variable {α : Sort u} [PartialOrder α]
variable {β : Sort v} [PartialOrder β]

/--
A function is monotone if it maps related elements to related elements.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def monotone (f : α → β) : Prop := ∀ x y, x ⊑ y → f x ⊑ f y

theorem monotone_const (c : β) : monotone (fun (_ : α) => c) :=
  fun _ _ _ => PartialOrder.rel_refl

theorem monotone_id : monotone (fun (x : α) => x) :=
  fun _ _ hxy => hxy

theorem monotone_compose
    {γ : Sort w} [PartialOrder γ]
    {f : α → β} {g : β → γ}
    (hf : monotone f) (hg : monotone g) :
    monotone (fun x => g (f x)) := fun _ _ hxy => hg _ _ (hf _ _ hxy)

end monotone
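For instance (a sketch using only the lemmas above, and assuming we are inside the `Lean.Order` namespace), composing a monotone function with itself stays monotone:

```lean
-- Sketch: `monotone_compose` with f = g gives monotonicity of f ∘ f.
example {f : α → α} [PartialOrder α] (hf : monotone f) :
    monotone (fun x => f (f x)) :=
  monotone_compose hf hf
```

This compositional style is exactly how the `monotonicity` tactic (declared later in this diff) discharges `monotone` goals step by step.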
section admissibility

variable {α : Sort u} [CCPO α]

open PartialOrder CCPO

/--
A predicate is admissible if it can be transferred from the elements of a chain to the chain's least
upper bound. Such predicates can be used in fixpoint induction.

This definition implies `P ⊥`. Sometimes (e.g. in Isabelle) the empty chain is excluded
from this definition, and `P ⊥` is a separate condition of the induction predicate.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def admissible (P : α → Prop) :=
  ∀ (c : α → Prop), chain c → (∀ x, c x → P x) → P (csup c)

theorem admissible_const_true : admissible (fun (_ : α) => True) :=
  fun _ _ _ => trivial

theorem admissible_and (P Q : α → Prop)
    (hadm₁ : admissible P) (hadm₂ : admissible Q) : admissible (fun x => P x ∧ Q x) :=
  fun c hchain h =>
    ⟨hadm₁ c hchain fun x hx => (h x hx).1,
     hadm₂ c hchain fun x hx => (h x hx).2⟩

theorem chain_conj (c P : α → Prop) (hchain : chain c) : chain (fun x => c x ∧ P x) := by
  intro x y ⟨hcx, _⟩ ⟨hcy, _⟩
  exact hchain x y hcx hcy

theorem csup_conj (c P : α → Prop) (hchain : chain c) (h : ∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ P y) :
    csup c = csup (fun x => c x ∧ P x) := by
  apply rel_antisymm
  · apply csup_le hchain
    intro x hcx
    obtain ⟨y, hcy, hxy, hPy⟩ := h x hcx
    apply rel_trans hxy; clear x hcx hxy
    apply le_csup (chain_conj _ _ hchain) ⟨hcy, hPy⟩
  · apply csup_le (chain_conj _ _ hchain)
    intro x ⟨hcx, hPx⟩
    apply le_csup hchain hcx

theorem admissible_or (P Q : α → Prop)
    (hadm₁ : admissible P) (hadm₂ : admissible Q) : admissible (fun x => P x ∨ Q x) := by
  intro c hchain h
  have : (∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ P y) ∨ (∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ Q y) := by
    open Classical in
    apply Decidable.or_iff_not_imp_left.mpr
    intro h'
    simp only [not_forall, not_imp, not_exists, not_and] at h'
    obtain ⟨x, hcx, hx⟩ := h'
    intro y hcy
    cases hchain x y hcx hcy with
    | inl hxy =>
      refine ⟨y, hcy, rel_refl, ?_⟩
      cases h y hcy with
      | inl hPy => exfalso; apply hx y hcy hxy hPy
      | inr hQy => assumption
    | inr hyx =>
      refine ⟨x, hcx, hyx, ?_⟩
      cases h x hcx with
      | inl hPx => exfalso; apply hx x hcx rel_refl hPx
      | inr hQx => assumption
  cases this with
  | inl hP =>
    left
    rw [csup_conj (h := hP) (hchain := hchain)]
    apply hadm₁ _ (chain_conj _ _ hchain)
    intro x ⟨hcx, hPx⟩
    exact hPx
  | inr hQ =>
    right
    rw [csup_conj (h := hQ) (hchain := hchain)]
    apply hadm₂ _ (chain_conj _ _ hchain)
    intro x ⟨hcx, hQx⟩
    exact hQx

def admissible_pi (P : α → β → Prop)
    (hadm₁ : ∀ y, admissible (fun x => P x y)) : admissible (fun x => ∀ y, P x y) :=
  fun c hchain h y => hadm₁ y c hchain fun x hx => h x hx y

end admissibility

section fix

open PartialOrder CCPO

variable {α : Sort u} [CCPO α]

variable {c : α → Prop} (hchain : chain c)

/--
The transfinite iteration of a function `f` is a set that contains `⊥` and is closed under
application of `f` and `csup`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
inductive iterates (f : α → α) : α → Prop where
  | step : iterates f x → iterates f (f x)
  | sup {c : α → Prop} (hc : chain c) (hi : ∀ x, c x → iterates f x) : iterates f (csup c)

theorem chain_iterates {f : α → α} (hf : monotone f) : chain (iterates f) := by
  intros x y hx hy
  induction hx generalizing y
  case step x hx ih =>
    induction hy
    case step y hy _ =>
      cases ih y hy
      · left; apply hf; assumption
      · right; apply hf; assumption
    case sup c hchain hi ih2 =>
      show f x ⊑ csup c ∨ csup c ⊑ f x
      by_cases h : ∃ z, c z ∧ f x ⊑ z
      · left
        obtain ⟨z, hz, hfz⟩ := h
        apply rel_trans hfz
        apply le_csup hchain hz
      · right
        apply csup_le hchain _
        intro z hz
        rw [not_exists] at h
        specialize h z
        rw [not_and] at h
        specialize h hz
        cases ih2 z hz
        next => contradiction
        next => assumption
  case sup c hchain hi ih =>
    show rel (csup c) y ∨ rel y (csup c)
    by_cases h : ∃ z, c z ∧ rel y z
    · right
      obtain ⟨z, hz, hfz⟩ := h
      apply rel_trans hfz
      apply le_csup hchain hz
    · left
      apply csup_le hchain _
      intro z hz
      rw [not_exists] at h
      specialize h z
      rw [not_and] at h
      specialize h hz
      cases ih z hz y hy
      next => assumption
      next => contradiction

theorem rel_f_of_iterates {f : α → α} (hf : monotone f) {x : α} (hx : iterates f x) : x ⊑ f x := by
  induction hx
  case step ih =>
    apply hf
    assumption
  case sup c hchain hi ih =>
    apply csup_le hchain
    intro y hy
    apply rel_trans (ih y hy)
    apply hf
    apply le_csup hchain hy

set_option linter.unusedVariables false in
/--
The least fixpoint of a monotone function is the least upper bound of its transfinite iteration.

The `monotone f` assumption is not strictly necessary for the definition, but without it the
definition is not very meaningful, and it simplifies applying theorems like `fix_eq` if every use of
`fix` already has the monotonicity requirement.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def fix (f : α → α) (hmono : monotone f) := csup (iterates f)

/--
The main fixpoint theorem for fixpoints of monotone functions in chain-complete partial orders.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
theorem fix_eq {f : α → α} (hf : monotone f) : fix f hf = f (fix f hf) := by
  apply rel_antisymm
  · apply rel_f_of_iterates hf
    apply iterates.sup (chain_iterates hf)
    exact fun _ h => h
  · apply le_csup (chain_iterates hf)
    apply iterates.step
    apply iterates.sup (chain_iterates hf)
    intro y hy
    exact hy

/--
The fixpoint induction theorem: an admissible predicate holds for a least fixpoint if it is
preserved by the fixpoint's function.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
theorem fix_induct {f : α → α} (hf : monotone f)
    (motive : α → Prop) (hadm : admissible motive)
    (h : ∀ x, motive x → motive (f x)) : motive (fix f hf) := by
  apply hadm _ (chain_iterates hf)
  intro x hiterates
  induction hiterates with
  | @step x hiter ih => apply h x ih
  | @sup c hchain hiter ih => apply hadm c hchain ih

end fix
section fun_order

open PartialOrder

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

instance instOrderPi [∀ x, PartialOrder (β x)] : PartialOrder (∀ x, β x) where
  rel f g := ∀ x, f x ⊑ g x
  rel_refl _ := rel_refl
  rel_trans hf hg x := rel_trans (hf x) (hg x)
  rel_antisymm hf hg := funext (fun x => rel_antisymm (hf x) (hg x))

theorem monotone_of_monotone_apply [PartialOrder γ] [∀ x, PartialOrder (β x)] (f : γ → (∀ x, β x))
    (h : ∀ y, monotone (fun x => f x y)) : monotone f :=
  fun x y hxy z => h z x y hxy

theorem monotone_apply [PartialOrder γ] [∀ x, PartialOrder (β x)] (a : α) (f : γ → ∀ x, β x)
    (h : monotone f) :
    monotone (fun x => f x a) := fun _ _ hfg => h _ _ hfg a

theorem chain_apply [∀ x, PartialOrder (β x)] {c : (∀ x, β x) → Prop} (hc : chain c) (x : α) :
    chain (fun y => ∃ f, c f ∧ f x = y) := by
  intro _ _ ⟨f, hf, hfeq⟩ ⟨g, hg, hgeq⟩
  subst hfeq; subst hgeq
  cases hc f g hf hg
  next h => left; apply h x
  next h => right; apply h x

def fun_csup [∀ x, CCPO (β x)] (c : (∀ x, β x) → Prop) (x : α) :=
  CCPO.csup (fun y => ∃ f, c f ∧ f x = y)

instance instCCPOPi [∀ x, CCPO (β x)] : CCPO (∀ x, β x) where
  csup := fun_csup
  csup_spec := by
    intro f c hc
    constructor
    next =>
      intro hf g hg x
      apply rel_trans _ (hf x); clear hf
      apply le_csup (chain_apply hc x)
      exact ⟨g, hg, rfl⟩
    next =>
      intro h x
      apply csup_le (chain_apply hc x)
      intro y ⟨z, hz, hyz⟩
      subst y
      apply h z hz

def admissible_apply [∀ x, CCPO (β x)] (P : ∀ x, β x → Prop) (x : α)
    (hadm : admissible (P x)) : admissible (fun (f : ∀ x, β x) => P x (f x)) := by
  intro c hchain h
  apply hadm _ (chain_apply hchain x)
  rintro _ ⟨f, hcf, rfl⟩
  apply h _ hcf

def admissible_pi_apply [∀ x, CCPO (β x)] (P : ∀ x, β x → Prop) (hadm : ∀ x, admissible (P x)) :
    admissible (fun (f : ∀ x, β x) => ∀ x, P x (f x)) := by
  apply admissible_pi
  intro
  apply admissible_apply
  apply hadm

end fun_order

section monotone_lemmas

theorem monotone_letFun
    {α : Sort u} {β : Sort v} {γ : Sort w} [PartialOrder α] [PartialOrder β]
    (v : γ) (k : α → γ → β)
    (hmono : ∀ y, monotone (fun x => k x y)) :
    monotone fun (x : α) => letFun v (k x) := hmono v

theorem monotone_ite
    {α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
    (c : Prop) [Decidable c]
    (k₁ : α → β) (k₂ : α → β)
    (hmono₁ : monotone k₁) (hmono₂ : monotone k₂) :
    monotone fun x => if c then k₁ x else k₂ x := by
  split
  · apply hmono₁
  · apply hmono₂

theorem monotone_dite
    {α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
    (c : Prop) [Decidable c]
    (k₁ : α → c → β) (k₂ : α → ¬ c → β)
    (hmono₁ : monotone k₁) (hmono₂ : monotone k₂) :
    monotone fun x => dite c (k₁ x) (k₂ x) := by
  split
  · apply monotone_apply _ _ hmono₁
  · apply monotone_apply _ _ hmono₂

end monotone_lemmas

section pprod_order

open PartialOrder

variable {α : Sort u}
variable {β : Sort v}
variable {γ : Sort w}

instance [PartialOrder α] [PartialOrder β] : PartialOrder (α ×' β) where
  rel a b := a.1 ⊑ b.1 ∧ a.2 ⊑ b.2
  rel_refl := ⟨rel_refl, rel_refl⟩
  rel_trans ha hb := ⟨rel_trans ha.1 hb.1, rel_trans ha.2 hb.2⟩
  rel_antisymm := fun {a} {b} ha hb => by
    cases a; cases b
    dsimp at *
    rw [rel_antisymm ha.1 hb.1, rel_antisymm ha.2 hb.2]

theorem monotone_pprod [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α} {g : γ → β} (hf : monotone f) (hg : monotone g) :
    monotone (fun x => PProd.mk (f x) (g x)) :=
  fun _ _ h12 => ⟨hf _ _ h12, hg _ _ h12⟩

theorem monotone_pprod_fst [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).1) :=
  fun _ _ h12 => (hf _ _ h12).1

theorem monotone_pprod_snd [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).2) :=
  fun _ _ h12 => (hf _ _ h12).2

def chain_pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) : α → Prop := fun a => ∃ b, c ⟨a, b⟩
def chain_pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) : β → Prop := fun b => ∃ a, c ⟨a, b⟩

theorem chain.pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
    chain (chain_pprod_fst c) := by
  intro a₁ a₂ ⟨b₁, h₁⟩ ⟨b₂, h₂⟩
  cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
  case inl h => left; exact h.1
  case inr h => right; exact h.1

theorem chain.pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
    chain (chain_pprod_snd c) := by
  intro b₁ b₂ ⟨a₁, h₁⟩ ⟨a₂, h₂⟩
  cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
  case inl h => left; exact h.2
  case inr h => right; exact h.2

instance [CCPO α] [CCPO β] : CCPO (α ×' β) where
  csup c := ⟨CCPO.csup (chain_pprod_fst c), CCPO.csup (chain_pprod_snd c)⟩
  csup_spec := by
    intro ⟨a, b⟩ c hchain
    dsimp
    constructor
    next =>
      intro ⟨h₁, h₂⟩ ⟨a', b'⟩ cab
      constructor <;> dsimp at *
      · apply rel_trans ?_ h₁
        apply le_csup hchain.pprod_fst
        exact ⟨b', cab⟩
      · apply rel_trans ?_ h₂
        apply le_csup hchain.pprod_snd
        exact ⟨a', cab⟩
    next =>
      intro h
      constructor <;> dsimp
      · apply csup_le hchain.pprod_fst
        intro a' ⟨b', hcab⟩
        apply (h _ hcab).1
      · apply csup_le hchain.pprod_snd
        intro b' ⟨a', hcab⟩
        apply (h _ hcab).2

theorem admissible_pprod_fst {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : α → Prop)
    (hadm : admissible P) : admissible (fun (x : α ×' β) => P x.1) := by
  intro c hchain h
  apply hadm _ hchain.pprod_fst
  intro x ⟨y, hxy⟩
  apply h ⟨x, y⟩ hxy

theorem admissible_pprod_snd {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : β → Prop)
    (hadm : admissible P) : admissible (fun (x : α ×' β) => P x.2) := by
  intro c hchain h
  apply hadm _ hchain.pprod_snd
  intro y ⟨x, hxy⟩
  apply h ⟨x, y⟩ hxy

end pprod_order

section flat_order

variable {α : Sort u}

set_option linter.unusedVariables false in
/--
`FlatOrder b` wraps the type `α` with the flat partial order generated by `∀ x, b ⊑ x`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def FlatOrder {α : Sort u} (b : α) := α

variable {b : α}

/--
The flat partial order generated by `∀ x, b ⊑ x`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
inductive FlatOrder.rel : (x y : FlatOrder b) → Prop where
  | bot : rel b x
  | refl : rel x x

instance FlatOrder.instOrder : PartialOrder (FlatOrder b) where
  rel := rel
  rel_refl := .refl
  rel_trans {x y z : α} (hxy : rel x y) (hyz : rel y z) := by
    cases hxy <;> cases hyz <;> constructor
  rel_antisymm {x y : α} (hxy : rel x y) (hyz : rel y x) : x = y := by
    cases hxy <;> cases hyz <;> constructor

open Classical in
private theorem Classical.some_spec₂ {α : Sort _} {p : α → Prop} {h : ∃ a, p a} (q : α → Prop)
    (hpq : ∀ a, p a → q a) : q (choose h) := hpq _ <| choose_spec _

noncomputable def flat_csup (c : FlatOrder b → Prop) : FlatOrder b := by
  by_cases h : ∃ (x : FlatOrder b), c x ∧ x ≠ b
  · exact Classical.choose h
  · exact b

noncomputable instance FlatOrder.instCCPO : CCPO (FlatOrder b) where
  csup := flat_csup
  csup_spec := by
    intro x c hc
    unfold flat_csup
    split
    next hex =>
      apply Classical.some_spec₂ (q := (· ⊑ x ↔ (∀ y, c y → y ⊑ x)))
      clear hex
      intro z ⟨hz, hnb⟩
      constructor
      · intro h y hy
        apply PartialOrder.rel_trans _ h; clear h
        cases hc y z hy hz
        next => assumption
        next h =>
          cases h
          · contradiction
          · constructor
      · intro h
        cases h z hz
        · contradiction
        · constructor
    next hnotex =>
      constructor
      · intro h y hy; clear h
        suffices y = b by rw [this]; exact rel.bot
        rw [not_exists] at hnotex
        specialize hnotex y
        rw [not_and] at hnotex
        specialize hnotex hy
        rw [@Classical.not_not] at hnotex
        assumption
      · intro; exact rel.bot

theorem admissible_flatOrder (P : FlatOrder b → Prop) (hnot : P b) : admissible P := by
  intro c hchain h
  by_cases h' : ∃ (x : FlatOrder b), c x ∧ x ≠ b
  · simp [CCPO.csup, flat_csup, h']
    apply Classical.some_spec₂ (q := (P ·))
    intro x ⟨hcx, hneb⟩
    apply h x hcx
  · simp [CCPO.csup, flat_csup, h', hnot]

end flat_order

section mono_bind

/--
The class `MonoBind m` indicates that every `m α` has a `PartialOrder`, and that the bind operation
on `m` is monotone in both arguments with regard to that order.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class MonoBind (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] where
  bind_mono_left {a₁ a₂ : m α} {f : α → m b} (h : a₁ ⊑ a₂) : a₁ >>= f ⊑ a₂ >>= f
  bind_mono_right {a : m α} {f₁ f₂ : α → m b} (h : ∀ x, f₁ x ⊑ f₂ x) : a >>= f₁ ⊑ a >>= f₂

theorem monotone_bind
    (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] [MonoBind m]
    {α β : Type u}
    {γ : Type w} [PartialOrder γ]
    (f : γ → m α) (g : γ → α → m β)
    (hmono₁ : monotone f)
    (hmono₂ : monotone g) :
    monotone (fun (x : γ) => f x >>= g x) := by
  intro x₁ x₂ hx₁₂
  apply PartialOrder.rel_trans
  · apply MonoBind.bind_mono_left (hmono₁ _ _ hx₁₂)
  · apply MonoBind.bind_mono_right (fun y => monotone_apply y _ hmono₂ _ _ hx₁₂)

instance : PartialOrder (Option α) := inferInstanceAs (PartialOrder (FlatOrder none))
noncomputable instance : CCPO (Option α) := inferInstanceAs (CCPO (FlatOrder none))
noncomputable instance : MonoBind Option where
  bind_mono_left h := by
    cases h
    · exact FlatOrder.rel.bot
    · exact FlatOrder.rel.refl
  bind_mono_right h := by
    cases ‹Option _›
    · exact FlatOrder.rel.refl
    · exact h _

theorem admissible_eq_some (P : Prop) (y : α) :
    admissible (fun (x : Option α) => x = some y → P) := by
  apply admissible_flatOrder; simp

instance [Monad m] [inst : ∀ α, PartialOrder (m α)] : PartialOrder (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [inst : ∀ α, CCPO (m α)] : CCPO (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [∀ α, CCPO (m α)] [MonoBind m] : MonoBind (ExceptT ε m) where
  bind_mono_left h₁₂ := by
    apply MonoBind.bind_mono_left (m := m)
    exact h₁₂
  bind_mono_right h₁₂ := by
    apply MonoBind.bind_mono_right (m := m)
    intro x
    cases x
    · apply PartialOrder.rel_refl
    · apply h₁₂

end mono_bind
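As a sketch of what the `Option` instances above mean in practice (assuming we are inside the `Lean.Order` namespace with its scoped `⊑` notation): `none` plays the role of "not yet terminated", so in the flat order it approximates every other value:

```lean
-- Sketch: `PartialOrder (Option Nat)` is the flat order with bottom `none`,
-- so `FlatOrder.rel.bot` witnesses `none ⊑ x` for any `x`.
example (x : Option Nat) : (none : Option Nat) ⊑ x :=
  FlatOrder.rel.bot
```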
namespace Example

def findF (P : Nat → Bool) (rec : Nat → Option Nat) (x : Nat) : Option Nat :=
  if P x then
    some x
  else
    rec (x + 1)

noncomputable def find (P : Nat → Bool) : Nat → Option Nat := fix (findF P) <| by
  unfold findF
  apply monotone_of_monotone_apply
  intro n
  split
  · apply monotone_const
  · apply monotone_apply
    apply monotone_id

theorem find_eq : find P = findF P (find P) := fix_eq ..

theorem find_spec : ∀ n m, find P n = some m → n ≤ m ∧ P m := by
  unfold find
  refine fix_induct (motive := fun (f : Nat → Option Nat) => ∀ n m, f n = some m → n ≤ m ∧ P m) _ ?hadm ?hstep
  case hadm =>
    -- `apply admissible_pi_apply` does not work well here; it is hard to infer everything
    exact admissible_pi_apply _ (fun n => admissible_pi _ (fun m => admissible_eq_some _ m))
  case hstep =>
    intro f ih n m heq
    simp only [findF] at heq
    split at heq
    · simp_all
    · obtain ⟨ih1, ih2⟩ := ih _ _ heq
      constructor
      · exact Nat.le_trans (Nat.le_add_right _ _) ih1
      · exact ih2

end Example

end Lean.Order
@@ -1,20 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/

prelude
import Init.Notation

namespace Lean.Order
/--
`monotonicity` performs one compositional step solving `monotone` goals,
using lemmas tagged with `@[partial_fixpoint_monotone]`.

This tactic is mostly used internally by Lean in `partial_fixpoint` definitions, but
can be useful on its own for debugging or when proving new `@[partial_fixpoint_monotone]` lemmas.
-/
scoped syntax (name := monotonicity) "monotonicity" : tactic

end Lean.Order
@@ -4170,16 +4170,6 @@ def withRef [Monad m] [MonadRef m] {α} (ref : Syntax) (x : m α) : m α :=
  let ref := replaceRef ref oldRef
  MonadRef.withRef ref x

/--
If `ref? = some ref`, run `x : m α` with a modified value for the `ref` by calling `withRef`.
Otherwise, run `x` directly.
-/
@[always_inline, inline]
def withRef? [Monad m] [MonadRef m] {α} (ref? : Option Syntax) (x : m α) : m α :=
  match ref? with
  | some ref => withRef ref x
  | _ => x

/-- A monad that supports syntax quotations. Syntax quotations (in term
position) are monadic values that when executed retrieve the current "macro
scope" from the monad and apply it to every identifier they introduce

@@ -818,7 +818,7 @@ syntax inductionAlt := ppDedent(ppLine) inductionAltLHS+ " => " (hole <|> synth
After `with`, there is an optional tactic that runs on all branches, and
then a list of alternatives.
-/
syntax inductionAlts := " with" (ppSpace colGt tactic)? withPosition((colGe inductionAlt)*)
syntax inductionAlts := " with" (ppSpace colGt tactic)? withPosition((colGe inductionAlt)+)

/--
Assuming `x` is a variable in the local context with an inductive type,

@@ -11,22 +11,6 @@ import Init.Data.List.Impl
namespace Lean
namespace Json

set_option maxRecDepth 1024 in
/--
This table contains for each UTF-8 byte whether we need to escape a string that contains it.
-/
private def escapeTable : { xs : ByteArray // xs.size = 256 } :=
  ⟨ByteArray.mk #[
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
    0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
  ], by rfl⟩

private def escapeAux (acc : String) (c : Char) : String :=
  -- escape ", \, \n and \r, keep all other characters ≥ 0x20 and render characters < 0x20 with \u
  if c = '"' then -- hack to prevent emacs from regarding the rest of the file as a string: "
@@ -55,27 +39,8 @@ private def escapeAux (acc : String) (c : Char) : String :=
    let d4 := Nat.digitChar (n % 16)
    acc ++ "\\u" |>.push d1 |>.push d2 |>.push d3 |>.push d4

private def needEscape (s : String) : Bool :=
  go s 0
where
  go (s : String) (i : Nat) : Bool :=
    if h : i < s.utf8ByteSize then
      let byte := s.getUtf8Byte i h
      have h1 : byte.toNat < 256 := UInt8.toNat_lt_size byte
      have h2 : escapeTable.val.size = 256 := escapeTable.property
      if escapeTable.val.get byte.toNat (Nat.lt_of_lt_of_eq h1 h2.symm) == 0 then
        go s (i + 1)
      else
        true
    else
      false

def escape (s : String) (acc : String := "") : String :=
  -- If we don't have any characters that need to be escaped we can just append right away.
  if needEscape s then
    s.foldl escapeAux acc
  else
    acc ++ s
  s.foldl escapeAux acc

def renderString (s : String) (acc : String := "") : String :=
  let acc := acc ++ "\""

@@ -6,7 +6,6 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.Lsp.Basic
import Lean.Data.Lsp.CancelParams
import Lean.Data.Lsp.Capabilities
import Lean.Data.Lsp.Client
import Lean.Data.Lsp.Communication

@@ -6,6 +6,7 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.Json
import Lean.Data.JsonRpc

/-! Defines most of the 'Basic Structures' in the LSP specification
(https://microsoft.github.io/language-server-protocol/specifications/specification-current/),
@@ -18,6 +19,10 @@ namespace Lsp

open Json

structure CancelParams where
  id : JsonRpc.RequestID
  deriving Inhabited, BEq, ToJson, FromJson

abbrev DocumentUri := String

/-- We adopt the convention that zero-based UTF-16 positions as sent by LSP clients

@@ -1,25 +0,0 @@
/-
Copyright (c) 2020 Marc Huisinga. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.

Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.JsonRpc

/-! # Defines `Lean.Lsp.CancelParams`.

This is separate from `Lean.Data.Lsp.Basic` to reduce transitive dependencies.
-/

namespace Lean
namespace Lsp

open Json

structure CancelParams where
  id : JsonRpc.RequestID
  deriving Inhabited, BEq, ToJson, FromJson

end Lsp
end Lean
@@ -6,6 +6,7 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Init.Data.String
import Init.Data.Array
import Lean.Data.Lsp.Basic
import Lean.Data.Position
import Lean.DeclarationRange

@@ -49,8 +49,3 @@ variable {_ : BEq α} {_ : Hashable α}

@[inline] def fold {β : Type v} (f : β → α → β) (init : β) (s : PersistentHashSet α) : β :=
  Id.run $ s.foldM f init

def toList (s : PersistentHashSet α) : List α :=
  s.set.toList.map (·.1)

end PersistentHashSet

@@ -131,18 +131,14 @@ def throwCalcFailure (steps : Array CalcStepView) (expectedType result : Expr) :
    if ← isDefEqGuarded r er then
      let mut failed := false
      unless ← isDefEqGuarded lhs elhs do
        let (lhs, elhs) ← addPPExplicitToExposeDiff lhs elhs
        let (lhsTy, elhsTy) ← addPPExplicitToExposeDiff (← inferType lhs) (← inferType elhs)
        logErrorAt steps[0]!.term m!"\
          invalid 'calc' step, left-hand side is{indentD m!"{lhs} : {lhsTy}"}\n\
          but is expected to be{indentD m!"{elhs} : {elhsTy}"}"
          invalid 'calc' step, left-hand side is{indentD m!"{lhs} : {← inferType lhs}"}\n\
          but is expected to be{indentD m!"{elhs} : {← inferType elhs}"}"
        failed := true
      unless ← isDefEqGuarded rhs erhs do
        let (rhs, erhs) ← addPPExplicitToExposeDiff rhs erhs
        let (rhsTy, erhsTy) ← addPPExplicitToExposeDiff (← inferType rhs) (← inferType erhs)
        logErrorAt steps.back!.term m!"\
          invalid 'calc' step, right-hand side is{indentD m!"{rhs} : {rhsTy}"}\n\
          but is expected to be{indentD m!"{erhs} : {erhsTy}"}"
          invalid 'calc' step, right-hand side is{indentD m!"{rhs} : {← inferType rhs}"}\n\
          but is expected to be{indentD m!"{erhs} : {← inferType erhs}"}"
        failed := true
      if failed then
        throwAbortTerm

@@ -38,7 +38,6 @@ def elabCheckTactic : CommandElab := fun stx => do
  | [next] => do
    let (val, _, _) ← matchCheckGoalType stx (← next.getType)
    if !(← Meta.withReducible <| isDefEq val expTerm) then
      let (val, expTerm) ← addPPExplicitToExposeDiff val expTerm
      throwErrorAt stx
        m!"Term reduces to{indentExpr val}\nbut is expected to reduce to {indentExpr expTerm}"
  | _ => do

@@ -16,4 +16,3 @@ import Lean.Elab.Deriving.FromToJson
import Lean.Elab.Deriving.SizeOf
import Lean.Elab.Deriving.Hashable
import Lean.Elab.Deriving.Ord
import Lean.Elab.Deriving.ToExpr

@@ -1,237 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kyle Miller
-/
prelude
import Lean.Meta.Transform
import Lean.Elab.Deriving.Basic
import Lean.Elab.Deriving.Util
import Lean.ToLevel
import Lean.ToExpr

/-!
# `ToExpr` deriving handler

This module defines a `ToExpr` deriving handler for inductive types.
It supports mutually inductive types as well.

The `ToExpr` deriving handlers support universe level polymorphism, via the `Lean.ToLevel` class.
To use `ToExpr` in places where there is universe polymorphism, make sure a `[ToLevel.{u}]` instance is available,
though be aware that the `ToLevel` mechanism does not support `max` or `imax` expressions.

Implementation note: this deriving handler was initially modeled after the `Repr` deriving handler, but
1. we need to account for universe levels,
2. the `ToExpr` class has two fields rather than one, and
3. we don't handle structures specially.
-/

namespace Lean.Elab.Deriving.ToExpr

open Lean Elab Parser.Term
open Meta Command Deriving

/--
Given `args := #[e₁, e₂, …, eₙ]`, constructs the syntax `Expr.app (… (Expr.app (Expr.app f e₁) e₂) …) eₙ`.
-/
def mkAppNTerm (f : Term) (args : Array Term) : MetaM Term :=
  args.foldlM (fun a b => ``(Expr.app $a $b)) f

/-- Fixes the output of `mkInductiveApp` to explicitly reference universe levels. -/
def updateIndType (indVal : InductiveVal) (t : Term) : TermElabM Term :=
  let levels := indVal.levelParams.toArray.map mkIdent
  match t with
  | `(@$f $args*) => `(@$f.{$levels,*} $args*)
  | _ => throwError "(internal error) expecting output of `mkInductiveApp`"

/--
Creates a term that evaluates to an expression representing the inductive type.
Uses `toExpr` and `toTypeExpr` for the arguments to the type constructor.
-/
def mkToTypeExpr (indVal : InductiveVal) (argNames : Array Name) : TermElabM Term := do
  let levels ← indVal.levelParams.toArray.mapM (fun u => `(Lean.toLevel.{$(mkIdent u)}))
  forallTelescopeReducing indVal.type fun xs _ => do
    let mut args : Array Term := #[]
    for argName in argNames, x in xs do
      let a := mkIdent argName
      if ← Meta.isType x then
        args := args.push <| ← ``(toTypeExpr $a)
      else
        args := args.push <| ← ``(toExpr $a)
    mkAppNTerm (← ``(Expr.const $(quote indVal.name) [$levels,*])) args

/--
Creates the body of the `toExpr` function for the `ToExpr` instance, which is a `match` expression
that calls `toExpr` and `toTypeExpr` to assemble an expression for a given term.
For recursive inductive types, `auxFunName` refers to the `ToExpr` instance for the current type.
For mutually recursive types, we rely on the local instances set up by `mkLocalInstanceLetDecls`.
-/
def mkToExprBody (header : Header) (indVal : InductiveVal) (auxFunName : Name) (levelInsts : Array Term) :
    TermElabM Term := do
  let discrs ← mkDiscrs header indVal
  let alts ← mkAlts
  `(match $[$discrs],* with $alts:matchAlt*)
where
  /-- Create the `match` cases, one per constructor. -/
  mkAlts : TermElabM (Array (TSyntax ``matchAlt)) := do
    let levels ← levelInsts.mapM fun inst => `($(inst).toLevel)
    let mut alts := #[]
    for ctorName in indVal.ctors do
      let ctorInfo ← getConstInfoCtor ctorName
      let alt ← forallTelescopeReducing ctorInfo.type fun xs _ => do
        let mut patterns := #[]
        -- add `_` pattern for indices, before the constructor's pattern
        for _ in [:indVal.numIndices] do
          patterns := patterns.push (← `(_))
        let mut ctorArgs := #[]
        let mut rhsArgs : Array Term := #[]
        let mkArg (x : Expr) (a : Term) : TermElabM Term := do
          if (← inferType x).isAppOf indVal.name then
            `($(mkIdent auxFunName) $levelInsts* $a)
          else if ← Meta.isType x then
            ``(toTypeExpr $a)
          else
            ``(toExpr $a)
        -- add `_` pattern for inductive parameters, which are inaccessible
        for i in [:ctorInfo.numParams] do
          let a := mkIdent header.argNames[i]!
          ctorArgs := ctorArgs.push (← `(_))
          rhsArgs := rhsArgs.push <| ← mkArg xs[i]! a
        for i in [:ctorInfo.numFields] do
          let a := mkIdent (← mkFreshUserName `a)
          ctorArgs := ctorArgs.push a
          rhsArgs := rhsArgs.push <| ← mkArg xs[ctorInfo.numParams + i]! a
        patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs:term*))
        let rhs : Term ← mkAppNTerm (← ``(Expr.const $(quote ctorInfo.name) [$levels,*])) rhsArgs
        `(matchAltExpr| | $[$patterns:term],* => $rhs)
      alts := alts.push alt
    return alts

/--
For nested and mutually recursive inductive types, we define `partial` instances,
and the strategy is to have local `ToExpr` instances in scope for the body of each instance.
This way, each instance can freely use `toExpr` and `toTypeExpr` for each of the types in `ctx`.

This is a modified copy of `Lean.Elab.Deriving.mkLocalInstanceLetDecls`,
since we need to include the `toTypeExpr` field in the `letDecl`.
Note that, for simplicity, each instance gets its own definition of each others' `toTypeExpr` fields.
These are very simple fields, so avoiding the duplication is not worth it.
-/
def mkLocalInstanceLetDecls (ctx : Deriving.Context) (argNames : Array Name) (levelInsts : Array Term) :
    TermElabM (Array (TSyntax ``Parser.Term.letDecl)) := do
  let mut letDecls := #[]
  for indVal in ctx.typeInfos, auxFunName in ctx.auxFunNames do
    let currArgNames ← mkInductArgNames indVal
    let numParams := indVal.numParams
    let currIndices := currArgNames[numParams:]
    let binders ← mkImplicitBinders currIndices
    let argNamesNew := argNames[:numParams] ++ currIndices
    let indType ← mkInductiveApp indVal argNamesNew
    let instName ← mkFreshUserName `localinst
    let toTypeExpr ← mkToTypeExpr indVal argNames
    -- Recall that mutually inductive types all use the same universe levels, hence we pass the same ToLevel instances to each aux function.
    let letDecl ← `(Parser.Term.letDecl| $(mkIdent instName):ident $binders:implicitBinder* : ToExpr $indType :=
      { toExpr := $(mkIdent auxFunName) $levelInsts*,
        toTypeExpr := $toTypeExpr })
    letDecls := letDecls.push letDecl
  return letDecls

open TSyntax.Compat in
/--
Makes a `toExpr` function for the given inductive type.
The implementation of each `toExpr` function for a (mutual) inductive type is given as top-level private definitions.
These are assembled into `ToExpr` instances in `mkInstanceCmds`.
For mutual/nested inductive types, each of the types' `ToExpr` instances are provided as local instances,
to wire together the recursion (necessitating these auxiliary definitions being `partial`).
-/
def mkAuxFunction (ctx : Deriving.Context) (i : Nat) : TermElabM Command := do
  let auxFunName := ctx.auxFunNames[i]!
  let indVal := ctx.typeInfos[i]!
  let header ← mkHeader ``ToExpr 1 indVal
  /- We make the `ToLevel` instances be explicit here so that we can pass the instances from the instances to the
    aux functions. This lets us ensure universe level variables are being lined up,
    without needing to use `ident.{u₁,…,uₙ}` syntax, which could conditionally be incorrect
    depending on the ambient CommandElabM scope state.
    TODO(kmill): deriving handlers should run in a scope with no `universes` or `variables`. -/
  let (toLevelInsts, levelBinders) := Array.unzip <| ← indVal.levelParams.toArray.mapM fun u => do
    let inst := mkIdent (← mkFreshUserName `inst)
    return (inst, ← `(explicitBinderF| ($inst : ToLevel.{$(mkIdent u)})))
  let mut body ← mkToExprBody header indVal auxFunName toLevelInsts
  if ctx.usePartial then
    let letDecls ← mkLocalInstanceLetDecls ctx header.argNames toLevelInsts
    body ← mkLet letDecls body
  /- We need to alter the last binder (the one for the "target") to have explicit universe levels
    so that the `ToLevel` instance arguments can use them. -/
  let addLevels binder :=
    match binder with
    | `(bracketedBinderF| ($a : $ty)) => do `(bracketedBinderF| ($a : $(← updateIndType indVal ty)))
    | _ => throwError "(internal error) expecting inst binder"
  let binders := header.binders.pop ++ levelBinders ++ #[← addLevels header.binders.back!]
  if ctx.usePartial then
    `(private partial def $(mkIdent auxFunName):ident $binders:bracketedBinder* : Expr := $body:term)
  else
    `(private def $(mkIdent auxFunName):ident $binders:bracketedBinder* : Expr := $body:term)

/--
Creates all the auxiliary functions (using `mkAuxFunction`) for the (mutual) inductive type(s).
Wraps the resulting definition commands in `mutual ... end`.
-/
def mkAuxFunctions (ctx : Deriving.Context) : TermElabM Syntax := do
  let mut auxDefs := #[]
  for i in [:ctx.typeInfos.size] do
    auxDefs := auxDefs.push (← mkAuxFunction ctx i)
  `(mutual $auxDefs:command* end)

open TSyntax.Compat in
/--
Assuming all of the auxiliary definitions exist,
creates all the `instance` commands for the `ToExpr` instances for the (mutual) inductive type(s).
This is a modified copy of `Lean.Elab.Deriving.mkInstanceCmds` to account for `ToLevel` instances.
-/
def mkInstanceCmds (ctx : Deriving.Context) (typeNames : Array Name) :
    TermElabM (Array Command) := do
  let mut instances := #[]
  for indVal in ctx.typeInfos, auxFunName in ctx.auxFunNames do
    if typeNames.contains indVal.name then
      let argNames ← mkInductArgNames indVal
      let binders ← mkImplicitBinders argNames
      let binders := binders ++ (← mkInstImplicitBinders ``ToExpr indVal argNames)
      let (toLevelInsts, levelBinders) := Array.unzip <| ← indVal.levelParams.toArray.mapM fun u => do
        let inst := mkIdent (← mkFreshUserName `inst)
        return (inst, ← `(instBinderF| [$inst : ToLevel.{$(mkIdent u)}]))
      let binders := binders ++ levelBinders
      let indType ← updateIndType indVal (← mkInductiveApp indVal argNames)
      let toTypeExpr ← mkToTypeExpr indVal argNames
      let instCmd ← `(instance $binders:implicitBinder* : ToExpr $indType where
        toExpr := $(mkIdent auxFunName) $toLevelInsts*
        toTypeExpr := $toTypeExpr)
      instances := instances.push instCmd
  return instances

/--
Returns all the commands necessary to construct the `ToExpr` instances.
-/
def mkToExprInstanceCmds (declNames : Array Name) : TermElabM (Array Syntax) := do
  let ctx ← mkContext "toExpr" declNames[0]!
  let cmds := #[← mkAuxFunctions ctx] ++ (← mkInstanceCmds ctx declNames)
  trace[Elab.Deriving.toExpr] "\n{cmds}"
  return cmds

/--
The main entry point to the `ToExpr` deriving handler.
-/
def mkToExprInstanceHandler (declNames : Array Name) : CommandElabM Bool := do
  if (← declNames.allM isInductive) && declNames.size > 0 then
    let cmds ← withFreshMacroScope <| liftTermElabM <| mkToExprInstanceCmds declNames
    -- Enable autoimplicits, used for universe levels.
    withScope (fun scope => { scope with opts := autoImplicit.set scope.opts true }) do
      elabCommand (mkNullNode cmds)
    return true
  else
    return false

builtin_initialize
  registerDerivingHandler ``Lean.ToExpr mkToExprInstanceHandler
  registerTraceClass `Elab.Deriving.toExpr

end Lean.Elab.Deriving.ToExpr
@@ -691,9 +691,6 @@ private def addProjections (r : ElabHeaderResult) (fieldInfos : Array StructFiel
  let env ← getEnv
  let env ← ofExceptKernelException (mkProjections env r.view.declName projNames.toList r.view.isClass)
  setEnv env
  for fieldInfo in fieldInfos do
    if fieldInfo.isSubobject then
      addDeclarationRangesFromSyntax fieldInfo.declName r.view.ref fieldInfo.ref

private def registerStructure (structName : Name) (infos : Array StructFieldInfo) : TermElabM Unit := do
  let fields ← infos.filterMapM fun info => do
@@ -778,14 +775,14 @@ private def setSourceInstImplicit (type : Expr) : Expr :=
/--
Creates a projection function to a non-subobject parent.
-/
private partial def mkCoercionToCopiedParent (levelParams : List Name) (params : Array Expr) (view : StructView) (source : Expr) (parent : StructParentInfo) (parentType : Expr) : MetaM StructureParentInfo := do
private partial def mkCoercionToCopiedParent (levelParams : List Name) (params : Array Expr) (view : StructView) (source : Expr) (parentStructName : Name) (parentType : Expr) : MetaM StructureParentInfo := do
  let isProp ← Meta.isProp parentType
  let env ← getEnv
  let structName := view.declName
  let sourceFieldNames := getStructureFieldsFlattened env structName
  let binfo := if view.isClass && isClass env parent.structName then BinderInfo.instImplicit else BinderInfo.default
  let binfo := if view.isClass && isClass env parentStructName then BinderInfo.instImplicit else BinderInfo.default
  let mut declType ← instantiateMVars (← mkForallFVars params (← mkForallFVars #[source] parentType))
  if view.isClass && isClass env parent.structName then
  if view.isClass && isClass env parentStructName then
    declType := setSourceInstImplicit declType
  declType := declType.inferImplicit params.size true
  let rec copyFields (parentType : Expr) : MetaM Expr := do
@@ -826,8 +823,7 @@ private partial def mkCoercionToCopiedParent (levelParams : List Name) (params :
  -- (Instances will get instance reducibility in `Lean.Elab.Command.addParentInstances`.)
  if !binfo.isInstImplicit && !(← Meta.isProp parentType) then
    setReducibleAttribute declName
  addDeclarationRangesFromSyntax declName view.ref parent.ref
  return { structName := parent.structName, subobject := false, projFn := declName }
  return { structName := parentStructName, subobject := false, projFn := declName }

private def mkRemainingProjections (levelParams : List Name) (params : Array Expr) (view : StructView)
    (parents : Array StructParentInfo) (fieldInfos : Array StructFieldInfo) : TermElabM (Array StructureParentInfo) := do
@@ -848,7 +844,7 @@ private def mkRemainingProjections (levelParams : List Name) (params : Array Exp
        pure { structName := parent.structName, subobject := true, projFn := info.declName }
      else
        let parent_type := (← instantiateMVars parent.type).replace fun e => parentFVarToConst[e]?
        mkCoercionToCopiedParent levelParams params view source parent parent_type)
        mkCoercionToCopiedParent levelParams params view source parent.structName parent_type)
    parentInfos := parentInfos.push parentInfo
    if let some fvar := parent.fvar? then
      parentFVarToConst := parentFVarToConst.insert fvar <|

@@ -45,4 +45,3 @@ import Lean.Elab.Tactic.BVDecide
import Lean.Elab.Tactic.BoolToPropSimps
import Lean.Elab.Tactic.Classical
import Lean.Elab.Tactic.Grind
import Lean.Elab.Tactic.Monotonicity

@@ -362,9 +362,9 @@ partial def evalChoiceAux (tactics : Array Syntax) (i : Nat) : TacticM Unit :=
  | `(tactic| intro $h:term $hs:term*) => evalTactic (← `(tactic| intro $h:term; intro $hs:term*))
  | _ => throwUnsupportedSyntax
where
  introStep (ref? : Option Syntax) (n : Name) (typeStx? : Option Syntax := none) : TacticM Unit := do
  introStep (ref : Option Syntax) (n : Name) (typeStx? : Option Syntax := none) : TacticM Unit := do
    let fvarId ← liftMetaTacticAux fun mvarId => do
      let (fvarId, mvarId) ← withRef? ref? <| mvarId.intro n
      let (fvarId, mvarId) ← mvarId.intro n
      pure (fvarId, [mvarId])
    if let some typeStx := typeStx? then
      withMainContext do
@@ -374,9 +374,9 @@ where
        unless (← isDefEqGuarded type fvarType) do
          throwError "type mismatch at `intro {fvar}`{← mkHasTypeButIsExpectedMsg fvarType type}"
        liftMetaTactic fun mvarId => return [← mvarId.replaceLocalDeclDefEq fvarId type]
    if let some ref := ref? then
    if let some stx := ref then
      withMainContext do
        Term.addLocalVarInfo ref (mkFVar fvarId)
        Term.addLocalVarInfo stx (mkFVar fvarId)

@[builtin_tactic Lean.Parser.Tactic.introMatch] def evalIntroMatch : Tactic := fun stx => do
  let matchAlts := stx[1]

@@ -24,8 +24,11 @@ def classical [Monad m] [MonadEnv m] [MonadFinally m] [MonadLiftT MetaM m] (t :
  finally
    modifyEnv Meta.instanceExtension.popScope

@[builtin_tactic Lean.Parser.Tactic.classical, builtin_incremental]
def evalClassical : Tactic := fun stx =>
  classical <| Term.withNarrowedArgTacticReuse (argIdx := 1) Elab.Tactic.evalTactic stx
@[builtin_tactic Lean.Parser.Tactic.classical]
def evalClassical : Tactic := fun stx => do
  match stx with
  | `(tactic| classical $tacs:tacticSeq) =>
    classical <| Elab.Tactic.evalTactic tacs
  | _ => throwUnsupportedSyntax

end Lean.Elab.Tactic

@@ -7,10 +7,9 @@ prelude
import Lean.Elab.Tactic.Simp
import Lean.Elab.Tactic.Split
import Lean.Elab.Tactic.Conv.Basic
import Lean.Elab.Tactic.SimpTrace

namespace Lean.Elab.Tactic.Conv
open Meta Tactic TryThis
open Meta

def applySimpResult (result : Simp.Result) : TacticM Unit := do
  if result.proof?.isNone then
@@ -24,19 +23,6 @@ def applySimpResult (result : Simp.Result) : TacticM Unit := do
  let (result, _) ← dischargeWrapper.with fun d? => simp lhs ctx (simprocs := simprocs) (discharge? := d?)
  applySimpResult result

@[builtin_tactic Lean.Parser.Tactic.Conv.simpTrace] def evalSimpTrace : Tactic := fun stx => withMainContext do
  match stx with
  | `(conv| simp?%$tk $cfg:optConfig $(discharger)? $[only%$o]? $[[$args,*]]?) => do
    let stx ← `(tactic| simp%$tk $cfg:optConfig $[$discharger]? $[only%$o]? $[[$args,*]]?)
    let { ctx, simprocs, dischargeWrapper, .. } ← mkSimpContext stx (eraseLocal := false)
    let lhs ← getLhs
    let (result, stats) ← dischargeWrapper.with fun d? =>
      simp lhs ctx (simprocs := simprocs) (discharge? := d?)
    applySimpResult result
    let stx ← mkSimpCallStx stx stats.usedTheorems
    addSuggestion tk stx (origSpan? := ← getRef)
  | _ => throwUnsupportedSyntax

@[builtin_tactic Lean.Parser.Tactic.Conv.simpMatch] def evalSimpMatch : Tactic := fun _ => withMainContext do
  applySimpResult (← Split.simpMatch (← getLhs))

@@ -44,15 +30,4 @@ def applySimpResult (result : Simp.Result) : TacticM Unit := do
  let { ctx, .. } ← mkSimpContext stx (eraseLocal := false) (kind := .dsimp)
  changeLhs (← Lean.Meta.dsimp (← getLhs) ctx).1

@[builtin_tactic Lean.Parser.Tactic.Conv.dsimpTrace] def evalDSimpTrace : Tactic := fun stx => withMainContext do
  match stx with
  | `(conv| dsimp?%$tk $cfg:optConfig $[only%$o]? $[[$args,*]]?) =>
    let stx ← `(tactic| dsimp%$tk $cfg:optConfig $[only%$o]? $[[$args,*]]?)
    let { ctx, .. } ← mkSimpContext stx (eraseLocal := false) (kind := .dsimp)
    let (result, stats) ← Lean.Meta.dsimp (← getLhs) ctx
    changeLhs result
    let stx ← mkSimpCallStx stx stats.usedTheorems
    addSuggestion tk stx (origSpan? := ← getRef)
  | _ => throwUnsupportedSyntax

end Lean.Elab.Tactic.Conv

@@ -6,64 +6,22 @@ Authors: Leonardo de Moura
|
||||
prelude
|
||||
import Init.Grind.Tactics
|
||||
import Lean.Meta.Tactic.Grind
|
||||
import Lean.Elab.Command
|
||||
import Lean.Elab.Tactic.Basic
|
||||
import Lean.Elab.Tactic.Config
|
||||
|
||||
namespace Lean.Elab.Tactic
|
||||
open Meta
|
||||
|
||||
declare_config_elab elabGrindConfig Grind.Config

open Command Term in
@[builtin_command_elab Lean.Parser.Command.grindPattern]
def elabGrindPattern : CommandElab := fun stx => do
  match stx with
  | `(grind_pattern $thmName:ident => $terms,*) => do
    liftTermElabM do
      let declName ← resolveGlobalConstNoOverload thmName
      discard <| addTermInfo thmName (← mkConstWithLevelParams declName)
      let info ← getConstInfo declName
      forallTelescope info.type fun xs _ => do
        let patterns ← terms.getElems.mapM fun term => do
          let pattern ← elabTerm term none
          synthesizeSyntheticMVarsUsingDefault
          let pattern ← instantiateMVars pattern
          let pattern ← Grind.preprocessPattern pattern
          return pattern.abstract xs
        Grind.addEMatchTheorem declName xs.size patterns.toList
  | _ => throwUnsupportedSyntax

def grind (mvarId : MVarId) (config : Grind.Config) (mainDeclName : Name) (fallback : Grind.Fallback) : MetaM Unit := do
  let mvarIds ← Grind.main mvarId config mainDeclName fallback
def grind (mvarId : MVarId) (mainDeclName : Name) : MetaM Unit := do
  let mvarIds ← Grind.main mvarId mainDeclName
  unless mvarIds.isEmpty do
    throwError "`grind` failed\n{goalsToMessageData mvarIds}"

private def elabFallback (fallback? : Option Term) : TermElabM (Grind.GoalM Unit) := do
  let some fallback := fallback? | return (pure ())
  let type := mkApp (mkConst ``Grind.GoalM) (mkConst ``Unit)
  let value ← withLCtx {} {} do Term.elabTermAndSynthesize fallback type
  let auxDeclName ← if let .const declName _ := value then
    pure declName
  else
    let auxDeclName ← Term.mkAuxName `_grind_fallback
    let decl := Declaration.defnDecl {
      name := auxDeclName
      levelParams := []
      type, value, hints := .opaque, safety := .safe
    }
    addAndCompile decl
    pure auxDeclName
  unsafe evalConst (Grind.GoalM Unit) auxDeclName

@[builtin_tactic Lean.Parser.Tactic.grind] def evalApplyRfl : Tactic := fun stx => do
  match stx with
  | `(tactic| grind $config:optConfig $[on_failure $fallback?]?) =>
    let fallback ← elabFallback fallback?
  | `(tactic| grind) =>
    logWarningAt stx "The `grind` tactic is experimental and still under development. Avoid using it in production projects"
    let declName := (← Term.getDeclName?).getD `_grind
    let config ← elabGrindConfig config
    withMainContext do liftMetaFinishingTactic (grind · config declName fallback)
    withMainContext do liftMetaFinishingTactic (grind · declName)
  | _ => throwUnsupportedSyntax

end Lean.Elab.Tactic

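The `elabGrindPattern` elaborator above takes a theorem name and a comma-separated list of pattern terms, abstracts the theorem's binders out of each pattern, and registers the result as an E-matching theorem. A minimal usage sketch (the theorem and pattern below are illustrative, not taken from this diff):

```lean
-- Prove an equation, then tell `grind` which subterm shape should
-- trigger its instantiation during E-matching:
theorem app_assoc (xs ys zs : List α) :
    (xs ++ ys) ++ zs = xs ++ (ys ++ zs) :=
  List.append_assoc xs ys zs

grind_pattern app_assoc => (xs ++ ys) ++ zs
```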
@@ -258,11 +258,11 @@ private def saveAltVarsInfo (altMVarId : MVarId) (altStx : Syntax) (fvarIds : Ar
      i := i + 1

open Language in
def evalAlts (elimInfo : ElimInfo) (alts : Array Alt) (optPreTac : Syntax) (altStxs? : Option (Array Syntax))
def evalAlts (elimInfo : ElimInfo) (alts : Array Alt) (optPreTac : Syntax) (altStxs : Array Syntax)
    (initialInfo : Info)
    (numEqs : Nat := 0) (numGeneralized : Nat := 0) (toClear : Array FVarId := #[])
    (toTag : Array (Ident × FVarId) := #[]) : TacticM Unit := do
  let hasAlts := altStxs?.isSome
  let hasAlts := altStxs.size > 0
  if hasAlts then
    -- default to initial state outside of alts
    -- HACK: because this node has the same span as the original tactic,
@@ -274,7 +274,9 @@ def evalAlts (elimInfo : ElimInfo) (alts : Array Alt) (optPreTac : Syntax) (altS
where
  -- continuation in the correct info context
  goWithInfo := do
    if let some altStxs := altStxs? then
      let hasAlts := altStxs.size > 0

    if hasAlts then
      if let some tacSnap := (← readThe Term.Context).tacSnap? then
        -- incrementality: create a new promise for each alternative, resolve current snapshot to
        -- them, eventually put each of them back in `Context.tacSnap?` in `applyAltStx`
@@ -307,8 +309,7 @@ where

  -- continuation in the correct incrementality context
  goWithIncremental (tacSnaps : Array (SnapshotBundle TacticParsedSnapshot)) := do
    let hasAlts := altStxs?.isSome
    let altStxs := altStxs?.getD #[]
    let hasAlts := altStxs.size > 0
    let mut alts := alts

    -- initial sanity checks: named cases should be known, wildcards should be last
@@ -342,12 +343,12 @@ where
      let altName := getAltName altStx
      if let some i := alts.findFinIdx? (·.1 == altName) then
        -- cover named alternative
        applyAltStx tacSnaps altStxs altStxIdx altStx alts[i]
        applyAltStx tacSnaps altStxIdx altStx alts[i]
        alts := alts.eraseIdx i
      else if !alts.isEmpty && isWildcard altStx then
        -- cover all alternatives
        for alt in alts do
          applyAltStx tacSnaps altStxs altStxIdx altStx alt
          applyAltStx tacSnaps altStxIdx altStx alt
        alts := #[]
      else
        throwErrorAt altStx "unused alternative '{altName}'"
@@ -378,7 +379,7 @@ where
      altMVarIds.forM fun mvarId => admitGoal mvarId

  /-- Applies syntactic alternative to alternative goal. -/
  applyAltStx tacSnaps altStxs altStxIdx altStx alt := withRef altStx do
  applyAltStx tacSnaps altStxIdx altStx alt := withRef altStx do
    let { name := altName, info, mvarId := altMVarId } := alt
    -- also checks for unknown alternatives
    let numFields ← getAltNumFields elimInfo altName
@@ -475,7 +476,7 @@ private def generalizeVars (mvarId : MVarId) (stx : Syntax) (targets : Array Exp
/--
Given `inductionAlts` of the form
```
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)*)
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)+)
```
Return an array containing its alternatives.
-/
@@ -485,30 +486,21 @@ private def getAltsOfInductionAlts (inductionAlts : Syntax) : Array Syntax :=
/--
Given `inductionAlts` of the form
```
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)*)
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)+)
```
runs `cont (some alts)` where `alts` is an array containing all `inductionAlt`s while disabling incremental
reuse if any other syntax changed. If there's no `with` clause, then runs `cont none`.
runs `cont alts` where `alts` is an array containing all `inductionAlt`s while disabling incremental
reuse if any other syntax changed.
-/
private def withAltsOfOptInductionAlts (optInductionAlts : Syntax)
    (cont : Option (Array Syntax) → TacticM α) : TacticM α :=
    (cont : Array Syntax → TacticM α) : TacticM α :=
  Term.withNarrowedTacticReuse (stx := optInductionAlts) (fun optInductionAlts =>
    if optInductionAlts.isNone then
      -- if there are no alternatives, what to compare is irrelevant as there will be no reuse
      (mkNullNode #[], mkNullNode #[])
    else
      -- if there are no alts, then use the `with` token for `inner` for a ref for messages
      let altStxs := optInductionAlts[0].getArg 2
      let inner := if altStxs.getNumArgs > 0 then altStxs else optInductionAlts[0][0]
      -- `with` and tactic applied to all branches must be unchanged for reuse
      (mkNullNode optInductionAlts[0].getArgs[:2], inner))
    (fun alts? =>
      if optInductionAlts.isNone then -- no `with` clause
        cont none
      else if alts?.isOfKind nullKind then -- has alts
        cont (some alts?.getArgs)
      else -- has `with` clause, but no alts
        cont (some #[]))
      (mkNullNode optInductionAlts[0].getArgs[:2], optInductionAlts[0].getArg 2))
    (fun alts => cont alts.getArgs)

private def getOptPreTacOfOptInductionAlts (optInductionAlts : Syntax) : Syntax :=
  if optInductionAlts.isNone then mkNullNode else optInductionAlts[0][1]
@@ -526,7 +518,7 @@ private def expandMultiAlt? (alt : Syntax) : Option (Array Syntax) := Id.run do
/--
Given `inductionAlts` of the form
```
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)*)
syntax inductionAlts := "with " (tactic)? withPosition( (colGe inductionAlt)+)
```
Return `some inductionAlts'` if one of the alternatives have multiple LHSs, in the new `inductionAlts'`
all alternatives have a single LHS.
@@ -708,10 +700,10 @@ def evalInduction : Tactic := fun stx =>
    -- unchanged
    -- everything up to the alternatives must be unchanged for reuse
    Term.withNarrowedArgTacticReuse (stx := stx) (argIdx := 4) fun optInductionAlts => do
      withAltsOfOptInductionAlts optInductionAlts fun alts? => do
      withAltsOfOptInductionAlts optInductionAlts fun alts => do
        let optPreTac := getOptPreTacOfOptInductionAlts optInductionAlts
        mvarId.assign result.elimApp
        ElimApp.evalAlts elimInfo result.alts optPreTac alts? initInfo (numGeneralized := n) (toClear := targetFVarIds)
        ElimApp.evalAlts elimInfo result.alts optPreTac alts initInfo (numGeneralized := n) (toClear := targetFVarIds)
        appendGoals result.others.toList
where
  checkTargets (targets : Array Expr) : MetaM Unit := do

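The `evalAlts`/`withAltsOfOptInductionAlts` machinery above elaborates the `with`-alternatives of `induction` and `cases`. At the user level, the syntax it processes looks like this (a standard example, independent of the signature changes in this diff):

```lean
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```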
@@ -1,223 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.Tactic.Split
import Lean.Elab.RecAppSyntax
import Lean.Elab.Tactic.Basic
import Init.Internal.Order

namespace Lean.Meta.Monotonicity

open Lean Meta
open Lean.Order

partial def headBetaUnderLambda (f : Expr) : Expr := Id.run do
  let mut f := f.headBeta
  if f.isLambda then
    while f.bindingBody!.isHeadBetaTarget do
      f := f.updateLambda! f.bindingInfo! f.bindingDomain! f.bindingBody!.headBeta
  return f


/-- Environment extensions for monotonicity lemmas -/
builtin_initialize monotoneExt :
    SimpleScopedEnvExtension (Name × Array DiscrTree.Key) (DiscrTree Name) ←
  registerSimpleScopedEnvExtension {
    addEntry := fun dt (n, ks) => dt.insertCore ks n
    initial := {}
  }

builtin_initialize registerBuiltinAttribute {
  name := `partial_fixpoint_monotone
  descr := "monotonicity theorem"
  add := fun decl _ kind => MetaM.run' do
    let declTy := (← getConstInfo decl).type
    let (xs, _, targetTy) ← withReducible <| forallMetaTelescopeReducing declTy
    let_expr monotone α inst_α β inst_β f := targetTy |
      throwError "@[partial_fixpoint_monotone] attribute only applies to lemmas proving {.ofConstName ``monotone}"
    let f := f.headBeta
    let f ← if f.isLambda then pure f else etaExpand f
    let f := headBetaUnderLambda f
    lambdaBoundedTelescope f 1 fun _ e => do
      let key ← withReducible <| DiscrTree.mkPath e
      monotoneExt.add (decl, key) kind
}

/--
Finds tagged monotonicity theorems of the form `monotone (fun x => e)`.
-/
def findMonoThms (e : Expr) : MetaM (Array Name) := do
  (monotoneExt.getState (← getEnv)).getMatch e

private def defaultFailK (f : Expr) (monoThms : Array Name) : MetaM α :=
  let extraMsg := if monoThms.isEmpty then m!"" else
    m!"Tried to apply {.andList (monoThms.toList.map (m!"'{·}'"))}, but failed."
  throwError "Failed to prove monotonicity of:{indentExpr f}\n{extraMsg}"

private def applyConst (goal : MVarId) (name : Name) : MetaM (List MVarId) := do
  mapError (f := (m!"Could not apply {.ofConstName name}:{indentD ·}")) do
    goal.applyConst name (cfg := { synthAssignedInstances := false })

/--
Base case for solveMonoStep: Handles goals of the form
```
monotone (fun f => f.1.2 x y)
```

It's tricky to solve them compositionally from the outside in, so here we construct the proof
from the inside out.
-/
partial def solveMonoCall (α inst_α : Expr) (e : Expr) : MetaM (Option Expr) := do
  if e.isApp && !e.appArg!.hasLooseBVars then
    let some hmono ← solveMonoCall α inst_α e.appFn! | return none
    let hmonoType ← inferType hmono
    let_expr monotone _ _ _ inst _ := hmonoType | throwError "solveMonoCall {e}: unexpected type {hmonoType}"
    let some inst ← whnfUntil inst ``instOrderPi | throwError "solveMonoCall {e}: unexpected instance {inst}"
    let_expr instOrderPi γ δ inst ← inst | throwError "solveMonoCall {e}: whnfUntil failed?{indentExpr inst}"
    return ← mkAppOptM ``monotone_apply #[γ, δ, α, inst_α, inst, e.appArg!, none, hmono]

  if e.isProj then
    let some hmono ← solveMonoCall α inst_α e.projExpr! | return none
    let hmonoType ← inferType hmono
    let_expr monotone _ _ _ inst _ := hmonoType | throwError "solveMonoCall {e}: unexpected type {hmonoType}"
    let some inst ← whnfUntil inst ``instPartialOrderPProd | throwError "solveMonoCall {e}: unexpected instance {inst}"
    let_expr instPartialOrderPProd β γ inst_β inst_γ ← inst | throwError "solveMonoCall {e}: whnfUntil failed?{indentExpr inst}"
    let n := if e.projIdx! == 0 then ``monotone_pprod_fst else ``monotone_pprod_snd
    return ← mkAppOptM n #[β, γ, α, inst_β, inst_γ, inst_α, none, hmono]

  if e == .bvar 0 then
    let hmono ← mkAppOptM ``monotone_id #[α, inst_α]
    return some hmono

  return none


def solveMonoStep (failK : ∀ {α}, Expr → Array Name → MetaM α := @defaultFailK) (goal : MVarId) : MetaM (List MVarId) :=
  goal.withContext do
    trace[Elab.Tactic.monotonicity] "monotonicity at\n{goal}"
    let type ← goal.getType
    if type.isForall then
      let (_, goal) ← goal.intro1P
      return [goal]

    match_expr type with
    | monotone α inst_α β inst_β f =>
      -- Ensure f is not headed by a redex and headed by at least one lambda, and clean some
      -- redexes left by some of the lemmas we tend to apply
      let f ← instantiateMVars f
      let f := f.headBeta
      let f ← if f.isLambda then pure f else etaExpand f
      let f := headBetaUnderLambda f
      let e := f.bindingBody!

      -- No recursive calls left
      if !e.hasLooseBVars then
        return ← applyConst goal ``monotone_const

      -- NB: `e` is now an open term.

      -- Look through mdata
      if e.isMData then
        let f' := f.updateLambdaE! f.bindingDomain! e.mdataExpr!
        let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! f')
        goal.assign goal'
        return [goal'.mvarId!]

      -- Float letE to the environment
      if let .letE n t v b _nonDep := e then
        if t.hasLooseBVars || v.hasLooseBVars then
          failK f #[]
        let goal' ← withLetDecl n t v fun x => do
          let b' := f.updateLambdaE! f.bindingDomain! (b.instantiate1 x)
          let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! b')
          goal.assign (← mkLetFVars #[x] goal')
          pure goal'
        return [goal'.mvarId!]

      -- Float `letFun` to the environment.
      -- `applyConst` tends to reduce the redex
      match_expr e with
      | letFun γ _ v b =>
        if γ.hasLooseBVars || v.hasLooseBVars then
          failK f #[]
        let b' := f.updateLambdaE! f.bindingDomain! b
        let p ← mkAppOptM ``monotone_letFun #[α, β, γ, inst_α, inst_β, v, b']
        let new_goals ← mapError (f := (m!"Could not apply {p}:{indentD ·}")) do
          goal.apply p
        let [new_goal] := new_goals
          | throwError "Unexpected number of goals after {.ofConstName ``monotone_letFun}."
        let (_, new_goal) ←
          if b.isLambda then
            new_goal.intro b.bindingName!
          else
            new_goal.intro1
        return [new_goal]
      | _ => pure ()

      -- Handle lambdas, preserving the name of the binder
      if e.isLambda then
        let [new_goal] ← applyConst goal ``monotone_of_monotone_apply
          | throwError "Unexpected number of goals after {.ofConstName ``monotone_of_monotone_apply}."
        let (_, new_goal) ← new_goal.intro e.bindingName!
        return [new_goal]

      -- A recursive call directly here
      if e.isBVar then
        return ← applyConst goal ``monotone_id

      -- A recursive call
      if let some hmono ← solveMonoCall α inst_α e then
        trace[Elab.Tactic.monotonicity] "Found recursive call {e}:{indentExpr hmono}"
        unless ← goal.checkedAssign hmono do
          trace[Elab.Tactic.monotonicity] "Failed to assign {hmono} : {← inferType hmono} to goal"
          failK f #[]
        return []

      let monoThms ← withLocalDeclD `f f.bindingDomain! fun f =>
        -- The discrimination tree does not like open terms
        findMonoThms (e.instantiate1 f)
      trace[Elab.Tactic.monotonicity] "Found monoThms: {monoThms.map MessageData.ofConstName}"
      for monoThm in monoThms do
        let new_goals? ← try
          let new_goals ← applyConst goal monoThm
          trace[Elab.Tactic.monotonicity] "Succeeded with {.ofConstName monoThm}"
          pure (some new_goals)
        catch e =>
          trace[Elab.Tactic.monotonicity] "{e.toMessageData}"
          pure none
        if let some new_goals := new_goals? then
          return new_goals

      -- Split match-expressions
      if let some info := isMatcherAppCore? (← getEnv) e then
        let candidate ← id do
          let args := e.getAppArgs
          for i in [info.getFirstDiscrPos : info.getFirstDiscrPos + info.numDiscrs] do
            if args[i]!.hasLooseBVars then
              return false
          return true
        if candidate then
          -- We could be even more deliberate here and use the `lifter` lemmas
          -- for the match statements instead of the `split` tactic.
          -- For now using `splitMatch` works fine.
          return ← Split.splitMatch goal e

      failK f monoThms
    | _ =>
      throwError "Unexpected goal:{goal}"

partial def solveMono (failK : ∀ {α}, Expr → Array Name → MetaM α := defaultFailK) (goal : MVarId) : MetaM Unit := do
  let new_goals ← solveMonoStep failK goal
  new_goals.forM (solveMono failK)

open Elab Tactic in
@[builtin_tactic Lean.Order.monotonicity]
def evalMonotonicity : Tactic := fun _stx =>
  liftMetaTactic Lean.Meta.Monotonicity.solveMonoStep

end Lean.Meta.Monotonicity

builtin_initialize Lean.registerTraceClass `Elab.Tactic.monotonicity
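The file above wires `solveMonoStep` into a `monotonicity` tactic and registers the `Elab.Tactic.monotonicity` trace class; both can be observed from user code. A minimal sketch (assuming this file is built into the toolchain):

```lean
-- Show each monotonicity step taken while this module discharges
-- `Lean.Order.monotone` side goals (e.g. during `partial_fixpoint`
-- elaboration); illustrative, output depends on the definitions elaborated.
set_option trace.Elab.Tactic.monotonicity true
```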
@@ -63,7 +63,7 @@ def isNumeral? (e : Expr) : Option (Expr × Nat) :=
  if e.isConstOf ``Nat.zero then
    (mkConst ``Nat, 0)
  else if let Expr.app (Expr.app (Expr.app (Expr.const ``OfNat.ofNat ..) α ..)
      (Expr.lit (Literal.natVal n) ..) ..) .. := e.consumeMData then
      (Expr.lit (Literal.natVal n) ..) ..) .. := e then
    some (α, n)
  else
    none

@@ -680,7 +680,7 @@ def omegaTactic (cfg : OmegaConfig) : TacticM Unit := do

/-- The `omega` tactic, for resolving integer and natural linear arithmetic problems. This
`TacticM Unit` frontend with default configuration can be used as an Aesop rule, for example via
the tactic call `aesop (add 50% tactic Lean.Elab.Tactic.Omega.omegaDefault)`. -/
the tactic call `aesop (add 50% tactic Lean.Omega.omegaDefault)`. -/
def omegaDefault : TacticM Unit := omegaTactic {}

@[builtin_tactic Lean.Parser.Tactic.omega]

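As the doc comment above notes, `omega` resolves linear arithmetic goals over `Int` and `Nat`; a small self-contained example of the kind of goal it closes:

```lean
example (x y : Nat) (h₁ : x + y = 10) (h₂ : 6 ≤ x) : y ≤ 4 := by
  omega
```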
@@ -507,7 +507,7 @@ partial def rintroCore (g : MVarId) (fs : FVarSubst) (clears : Array FVarId) (a
  match pat with
  | `(rintroPat| $pat:rcasesPat) =>
    let pat := (← RCasesPatt.parse pat).typed? ref ty?
    let (v, g) ← withRef pat.ref <| g.intro (pat.name?.getD `_)
    let (v, g) ← g.intro (pat.name?.getD `_)
    rcasesCore g fs clears (.fvar v) a pat cont
  | `(rintroPat| ($(pats)* $[: $ty?']?)) =>
    let ref := if pats.size == 1 then pat.raw else .missing

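`rintroCore` above implements the recursive pattern introduction behind `rintro`; the `withRef pat.ref` change only affects error positions, not behavior. Typical usage of the tactic it powers:

```lean
example : ∀ p q : Prop, p ∧ q → q ∧ p := by
  rintro p q ⟨hp, hq⟩
  exact ⟨hq, hp⟩
```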
@@ -4,7 +4,6 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
prelude
import Lean.Parser.Syntax
import Lean.Meta.Tactic.Simp.RegisterCommand
import Lean.Elab.Command
import Lean.Elab.SetOption

@@ -17,6 +17,14 @@ namespace Lean.Meta
private def ensureType (e : Expr) : MetaM Unit := do
  discard <| getLevel e

def throwLetTypeMismatchMessage {α} (fvarId : FVarId) : MetaM α := do
  let lctx ← getLCtx
  match lctx.find? fvarId with
  | some (LocalDecl.ldecl _ _ _ t v _ _) => do
    let vType ← inferType v
    throwError "invalid let declaration, term{indentExpr v}\nhas type{indentExpr vType}\nbut is expected to have type{indentExpr t}"
  | _ => unreachable!

private def checkConstant (constName : Name) (us : List Level) : MetaM Unit := do
  let cinfo ← getConstInfo constName
  unless us.length == cinfo.levelParams.length do
@@ -169,15 +177,6 @@ where
    catch _ =>
      return (a, b)

def throwLetTypeMismatchMessage {α} (fvarId : FVarId) : MetaM α := do
  let lctx ← getLCtx
  match lctx.find? fvarId with
  | some (LocalDecl.ldecl _ _ _ t v _ _) => do
    let vType ← inferType v
    let (vType, t) ← addPPExplicitToExposeDiff vType t
    throwError "invalid let declaration, term{indentExpr v}\nhas type{indentExpr vType}\nbut is expected to have type{indentExpr t}"
  | _ => unreachable!

/--
Return error message "has type{givenType}\nbut is expected to have type{expectedType}"
-/

@@ -33,7 +33,7 @@ private def mkEqAndProof (lhs rhs : Expr) : MetaM (Expr × Expr) := do
  else
    pure (mkApp4 (mkConst ``HEq [u]) lhsType lhs rhsType rhs, mkApp2 (mkConst ``HEq.refl [u]) lhsType lhs)

partial def withNewEqs (targets targetsNew : Array Expr) (k : Array Expr → Array Expr → MetaM α) : MetaM α :=
private partial def withNewEqs (targets targetsNew : Array Expr) (k : Array Expr → Array Expr → MetaM α) : MetaM α :=
  let rec loop (i : Nat) (newEqs : Array Expr) (newRefls : Array Expr) := do
    if i < targets.size then
      let (newEqType, newRefl) ← mkEqAndProof targets[i]! targetsNew[i]!
@@ -66,31 +66,30 @@ structure GeneralizeIndicesSubgoal where
  numEqs : Nat

/--
Given a metavariable `mvarId` representing the goal
```
Ctx |- T
```
and an expression `e : I A j`, where `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, j' : J, h' : I A j' |- j == j' -> e == h' -> T
```
Remark: `(j == j' -> e == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target

If `varName?` is not none, it is used to name `h'`.
-/
def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := none) : MetaM GeneralizeIndicesSubgoal :=
Similar to `generalizeTargets` but customized for the `casesOn` motive.
Given a metavariable `mvarId` representing the
```
Ctx, h : I A j, D |- T
```
where `fvarId` is `h`s id, and the type `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, h : I A j, D, j' : J, h' : I A j' |- j == j' -> h == h' -> T
```
Remark: `(j == j' -> h == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target -/
def generalizeIndices (mvarId : MVarId) (fvarId : FVarId) : MetaM GeneralizeIndicesSubgoal :=
  mvarId.withContext do
    let lctx ← getLCtx
    let localInsts ← getLocalInstances
    mvarId.checkNotAssigned `generalizeIndices
    let type ← whnfD (← inferType e)
    let fvarDecl ← fvarId.getDecl
    let type ← whnf fvarDecl.type
    type.withApp fun f args => matchConstInduct f (fun _ => throwTacticEx `generalizeIndices mvarId "inductive type expected") fun val _ => do
      unless val.numIndices > 0 do throwTacticEx `generalizeIndices mvarId "indexed inductive type expected"
      unless args.size == val.numIndices + val.numParams do throwTacticEx `generalizeIndices mvarId "ill-formed inductive datatype"
@@ -99,10 +98,9 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
      let IAType ← inferType IA
      forallTelescopeReducing IAType fun newIndices _ => do
      let newType := mkAppN IA newIndices
      let varName ← if let some varName := varName? then pure varName else mkFreshUserName `x
      withLocalDeclD varName newType fun h' =>
      withLocalDeclD fvarDecl.userName newType fun h' =>
      withNewEqs indices newIndices fun newEqs newRefls => do
      let (newEqType, newRefl) ← mkEqAndProof e h'
      let (newEqType, newRefl) ← mkEqAndProof fvarDecl.toExpr h'
      let newRefls := newRefls.push newRefl
      withLocalDeclD `h newEqType fun newEq => do
      let newEqs := newEqs.push newEq
@@ -114,7 +112,7 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
      let auxType ← mkForallFVars newIndices auxType
      let newMVar ← mkFreshExprMVarAt lctx localInsts auxType MetavarKind.syntheticOpaque tag
      /- assign mvarId := newMVar indices h refls -/
      mvarId.assign (mkAppN (mkApp (mkAppN newMVar indices) e) newRefls)
      mvarId.assign (mkAppN (mkApp (mkAppN newMVar indices) fvarDecl.toExpr) newRefls)
      let (indicesFVarIds, newMVarId) ← newMVar.mvarId!.introNP newIndices.size
      let (fvarId, newMVarId) ← newMVarId.intro1P
      return {
@@ -124,29 +122,6 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
        numEqs := newEqs.size
      }

/--
Similar to `generalizeTargets` but customized for the `casesOn` motive.
Given a metavariable `mvarId` representing the
```
Ctx, h : I A j, D |- T
```
where `fvarId` is `h`s id, and the type `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, h : I A j, D, j' : J, h' : I A j' |- j == j' -> h == h' -> T
```
Remark: `(j == j' -> h == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target -/
def generalizeIndices (mvarId : MVarId) (fvarId : FVarId) : MetaM GeneralizeIndicesSubgoal :=
  mvarId.withContext do
    let fvarDecl ← fvarId.getDecl
    generalizeIndices' mvarId fvarDecl.toExpr fvarDecl.userName

structure CasesSubgoal extends InductionSubgoal where
  ctorName : Name


@@ -7,6 +7,7 @@ prelude
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.RevertAll
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Preprocessor
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
@@ -20,38 +21,19 @@ import Lean.Meta.Tactic.Grind.PP
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Ctor
import Lean.Meta.Tactic.Grind.Parser
import Lean.Meta.Tactic.Grind.EMatchTheorem
import Lean.Meta.Tactic.Grind.EMatch
import Lean.Meta.Tactic.Grind.Main
import Lean.Meta.Tactic.Grind.CasesMatch

namespace Lean

/-! Trace options for `grind` users -/
builtin_initialize registerTraceClass `grind
builtin_initialize registerTraceClass `grind.assert
builtin_initialize registerTraceClass `grind.eqc
builtin_initialize registerTraceClass `grind.internalize
builtin_initialize registerTraceClass `grind.ematch
builtin_initialize registerTraceClass `grind.ematch.pattern
builtin_initialize registerTraceClass `grind.ematch.pattern.search
builtin_initialize registerTraceClass `grind.ematch.instance
builtin_initialize registerTraceClass `grind.ematch.instance.assignment
builtin_initialize registerTraceClass `grind.eq
builtin_initialize registerTraceClass `grind.issues
builtin_initialize registerTraceClass `grind.simp
builtin_initialize registerTraceClass `grind.split
builtin_initialize registerTraceClass `grind.split.candidate
builtin_initialize registerTraceClass `grind.split.resolved

/-! Trace options for `grind` developers -/
builtin_initialize registerTraceClass `grind.add
builtin_initialize registerTraceClass `grind.pre
builtin_initialize registerTraceClass `grind.debug
builtin_initialize registerTraceClass `grind.debug.proofs
builtin_initialize registerTraceClass `grind.debug.congr
builtin_initialize registerTraceClass `grind.debug.proof
builtin_initialize registerTraceClass `grind.debug.proj
builtin_initialize registerTraceClass `grind.debug.parent
builtin_initialize registerTraceClass `grind.debug.final
builtin_initialize registerTraceClass `grind.debug.forallPropagator
builtin_initialize registerTraceClass `grind.debug.split
builtin_initialize registerTraceClass `grind.debug.canon
builtin_initialize registerTraceClass `grind.simp
builtin_initialize registerTraceClass `grind.congr
builtin_initialize registerTraceClass `grind.proof
builtin_initialize registerTraceClass `grind.proof.detail

end Lean

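Each `registerTraceClass` call above enables a corresponding `set_option trace.…` switch; for example, to watch which E-matching patterns `grind` selects:

```lean
set_option trace.grind.ematch.pattern true
```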
@@ -22,7 +22,7 @@ to detect when two structurally different atoms are definitionally equal.
The `grind` tactic, on the other hand, uses congruence closure. Moreover, types, type formers, proofs, and instances
are considered supporting elements and are not factored into congruence detection.

This module minimizes the number of `isDefEq` checks by comparing two terms `a` and `b` only if they are instances,
This module minimizes the number of `isDefEq` checks by comparing two terms `a` and `b` only if they instances,
types, or type formers and are the `i`-th arguments of two different `f`-applications. This approach is
sufficient for the congruence closure procedure used by the `grind` tactic.

@@ -41,40 +41,19 @@ Furthermore, `grind` will not be able to infer that `HEq (a + a) (b + b)` even
-/

structure State where
  argMap : PHashMap (Expr × Nat) (List Expr) := {}
  canon : PHashMap Expr Expr := {}
  proofCanon : PHashMap Expr Expr := {}
  argMap : PHashMap (Expr × Nat) (List Expr) := {}
  canon : PHashMap Expr Expr := {}
  deriving Inhabited

inductive CanonElemKind where
  | /--
    Type class instances are canonicalized using `TransparencyMode.instances`.
    -/
    instance
  | /--
    Types and Type formers are canonicalized using `TransparencyMode.default`.
    Remark: propositions are just visited. We do not invoke `canonElemCore` for them.
    -/
    type
  | /--
    Implicit arguments that are not types, type formers, or instances, are canonicalized
    using `TransparencyMode.reducible`
    -/
    implicit
  deriving BEq

def CanonElemKind.explain : CanonElemKind → String
  | .instance => "type class instances"
  | .type => "types (or type formers)"
  | .implicit => "implicit arguments (which are not type class instances or types)"

/--
Helper function for canonicalizing `e` occurring as the `i`th argument of an `f`-application.
`isInst` is true if `e` is an type class instance.

Thus, if diagnostics are enabled, we also re-check them using `TransparencyMode.default`. If the result is different
Recall that we use `TransparencyMode.instances` for checking whether two instances are definitionally equal or not.
Thus, if diagnostics are enabled, we also check them using `TransparencyMode.default`. If the result is different
we report to the user.
-/
def canonElemCore (f : Expr) (i : Nat) (e : Expr) (kind : CanonElemKind) : StateT State MetaM Expr := do
def canonElemCore (f : Expr) (i : Nat) (e : Expr) (isInst : Bool) : StateT State MetaM Expr := do
  let s ← get
  if let some c := s.canon.find? e then
    return c
@@ -82,23 +61,19 @@ def canonElemCore (f : Expr) (i : Nat) (e : Expr) (kind : CanonElemKind) : State
  let cs := s.argMap.find? key |>.getD []
  for c in cs do
    if (← isDefEq e c) then
      -- We used to check `c.fvarsSubset e` because it is not
      -- in general safe to replace `e` with `c` if `c` has more free variables than `e`.
      -- However, we don't revert previously canonicalized elements in the `grind` tactic.
      modify fun s => { s with canon := s.canon.insert e c }
      trace[grind.debug.canon] "found {e} ===> {c}"
      return c
    if kind != .type then
      if (← isTracingEnabledFor `grind.issues <&&> (withDefault <| isDefEq e c)) then
      if c.fvarsSubset e then
        -- It is not in general safe to replace `e` with `c` if `c` has more free variables than `e`.
        modify fun s => { s with canon := s.canon.insert e c }
        return c
      if isInst then
        if (← isDiagnosticsEnabled <&&> pure (c.fvarsSubset e) <&&> (withDefault <| isDefEq e c)) then
          -- TODO: consider storing this information in some structure that can be browsed later.
          trace[grind.issues] "the following {kind.explain} are definitionally equal with `default` transparency but not with a more restrictive transparency{indentExpr e}\nand{indentExpr c}"
  trace[grind.debug.canon] "({f}, {i}) ↦ {e}"
          trace[grind.issues] "the following `grind` static elements are definitionally equal with `default` transparency, but not with `instances` transparency{indentExpr e}\nand{indentExpr c}"
  modify fun s => { s with canon := s.canon.insert e e, argMap := s.argMap.insert key (e::cs) }
  return e

abbrev canonType (f : Expr) (i : Nat) (e : Expr) := withDefault <| canonElemCore f i e .type
abbrev canonInst (f : Expr) (i : Nat) (e : Expr) := withReducibleAndInstances <| canonElemCore f i e .instance
abbrev canonImplicit (f : Expr) (i : Nat) (e : Expr) := withReducible <| canonElemCore f i e .implicit
abbrev canonType (f : Expr) (i : Nat) (e : Expr) := withDefault <| canonElemCore f i e false
abbrev canonInst (f : Expr) (i : Nat) (e : Expr) := withReducibleAndInstances <| canonElemCore f i e true

/--
Return type for the `shouldCanon` function.
@@ -108,8 +83,6 @@ private inductive ShouldCanonResult where
|
||||
canonType
|
||||
| /- Nested instances are canonicalized. -/
|
||||
canonInst
|
||||
| /- Implicit argument that is not an instance nor a type. -/
|
||||
canonImplicit
|
||||
| /-
|
||||
Term is not a proof, type (former), nor an instance.
|
||||
Thus, it must be recursively visited by the canonizer.
|
||||
@@ -117,13 +90,6 @@ private inductive ShouldCanonResult where
|
||||
visit
|
||||
deriving Inhabited
|
||||
|
||||
instance : Repr ShouldCanonResult where
|
||||
reprPrec r _ := match r with
|
||||
| .canonType => "canonType"
|
||||
| .canonInst => "canonInst"
|
||||
| .canonImplicit => "canonImplicit"
|
||||
| .visit => "visit"
|
||||
|
||||
/--
|
||||
See comments at `ShouldCanonResult`.
|
||||
-/
|
||||
@@ -134,14 +100,7 @@ def shouldCanon (pinfos : Array ParamInfo) (i : Nat) (arg : Expr) : MetaM Should
|
||||
return .canonInst
|
||||
else if pinfo.isProp then
|
||||
return .visit
|
||||
else if pinfo.isImplicit then
|
||||
if (← isTypeFormer arg) then
|
||||
return .canonType
|
||||
else
|
||||
return .canonImplicit
|
||||
if (← isProp arg) then
|
||||
return .visit
|
||||
else if (← isTypeFormer arg) then
|
||||
if (← isTypeFormer arg) then
|
||||
return .canonType
|
||||
else
|
||||
return .visit
|
||||
@@ -150,50 +109,47 @@ unsafe def canonImpl (e : Expr) : StateT State MetaM Expr := do
|
||||
visit e |>.run' mkPtrMap
|
||||
where
|
||||
visit (e : Expr) : StateRefT (PtrMap Expr Expr) (StateT State MetaM) Expr := do
|
||||
unless e.isApp || e.isForall do return e
|
||||
-- Check whether it is cached
|
||||
if let some r := (← get).find? e then
|
||||
return r
|
||||
let e' ← match e with
|
||||
| .app .. => e.withApp fun f args => do
|
||||
if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
|
||||
match e with
|
||||
| .bvar .. => unreachable!
|
||||
-- Recall that `grind` treats `let`, `forall`, and `lambda` as atomic terms.
|
||||
| .letE .. | .forallE .. | .lam ..
|
||||
| .const .. | .lit .. | .mvar .. | .sort .. | .fvar ..
|
||||
-- Recall that the `grind` preprocessor uses the `foldProjs` preprocessing step.
|
||||
| .proj ..
|
||||
-- Recall that the `grind` preprocessor uses the `eraseIrrelevantMData` preprocessing step.
|
||||
| .mdata .. => return e
|
||||
-- We only visit applications
|
||||
| .app .. =>
|
||||
-- Check whether it is cached
|
||||
if let some r := (← get).find? e then
|
||||
return r
|
||||
e.withApp fun f args => do
|
||||
let e' ← if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
|
||||
-- We just canonize the proposition
|
||||
let prop := args[0]!
|
||||
let prop' ← visit prop
|
||||
if let some r := (← getThe State).proofCanon.find? prop' then
|
||||
pure r
|
||||
else
|
||||
let e' := if ptrEq prop prop' then e else mkAppN f (args.set! 0 prop')
|
||||
modifyThe State fun s => { s with proofCanon := s.proofCanon.insert prop' e' }
|
||||
pure e'
|
||||
pure <| if ptrEq prop prop' then mkAppN f (args.set! 0 prop') else e
|
||||
else
|
||||
let pinfos := (← getFunInfo f).paramInfo
|
||||
let mut modified := false
|
||||
let mut args := args.toVector
|
||||
for h : i in [:args.size] do
|
||||
let arg := args[i]
|
||||
trace[grind.debug.canon] "[{repr (← shouldCanon pinfos i arg)}]: {arg} : {← inferType arg}"
|
||||
let mut args := args
|
||||
for i in [:args.size] do
|
||||
let arg := args[i]!
|
||||
let arg' ← match (← shouldCanon pinfos i arg) with
|
||||
| .canonType => canonType f i arg
|
||||
| .canonInst => canonInst f i arg
|
||||
| .canonImplicit => canonImplicit f i (← visit arg)
|
||||
| .visit => visit arg
|
||||
unless ptrEq arg arg' do
|
||||
args := args.set i arg'
|
||||
args := args.set! i arg'
|
||||
modified := true
|
||||
pure <| if modified then mkAppN f args.toArray else e
|
||||
| .forallE _ d b _ =>
|
||||
-- Recall that we have `ForallProp.lean`.
|
||||
let d' ← visit d
|
||||
-- Remark: users may not want to convert `p → q` into `¬p ∨ q`
|
||||
let b' ← if b.hasLooseBVars then pure b else visit b
|
||||
pure <| e.updateForallE! d' b'
|
||||
| _ => unreachable!
|
||||
modify fun s => s.insert e e'
|
||||
return e'
|
||||
pure <| if modified then mkAppN f args else e
|
||||
modify fun s => s.insert e e'
|
||||
return e'
|
||||
|
||||
/-- Canonicalizes nested types, type formers, and instances in `e`. -/
|
||||
def canon (e : Expr) : StateT State MetaM Expr := do
|
||||
trace[grind.debug.canon] "{e}"
|
||||
/--
|
||||
Canonicalizes nested types, type formers, and instances in `e`.
|
||||
-/
|
||||
def canon (e : Expr) : StateT State MetaM Expr :=
|
||||
unsafe canonImpl e
|
||||
|
||||
end Canon
|
||||
|
||||
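The canonicalizer in this file exists because `grind`'s congruence closure compares terms syntactically. A minimal sketch of the motivating situation (the terms below are hypothetical, not taken from this diff):

```lean
-- Hypothetical sketch: two applications that differ only in their
-- (definitionally equal) instance arguments.
--
--   @HAdd.hAdd Nat Nat Nat inst₁ a b
--   @HAdd.hAdd Nat Nat Nat inst₂ a b
--
-- If `inst₁` and `inst₂` are defeq but syntactically different, a purely
-- syntactic congruence table treats the two applications as unrelated.
-- Mapping both instances to one canonical representative (what `canonInst`
-- does, checking definitional equality with `TransparencyMode.instances`)
-- lets congruence closure identify the applications.
```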
@@ -11,56 +11,52 @@ namespace Lean.Meta.Grind
The `grind` tactic includes an auxiliary `cases` tactic that is not intended for direct use by users.
This method implements it.
This tactic is automatically applied when introducing local declarations with a type tagged with `[grind_cases]`.
It is also used for "case-splitting" on terms during the search.

It differs from the user-facing Lean `cases` tactic in the following ways:

- It avoids unnecessary `revert` and `intro` operations.

- It does not introduce new local declarations for each minor premise. Instead, the `grind` tactic preprocessor is responsible for introducing them.

- It assumes that the major premise (i.e., the parameter `fvarId`) is the latest local declaration in the current goal.

- If the major premise type is an indexed family, auxiliary declarations and (heterogeneous) equalities are introduced.
  However, these equalities are not resolved using `unifyEqs`. Instead, the `grind` tactic employs union-find and
  congruence closure to process these auxiliary equalities. This approach avoids applying substitution to propositions
  that have already been internalized by `grind`.
-/
def cases (mvarId : MVarId) (e : Expr) : MetaM (List MVarId) := mvarId.withContext do
def cases (mvarId : MVarId) (fvarId : FVarId) : MetaM (List MVarId) := mvarId.withContext do
  let tag ← mvarId.getTag
  let type ← whnf (← inferType e)
  let type ← whnf (← fvarId.getType)
  let .const declName _ := type.getAppFn | throwInductiveExpected type
  let .inductInfo _ ← getConstInfo declName | throwInductiveExpected type
  let recursorInfo ← mkRecursorInfo (mkCasesOnName declName)
  let k (mvarId : MVarId) (fvarId : FVarId) (indices : Array FVarId) : MetaM (List MVarId) := do
    let indicesExpr := indices.map mkFVar
    let recursor ← mkRecursorAppPrefix mvarId `grind.cases fvarId recursorInfo indicesExpr
    let mut recursor := mkApp (mkAppN recursor indicesExpr) (mkFVar fvarId)
  let k (mvarId : MVarId) (fvarId : FVarId) (indices : Array Expr) (clearMajor : Bool) : MetaM (List MVarId) := do
    let recursor ← mkRecursorAppPrefix mvarId `grind.cases fvarId recursorInfo indices
    let mut recursor := mkApp (mkAppN recursor indices) (mkFVar fvarId)
    let mut recursorType ← inferType recursor
    let mut mvarIdsNew := #[]
    let mut idx := 1
    for _ in [:recursorInfo.numMinors] do
      let .forallE _ targetNew recursorTypeNew _ ← whnf recursorType
        | throwTacticEx `grind.cases mvarId "unexpected recursor type"
      recursorType := recursorTypeNew
      let tagNew := if recursorInfo.numMinors > 1 then Name.num tag idx else tag
      let mvar ← mkFreshExprSyntheticOpaqueMVar targetNew tagNew
      let mvar ← mkFreshExprSyntheticOpaqueMVar targetNew tag
      recursor := mkApp recursor mvar
      let mvarIdNew ← mvar.mvarId!.tryClearMany (indices.push fvarId)
      let mvarIdNew ← if clearMajor then
        mvar.mvarId!.clear fvarId
      else
        pure mvar.mvarId!
      mvarIdsNew := mvarIdsNew.push mvarIdNew
      idx := idx + 1
    mvarId.assign recursor
    return mvarIdsNew.toList
  if recursorInfo.numIndices > 0 then
    let s ← generalizeIndices' mvarId e
    let s ← generalizeIndices mvarId fvarId
    s.mvarId.withContext do
      k s.mvarId s.fvarId s.indicesFVarIds
  else if let .fvar fvarId := e then
    k mvarId fvarId #[]
      k s.mvarId s.fvarId (s.indicesFVarIds.map mkFVar) (clearMajor := false)
  else
    let mvarId ← mvarId.assert (← mkFreshUserName `x) type e
    let (fvarId, mvarId) ← mvarId.intro1
    mvarId.withContext do k mvarId fvarId #[]
    let indices ← getMajorTypeIndices mvarId `grind.cases recursorInfo type
    k mvarId fvarId indices (clearMajor := true)
where
  throwInductiveExpected {α} (type : Expr) : MetaM α := do
    throwTacticEx `grind.cases mvarId m!"(non-recursive) inductive type expected at {e}{indentExpr type}"
    throwTacticEx `grind.cases mvarId m!"(non-recursive) inductive type expected at {mkFVar fvarId}{indentExpr type}"

end Lean.Meta.Grind

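A minimal sketch of the goal transformation this auxiliary `cases` performs, simplified to the non-indexed case (the concrete goals below are illustrative, not produced by this diff):

```lean
-- Hypothetical sketch: case-splitting the major premise `h : p ∨ q` with
-- the `casesOn` recursor produces one goal per minor premise, but does NOT
-- `intro` the minor premises' hypotheses (the `grind` preprocessor does that):
--
--   goal:   h : p ∨ q ⊢ C
--   after:  ⊢ p → C    and    ⊢ q → C
--
-- with the original goal closed by assigning `Or.casesOn h ?minor₁ ?minor₂`.
```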
@@ -1,53 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Cases
import Lean.Meta.Match.MatcherApp

namespace Lean.Meta.Grind

def casesMatch (mvarId : MVarId) (e : Expr) : MetaM (List MVarId) := mvarId.withContext do
  let some app ← matchMatcherApp? e
    | throwTacticEx `grind.casesMatch mvarId m!"`match`-expression expected{indentExpr e}"
  let (motive, eqRefls) ← mkMotiveAndRefls app
  let target ← mvarId.getType
  let mut us := app.matcherLevels
  if let some i := app.uElimPos? then
    us := us.set! i (← getLevel target)
  let splitterName := (← Match.getEquationsFor app.matcherName).splitterName
  let splitterApp := mkConst splitterName us.toList
  let splitterApp := mkAppN splitterApp app.params
  let splitterApp := mkApp splitterApp motive
  let splitterApp := mkAppN splitterApp app.discrs
  let (mvars, _, _) ← forallMetaBoundedTelescope (← inferType splitterApp) app.alts.size (kind := .syntheticOpaque)
  let splitterApp := mkAppN splitterApp mvars
  let val := mkAppN splitterApp eqRefls
  mvarId.assign val
  updateTags mvars
  return mvars.toList.map (·.mvarId!)
where
  mkMotiveAndRefls (app : MatcherApp) : MetaM (Expr × Array Expr) := do
    let dummy := mkSort 0
    let aux := mkApp (mkAppN e.getAppFn app.params) dummy
    forallBoundedTelescope (← inferType aux) app.discrs.size fun xs _ => do
      withNewEqs app.discrs xs fun eqs eqRefls => do
        let type ← mvarId.getType
        let type ← mkForallFVars eqs type
        let motive ← mkLambdaFVars xs type
        return (motive, eqRefls)

  updateTags (mvars : Array Expr) : MetaM Unit := do
    let tag ← mvarId.getTag
    if mvars.size == 1 then
      mvars[0]!.mvarId!.setTag tag
    else
      let mut idx := 1
      for mvar in mvars do
        mvar.mvarId!.setTag (Name.num tag idx)
        idx := idx + 1

end Lean.Meta.Grind
@@ -1,71 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Grind.Types

namespace Lean.Meta.Grind

/-!
Combinators for manipulating `GrindTactic`s.
TODO: a proper tactic language for `grind`.
-/

def GrindTactic := Goal → GrindM (Option (List Goal))

def GrindTactic.try (x : GrindTactic) : GrindTactic := fun g => do
  let some gs ← x g | return some [g]
  return some gs

def applyToAll (x : GrindTactic) (goals : List Goal) : GrindM (List Goal) := do
  go goals []
where
  go (goals : List Goal) (acc : List Goal) : GrindM (List Goal) := do
    match goals with
    | [] => return acc.reverse
    | goal :: goals => match (← x goal) with
      | none => go goals (goal :: acc)
      | some goals' => go goals (goals' ++ acc)

partial def GrindTactic.andThen (x y : GrindTactic) : GrindTactic := fun goal => do
  let some goals ← x goal | return none
  applyToAll y goals

instance : AndThen GrindTactic where
  andThen a b := GrindTactic.andThen a (b ())

partial def GrindTactic.iterate (x : GrindTactic) : GrindTactic := fun goal => do
  go [goal] []
where
  go (todo : List Goal) (result : List Goal) : GrindM (List Goal) := do
    match todo with
    | [] => return result
    | goal :: todo =>
      if let some goalsNew ← x goal then
        go (goalsNew ++ todo) result
      else
        go todo (goal :: result)

partial def GrindTactic.orElse (x y : GrindTactic) : GrindTactic := fun goal => do
  let some goals ← x goal | y goal
  return goals

instance : OrElse GrindTactic where
  orElse a b := GrindTactic.orElse a (b ())

def toGrindTactic (f : GoalM Unit) : GrindTactic := fun goal => do
  let goal ← GoalM.run' goal f
  if goal.inconsistent then
    return some []
  else
    return some [goal]

def GrindTactic' := Goal → GrindM (List Goal)

def GrindTactic'.toGrindTactic (x : GrindTactic') : GrindTactic := fun goal => do
  let goals ← x goal
  return some goals

end Lean.Meta.Grind
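The combinators in the deleted file above support a small point-free style for assembling search strategies. A hypothetical composition, assuming illustrative primitive tactics `propagate : GoalM Unit` and `splitNext : GrindTactic` that are not defined in this diff:

```lean
-- Hypothetical usage sketch of the `GrindTactic` combinators.
-- `propagate` and `splitNext` are illustrative names only.
--
--   def step : GrindTactic := toGrindTactic propagate >> splitNext.try
--   def search : GrindTactic := step.iterate
--
-- `>>` comes from the `AndThen` instance (run `step`, then apply `splitNext.try`
-- to every resulting goal), and `iterate` keeps applying `step` until it fails
-- on every remaining goal.
```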
@@ -41,10 +41,7 @@ This is an auxiliary function performed while merging equivalence classes.
private def removeParents (root : Expr) : GoalM ParentSet := do
  let parents ← getParentsAndReset root
  for parent in parents do
    -- Recall that we may have `Expr.forallE` in `parents` because of `ForallProp.lean`
    if (← pure parent.isApp <&&> isCongrRoot parent) then
      trace_goal[grind.debug.parent] "remove: {parent}"
      modify fun s => { s with congrTable := s.congrTable.erase { e := parent } }
    modify fun s => { s with congrTable := s.congrTable.erase { e := parent } }
  return parents

/--
@@ -53,9 +50,7 @@ This is an auxiliary function performed while merging equivalence classes.
-/
private def reinsertParents (parents : ParentSet) : GoalM Unit := do
  for parent in parents do
    if (← pure parent.isApp <&&> isCongrRoot parent) then
      trace_goal[grind.debug.parent] "reinsert: {parent}"
      addCongrTable parent
    addCongrTable parent

/-- Closes the goal when `True` and `False` are in the same equivalence class. -/
private def closeGoalWithTrueEqFalse : GoalM Unit := do
@@ -74,26 +69,14 @@ private def closeGoalWithValuesEq (lhs rhs : Expr) : GoalM Unit := do
    let falseProof ← mkEqMP pEqFalse hp
    closeGoal falseProof

/--
Updates the modification time to `gmt` for the parents of `root`.
The modification time is used to decide which terms are considered during e-matching.
-/
private partial def updateMT (root : Expr) : GoalM Unit := do
  let gmt := (← get).gmt
  for parent in (← getParents root) do
    let node ← getENode parent
    if node.mt < gmt then
      setENode parent { node with mt := gmt }
      updateMT parent

private partial def addEqStep (lhs rhs proof : Expr) (isHEq : Bool) : GoalM Unit := do
  trace[grind.eq] "{lhs} {if isHEq then "≡" else "="} {rhs}"
  let lhsNode ← getENode lhs
  let rhsNode ← getENode rhs
  if isSameExpr lhsNode.root rhsNode.root then
    -- `lhs` and `rhs` are already in the same equivalence class.
    trace_goal[grind.debug] "{← ppENodeRef lhs} and {← ppENodeRef rhs} are already in the same equivalence class"
    trace[grind.debug] "{← ppENodeRef lhs} and {← ppENodeRef rhs} are already in the same equivalence class"
    return ()
  trace_goal[grind.eqc] "{← if isHEq then mkHEq lhs rhs else mkEq lhs rhs}"
  let lhsRoot ← getENode lhsNode.root
  let rhsRoot ← getENode rhsNode.root
  let mut valueInconsistency := false
@@ -118,11 +101,11 @@ private partial def addEqStep (lhs rhs proof : Expr) (isHEq : Bool) : GoalM Unit
  unless (← isInconsistent) do
    if valueInconsistency then
      closeGoalWithValuesEq lhsRoot.self rhsRoot.self
  trace_goal[grind.debug] "after addEqStep, {← ppState}"
  trace[grind.debug] "after addEqStep, {← ppState}"
  checkInvariants
where
  go (lhs rhs : Expr) (lhsNode rhsNode lhsRoot rhsRoot : ENode) (flipped : Bool) : GoalM Unit := do
    trace_goal[grind.debug] "adding {← ppENodeRef lhs} ↦ {← ppENodeRef rhs}"
    trace[grind.debug] "adding {← ppENodeRef lhs} ↦ {← ppENodeRef rhs}"
    /-
    We have the following `target?/proof?`
    `lhs -> ... -> lhsNode.root`
@@ -139,7 +122,7 @@ where
    }
    let parents ← removeParents lhsRoot.self
    updateRoots lhs rhsNode.root
    trace_goal[grind.debug] "{← ppENodeRef lhs} new root {← ppENodeRef rhsNode.root}, {← ppENodeRef (← getRoot lhs)}"
    trace[grind.debug] "{← ppENodeRef lhs} new root {← ppENodeRef rhsNode.root}, {← ppENodeRef (← getRoot lhs)}"
    reinsertParents parents
    setENode lhsNode.root { (← getENode lhsRoot.self) with -- We must retrieve `lhsRoot` since it was updated.
      next := rhsRoot.next
@@ -154,8 +137,6 @@ where
    unless (← isInconsistent) do
      for parent in parents do
        propagateUp parent
    unless (← isInconsistent) do
      updateMT rhsRoot.self

  updateRoots (lhs : Expr) (rootNew : Expr) : GoalM Unit := do
    let rec loop (e : Expr) : GoalM Unit := do
@@ -186,29 +167,21 @@ where
  if (← isInconsistent) then
    resetNewEqs
    return ()
  checkSystem "grind"
  let some { lhs, rhs, proof, isHEq } := (← popNextEq?) | return ()
  addEqStep lhs rhs proof isHEq
  processTodo

/-- Adds a new equality `lhs = rhs`. It assumes `lhs` and `rhs` have already been internalized. -/
def addEq (lhs rhs proof : Expr) : GoalM Unit := do
  addEqCore lhs rhs proof false


/-- Adds a new heterogeneous equality `HEq lhs rhs`. It assumes `lhs` and `rhs` have already been internalized. -/
def addHEq (lhs rhs proof : Expr) : GoalM Unit := do
  addEqCore lhs rhs proof true

/-- Internalizes `lhs` and `rhs`, and then adds equality `lhs = rhs`. -/
def addNewEq (lhs rhs proof : Expr) (generation : Nat) : GoalM Unit := do
  internalize lhs generation
  internalize rhs generation
  addEq lhs rhs proof

/-- Adds a new `fact` justified by the given proof and using the given generation. -/
/--
Adds a new `fact` justified by the given proof and using the given generation.
-/
def add (fact : Expr) (proof : Expr) (generation := 0) : GoalM Unit := do
  trace_goal[grind.assert] "{fact}"
  trace[grind.add] "{proof} : {fact}"
  if (← isInconsistent) then return ()
  resetNewEqs
  let_expr Not p := fact
@@ -216,6 +189,7 @@ def add (fact : Expr) (proof : Expr) (generation := 0) : GoalM Unit := do
  go p true
where
  go (p : Expr) (isNeg : Bool) : GoalM Unit := do
    trace[grind.add] "isNeg: {isNeg}, {p}"
    match_expr p with
    | Eq _ lhs rhs => goEq p lhs rhs isNeg false
    | HEq _ lhs _ rhs => goEq p lhs rhs isNeg true
@@ -235,8 +209,10 @@ where
    internalize rhs generation
    addEqCore lhs rhs proof isHEq

/-- Adds a new hypothesis. -/
def addHypothesis (fvarId : FVarId) (generation := 0) : GoalM Unit := do
/--
Adds a new hypothesis.
-/
def addHyp (fvarId : FVarId) (generation := 0) : GoalM Unit := do
  add (← fvarId.getType) (mkFVar fvarId) generation

end Lean.Meta.Grind

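The merge routine above maintains a union-find structure over e-nodes: each node's `root` points toward the canonical representative of its equivalence class. A minimal, self-contained sketch of the root-chasing idea (simplified assumption: class membership is stored as a plain parent function, unlike the `ENode.root`/`ENode.next` representation used here):

```lean
-- Minimal union-find sketch (simplified; `grind` stores this information
-- in `ENode` fields rather than a parent function).
partial def findRoot (parent : Nat → Nat) (x : Nat) : Nat :=
  let p := parent x
  if p == x then x else findRoot parent p

-- Merging the classes of `a` and `b` redirects one root to the other;
-- afterwards `findRoot` agrees on both, which is exactly the invariant
-- `addEqStep` restores when it rewrites the roots of the smaller class.
```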
@@ -9,18 +9,14 @@ import Lean.Meta.Tactic.Grind.Types
namespace Lean.Meta.Grind

private partial def propagateInjEqs (eqs : Expr) (proof : Expr) : GoalM Unit := do
  -- Remark: we must use `shareCommon` before using `pushEq` and `pushHEq`.
  -- This is needed because the result type of the injection theorem may allocate
  match_expr eqs with
  | And left right =>
    propagateInjEqs left (.proj ``And 0 proof)
    propagateInjEqs right (.proj ``And 1 proof)
  | Eq _ lhs rhs =>
    pushEq (← shareCommon lhs) (← shareCommon rhs) proof
  | HEq _ lhs _ rhs =>
    pushHEq (← shareCommon lhs) (← shareCommon rhs) proof
  | Eq _ lhs rhs => pushEq lhs rhs proof
  | HEq _ lhs _ rhs => pushHEq lhs rhs proof
  | _ =>
    trace_goal[grind.issues] "unexpected injectivity theorem result type{indentExpr eqs}"
    trace[grind.issues] "unexpected injectivity theorem result type{indentExpr eqs}"
    return ()

/--

@@ -1,35 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Util
import Init.Simproc
import Lean.Meta.Tactic.Simp.Simproc

namespace Lean.Meta.Grind

/--
Returns `Grind.doNotSimp e`.
Recall that `Grind.doNotSimp` is an identity function, but the following simproc is used to prevent the term `e` from being simplified.
-/
def markAsDoNotSimp (e : Expr) : MetaM Expr :=
  mkAppM ``Grind.doNotSimp #[e]

builtin_dsimproc_decl reduceDoNotSimp (Grind.doNotSimp _) := fun e => do
  let_expr Grind.doNotSimp _ _ ← e | return .continue
  return .done e

/-- Adds `reduceDoNotSimp` to `s` -/
def addDoNotSimp (s : Simprocs) : CoreM Simprocs := do
  s.add ``reduceDoNotSimp (post := false)

/-- Erases `Grind.doNotSimp` annotations. -/
def eraseDoNotSimp (e : Expr) : CoreM Expr := do
  let pre (e : Expr) := do
    let_expr Grind.doNotSimp _ a := e | return .continue e
    return .continue a
  Core.transform e (pre := pre)

end Lean.Meta.Grind
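The three definitions in the deleted `DoNotSimp` file above are designed to compose into a round-trip. A sketch of the intended flow, for a hypothetical term `t`:

```lean
-- Sketch of the intended round-trip (hypothetical term `t`):
--
--   let t' ← markAsDoNotSimp t    -- t' = Grind.doNotSimp t
--   -- while annotated, `reduceDoNotSimp` fires on `Grind.doNotSimp _` and
--   -- returns `.done`, so `simp` leaves the annotated subterm unchanged
--   let t'' ← eraseDoNotSimp t'   -- annotation removed; t'' is t again
--
-- `Grind.doNotSimp` is an identity function, so the annotation never
-- changes the meaning of the term, only how `simp` traverses it.
```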
@@ -1,366 +0,0 @@
|
||||
/-
|
||||
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
prelude
|
||||
import Lean.Meta.Tactic.Grind.Types
|
||||
import Lean.Meta.Tactic.Grind.Intro
|
||||
import Lean.Meta.Tactic.Grind.DoNotSimp
|
||||
|
||||
namespace Lean.Meta.Grind
|
||||
namespace EMatch
|
||||
/-! This module implements a simple E-matching procedure as a backtracking search. -/
|
||||
|
||||
/-- We represent an `E-matching` problem as a list of constraints. -/
|
||||
inductive Cnstr where
|
||||
| /-- Matches pattern `pat` with term `e` -/
|
||||
«match» (pat : Expr) (e : Expr)
|
||||
| /-- Matches offset pattern `pat+k` with term `e` -/
|
||||
offset (pat : Expr) (k : Nat) (e : Expr)
|
||||
| /-- This constraint is used to encode multi-patterns. -/
|
||||
«continue» (pat : Expr)
|
||||
deriving Inhabited
|
||||
|
||||
/--
|
||||
Internal "marker" for representing unassigned elemens in the `assignment` field.
|
||||
This is a small hack to avoid one extra level of indirection by using `Option Expr` at `assignment`.
|
||||
-/
|
||||
private def unassigned : Expr := mkConst (Name.mkSimple "[grind_unassigned]")
|
||||
|
||||
private def assignmentToMessageData (assignment : Array Expr) : Array MessageData :=
|
||||
assignment.reverse.map fun e =>
|
||||
if isSameExpr e unassigned then m!"_" else m!"{e}"
|
||||
|
||||
/--
|
||||
Choice point for the backtracking search.
|
||||
The state of the procedure contains a stack of choices.
|
||||
-/
|
||||
structure Choice where
|
||||
/-- Contraints to be processed. -/
|
||||
cnstrs : List Cnstr
|
||||
/-- Maximum term generation found so far. -/
|
||||
gen : Nat
|
||||
/-- Partial assignment so far. Recall that pattern variables are encoded as de-Bruijn variables. -/
|
||||
assignment : Array Expr
|
||||
deriving Inhabited
|
||||
|
||||
/-- Context for the E-matching monad. -/
|
||||
structure Context where
|
||||
/-- `useMT` is `true` if we are using the mod-time optimization. It is always set to false for new `EMatchTheorem`s. -/
|
||||
useMT : Bool := true
|
||||
/-- `EMatchTheorem` being processed. -/
|
||||
thm : EMatchTheorem := default
|
||||
/-- Initial application used to start E-matching -/
|
||||
initApp : Expr := default
|
||||
deriving Inhabited
|
||||
|
||||
/-- State for the E-matching monad -/
|
||||
structure State where
|
||||
/-- Choices that still have to be processed. -/
|
||||
choiceStack : List Choice := []
|
||||
deriving Inhabited
|
||||
|
||||
abbrev M := ReaderT Context $ StateRefT State GoalM
|
||||
|
||||
def M.run' (x : M α) : GoalM α :=
|
||||
x {} |>.run' {}
|
||||
|
||||
@[inline] private abbrev withInitApp (e : Expr) (x : M α) : M α :=
|
||||
withReader (fun ctx => { ctx with initApp := e }) x
|
||||
|
||||
/--
|
||||
Assigns `bidx := e` in `c`. If `bidx` is already assigned in `c`, we check whether
|
||||
`e` and `c.assignment[bidx]` are in the same equivalence class.
|
||||
This function assumes `bidx < c.assignment.size`.
|
||||
Recall that we initialize the assignment array with the number of theorem parameters.
|
||||
-/
|
||||
private def assign? (c : Choice) (bidx : Nat) (e : Expr) : OptionT GoalM Choice := do
|
||||
if h : bidx < c.assignment.size then
|
||||
let v := c.assignment[bidx]
|
||||
if isSameExpr v unassigned then
|
||||
return { c with assignment := c.assignment.set bidx e }
|
||||
else
|
||||
guard (← isEqv v e)
|
||||
return c
|
||||
else
|
||||
-- `Choice` was not properly initialized
|
||||
unreachable!
|
||||
|
||||
/--
|
||||
Returns `true` if the function `pFn` of a pattern is equivalent to the function `eFn`.
|
||||
Recall that we ignore universe levels in patterns.
|
||||
-/
|
||||
private def eqvFunctions (pFn eFn : Expr) : Bool :=
|
||||
(pFn.isFVar && pFn == eFn)
|
||||
|| (pFn.isConst && eFn.isConstOf pFn.constName!)
|
||||
|
||||
/-- Matches a pattern argument. See `matchArgs?`. -/
|
||||
private def matchArg? (c : Choice) (pArg : Expr) (eArg : Expr) : OptionT GoalM Choice := do
|
||||
if isPatternDontCare pArg then
|
||||
return c
|
||||
else if pArg.isBVar then
|
||||
assign? c pArg.bvarIdx! eArg
|
||||
else if let some pArg := groundPattern? pArg then
|
||||
guard (← isEqv pArg eArg)
|
||||
return c
|
||||
else if let some (pArg, k) := isOffsetPattern? pArg then
|
||||
assert! Option.isNone <| isOffsetPattern? pArg
|
||||
assert! !isPatternDontCare pArg
|
||||
return { c with cnstrs := .offset pArg k eArg :: c.cnstrs }
|
||||
else
|
||||
return { c with cnstrs := .match pArg eArg :: c.cnstrs }
|
||||
|
||||
private def Choice.updateGen (c : Choice) (gen : Nat) : Choice :=
|
||||
{ c with gen := Nat.max gen c.gen }
|
||||
|
||||
private def pushChoice (c : Choice) : M Unit :=
|
||||
modify fun s => { s with choiceStack := c :: s.choiceStack }
|
||||
|
||||
/--
|
||||
Matches arguments of pattern `p` with term `e`. Returns `some` if successful,
|
||||
and `none` otherwise. It may update `c`s assignment and list of contraints to be
|
||||
processed.
|
||||
-/
|
||||
private partial def matchArgs? (c : Choice) (p : Expr) (e : Expr) : OptionT GoalM Choice := do
|
||||
if !p.isApp then return c -- Done
|
||||
let pArg := p.appArg!
|
||||
let eArg := e.appArg!
|
||||
let c ← matchArg? c pArg eArg
|
||||
matchArgs? c p.appFn! e.appFn!
|
||||
|
||||
/--
|
||||
Matches pattern `p` with term `e` with respect to choice `c`.
|
||||
We traverse the equivalence class of `e` looking for applications compatible with `p`.
|
||||
For each candidate application, we match the arguments and may update `c`s assignments and contraints.
|
||||
We add the updated choices to the choice stack.
|
||||
-/
|
||||
private partial def processMatch (c : Choice) (p : Expr) (e : Expr) : M Unit := do
|
||||
let maxGeneration ← getMaxGeneration
|
||||
let pFn := p.getAppFn
|
||||
let numArgs := p.getAppNumArgs
|
||||
let mut curr := e
|
||||
repeat
|
||||
let n ← getENode curr
|
||||
-- Remark: we use `<` because the instance generation is the maximum term generation + 1
|
||||
if n.generation < maxGeneration
|
||||
-- uses heterogeneous equality or is the root of its congruence class
|
||||
&& (n.heqProofs || n.isCongrRoot)
|
||||
&& eqvFunctions pFn curr.getAppFn
|
||||
&& curr.getAppNumArgs == numArgs then
|
||||
if let some c ← matchArgs? c p curr |>.run then
|
||||
pushChoice (c.updateGen n.generation)
|
||||
curr ← getNext curr
|
||||
if isSameExpr curr e then break
|
||||
|
||||
/--
Matches offset pattern `pArg+k` with term `e` with respect to choice `c`.
-/
private partial def processOffset (c : Choice) (pArg : Expr) (k : Nat) (e : Expr) : M Unit := do
  let maxGeneration ← getMaxGeneration
  let mut curr := e
  repeat
    let n ← getENode curr
    if n.generation < maxGeneration then
      if let some (eArg, k') ← isOffset? curr |>.run then
        if k' < k then
          let c := c.updateGen n.generation
          pushChoice { c with cnstrs := .offset pArg (k - k') eArg :: c.cnstrs }
        else if k' == k then
          if let some c ← matchArg? c pArg eArg |>.run then
            pushChoice (c.updateGen n.generation)
        else if k' > k then
          let eArg' := mkNatAdd eArg (mkNatLit (k' - k))
          let eArg' ← shareCommon (← canon eArg')
          internalize eArg' (n.generation+1)
          if let some c ← matchArg? c pArg eArg' |>.run then
            pushChoice (c.updateGen n.generation)
      else if let some k' ← evalNat curr |>.run then
        if k' >= k then
          let eArg' := mkNatLit (k' - k)
          let eArg' ← shareCommon (← canon eArg')
          internalize eArg' (n.generation+1)
          if let some c ← matchArg? c pArg eArg' |>.run then
            pushChoice (c.updateGen n.generation)
    curr ← getNext curr
    if isSameExpr curr e then break

/-- Processes the `continue` constraint used to implement multi-patterns. -/
private def processContinue (c : Choice) (p : Expr) : M Unit := do
  let some apps := (← getThe Goal).appMap.find? p.toHeadIndex
    | return ()
  let maxGeneration ← getMaxGeneration
  for app in apps do
    let n ← getENode app
    if n.generation < maxGeneration
       && (n.heqProofs || n.isCongrRoot) then
      if let some c ← matchArgs? c p app |>.run then
        let gen := n.generation
        let c := { c with gen := Nat.max gen c.gen }
        modify fun s => { s with choiceStack := c :: s.choiceStack }

/--
Helper function for marking parts of a `match`-equation theorem as "do-not-simplify".
`initApp` is the `match`-expression used to instantiate the `match`-equation.
-/
private partial def annotateMatchEqnType (prop : Expr) (initApp : Expr) : M Expr := do
  if let .forallE n d b bi := prop then
    withLocalDecl n bi (← markAsDoNotSimp d) fun x => do
      mkForallFVars #[x] (← annotateMatchEqnType (b.instantiate1 x) initApp)
  else
    let_expr f@Eq α lhs rhs := prop | return prop
    -- See comment at `Grind.EqMatch`
    return mkApp4 (mkConst ``Grind.EqMatch f.constLevels!) α (← markAsDoNotSimp lhs) rhs initApp

/--
Stores a new theorem instance in the state.
Recall that new instances are internalized later, after a full round of E-matching.
-/
private def addNewInstance (origin : Origin) (proof : Expr) (generation : Nat) : M Unit := do
  let proof ← instantiateMVars proof
  if grind.debug.proofs.get (← getOptions) then
    check proof
  let mut prop ← inferType proof
  if Match.isMatchEqnTheorem (← getEnv) origin.key then
    prop ← annotateMatchEqnType prop (← read).initApp
  trace_goal[grind.ematch.instance] "{← origin.pp}: {prop}"
  addTheoremInstance proof prop (generation+1)

/--
After processing a (multi-)pattern, uses the choice assignment to instantiate the proof.
Missing parameters are synthesized using type inference and type class synthesis.
-/
private partial def instantiateTheorem (c : Choice) : M Unit := withDefault do withNewMCtxDepth do
  let thm := (← read).thm
  unless (← markTheoremInstance thm.proof c.assignment) do
    return ()
  trace_goal[grind.ematch.instance.assignment] "{← thm.origin.pp}: {assignmentToMessageData c.assignment}"
  let proof ← thm.getProofWithFreshMVarLevels
  let numParams := thm.numParams
  assert! c.assignment.size == numParams
  let (mvars, bis, _) ← forallMetaBoundedTelescope (← inferType proof) numParams
  if mvars.size != thm.numParams then
    trace_goal[grind.issues] "unexpected number of parameters at {← thm.origin.pp}"
    return ()
  -- Apply assignment
  for h : i in [:mvars.size] do
    let v := c.assignment[numParams - i - 1]!
    unless isSameExpr v unassigned do
      let mvarId := mvars[i].mvarId!
      let mvarIdType ← mvarId.getType
      let vType ← inferType v
      unless (← isDefEq mvarIdType vType <&&> mvarId.checkedAssign v) do
        trace_goal[grind.issues] "type error constructing proof for {← thm.origin.pp}\nwhen assigning metavariable {mvars[i]} with {indentExpr v}\n{← mkHasTypeButIsExpectedMsg vType mvarIdType}"
        return ()
  -- Synthesize instances
  for mvar in mvars, bi in bis do
    if bi.isInstImplicit && !(← mvar.mvarId!.isAssigned) then
      let type ← inferType mvar
      unless (← synthesizeInstance mvar type) do
        trace_goal[grind.issues] "failed to synthesize instance when instantiating {← thm.origin.pp}{indentExpr type}"
        return ()
  let proof := mkAppN proof mvars
  if (← mvars.allM (·.mvarId!.isAssigned)) then
    addNewInstance thm.origin proof c.gen
  else
    let mvars ← mvars.filterM fun mvar => return !(← mvar.mvarId!.isAssigned)
    if let some mvarBad ← mvars.findM? fun mvar => return !(← isProof mvar) then
      trace_goal[grind.issues] "failed to instantiate {← thm.origin.pp}, failed to instantiate non-propositional argument with type{indentExpr (← inferType mvarBad)}"
    let proof ← mkLambdaFVars (binderInfoForMVars := .default) mvars (← instantiateMVars proof)
    addNewInstance thm.origin proof c.gen
where
  synthesizeInstance (x type : Expr) : MetaM Bool := do
    let .some val ← trySynthInstance type | return false
    isDefEq x val

/-- Processes the choice stack until there are no more choices to process. -/
private def processChoices : M Unit := do
  let maxGeneration ← getMaxGeneration
  while !(← get).choiceStack.isEmpty do
    checkSystem "ematch"
    if (← checkMaxInstancesExceeded) then return ()
    let c ← modifyGet fun s : State => (s.choiceStack.head!, { s with choiceStack := s.choiceStack.tail! })
    if c.gen < maxGeneration then
      match c.cnstrs with
      | [] => instantiateTheorem c
      | .match p e :: cnstrs => processMatch { c with cnstrs } p e
      | .offset p k e :: cnstrs => processOffset { c with cnstrs } p k e
      | .continue p :: cnstrs => processContinue { c with cnstrs } p

private def main (p : Expr) (cnstrs : List Cnstr) : M Unit := do
  let some apps := (← getThe Goal).appMap.find? p.toHeadIndex
    | return ()
  let numParams := (← read).thm.numParams
  let assignment := mkArray numParams unassigned
  let useMT := (← read).useMT
  let gmt := (← getThe Goal).gmt
  for app in apps do
    if (← checkMaxInstancesExceeded) then return ()
    let n ← getENode app
    if (n.heqProofs || n.isCongrRoot) &&
       (!useMT || n.mt == gmt) then
      withInitApp app do
        if let some c ← matchArgs? { cnstrs, assignment, gen := n.generation } p app |>.run then
          modify fun s => { s with choiceStack := [c] }
          processChoices

def ematchTheorem (thm : EMatchTheorem) : M Unit := do
  if (← checkMaxInstancesExceeded) then return ()
  withReader (fun ctx => { ctx with thm }) do
    let ps := thm.patterns
    match ps, (← read).useMT with
    | [p], _ => main p []
    | p::ps, false => main p (ps.map (.continue ·))
    | _::_, true => tryAll ps []
    | _, _ => unreachable!
where
  /--
  When using the mod-time optimization with multi-patterns,
  we must start E-matching at each different pattern. That is,
  if we have `[p₁, p₂, p₃]`, we must execute
  - `main p₁ [.continue p₂, .continue p₃]`
  - `main p₂ [.continue p₁, .continue p₃]`
  - `main p₃ [.continue p₁, .continue p₂]`
  -/
  tryAll (ps : List Expr) (cs : List Cnstr) : M Unit := do
    match ps with
    | [] => return ()
    | p::ps =>
      main p (cs.reverse ++ (ps.map (.continue ·)))
      tryAll ps (.continue p :: cs)

def ematchTheorems (thms : PArray EMatchTheorem) : M Unit := do
  thms.forM ematchTheorem

end EMatch

open EMatch

/-- Performs one round of E-matching, and returns new instances. -/
def ematch : GoalM Unit := do
  let go (thms newThms : PArray EMatchTheorem) : EMatch.M Unit := do
    withReader (fun ctx => { ctx with useMT := true }) <| ematchTheorems thms
    withReader (fun ctx => { ctx with useMT := false }) <| ematchTheorems newThms
  if (← checkMaxInstancesExceeded <||> checkMaxEmatchExceeded) then
    return ()
  else
    go (← get).thms (← get).newThms |>.run'
    modify fun s => { s with
      thms := s.thms ++ s.newThms
      newThms := {}
      gmt := s.gmt + 1
      numEmatch := s.numEmatch + 1
    }

/-- Performs one round of E-matching, and asserts new instances. -/
def ematchAndAssert : GrindTactic := fun goal => do
  let numInstances := goal.numInstances
  let goal ← GoalM.run' goal ematch
  if goal.numInstances == numInstances then
    return none
  assertAll goal

def ematchStar : GrindTactic :=
  ematchAndAssert.iterate

end Lean.Meta.Grind

@@ -1,725 +0,0 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Util
import Init.Grind.Tactics
import Lean.HeadIndex
import Lean.PrettyPrinter
import Lean.Util.FoldConsts
import Lean.Util.CollectFVars
import Lean.Meta.Basic
import Lean.Meta.InferType
import Lean.Meta.Eqns
import Lean.Meta.Tactic.Grind.Util

namespace Lean.Meta.Grind

def mkOffsetPattern (pat : Expr) (k : Nat) : Expr :=
  mkApp2 (mkConst ``Grind.offset) pat (mkRawNatLit k)

private def detectOffsets (pat : Expr) : MetaM Expr := do
  let pre (e : Expr) := do
    if e == pat then
      -- We only consider nested offset patterns
      return .continue e
    else match e with
      | .letE .. | .lam .. | .forallE .. => return .done e
      | _ =>
        let some (e, k) ← isOffset? e
          | return .continue e
        if k == 0 then return .continue e
        return .continue <| mkOffsetPattern e k
  Core.transform pat (pre := pre)

def isOffsetPattern? (pat : Expr) : Option (Expr × Nat) := Id.run do
  let_expr Grind.offset pat k := pat | none
  let .lit (.natVal k) := k | none
  return some (pat, k)

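The wrapper and its destructor are inverses by construction; the following comment-only sketch (with a hypothetical term `p`) illustrates the round trip and why the wrapper exists:

```lean
-- Illustrative sketch only (hypothetical term `p`):
--   mkOffsetPattern p 2 = mkApp2 (mkConst ``Grind.offset) p (mkRawNatLit 2)
-- and `isOffsetPattern?` recovers the pair:
--   isOffsetPattern? (mkOffsetPattern p 2) = some (p, 2)
-- `detectOffsets` rewrites nested offsets such as the subterm `x + 2` of a
-- pattern `f (x + 2)` into this wrapper, so that E-matching can treat the
-- offset modulo the arithmetic offset theory rather than syntactically.
```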
def preprocessPattern (pat : Expr) (normalizePattern := true) : MetaM Expr := do
  let pat ← instantiateMVars pat
  let pat ← unfoldReducible pat
  let pat ← if normalizePattern then normalize pat else pure pat
  let pat ← detectOffsets pat
  let pat ← foldProjs pat
  return pat

inductive Origin where
  /-- A global declaration in the environment. -/
  | decl (declName : Name)
  /-- A local hypothesis. -/
  | fvar (fvarId : FVarId)
  /--
  A proof term provided directly to a call to `grind`, where `ref`
  is the provided `grind` argument. The `id` is a unique identifier for the call.
  -/
  | stx (id : Name) (ref : Syntax)
  /-- It is local, but we don't have a local hypothesis for it. -/
  | local (id : Name)
  deriving Inhabited, Repr, BEq

/-- A unique identifier corresponding to the origin. -/
def Origin.key : Origin → Name
  | .decl declName => declName
  | .fvar fvarId => fvarId.name
  | .stx id _ => id
  | .local id => id

def Origin.pp [Monad m] [MonadEnv m] [MonadError m] (o : Origin) : m MessageData := do
  match o with
  | .decl declName => return MessageData.ofConst (← mkConstWithLevelParams declName)
  | .fvar fvarId => return mkFVar fvarId
  | .stx _ ref => return ref
  | .local id => return id

instance : BEq Origin where
  beq a b := a.key == b.key

instance : Hashable Origin where
  hash a := hash a.key

/-- A theorem for heuristic instantiation based on E-matching. -/
structure EMatchTheorem where
  /--
  It stores universe parameter names for universe polymorphic proofs.
  Recall that it is non-empty only when we elaborate an expression provided by the user.
  When `proof` is just a constant, we can use the universe parameter names stored in the declaration.
  -/
  levelParams : Array Name
  proof : Expr
  numParams : Nat
  patterns : List Expr
  /-- Contains all symbols used in `patterns`. -/
  symbols : List HeadIndex
  origin : Origin
  deriving Inhabited

/-- Set of E-matching theorems. -/
structure EMatchTheorems where
  /-- The key is a symbol from `EMatchTheorem.symbols`. -/
  private map : PHashMap Name (List EMatchTheorem) := {}
  /-- Set of theorem ids that have been inserted using `insert`. -/
  private origins : PHashSet Origin := {}
  /-- Theorems that have been marked as erased. -/
  private erased : PHashSet Origin := {}
  deriving Inhabited

/--
Inserts `thm` with symbols `[s_1, ..., s_n]` into `s`.
We add `s_1 -> { thm with symbols := [s_2, ..., s_n] }`.
When `grind` internalizes a term containing symbol `s`, we
process all theorems `thm` associated with key `s`.
If their `thm.symbols` is empty, we say they are activated.
Otherwise, we reinsert them into `map`.
-/
def EMatchTheorems.insert (s : EMatchTheorems) (thm : EMatchTheorem) : EMatchTheorems := Id.run do
  let .const declName :: syms := thm.symbols
    | unreachable!
  let thm := { thm with symbols := syms }
  let { map, origins, erased } := s
  let origins := origins.insert thm.origin
  let erased := erased.erase thm.origin
  if let some thms := map.find? declName then
    return { map := map.insert declName (thm::thms), origins, erased }
  else
    return { map := map.insert declName [thm], origins, erased }

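A small worked example of the indexing scheme (symbol names are hypothetical):

```lean
-- Hypothetical walkthrough: suppose `thm.symbols = [f, g]` as head indices.
-- `insert` stores `{ thm with symbols := [g] }` under key `f`.
-- When `grind` internalizes a term mentioning `f`, the theorem is picked up
-- via `retrieve?`; since its remaining symbol list `[g]` is nonempty, it is
-- reinserted under key `g`. Only once a term mentioning `g` is also
-- internalized does `symbols` become `[]`, and the theorem is activated
-- for E-matching. This way a multi-symbol theorem is only considered when
-- the goal actually mentions every symbol in its patterns.
```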
/-- Returns `true` if `s` contains a theorem with the given origin. -/
def EMatchTheorems.contains (s : EMatchTheorems) (origin : Origin) : Bool :=
  s.origins.contains origin

/-- Marks the theorem with the given origin as `erased`. -/
def EMatchTheorems.erase (s : EMatchTheorems) (origin : Origin) : EMatchTheorems :=
  { s with erased := s.erased.insert origin, origins := s.origins.erase origin }

/-- Returns `true` if the theorem has been marked as erased. -/
def EMatchTheorems.isErased (s : EMatchTheorems) (origin : Origin) : Bool :=
  s.erased.contains origin

/--
Retrieves theorems from `s` associated with the given symbol. See `EMatchTheorems.insert`.
The theorems are removed from `s`.
-/
@[inline]
def EMatchTheorems.retrieve? (s : EMatchTheorems) (sym : Name) : Option (List EMatchTheorem × EMatchTheorems) :=
  if let some thms := s.map.find? sym then
    some (thms, { s with map := s.map.erase sym })
  else
    none

def EMatchTheorem.getProofWithFreshMVarLevels (thm : EMatchTheorem) : MetaM Expr := do
  if thm.proof.isConst && thm.levelParams.isEmpty then
    let declName := thm.proof.constName!
    let info ← getConstInfo declName
    if info.levelParams.isEmpty then
      return thm.proof
    else
      mkConstWithFreshMVarLevels declName
  else if thm.levelParams.isEmpty then
    return thm.proof
  else
    let us ← thm.levelParams.mapM fun _ => mkFreshLevelMVar
    return thm.proof.instantiateLevelParamsArray thm.levelParams us

private builtin_initialize ematchTheoremsExt : SimpleScopedEnvExtension EMatchTheorem EMatchTheorems ←
  registerSimpleScopedEnvExtension {
    addEntry := EMatchTheorems.insert
    initial := {}
  }

-- TODO: create attribute?
private def forbiddenDeclNames := #[``Eq, ``HEq, ``Iff, ``And, ``Or, ``Not]

private def isForbidden (declName : Name) := forbiddenDeclNames.contains declName

private def dontCare := mkConst (Name.mkSimple "[grind_dontcare]")

def mkGroundPattern (e : Expr) : Expr :=
  mkAnnotation `grind.ground_pat e

def groundPattern? (e : Expr) : Option Expr :=
  annotation? `grind.ground_pat e

private def isGroundPattern (e : Expr) : Bool :=
  groundPattern? e |>.isSome

def isPatternDontCare (e : Expr) : Bool :=
  e == dontCare

private def isAtomicPattern (e : Expr) : Bool :=
  e.isBVar || isPatternDontCare e || isGroundPattern e

partial def ppPattern (pattern : Expr) : MessageData := Id.run do
  if let some e := groundPattern? pattern then
    return m!"`[{e}]"
  else if isPatternDontCare pattern then
    return m!"?"
  else match pattern with
    | .bvar idx => return m!"#{idx}"
    | _ =>
      let mut r := m!"{pattern.getAppFn}"
      for arg in pattern.getAppArgs do
        let mut argFmt ← ppPattern arg
        if !isAtomicPattern arg then
          argFmt := MessageData.paren argFmt
        r := r ++ " " ++ argFmt
      return r

namespace NormalizePattern

structure State where
  symbols : Array HeadIndex := #[]
  symbolSet : Std.HashSet HeadIndex := {}
  bvarsFound : Std.HashSet Nat := {}

abbrev M := StateRefT State MetaM

private def saveSymbol (h : HeadIndex) : M Unit := do
  unless (← get).symbolSet.contains h do
    modify fun s => { s with symbols := s.symbols.push h, symbolSet := s.symbolSet.insert h }

private def foundBVar (idx : Nat) : M Bool :=
  return (← get).bvarsFound.contains idx

private def saveBVar (idx : Nat) : M Unit := do
  modify fun s => { s with bvarsFound := s.bvarsFound.insert idx }

private def getPatternFn? (pattern : Expr) : Option Expr :=
  if !pattern.isApp then
    none
  else match pattern.getAppFn with
    | f@(.const declName _) => if isForbidden declName then none else some f
    | f@(.fvar _) => some f
    | _ => none

/--
Returns a bit-mask `mask` such that `mask[i]` is `true` if the corresponding argument is
- a type (that is not a proposition) or type former, or
- a proof, or
- an instance implicit argument.

When `mask[i]` is `true`, we say the corresponding argument is a "support" argument.
-/
def getPatternSupportMask (f : Expr) (numArgs : Nat) : MetaM (Array Bool) := do
  forallBoundedTelescope (← inferType f) numArgs fun xs _ => do
    xs.mapM fun x => do
      if (← isProp x) then
        return false
      else if (← isTypeFormer x <||> isProof x) then
        return true
      else
        return (← x.fvarId!.getDecl).binderInfo matches .instImplicit

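A comment-only sketch of the mask computation for a hypothetical declaration:

```lean
-- Illustrative sketch (hypothetical declaration): for
--   f : (α : Type) → [inst : Inhabited α] → (a : α) → (h : a = a) → Prop
-- `getPatternSupportMask f 4` would return `#[true, true, false, true]`:
-- the type `α`, the instance `inst`, and the proof `h` are "support"
-- arguments (determined by other arguments or synthesizable), while the
-- relevant argument `a` is not, so only `a` contributes pattern variables.
```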
private partial def go (pattern : Expr) (root := false) : M Expr := do
  if root && !pattern.hasLooseBVars then
    throwError "invalid pattern, it does not have pattern variables"
  if let some (e, k) := isOffsetPattern? pattern then
    let e ← goArg e (isSupport := false)
    if e == dontCare then
      return dontCare
    else
      return mkOffsetPattern e k
  let some f := getPatternFn? pattern
    | throwError "invalid pattern, (non-forbidden) application expected{indentExpr pattern}"
  assert! f.isConst || f.isFVar
  saveSymbol f.toHeadIndex
  let mut args := pattern.getAppArgs.toVector
  let supportMask ← getPatternSupportMask f args.size
  for h : i in [:args.size] do
    let arg := args[i]
    let isSupport := supportMask[i]?.getD false
    args := args.set i (← goArg arg isSupport)
  return mkAppN f args.toArray
where
  goArg (arg : Expr) (isSupport : Bool) : M Expr := do
    if !arg.hasLooseBVars then
      if arg.hasMVar then
        pure dontCare
      else
        pure <| mkGroundPattern arg
    else match arg with
      | .bvar idx =>
        if isSupport && (← foundBVar idx) then
          pure dontCare
        else
          saveBVar idx
          pure arg
      | _ =>
        if isSupport then
          pure dontCare
        else if let some _ := getPatternFn? arg then
          go arg
        else
          pure dontCare

def main (patterns : List Expr) : MetaM (List Expr × List HeadIndex × Std.HashSet Nat) := do
  let (patterns, s) ← patterns.mapM go |>.run {}
  return (patterns, s.symbols.toList, s.bvarsFound)

def normalizePattern (e : Expr) : M Expr := do
  go e

end NormalizePattern

/--
Returns `true` if the free variables in `type` are not in `thmVars` or are in `fvarsFound`.
We use this function to check whether `type` is fully instantiated.
-/
private def checkTypeFVars (thmVars : FVarIdSet) (fvarsFound : FVarIdSet) (type : Expr) : Bool :=
  let typeFVars := (collectFVars {} type).fvarIds
  typeFVars.all fun fvarId => !thmVars.contains fvarId || fvarsFound.contains fvarId

/--
Given a type class instance type `instType`, returns `true` if the free variables in its input parameters
1- are not in `thmVars`, or
2- are in `fvarsFound`.
Remark: `fvarsFound` is a subset of `thmVars`.
-/
private def canBeSynthesized (thmVars : FVarIdSet) (fvarsFound : FVarIdSet) (instType : Expr) : MetaM Bool := do
  forallTelescopeReducing instType fun xs type => type.withApp fun classFn classArgs => do
    for x in xs do
      unless checkTypeFVars thmVars fvarsFound (← inferType x) do return false
    forallBoundedTelescope (← inferType classFn) type.getAppNumArgs fun params _ => do
      for param in params, classArg in classArgs do
        let paramType ← inferType param
        if !paramType.isAppOf ``semiOutParam && !paramType.isAppOf ``outParam then
          unless checkTypeFVars thmVars fvarsFound classArg do
            return false
      return true

/--
Auxiliary type for the `checkCoverage` function.
-/
inductive CheckCoverageResult where
  | /-- `checkCoverage` succeeded. -/
    ok
  | /--
    `checkCoverage` failed because some of the theorem parameters are missing;
    `pos` contains their positions.
    -/
    missing (pos : List Nat)

/--
After we process a set of patterns, we obtain the set of de Bruijn indices occurring in these patterns.
We say they are pattern variables. This function checks whether the set of pattern variables is sufficient for
instantiating the theorem with proof `thmProof`. The theorem has `numParams` parameters.
The missing parameters:
1- we may be able to infer them using type inference or type class synthesis, or
2- they are propositions, and may become hypotheses of the instantiated theorem.

For type class instance parameters, we must check whether the free variables in class input parameters are available.
-/
private def checkCoverage (thmProof : Expr) (numParams : Nat) (bvarsFound : Std.HashSet Nat) : MetaM CheckCoverageResult := do
  if bvarsFound.size == numParams then return .ok
  forallBoundedTelescope (← inferType thmProof) numParams fun xs _ => do
    assert! numParams == xs.size
    let patternVars := bvarsFound.toList.map fun bidx => xs[numParams - bidx - 1]!.fvarId!
    -- `xs` as a `FVarIdSet`.
    let thmVars : FVarIdSet := RBTree.ofList <| xs.toList.map (·.fvarId!)
    -- Collect free variables occurring in `e`, and insert the ones that are in `thmVars` into `fvarsFound`
    let update (fvarsFound : FVarIdSet) (e : Expr) : FVarIdSet :=
      (collectFVars {} e).fvarIds.foldl (init := fvarsFound) fun s fvarId =>
        if thmVars.contains fvarId then s.insert fvarId else s
    -- Theorem variables found so far. We initialize with the variables occurring in patterns.
    -- Remark: `fvarsFound` is a subset of `thmVars`.
    let mut fvarsFound : FVarIdSet := RBTree.ofList patternVars
    for patternVar in patternVars do
      let type ← patternVar.getType
      fvarsFound := update fvarsFound type
    if fvarsFound.size == numParams then return .ok
    -- Now, we keep traversing the remaining variables and collecting.
    -- `processed` contains the variables we have already processed.
    let mut processed : FVarIdSet := RBTree.ofList patternVars
    let mut modified := false
    repeat
      modified := false
      for x in xs do
        let fvarId := x.fvarId!
        unless processed.contains fvarId do
          let xType ← inferType x
          if fvarsFound.contains fvarId then
            -- Collect free vars in `x`'s type and mark it as processed
            fvarsFound := update fvarsFound xType
            processed := processed.insert fvarId
            modified := true
          else if (← isProp xType) then
            -- If `x` is a proposition, and all theorem variables in `x`'s type have already been found,
            -- add it to `fvarsFound` and mark it as processed.
            if checkTypeFVars thmVars fvarsFound xType then
              fvarsFound := fvarsFound.insert fvarId
              processed := processed.insert fvarId
              modified := true
          else if (← fvarId.getDecl).binderInfo matches .instImplicit then
            -- If `x` is instance implicit, check whether
            -- we have found all free variables needed to synthesize the instance
            if (← canBeSynthesized thmVars fvarsFound xType) then
              fvarsFound := fvarsFound.insert fvarId
              fvarsFound := update fvarsFound xType
              processed := processed.insert fvarId
              modified := true
      if fvarsFound.size == numParams then
        return .ok
      if !modified then
        break
    let mut pos := #[]
    for h : i in [:xs.size] do
      let fvarId := xs[i].fvarId!
      unless fvarsFound.contains fvarId do
        pos := pos.push i
    return .missing pos.toList

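A comment-only sketch of the coverage check on a hypothetical theorem:

```lean
-- Illustrative sketch (hypothetical theorem): for
--   theorem foo : ∀ (α : Type) (xs : List α), xs ++ [] = xs
-- with the single pattern `xs ++ []`, only `xs` occurs as a pattern
-- variable. `checkCoverage` still returns `.ok`: `α` occurs in the type
-- of `xs` (namely `List α`), so it is added to `fvarsFound` by `update`
-- and can be recovered by type inference once `xs` is assigned during
-- E-matching.
```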
/--
Given a theorem with proof `proof` and `numParams` parameters, returns a message
containing the parameters at positions `paramPos`.
-/
private def ppParamsAt (proof : Expr) (numParams : Nat) (paramPos : List Nat) : MetaM MessageData := do
  forallBoundedTelescope (← inferType proof) numParams fun xs _ => do
    let mut msg := m!""
    let mut first := true
    for h : i in [:xs.size] do
      if paramPos.contains i then
        let x := xs[i]
        if first then first := false else msg := msg ++ "\n"
        msg := msg ++ m!"{x} : {← inferType x}"
    addMessageContextFull msg

/--
Creates an E-matching theorem for a theorem with proof `proof`, `numParams` parameters, and the given set of patterns.
Pattern variables are represented using de Bruijn indices.
-/
def mkEMatchTheoremCore (origin : Origin) (levelParams : Array Name) (numParams : Nat) (proof : Expr) (patterns : List Expr) : MetaM EMatchTheorem := do
  let (patterns, symbols, bvarFound) ← NormalizePattern.main patterns
  trace[grind.ematch.pattern] "{MessageData.ofConst proof}: {patterns.map ppPattern}"
  if let .missing pos ← checkCoverage proof numParams bvarFound then
    let pats : MessageData := m!"{patterns.map ppPattern}"
    throwError "invalid pattern(s) for `{← origin.pp}`{indentD pats}\nthe following theorem parameters cannot be instantiated:{indentD (← ppParamsAt proof numParams pos)}"
  return {
    proof, patterns, numParams, symbols
    levelParams, origin
  }

private def getProofFor (declName : Name) : CoreM Expr := do
  let .thmInfo info ← getConstInfo declName
    | throwError "`{declName}` is not a theorem"
  let us := info.levelParams.map mkLevelParam
  return mkConst declName us

/--
Creates an E-matching theorem for `declName` with `numParams` parameters, and the given set of patterns.
Pattern variables are represented using de Bruijn indices.
-/
def mkEMatchTheorem (declName : Name) (numParams : Nat) (patterns : List Expr) : MetaM EMatchTheorem := do
  mkEMatchTheoremCore (.decl declName) #[] numParams (← getProofFor declName) patterns

/--
Given a theorem with proof `proof` and type of the form `∀ (a_1 ... a_n), lhs = rhs`,
creates an E-matching pattern for it using `addEMatchTheorem n [lhs]`.
If `normalizePattern` is true, it applies the `grind` simplification theorems and simprocs to the pattern.
-/
def mkEMatchEqTheoremCore (origin : Origin) (levelParams : Array Name) (proof : Expr) (normalizePattern : Bool) (useLhs : Bool) : MetaM EMatchTheorem := do
  let (numParams, patterns) ← forallTelescopeReducing (← inferType proof) fun xs type => do
    let (lhs, rhs) ← match_expr type with
      | Eq _ lhs rhs => pure (lhs, rhs)
      | Iff lhs rhs => pure (lhs, rhs)
      | HEq _ lhs _ rhs => pure (lhs, rhs)
      | _ => throwError "invalid E-matching equality theorem, conclusion must be an equality{indentExpr type}"
    let pat := if useLhs then lhs else rhs
    let pat ← preprocessPattern pat normalizePattern
    return (xs.size, [pat.abstract xs])
  mkEMatchTheoremCore origin levelParams numParams proof patterns

/--
Given a theorem named `declName` with type of the form `∀ (a_1 ... a_n), lhs = rhs`,
creates an E-matching pattern for it using `addEMatchTheorem n [lhs]`.

If `normalizePattern` is true, it applies the `grind` simplification theorems and simprocs to the
pattern.
-/
def mkEMatchEqTheorem (declName : Name) (normalizePattern := true) (useLhs : Bool := true) : MetaM EMatchTheorem := do
  mkEMatchEqTheoremCore (.decl declName) #[] (← getProofFor declName) normalizePattern useLhs

/--
Adds an E-matching theorem to the environment.
See `mkEMatchTheorem`.
-/
def addEMatchTheorem (declName : Name) (numParams : Nat) (patterns : List Expr) : MetaM Unit := do
  ematchTheoremsExt.add (← mkEMatchTheorem declName numParams patterns)

/--
Adds an E-matching equality theorem to the environment.
See `mkEMatchEqTheorem`.
-/
def addEMatchEqTheorem (declName : Name) : MetaM Unit := do
  ematchTheoremsExt.add (← mkEMatchEqTheorem declName)

/-- Returns the E-matching theorems registered in the environment. -/
def getEMatchTheorems : CoreM EMatchTheorems :=
  return ematchTheoremsExt.getState (← getEnv)

inductive TheoremKind where
  | eqLhs | eqRhs | eqBoth | fwd | bwd | default
  deriving Inhabited, BEq

private def TheoremKind.toAttribute : TheoremKind → String
  | .eqLhs => "[grind =]"
  | .eqRhs => "[grind =_]"
  | .eqBoth => "[grind _=_]"
  | .fwd => "[grind →]"
  | .bwd => "[grind ←]"
  | .default => "[grind]"

private def TheoremKind.explainFailure : TheoremKind → String
  | .eqLhs => "failed to find pattern in the left-hand side of the theorem's conclusion"
  | .eqRhs => "failed to find pattern in the right-hand side of the theorem's conclusion"
  | .eqBoth => unreachable! -- `eqBoth` is a macro
  | .fwd => "failed to find patterns in the antecedents of the theorem"
  | .bwd => "failed to find patterns in the theorem's conclusion"
  | .default => "failed to find patterns"

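The strings in `toAttribute` correspond to the surface syntax of the `grind` attribute; a hedged usage sketch (the theorem name and statement below are hypothetical, and assume this attribute syntax is available):

```lean
-- `[grind =]`  : derive the pattern from the LHS of the conclusion
-- `[grind =_]` : derive the pattern from the RHS of the conclusion
-- `[grind →]`  : derive patterns from the antecedents
-- `[grind ←]`  : derive patterns from the conclusion
@[grind =] theorem length_append' (as bs : List α) :
    (as ++ bs).length = as.length + bs.length := by
  simp
```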
/-- Returns the types of `xs` that are propositions. -/
private def getPropTypes (xs : Array Expr) : MetaM (Array Expr) :=
  xs.filterMapM fun x => do
    let type ← inferType x
    if (← isProp type) then return some type else return none

/-- State for the (pattern) `CollectorM` monad. -/
private structure Collector.State where
  /-- Patterns found so far. -/
  patterns : Array Expr := #[]
  done : Bool := false

private structure Collector.Context where
  proof : Expr
  xs : Array Expr

/-- Monad for collecting patterns for a theorem. -/
private abbrev CollectorM := ReaderT Collector.Context $ StateRefT Collector.State NormalizePattern.M

/-- Similar to `getPatternFn?`, but operates on expressions that do not contain loose de Bruijn variables. -/
private def isPatternFnCandidate (f : Expr) : CollectorM Bool := do
  match f with
  | .const declName _ => return !isForbidden declName
  | .fvar .. => return !(← read).xs.contains f
  | _ => return false

private def addNewPattern (p : Expr) : CollectorM Unit := do
  trace[grind.ematch.pattern.search] "found pattern: {ppPattern p}"
  let bvarsFound := (← getThe NormalizePattern.State).bvarsFound
  let done := (← checkCoverage (← read).proof (← read).xs.size bvarsFound) matches .ok
  if done then
    trace[grind.ematch.pattern.search] "found full coverage"
  modify fun s => { s with patterns := s.patterns.push p, done }

private partial def collect (e : Expr) : CollectorM Unit := do
|
||||
if (← get).done then return ()
|
||||
match e with
|
||||
| .app .. =>
|
||||
let f := e.getAppFn
|
||||
if (← isPatternFnCandidate f) then
|
||||
let saved ← getThe NormalizePattern.State
|
||||
try
|
||||
trace[grind.ematch.pattern.search] "candidate: {e}"
|
||||
let p := e.abstract (← read).xs
|
||||
unless p.hasLooseBVars do
|
||||
trace[grind.ematch.pattern.search] "skip, does not contain pattern variables"
|
||||
return ()
|
||||
let p ← NormalizePattern.normalizePattern p
|
||||
if saved.bvarsFound.size < (← getThe NormalizePattern.State).bvarsFound.size then
|
||||
addNewPattern p
|
||||
return ()
|
||||
trace[grind.ematch.pattern.search] "skip, no new variables covered"
|
||||
-- restore state and continue search
|
||||
set saved
|
||||
catch _ =>
|
||||
trace[grind.ematch.pattern.search] "skip, exception during normalization"
|
||||
-- restore state and continue search
|
||||
set saved
|
||||
let args := e.getAppArgs
|
||||
for arg in args, flag in (← NormalizePattern.getPatternSupportMask f args.size) do
|
||||
unless flag do
|
||||
collect arg
|
||||
| .forallE _ d b _ =>
|
||||
if (← pure e.isArrow <&&> isProp d <&&> isProp b) then
|
||||
collect d
|
||||
collect b
|
||||
| _ => return ()
|
||||
|
||||
private def collectPatterns? (proof : Expr) (xs : Array Expr) (searchPlaces : Array Expr) : MetaM (Option (List Expr × List HeadIndex)) := do
|
||||
let go : CollectorM (Option (List Expr)) := do
|
||||
for place in searchPlaces do
|
||||
let place ← preprocessPattern place
|
||||
collect place
|
||||
if (← get).done then
|
||||
return some ((← get).patterns.toList)
|
||||
return none
|
||||
let (some ps, s) ← go { proof, xs } |>.run' {} |>.run {}
|
||||
| return none
|
||||
return some (ps, s.symbols.toList)
|
||||
|
||||
def mkEMatchTheoremWithKind? (origin : Origin) (levelParams : Array Name) (proof : Expr) (kind : TheoremKind) : MetaM (Option EMatchTheorem) := do
|
||||
if kind == .eqLhs then
|
||||
return (← mkEMatchEqTheoremCore origin levelParams proof (normalizePattern := false) (useLhs := true))
|
||||
else if kind == .eqRhs then
|
||||
return (← mkEMatchEqTheoremCore origin levelParams proof (normalizePattern := false) (useLhs := false))
|
||||
let type ← inferType proof
|
||||
forallTelescopeReducing type fun xs type => do
|
||||
let searchPlaces ← match kind with
|
||||
| .fwd =>
|
||||
let ps ← getPropTypes xs
|
||||
if ps.isEmpty then
|
||||
throwError "invalid `grind` forward theorem, theorem `{← origin.pp}` does not have propositional hypotheses"
|
||||
pure ps
|
||||
| .bwd => pure #[type]
|
||||
| .default => pure <| #[type] ++ (← getPropTypes xs)
|
||||
| _ => unreachable!
|
||||
go xs searchPlaces
|
||||
where
|
||||
go (xs : Array Expr) (searchPlaces : Array Expr) : MetaM (Option EMatchTheorem) := do
|
||||
let some (patterns, symbols) ← collectPatterns? proof xs searchPlaces
|
||||
| return none
|
||||
let numParams := xs.size
|
||||
trace[grind.ematch.pattern] "{← origin.pp}: {patterns.map ppPattern}"
|
||||
return some {
|
||||
proof, patterns, numParams, symbols
|
||||
levelParams, origin
|
||||
}
|
||||
|
||||
private def getKind (stx : Syntax) : TheoremKind :=
|
||||
if stx[1].isNone then
|
||||
.default
|
||||
else if stx[1][0].getKind == ``Parser.Attr.grindEq then
|
||||
.eqLhs
|
||||
else if stx[1][0].getKind == ``Parser.Attr.grindFwd then
|
||||
.fwd
|
||||
else if stx[1][0].getKind == ``Parser.Attr.grindEqRhs then
|
||||
.eqRhs
|
||||
else if stx[1][0].getKind == ``Parser.Attr.grindEqBoth then
|
||||
.eqBoth
|
||||
else
|
||||
.bwd
|
||||
|
||||
private def addGrindEqAttr (declName : Name) (attrKind : AttributeKind) (thmKind : TheoremKind) (useLhs := true) : MetaM Unit := do
|
||||
if (← getConstInfo declName).isTheorem then
|
||||
ematchTheoremsExt.add (← mkEMatchEqTheorem declName (normalizePattern := true) (useLhs := useLhs)) attrKind
|
||||
else if let some eqns ← getEqnsFor? declName then
|
||||
unless useLhs do
|
||||
throwError "`{declName}` is a definition, you must only use the left-hand side for extracting patterns"
|
||||
for eqn in eqns do
|
||||
ematchTheoremsExt.add (← mkEMatchEqTheorem eqn) attrKind
|
||||
else
|
||||
throwError s!"`{thmKind.toAttribute}` attribute can only be applied to equational theorems or function definitions"
|
||||
|
||||
private def addGrindAttr (declName : Name) (attrKind : AttributeKind) (thmKind : TheoremKind) : MetaM Unit := do
|
||||
if thmKind == .eqLhs then
|
||||
addGrindEqAttr declName attrKind thmKind (useLhs := true)
|
||||
else if thmKind == .eqRhs then
|
||||
addGrindEqAttr declName attrKind thmKind (useLhs := false)
|
||||
else if thmKind == .eqBoth then
|
||||
addGrindEqAttr declName attrKind thmKind (useLhs := true)
|
||||
addGrindEqAttr declName attrKind thmKind (useLhs := false)
|
||||
else if !(← getConstInfo declName).isTheorem then
|
||||
addGrindEqAttr declName attrKind thmKind
|
||||
else
|
||||
let some thm ← mkEMatchTheoremWithKind? (.decl declName) #[] (← getProofFor declName) thmKind
|
||||
| throwError "`@{thmKind.toAttribute} theorem {declName}` {thmKind.explainFailure}, consider using different options or the `grind_pattern` command"
|
||||
ematchTheoremsExt.add thm attrKind
|
||||
|
||||
builtin_initialize
|
||||
registerBuiltinAttribute {
|
||||
name := `grind
|
||||
descr :=
|
||||
"The `[grind]` attribute is used to annotate declarations.\
|
||||
\
|
||||
When applied to an equational theorem, `[grind =]`, `[grind =_]`, or `[grind _=_]`\
|
||||
will mark the theorem for use in heuristic instantiations by the `grind` tactic,
|
||||
using respectively the left-hand side, the right-hand side, or both sides of the theorem.\
|
||||
When applied to a function, `[grind =]` automatically annotates the equational theorems associated with that function.\
|
||||
When applied to a theorem `[grind ←]` will instantiate the theorem whenever it encounters the conclusion of the theorem
|
||||
(that is, it will use the theorem for backwards reasoning).\
|
||||
When applied to a theorem `[grind →]` will instantiate the theorem whenever it encounters sufficiently many of the propositional hypotheses
|
||||
(that is, it will use the theorem for forwards reasoning).\
|
||||
\
|
||||
The attribute `[grind]` by itself will effectively try `[grind ←]` (if the conclusion is sufficient for instantiation) and then `[grind →]`.\
|
||||
\
|
||||
The `grind` tactic utilizes annotated theorems to add instances of matching patterns into the local context during proof search.\
|
||||
For example, if a theorem `@[grind =] theorem foo_idempotent : foo (foo x) = foo x` is annotated,\
|
||||
`grind` will add an instance of this theorem to the local context whenever it encounters the pattern `foo (foo x)`."
|
||||
applicationTime := .afterCompilation
|
||||
add := fun declName stx attrKind => do
|
||||
addGrindAttr declName attrKind (getKind stx) |>.run' {}
|
||||
erase := fun declName => MetaM.run' do
|
||||
/-
|
||||
Remark: consider the following example
|
||||
```
|
||||
attribute [grind] foo -- ok
|
||||
attribute [-grind] foo.eqn_2 -- ok
|
||||
attribute [-grind] foo -- error
|
||||
```
|
||||
One may argue that the correct behavior should be
|
||||
```
|
||||
attribute [grind] foo -- ok
|
||||
attribute [-grind] foo.eqn_2 -- error
|
||||
attribute [-grind] foo -- ok
|
||||
```
|
||||
-/
|
||||
let throwErr := throwError "`{declName}` is not marked with the `[grind]` attribute"
|
||||
let info ← getConstInfo declName
|
||||
if !info.isTheorem then
|
||||
if let some eqns ← getEqnsFor? declName then
|
||||
let s := ematchTheoremsExt.getState (← getEnv)
|
||||
unless eqns.all fun eqn => s.contains (.decl eqn) do
|
||||
throwErr
|
||||
modifyEnv fun env => ematchTheoremsExt.modifyState env fun s =>
|
||||
eqns.foldl (init := s) fun s eqn => s.erase (.decl eqn)
|
||||
else
|
||||
throwErr
|
||||
else
|
||||
unless ematchTheoremsExt.getState (← getEnv) |>.contains (.decl declName) do
|
||||
throwErr
|
||||
modifyEnv fun env => ematchTheoremsExt.modifyState env fun s => s.erase (.decl declName)
|
||||
}
|
||||
|
||||
end Lean.Meta.Grind
|
||||
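The attribute semantics described in the doc-string above can be illustrated with a small usage sketch. Everything here is hypothetical (`foo`, `foo_idempotent`, and the `sorry` proof are illustrative, not part of the library), and whether `grind` closes the final goal depends on its instantiation heuristics:

```lean
-- Hypothetical example of annotating an equational theorem for `grind`.
opaque foo : Nat → Nat

-- `[grind =]` asks `grind` to instantiate this theorem whenever the
-- left-hand-side pattern `foo (foo x)` appears in the goal state.
@[grind =] theorem foo_idempotent (x : Nat) : foo (foo x) = foo x := sorry

example (a : Nat) (h : foo (foo (foo a)) = a) : foo (foo a) = a := by
  grind
```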
@@ -1,102 +0,0 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Internalize
import Lean.Meta.Tactic.Grind.Simp

namespace Lean.Meta.Grind
/--
Truth-value propagation for `forall`-terms.
Given `e` of the form `∀ x : p, q x`:
- if the body `q` does not depend on `x` (i.e., `e` is the implication `p → q`),
  propagate using the implication rules in `propagateImpliesUp`;
- otherwise, if `p` is known to be `True`, simplify the instantiated body to `q'`
  and push the equality `e = q'`.
-/
def propagateForallPropUp (e : Expr) : GoalM Unit := do
  let .forallE n p q bi := e | return ()
  trace_goal[grind.debug.forallPropagator] "{e}"
  if !q.hasLooseBVars then
    propagateImpliesUp p q
  else
    unless (← isEqTrue p) do return
    trace_goal[grind.debug.forallPropagator] "isEqTrue, {e}"
    let h₁ ← mkEqTrueProof p
    let qh₁ := q.instantiate1 (mkApp2 (mkConst ``of_eq_true) p h₁)
    let r ← simp qh₁
    let q := mkLambda n bi p q
    let q' := r.expr
    internalize q' (← getGeneration e)
    trace_goal[grind.debug.forallPropagator] "q': {q'} for{indentExpr e}"
    let h₂ ← r.getProof
    let h := mkApp5 (mkConst ``Lean.Grind.forall_propagator) p q q' h₁ h₂
    pushEq e q' h
where
  propagateImpliesUp (a b : Expr) : GoalM Unit := do
    unless (← alreadyInternalized b) do return ()
    if (← isEqFalse a) then
      -- a = False → (a → b) = True
      pushEqTrue e <| mkApp3 (mkConst ``Grind.imp_eq_of_eq_false_left) a b (← mkEqFalseProof a)
    else if (← isEqTrue a) then
      -- a = True → (a → b) = b
      pushEq e b <| mkApp3 (mkConst ``Grind.imp_eq_of_eq_true_left) a b (← mkEqTrueProof a)
    else if (← isEqTrue b) then
      -- b = True → (a → b) = True
      pushEqTrue e <| mkApp3 (mkConst ``Grind.imp_eq_of_eq_true_right) a b (← mkEqTrueProof b)

private def isEqTrueHyp? (proof : Expr) : Option FVarId := Id.run do
  let_expr eq_true _ p := proof | return none
  let .fvar fvarId := p | return none
  return some fvarId

/-- Similar to `mkEMatchTheoremWithKind?`, but swallows any exceptions. -/
private def mkEMatchTheoremWithKind'? (origin : Origin) (proof : Expr) (kind : TheoremKind) : MetaM (Option EMatchTheorem) := do
  try
    mkEMatchTheoremWithKind? origin #[] proof kind
  catch _ =>
    return none

private def addLocalEMatchTheorems (e : Expr) : GoalM Unit := do
  let proof ← mkEqTrueProof e
  let origin ← if let some fvarId := isEqTrueHyp? proof then
    pure <| .fvar fvarId
  else
    let idx ← modifyGet fun s => (s.nextThmIdx, { s with nextThmIdx := s.nextThmIdx + 1 })
    pure <| .local ((`local).appendIndexAfter idx)
  let proof := mkApp2 (mkConst ``of_eq_true) e proof
  let size := (← get).newThms.size
  let gen ← getGeneration e
  -- TODO: we should have a flag for collecting all unary patterns in a local theorem
  if let some thm ← mkEMatchTheoremWithKind'? origin proof .fwd then
    activateTheorem thm gen
  if let some thm ← mkEMatchTheoremWithKind'? origin proof .bwd then
    activateTheorem thm gen
  if (← get).newThms.size == size then
    if let some thm ← mkEMatchTheoremWithKind'? origin proof .default then
      activateTheorem thm gen
  if (← get).newThms.size == size then
    trace[grind.issues] "failed to create E-match local theorem for{indentExpr e}"

def propagateForallPropDown (e : Expr) : GoalM Unit := do
  let .forallE n a b bi := e | return ()
  if (← isEqFalse e) then
    if b.hasLooseBVars then
      let α := a
      let p := b
      -- `e` is of the form `∀ x : α, p x`
      -- Add fact `∃ x : α, ¬ p x`
      let u ← getLevel α
      let prop := mkApp2 (mkConst ``Exists [u]) α (mkLambda n bi α (mkNot p))
      let proof := mkApp3 (mkConst ``Grind.of_forall_eq_false [u]) α (mkLambda n bi α p) (← mkEqFalseProof e)
      addNewFact proof prop (← getGeneration e)
    else
      let h ← mkEqFalseProof e
      pushEqTrue a <| mkApp3 (mkConst ``Grind.eq_true_of_imp_eq_false) a b h
      pushEqFalse b <| mkApp3 (mkConst ``Grind.eq_false_of_imp_eq_false) a b h
  else if (← isEqTrue e) then
    if b.hasLooseBVars then
      addLocalEMatchTheorems e

end Lean.Meta.Grind
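The three branches of `propagateImpliesUp` correspond to these propositional facts. Stated here as a sketch with standalone `example`s (the actual lemma names used by the code, `Grind.imp_eq_of_eq_false_left` and friends, live in `Init.Grind.Lemmas`):

```lean
-- Facts underlying the implication propagation in `propagateImpliesUp`:
example (a b : Prop) (h : a = False) : (a → b) = True := by simp [h]
example (a b : Prop) (h : a = True)  : (a → b) = b    := by simp [h]
example (a b : Prop) (h : b = True)  : (a → b) = True := by simp [h]
```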
@@ -5,12 +5,8 @@ Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Util
import Init.Grind.Lemmas
import Lean.Meta.LitValues
import Lean.Meta.Match.MatcherInfo
import Lean.Meta.Match.MatchEqsExt
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Util

namespace Lean.Meta.Grind

@@ -23,152 +19,34 @@ def addCongrTable (e : Expr) : GoalM Unit := do
    let g := e'.getAppFn
    unless isSameExpr f g do
      unless (← hasSameType f g) do
        trace_goal[grind.issues] "found congruence between{indentExpr e}\nand{indentExpr e'}\nbut functions have different types"
        trace[grind.issues] "found congruence between{indentExpr e}\nand{indentExpr e'}\nbut functions have different types"
        return ()
    trace_goal[grind.debug.congr] "{e} = {e'}"
    trace[grind.congr] "{e} = {e'}"
    pushEqHEq e e' congrPlaceholderProof
    let node ← getENode e
    setENode e { node with congr := e' }
    setENode e { node with cgRoot := e' }
  else
    modify fun s => { s with congrTable := s.congrTable.insert { e } }

/--
Given an application `e` of the form `f a_1 ... a_n`,
adds entry `f ↦ e` to `appMap`. Recall that `appMap` is a multi-map.
-/
private def updateAppMap (e : Expr) : GoalM Unit := do
  let key := e.toHeadIndex
  modify fun s => { s with
    appMap := if let some es := s.appMap.find? key then
      s.appMap.insert key (e :: es)
    else
      s.appMap.insert key [e]
  }

/-- Inserts `e` into the list of case-split candidates. -/
private def addSplitCandidate (e : Expr) : GoalM Unit := do
  trace_goal[grind.split.candidate] "{e}"
  modify fun s => { s with splitCandidates := e :: s.splitCandidates }

-- TODO: add attribute to make this extensible
private def forbiddenSplitTypes := [``Eq, ``HEq, ``True, ``False]

/-- Inserts `e` into the list of case-split candidates if applicable. -/
private def checkAndAddSplitCandidate (e : Expr) : GoalM Unit := do
  unless e.isApp do return ()
  if (← getConfig).splitIte && (e.isIte || e.isDIte) then
    addSplitCandidate e
    return ()
  if (← getConfig).splitMatch then
    if (← isMatcherApp e) then
      if let .reduced _ ← reduceMatcher? e then
        -- When instantiating `match`-equations, we add `match`-applications that can be reduced,
        -- and consequently do not need to be split.
        return ()
      else
        addSplitCandidate e
        return ()
  let .const declName _ := e.getAppFn | return ()
  if forbiddenSplitTypes.contains declName then return ()
  -- We should have a mechanism for letting users select the types to case-split on.
  -- Right now, we just consider inductive predicates that are not in the forbidden list.
  if (← getConfig).splitIndPred then
    if (← isInductivePredicate declName) then
      addSplitCandidate e

/--
If `e` is a `cast`-like term (e.g., `cast h a`), adds `HEq e a` to the to-do list.
It could be an E-matching theorem, but we want to ensure it is always applied since
we want to rely on the fact that `cast h a` and `a` are in the same equivalence class.
-/
private def pushCastHEqs (e : Expr) : GoalM Unit := do
  match_expr e with
  | f@cast α β h a => pushHEq e a (mkApp4 (mkConst ``cast_heq f.constLevels!) α β h a)
  | f@Eq.rec α a motive v b h => pushHEq e v (mkApp6 (mkConst ``Grind.eqRec_heq f.constLevels!) α a motive v b h)
  | f@Eq.ndrec α a motive v b h => pushHEq e v (mkApp6 (mkConst ``Grind.eqNDRec_heq f.constLevels!) α a motive v b h)
  | f@Eq.recOn α a motive b h v => pushHEq e v (mkApp6 (mkConst ``Grind.eqRecOn_heq f.constLevels!) α a motive b h v)
  | _ => return ()

mutual
/-- Internalizes the nested ground terms in the given pattern. -/
private partial def internalizePattern (pattern : Expr) (generation : Nat) : GoalM Expr := do
  if pattern.isBVar || isPatternDontCare pattern then
    return pattern
  else if let some e := groundPattern? pattern then
    let e ← shareCommon (← canon (← normalizeLevels (← unfoldReducible e)))
    internalize e generation
    return mkGroundPattern e
  else pattern.withApp fun f args => do
    return mkAppN f (← args.mapM (internalizePattern · generation))

partial def activateTheorem (thm : EMatchTheorem) (generation : Nat) : GoalM Unit := do
  -- Recall that we use the proof as part of the key for a set of instances found so far.
  -- We don't want to use structural equality when comparing keys.
  let proof ← shareCommon thm.proof
  let thm := { thm with proof, patterns := (← thm.patterns.mapM (internalizePattern · generation)) }
  trace_goal[grind.ematch] "activated `{thm.origin.key}`, {thm.patterns.map ppPattern}"
  modify fun s => { s with newThms := s.newThms.push thm }

/--
If `Config.matchEqs` is set to `true`, and `f` is a `match`-auxiliary function,
adds its equations to `newThms`.
-/
private partial def addMatchEqns (f : Expr) (generation : Nat) : GoalM Unit := do
  if !(← getConfig).matchEqs then return ()
  let .const declName _ := f | return ()
  if !(← isMatcher declName) then return ()
  if (← get).matchEqNames.contains declName then return ()
  modify fun s => { s with matchEqNames := s.matchEqNames.insert declName }
  for eqn in (← Match.getEquationsFor declName).eqnNames do
    -- We disable pattern normalization to prevent the `match`-expression from being reduced.
    activateTheorem (← mkEMatchEqTheorem eqn (normalizePattern := false)) generation

private partial def activateTheoremPatterns (fName : Name) (generation : Nat) : GoalM Unit := do
  if let some (thms, thmMap) := (← get).thmMap.retrieve? fName then
    modify fun s => { s with thmMap }
    let appMap := (← get).appMap
    for thm in thms do
      unless (← get).thmMap.isErased thm.origin do
        let symbols := thm.symbols.filter fun sym => !appMap.contains sym
        let thm := { thm with symbols }
        match symbols with
        | [] => activateTheorem thm generation
        | _ =>
          trace_goal[grind.ematch] "reinsert `{thm.origin.key}`"
          modify fun s => { s with thmMap := s.thmMap.insert thm }

partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
  if (← alreadyInternalized e) then return ()
  trace_goal[grind.internalize] "{e}"
  match e with
  | .bvar .. => unreachable!
  | .sort .. => return ()
  | .fvar .. | .letE .. | .lam .. =>
  | .fvar .. | .letE .. | .lam .. | .forallE .. =>
    mkENodeCore e (ctor := false) (interpreted := false) (generation := generation)
  | .forallE _ d b _ =>
    mkENodeCore e (ctor := false) (interpreted := false) (generation := generation)
    if (← isProp d <&&> isProp e) then
      internalize d generation
      registerParent e d
      unless b.hasLooseBVars do
        internalize b generation
        registerParent e b
      propagateUp e
  | .lit .. | .const .. =>
    mkENode e generation
  | .mvar ..
  | .mdata ..
  | .proj .. =>
    trace_goal[grind.issues] "unexpected term during internalization{indentExpr e}"
    trace[grind.issues] "unexpected term during internalization{indentExpr e}"
    mkENodeCore e (ctor := false) (interpreted := false) (generation := generation)
  | .app .. =>
    if (← isLitValue e) then
      -- We do not want to internalize the components of a literal value.
      mkENode e generation
    else e.withApp fun f args => do
      checkAndAddSplitCandidate e
      pushCastHEqs e
      addMatchEqns f generation
      if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
        -- We only internalize the proposition. We can skip the proof because of
        -- proof irrelevance.
@@ -176,9 +54,7 @@ partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
        internalize c generation
        registerParent e c
      else
        if let .const fName _ := f then
          activateTheoremPatterns fName generation
        else
          unless f.isConst do
            internalize f generation
          registerParent e f
        for h : i in [: args.size] do
@@ -187,8 +63,6 @@ partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
          registerParent e arg
        mkENode e generation
        addCongrTable e
        updateAppMap e
        propagateUp e
end

end Lean.Meta.Grind
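The first case of `pushCastHEqs` uses the core lemma `cast_heq`; its statement can be checked in isolation with a small sketch:

```lean
-- The fact pushed for `cast`-terms: `cast h a` is heterogeneously equal to `a`.
example (α β : Sort u) (h : α = β) (a : α) : HEq (cast h a) a :=
  cast_heq h a
```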
@@ -1,143 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Assert
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
import Lean.Meta.Tactic.Grind.Core
import Lean.Meta.Tactic.Grind.Combinators

namespace Lean.Meta.Grind

private inductive IntroResult where
  | done
  | newHyp (fvarId : FVarId) (goal : Goal)
  | newDepHyp (goal : Goal)
  | newLocal (fvarId : FVarId) (goal : Goal)
  deriving Inhabited

private def introNext (goal : Goal) (generation : Nat) : GrindM IntroResult := do
  let target ← goal.mvarId.getType
  if target.isArrow then
    goal.mvarId.withContext do
      let p := target.bindingDomain!
      if !(← isProp p) then
        let (fvarId, mvarId) ← goal.mvarId.intro1P
        return .newLocal fvarId { goal with mvarId }
      else
        let tag ← goal.mvarId.getTag
        let q := target.bindingBody!
        -- TODO: keep applying simp/eraseIrrelevantMData/canon/shareCommon until no progress
        let r ← simp p
        let fvarId ← mkFreshFVarId
        let lctx := (← getLCtx).mkLocalDecl fvarId target.bindingName! r.expr target.bindingInfo!
        let mvarNew ← mkFreshExprMVarAt lctx (← getLocalInstances) q .syntheticOpaque tag
        let mvarIdNew := mvarNew.mvarId!
        mvarIdNew.withContext do
          let h ← mkLambdaFVars #[mkFVar fvarId] mvarNew
          match r.proof? with
          | some he =>
            let hNew := mkAppN (mkConst ``Lean.Grind.intro_with_eq) #[p, r.expr, q, he, h]
            goal.mvarId.assign hNew
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
          | none =>
            -- `p` and `p'` are definitionally equal
            goal.mvarId.assign h
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
  else if target.isLet || target.isForall || target.isLetFun then
    let (fvarId, mvarId) ← goal.mvarId.intro1P
    mvarId.withContext do
      let localDecl ← fvarId.getDecl
      if (← isProp localDecl.type) then
        -- Add a non-dependent copy
        let mvarId ← mvarId.assert (← mkFreshUserName localDecl.userName) localDecl.type (mkFVar fvarId)
        return .newDepHyp { goal with mvarId }
      else
        let goal := { goal with mvarId }
        if target.isLet || target.isLetFun then
          let v := (← fvarId.getDecl).value
          let r ← simp v
          let x ← shareCommon (mkFVar fvarId)
          let goal ← GoalM.run' goal <| addNewEq x r.expr (← r.getProof) generation
          return .newLocal fvarId goal
        else
          return .newLocal fvarId goal
  else
    return .done

private def isCasesCandidate (type : Expr) : MetaM Bool := do
  let .const declName _ := type.getAppFn | return false
  isGrindCasesTarget declName

private def applyCases? (goal : Goal) (fvarId : FVarId) : MetaM (Option (List Goal)) := goal.mvarId.withContext do
  if (← isCasesCandidate (← fvarId.getType)) then
    let mvarIds ← cases goal.mvarId (mkFVar fvarId)
    return mvarIds.map fun mvarId => { goal with mvarId }
  else
    return none

private def applyInjection? (goal : Goal) (fvarId : FVarId) : MetaM (Option Goal) := do
  if let some mvarId ← injection? goal.mvarId fvarId then
    return some { goal with mvarId }
  else
    return none

/-- Introduces new hypotheses (and applies `by_contra`) until the goal is of the form `... ⊢ False`. -/
partial def intros (generation : Nat) : GrindTactic' := fun goal => do
  let rec go (goal : Goal) : StateRefT (Array Goal) GrindM Unit := do
    if goal.inconsistent then
      return ()
    match (← introNext goal generation) with
    | .done =>
      if let some mvarId ← goal.mvarId.byContra? then
        go { goal with mvarId }
      else
        modify fun s => s.push goal
    | .newHyp fvarId goal =>
      if let some goals ← applyCases? goal fvarId then
        goals.forM go
      else if let some goal ← applyInjection? goal fvarId then
        go goal
      else
        go (← GoalM.run' goal <| addHypothesis fvarId generation)
    | .newDepHyp goal =>
      go goal
    | .newLocal fvarId goal =>
      if let some goals ← applyCases? goal fvarId then
        goals.forM go
      else
        go goal
  let (_, goals) ← (go goal).run #[]
  return goals.toList

/-- Asserts a new fact `prop` with proof `proof` to the given `goal`. -/
def assertAt (proof : Expr) (prop : Expr) (generation : Nat) : GrindTactic' := fun goal => do
  if (← isCasesCandidate prop) then
    let mvarId ← goal.mvarId.assert (← mkFreshUserName `h) prop proof
    let goal := { goal with mvarId }
    intros generation goal
  else
    let goal ← GoalM.run' goal do
      let r ← simp prop
      let prop' := r.expr
      let proof' ← mkEqMP (← r.getProof) proof
      add prop' proof' generation
    if goal.inconsistent then return [] else return [goal]

/-- Asserts the next fact in the `goal` fact queue. -/
def assertNext : GrindTactic := fun goal => do
  let some (fact, newFacts) := goal.newFacts.dequeue?
    | return none
  assertAt fact.proof fact.prop fact.generation { goal with newFacts }

/-- Asserts all facts in the `goal` fact queue. -/
partial def assertAll : GrindTactic :=
  assertNext.iterate

end Lean.Meta.Grind
@@ -24,9 +24,9 @@ private def checkEqc (root : ENode) : GoalM Unit := do
    if curr.isApp then
      if let some { e } := (← get).congrTable.find? { e := curr } then
        if (← hasSameType e.getAppFn curr.getAppFn) then
          assert! isSameExpr e (← getCongrRoot curr)
          assert! isSameExpr e (← getENode curr).cgRoot
      else
        assert! (← isCongrRoot curr)
        assert! isSameExpr curr (← getENode curr).cgRoot
    -- If the equivalence class does not have HEq proofs, then the types must be definitionally equal.
    unless root.heqProofs do
      assert! (← hasSameType curr root.self)
@@ -57,10 +57,6 @@ private def checkParents (e : Expr) : GoalM Unit := do
        if (← checkChild arg) then
          found := true
          break
      -- Recall that we have support for `Expr.forallE` propagation. See `ForallProp.lean`.
      if let .forallE _ d _ _ := parent then
        if (← checkChild d) then
          found := true
      unless found do
        assert! (← checkChild parent.getAppFn)
    else
@@ -84,10 +80,10 @@ private def checkProofs : GoalM Unit := do
  for a in eqc do
    for b in eqc do
      unless isSameExpr a b do
        let p ← mkEqHEqProof a b
        trace_goal[grind.debug.proofs] "{a} = {b}"
        let p ← mkEqProof a b
        trace[grind.debug.proofs] "{a} = {b}"
        check p
        trace_goal[grind.debug.proofs] "checked: {← inferType p}"
        trace[grind.debug.proofs] "checked: {← inferType p}"

/--
Checks basic invariants if `grind.debug` is enabled.
@@ -103,7 +99,4 @@ def checkInvariants (expensive := false) : GoalM Unit := do
  if expensive && grind.debug.proofs.get (← getOptions) then
    checkProofs

def Goal.checkInvariants (goal : Goal) (expensive := false) : GrindM Unit :=
  discard <| GoalM.run' goal <| Grind.checkInvariants expensive

end Lean.Meta.Grind
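The simplified-domain introduction step builds a proof via `Lean.Grind.intro_with_eq`. Its statement has roughly the following shape; this is a sketch (the actual formulation lives in `Init.Grind.Lemmas`), with the proof term spelled out directly:

```lean
-- Sketch: if the domain `p` simplifies to `p'` (witnessed by `he : p = p'`),
-- a proof of `p' → q` yields a proof of `p → q`.
example (p p' q : Prop) (he : p = p') (h : p' → q) : p → q :=
  fun hp => h (he ▸ hp)
```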
@@ -1,89 +0,0 @@
|
||||
/-
|
||||
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
prelude
|
||||
import Init.Grind.Lemmas
|
||||
import Lean.Meta.Tactic.Util
|
||||
import Lean.Meta.Tactic.Grind.RevertAll
|
||||
import Lean.Meta.Tactic.Grind.PropagatorAttr
|
||||
import Lean.Meta.Tactic.Grind.Proj
|
||||
import Lean.Meta.Tactic.Grind.ForallProp
|
||||
import Lean.Meta.Tactic.Grind.Util
|
||||
import Lean.Meta.Tactic.Grind.Inv
|
||||
import Lean.Meta.Tactic.Grind.Intro
|
||||
import Lean.Meta.Tactic.Grind.EMatch
|
||||
import Lean.Meta.Tactic.Grind.Split
|
||||
import Lean.Meta.Tactic.Grind.SimpUtil
|
||||
|
||||
namespace Lean.Meta.Grind
|
||||
|
||||
def mkMethods (fallback : Fallback) : CoreM Methods := do
|
||||
let builtinPropagators ← builtinPropagatorsRef.get
|
||||
return {
|
||||
fallback
|
||||
propagateUp := fun e => do
|
||||
propagateForallPropUp e
|
||||
let .const declName _ := e.getAppFn | return ()
|
||||
propagateProjEq e
|
||||
if let some prop := builtinPropagators.up[declName]? then
|
||||
prop e
|
||||
propagateDown := fun e => do
|
||||
propagateForallPropDown e
|
||||
let .const declName _ := e.getAppFn | return ()
|
||||
if let some prop := builtinPropagators.down[declName]? then
|
||||
prop e
|
||||
}
|
||||
|
||||
def GrindM.run (x : GrindM α) (mainDeclName : Name) (config : Grind.Config) (fallback : Fallback) : MetaM α := do
|
||||
let scState := ShareCommon.State.mk _
|
||||
let (falseExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``False)
|
||||
let (trueExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``True)
|
||||
let simprocs ← Grind.getSimprocs
|
||||
let simp ← Grind.getSimpContext
|
||||
x (← mkMethods fallback).toMethodsRef { mainDeclName, config, simprocs, simp } |>.run' { scState, trueExpr, falseExpr }
|
||||
|
||||
private def mkGoal (mvarId : MVarId) : GrindM Goal := do
|
||||
let trueExpr ← getTrueExpr
|
||||
let falseExpr ← getFalseExpr
|
||||
let thmMap ← getEMatchTheorems
|
||||
GoalM.run' { mvarId, thmMap } do
|
||||
mkENodeCore falseExpr (interpreted := true) (ctor := false) (generation := 0)
|
||||
mkENodeCore trueExpr (interpreted := true) (ctor := false) (generation := 0)
|
||||
|
||||
private def initCore (mvarId : MVarId) : GrindM (List Goal) := do
|
||||
mvarId.ensureProp
|
||||
-- TODO: abstract metavars
|
||||
mvarId.ensureNoMVar
|
||||
let mvarId ← mvarId.clearAuxDecls
|
||||
let mvarId ← mvarId.revertAll
|
||||
let mvarId ← mvarId.unfoldReducible
|
||||
let mvarId ← mvarId.betaReduce
|
||||
appendTagSuffix mvarId `grind
|
||||
let goals ← intros (← mkGoal mvarId) (generation := 0)
|
||||
goals.forM (·.checkInvariants (expensive := true))
|
||||
return goals.filter fun goal => !goal.inconsistent
|
||||
|
||||
def all (goals : List Goal) (f : Goal → GrindM (List Goal)) : GrindM (List Goal) := do
|
||||
goals.foldlM (init := []) fun acc goal => return acc ++ (← f goal)
|
||||
|
||||
/-- A very simple strategy -/
|
||||
private def simple (goals : List Goal) : GrindM (List Goal) := do
|
||||
applyToAll (assertAll >> ematchStar >> (splitNext >> assertAll >> ematchStar).iterate) goals
|
||||
|
||||
def main (mvarId : MVarId) (config : Grind.Config) (mainDeclName : Name) (fallback : Fallback) : MetaM (List MVarId) := do
|
||||
let go : GrindM (List MVarId) := do
|
||||
let goals ← initCore mvarId
|
||||
let goals ← simple goals
|
||||
let goals ← goals.filterMapM fun goal => do
|
||||
if goal.inconsistent then return none
|
||||
let goal ← GoalM.run' goal fallback
|
||||
if goal.inconsistent then return none
|
||||
if (← goal.mvarId.isAssigned) then return none
|
||||
return some goal
|
||||
trace[grind.debug.final] "{← ppGoals goals}"
|
||||
return goals.map (·.mvarId)
|
||||
go.run mainDeclName config fallback
|
||||
|
||||
end Lean.Meta.Grind
|
||||
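The deleted file above assembles `grind`'s search out of small tactic steps (`assertAll`, `ematchStar`, `splitNext`) composed with `>>` and `.iterate`. A minimal standalone sketch of how such combinators behave, with goals modeled as plain values; the names `Tac`, `seq`, and `iter` are illustrative only, not the repo's API:

```lean
-- Illustrative only: `Tac σ` models a tactic step that either reports
-- "no progress" (`none`) or returns the subgoals it produced.
def Tac (σ : Type) := σ → Option (List σ)

-- Sequencing (`>>` above): run `t₁`, then `t₂` on every resulting subgoal.
def seq {σ : Type} (t₁ t₂ : Tac σ) : Tac σ := fun g =>
  match t₁ g with
  | none    => t₂ g
  | some gs => some (gs.flatMap fun g' => (t₂ g').getD [g'])

-- Iteration (`.iterate` above): repeat a step until it stops making progress.
partial def iter {σ : Type} (t : Tac σ) : Tac σ := fun g =>
  match t g with
  | none    => some [g]
  | some gs => some (gs.flatMap fun g' => (iter t g').getD [g'])
```

Under this model, `simple` above corresponds to `seq assertAll (seq ematchStar (iter …))` applied to every initial goal.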
@@ -24,37 +24,30 @@ where
    let e' := mkApp2 (mkConst ``Lean.Grind.nestedProof) prop e
    modify fun s => s.insert e e'
    return e'
  -- Remark: we have to process `Expr.proj` since we only
  -- fold projections later during term internalization
  unless e.isApp || e.isForall || e.isProj do
    return e
  -- Check whether it is cached
  if let some r := (← get).find? e then
    return r
  let e' ← match e with
    | .app .. => e.withApp fun f args => do
      let mut modified := false
      let mut args := args
      for i in [:args.size] do
        let arg := args[i]!
        let arg' ← visit arg
        unless ptrEq arg arg' do
          args := args.set! i arg'
          modified := true
      if modified then
        pure <| mkAppN f args
      else
        pure e
    | .proj _ _ b =>
      pure <| e.updateProj! (← visit b)
    | .forallE _ d b _ =>
      -- Recall that we have `ForallProp.lean`.
      let d' ← visit d
      let b' ← if b.hasLooseBVars then pure b else visit b
      pure <| e.updateForallE! d' b'
    | _ => unreachable!
  modify fun s => s.insert e e'
  return e'
  else match e with
    | .bvar .. => unreachable!
    -- See comments on `Canon.lean` for why we do not visit these cases.
    | .letE .. | .forallE .. | .lam ..
    | .const .. | .lit .. | .mvar .. | .sort .. | .fvar ..
    | .proj ..
    | .mdata .. => return e
    -- We only visit applications
    | .app .. =>
      -- Check whether it is cached
      if let some r := (← get).find? e then
        return r
      e.withApp fun f args => do
        let mut modified := false
        let mut args := args
        for i in [:args.size] do
          let arg := args[i]!
          let arg' ← visit arg
          unless ptrEq arg arg' do
            args := args.set! i arg'
            modified := true
        let e' := if modified then mkAppN f args else e
        modify fun s => s.insert e e'
        return e'

/--
Wrap nested proofs `e` with `Lean.Grind.nestedProof`-applications.

@@ -56,11 +56,4 @@ def ppState : GoalM Format := do
    r := r ++ "\n" ++ "{" ++ (Format.joinSep (← eqc.mapM ppENodeRef) ", ") ++ "}"
  return r

def ppGoals (goals : List Goal) : GrindM Format := do
  let mut r := f!""
  for goal in goals do
    let (f, _) ← GoalM.run goal ppState
    r := r ++ Format.line ++ f
  return r

end Lean.Meta.Grind
172
src/Lean/Meta/Tactic/Grind/Preprocessor.lean
Normal file
@@ -0,0 +1,172 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Canonicalizer
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Intro
import Lean.Meta.Tactic.Simp.Main
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.RevertAll
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
import Lean.Meta.Tactic.Grind.Core
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Run

namespace Lean.Meta.Grind
namespace Preprocessor

structure State where
  goals : PArray Goal := {}
  deriving Inhabited

abbrev PreM := StateRefT State GrindM

def PreM.run (x : PreM α) : GrindM α := do
  x.run' {}

inductive IntroResult where
  | done
  | newHyp (fvarId : FVarId) (goal : Goal)
  | newDepHyp (goal : Goal)
  | newLocal (fvarId : FVarId) (goal : Goal)

def introNext (goal : Goal) : PreM IntroResult := do
  let target ← goal.mvarId.getType
  if target.isArrow then
    goal.mvarId.withContext do
      let p := target.bindingDomain!
      if !(← isProp p) then
        let (fvarId, mvarId) ← goal.mvarId.intro1P
        return .newLocal fvarId { goal with mvarId }
      else
        let tag ← goal.mvarId.getTag
        let q := target.bindingBody!
        -- TODO: keep applying simp/eraseIrrelevantMData/canon/shareCommon until no progress
        let r ← pre p
        let fvarId ← mkFreshFVarId
        let lctx := (← getLCtx).mkLocalDecl fvarId target.bindingName! r.expr target.bindingInfo!
        let mvarNew ← mkFreshExprMVarAt lctx (← getLocalInstances) q .syntheticOpaque tag
        let mvarIdNew := mvarNew.mvarId!
        mvarIdNew.withContext do
          let h ← mkLambdaFVars #[mkFVar fvarId] mvarNew
          match r.proof? with
          | some he =>
            let hNew := mkAppN (mkConst ``Lean.Grind.intro_with_eq) #[p, r.expr, q, he, h]
            goal.mvarId.assign hNew
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
          | none =>
            -- `p` and `p'` are definitionally equal
            goal.mvarId.assign h
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
  else if target.isLet || target.isForall then
    let (fvarId, mvarId) ← goal.mvarId.intro1P
    mvarId.withContext do
      let localDecl ← fvarId.getDecl
      if (← isProp localDecl.type) then
        -- Add a non-dependent copy
        let mvarId ← mvarId.assert (← mkFreshUserName localDecl.userName) localDecl.type (mkFVar fvarId)
        return .newDepHyp { goal with mvarId }
      else
        return .newLocal fvarId { goal with mvarId }
  else
    return .done

def pushResult (goal : Goal) : PreM Unit :=
  modify fun s => { s with goals := s.goals.push goal }

def isCasesCandidate (fvarId : FVarId) : MetaM Bool := do
  let .const declName _ := (← fvarId.getType).getAppFn | return false
  isGrindCasesTarget declName

def applyCases? (goal : Goal) (fvarId : FVarId) : MetaM (Option (List Goal)) := goal.mvarId.withContext do
  if (← isCasesCandidate fvarId) then
    let mvarIds ← cases goal.mvarId fvarId
    return mvarIds.map fun mvarId => { goal with mvarId }
  else
    return none

def applyInjection? (goal : Goal) (fvarId : FVarId) : MetaM (Option Goal) := do
  if let some mvarId ← injection? goal.mvarId fvarId then
    return some { goal with mvarId }
  else
    return none

partial def loop (goal : Goal) : PreM Unit := do
  if goal.inconsistent then
    return ()
  match (← introNext goal) with
  | .done =>
    if let some mvarId ← goal.mvarId.byContra? then
      loop { goal with mvarId }
    else
      pushResult goal
  | .newHyp fvarId goal =>
    if let some goals ← applyCases? goal fvarId then
      goals.forM loop
    else if let some goal ← applyInjection? goal fvarId then
      loop goal
    else
      loop (← GoalM.run' goal <| addHyp fvarId)
  | .newDepHyp goal =>
    loop goal
  | .newLocal fvarId goal =>
    if let some goals ← applyCases? goal fvarId then
      goals.forM loop
    else
      loop goal

def ppGoals : PreM Format := do
  let mut r := f!""
  for goal in (← get).goals do
    let (f, _) ← GoalM.run goal ppState
    r := r ++ Format.line ++ f
  return r

def preprocess (mvarId : MVarId) : PreM State := do
  mvarId.ensureProp
  -- TODO: abstract metavars
  mvarId.ensureNoMVar
  let mvarId ← mvarId.clearAuxDecls
  let mvarId ← mvarId.revertAll
  mvarId.ensureNoMVar
  let mvarId ← mvarId.abstractNestedProofs (← getMainDeclName)
  let mvarId ← mvarId.unfoldReducible
  let mvarId ← mvarId.betaReduce
  loop (← mkGoal mvarId)
  if (← isTracingEnabledFor `grind.pre) then
    trace[grind.pre] (← ppGoals)
  for goal in (← get).goals do
    discard <| GoalM.run' goal <| checkInvariants (expensive := true)
  get

def preprocessAndProbe (mvarId : MVarId) (p : GoalM Unit) : PreM Unit := do
  let s ← preprocess mvarId
  s.goals.forM fun goal =>
    discard <| GoalM.run' goal p

end Preprocessor

open Preprocessor

def preprocessAndProbe (mvarId : MVarId) (mainDeclName : Name) (p : GoalM Unit) : MetaM Unit :=
  withoutModifyingMCtx do
    Preprocessor.preprocessAndProbe mvarId p |>.run |>.run mainDeclName

def preprocess (mvarId : MVarId) (mainDeclName : Name) : MetaM Preprocessor.State :=
  Preprocessor.preprocess mvarId |>.run |>.run mainDeclName

def main (mvarId : MVarId) (mainDeclName : Name) : MetaM (List MVarId) := do
  let go : GrindM (List MVarId) := do
    let s ← Preprocessor.preprocess mvarId |>.run
    let goals := s.goals.toList.filter fun goal => !goal.inconsistent
    return goals.map (·.mvarId)
  go.run mainDeclName

end Lean.Meta.Grind
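The `loop` in the new `Preprocessor.lean` above alternates binder introduction (`introNext`) with case analysis (`applyCases?`, `applyInjection?`) until no binders remain. A simplified, purely functional model of that control flow; `Step` and `drive` are hypothetical names for illustration, with goals kept abstract:

```lean
-- Illustrative only: a simplified model of `Preprocessor.loop` above.
-- `next` plays the role of `introNext`, `split` the role of `applyCases?`.
inductive Step (γ : Type) where
  | done
  | newHyp   (g : γ)
  | newLocal (g : γ)

partial def drive {γ : Type} (next : γ → Step γ)
    (split : γ → Option (List γ)) (g : γ) : List γ :=
  match next g with
  | .done        => [g]                         -- no binders left: emit the goal
  | .newHyp g'   =>
    match split g' with
    | some gs => gs.flatMap (drive next split)  -- case split produced subgoals
    | none    => drive next split g'
  | .newLocal g' => drive next split g'
```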
@@ -19,21 +19,17 @@ def propagateProjEq (parent : Expr) : GoalM Unit := do
  let .const declName _ := parent.getAppFn | return ()
  let some info ← getProjectionFnInfo? declName | return ()
  unless info.numParams + 1 == parent.getAppNumArgs do return ()
  -- It is wasteful to add equation if `parent` is not the root of its congruence class
  unless (← isCongrRoot parent) do return ()
  let arg := parent.appArg!
  let ctor ← getRoot arg
  unless ctor.isAppOf info.ctorName do return ()
  let parentNew ← if isSameExpr arg ctor then
    pure parent
  if isSameExpr arg ctor then
    let idx := info.numParams + info.i
    unless idx < ctor.getAppNumArgs do return ()
    let v := ctor.getArg! idx
    pushEq parent v (← mkEqRefl v)
  else
    let parentNew ← shareCommon (mkApp parent.appFn! ctor)
    internalize parentNew (← getGeneration parent)
    pure parentNew
  trace_goal[grind.debug.proj] "{parentNew}"
  let idx := info.numParams + info.i
  unless idx < ctor.getAppNumArgs do return ()
  let v := ctor.getArg! idx
  pushEq parentNew v (← mkEqRefl v)
    let newProj := mkApp parent.appFn! ctor
    let newProj ← shareCommon newProj
    internalize newProj (← getGeneration parent)

end Lean.Meta.Grind
@@ -4,7 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Sorry -- TODO: remove
import Lean.Meta.Tactic.Grind.Types

namespace Lean.Meta.Grind

@@ -128,52 +128,6 @@ mutual
    let r := (← loop lhs rhs).get!
    if heq then mkHEqOfEq r else return r

  private partial def mkHCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let f := lhs.getAppFn
    let g := rhs.getAppFn
    let numArgs := lhs.getAppNumArgs
    assert! rhs.getAppNumArgs == numArgs
    let thm ← mkHCongrWithArity f numArgs
    assert! thm.argKinds.size == numArgs
    let rec loop (lhs rhs : Expr) (i : Nat) : GoalM Expr := do
      let i := i - 1
      if lhs.isApp then
        let proof ← loop lhs.appFn! rhs.appFn! i
        let a₁ := lhs.appArg!
        let a₂ := rhs.appArg!
        let k := thm.argKinds[i]!
        return mkApp3 proof a₁ a₂ (← mkEqProofCore a₁ a₂ (k matches .heq))
      else
        return thm.proof
    let proof ← loop lhs rhs numArgs
    if isSameExpr f g then
      mkEqOfHEqIfNeeded proof heq
    else
      /-
      `lhs` is of the form `f a_1 ... a_n`
      `rhs` is of the form `g b_1 ... b_n`
      `proof : HEq (f a_1 ... a_n) (f b_1 ... b_n)`
      We construct a proof for `HEq (f a_1 ... a_n) (g b_1 ... b_n)` using `Eq.ndrec`
      -/
      let motive ← withLocalDeclD (← mkFreshUserName `x) (← inferType f) fun x => do
        mkLambdaFVars #[x] (← mkHEq lhs (mkAppN x rhs.getAppArgs))
      let fEq ← mkEqProofCore f g false
      let proof ← mkEqNDRec motive proof fEq
      mkEqOfHEqIfNeeded proof heq

  private partial def mkEqCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let_expr f@Eq α₁ a₁ b₁ := lhs | unreachable!
    let_expr Eq α₂ a₂ b₂ := rhs | unreachable!
    let enodes := (← get).enodes
    let us := f.constLevels!
    if !isSameExpr α₁ α₂ then
      mkHCongrProof lhs rhs heq
    else if hasSameRoot enodes a₁ a₂ && hasSameRoot enodes b₁ b₂ then
      return mkApp7 (mkConst ``Grind.eq_congr us) α₁ a₁ b₁ a₂ b₂ (← mkEqProofCore a₁ a₂ false) (← mkEqProofCore b₁ b₂ false)
    else
      assert! hasSameRoot enodes a₁ b₂ && hasSameRoot enodes b₁ a₂
      return mkApp7 (mkConst ``Grind.eq_congr' us) α₁ a₁ b₁ a₂ b₂ (← mkEqProofCore a₁ b₂ false) (← mkEqProofCore b₁ a₂ false)

  /-- Constructs a congruence proof for `lhs` and `rhs`. -/
  private partial def mkCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let f := lhs.getAppFn
@@ -182,12 +136,36 @@ mutual
    assert! rhs.getAppNumArgs == numArgs
    if f.isConstOf ``Lean.Grind.nestedProof && g.isConstOf ``Lean.Grind.nestedProof && numArgs == 2 then
      mkNestedProofCongr lhs rhs heq
    else if f.isConstOf ``Eq && g.isConstOf ``Eq && numArgs == 3 then
      mkEqCongrProof lhs rhs heq
    else if (← isCongrDefaultProofTarget lhs rhs f g numArgs) then
      mkCongrDefaultProof lhs rhs heq
    else
      mkHCongrProof lhs rhs heq
    let thm ← mkHCongrWithArity f numArgs
    assert! thm.argKinds.size == numArgs
    let rec loop (lhs rhs : Expr) (i : Nat) : GoalM Expr := do
      let i := i - 1
      if lhs.isApp then
        let proof ← loop lhs.appFn! rhs.appFn! i
        let a₁ := lhs.appArg!
        let a₂ := rhs.appArg!
        let k := thm.argKinds[i]!
        return mkApp3 proof a₁ a₂ (← mkEqProofCore a₁ a₂ (k matches .heq))
      else
        return thm.proof
    let proof ← loop lhs rhs numArgs
    if isSameExpr f g then
      mkEqOfHEqIfNeeded proof heq
    else
      /-
      `lhs` is of the form `f a_1 ... a_n`
      `rhs` is of the form `g b_1 ... b_n`
      `proof : HEq (f a_1 ... a_n) (f b_1 ... b_n)`
      We construct a proof for `HEq (f a_1 ... a_n) (g b_1 ... b_n)` using `Eq.ndrec`
      -/
      let motive ← withLocalDeclD (← mkFreshUserName `x) (← inferType f) fun x => do
        mkLambdaFVars #[x] (← mkHEq lhs (mkAppN x rhs.getAppArgs))
      let fEq ← mkEqProofCore f g false
      let proof ← mkEqNDRec motive proof fEq
      mkEqOfHEqIfNeeded proof heq

  private partial def realizeEqProof (lhs rhs : Expr) (h : Expr) (flipped : Bool) (heq : Bool) : GoalM Expr := do
    let h ← if h == congrPlaceholderProof then
@@ -220,41 +198,42 @@ mutual
    -- `h' : lhs = target`
    mkTrans' h' h heq

  /--
  Returns a proof of `lhs = rhs` (`HEq lhs rhs`) if `heq = false` (`heq = true`).
  If `heq = false`, this function assumes that `lhs` and `rhs` have the same type.
  -/
  private partial def mkEqProofCore (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    if isSameExpr lhs rhs then
      return (← mkRefl lhs heq)
    -- The equivalence class contains `HEq` proofs. So, we build a proof using HEq. Otherwise, we use `Eq`.
    let heqProofs := (← getRootENode lhs).heqProofs
    let n₁ ← getENode lhs
    let n₂ ← getENode rhs
    assert! isSameExpr n₁.root n₂.root
    let common ← findCommon lhs rhs
    let lhsEqCommon? ← mkProofTo lhs common none heqProofs
    let some lhsEqRhs ← mkProofFrom rhs common lhsEqCommon? heqProofs | unreachable!
    if heq == heqProofs then
      return lhsEqRhs
    else if heq then
      mkHEqOfEq lhsEqRhs
    else
      mkEqOfHEq lhsEqRhs

    let lhsEqCommon? ← mkProofTo lhs common none heq
    let some lhsEqRhs ← mkProofFrom rhs common lhsEqCommon? heq | unreachable!
    return lhsEqRhs
end

/--
Returns a proof that `a = b`.
Returns a proof that `a = b` (or `HEq a b`).
It assumes `a` and `b` are in the same equivalence class.
-/
@[export lean_grind_mk_eq_proof]
def mkEqProofImpl (a b : Expr) : GoalM Expr := do
  assert! (← hasSameType a b)
  mkEqProofCore a b (heq := false)
  let p ← go
  trace[grind.proof.detail] "{p}"
  return p
where
  go : GoalM Expr := do
    let n ← getRootENode a
    if !n.heqProofs then
      trace[grind.proof] "{a} = {b}"
      mkEqProofCore a b (heq := false)
    else
      if (← hasSameType a b) then
        trace[grind.proof] "{a} = {b}"
        mkEqOfHEq (← mkEqProofCore a b (heq := true))
      else
        trace[grind.proof] "{a} ≡ {b}"
        mkEqProofCore a b (heq := true)

@[export lean_grind_mk_heq_proof]
def mkHEqProofImpl (a b : Expr) : GoalM Expr :=
def mkHEqProof (a b : Expr) : GoalM Expr :=
  mkEqProofCore a b (heq := true)

end Lean.Meta.Grind
@@ -7,8 +7,6 @@ prelude
import Init.Grind
import Lean.Meta.Tactic.Grind.Proof
import Lean.Meta.Tactic.Grind.PropagatorAttr
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Internalize

namespace Lean.Meta.Grind

@@ -99,8 +97,6 @@ builtin_grind_propagator propagateNotUp ↑Not := fun e => do
  else if (← isEqTrue a) then
    -- a = True → (Not a) = False
    pushEqFalse e <| mkApp2 (mkConst ``Lean.Grind.not_eq_of_eq_true) a (← mkEqTrueProof a)
  else if (← isEqv e a) then
    closeGoal <| mkApp2 (mkConst ``Lean.Grind.false_of_not_eq_self) a (← mkEqProof e a)

/--
Propagates truth values downwards for a negation expression `Not a` based on the truth value of `Not a`.
@@ -134,13 +130,6 @@ builtin_grind_propagator propagateEqDown ↓Eq := fun e => do
    let_expr Eq _ a b := e | return ()
    pushEq a b <| mkApp2 (mkConst ``of_eq_true) e (← mkEqTrueProof e)

/-- Propagates `EqMatch` downwards -/
builtin_grind_propagator propagateEqMatchDown ↓Grind.EqMatch := fun e => do
  if (← isEqTrue e) then
    let_expr Grind.EqMatch _ a b origin := e | return ()
    markCaseSplitAsResolved origin
    pushEq a b <| mkApp2 (mkConst ``of_eq_true) e (← mkEqTrueProof e)

/-- Propagates `HEq` downwards -/
builtin_grind_propagator propagateHEqDown ↓HEq := fun e => do
  if (← isEqTrue e) then
@@ -153,32 +142,4 @@ builtin_grind_propagator propagateHEqUp ↑HEq := fun e => do
  if (← isEqv a b) then
    pushEqTrue e <| mkApp2 (mkConst ``eq_true) e (← mkHEqProof a b)

/-- Propagates `ite` upwards -/
builtin_grind_propagator propagateIte ↑ite := fun e => do
  let_expr f@ite α c h a b := e | return ()
  if (← isEqTrue c) then
    pushEq e a <| mkApp6 (mkConst ``ite_cond_eq_true f.constLevels!) α c h a b (← mkEqTrueProof c)
  else if (← isEqFalse c) then
    pushEq e b <| mkApp6 (mkConst ``ite_cond_eq_false f.constLevels!) α c h a b (← mkEqFalseProof c)

/-- Propagates `dite` upwards -/
builtin_grind_propagator propagateDIte ↑dite := fun e => do
  let_expr f@dite α c h a b := e | return ()
  if (← isEqTrue c) then
    let h₁ ← mkEqTrueProof c
    let ah₁ := mkApp a (mkApp2 (mkConst ``of_eq_true) c h₁)
    let p ← simp ah₁
    let r := p.expr
    let h₂ ← p.getProof
    internalize r (← getGeneration e)
    pushEq e r <| mkApp8 (mkConst ``Grind.dite_cond_eq_true' f.constLevels!) α c h a b r h₁ h₂
  else if (← isEqFalse c) then
    let h₁ ← mkEqFalseProof c
    let bh₁ := mkApp b (mkApp2 (mkConst ``of_eq_false) c h₁)
    let p ← simp bh₁
    let r := p.expr
    let h₂ ← p.getProof
    internalize r (← getGeneration e)
    pushEq e r <| mkApp8 (mkConst ``Grind.dite_cond_eq_false' f.constLevels!) α c h a b r h₁ h₂

end Lean.Meta.Grind
53
src/Lean/Meta/Tactic/Grind/Run.lean
Normal file
@@ -0,0 +1,53 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.PropagatorAttr
import Lean.Meta.Tactic.Grind.Proj

namespace Lean.Meta.Grind

def mkMethods : CoreM Methods := do
  let builtinPropagators ← builtinPropagatorsRef.get
  return {
    propagateUp := fun e => do
      let .const declName _ := e.getAppFn | return ()
      propagateProjEq e
      if let some prop := builtinPropagators.up[declName]? then
        prop e
    propagateDown := fun e => do
      let .const declName _ := e.getAppFn | return ()
      if let some prop := builtinPropagators.down[declName]? then
        prop e
  }

def GrindM.run (x : GrindM α) (mainDeclName : Name) : MetaM α := do
  let scState := ShareCommon.State.mk _
  let (falseExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``False)
  let (trueExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``True)
  let thms ← grindNormExt.getTheorems
  let simprocs := #[(← grindNormSimprocExt.getSimprocs)]
  let simp ← Simp.mkContext
    (config := { arith := true })
    (simpTheorems := #[thms])
    (congrTheorems := (← getSimpCongrTheorems))
  x (← mkMethods).toMethodsRef { mainDeclName, simprocs, simp } |>.run' { scState, trueExpr, falseExpr }

@[inline] def GoalM.run (goal : Goal) (x : GoalM α) : GrindM (α × Goal) :=
  goal.mvarId.withContext do StateRefT'.run x goal

@[inline] def GoalM.run' (goal : Goal) (x : GoalM Unit) : GrindM Goal :=
  goal.mvarId.withContext do StateRefT'.run' (x *> get) goal

def mkGoal (mvarId : MVarId) : GrindM Goal := do
  let trueExpr ← getTrueExpr
  let falseExpr ← getFalseExpr
  GoalM.run' { mvarId } do
    mkENodeCore falseExpr (interpreted := true) (ctor := false) (generation := 0)
    mkENodeCore trueExpr (interpreted := true) (ctor := false) (generation := 0)

end Lean.Meta.Grind
@@ -4,17 +4,18 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Assert
import Lean.Meta.Tactic.Simp.Main
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.DoNotSimp
import Lean.Meta.Tactic.Grind.MarkNestedProofs

namespace Lean.Meta.Grind

-- TODO: use congruence closure and decision procedures during pre-processing
-- TODO: implement `simp` discharger using preprocessor state

/-- Simplifies the given expression using the `grind` simprocs and normalization theorems. -/
def simpCore (e : Expr) : GrindM Simp.Result := do
def simp (e : Expr) : GrindM Simp.Result := do
  let simpStats := (← get).simpStats
  let (r, simpStats) ← Meta.simp e (← readThe Context).simp (← readThe Context).simprocs (stats := simpStats)
  modify fun s => { s with simpStats }
@@ -24,17 +25,14 @@ def simpCore (e : Expr) : GrindM Simp.Result := do
Simplifies `e` using `grind` normalization theorems and simprocs,
and then applies several other preprocessing steps.
-/
def simp (e : Expr) : GrindM Simp.Result := do
  let e ← instantiateMVars e
  let r ← simpCore e
def pre (e : Expr) : GrindM Simp.Result := do
  let r ← simp e
  let e' := r.expr
  let e' ← abstractNestedProofs e'
  let e' ← markNestedProofs e'
  let e' ← unfoldReducible e'
  let e' ← eraseIrrelevantMData e'
  let e' ← foldProjs e'
  let e' ← normalizeLevels e'
  let e' ← eraseDoNotSimp e'
  let e' ← canon e'
  let e' ← shareCommon e'
  trace[grind.simp] "{e}\n===>\n{e'}"
@@ -1,32 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Simp.Simproc
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.DoNotSimp

namespace Lean.Meta.Grind

/-- Returns the array of simprocs used by `grind`. -/
protected def getSimprocs : MetaM (Array Simprocs) := do
  let s ← grindNormSimprocExt.getSimprocs
  let s ← addDoNotSimp s
  return #[s, (← Simp.getSEvalSimprocs)]

/-- Returns the simplification context used by `grind`. -/
protected def getSimpContext : MetaM Simp.Context := do
  let thms ← grindNormExt.getTheorems
  Simp.mkContext
    (config := { arith := true })
    (simpTheorems := #[thms])
    (congrTheorems := (← getSimpCongrTheorems))

@[export lean_grind_normalize]
def normalizeImp (e : Expr) : MetaM Expr := do
  let (r, _) ← Meta.simp e (← Grind.getSimpContext) (← Grind.getSimprocs)
  return r.expr

end Lean.Meta.Grind
@@ -1,126 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Intro
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.CasesMatch

namespace Lean.Meta.Grind

inductive CaseSplitStatus where
  | resolved
  | notReady
  | ready
  deriving Inhabited, BEq

private def checkCaseSplitStatus (e : Expr) : GoalM CaseSplitStatus := do
  match_expr e with
  | Or a b =>
    if (← isEqTrue e) then
      if (← isEqTrue a <||> isEqTrue b) then
        return .resolved
      else
        return .ready
    else if (← isEqFalse e) then
      return .resolved
    else
      return .notReady
  | And a b =>
    if (← isEqTrue e) then
      return .resolved
    else if (← isEqFalse e) then
      if (← isEqFalse a <||> isEqFalse b) then
        return .resolved
      else
        return .ready
    else
      return .notReady
  | ite _ c _ _ _ =>
    if (← isEqTrue c <||> isEqFalse c) then
      return .resolved
    else
      return .ready
  | dite _ c _ _ _ =>
    if (← isEqTrue c <||> isEqFalse c) then
      return .resolved
    else
      return .ready
  | _ =>
    if (← isResolvedCaseSplit e) then
      trace[grind.debug.split] "split resolved: {e}"
      return .resolved
    if (← isMatcherApp e) then
      return .ready
    let .const declName .. := e.getAppFn | unreachable!
    if (← isInductivePredicate declName <&&> isEqTrue e) then
      return .ready
    return .notReady

/-- Returns the next case-split to be performed. It uses a very simple heuristic. -/
private def selectNextSplit? : GoalM (Option Expr) := do
  if (← isInconsistent) then return none
  if (← checkMaxCaseSplit) then return none
  go (← get).splitCandidates none []
where
  go (cs : List Expr) (c? : Option Expr) (cs' : List Expr) : GoalM (Option Expr) := do
    match cs with
    | [] =>
      modify fun s => { s with splitCandidates := cs'.reverse }
      if c?.isSome then
        -- Remark: we reset `numEmatch` after each case split.
        -- We should consider other strategies in the future.
        modify fun s => { s with numSplits := s.numSplits + 1, numEmatch := 0 }
      return c?
    | c::cs =>
      match (← checkCaseSplitStatus c) with
      | .notReady => go cs c? (c::cs')
      | .resolved => go cs c? cs'
      | .ready =>
        match c? with
        | none => go cs (some c) cs'
        | some c' =>
          if (← getGeneration c) < (← getGeneration c') then
            go cs (some c) (c'::cs')
          else
            go cs c? (c::cs')

/-- Constructs a major premise for the `cases` tactic used by `grind`. -/
private def mkCasesMajor (c : Expr) : GoalM Expr := do
  match_expr c with
  | And a b => return mkApp3 (mkConst ``Grind.or_of_and_eq_false) a b (← mkEqFalseProof c)
  | ite _ c _ _ _ => return mkEM c
  | dite _ c _ _ _ => return mkEM c
  | _ => return mkApp2 (mkConst ``of_eq_true) c (← mkEqTrueProof c)

/-- Introduces new hypotheses in each goal. -/
private def introNewHyp (goals : List Goal) (acc : List Goal) (generation : Nat) : GrindM (List Goal) := do
  match goals with
  | [] => return acc.reverse
  | goal::goals => introNewHyp goals ((← intros generation goal) ++ acc) generation

/--
Selects a case-split from the list of candidates,
and returns a new list of goals if successful.
-/
def splitNext : GrindTactic := fun goal => do
  let (goals?, _) ← GoalM.run goal do
    let some c ← selectNextSplit?
      | return none
    let gen ← getGeneration c
    trace_goal[grind.split] "{c}, generation: {gen}"
    let mvarIds ← if (← isMatcherApp c) then
      casesMatch (← get).mvarId c
    else
      let major ← mkCasesMajor c
      cases (← get).mvarId major
    let goal ← get
    let goals := mvarIds.map fun mvarId => { goal with mvarId }
    let goals ← introNewHyp goals [] (gen+1)
    return some goals
  return goals?

end Lean.Meta.Grind
@@ -4,10 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Tactics
import Init.Data.Queue
import Lean.Util.ShareCommon
import Lean.HeadIndex
import Lean.Meta.Basic
import Lean.Meta.CongrTheorems
import Lean.Meta.AbstractNestedProofs
@@ -15,7 +12,6 @@ import Lean.Meta.Tactic.Simp.Types
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Grind.Canon
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.EMatchTheorem

namespace Lean.Meta.Grind

@@ -29,7 +25,7 @@ def congrPlaceholderProof := mkConst (Name.mkSimple "[congruence]")

/--
Returns `true` if `e` is `True`, `False`, or a literal value.
See `Lean.Meta.LitValues` for supported literals.
See `LitValues` for supported literals.
-/
def isInterpreted (e : Expr) : MetaM Bool := do
  if e.isTrue || e.isFalse then return true
@@ -49,21 +45,20 @@ register_builtin_option grind.debug.proofs : Bool := {

/-- Context for `GrindM` monad. -/
structure Context where
  simp : Simp.Context
  simprocs : Array Simp.Simprocs
  simp : Simp.Context
  simprocs : Array Simp.Simprocs
  mainDeclName : Name
  config : Grind.Config

/-- Key for the congruence theorem cache. -/
structure CongrTheoremCacheKey where
  f : Expr
  numArgs : Nat

-- We manually define `BEq` because we want to use pointer equality.
-- We manually define `BEq` because we wannt to use pointer equality.
instance : BEq CongrTheoremCacheKey where
  beq a b := isSameExpr a.f b.f && a.numArgs == b.numArgs

-- We manually define `Hashable` because we want to use pointer equality.
-- We manually define `Hashable` because we wannt to use pointer equality.
instance : Hashable CongrTheoremCacheKey where
  hash a := mixHash (unsafe ptrAddrUnsafe a.f).toUInt64 (hash a.numArgs)

@@ -83,11 +78,6 @@ structure State where
  simpStats : Simp.Stats := {}
  trueExpr : Expr
  falseExpr : Expr
  /--
  Used to generate trace messages of the for `[grind] working on <tag>`,
  and implement the macro `trace_goal`.
  -/
  lastTag : Name := .anonymous

private opaque MethodsRefPointed : NonemptyType.{0}
private def MethodsRef : Type := MethodsRefPointed.type
@@ -95,10 +85,6 @@ instance : Nonempty MethodsRef := MethodsRefPointed.property

abbrev GrindM := ReaderT MethodsRef $ ReaderT Context $ StateRefT State MetaM

/-- Returns the user-defined configuration options -/
def getConfig : GrindM Grind.Config :=
  return (← readThe Context).config

/-- Returns the internalized `True` constant. -/
def getTrueExpr : GrindM Expr := do
  return (← get).trueExpr
@@ -113,10 +99,6 @@ def getMainDeclName : GrindM Name :=
@[inline] def getMethodsRef : GrindM MethodsRef :=
  read

/-- Returns maximum term generation that is considered during ematching. -/
def getMaxGeneration : GrindM Nat := do
  return (← getConfig).gen

/--
Abtracts nested proofs in `e`. This is a preprocessing step performed before internalization.
-/
@@ -128,12 +110,12 @@ def abstractNestedProofs (e : Expr) : GrindM Expr := do

/--
Applies hash-consing to `e`. Recall that all expressions in a `grind` goal have
been hash-consed. We perform this step before we internalize expressions.
been hash-consing. We perform this step before we internalize expressions.
-/
def shareCommon (e : Expr) : GrindM Expr := do
  modifyGet fun { canon, scState, nextThmIdx, congrThms, trueExpr, falseExpr, simpStats, lastTag } =>
  modifyGet fun { canon, scState, nextThmIdx, congrThms, trueExpr, falseExpr, simpStats } =>
    let (e, scState) := ShareCommon.State.shareCommon scState e
    (e, { canon, scState, nextThmIdx, congrThms, trueExpr, falseExpr, simpStats, lastTag })
    (e, { canon, scState, nextThmIdx, congrThms, trueExpr, falseExpr, simpStats })

/--
Canonicalizes nested types, type formers, and instances in `e`.
@@ -144,14 +126,6 @@ def canon (e : Expr) : GrindM Expr := do
  modify fun s => { s with canon := canonS }
  return e

/-- Returns `true` if `e` is the internalized `True` expression. -/
def isTrueExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getTrueExpr)

/-- Returns `true` if `e` is the internalized `False` expression. -/
def isFalseExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getFalseExpr)

/--
Creates a congruence theorem for a `f`-applications with `numArgs` arguments.
-/
@@ -178,12 +152,8 @@ structure ENode where
  next : Expr
  /-- Root (aka canonical representative) of the equivalence class -/
  root : Expr
  /--
  `congr` is the term `self` is congruent to.
  We say `self` is the congruence class root if `isSameExpr congr self`.
  This field is initialized to `self` even if `e` is not an application.
  -/
  congr : Expr
  /-- Root of the congruence class. This is field is a don't care if `e` is not an application. -/
  cgRoot : Expr
  /--
  When `e` was added to this equivalence class because of an equality `h : e = target`,
  then we store `target` here, and `h` at `proof?`.
@@ -198,7 +168,7 @@ structure ENode where
  interpreted : Bool := false
  /-- `ctor := true` if the head symbol is a constructor application. -/
  ctor : Bool := false
  /-- `hasLambdas := true` if the equivalence class contains lambda expressions. -/
  /-- `hasLambdas := true` if equivalence class contains lambda expressions. -/
  hasLambdas : Bool := false
  /--
  If `heqProofs := true`, then some proofs in the equivalence class are based
@@ -212,88 +182,57 @@ structure ENode where
  generation : Nat := 0
  /-- Modification time -/
  mt : Nat := 0
  -- TODO: see Lean 3 implementation
  deriving Inhabited, Repr

def ENode.isCongrRoot (n : ENode) :=
  isSameExpr n.self n.congr

/-- New equality to be processed. -/
structure NewEq where
  lhs : Expr
  rhs : Expr
  proof : Expr
  isHEq : Bool

/--
Key for the `ENodeMap` and `ParentMap` map.
We use pointer addresses and rely on the fact all internalized expressions
have been hash-consed, i.e., we have applied `shareCommon`.
-/
private structure ENodeKey where
  expr : Expr
abbrev ENodes := PHashMap USize ENode

instance : Hashable ENodeKey where
  hash k := unsafe (ptrAddrUnsafe k.expr).toUInt64

instance : BEq ENodeKey where
  beq k₁ k₂ := isSameExpr k₁.expr k₂.expr

abbrev ENodeMap := PHashMap ENodeKey ENode

/--
Key for the congruence table.
We need access to the `enodes` to be able to retrieve the equivalence class roots.
-/
structure CongrKey (enodes : ENodeMap) where
structure CongrKey (enodes : ENodes) where
  e : Expr

private def hashRoot (enodes : ENodeMap) (e : Expr) : UInt64 :=
  if let some node := enodes.find? { expr := e } then
    unsafe (ptrAddrUnsafe node.root).toUInt64
private abbrev toENodeKey (e : Expr) : USize :=
  unsafe ptrAddrUnsafe e

private def hashRoot (enodes : ENodes) (e : Expr) : UInt64 :=
  if let some node := enodes.find? (toENodeKey e) then
    toENodeKey node.root |>.toUInt64
  else
    13

def hasSameRoot (enodes : ENodeMap) (a b : Expr) : Bool := Id.run do
  if isSameExpr a b then
private def hasSameRoot (enodes : ENodes) (a b : Expr) : Bool := Id.run do
  let ka := toENodeKey a
  let kb := toENodeKey b
  if ka == kb then
    return true
  else
    let some n1 := enodes.find? { expr := a } | return false
    let some n2 := enodes.find? { expr := b } | return false
    isSameExpr n1.root n2.root
    let some n1 := enodes.find? ka | return false
    let some n2 := enodes.find? kb | return false
    toENodeKey n1.root == toENodeKey n2.root

def congrHash (enodes : ENodeMap) (e : Expr) : UInt64 :=
  match_expr e with
  | Grind.nestedProof p _ => hashRoot enodes p
  | Eq _ lhs rhs => goEq lhs rhs
  | _ => go e 17
def congrHash (enodes : ENodes) (e : Expr) : UInt64 :=
  if e.isAppOfArity ``Lean.Grind.nestedProof 2 then
    -- We only hash the proposition
    hashRoot enodes (e.getArg! 0)
  else
    go e 17
where
  goEq (lhs rhs : Expr) : UInt64 :=
    let h₁ := hashRoot enodes lhs
    let h₂ := hashRoot enodes rhs
    if h₁ > h₂ then mixHash h₂ h₁ else mixHash h₁ h₂
  go (e : Expr) (r : UInt64) : UInt64 :=
    match e with
    | .app f a => go f (mixHash r (hashRoot enodes a))
    | _ => mixHash r (hashRoot enodes e)

/-- Returns `true` if `a` and `b` are congruent modulo the equivalence classes in `enodes`. -/
partial def isCongruent (enodes : ENodeMap) (a b : Expr) : Bool :=
  match_expr a with
  | Grind.nestedProof p₁ _ =>
    let_expr Grind.nestedProof p₂ _ := b | false
    hasSameRoot enodes p₁ p₂
  | Eq α₁ lhs₁ rhs₁ =>
    let_expr Eq α₂ lhs₂ rhs₂ := b | false
    if isSameExpr α₁ α₂ then
      goEq lhs₁ rhs₁ lhs₂ rhs₂
    else
      go a b
  | _ => go a b
partial def isCongruent (enodes : ENodes) (a b : Expr) : Bool :=
  if a.isAppOfArity ``Lean.Grind.nestedProof 2 && b.isAppOfArity ``Lean.Grind.nestedProof 2 then
    hasSameRoot enodes (a.getArg! 0) (b.getArg! 0)
  else
    go a b
where
  goEq (lhs₁ rhs₁ lhs₂ rhs₂ : Expr) : Bool :=
    (hasSameRoot enodes lhs₁ lhs₂ && hasSameRoot enodes rhs₁ rhs₂)
    ||
    (hasSameRoot enodes lhs₁ rhs₂ && hasSameRoot enodes rhs₁ lhs₂)
  go (a b : Expr) : Bool :=
    if a.isApp && b.isApp then
      hasSameRoot enodes a.appArg! b.appArg! && go a.appFn! b.appFn!
@@ -308,58 +247,17 @@ instance : Hashable (CongrKey enodes) where
instance : BEq (CongrKey enodes) where
  beq k1 k2 := isCongruent enodes k1.e k2.e

abbrev CongrTable (enodes : ENodeMap) := PHashSet (CongrKey enodes)
abbrev CongrTable (enodes : ENodes) := PHashSet (CongrKey enodes)

-- Remark: we cannot use pointer addresses here because we have to traverse the tree.
abbrev ParentSet := RBTree Expr Expr.quickComp
abbrev ParentMap := PHashMap ENodeKey ParentSet

/--
The E-matching module instantiates theorems using the `EMatchTheorem proof` and a (partial) assignment.
We want to avoid instantiating the same theorem with the same assignment more than once.
Therefore, we store the (pre-)instance information in set.
Recall that the proofs of activated theorems have been hash-consed.
The assignment contains internalized expressions, which have also been hash-consed.
-/
structure PreInstance where
  proof : Expr
  assignment : Array Expr

instance : Hashable PreInstance where
  hash i := Id.run do
    let mut r := unsafe (ptrAddrUnsafe i.proof >>> 3).toUInt64
    for v in i.assignment do
      r := mixHash r (unsafe (ptrAddrUnsafe v >>> 3).toUInt64)
    return r

instance : BEq PreInstance where
  beq i₁ i₂ := Id.run do
    unless isSameExpr i₁.proof i₂.proof do return false
    unless i₁.assignment.size == i₂.assignment.size do return false
    for v₁ in i₁.assignment, v₂ in i₂.assignment do
      unless isSameExpr v₁ v₂ do return false
    return true

abbrev PreInstanceSet := PHashSet PreInstance

/-- New fact to be processed. -/
structure NewFact where
  proof : Expr
  prop : Expr
  generation : Nat
  deriving Inhabited
abbrev ParentMap := PHashMap USize ParentSet

structure Goal where
  mvarId : MVarId
  enodes : ENodeMap := {}
  enodes : ENodes := {}
  parents : ParentMap := {}
  congrTable : CongrTable enodes := {}
  /--
  A mapping from each function application index (`HeadIndex`) to a list of applications with that index.
  Recall that the `HeadIndex` for a constant is its constant name, and for a free variable,
  it is its unique id.
  -/
  appMap : PHashMap HeadIndex (List Expr) := {}
  /-- Equations to be processed. -/
  newEqs : Array NewEq := #[]
  /-- `inconsistent := true` if `ENode`s for `True` and `False` are in the same equivalence class. -/
@@ -368,33 +266,6 @@ structure Goal where
  gmt : Nat := 0
  /-- Next unique index for creating ENodes -/
  nextIdx : Nat := 0
  /-- Active theorems that we have performed ematching at least once. -/
  thms : PArray EMatchTheorem := {}
  /-- Active theorems that we have not performed any round of ematching yet. -/
  newThms : PArray EMatchTheorem := {}
  /--
  Inactive global theorems. As we internalize terms, we activate theorems as we find their symbols.
  Local theorem provided by users are added directly into `newThms`.
  -/
  thmMap : EMatchTheorems
  /-- Number of theorem instances generated so far -/
  numInstances : Nat := 0
  /-- Number of E-matching rounds performed in this goal since the last case-split. -/
  numEmatch : Nat := 0
  /-- (pre-)instances found so far. It includes instances that failed to be instantiated. -/
  preInstances : PreInstanceSet := {}
  /-- new facts to be processed. -/
  newFacts : Std.Queue NewFact := ∅
  /-- `match` auxiliary functions whose equations have already been created and activated. -/
  matchEqNames : PHashSet Name := {}
  /-- Case-split candidates. -/
  splitCandidates : List Expr := []
  /-- Number of splits performed to get to this goal. -/
  numSplits : Nat := 0
  /-- Case-splits that do not have to be performed anymore. -/
  resolvedSplits : PHashSet ENodeKey := {}
  /-- Next local E-match theorem idx. -/
  nextThmIdx : Nat := 0
  deriving Inhabited

def Goal.admit (goal : Goal) : MetaM Unit :=
@@ -402,73 +273,26 @@ def Goal.admit (goal : Goal) : MetaM Unit :=

abbrev GoalM := StateRefT Goal GrindM

@[inline] def GoalM.run (goal : Goal) (x : GoalM α) : GrindM (α × Goal) :=
  goal.mvarId.withContext do StateRefT'.run x goal
abbrev Propagator := Expr → GoalM Unit

@[inline] def GoalM.run' (goal : Goal) (x : GoalM Unit) : GrindM Goal :=
  goal.mvarId.withContext do StateRefT'.run' (x *> get) goal
/-- Returns `true` if `e` is the internalized `True` expression. -/
def isTrueExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getTrueExpr)

def updateLastTag : GoalM Unit := do
  if (← isTracingEnabledFor `grind) then
    let currTag ← (← get).mvarId.getTag
    if currTag != (← getThe Grind.State).lastTag then
      trace[grind] "working on goal `{currTag}`"
      modifyThe Grind.State fun s => { s with lastTag := currTag }

/--
Macro similar to `trace[...]`, but it includes the trace message `trace[grind] "working on <current goal>"`
if the tag has changed since the last trace message.
-/
macro "trace_goal[" id:ident "]" s:(interpolatedStr(term) <|> term) : doElem => do
  let msg ← if s.raw.getKind == interpolatedStrKind then `(m! $(⟨s⟩)) else `(($(⟨s⟩) : MessageData))
  `(doElem| do
    let cls := $(quote id.getId.eraseMacroScopes)
    if (← Lean.isTracingEnabledFor cls) then
      updateLastTag
      Lean.addTrace cls $msg)

/--
A helper function used to mark a theorem instance found by the E-matching module.
It returns `true` if it is a new instance and `false` otherwise.
-/
def markTheoremInstance (proof : Expr) (assignment : Array Expr) : GoalM Bool := do
  let k := { proof, assignment }
  if (← get).preInstances.contains k then
    return false
  modify fun s => { s with preInstances := s.preInstances.insert k }
  return true

/-- Adds a new fact `prop` with proof `proof` to the queue for processing. -/
def addNewFact (proof : Expr) (prop : Expr) (generation : Nat) : GoalM Unit := do
  modify fun s => { s with newFacts := s.newFacts.enqueue { proof, prop, generation } }

/-- Adds a new theorem instance produced using E-matching. -/
def addTheoremInstance (proof : Expr) (prop : Expr) (generation : Nat) : GoalM Unit := do
  addNewFact proof prop generation
  modify fun s => { s with numInstances := s.numInstances + 1 }

/-- Returns `true` if the maximum number of instances has been reached. -/
def checkMaxInstancesExceeded : GoalM Bool := do
  return (← get).numInstances >= (← getConfig).instances

/-- Returns `true` if the maximum number of case-splits has been reached. -/
def checkMaxCaseSplit : GoalM Bool := do
  return (← get).numSplits >= (← getConfig).splits

/-- Returns `true` if the maximum number of E-matching rounds has been reached. -/
def checkMaxEmatchExceeded : GoalM Bool := do
  return (← get).numEmatch >= (← getConfig).ematch
/-- Returns `true` if `e` is the internalized `False` expression. -/
def isFalseExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getFalseExpr)

/--
Returns `some n` if `e` has already been "internalized" into the
Otherwise, returns `none`s.
-/
def getENode? (e : Expr) : GoalM (Option ENode) :=
  return (← get).enodes.find? { expr := e }
  return (← get).enodes.find? (unsafe ptrAddrUnsafe e)

/-- Returns node associated with `e`. It assumes `e` has already been internalized. -/
def getENode (e : Expr) : GoalM ENode := do
  let some n := (← get).enodes.find? { expr := e }
  let some n := (← get).enodes.find? (unsafe ptrAddrUnsafe e)
    | throwError "internal `grind` error, term has not been internalized{indentExpr e}"
  return n

@@ -519,7 +343,7 @@ def getNext (e : Expr) : GoalM Expr :=

/-- Returns `true` if `e` has already been internalized. -/
def alreadyInternalized (e : Expr) : GoalM Bool :=
  return (← get).enodes.contains { expr := e }
  return (← get).enodes.contains (unsafe ptrAddrUnsafe e)

def getTarget? (e : Expr) : GoalM (Option Expr) := do
  let some n ← getENode? e | return none
@@ -564,8 +388,9 @@ information in the root (aka canonical representative) of `child`.
-/
def registerParent (parent : Expr) (child : Expr) : GoalM Unit := do
  let some childRoot ← getRoot? child | return ()
  let parents := if let some parents := (← get).parents.find? { expr := childRoot } then parents else {}
  modify fun s => { s with parents := s.parents.insert { expr := childRoot } (parents.insert parent) }
  let key := toENodeKey childRoot
  let parents := if let some parents := (← get).parents.find? key then parents else {}
  modify fun s => { s with parents := s.parents.insert key (parents.insert parent) }

/--
Returns the set of expressions `e` is a child of, or an expression in
@@ -573,7 +398,7 @@ Returns the set of expressions `e` is a child of, or an expression in
The information is only up to date if `e` is the root (aka canonical representative) of the equivalence class.
-/
def getParents (e : Expr) : GoalM ParentSet := do
  let some parents := (← get).parents.find? { expr := e } | return {}
  let some parents := (← get).parents.find? (toENodeKey e) | return {}
  return parents

/--
@@ -581,7 +406,7 @@ Similar to `getParents`, but also removes the entry `e ↦ parents` from the par
-/
def getParentsAndReset (e : Expr) : GoalM ParentSet := do
  let parents ← getParents e
  modify fun s => { s with parents := s.parents.erase { expr := e } }
  modify fun s => { s with parents := s.parents.erase (toENodeKey e) }
  return parents

/--
@@ -589,20 +414,21 @@ Copy `parents` to the parents of `root`.
`root` must be the root of its equivalence class.
-/
def copyParentsTo (parents : ParentSet) (root : Expr) : GoalM Unit := do
  let mut curr := if let some parents := (← get).parents.find? { expr := root } then parents else {}
  let key := toENodeKey root
  let mut curr := if let some parents := (← get).parents.find? key then parents else {}
  for parent in parents do
    curr := curr.insert parent
  modify fun s => { s with parents := s.parents.insert { expr := root } curr }
  modify fun s => { s with parents := s.parents.insert key curr }

def setENode (e : Expr) (n : ENode) : GoalM Unit :=
  modify fun s => { s with
    enodes := s.enodes.insert { expr := e } n
    enodes := s.enodes.insert (unsafe ptrAddrUnsafe e) n
    congrTable := unsafe unsafeCast s.congrTable
  }

def mkENodeCore (e : Expr) (interpreted ctor : Bool) (generation : Nat) : GoalM Unit := do
  setENode e {
    self := e, next := e, root := e, congr := e, size := 1
    self := e, next := e, root := e, cgRoot := e, size := 1
    flipped := false
    heqProofs := false
    hasLambdas := e.isLambda
@@ -622,46 +448,18 @@ def mkENode (e : Expr) (generation : Nat) : GoalM Unit := do
  let interpreted ← isInterpreted e
  mkENodeCore e interpreted ctor generation

/-- Returns `true` is `e` is the root of its congruence class. -/
def isCongrRoot (e : Expr) : GoalM Bool := do
  return (← getENode e).isCongrRoot

/-- Returns the root of the congruence class containing `e`. -/
partial def getCongrRoot (e : Expr) : GoalM Expr := do
  let n ← getENode e
  if isSameExpr n.congr e then return e
  getCongrRoot n.congr

/-- Return `true` if the goal is inconsistent. -/
def isInconsistent : GoalM Bool :=
  return (← get).inconsistent

/--
Returns a proof that `a = b`.
It assumes `a` and `b` are in the same equivalence class, and have the same type.
Returns a proof that `a = b` (or `HEq a b`).
It assumes `a` and `b` are in the same equivalence class.
-/
-- Forward definition
@[extern "lean_grind_mk_eq_proof"]
opaque mkEqProof (a b : Expr) : GoalM Expr

/--
Returns a proof that `HEq a b`.
It assumes `a` and `b` are in the same equivalence class.
-/
-- Forward definition
@[extern "lean_grind_mk_heq_proof"]
opaque mkHEqProof (a b : Expr) : GoalM Expr

/--
Returns a proof that `a = b` if they have the same type. Otherwise, returns a proof of `HEq a b`.
It assumes `a` and `b` are in the same equivalence class.
-/
def mkEqHEqProof (a b : Expr) : GoalM Expr := do
  if (← hasSameType a b) then
    mkEqProof a b
  else
    mkHEqProof a b

/--
Returns a proof that `a = True`.
It assumes `a` and `True` are in the same equivalence class.
@@ -678,9 +476,7 @@ def mkEqFalseProof (a : Expr) : GoalM Expr := do

/-- Marks current goal as inconsistent without assigning `mvarId`. -/
def markAsInconsistent : GoalM Unit := do
  unless (← get).inconsistent do
    trace[grind] "closed `{← (← get).mvarId.getTag}`"
    modify fun s => { s with inconsistent := true }
  modify fun s => { s with inconsistent := true }

/--
Closes the current goal using the given proof of `False` and
@@ -720,13 +516,9 @@ def forEachEqc (f : ENode → GoalM Unit) : GoalM Unit := do
    if isSameExpr n.self n.root then
      f n

abbrev Propagator := Expr → GoalM Unit
abbrev Fallback := GoalM Unit

structure Methods where
  propagateUp : Propagator := fun _ => return ()
  propagateDown : Propagator := fun _ => return ()
  fallback : Fallback := pure ()
  deriving Inhabited

def Methods.toMethodsRef (m : Methods) : MethodsRef :=
@@ -744,10 +536,6 @@ def propagateUp (e : Expr) : GoalM Unit := do
def propagateDown (e : Expr) : GoalM Unit := do
  (← getMethods).propagateDown e

def applyFallback : GoalM Unit := do
  let fallback : GoalM Unit := (← getMethods).fallback
  fallback

/-- Returns expressions in the given expression equivalence class. -/
partial def getEqc (e : Expr) : GoalM (List Expr) :=
  go e e []
@@ -769,17 +557,4 @@ partial def getEqcs : GoalM (List (List Expr)) := do
    r := (← getEqc node.self) :: r
  return r

/-- Returns `true` if `e` is a case-split that does not need to be performed anymore. -/
def isResolvedCaseSplit (e : Expr) : GoalM Bool :=
  return (← get).resolvedSplits.contains { expr := e }

/--
Mark `e` as a case-split that does not need to be performed anymore.
Remark: we currently use this feature to disable `match`-case-splits
-/
def markCaseSplitAsResolved (e : Expr) : GoalM Unit := do
  unless (← isResolvedCaseSplit e) do
    trace_goal[grind.split.resolved] "{e}"
    modify fun s => { s with resolvedSplits := s.resolvedSplits.insert { expr := e } }

end Lean.Meta.Grind

@@ -100,14 +100,12 @@ def _root_.Lean.MVarId.clearAuxDecls (mvarId : MVarId) : MetaM MVarId := mvarId.
/--
In the `grind` tactic, during `Expr` internalization, we don't expect to find `Expr.mdata`.
This function ensures `Expr.mdata` is not found during internalization.
Recall that we do not internalize `Expr.lam` children.
Recall that we still have to process `Expr.forallE` because of `ForallProp.lean`.
Moreover, we may not want to reduce `p → q` to `¬p ∨ q` when `(p q : Prop)`.
Recall that we do not internalize `Expr.forallE` and `Expr.lam` components.
-/
def eraseIrrelevantMData (e : Expr) : CoreM Expr := do
  let pre (e : Expr) := do
    match e with
    | .letE .. | .lam .. => return .done e
    | .letE .. | .lam .. | .forallE .. => return .done e
    | .mdata _ e => return .continue e
    | _ => return .continue e
  Core.transform e (pre := pre)
@@ -118,14 +116,11 @@ Converts nested `Expr.proj`s into projection applications if possible.
def foldProjs (e : Expr) : MetaM Expr := do
  let post (e : Expr) := do
    let .proj structName idx s := e | return .done e
    let some info := getStructureInfo? (← getEnv) structName |
      trace[grind.issues] "found `Expr.proj` but `{structName}` is not marked as structure{indentExpr e}"
      return .done e
    let some info := getStructureInfo? (← getEnv) structName | return .done e
    if h : idx < info.fieldNames.size then
      let fieldName := info.fieldNames[idx]
      return .done (← mkProjection s fieldName)
    else
      trace[grind.issues] "found `Expr.proj` with invalid field index `{idx}`{indentExpr e}"
      return .done e
  Meta.transform e (post := post)

@@ -140,11 +135,4 @@ def normalizeLevels (e : Expr) : CoreM Expr := do
    | _ => return .continue
  Core.transform e (pre := pre)

/--
Normalizes the given expression using the `grind` simplification theorems and simprocs.
This function is used for normalzing E-matching patterns. Note that it does not return a proof.
-/
@[extern "lean_grind_normalize"] -- forward definition
opaque normalize (e : Expr) : MetaM Expr

end Lean.Meta.Grind

@@ -123,15 +123,6 @@ unsafe def mkDelabAttribute : IO (KeyedDeclsAttribute Delab) :=
  } `Lean.PrettyPrinter.Delaborator.delabAttribute
@[builtin_init mkDelabAttribute] opaque delabAttribute : KeyedDeclsAttribute Delab

/--
`@[app_delab c]` registers a delaborator for applications with head constant `c`.
Such delaborators also apply to the constant `c` itself (known as a "nullary application").

This attribute should be applied to definitions of type `Lean.PrettyPrinter.Delaborator.Delab`.

When defining delaborators for constant applications, one should prefer this attribute over `@[delab app.c]`,
as `@[app_delab c]` first performs name resolution on `c` in the current scope.
-/
macro "app_delab" id:ident : attr => do
  match ← Macro.resolveGlobalName id.getId with
  | [] => Macro.throwErrorAt id s!"unknown declaration '{id.getId}'"

@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sebastian Ullrich, Leonardo de Moura, Gabriel Ebner, Mario Carneiro
-/
prelude
import Lean.Parser
import Lean.PrettyPrinter.Delaborator.Attributes
import Lean.PrettyPrinter.Delaborator.Basic
import Lean.PrettyPrinter.Delaborator.SubExpr

@@ -5,7 +5,6 @@ Authors: Leonardo de Moura, Marc Huisinga
-/
prelude
import Lean.Server.Completion.CompletionCollectors
import Std.Data.HashMap

namespace Lean.Server.Completion
open Lsp

@@ -14,15 +14,10 @@ namespace Lean
|
||||
/--
|
||||
We use the `ToExpr` type class to convert values of type `α` into
|
||||
expressions that denote these values in Lean.
|
||||
|
||||
Examples:
|
||||
Example:
|
||||
```
|
||||
toExpr true = .const ``Bool.true []
|
||||
|
||||
toTypeExpr Bool = .const ``Bool []
|
||||
```
|
||||
|
||||
See also `ToLevel` for representing universe levels as `Level` expressions.
|
||||
-/
|
||||
class ToExpr (α : Type u) where
|
||||
/-- Convert a value `a : α` into an expression that denotes `a` -/
|
||||
|
||||
@@ -5,8 +5,8 @@ Authors: Leonardo de Moura
|
||||
-/
|
||||
prelude
|
||||
import Init.ShareCommon
|
||||
import Std.Data.HashSet.Basic
|
||||
import Std.Data.HashMap.Basic
|
||||
import Std.Data.HashSet
|
||||
import Std.Data.HashMap
|
||||
import Lean.Data.PersistentHashMap
|
||||
import Lean.Data.PersistentHashSet
|
||||
|
||||
|
||||
@@ -293,16 +293,13 @@ def registerTraceClass (traceClassName : Name) (inherited := false) (ref : Name
  if inherited then
    inheritedTraceOptions.modify (·.insert optionName)

def expandTraceMacro (id : Syntax) (s : Syntax) : MacroM (TSyntax `doElem) := do
  let msg ← if s.getKind == interpolatedStrKind then `(m! $(⟨s⟩)) else `(($(⟨s⟩) : MessageData))
macro "trace[" id:ident "]" s:(interpolatedStr(term) <|> term) : doElem => do
  let msg ← if s.raw.getKind == interpolatedStrKind then `(m! $(⟨s⟩)) else `(($(⟨s⟩) : MessageData))
  `(doElem| do
    let cls := $(quote id.getId.eraseMacroScopes)
    if (← Lean.isTracingEnabledFor cls) then
      Lean.addTrace cls $msg)

macro "trace[" id:ident "]" s:(interpolatedStr(term) <|> term) : doElem => do
  expandTraceMacro id s.raw

def bombEmoji := "💥️"
def checkEmoji := "✅️"
def crossEmoji := "❌️"

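The refactor above moves the body of the `trace[...]` macro into `expandTraceMacro`, leaving use sites unchanged. A hypothetical use site (assuming a monad with tracing support, such as `MetaM`, and a registered trace class) might look like:

```lean
import Lean

open Lean Meta

-- Hypothetical use site: the macro expands to a guarded `Lean.addTrace`
-- call, as in the `(doElem| do ...)` quotation in the diff above.
def inspect (e : Expr) : MetaM Unit := do
  trace[Meta.debug] "inspecting {e}"
```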
@@ -10,4 +10,3 @@ import Std.Sync
import Std.Time
import Std.Tactic
import Std.Internal
import Std.Net

@@ -1,7 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Std.Net.Addr
@@ -1,197 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Init.Data.Vector.Basic

/-!
This module contains Lean representations of IP and socket addresses:
- `IPv4Addr`: Representing IPv4 addresses.
- `SocketAddressV4`: Representing a pair of IPv4 address and port.
- `IPv6Addr`: Representing IPv6 addresses.
- `SocketAddressV6`: Representing a pair of IPv6 address and port.
- `IPAddr`: Can either be an `IPv4Addr` or an `IPv6Addr`.
- `SocketAddress`: Can either be a `SocketAddressV4` or `SocketAddressV6`.
-/

namespace Std
namespace Net

/--
Representation of an IPv4 address.
-/
structure IPv4Addr where
  /--
  This structure represents the address: `octets[0].octets[1].octets[2].octets[3]`.
  -/
  octets : Vector UInt8 4
  deriving Inhabited, DecidableEq

/--
A pair of an `IPv4Addr` and a port.
-/
structure SocketAddressV4 where
  addr : IPv4Addr
  port : UInt16
  deriving Inhabited, DecidableEq

/--
Representation of an IPv6 address.
-/
structure IPv6Addr where
  /--
  This structure represents the address: `segments[0]:segments[1]:...`.
  -/
  segments : Vector UInt16 8
  deriving Inhabited, DecidableEq

/--
A pair of an `IPv6Addr` and a port.
-/
structure SocketAddressV6 where
  addr : IPv6Addr
  port : UInt16
  deriving Inhabited, DecidableEq

/--
An IP address, either IPv4 or IPv6.
-/
inductive IPAddr where
  | v4 (addr : IPv4Addr)
  | v6 (addr : IPv6Addr)
  deriving Inhabited, DecidableEq

/--
Either a `SocketAddressV4` or `SocketAddressV6`.
-/
inductive SocketAddress where
  | v4 (addr : SocketAddressV4)
  | v6 (addr : SocketAddressV6)
  deriving Inhabited, DecidableEq

/--
The kinds of address families supported by Lean, currently only IP variants.
-/
inductive AddressFamily where
  | ipv4
  | ipv6
  deriving Inhabited, DecidableEq

namespace IPv4Addr

/--
Build the IPv4 address `a.b.c.d`.
-/
def ofParts (a b c d : UInt8) : IPv4Addr :=
  { octets := #v[a, b, c, d] }

/--
Try to parse `s` as an IPv4 address, returning `none` on failure.
-/
@[extern "lean_uv_pton_v4"]
opaque ofString (s : @&String) : Option IPv4Addr

/--
Turn `addr` into a `String` in the usual IPv4 format.
-/
@[extern "lean_uv_ntop_v4"]
opaque toString (addr : @&IPv4Addr) : String

instance : ToString IPv4Addr where
  toString := toString

instance : Coe IPv4Addr IPAddr where
  coe addr := .v4 addr

end IPv4Addr

namespace SocketAddressV4

instance : Coe SocketAddressV4 SocketAddress where
  coe addr := .v4 addr

end SocketAddressV4

namespace IPv6Addr

/--
Build the IPv6 address `a:b:c:d:e:f:g:h`.
-/
def ofParts (a b c d e f g h : UInt16) : IPv6Addr :=
  { segments := #v[a, b, c, d, e, f, g, h] }

/--
Try to parse `s` as an IPv6 address according to
[RFC 2373](https://datatracker.ietf.org/doc/html/rfc2373), returning `none` on failure.
-/
@[extern "lean_uv_pton_v6"]
opaque ofString (s : @&String) : Option IPv6Addr

/--
Turn `addr` into a `String` in the IPv6 format described in
[RFC 2373](https://datatracker.ietf.org/doc/html/rfc2373).
-/
@[extern "lean_uv_ntop_v6"]
opaque toString (addr : @&IPv6Addr) : String

instance : ToString IPv6Addr where
  toString := toString

instance : Coe IPv6Addr IPAddr where
  coe addr := .v6 addr

end IPv6Addr

namespace SocketAddressV6

instance : Coe SocketAddressV6 SocketAddress where
  coe addr := .v6 addr

end SocketAddressV6

namespace IPAddr

/--
Obtain the `AddressFamily` associated with an `IPAddr`.
-/
def family : IPAddr → AddressFamily
  | .v4 .. => .ipv4
  | .v6 .. => .ipv6

def toString : IPAddr → String
  | .v4 addr => addr.toString
  | .v6 addr => addr.toString

instance : ToString IPAddr where
  toString := toString

end IPAddr

namespace SocketAddress

/--
Obtain the `AddressFamily` associated with a `SocketAddress`.
-/
def family : SocketAddress → AddressFamily
  | .v4 .. => .ipv4
  | .v6 .. => .ipv6

/--
Obtain the `IPAddr` contained in a `SocketAddress`.
-/
def ipAddr : SocketAddress → IPAddr
  | .v4 sa => .v4 sa.addr
  | .v6 sa => .v6 sa.addr

/--
Obtain the port contained in a `SocketAddress`.
-/
def port : SocketAddress → UInt16
  | .v4 sa | .v6 sa => sa.port

end SocketAddress

end Net
end Std

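The file above is deleted by this diff; before this commit, its address API could be exercised as in the following sketch (hypothetical example values, assuming the definitions shown in the removed file):

```lean
import Std.Net.Addr

open Std.Net

-- 127.0.0.1:8080, built with the constructors from the removed module.
def localhost : IPv4Addr := IPv4Addr.ofParts 127 0 0 1

def addr : SocketAddress := .v4 { addr := localhost, port := 8080 }

-- Both facts follow by unfolding `family` and `port` as defined above.
example : addr.family = .ipv4 := rfl
example : addr.port = 8080 := rfl
```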
@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Std.Data.HashMap
import Std.Data.HashSet

namespace Std

@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Josh Clune
-/
prelude
import Init.Data.Array
import Std.Tactic.BVDecide.LRAT.Internal.Formula.Class
import Std.Tactic.BVDecide.LRAT.Internal.Assignment
import Std.Sat.CNF.Basic

@@ -156,7 +156,7 @@ theorem or_congr (lhs rhs lhs' rhs' : Bool) (h1 : lhs' = lhs) (h2 : rhs' = rhs)

theorem cond_congr (discr lhs rhs discr' lhs' rhs' : Bool) (h1 : discr' = discr) (h2 : lhs' = lhs)
    (h3 : rhs' = rhs) :
    (bif discr' then lhs' else rhs') = (bif discr then lhs else rhs) := by
    (bif discr' = true then lhs' else rhs') = (bif discr = true then lhs else rhs) := by
  simp[*]

theorem false_of_eq_true_of_eq_false (h₁ : x = true) (h₂ : x = false) : False := by

@@ -39,12 +39,6 @@ instance {x y : Ordinal} : Decidable (x < y) :=
def Offset : Type := Int
  deriving Repr, BEq, Inhabited, Add, Sub, Mul, Div, Neg, ToString, LT, LE, DecidableEq

instance {x y : Offset} : Decidable (x ≤ y) :=
  Int.decLe x y

instance {x y : Offset} : Decidable (x < y) :=
  Int.decLt x y

instance : OfNat Offset n :=
  ⟨Int.ofNat n⟩

Some files were not shown because too many files have changed in this diff.