mirror of https://github.com/leanprover/lean4.git
synced 2026-03-20 20:04:23 +00:00

Compare commits (1 commit): array_alig ... grind_patt
Commit: 32adce928c

RELEASES.md (1189 lines changed; file diff suppressed because it is too large)
@@ -5,6 +5,11 @@ See below for the checklist for release candidates.

We'll use `v4.6.0` as the intended release version as a running example.

- One week before the planned release, ensure that
  (1) someone has written the release notes and
  (2) someone has written the first draft of the release blog post.
  If there is any material in `./releases_drafts/` on the `releases/v4.6.0` branch, then the release notes are not done.
  (See the section "Writing the release notes".)
- `git checkout releases/v4.6.0`
  (This branch should already exist, from the release candidates.)
- `git pull`
@@ -76,16 +81,12 @@ We'll use `v4.6.0` as the intended release version as a running example.

  - Toolchain bump PR including updated Lake manifest
  - Create and push the tag
  - There is no `stable` branch; skip this step
- [plausible](https://github.com/leanprover-community/plausible)
  - Toolchain bump PR including updated Lake manifest
  - Create and push the tag
  - There is no `stable` branch; skip this step
- [Mathlib](https://github.com/leanprover-community/mathlib4)
  - Dependencies: `Aesop`, `ProofWidgets4`, `lean4checker`, `Batteries`, `doc-gen4`, `import-graph`
  - Toolchain bump PR notes:
    - In addition to updating the `lean-toolchain` and `lakefile.lean`,
      in `.github/workflows/lean4checker.yml` update the line
      `git checkout v4.6.0` to the appropriate tag.
    - Push the PR branch to the main Mathlib repository rather than a fork, or CI may not work reliably
  - Create and push the tag
  - Create a new branch from the tag, push it, and open a pull request against `stable`.
@@ -97,7 +98,6 @@ We'll use `v4.6.0` as the intended release version as a running example.

  - Toolchain bump PR including updated Lake manifest
  - Create and push the tag
  - Merge the tag into `stable`
- Run `scripts/release_checklist.py v4.6.0` to check that everything is in order.
- The `v4.6.0` section of `RELEASES.md` is out of sync between
  `releases/v4.6.0` and `master`. This should be reconciled:
  - Replace the `v4.6.0` section on `master` with the `v4.6.0` section on `releases/v4.6.0`
@@ -139,13 +139,16 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.

  git checkout -b releases/v4.7.0
  ```
- In `RELEASES.md` replace `Development in progress` in the `v4.7.0` section with `Release notes to be written.`
- We will rely on automatically generated release notes for release candidates,
  and the written release notes will be used for stable versions only.
- It is essential to choose the nightly that will become the release candidate as early as possible, to avoid confusion.
- In `src/CMakeLists.txt`,
  - verify that you see `set(LEAN_VERSION_MINOR 7)` (for whichever `7` is appropriate); this should already have been updated when the development cycle began.
  - `set(LEAN_VERSION_IS_RELEASE 1)` (this should be a change; on `master` and nightly releases it is always `0`).
- Commit your changes to `src/CMakeLists.txt`, and push.
- `git tag v4.7.0-rc1`
- `git push origin v4.7.0-rc1`
- Ping the FRO Zulip that release notes need to be written. The release notes do not block completing the rest of this checklist.
- Now wait, while CI runs.
  - You can monitor this at `https://github.com/leanprover/lean4/actions/workflows/ci.yml`, looking for the `v4.7.0-rc1` tag.
  - This step can take up to an hour.
@@ -245,12 +248,15 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.

# Writing the release notes

Release notes are automatically generated from the commit history, using `script/release_notes.py`.
We are currently trying a system where release notes are compiled all at once by someone looking through the commit history.
The exact steps are a work in progress.
Here is the general idea:

Run this as `script/release_notes.py v4.6.0`, where `v4.6.0` is the *previous* release version. This will generate output
for all commits since that tag. Note that there is output on both stderr, which should be manually reviewed,
and on stdout, which should be manually copied to `RELEASES.md`.
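The stdout/stderr split is the part that is easy to get wrong in practice. As a minimal sketch (using a stand-in command, since the real `script/release_notes.py` needs a checkout of the repository), the two streams can be captured separately like this:

```python
import subprocess
import sys

# Stand-in for `script/release_notes.py v4.6.0`: the real script writes the
# generated notes to stdout and warnings for manual review to stderr.
proc = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('## Language'); print('review me', file=sys.stderr)"],
    capture_output=True, text=True)
notes = proc.stdout      # to be pasted into RELEASES.md
warnings = proc.stderr   # to be reviewed by hand
print(notes.strip())
```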

* The work is done right on the `releases/v4.6.0` branch sometime after it is created but before the stable release is made.
  The release notes for `v4.6.0` will later be copied to `master` when we begin a new development cycle.
* There can be material for release notes entries in commit messages.
* There can also be pre-written entries in `./releases_drafts`, which should all be incorporated in the release notes and then deleted from the branch.
  See `./releases_drafts/README.md` for more information.
* The release notes should be written from a downstream expert user's point of view.

This section will be updated when the next release notes are written (for `v4.10.0`).
releases_drafts/list_lex.md (new file, 16 lines)
@@ -0,0 +1,16 @@
We replace the inductive predicate `List.lt` with an upstreamed version of `List.Lex` from Mathlib.
(Previously `Lex.lt` was defined in terms of `<`; now it is generalized to take an arbitrary relation.)
This subtly changes the notion of ordering on `List α`.

`List.lt` was a weaker relation: in particular if `l₁ < l₂`, then
`a :: l₁ < b :: l₂` may hold according to `List.lt` even if `a` and `b` are merely incomparable
(i.e. neither `a < b` nor `b < a`), whereas according to `List.Lex` this would require `a = b`.

When `<` is total, in the sense that `¬ · < ·` is antisymmetric, the two relations coincide.

Mathlib was already overriding the order instances for `List α`,
so this change should not be noticed by anyone already using Mathlib.

We simultaneously add the boolean valued `List.lex` function, parameterised by a `BEq` typeclass
and an arbitrary `lt` function. This will support the flexibility previously provided for `List.lt`,
via a `==` function which is weaker than strict equality.
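For orientation, here is a sketch (not the upstreamed source itself) of the generalized inductive relation described above; the `cons` case is exactly where it differs from the old `List.lt`:

```lean
-- Sketch of a `Lex` relation generalized over an arbitrary relation `r`.
-- `nil`: the empty list is below any nonempty list.
-- `rel`: related heads decide the comparison immediately.
-- `cons`: recursion is only allowed when the heads are *equal*,
--         unlike the old `List.lt`, which also recursed on incomparable heads.
inductive Lex (r : α → α → Prop) : List α → List α → Prop
  | nil {a : α} {l : List α} : Lex r [] (a :: l)
  | rel {a b : α} {l₁ l₂ : List α} (h : r a b) : Lex r (a :: l₁) (b :: l₂)
  | cons {a : α} {l₁ l₂ : List α} (h : Lex r l₁ l₂) : Lex r (a :: l₁) (a :: l₂)
```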
@@ -43,7 +43,7 @@ echo -n " -DCMAKE_C_COMPILER=$PWD/stage1/bin/clang.exe -DCMAKE_C_COMPILER_WORKS=
echo -n " -DSTAGE0_CMAKE_C_COMPILER=clang -DSTAGE0_CMAKE_CXX_COMPILER=clang++"
echo -n " -DLEAN_EXTRA_CXX_FLAGS='--sysroot $PWD/llvm -idirafter /clang64/include/'"
echo -n " -DLEANC_INTERNAL_FLAGS='--sysroot ROOT -nostdinc -isystem ROOT/include/clang' -DLEANC_CC=ROOT/bin/clang.exe"
-echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -Wl,-Bstatic -lgmp $(pkg-config --static --libs libuv) -lunwind -Wl,-Bdynamic -fuse-ld=lld'"
+echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='-L ROOT/lib -static-libgcc -Wl,-Bstatic -lgmp $(pkg-config --static --libs libuv) -lunwind -Wl,-Bdynamic -fuse-ld=lld'"
# when not using the above flags, link GMP dynamically/as usual. Always link ICU dynamically.
echo -n " -DLEAN_EXTRA_LINKER_FLAGS='-lgmp $(pkg-config --libs libuv) -lucrtbase'"
# do not set `LEAN_CC` for tests
@@ -1,132 +0,0 @@
#!/usr/bin/env python3

import argparse
import yaml
import requests
import base64
import subprocess
import sys
import os

def parse_repos_config(file_path):
    with open(file_path, "r") as f:
        return yaml.safe_load(f)["repositories"]

def get_github_token():
    try:
        result = subprocess.run(['gh', 'auth', 'token'], capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
    except FileNotFoundError:
        print("Warning: 'gh' CLI not found. Some API calls may be rate-limited.")
    return None

def get_branch_content(repo_url, branch, file_path, github_token):
    api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/contents/{file_path}?ref={branch}"
    headers = {'Authorization': f'token {github_token}'} if github_token else {}
    response = requests.get(api_url, headers=headers)
    if response.status_code == 200:
        content = response.json().get("content", "")
        content = content.replace("\n", "")
        try:
            return base64.b64decode(content).decode('utf-8').strip()
        except Exception:
            return None
    return None

def tag_exists(repo_url, tag_name, github_token):
    api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/git/refs/tags/{tag_name}"
    headers = {'Authorization': f'token {github_token}'} if github_token else {}
    response = requests.get(api_url, headers=headers)
    return response.status_code == 200

def is_merged_into_stable(repo_url, tag_name, stable_branch, github_token):
    # First get the commit SHA for the tag
    api_base = repo_url.replace("https://github.com/", "https://api.github.com/repos/")
    headers = {'Authorization': f'token {github_token}'} if github_token else {}

    # Get tag's commit SHA
    tag_response = requests.get(f"{api_base}/git/refs/tags/{tag_name}", headers=headers)
    if tag_response.status_code != 200:
        return False
    tag_sha = tag_response.json()['object']['sha']

    # Get commits on stable branch containing this SHA
    commits_response = requests.get(
        f"{api_base}/commits?sha={stable_branch}&per_page=100",
        headers=headers
    )
    if commits_response.status_code != 200:
        return False

    # Check if any commit in stable's history matches our tag's SHA
    stable_commits = [commit['sha'] for commit in commits_response.json()]
    return tag_sha in stable_commits

def parse_version(version_str):
    # Remove 'v' prefix and split into components
    # Handle Lean toolchain format (leanprover/lean4:v4.x.y)
    if ':' in version_str:
        version_str = version_str.split(':')[1]
    version = version_str.lstrip('v')
    # Handle release candidates by removing -rc part for comparison
    version = version.split('-')[0]
    return tuple(map(int, version.split('.')))

def is_version_gte(version1, version2):
    """Check if version1 >= version2"""
    return parse_version(version1) >= parse_version(version2)

def is_release_candidate(version):
    return "-rc" in version

def main():
    github_token = get_github_token()

    if len(sys.argv) != 2:
        print("Usage: python3 release_checklist.py <toolchain>")
        sys.exit(1)

    toolchain = sys.argv[1]

    with open(os.path.join(os.path.dirname(__file__), "release_repos.yml")) as f:
        repos = yaml.safe_load(f)["repositories"]

    for repo in repos:
        name = repo["name"]
        url = repo["url"]
        branch = repo["branch"]
        check_stable = repo["stable-branch"]
        check_tag = repo.get("toolchain-tag", True)

        print(f"\nRepository: {name}")

        # Check if branch is on at least the target toolchain
        lean_toolchain_content = get_branch_content(url, branch, "lean-toolchain", github_token)
        if lean_toolchain_content is None:
            print(f"  ❌ No lean-toolchain file found in {branch} branch")
            continue

        on_target_toolchain = is_version_gte(lean_toolchain_content.strip(), toolchain)
        if not on_target_toolchain:
            print(f"  ❌ Not on target toolchain (needs ≥ {toolchain}, but {branch} is on {lean_toolchain_content.strip()})")
            continue
        print(f"  ✅ On compatible toolchain (>= {toolchain})")

        # Only check for tag if toolchain-tag is true
        if check_tag:
            if not tag_exists(url, toolchain, github_token):
                print(f"  ❌ Tag {toolchain} does not exist")
                continue
            print(f"  ✅ Tag {toolchain} exists")

        # Only check merging into stable if stable-branch is true and not a release candidate
        if check_stable and not is_release_candidate(toolchain):
            if not is_merged_into_stable(url, toolchain, "stable", github_token):
                print(f"  ❌ Tag {toolchain} is not merged into stable")
                continue
            print(f"  ✅ Tag {toolchain} is merged into stable")

if __name__ == "__main__":
    main()
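Since toolchain strings come in several shapes, the version-comparison behavior of the deleted `parse_version`/`is_version_gte` pair above can be checked in isolation; this is a self-contained copy of those two functions:

```python
def parse_version(version_str):
    # Accept the Lean toolchain format "leanprover/lean4:v4.6.0" as well as "v4.6.0"
    if ':' in version_str:
        version_str = version_str.split(':')[1]
    version = version_str.lstrip('v')
    # Release candidates compare equal to the final release: the "-rc1" part is dropped
    version = version.split('-')[0]
    return tuple(map(int, version.split('.')))

def is_version_gte(version1, version2):
    """Check if version1 >= version2, componentwise on (major, minor, patch)."""
    return parse_version(version1) >= parse_version(version2)

print(parse_version("leanprover/lean4:v4.6.0-rc1"))  # (4, 6, 0)
```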
@@ -1,145 +0,0 @@
#!/usr/bin/env python3

import sys
import re
import json
import requests
import subprocess
from collections import defaultdict
from git import Repo

def get_commits_since_tag(repo, tag):
    try:
        tag_commit = repo.commit(tag)
        commits = list(repo.iter_commits(f"{tag_commit.hexsha}..HEAD"))
        return [
            (commit.hexsha, commit.message.splitlines()[0], commit.message)
            for commit in commits
        ]
    except Exception as e:
        sys.stderr.write(f"Error retrieving commits: {e}\n")
        sys.exit(1)

def check_pr_number(first_line):
    match = re.search(r"\(\#(\d+)\)$", first_line)
    if match:
        return int(match.group(1))
    return None

def fetch_pr_labels(pr_number):
    try:
        # Use gh CLI to fetch PR details
        result = subprocess.run([
            "gh", "api", f"repos/leanprover/lean4/pulls/{pr_number}"
        ], capture_output=True, text=True, check=True)
        pr_data = result.stdout
        pr_json = json.loads(pr_data)
        return [label["name"] for label in pr_json.get("labels", [])]
    except subprocess.CalledProcessError as e:
        sys.stderr.write(f"Failed to fetch PR #{pr_number} using gh: {e.stderr}\n")
        return []

def format_section_title(label):
    title = label.replace("changelog-", "").capitalize()
    if title == "Doc":
        return "Documentation"
    elif title == "Pp":
        return "Pretty Printing"
    return title

def sort_sections_order():
    return [
        "Language",
        "Library",
        "Compiler",
        "Pretty Printing",
        "Documentation",
        "Server",
        "Lake",
        "Other",
        "Uncategorised"
    ]

def format_markdown_description(pr_number, description):
    link = f"[#{pr_number}](https://github.com/leanprover/lean4/pull/{pr_number})"
    return f"{link} {description}"

def main():
    if len(sys.argv) != 2:
        sys.stderr.write("Usage: script.py <git-tag>\n")
        sys.exit(1)

    tag = sys.argv[1]
    try:
        repo = Repo(".")
    except Exception as e:
        sys.stderr.write(f"Error opening Git repository: {e}\n")
        sys.exit(1)

    commits = get_commits_since_tag(repo, tag)

    sys.stderr.write(f"Found {len(commits)} commits since tag {tag}:\n")
    for commit_hash, first_line, _ in commits:
        sys.stderr.write(f"- {commit_hash}: {first_line}\n")

    changelog = defaultdict(list)

    for commit_hash, first_line, full_message in commits:
        # Skip commits with the specific first lines
        if first_line == "chore: update stage0" or first_line.startswith("chore: CI: bump "):
            continue

        pr_number = check_pr_number(first_line)

        if not pr_number:
            sys.stderr.write(f"No PR number found in {first_line}\n")
            continue

        # Remove the first line from the full_message for further processing
        body = full_message[len(first_line):].strip()

        paragraphs = body.split('\n\n')
        second_paragraph = paragraphs[0] if len(paragraphs) > 0 else ""

        labels = fetch_pr_labels(pr_number)

        # Skip entries with the "changelog-no" label
        if "changelog-no" in labels:
            continue

        report_errors = first_line.startswith("feat:") or first_line.startswith("fix:")

        if not second_paragraph.startswith("This PR "):
            if report_errors:
                sys.stderr.write(f"No PR description found in commit:\n{commit_hash}\n{first_line}\n{body}\n\n")
                fallback_description = re.sub(r":$", "", first_line.split(" ", 1)[1]).rsplit(" (#", 1)[0]
                markdown_description = format_markdown_description(pr_number, fallback_description)
            else:
                continue
        else:
            markdown_description = format_markdown_description(pr_number, second_paragraph.replace("This PR ", ""))

        changelog_labels = [label for label in labels if label.startswith("changelog-")]
        if len(changelog_labels) > 1:
            sys.stderr.write(f"Warning: Multiple changelog-* labels found for PR #{pr_number}: {changelog_labels}\n")

        if not changelog_labels:
            if report_errors:
                sys.stderr.write(f"Warning: No changelog-* label found for PR #{pr_number}\n")
            else:
                continue

        for label in changelog_labels:
            changelog[label].append((pr_number, markdown_description))

    section_order = sort_sections_order()
    sorted_changelog = sorted(changelog.items(), key=lambda item: section_order.index(format_section_title(item[0])) if format_section_title(item[0]) in section_order else len(section_order))

    for label, entries in sorted_changelog:
        section_title = format_section_title(label) if label != "Uncategorised" else "Uncategorised"
        print(f"## {section_title}\n")
        for _, entry in sorted(entries, key=lambda x: x[0]):
            print(f"* {entry}\n")

if __name__ == "__main__":
    main()
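The section ordering is easy to misread inside the one-line `sorted` call above; this standalone sketch shows the same ranking rule (known sections keep their curated position, unknown titles sort last):

```python
# Section names copied from the deleted script's `sort_sections_order`
SECTION_ORDER = [
    "Language", "Library", "Compiler", "Pretty Printing",
    "Documentation", "Server", "Lake", "Other", "Uncategorised",
]

def section_rank(title):
    # Mirror of the key function used in the script's `sorted` call:
    # curated sections keep their index, anything unknown goes to the end.
    return SECTION_ORDER.index(title) if title in SECTION_ORDER else len(SECTION_ORDER)

print(sorted(["Lake", "Misc", "Language"], key=section_rank))
# ['Language', 'Lake', 'Misc']
```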
@@ -1,79 +0,0 @@
repositories:
  - name: Batteries
    url: https://github.com/leanprover-community/batteries
    toolchain-tag: true
    stable-branch: true
    branch: main
    dependencies: []

  - name: lean4checker
    url: https://github.com/leanprover/lean4checker
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies: []

  - name: doc-gen4
    url: https://github.com/leanprover/doc-gen4
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: Verso
    url: https://github.com/leanprover/verso
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: ProofWidgets4
    url: https://github.com/leanprover-community/ProofWidgets4
    toolchain-tag: false
    stable-branch: false
    branch: main
    dependencies:
      - Batteries

  - name: Aesop
    url: https://github.com/leanprover-community/aesop
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Batteries

  - name: import-graph
    url: https://github.com/leanprover-community/import-graph
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: plausible
    url: https://github.com/leanprover-community/plausible
    toolchain-tag: true
    stable-branch: false
    branch: main
    dependencies: []

  - name: Mathlib
    url: https://github.com/leanprover-community/mathlib4
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Aesop
      - ProofWidgets4
      - lean4checker
      - Batteries
      - doc-gen4
      - import-graph

  - name: REPL
    url: https://github.com/leanprover-community/repl
    toolchain-tag: true
    stable-branch: true
    branch: master
    dependencies:
      - Mathlib
@@ -37,4 +37,3 @@ import Init.MacroTrace
import Init.Grind
import Init.While
import Init.Syntax
import Init.Internal
@@ -36,8 +36,6 @@ namespace Array

@[simp] theorem mem_toList_iff (a : α) (l : Array α) : a ∈ l.toList ↔ a ∈ l := by
  cases l <;> simp

@[simp] theorem length_toList {l : Array α} : l.toList.length = l.size := rfl

/-! ### empty -/

@[simp] theorem empty_eq {xs : Array α} : #[] = xs ↔ xs = #[] := by
@@ -1003,7 +1001,52 @@ private theorem beq_of_beq_singleton [BEq α] {a b : α} : #[a] == #[b] → a ==

/-! Content below this point has not yet been aligned with `List`. -/

/-! ### map -/
theorem singleton_eq_toArray_singleton (a : α) : #[a] = [a].toArray := rfl

@[simp] theorem singleton_def (v : α) : singleton v = #[v] := rfl

-- This is a duplicate of `List.toArray_toList`.
-- It's confusing to guess which namespace this theorem should live in,
-- so we provide both.
@[simp] theorem toArray_toList (a : Array α) : a.toList.toArray = a := rfl

@[simp] theorem length_toList {l : Array α} : l.toList.length = l.size := rfl

@[simp] theorem mkEmpty_eq (α n) : @mkEmpty α n = #[] := rfl

@[deprecated size_toArray (since := "2024-12-11")]
theorem size_mk (as : List α) : (Array.mk as).size = as.length := by simp [size]

theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (arr : Array α) (a : α) :
    (arr.push a).foldrM f init = f a init >>= arr.foldrM f := by
  simp only [foldrM_eq_reverse_foldlM_toList, push_toList, List.reverse_append, List.reverse_cons,
    List.reverse_nil, List.nil_append, List.singleton_append, List.foldlM_cons, List.foldlM_reverse]

/--
Variant of `foldrM_push` with `h : start = arr.size + 1`
rather than `(arr.push a).size` as the argument.
-/
@[simp] theorem foldrM_push' [Monad m] (f : α → β → m β) (init : β) (arr : Array α) (a : α)
    {start} (h : start = arr.size + 1) :
    (arr.push a).foldrM f init start = f a init >>= arr.foldrM f := by
  simp [← foldrM_push, h]

theorem foldr_push (f : α → β → β) (init : β) (arr : Array α) (a : α) :
    (arr.push a).foldr f init = arr.foldr f (f a init) := foldrM_push ..

/--
Variant of `foldr_push` with the `h : start = arr.size + 1`
rather than `(arr.push a).size` as the argument.
-/
@[simp] theorem foldr_push' (f : α → β → β) (init : β) (arr : Array α) (a : α) {start}
    (h : start = arr.size + 1) : (arr.push a).foldr f init start = arr.foldr f (f a init) :=
  foldrM_push' _ _ _ _ h

/-- A more efficient version of `arr.toList.reverse`. -/
@[inline] def toListRev (arr : Array α) : List α := arr.foldl (fun l t => t :: l) []

@[simp] theorem toListRev_eq (arr : Array α) : arr.toListRev = arr.toList.reverse := by
  rw [toListRev, ← foldl_toList, ← List.foldr_reverse, List.foldr_cons_nil]

theorem mapM_eq_foldlM [Monad m] [LawfulMonad m] (f : α → m β) (arr : Array α) :
    arr.mapM f = arr.foldlM (fun bs a => bs.push <$> f a) #[] := by
@@ -1027,11 +1070,6 @@ where
  induction l generalizing arr <;> simp [*]
  simp [H]

@[simp] theorem _root_.List.map_toArray (f : α → β) (l : List α) :
    l.toArray.map f = (l.map f).toArray := by
  apply ext'
  simp

@[simp] theorem size_map (f : α → β) (arr : Array α) : (arr.map f).size = arr.size := by
  simp only [← length_toList]
  simp
@@ -1041,128 +1079,6 @@ where

@[simp] theorem map_empty (f : α → β) : map f #[] = #[] := mapM_empty f

@[simp] theorem getElem_map (f : α → β) (a : Array α) (i : Nat) (hi : i < (a.map f).size) :
    (a.map f)[i] = f (a[i]'(by simpa using hi)) := by
  cases a
  simp

@[simp] theorem getElem?_map (f : α → β) (as : Array α) (i : Nat) :
    (as.map f)[i]? = as[i]?.map f := by
  simp [getElem?_def]

@[simp] theorem map_id_fun : map (id : α → α) = id := by
  funext l
  induction l <;> simp_all

/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun

-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
theorem map_id (l : Array α) : map (id : α → α) l = l := by
  cases l <;> simp_all

/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
theorem map_id' (l : Array α) : map (fun (a : α) => a) l = l := map_id l

/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : Array α) : map f l = l := by
  simp [show f = id from funext h]

theorem map_singleton (f : α → β) (a : α) : map f #[a] = #[f a] := rfl

@[simp] theorem mem_map {f : α → β} {l : Array α} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by
  simp only [mem_def, toList_map, List.mem_map]

theorem exists_of_mem_map (h : b ∈ map f l) : ∃ a, a ∈ l ∧ f a = b := mem_map.1 h

theorem mem_map_of_mem (f : α → β) (h : a ∈ l) : f a ∈ map f l := mem_map.2 ⟨_, h, rfl⟩

theorem forall_mem_map {f : α → β} {l : Array α} {P : β → Prop} :
    (∀ (i) (_ : i ∈ l.map f), P i) ↔ ∀ (j) (_ : j ∈ l), P (f j) := by
  simp

@[simp] theorem map_inj_left {f g : α → β} : map f l = map g l ↔ ∀ a ∈ l, f a = g a := by
  cases l <;> simp_all

theorem map_congr_left (h : ∀ a ∈ l, f a = g a) : map f l = map g l :=
  map_inj_left.2 h

theorem map_inj : map f = map g ↔ f = g := by
  constructor
  · intro h; ext a; replace h := congrFun h #[a]; simpa using h
  · intro h; subst h; rfl

@[simp] theorem map_eq_empty_iff {f : α → β} {l : Array α} : map f l = #[] ↔ l = #[] := by
  cases l
  simp

theorem eq_empty_of_map_eq_empty {f : α → β} {l : Array α} (h : map f l = #[]) : l = #[] :=
  map_eq_empty_iff.mp h

@[simp] theorem map_map {f : α → β} {g : β → γ} {as : Array α} :
    (as.map f).map g = as.map (g ∘ f) := by
  cases as; simp

@[simp] theorem map_push {f : α → β} {as : Array α} {x : α} :
    (as.push x).map f = (as.map f).push (f x) := by
  ext
  · simp
  · simp only [getElem_map, getElem_push, size_map]
    split <;> rfl

theorem mapM_eq_mapM_toList [Monad m] [LawfulMonad m] (f : α → m β) (arr : Array α) :
    arr.mapM f = List.toArray <$> (arr.toList.mapM f) := by
  rw [mapM_eq_foldlM, ← foldlM_toList, ← List.foldrM_reverse]
  conv => rhs; rw [← List.reverse_reverse arr.toList]
  induction arr.toList.reverse with
  | nil => simp
  | cons a l ih => simp [ih]

@[simp] theorem toList_mapM [Monad m] [LawfulMonad m] (f : α → m β) (arr : Array α) :
    toList <$> arr.mapM f = arr.toList.mapM f := by
  simp [mapM_eq_mapM_toList]

theorem mapM_map_eq_foldl (as : Array α) (f : α → β) (i) :
    mapM.map (m := Id) f as i b = as.foldl (start := i) (fun r a => r.push (f a)) b := by
  unfold mapM.map
  split <;> rename_i h
  · simp only [Id.bind_eq]
    dsimp [foldl, Id.run, foldlM]
    rw [mapM_map_eq_foldl, dif_pos (by omega), foldlM.loop, dif_pos h]
    -- Calling `split` here gives a bad goal.
    have : size as - i = Nat.succ (size as - i - 1) := by omega
    rw [this]
    simp [foldl, foldlM, Id.run, Nat.sub_add_eq]
  · dsimp [foldl, Id.run, foldlM]
    rw [dif_pos (by omega), foldlM.loop, dif_neg h]
    rfl
termination_by as.size - i

theorem map_eq_foldl (as : Array α) (f : α → β) :
    as.map f = as.foldl (fun r a => r.push (f a)) #[] :=
  mapM_map_eq_foldl _ _ _

theorem singleton_eq_toArray_singleton (a : α) : #[a] = [a].toArray := rfl

@[simp] theorem singleton_def (v : α) : singleton v = #[v] := rfl

-- This is a duplicate of `List.toArray_toList`.
-- It's confusing to guess which namespace this theorem should live in,
-- so we provide both.
@[simp] theorem toArray_toList (a : Array α) : a.toList.toArray = a := rfl

@[simp] theorem mkEmpty_eq (α n) : @mkEmpty α n = #[] := rfl

@[deprecated size_toArray (since := "2024-12-11")]
theorem size_mk (as : List α) : (Array.mk as).size = as.length := by simp [size]

/-- A more efficient version of `arr.toList.reverse`. -/
@[inline] def toListRev (arr : Array α) : List α := arr.foldl (fun l t => t :: l) []

@[simp] theorem toListRev_eq (arr : Array α) : arr.toListRev = arr.toList.reverse := by
  rw [toListRev, ← foldl_toList, ← List.foldr_reverse, List.foldr_cons_nil]

@[simp] theorem appendList_nil (arr : Array α) : arr ++ ([] : List α) = arr := Array.ext' (by simp)

@[simp] theorem appendList_cons (arr : Array α) (a : α) (l : List α) :
@@ -1178,6 +1094,7 @@ theorem foldl_toList_eq_map (l : List α) (acc : Array β) (G : α → β) :
    (l.foldl (fun acc a => acc.push (G a)) acc).toList = acc.toList ++ l.map G := by
  induction l generalizing acc <;> simp [*]

/-! # uset -/

attribute [simp] uset
@@ -1353,16 +1270,18 @@ theorem get_set (a : Array α) (i : Nat) (hi : i < a.size) (j : Nat) (hj : j < a
    (h : i ≠ j) : (a.set i v)[j]'(by simp [*]) = a[j] := by
  simp only [set, ← getElem_toList, List.getElem_set_ne h]

private theorem fin_cast_val (e : n = n') (i : Fin n) : e ▸ i = ⟨i.1, e ▸ i.2⟩ := by cases e; rfl

theorem swap_def (a : Array α) (i j : Nat) (hi hj) :
    a.swap i j hi hj = (a.set i a[j]).set j a[i] (by simpa using hj) := by
-  simp [swap]
+  simp [swap, fin_cast_val]

@[simp] theorem toList_swap (a : Array α) (i j : Nat) (hi hj) :
    (a.swap i j hi hj).toList = (a.toList.set i a[j]).set j a[i] := by simp [swap_def]

theorem getElem?_swap (a : Array α) (i j : Nat) (hi hj) (k : Nat) : (a.swap i j hi hj)[k]? =
    if j = k then some a[i] else if i = k then some a[j] else a[k]? := by
-  simp [swap_def, get?_set]
+  simp [swap_def, get?_set, ← getElem_fin_eq_getElem_toList]

@[simp] theorem swapAt_def (a : Array α) (i : Nat) (v : α) (hi) :
    a.swapAt i v hi = (a[i], a.set i v) := rfl
@@ -1477,90 +1396,6 @@ theorem getElem_range {n : Nat} {x : Nat} (h : x < (Array.range n).size) : (Arra
    true_and, Nat.not_lt] at h
  rw [List.getElem?_eq_none_iff.2 ‹_›, List.getElem?_eq_none_iff.2 (a.toList.length_reverse ▸ ‹_›)]

end Array

open Array

namespace List

@[simp] theorem reverse_toArray (l : List α) : l.toArray.reverse = l.reverse.toArray := by
  apply ext'
  simp only [toList_reverse]

end List

namespace Array

/-! ### foldlM and foldrM -/

theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : Array α) :
    (l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
  cases l
  cases l'
  rw [List.append_toArray]
  simp

/-- Variant of `foldlM_append` with `h : stop = (l ++ l').size`. -/
@[simp] theorem foldlM_append' [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : Array α)
    (h : stop = (l ++ l').size) :
    (l ++ l').foldlM f b 0 stop = l.foldlM f b >>= l'.foldlM f := by
  subst h
  rw [foldlM_append]

theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (arr : Array α) (a : α) :
    (arr.push a).foldrM f init = f a init >>= arr.foldrM f := by
  simp only [foldrM_eq_reverse_foldlM_toList, push_toList, List.reverse_append, List.reverse_cons,
    List.reverse_nil, List.nil_append, List.singleton_append, List.foldlM_cons, List.foldlM_reverse]

/--
Variant of `foldrM_push` with `h : start = arr.size + 1`
rather than `(arr.push a).size` as the argument.
-/
@[simp] theorem foldrM_push' [Monad m] (f : α → β → m β) (init : β) (arr : Array α) (a : α)
    {start} (h : start = arr.size + 1) :
    (arr.push a).foldrM f init start = f a init >>= arr.foldrM f := by
  simp [← foldrM_push, h]

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Array α) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  cases l
  simp [List.foldl_eq_foldlM]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Array α) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  cases l
  simp [List.foldr_eq_foldrM]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Array α) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Array α) :
|
||||
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
|
||||
|
||||
/-! ### foldl and foldr -/
|
||||
|
||||
theorem foldr_push (f : α → β → β) (init : β) (arr : Array α) (a : α) :
|
||||
(arr.push a).foldr f init = arr.foldr f (f a init) := foldrM_push ..
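
-- A small concrete instance of `foldr_push` (illustrative `example`, not part
-- of the original diff): the pushed element is folded into the accumulator first.
example : (#[1, 2].push 3).foldr (· + ·) 0 = #[1, 2].foldr (· + ·) 3 := by decide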
/--
Variant of `foldr_push` with `h : start = arr.size + 1`
rather than `(arr.push a).size` as the argument.
-/
@[simp] theorem foldr_push' (f : α → β → β) (init : β) (arr : Array α) (a : α) {start}
    (h : start = arr.size + 1) : (arr.push a).foldr f init start = arr.foldr f (f a init) :=
  foldrM_push' _ _ _ _ h

@[simp] theorem foldl_push_eq_append (l l' : Array α) : l.foldl push l' = l' ++ l := by
  cases l
  cases l'
  simp
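
-- Concretely, folding `push` over `l` starting from `l'` appends `l` to `l'`
-- (illustrative `example`, not part of the original diff):
example : #[3, 4].foldl Array.push #[1, 2] = #[1, 2, 3, 4] := by decide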
@[simp] theorem foldr_flip_push_eq_append (l l' : Array α) :
    l.foldr (fun x y => push y x) l' = l' ++ l.reverse := by
  cases l
  cases l'
  simp

/-! ### take -/

@[simp] theorem size_take_loop (a : Array α) (n : Nat) : (take.loop n a).size = a.size - n := by
@@ -1673,6 +1508,22 @@ theorem foldr_congr {as bs : Array α} (h₀ : as = bs) {f g : α → β → β}
    as.foldr f a start stop = bs.foldr g b start' stop' := by
  congr

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Array α) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  cases l
  simp [List.foldl_eq_foldlM]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Array α) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  cases l
  simp [List.foldr_eq_foldrM]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Array α) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Array α) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : Array β) (init : α₁)
    (H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
  cases l
@@ -1687,13 +1538,45 @@ theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ :

/-! ### map -/

@[simp] theorem map_pop {f : α → β} {as : Array α} :
    as.pop.map f = (as.map f).pop := by
  ext
  · simp
  · simp only [getElem_map, getElem_pop, size_map]
@[simp] theorem mem_map {f : α → β} {l : Array α} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by
  simp only [mem_def, toList_map, List.mem_map]

theorem exists_of_mem_map (h : b ∈ map f l) : ∃ a, a ∈ l ∧ f a = b := mem_map.1 h

theorem mem_map_of_mem (f : α → β) (h : a ∈ l) : f a ∈ map f l := mem_map.2 ⟨_, h, rfl⟩

theorem mapM_eq_mapM_toList [Monad m] [LawfulMonad m] (f : α → m β) (arr : Array α) :
    arr.mapM f = List.toArray <$> (arr.toList.mapM f) := by
  rw [mapM_eq_foldlM, ← foldlM_toList, ← List.foldrM_reverse]
  conv => rhs; rw [← List.reverse_reverse arr.toList]
  induction arr.toList.reverse with
  | nil => simp
  | cons a l ih => simp [ih]

@[simp] theorem toList_mapM [Monad m] [LawfulMonad m] (f : α → m β) (arr : Array α) :
    toList <$> arr.mapM f = arr.toList.mapM f := by
  simp [mapM_eq_mapM_toList]

theorem mapM_map_eq_foldl (as : Array α) (f : α → β) (i) :
    mapM.map (m := Id) f as i b = as.foldl (start := i) (fun r a => r.push (f a)) b := by
  unfold mapM.map
  split <;> rename_i h
  · simp only [Id.bind_eq]
    dsimp [foldl, Id.run, foldlM]
    rw [mapM_map_eq_foldl, dif_pos (by omega), foldlM.loop, dif_pos h]
    -- Calling `split` here gives a bad goal.
    have : size as - i = Nat.succ (size as - i - 1) := by omega
    rw [this]
    simp [foldl, foldlM, Id.run, Nat.sub_add_eq]
  · dsimp [foldl, Id.run, foldlM]
    rw [dif_pos (by omega), foldlM.loop, dif_neg h]
    rfl
termination_by as.size - i

theorem map_eq_foldl (as : Array α) (f : α → β) :
    as.map f = as.foldl (fun r a => r.push (f a)) #[] :=
  mapM_map_eq_foldl _ _ _
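
-- `map_eq_foldl` on a concrete array: mapping is a fold that pushes each image
-- (illustrative `example`, not part of the original diff):
example : #[1, 2, 3].map (· * 10) = #[1, 2, 3].foldl (fun r a => r.push (a * 10)) #[] := by
  decide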
@[deprecated "Use `toList_map` or `List.map_toArray` to characterize `Array.map`." (since := "2025-01-06")]
theorem map_induction (as : Array α) (f : α → β) (motive : Nat → Prop) (h0 : motive 0)
    (p : Fin as.size → β → Prop) (hs : ∀ i, motive i.1 → p i (f as[i]) ∧ motive (i+1)) :
    motive as.size ∧
@@ -1721,13 +1604,36 @@ theorem map_induction (as : Array α) (f : α → β) (motive : Nat → Prop) (h
    simp only [show j = i by omega]
    exact (hs _ m).1

set_option linter.deprecated false in
@[deprecated "Use `toList_map` or `List.map_toArray` to characterize `Array.map`." (since := "2025-01-06")]
theorem map_spec (as : Array α) (f : α → β) (p : Fin as.size → β → Prop)
    (hs : ∀ i, p i (f as[i])) :
    ∃ eq : (as.map f).size = as.size, ∀ i h, p ⟨i, h⟩ ((as.map f)[i]) := by
  simpa using map_induction as f (fun _ => True) trivial p (by simp_all)

@[simp] theorem getElem_map (f : α → β) (as : Array α) (i : Nat) (h) :
    (as.map f)[i] = f (as[i]'(size_map .. ▸ h)) := by
  have := map_spec as f (fun i b => b = f (as[i]))
  simp only [implies_true, true_implies] at this
  obtain ⟨eq, w⟩ := this
  apply w
  simp_all

@[simp] theorem getElem?_map (f : α → β) (as : Array α) (i : Nat) :
    (as.map f)[i]? = as[i]?.map f := by
  simp [getElem?_def]
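
-- `getElem?_map` also covers the out-of-bounds case, where both sides are `none`
-- (illustrative `example`, not part of the original diff):
example : (#[1, 2].map (· + 1))[5]? = (#[1, 2][5]?).map (· + 1) := by decide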
@[simp] theorem map_push {f : α → β} {as : Array α} {x : α} :
    (as.push x).map f = (as.map f).push (f x) := by
  ext
  · simp
  · simp only [getElem_map, getElem_push, size_map]
    split <;> rfl

@[simp] theorem map_pop {f : α → β} {as : Array α} :
    as.pop.map f = (as.map f).pop := by
  ext
  · simp
  · simp only [getElem_map, getElem_pop, size_map]

/-! ### modify -/

@[simp] theorem size_modify (a : Array α) (i : Nat) (f : α → α) : (a.modify i f).size = a.size := by
@@ -2214,6 +2120,10 @@ theorem toListRev_toArray (l : List α) : l.toArray.toListRev = l.reverse := by
  rw [size_toArray, mapM'_cons, foldlM_toArray]
  simp [ih]

@[simp] theorem map_toArray (f : α → β) (l : List α) : l.toArray.map f = (l.map f).toArray := by
  apply ext'
  simp

theorem uset_toArray (l : List α) (i : USize) (a : α) (h : i.toNat < l.toArray.size) :
    l.toArray.uset i a h = (l.set i.toNat a).toArray := by simp
@@ -2222,6 +2132,10 @@ theorem uset_toArray (l : List α) (i : USize) (a : α) (h : i.toNat < l.toArray
  apply ext'
  simp

@[simp] theorem reverse_toArray (l : List α) : l.toArray.reverse = l.reverse.toArray := by
  apply ext'
  simp

@[simp] theorem modify_toArray (f : α → α) (l : List α) :
    l.toArray.modify i f = (l.modify f i).toArray := by
  apply ext'
@@ -2315,6 +2229,29 @@ namespace Array

/-! ### map -/

@[simp] theorem map_map {f : α → β} {g : β → γ} {as : Array α} :
    (as.map f).map g = as.map (g ∘ f) := by
  cases as; simp

@[simp] theorem map_id_fun : map (id : α → α) = id := by
  funext l
  induction l <;> simp_all

/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun

-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
theorem map_id (as : Array α) : map (id : α → α) as = as := by
  cases as <;> simp_all

/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
theorem map_id' (as : Array α) : map (fun (a : α) => a) as = as := map_id as

/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (as : Array α) : map f as = as := by
  simp [show f = id from funext h]

theorem array_array_induction (P : Array (Array α) → Prop) (h : ∀ (xss : List (List α)), P (xss.map List.toArray).toArray)
    (ass : Array (Array α)) : P ass := by
  specialize h (ass.toList.map toList)
@@ -1098,32 +1098,6 @@ theorem bmod_def (x : Int) (m : Nat) : bmod x m =
    (x % m) - m :=
  rfl

theorem bdiv_add_bmod (x : Int) (m : Nat) : m * bdiv x m + bmod x m = x := by
  unfold bdiv bmod
  split
  · simp_all only [Nat.cast_ofNat_Int, Int.mul_zero, emod_zero, Int.zero_add, Int.sub_zero,
      ite_self]
  · dsimp only
    split
    · exact ediv_add_emod x m
    · rw [Int.mul_add, Int.mul_one, Int.add_assoc, Int.add_comm m, Int.sub_add_cancel]
      exact ediv_add_emod x m

theorem bmod_add_bdiv (x : Int) (m : Nat) : bmod x m + m * bdiv x m = x := by
  rw [Int.add_comm]; exact bdiv_add_bmod x m

theorem bdiv_add_bmod' (x : Int) (m : Nat) : bdiv x m * m + bmod x m = x := by
  rw [Int.mul_comm]; exact bdiv_add_bmod x m

theorem bmod_add_bdiv' (x : Int) (m : Nat) : bmod x m + bdiv x m * m = x := by
  rw [Int.add_comm]; exact bdiv_add_bmod' x m

theorem bmod_eq_self_sub_mul_bdiv (x : Int) (m : Nat) : bmod x m = x - m * bdiv x m := by
  rw [← Int.add_sub_cancel (bmod x m), bmod_add_bdiv]

theorem bmod_eq_self_sub_bdiv_mul (x : Int) (m : Nat) : bmod x m = x - bdiv x m * m := by
  rw [← Int.add_sub_cancel (bmod x m), bmod_add_bdiv']

theorem bmod_pos (x : Int) (m : Nat) (p : x % m < (m + 1) / 2) : bmod x m = x % m := by
  simp [bmod_def, p]
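
-- Balanced mod/div on concrete numbers (illustrative `example`s, not part of
-- the original diff). `bmod` picks the representative nearest zero:
-- 8 % 5 = 3 ≥ (5 + 1) / 2, so bmod 8 5 = 3 - 5 = -2, and bdiv rounds 8/5 up to 2,
-- so the identity `m * bdiv x m + bmod x m = x` reads 5 * 2 + (-2) = 8.
example : Int.bmod 8 5 = -2 := by decide
example : 5 * Int.bdiv 8 5 + Int.bmod 8 5 = 8 := by decide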
@@ -757,6 +757,207 @@ theorem length_eq_of_beq [BEq α] {l₁ l₂ : List α} (h : l₁ == l₂) : l
  | nil => simp
  | cons b l₂ => simp [isEqv, ih]

/-! ### foldlM and foldrM -/

@[simp] theorem foldlM_reverse [Monad m] (l : List α) (f : β → α → m β) (b) :
    l.reverse.foldlM f b = l.foldrM (fun x y => f y x) b := rfl

@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : List α) :
    (l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
  induction l generalizing b <;> simp [*]

@[simp] theorem foldrM_cons [Monad m] [LawfulMonad m] (a : α) (l) (f : α → β → m β) (b) :
    (a :: l).foldrM f b = l.foldrM f b >>= f a := by
  simp only [foldrM]
  induction l <;> simp_all

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : List α) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  induction l generalizing b <;> simp [*, foldl]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : List α) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  induction l <;> simp [*, foldr]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : List α) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : List α) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

/-! ### foldl and foldr -/

@[simp] theorem foldr_cons_eq_append (l : List α) : l.foldr cons l' = l ++ l' := by
  induction l <;> simp [*]

@[deprecated foldr_cons_eq_append (since := "2024-08-22")] abbrev foldr_self_append := @foldr_cons_eq_append

@[simp] theorem foldl_flip_cons_eq_append (l : List α) : l.foldl (fun x y => y :: x) l' = l.reverse ++ l' := by
  induction l generalizing l' <;> simp [*]

theorem foldr_cons_nil (l : List α) : l.foldr cons [] = l := by simp

@[deprecated foldr_cons_nil (since := "2024-09-04")] abbrev foldr_self := @foldr_cons_nil

theorem foldl_map (f : β₁ → β₂) (g : α → β₂ → α) (l : List β₁) (init : α) :
    (l.map f).foldl g init = l.foldl (fun x y => g x (f y)) init := by
  induction l generalizing init <;> simp [*]

theorem foldr_map (f : α₁ → α₂) (g : α₂ → β → β) (l : List α₁) (init : β) :
    (l.map f).foldr g init = l.foldr (fun x y => g (f x) y) init := by
  induction l generalizing init <;> simp [*]

theorem foldl_filterMap (f : α → Option β) (g : γ → β → γ) (l : List α) (init : γ) :
    (l.filterMap f).foldl g init = l.foldl (fun x y => match f y with | some b => g x b | none => x) init := by
  induction l generalizing init with
  | nil => rfl
  | cons a l ih =>
    simp only [filterMap_cons, foldl_cons]
    cases f a <;> simp [ih]

theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : List α) (init : γ) :
    (l.filterMap f).foldr g init = l.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
  induction l generalizing init with
  | nil => rfl
  | cons a l ih =>
    simp only [filterMap_cons, foldr_cons]
    cases f a <;> simp [ih]

theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldl f' (g a) = g (l.foldl f a) := by
  induction l generalizing a
  · simp
  · simp [*, h]

theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldr f' (g a) = g (l.foldr f a) := by
  induction l generalizing a
  · simp
  · simp [*, h]

theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldl op (op a₁ a₂) = op a₁ (l.foldl op a₂)
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldl_cons, ha.assoc]
    rw [foldl_assoc]
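
-- `foldl_assoc` specialized to `Nat` addition (illustrative `example`, not part
-- of the original diff; it assumes the `Std.Associative (· + ·)` instance for `Nat`):
example (l : List Nat) (a₁ a₂ : Nat) :
    l.foldl (· + ·) (a₁ + a₂) = a₁ + l.foldl (· + ·) a₂ :=
  List.foldl_assoc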
theorem foldr_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldr op (op a₁ a₂) = op (l.foldr op a₁) a₂
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldr_cons, ha.assoc]
    rw [foldr_assoc]

theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : List β) (init : α₁)
    (H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
  induction l generalizing init <;> simp [*, H]

theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : List α) (init : β₁)
    (H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
  induction l <;> simp [*, H]
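
-- `foldl_hom` with the doubling map: folding `x + 2 * y` from a doubled seed
-- equals doubling the plain sum, since 2x + 2y = 2(x + y). (Illustrative
-- `example`, not part of the original diff; the side condition is closed by `omega`.)
example (l : List Nat) (init : Nat) :
    l.foldl (fun x y => x + 2 * y) (2 * init) = 2 * l.foldl (· + ·) init :=
  List.foldl_hom (2 * ·) (· + ·) (fun x y => x + 2 * y) l init (fun x y => by omega)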
/--
Prove a proposition about the result of `List.foldl`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldlRecOn {motive : β → Sort _} : ∀ (l : List α) (op : β → α → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op b a)), motive (List.foldl op b l)
  | [], _, _, hb, _ => hb
  | hd :: tl, op, b, hb, hl =>
    foldlRecOn tl op (op b hd) (hl b hb hd (mem_cons_self hd tl))
      fun y hy x hx => hl y hy x (mem_cons_of_mem hd hx)

@[simp] theorem foldlRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op b a)) :
    foldlRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldlRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op b a)) :
    foldlRecOn (x :: l) op b hb hl =
      foldlRecOn l op (op b x) (hl b hb x (mem_cons_self x l))
        (fun b c a m => hl b c a (mem_cons_of_mem x m)) :=
  rfl
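
-- Using `foldlRecOn` as an invariant-preservation principle: a sum of even
-- numbers is even, with motive `b % 2 = 0`. (Illustrative `example`, not part
-- of the original diff; the step is discharged by `omega`.)
example (l : List Nat) (h : ∀ a ∈ l, a % 2 = 0) : l.foldl (· + ·) 0 % 2 = 0 :=
  List.foldlRecOn l (· + ·) 0 (by simp)
    (fun b hb a ha => by have := h a ha; omega)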
/--
Prove a proposition about the result of `List.foldr`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldrRecOn {motive : β → Sort _} : ∀ (l : List α) (op : α → β → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op a b)), motive (List.foldr op b l)
  | nil, _, _, hb, _ => hb
  | x :: l, op, b, hb, hl =>
    hl (foldr op b l)
      (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m)) x (mem_cons_self x l)

@[simp] theorem foldrRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op a b)) :
    foldrRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldrRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op a b)) :
    foldrRecOn (x :: l) op b hb hl =
      hl _ (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m))
        x (mem_cons_self x l) :=
  rfl

/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldl_rel {l : List α} {f g : β → α → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
    r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldl_cons]
    apply ih
    · simp_all
    · exact fun a m c c' h => h' _ (by simp_all) _ _ h
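
-- `foldl_rel` with `r := (· ≤ ·)`: a fold with a pointwise-larger step function
-- dominates. (Illustrative `example`, not part of the original diff.)
example (l : List Nat) : l.foldl (· + ·) 0 ≤ l.foldl (fun acc a => acc + a + 1) 0 :=
  List.foldl_rel (· ≤ ·) (Nat.le_refl 0) (fun a _ c c' h => by omega)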
/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldr_rel {l : List α} {f g : α → β → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
    r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldr_cons]
    apply h'
    · simp
    · exact ih h fun a m c c' h => h' _ (by simp_all) _ _ h

@[simp] theorem foldl_add_const (l : List α) (a b : Nat) :
    l.foldl (fun x _ => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldl_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc,
      Nat.add_comm a]
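
-- `foldl_add_const` concretely: adding 2 once per element of a 3-element list
-- (illustrative `example`, not part of the original diff):
example : [10, 20, 30].foldl (fun x _ => x + 2) 5 = 5 + 2 * 3 := rfl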
@[simp] theorem foldr_add_const (l : List α) (a b : Nat) :
    l.foldr (fun _ x => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldr_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc]

/-! ### getLast -/

theorem getLast_eq_getElem : ∀ (l : List α) (h : l ≠ []),
@@ -1015,6 +1216,27 @@ theorem getLast?_tail (l : List α) : (tail l).getLast? = if l.length = 1 then n

/-! ### map -/

@[simp] theorem map_id_fun : map (id : α → α) = id := by
  funext l
  induction l <;> simp_all

/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun

-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
theorem map_id (l : List α) : map (id : α → α) l = l := by
  induction l <;> simp_all

/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
theorem map_id' (l : List α) : map (fun (a : α) => a) l = l := map_id l

/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : List α) : map f l = l := by
  simp [show f = id from funext h]

theorem map_singleton (f : α → β) (a : α) : map f [a] = [f a] := rfl

@[simp] theorem length_map (as : List α) (f : α → β) : (as.map f).length = as.length := by
  induction as with
  | nil => simp [List.map]
@@ -1040,27 +1262,6 @@ theorem get_map (f : α → β) {l i} :
    get (map f l) i = f (get l ⟨i, length_map l f ▸ i.2⟩) := by
  simp

@[simp] theorem map_id_fun : map (id : α → α) = id := by
  funext l
  induction l <;> simp_all

/-- `map_id_fun'` differs from `map_id_fun` by representing the identity function as a lambda, rather than `id`. -/
@[simp] theorem map_id_fun' : map (fun (a : α) => a) = id := map_id_fun

-- This is not a `@[simp]` lemma because `map_id_fun` will apply.
theorem map_id (l : List α) : map (id : α → α) l = l := by
  induction l <;> simp_all

/-- `map_id'` differs from `map_id` by representing the identity function as a lambda, rather than `id`. -/
-- This is not a `@[simp]` lemma because `map_id_fun'` will apply.
theorem map_id' (l : List α) : map (fun (a : α) => a) l = l := map_id l

/-- Variant of `map_id`, with a side condition that the function is pointwise the identity. -/
theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : List α) : map f l = l := by
  simp [show f = id from funext h]

theorem map_singleton (f : α → β) (a : α) : map f [a] = [f a] := rfl

@[simp] theorem mem_map {f : α → β} : ∀ {l : List α}, b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b
  | [] => by simp
  | _ :: l => by simp [mem_map (l := l), eq_comm (a := b)]
@@ -1760,6 +1961,16 @@ theorem set_append {s t : List α} :
    (s ++ t).set i x = s ++ t.set (i - s.length) x := by
  rw [set_append, if_neg (by simp_all)]

@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l l' : List α) :
    (l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
  induction l <;> simp [*]

@[simp] theorem foldl_append {β : Type _} (f : β → α → β) (b) (l l' : List α) :
    (l ++ l').foldl f b = l'.foldl f (l.foldl f b) := by simp [foldl_eq_foldlM]

@[simp] theorem foldr_append (f : α → β → β) (b) (l l' : List α) :
    (l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM]

theorem filterMap_eq_append_iff {f : α → Option β} :
    filterMap f l = L₁ ++ L₂ ↔ ∃ l₁ l₂, l = l₁ ++ l₂ ∧ filterMap f l₁ = L₁ ∧ filterMap f l₂ = L₂ := by
  constructor
@@ -1908,6 +2119,14 @@ theorem head?_flatten {L : List (List α)} : (flatten L).head? = L.findSome? fun
-- `getLast?_flatten` is proved later, after the `reverse` section.
-- `head_flatten` and `getLast_flatten` are proved in `Init.Data.List.Find`.

theorem foldl_flatten (f : β → α → β) (b : β) (L : List (List α)) :
    (flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
  induction L generalizing b <;> simp_all

theorem foldr_flatten (f : α → β → β) (b : β) (L : List (List α)) :
    (flatten L).foldr f b = L.foldr (fun l b => l.foldr f b) b := by
  induction L <;> simp_all

@[simp] theorem map_flatten (f : α → β) (L : List (List α)) : map f (flatten L) = flatten (map (map f) L) := by
  induction L <;> simp_all
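
-- `foldl_flatten` concretely: folding over the flattened list equals a nested
-- fold over the sublists (illustrative `example`, not part of the original diff):
example : [[1, 2], [3]].flatten.foldl (· + ·) 0
    = [[1, 2], [3]].foldl (fun b l => l.foldl (· + ·) b) 0 := rfl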
|
||||
|
||||
@@ -2480,114 +2699,10 @@ theorem flatMap_reverse {β} (l : List α) (f : α → List β) : (l.reverse.fla
|
||||
@[simp] theorem reverseAux_eq (as bs : List α) : reverseAux as bs = reverse as ++ bs :=
|
||||
reverseAux_eq_append ..
|
||||
|
||||
@[simp] theorem reverse_replicate (n) (a : α) : reverse (replicate n a) = replicate n a :=
|
||||
eq_replicate_iff.2
|
||||
⟨by rw [length_reverse, length_replicate],
|
||||
fun _ h => eq_of_mem_replicate (mem_reverse.1 h)⟩
|
||||
|
||||
|
||||
/-! ### foldlM and foldrM -/
|
||||
|
||||
@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l l' : List α) :
|
||||
(l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
|
||||
induction l generalizing b <;> simp [*]
|
||||
|
||||
@[simp] theorem foldrM_cons [Monad m] [LawfulMonad m] (a : α) (l) (f : α → β → m β) (b) :
|
||||
(a :: l).foldrM f b = l.foldrM f b >>= f a := by
|
||||
simp only [foldrM]
|
||||
induction l <;> simp_all
|
||||
|
||||
theorem foldl_eq_foldlM (f : β → α → β) (b) (l : List α) :
|
||||
l.foldl f b = l.foldlM (m := Id) f b := by
|
||||
induction l generalizing b <;> simp [*, foldl]
|
||||
|
||||
theorem foldr_eq_foldrM (f : α → β → β) (b) (l : List α) :
|
||||
l.foldr f b = l.foldrM (m := Id) f b := by
|
||||
induction l <;> simp [*, foldr]
|
||||
|
||||
@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm
|
||||
|
||||
@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : List α) :
|
||||
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
|
||||
|
||||
@[simp] theorem foldlM_reverse [Monad m] (l : List α) (f : β → α → m β) (b) :
|
||||
l.reverse.foldlM f b = l.foldrM (fun x y => f y x) b := rfl
|
||||
|
||||
@[simp] theorem foldrM_reverse [Monad m] (l : List α) (f : α → β → m β) (b) :
|
||||
l.reverse.foldrM f b = l.foldlM (fun x y => f y x) b :=
|
||||
(foldlM_reverse ..).symm.trans <| by simp
|
||||
|
||||
/-! ### foldl and foldr -/
|
||||
|
||||
@[simp] theorem foldr_cons_eq_append (l : List α) : l.foldr cons l' = l ++ l' := by
|
||||
induction l <;> simp [*]
|
||||
|
||||
@[deprecated foldr_cons_eq_append (since := "2024-08-22")] abbrev foldr_self_append := @foldr_cons_eq_append
|
||||
|
||||
@[simp] theorem foldl_flip_cons_eq_append (l : List α) : l.foldl (fun x y => y :: x) l' = l.reverse ++ l' := by
|
||||
induction l generalizing l' <;> simp [*]
|
||||
|
||||
theorem foldr_cons_nil (l : List α) : l.foldr cons [] = l := by simp
|
||||
|
||||
@[deprecated foldr_cons_nil (since := "2024-09-04")] abbrev foldr_self := @foldr_cons_nil
|
||||
|
||||
theorem foldl_map (f : β₁ → β₂) (g : α → β₂ → α) (l : List β₁) (init : α) :
|
||||
(l.map f).foldl g init = l.foldl (fun x y => g x (f y)) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldr_map (f : α₁ → α₂) (g : α₂ → β → β) (l : List α₁) (init : β) :
|
||||
(l.map f).foldr g init = l.foldr (fun x y => g (f x) y) init := by
|
||||
induction l generalizing init <;> simp [*]
|
||||
|
||||
theorem foldl_filterMap (f : α → Option β) (g : γ → β → γ) (l : List α) (init : γ) :
|
||||
(l.filterMap f).foldl g init = l.foldl (fun x y => match f y with | some b => g x b | none => x) init := by
|
||||
induction l generalizing init with
|
||||
| nil => rfl
|
||||
| cons a l ih =>
|
||||
simp only [filterMap_cons, foldl_cons]
|
||||
cases f a <;> simp [ih]
|
||||
|
||||
theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : List α) (init : γ) :
|
||||
(l.filterMap f).foldr g init = l.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
|
||||
induction l generalizing init with
|
||||
| nil => rfl
|
||||
| cons a l ih =>
|
||||
simp only [filterMap_cons, foldr_cons]
|
||||
cases f a <;> simp [ih]
|
||||
|
||||
theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
|
||||
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
|
||||
(l.map g).foldl f' (g a) = g (l.foldl f a) := by
|
||||
induction l generalizing a
|
||||
· simp
|
||||
· simp [*, h]
|
||||
|
||||
theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
|
||||
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
|
||||
(l.map g).foldr f' (g a) = g (l.foldr f a) := by
|
||||
induction l generalizing a
|
||||
· simp
|
||||
· simp [*, h]
|
||||
|
||||
@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l l' : List α) :
|
||||
(l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
|
||||
induction l <;> simp [*]
|
||||
|
||||
@[simp] theorem foldl_append {β : Type _} (f : β → α → β) (b) (l l' : List α) :
|
||||
(l ++ l').foldl f b = l'.foldl f (l.foldl f b) := by simp [foldl_eq_foldlM]
|
||||
|
||||
@[simp] theorem foldr_append (f : α → β → β) (b) (l l' : List α) :
|
||||
(l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM]
|
||||
|
||||
theorem foldl_flatten (f : β → α → β) (b : β) (L : List (List α)) :
|
||||
(flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
|
||||
induction L generalizing b <;> simp_all
|
||||
|
||||
theorem foldr_flatten (f : α → β → β) (b : β) (L : List (List α)) :
|
||||
(flatten L).foldr f b = L.foldr (fun l b => l.foldr f b) b := by
|
||||
induction L <;> simp_all
|
||||
|
||||
@[simp] theorem foldl_reverse (l : List α) (f : β → α → β) (b) :
|
||||
l.reverse.foldl f b = l.foldr (fun x y => f y x) b := by simp [foldl_eq_foldlM, foldr_eq_foldrM]
|
||||
|
||||
@@ -2601,127 +2716,10 @@ theorem foldl_eq_foldr_reverse (l : List α) (f : β → α → β) (b) :
theorem foldr_eq_foldl_reverse (l : List α) (f : α → β → β) (b) :
    l.foldr f b = l.reverse.foldl (fun x y => f y x) b := by simp

theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldl op (op a₁ a₂) = op a₁ (l.foldl op a₂)
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldl_cons, ha.assoc]
    rw [foldl_assoc]

theorem foldr_assoc {op : α → α → α} [ha : Std.Associative op] :
    ∀ {l : List α} {a₁ a₂}, l.foldr op (op a₁ a₂) = op (l.foldr op a₁) a₂
  | [], a₁, a₂ => rfl
  | a :: l, a₁, a₂ => by
    simp only [foldr_cons, ha.assoc]
    rw [foldr_assoc]

theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : List β) (init : α₁)
    (H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
  induction l generalizing init <;> simp [*, H]

theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : List α) (init : β₁)
    (H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
  induction l <;> simp [*, H]

/--
Prove a proposition about the result of `List.foldl`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldlRecOn {motive : β → Sort _} : ∀ (l : List α) (op : β → α → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op b a)), motive (List.foldl op b l)
  | [], _, _, hb, _ => hb
  | hd :: tl, op, b, hb, hl =>
    foldlRecOn tl op (op b hd) (hl b hb hd (mem_cons_self hd tl))
      fun y hy x hx => hl y hy x (mem_cons_of_mem hd hx)

@[simp] theorem foldlRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op b a)) :
    foldlRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldlRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op b a)) :
    foldlRecOn (x :: l) op b hb hl =
      foldlRecOn l op (op b x) (hl b hb x (mem_cons_self x l))
        (fun b c a m => hl b c a (mem_cons_of_mem x m)) :=
  rfl

/--
Prove a proposition about the result of `List.foldr`,
by proving it for the initial data,
and the implication that the operation applied to any element of the list preserves the property.

The motive can take values in `Sort _`, so this may be used to construct data,
as well as to prove propositions.
-/
def foldrRecOn {motive : β → Sort _} : ∀ (l : List α) (op : α → β → β) (b : β) (_ : motive b)
    (_ : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ l), motive (op a b)), motive (List.foldr op b l)
  | nil, _, _, hb, _ => hb
  | x :: l, op, b, hb, hl =>
    hl (foldr op b l)
      (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m)) x (mem_cons_self x l)

@[simp] theorem foldrRecOn_nil {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ []), motive (op a b)) :
    foldrRecOn [] op b hb hl = hb := rfl

@[simp] theorem foldrRecOn_cons {motive : β → Sort _} (hb : motive b)
    (hl : ∀ (b : β) (_ : motive b) (a : α) (_ : a ∈ x :: l), motive (op a b)) :
    foldrRecOn (x :: l) op b hb hl =
      hl _ (foldrRecOn l op b hb fun b c a m => hl b c a (mem_cons_of_mem x m))
        x (mem_cons_self x l) :=
  rfl

/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldl_rel {l : List α} {f g : β → α → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
    r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldl_cons]
    apply ih
    · simp_all
    · exact fun a m c c' h => h' _ (by simp_all) _ _ h

/--
We can prove that two folds over the same list are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldr_rel {l : List α} {f g : α → β → β} {a b : β} (r : β → β → Prop)
    (h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
    r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
  induction l generalizing a b with
  | nil => simp_all
  | cons a l ih =>
    simp only [foldr_cons]
    apply h'
    · simp
    · exact ih h fun a m c c' h => h' _ (by simp_all) _ _ h

@[simp] theorem foldl_add_const (l : List α) (a b : Nat) :
    l.foldl (fun x _ => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldl_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc,
      Nat.add_comm a]

@[simp] theorem foldr_add_const (l : List α) (a b : Nat) :
    l.foldr (fun _ x => x + a) b = b + a * l.length := by
  induction l generalizing b with
  | nil => simp
  | cons y l ih =>
    simp only [foldr_cons, ih, length_cons, Nat.mul_add, Nat.mul_one, Nat.add_assoc]

@[simp] theorem reverse_replicate (n) (a : α) : reverse (replicate n a) = replicate n a :=
  eq_replicate_iff.2
    ⟨by rw [length_reverse, length_replicate],
     fun _ h => eq_of_mem_replicate (mem_reverse.1 h)⟩

/-! #### Further results about `getLast` and `getLast?` -/

@@ -510,18 +510,4 @@ theorem Perm.eraseP (f : α → Bool) {l₁ l₂ : List α}
  refine (IH₁ H).trans (IH₂ ((p₁.pairwise_iff ?_).1 H))
  exact fun h h₁ h₂ => h h₂ h₁

theorem perm_insertIdx {α} (x : α) (l : List α) {n} (h : n ≤ l.length) :
    insertIdx n x l ~ x :: l := by
  induction l generalizing n with
  | nil =>
    cases n with
    | zero => rfl
    | succ => cases h
  | cons _ _ ih =>
    cases n with
    | zero => simp [insertIdx]
    | succ =>
      simp only [insertIdx, modifyTailIdx]
      refine .trans (.cons _ (ih (Nat.le_of_succ_le_succ h))) (.swap ..)

end List

@@ -253,10 +253,6 @@ theorem merge_perm_append : ∀ {xs ys : List α}, merge xs ys le ~ xs ++ ys
    · exact (merge_perm_append.cons y).trans
        ((Perm.swap x y _).trans (perm_middle.symm.cons x))

theorem Perm.merge (s₁ s₂ : α → α → Bool) (hl : l₁ ~ l₂) (hr : r₁ ~ r₂) :
    merge l₁ r₁ s₁ ~ merge l₂ r₂ s₂ :=
  Perm.trans (merge_perm_append ..) <| Perm.trans (Perm.append hl hr) <| Perm.symm (merge_perm_append ..)

/-! ### mergeSort -/

@[simp] theorem mergeSort_nil : [].mergeSort r = [] := by rw [List.mergeSort]

@@ -136,18 +136,6 @@ This will perform the update destructively provided that the vector has a refere
@[inline] def set! (v : Vector α n) (i : Nat) (x : α) : Vector α n :=
  ⟨v.toArray.set! i x, by simp⟩

@[inline] def foldlM [Monad m] (f : β → α → m β) (b : β) (v : Vector α n) : m β :=
  v.toArray.foldlM f b

@[inline] def foldrM [Monad m] (f : α → β → m β) (b : β) (v : Vector α n) : m β :=
  v.toArray.foldrM f b

@[inline] def foldl (f : β → α → β) (b : β) (v : Vector α n) : β :=
  v.toArray.foldl f b

@[inline] def foldr (f : α → β → β) (b : β) (v : Vector α n) : β :=
  v.toArray.foldr f b

/-- Append two vectors. -/
@[inline] def append (v : Vector α n) (w : Vector α m) : Vector α (n + m) :=
  ⟨v.toArray ++ w.toArray, by simp⟩

@@ -66,18 +66,6 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem back?_mk (a : Array α) (h : a.size = n) :
    (Vector.mk a h).back? = a.back? := rfl

@[simp] theorem foldlM_mk [Monad m] (f : β → α → m β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldlM f b = a.foldlM f b := rfl

@[simp] theorem foldrM_mk [Monad m] (f : α → β → m β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldrM f b = a.foldrM f b := rfl

@[simp] theorem foldl_mk (f : β → α → β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldl f b = a.foldl f b := rfl

@[simp] theorem foldr_mk (f : α → β → β) (b : β) (a : Array α) (h : a.size = n) :
    (Vector.mk a h).foldr f b = a.foldr f b := rfl

@[simp] theorem drop_mk (a : Array α) (h : a.size = n) (m) :
    (Vector.mk a h).drop m = Vector.mk (a.extract m a.size) (by simp [h]) := rfl

@@ -1037,13 +1025,6 @@ theorem mem_setIfInBounds (v : Vector α n) (i : Nat) (hi : i < n) (a : α) :

/-! Content below this point has not yet been aligned with `List` and `Array`. -/

/-! ### map -/

@[simp] theorem getElem_map (f : α → β) (a : Vector α n) (i : Nat) (hi : i < n) :
    (a.map f)[i] = f a[i] := by
  cases a
  simp

@[simp] theorem getElem_ofFn {α n} (f : Fin n → α) (i : Nat) (h : i < n) :
    (Vector.ofFn f)[i] = f ⟨i, by simpa using h⟩ := by
  simp [ofFn]
@@ -1107,6 +1088,13 @@ theorem getElem_append_right {a : Vector α n} {b : Vector α m} {i : Nat} (h :
  cases a
  simp

/-! ### map -/

@[simp] theorem getElem_map (f : α → β) (a : Vector α n) (i : Nat) (hi : i < n) :
    (a.map f)[i] = f a[i] := by
  cases a
  simp

/-! ### zipWith -/

@[simp] theorem getElem_zipWith (f : α → β → γ) (a : Vector α n) (b : Vector β n) (i : Nat)
@@ -1115,37 +1103,6 @@ theorem getElem_append_right {a : Vector α n} {b : Vector α m} {i : Nat} (h :
  cases b
  simp

/-! ### foldlM and foldrM -/

@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l : Vector α n) (l' : Vector α n') :
    (l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
  cases l
  cases l'
  simp

@[simp] theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (l : Vector α n) (a : α) :
    (l.push a).foldrM f init = f a init >>= l.foldrM f := by
  cases l
  simp

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Vector α n) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  cases l
  simp [Array.foldl_eq_foldlM]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Vector α n) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  cases l
  simp [Array.foldr_eq_foldrM]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Vector α n) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Vector α n) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

/-! ### foldl and foldr -/

/-! ### take -/

@[simp] theorem take_size (a : Vector α n) : a.take n = a.cast (by simp) := by

@@ -8,7 +8,6 @@ import Init.Core
import Init.SimpLemmas
import Init.Classical
import Init.ByCases
import Init.Grind.Util

namespace Lean.Grind

@@ -51,19 +50,4 @@ theorem false_of_not_eq_self {a : Prop} (h : (Not a) = a) : False := by
theorem eq_eq_of_eq_true_left {a b : Prop} (h : a = True) : (a = b) = b := by simp [h]
theorem eq_eq_of_eq_true_right {a b : Prop} (h : b = True) : (a = b) = a := by simp [h]

theorem eq_congr {α : Sort u} {a₁ b₁ a₂ b₂ : α} (h₁ : a₁ = a₂) (h₂ : b₁ = b₂) : (a₁ = b₁) = (a₂ = b₂) := by simp [*]
theorem eq_congr' {α : Sort u} {a₁ b₁ a₂ b₂ : α} (h₁ : a₁ = b₂) (h₂ : b₁ = a₂) : (a₁ = b₁) = (a₂ = b₂) := by rw [h₁, h₂, Eq.comm (a := a₂)]

/-! Forall -/

theorem forall_propagator (p : Prop) (q : p → Prop) (q' : Prop) (h₁ : p = True) (h₂ : q (of_eq_true h₁) = q') : (∀ hp : p, q hp) = q' := by
  apply propext; apply Iff.intro
  · intro h'; exact Eq.mp h₂ (h' (of_eq_true h₁))
  · intro h'; intros; exact Eq.mpr h₂ h'

/-! dite -/

theorem dite_cond_eq_true' {α : Sort u} {c : Prop} {_ : Decidable c} {a : c → α} {b : ¬ c → α} {r : α} (h₁ : c = True) (h₂ : a (of_eq_true h₁) = r) : (dite c a b) = r := by simp [h₁, h₂]
theorem dite_cond_eq_false' {α : Sort u} {c : Prop} {_ : Decidable c} {a : c → α} {b : ¬ c → α} {r : α} (h₁ : c = False) (h₂ : b (of_eq_false h₁) = r) : (dite c a b) = r := by simp [h₁, h₂]

end Lean.Grind

@@ -5,7 +5,6 @@ Authors: Leonardo de Moura
-/
prelude
import Init.SimpLemmas
import Init.PropLemmas
import Init.Classical
import Init.ByCases

@@ -59,19 +58,13 @@ attribute [grind_norm] ite_true ite_false
@[grind_norm↓] theorem not_ite {_ : Decidable p} (q r : Prop) : (¬ite p q r) = ite p (¬q) (¬r) := by
  by_cases p <;> simp [*]

@[grind_norm] theorem ite_true_false {_ : Decidable p} : (ite p True False) = p := by
  by_cases p <;> simp

@[grind_norm] theorem ite_false_true {_ : Decidable p} : (ite p False True) = ¬p := by
  by_cases p <;> simp

-- Forall
@[grind_norm↓] theorem not_forall (p : α → Prop) : (¬∀ x, p x) = ∃ x, ¬p x := by simp
attribute [grind_norm] forall_and

-- Exists
@[grind_norm↓] theorem not_exists (p : α → Prop) : (¬∃ x, p x) = ∀ x, ¬p x := by simp
attribute [grind_norm] exists_const exists_or exists_prop exists_and_left exists_and_right
attribute [grind_norm] exists_const exists_or

-- Bool cond
@[grind_norm] theorem cond_eq_ite (c : Bool) (a b : α) : cond c a b = ite c a b := by
@@ -114,7 +107,4 @@ attribute [grind_norm] Nat.le_zero_eq
-- GT GE
attribute [grind_norm] GT.gt GE.ge

-- Succ
attribute [grind_norm] Nat.succ_eq_add_one

end Lean.Grind

@@ -6,42 +6,17 @@ Authors: Leonardo de Moura
prelude
import Init.Tactics

namespace Lean.Parser.Attr

syntax grindEq := "="
syntax grindEqBoth := "_=_"
syntax grindEqRhs := "=_"
syntax grindBwd := "←"
syntax grindFwd := "→"

syntax (name := grind) "grind" (grindEq <|> grindBwd <|> grindFwd <|> grindEqBoth <|> grindEqRhs)? : attr

end Lean.Parser.Attr

namespace Lean.Grind
/--
The configuration for `grind`.
Passed to `grind` using, for example, the `grind (config := { eager := true })` syntax.
-/
structure Config where
  /-- When `eager` is true (default: `false`), `grind` eagerly splits `if-then-else` and `match` expressions during internalization. -/
  eager : Bool := false
  /-- Maximum number of branches (i.e., case-splits) in the proof search tree. -/
  splits : Nat := 100
  /--
  Maximum number of E-matching (aka heuristic theorem instantiation)
  in a proof search tree branch.
  When `eager` is true (default: `false`), `grind` eagerly splits `if-then-else` and `match`
  expressions.
  -/
  ematch : Nat := 5
  /--
  Maximum term generation.
  The input goal terms have generation 0. When we instantiate a theorem using a term from generation `n`,
  the new terms have generation `n+1`. Thus, this parameter limits the length of an instantiation chain. -/
  gen : Nat := 5
  /-- Maximum number of theorem instances generated using E-matching in a proof search tree branch. -/
  instances : Nat := 1000
  /-- If `matchEqs` is `true`, `grind` uses `match`-equations as E-matching theorems. -/
  matchEqs : Bool := true
  eager : Bool := false
  deriving Inhabited, BEq

end Lean.Grind
@@ -52,7 +27,7 @@ namespace Lean.Parser.Tactic
`grind` tactic and related tactics.
-/

-- TODO: parameters
syntax (name := grind) "grind" optConfig ("on_failure " term)? : tactic
-- TODO: configuration option, parameters
syntax (name := grind) "grind" : tactic

end Lean.Parser.Tactic

@@ -11,16 +11,6 @@ namespace Lean.Grind
/-- A helper gadget for annotating nested proofs in goals. -/
def nestedProof (p : Prop) (h : p) : p := h

/--
Gadget for marking terms that should not be normalized by `grind`'s simplifier.
`grind` uses a simproc to implement this feature.
We use it when adding instances of `match`-equations to prevent them from being simplified to true.
-/
def doNotSimp {α : Sort u} (a : α) : α := a

/-- Gadget for representing offsets `t+k` in patterns. -/
def offset (a b : Nat) : Nat := a + b

set_option pp.proofs true

theorem nestedProof_congr (p q : Prop) (h : p = q) (hp : p) (hq : q) : HEq (nestedProof p hp) (nestedProof q hq) := by

@@ -1,13 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Init.Internal.Order

/-!
This directory is used for components of the standard library that are either considered
implementation details or not yet ready for public consumption, and that should be available
without explicit import (in contrast to `Std.Internal`).
-/
@@ -1,8 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Init.Internal.Order.Basic
import Init.Internal.Order.Tactic
@@ -1,693 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude

import Init.ByCases
import Init.RCases

/-!
This module contains some basic definitions and results from domain theory, intended to be used as
the underlying construction of the `partial_fixpoint` feature. It is not meant to be used as a
general purpose library for domain theory, but can be of interest to users who want to extend
the `partial_fixpoint` machinery (e.g. mark more functions as monotone or register more monads).

This follows the corresponding
[Isabelle development](https://isabelle.in.tum.de/library/HOL/HOL/Partial_Function.html), as also
described in [Alexander Krauss: Recursive Definitions of Monadic Functions](https://www21.in.tum.de/~krauss/papers/mrec.pdf).
-/

universe u v w

namespace Lean.Order

/--
A partial order is a reflexive, transitive and antisymmetric relation.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class PartialOrder (α : Sort u) where
  /--
  A “less-or-equal-to” or “approximates” relation.

  This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
  -/
  rel : α → α → Prop
  rel_refl : ∀ {x}, rel x x
  rel_trans : ∀ {x y z}, rel x y → rel y z → rel x z
  rel_antisymm : ∀ {x y}, rel x y → rel y x → x = y

@[inherit_doc] scoped infix:50 " ⊑ " => PartialOrder.rel

section PartialOrder

variable {α : Sort u} [PartialOrder α]

theorem PartialOrder.rel_of_eq {x y : α} (h : x = y) : x ⊑ y := by cases h; apply rel_refl

/--
A chain is a totally ordered set (representing a set as a predicate).

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def chain (c : α → Prop) : Prop := ∀ x y, c x → c y → x ⊑ y ∨ y ⊑ x

end PartialOrder

section CCPO

/--
A chain-complete partial order (CCPO) is a partial order where every chain has a least upper bound.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class CCPO (α : Sort u) extends PartialOrder α where
  /--
  The least upper bound of a chain.

  This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
  -/
  csup : (α → Prop) → α
  csup_spec {c : α → Prop} (hc : chain c) : csup c ⊑ x ↔ (∀ y, c y → y ⊑ x)

open PartialOrder CCPO

variable {α : Sort u} [CCPO α]

theorem csup_le {c : α → Prop} (hchain : chain c) : (∀ y, c y → y ⊑ x) → csup c ⊑ x :=
  (csup_spec hchain).mpr

theorem le_csup {c : α → Prop} (hchain : chain c) {y : α} (hy : c y) : y ⊑ csup c :=
  (csup_spec hchain).mp rel_refl y hy

/--
The bottom element is the least upper bound of the empty chain.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def bot : α := csup (fun _ => False)

scoped notation "⊥" => bot

theorem bot_le (x : α) : ⊥ ⊑ x := by
  apply csup_le
  · intro x y hx hy; contradiction
  · intro x hx; contradiction

end CCPO

section monotone

variable {α : Sort u} [PartialOrder α]
variable {β : Sort v} [PartialOrder β]

/--
A function is monotone if it maps related elements to related elements.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def monotone (f : α → β) : Prop := ∀ x y, x ⊑ y → f x ⊑ f y

theorem monotone_const (c : β) : monotone (fun (_ : α) => c) :=
  fun _ _ _ => PartialOrder.rel_refl

theorem monotone_id : monotone (fun (x : α) => x) :=
  fun _ _ hxy => hxy

theorem monotone_compose
    {γ : Sort w} [PartialOrder γ]
    {f : α → β} {g : β → γ}
    (hf : monotone f) (hg : monotone g) :
    monotone (fun x => g (f x)) := fun _ _ hxy => hg _ _ (hf _ _ hxy)

end monotone

section admissibility

variable {α : Sort u} [CCPO α]

open PartialOrder CCPO

/--
A predicate is admissible if it can be transferred from the elements of a chain to the chain's least
upper bound. Such predicates can be used in fixpoint induction.

This definition implies `P ⊥`. Sometimes (e.g. in Isabelle) the empty chain is excluded
from this definition, and `P ⊥` is a separate condition of the induction predicate.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def admissible (P : α → Prop) :=
  ∀ (c : α → Prop), chain c → (∀ x, c x → P x) → P (csup c)

theorem admissible_const_true : admissible (fun (_ : α) => True) :=
  fun _ _ _ => trivial

theorem admissible_and (P Q : α → Prop)
    (hadm₁ : admissible P) (hadm₂ : admissible Q) : admissible (fun x => P x ∧ Q x) :=
  fun c hchain h =>
    ⟨hadm₁ c hchain fun x hx => (h x hx).1,
     hadm₂ c hchain fun x hx => (h x hx).2⟩

theorem chain_conj (c P : α → Prop) (hchain : chain c) : chain (fun x => c x ∧ P x) := by
  intro x y ⟨hcx, _⟩ ⟨hcy, _⟩
  exact hchain x y hcx hcy

theorem csup_conj (c P : α → Prop) (hchain : chain c) (h : ∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ P y) :
    csup c = csup (fun x => c x ∧ P x) := by
  apply rel_antisymm
  · apply csup_le hchain
    intro x hcx
    obtain ⟨y, hcy, hxy, hPy⟩ := h x hcx
    apply rel_trans hxy; clear x hcx hxy
    apply le_csup (chain_conj _ _ hchain) ⟨hcy, hPy⟩
  · apply csup_le (chain_conj _ _ hchain)
    intro x ⟨hcx, hPx⟩
    apply le_csup hchain hcx

theorem admissible_or (P Q : α → Prop)
    (hadm₁ : admissible P) (hadm₂ : admissible Q) : admissible (fun x => P x ∨ Q x) := by
  intro c hchain h
  have : (∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ P y) ∨ (∀ x, c x → ∃ y, c y ∧ x ⊑ y ∧ Q y) := by
    open Classical in
    apply Decidable.or_iff_not_imp_left.mpr
    intro h'
    simp only [not_forall, not_imp, not_exists, not_and] at h'
    obtain ⟨x, hcx, hx⟩ := h'
    intro y hcy
    cases hchain x y hcx hcy with
    | inl hxy =>
      refine ⟨y, hcy, rel_refl, ?_⟩
      cases h y hcy with
      | inl hPy => exfalso; apply hx y hcy hxy hPy
      | inr hQy => assumption
    | inr hyx =>
      refine ⟨x, hcx, hyx, ?_⟩
      cases h x hcx with
      | inl hPx => exfalso; apply hx x hcx rel_refl hPx
      | inr hQx => assumption
  cases this with
  | inl hP =>
    left
    rw [csup_conj (h := hP) (hchain := hchain)]
    apply hadm₁ _ (chain_conj _ _ hchain)
    intro x ⟨hcx, hPx⟩
    exact hPx
  | inr hQ =>
    right
    rw [csup_conj (h := hQ) (hchain := hchain)]
    apply hadm₂ _ (chain_conj _ _ hchain)
    intro x ⟨hcx, hQx⟩
    exact hQx

def admissible_pi (P : α → β → Prop)
    (hadm₁ : ∀ y, admissible (fun x => P x y)) : admissible (fun x => ∀ y, P x y) :=
  fun c hchain h y => hadm₁ y c hchain fun x hx => h x hx y

end admissibility

section fix

open PartialOrder CCPO

variable {α : Sort u} [CCPO α]

variable {c : α → Prop} (hchain : chain c)

/--
The transfinite iteration of a function `f` is a set that contains `⊥` and is closed under application
of `f` and `csup`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
inductive iterates (f : α → α) : α → Prop where
  | step : iterates f x → iterates f (f x)
  | sup {c : α → Prop} (hc : chain c) (hi : ∀ x, c x → iterates f x) : iterates f (csup c)

theorem chain_iterates {f : α → α} (hf : monotone f) : chain (iterates f) := by
  intros x y hx hy
  induction hx generalizing y
  case step x hx ih =>
    induction hy
    case step y hy _ =>
      cases ih y hy
      · left; apply hf; assumption
      · right; apply hf; assumption
    case sup c hchain hi ih2 =>
      show f x ⊑ csup c ∨ csup c ⊑ f x
      by_cases h : ∃ z, c z ∧ f x ⊑ z
      · left
        obtain ⟨z, hz, hfz⟩ := h
        apply rel_trans hfz
        apply le_csup hchain hz
      · right
        apply csup_le hchain _
        intro z hz
        rw [not_exists] at h
        specialize h z
        rw [not_and] at h
        specialize h hz
        cases ih2 z hz
        next => contradiction
        next => assumption
  case sup c hchain hi ih =>
    show rel (csup c) y ∨ rel y (csup c)
    by_cases h : ∃ z, c z ∧ rel y z
    · right
      obtain ⟨z, hz, hfz⟩ := h
      apply rel_trans hfz
      apply le_csup hchain hz
    · left
      apply csup_le hchain _
      intro z hz
      rw [not_exists] at h
      specialize h z
      rw [not_and] at h
      specialize h hz
      cases ih z hz y hy
      next => assumption
      next => contradiction

theorem rel_f_of_iterates {f : α → α} (hf : monotone f) {x : α} (hx : iterates f x) : x ⊑ f x := by
  induction hx
  case step ih =>
    apply hf
    assumption
  case sup c hchain hi ih =>
    apply csup_le hchain
    intro y hy
    apply rel_trans (ih y hy)
    apply hf
    apply le_csup hchain hy

set_option linter.unusedVariables false in
/--
The least fixpoint of a monotone function is the least upper bound of its transfinite iteration.

The `monotone f` assumption is not strictly necessary for the definition, but without it the
definition is not very meaningful, and it simplifies applying theorems like `fix_eq` if every use of
`fix` already has the monotonicity requirement.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def fix (f : α → α) (hmono : monotone f) := csup (iterates f)

/--
The main fixpoint theorem for fixpoints of monotone functions in chain-complete partial orders.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
theorem fix_eq {f : α → α} (hf : monotone f) : fix f hf = f (fix f hf) := by
  apply rel_antisymm
  · apply rel_f_of_iterates hf
    apply iterates.sup (chain_iterates hf)
    exact fun _ h => h
  · apply le_csup (chain_iterates hf)
    apply iterates.step
    apply iterates.sup (chain_iterates hf)
    intro y hy
    exact hy

/--
The fixpoint induction theorem: an admissible predicate holds for a least fixpoint if it is preserved
by the fixpoint's function.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
theorem fix_induct {f : α → α} (hf : monotone f)
    (motive : α → Prop) (hadm : admissible motive)
    (h : ∀ x, motive x → motive (f x)) : motive (fix f hf) := by
  apply hadm _ (chain_iterates hf)
  intro x hiterates
  induction hiterates with
  | @step x hiter ih => apply h x ih
  | @sup c hchain hiter ih => apply hadm c hchain ih

end fix

section fun_order

open PartialOrder

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

instance instOrderPi [∀ x, PartialOrder (β x)] : PartialOrder (∀ x, β x) where
  rel f g := ∀ x, f x ⊑ g x
  rel_refl _ := rel_refl
  rel_trans hf hg x := rel_trans (hf x) (hg x)
  rel_antisymm hf hg := funext (fun x => rel_antisymm (hf x) (hg x))

theorem monotone_of_monotone_apply [PartialOrder γ] [∀ x, PartialOrder (β x)] (f : γ → (∀ x, β x))
    (h : ∀ y, monotone (fun x => f x y)) : monotone f :=
  fun x y hxy z => h z x y hxy

theorem monotone_apply [PartialOrder γ] [∀ x, PartialOrder (β x)] (a : α) (f : γ → ∀ x, β x)
    (h : monotone f) :
    monotone (fun x => f x a) := fun _ _ hfg => h _ _ hfg a

theorem chain_apply [∀ x, PartialOrder (β x)] {c : (∀ x, β x) → Prop} (hc : chain c) (x : α) :
    chain (fun y => ∃ f, c f ∧ f x = y) := by
  intro _ _ ⟨f, hf, hfeq⟩ ⟨g, hg, hgeq⟩
  subst hfeq; subst hgeq
  cases hc f g hf hg
  next h => left; apply h x
  next h => right; apply h x

def fun_csup [∀ x, CCPO (β x)] (c : (∀ x, β x) → Prop) (x : α) :=
  CCPO.csup (fun y => ∃ f, c f ∧ f x = y)

instance instCCPOPi [∀ x, CCPO (β x)] : CCPO (∀ x, β x) where
  csup := fun_csup
  csup_spec := by
    intro f c hc
    constructor
    next =>
      intro hf g hg x
      apply rel_trans _ (hf x); clear hf
      apply le_csup (chain_apply hc x)
      exact ⟨g, hg, rfl⟩
    next =>
      intro h x
      apply csup_le (chain_apply hc x)
      intro y ⟨z, hz, hyz⟩
      subst y
      apply h z hz

def admissible_apply [∀ x, CCPO (β x)] (P : ∀ x, β x → Prop) (x : α)
    (hadm : admissible (P x)) : admissible (fun (f : ∀ x, β x) => P x (f x)) := by
  intro c hchain h
  apply hadm _ (chain_apply hchain x)
  rintro _ ⟨f, hcf, rfl⟩
  apply h _ hcf

def admissible_pi_apply [∀ x, CCPO (β x)] (P : ∀ x, β x → Prop) (hadm : ∀ x, admissible (P x)) :
    admissible (fun (f : ∀ x, β x) => ∀ x, P x (f x)) := by
  apply admissible_pi
  intro
  apply admissible_apply
  apply hadm

end fun_order

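For orientation, a short editorial sketch (not part of the mirrored source): `instOrderPi` defines the order on functions pointwise, so a pointwise proof is, definitionally, a proof of the function ordering.

```lean
-- Editorial sketch (hypothetical, not in the source): `instOrderPi` defines
-- `f ⊑ g` to be `∀ x, f x ⊑ g x`, so a pointwise proof works definitionally.
example {α : Sort u} {β : α → Sort v} [∀ x, PartialOrder (β x)]
    {f g : ∀ x, β x} (h : ∀ x, f x ⊑ g x) : f ⊑ g := h
```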
section monotone_lemmas

theorem monotone_letFun
    {α : Sort u} {β : Sort v} {γ : Sort w} [PartialOrder α] [PartialOrder β]
    (v : γ) (k : α → γ → β)
    (hmono : ∀ y, monotone (fun x => k x y)) :
    monotone fun (x : α) => letFun v (k x) := hmono v

theorem monotone_ite
    {α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
    (c : Prop) [Decidable c]
    (k₁ : α → β) (k₂ : α → β)
    (hmono₁ : monotone k₁) (hmono₂ : monotone k₂) :
    monotone fun x => if c then k₁ x else k₂ x := by
  split
  · apply hmono₁
  · apply hmono₂

theorem monotone_dite
    {α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
    (c : Prop) [Decidable c]
    (k₁ : α → c → β) (k₂ : α → ¬ c → β)
    (hmono₁ : monotone k₁) (hmono₂ : monotone k₂) :
    monotone fun x => dite c (k₁ x) (k₂ x) := by
  split
  · apply monotone_apply _ _ hmono₁
  · apply monotone_apply _ _ hmono₂

end monotone_lemmas

section pprod_order

open PartialOrder

variable {α : Sort u}
variable {β : Sort v}
variable {γ : Sort w}

instance [PartialOrder α] [PartialOrder β] : PartialOrder (α ×' β) where
  rel a b := a.1 ⊑ b.1 ∧ a.2 ⊑ b.2
  rel_refl := ⟨rel_refl, rel_refl⟩
  rel_trans ha hb := ⟨rel_trans ha.1 hb.1, rel_trans ha.2 hb.2⟩
  rel_antisymm := fun {a} {b} ha hb => by
    cases a; cases b
    dsimp at *
    rw [rel_antisymm ha.1 hb.1, rel_antisymm ha.2 hb.2]

theorem monotone_pprod [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α} {g : γ → β} (hf : monotone f) (hg : monotone g) :
    monotone (fun x => PProd.mk (f x) (g x)) :=
  fun _ _ h12 => ⟨hf _ _ h12, hg _ _ h12⟩

theorem monotone_pprod_fst [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).1) :=
  fun _ _ h12 => (hf _ _ h12).1

theorem monotone_pprod_snd [PartialOrder α] [PartialOrder β] [PartialOrder γ]
    {f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).2) :=
  fun _ _ h12 => (hf _ _ h12).2

def chain_pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) : α → Prop := fun a => ∃ b, c ⟨a, b⟩
def chain_pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) : β → Prop := fun b => ∃ a, c ⟨a, b⟩

theorem chain.pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
    chain (chain_pprod_fst c) := by
  intro a₁ a₂ ⟨b₁, h₁⟩ ⟨b₂, h₂⟩
  cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
  case inl h => left; exact h.1
  case inr h => right; exact h.1

theorem chain.pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
    chain (chain_pprod_snd c) := by
  intro b₁ b₂ ⟨a₁, h₁⟩ ⟨a₂, h₂⟩
  cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
  case inl h => left; exact h.2
  case inr h => right; exact h.2

instance [CCPO α] [CCPO β] : CCPO (α ×' β) where
  csup c := ⟨CCPO.csup (chain_pprod_fst c), CCPO.csup (chain_pprod_snd c)⟩
  csup_spec := by
    intro ⟨a, b⟩ c hchain
    dsimp
    constructor
    next =>
      intro ⟨h₁, h₂⟩ ⟨a', b'⟩ cab
      constructor <;> dsimp at *
      · apply rel_trans ?_ h₁
        apply le_csup hchain.pprod_fst
        exact ⟨b', cab⟩
      · apply rel_trans ?_ h₂
        apply le_csup hchain.pprod_snd
        exact ⟨a', cab⟩
    next =>
      intro h
      constructor <;> dsimp
      · apply csup_le hchain.pprod_fst
        intro a' ⟨b', hcab⟩
        apply (h _ hcab).1
      · apply csup_le hchain.pprod_snd
        intro b' ⟨a', hcab⟩
        apply (h _ hcab).2

theorem admissible_pprod_fst {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : α → Prop)
    (hadm : admissible P) : admissible (fun (x : α ×' β) => P x.1) := by
  intro c hchain h
  apply hadm _ hchain.pprod_fst
  intro x ⟨y, hxy⟩
  apply h ⟨x, y⟩ hxy

theorem admissible_pprod_snd {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : β → Prop)
    (hadm : admissible P) : admissible (fun (x : α ×' β) => P x.2) := by
  intro c hchain h
  apply hadm _ hchain.pprod_snd
  intro y ⟨x, hxy⟩
  apply h ⟨x, y⟩ hxy

end pprod_order

section flat_order

variable {α : Sort u}

set_option linter.unusedVariables false in
/--
`FlatOrder b` wraps the type `α` with the flat partial order generated by `∀ x, b ⊑ x`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
def FlatOrder {α : Sort u} (b : α) := α

variable {b : α}

/--
The flat partial order generated by `∀ x, b ⊑ x`.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
inductive FlatOrder.rel : (x y : FlatOrder b) → Prop where
  | bot : rel b x
  | refl : rel x x

instance FlatOrder.instOrder : PartialOrder (FlatOrder b) where
  rel := rel
  rel_refl := .refl
  rel_trans {x y z : α} (hxy : rel x y) (hyz : rel y z) := by
    cases hxy <;> cases hyz <;> constructor
  rel_antisymm {x y : α} (hxy : rel x y) (hyz : rel y x) : x = y := by
    cases hxy <;> cases hyz <;> constructor

open Classical in
private theorem Classical.some_spec₂ {α : Sort _} {p : α → Prop} {h : ∃ a, p a} (q : α → Prop)
    (hpq : ∀ a, p a → q a) : q (choose h) := hpq _ <| choose_spec _

noncomputable def flat_csup (c : FlatOrder b → Prop) : FlatOrder b := by
  by_cases h : ∃ (x : FlatOrder b), c x ∧ x ≠ b
  · exact Classical.choose h
  · exact b

noncomputable instance FlatOrder.instCCPO : CCPO (FlatOrder b) where
  csup := flat_csup
  csup_spec := by
    intro x c hc
    unfold flat_csup
    split
    next hex =>
      apply Classical.some_spec₂ (q := (· ⊑ x ↔ (∀ y, c y → y ⊑ x)))
      clear hex
      intro z ⟨hz, hnb⟩
      constructor
      · intro h y hy
        apply PartialOrder.rel_trans _ h; clear h
        cases hc y z hy hz
        next => assumption
        next h =>
          cases h
          · contradiction
          · constructor
      · intro h
        cases h z hz
        · contradiction
        · constructor
    next hnotex =>
      constructor
      · intro h y hy; clear h
        suffices y = b by rw [this]; exact rel.bot
        rw [not_exists] at hnotex
        specialize hnotex y
        rw [not_and] at hnotex
        specialize hnotex hy
        rw [@Classical.not_not] at hnotex
        assumption
      · intro; exact rel.bot

theorem admissible_flatOrder (P : FlatOrder b → Prop) (hnot : P b) : admissible P := by
  intro c hchain h
  by_cases h' : ∃ (x : FlatOrder b), c x ∧ x ≠ b
  · simp [CCPO.csup, flat_csup, h']
    apply Classical.some_spec₂ (q := (P ·))
    intro x ⟨hcx, hneb⟩
    apply h x hcx
  · simp [CCPO.csup, flat_csup, h', hnot]

end flat_order

section mono_bind

/--
The class `MonoBind m` indicates that every `m α` has a `PartialOrder`, and that the bind operation
on `m` is monotone in both arguments with regard to that order.

This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class MonoBind (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] where
  bind_mono_left {a₁ a₂ : m α} {f : α → m β} (h : a₁ ⊑ a₂) : a₁ >>= f ⊑ a₂ >>= f
  bind_mono_right {a : m α} {f₁ f₂ : α → m β} (h : ∀ x, f₁ x ⊑ f₂ x) : a >>= f₁ ⊑ a >>= f₂

theorem monotone_bind
    (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] [MonoBind m]
    {α β : Type u}
    {γ : Type w} [PartialOrder γ]
    (f : γ → m α) (g : γ → α → m β)
    (hmono₁ : monotone f)
    (hmono₂ : monotone g) :
    monotone (fun (x : γ) => f x >>= g x) := by
  intro x₁ x₂ hx₁₂
  apply PartialOrder.rel_trans
  · apply MonoBind.bind_mono_left (hmono₁ _ _ hx₁₂)
  · apply MonoBind.bind_mono_right (fun y => monotone_apply y _ hmono₂ _ _ hx₁₂)

instance : PartialOrder (Option α) := inferInstanceAs (PartialOrder (FlatOrder none))
noncomputable instance : CCPO (Option α) := inferInstanceAs (CCPO (FlatOrder none))
noncomputable instance : MonoBind Option where
  bind_mono_left h := by
    cases h
    · exact FlatOrder.rel.bot
    · exact FlatOrder.rel.refl
  bind_mono_right h := by
    cases ‹Option _›
    · exact FlatOrder.rel.refl
    · exact h _

theorem admissible_eq_some (P : Prop) (y : α) :
    admissible (fun (x : Option α) => x = some y → P) := by
  apply admissible_flatOrder; simp

instance [Monad m] [inst : ∀ α, PartialOrder (m α)] : PartialOrder (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [inst : ∀ α, CCPO (m α)] : CCPO (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [∀ α, CCPO (m α)] [MonoBind m] : MonoBind (ExceptT ε m) where
  bind_mono_left h₁₂ := by
    apply MonoBind.bind_mono_left (m := m)
    exact h₁₂
  bind_mono_right h₁₂ := by
    apply MonoBind.bind_mono_right (m := m)
    intro x
    cases x
    · apply PartialOrder.rel_refl
    · apply h₁₂

end mono_bind

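As a hedged illustration of the `Option` instances defined above (an editorial sketch, not in the source; it assumes `⊑` unfolds through `inferInstanceAs`, which is a definitional step): `none` sits below every value, and distinct `some`s are incomparable.

```lean
-- Editorial sketch (hypothetical, not in the source): the flat order on `Option`.
example : (none : Option Nat) ⊑ some 3 := FlatOrder.rel.bot    -- `none` is the bottom element
example : (some 3 : Option Nat) ⊑ some 3 := FlatOrder.rel.refl -- otherwise only reflexivity
```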
namespace Example

def findF (P : Nat → Bool) (rec : Nat → Option Nat) (x : Nat) : Option Nat :=
  if P x then
    some x
  else
    rec (x + 1)

noncomputable def find (P : Nat → Bool) : Nat → Option Nat := fix (findF P) <| by
  unfold findF
  apply monotone_of_monotone_apply
  intro n
  split
  · apply monotone_const
  · apply monotone_apply
    apply monotone_id

theorem find_eq : find P = findF P (find P) := fix_eq ..

theorem find_spec : ∀ n m, find P n = some m → n ≤ m ∧ P m := by
  unfold find
  refine fix_induct (motive := fun (f : Nat → Option Nat) => ∀ n m, f n = some m → n ≤ m ∧ P m) _ ?hadm ?hstep
  case hadm =>
    -- `apply admissible_pi_apply` does not work well here; it is hard to infer all the arguments
    exact admissible_pi_apply _ (fun n => admissible_pi _ (fun m => admissible_eq_some _ m))
  case hstep =>
    intro f ih n m heq
    simp only [findF] at heq
    split at heq
    · simp_all
    · obtain ⟨ih1, ih2⟩ := ih _ _ heq
      constructor
      · exact Nat.le_trans (Nat.le_add_right _ _) ih1
      · exact ih2

end Example

end Lean.Order
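The order-theoretic machinery above is the semantic backbone of the `partial_fixpoint` feature; a hedged sketch of the user-facing surface it supports (the function name `find'` is hypothetical, not part of the diff) would be:

```lean
-- Editorial sketch (hypothetical usage, not part of the diff): `partial_fixpoint`
-- lets one define a possibly non-terminating function into a `MonoBind` monad such
-- as `Option`, with monotonicity discharged by the machinery above.
def find' (P : Nat → Bool) (x : Nat) : Option Nat :=
  if P x then some x else find' P (x + 1)
partial_fixpoint
```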
@@ -1,20 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/

prelude
import Init.Notation

namespace Lean.Order
/--
`monotonicity` performs one compositional step solving `monotone` goals,
using lemmas tagged with `@[partial_fixpoint_monotone]`.

This tactic is mostly used internally by Lean in `partial_fixpoint` definitions, but
can be useful on its own for debugging or when proving new `@[partial_fixpoint_monotone]` lemmas.
-/
scoped syntax (name := monotonicity) "monotonicity" : tactic

end Lean.Order
@@ -6,7 +6,6 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.Lsp.Basic
import Lean.Data.Lsp.CancelParams
import Lean.Data.Lsp.Capabilities
import Lean.Data.Lsp.Client
import Lean.Data.Lsp.Communication

@@ -6,6 +6,7 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.Json
import Lean.Data.JsonRpc

/-! Defines most of the 'Basic Structures' in the LSP specification
(https://microsoft.github.io/language-server-protocol/specifications/specification-current/),
@@ -18,6 +19,10 @@ namespace Lsp

open Json

structure CancelParams where
  id : JsonRpc.RequestID
  deriving Inhabited, BEq, ToJson, FromJson

abbrev DocumentUri := String

/-- We adopt the convention that zero-based UTF-16 positions as sent by LSP clients
@@ -1,25 +0,0 @@
/-
Copyright (c) 2020 Marc Huisinga. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.

Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Lean.Data.JsonRpc

/-! # Defines `Lean.Lsp.CancelParams`.

This is separate from `Lean.Data.Lsp.Basic` to reduce transitive dependencies.
-/

namespace Lean
namespace Lsp

open Json

structure CancelParams where
  id : JsonRpc.RequestID
  deriving Inhabited, BEq, ToJson, FromJson

end Lsp
end Lean
@@ -6,6 +6,7 @@ Authors: Marc Huisinga, Wojciech Nawrocki
-/
prelude
import Init.Data.String
import Init.Data.Array
import Lean.Data.Lsp.Basic
import Lean.Data.Position
import Lean.DeclarationRange

@@ -49,8 +49,3 @@ variable {_ : BEq α} {_ : Hashable α}

@[inline] def fold {β : Type v} (f : β → α → β) (init : β) (s : PersistentHashSet α) : β :=
  Id.run $ s.foldM f init

def toList (s : PersistentHashSet α) : List α :=
  s.set.toList.map (·.1)

end PersistentHashSet

@@ -131,18 +131,14 @@ def throwCalcFailure (steps : Array CalcStepView) (expectedType result : Expr) :
    if ← isDefEqGuarded r er then
      let mut failed := false
      unless ← isDefEqGuarded lhs elhs do
        let (lhs, elhs) ← addPPExplicitToExposeDiff lhs elhs
        let (lhsTy, elhsTy) ← addPPExplicitToExposeDiff (← inferType lhs) (← inferType elhs)
        logErrorAt steps[0]!.term m!"\
          invalid 'calc' step, left-hand side is{indentD m!"{lhs} : {lhsTy}"}\n\
          but is expected to be{indentD m!"{elhs} : {elhsTy}"}"
          invalid 'calc' step, left-hand side is{indentD m!"{lhs} : {← inferType lhs}"}\n\
          but is expected to be{indentD m!"{elhs} : {← inferType elhs}"}"
        failed := true
      unless ← isDefEqGuarded rhs erhs do
        let (rhs, erhs) ← addPPExplicitToExposeDiff rhs erhs
        let (rhsTy, erhsTy) ← addPPExplicitToExposeDiff (← inferType rhs) (← inferType erhs)
        logErrorAt steps.back!.term m!"\
          invalid 'calc' step, right-hand side is{indentD m!"{rhs} : {rhsTy}"}\n\
          but is expected to be{indentD m!"{erhs} : {erhsTy}"}"
          invalid 'calc' step, right-hand side is{indentD m!"{rhs} : {← inferType rhs}"}\n\
          but is expected to be{indentD m!"{erhs} : {← inferType erhs}"}"
        failed := true
      if failed then
        throwAbortTerm

@@ -38,7 +38,6 @@ def elabCheckTactic : CommandElab := fun stx => do
  | [next] => do
    let (val, _, _) ← matchCheckGoalType stx (← next.getType)
    if !(← Meta.withReducible <| isDefEq val expTerm) then
      let (val, expTerm) ← addPPExplicitToExposeDiff val expTerm
      throwErrorAt stx
        m!"Term reduces to{indentExpr val}\nbut is expected to reduce to {indentExpr expTerm}"
  | _ => do

@@ -16,4 +16,3 @@ import Lean.Elab.Deriving.FromToJson
import Lean.Elab.Deriving.SizeOf
import Lean.Elab.Deriving.Hashable
import Lean.Elab.Deriving.Ord
import Lean.Elab.Deriving.ToExpr

@@ -1,237 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kyle Miller
-/
prelude
import Lean.Meta.Transform
import Lean.Elab.Deriving.Basic
import Lean.Elab.Deriving.Util
import Lean.ToLevel
import Lean.ToExpr

/-!
# `ToExpr` deriving handler

This module defines a `ToExpr` deriving handler for inductive types.
It supports mutually inductive types as well.

The `ToExpr` deriving handlers support universe level polymorphism, via the `Lean.ToLevel` class.
To use `ToExpr` in places where there is universe polymorphism, make sure a `[ToLevel.{u}]` instance is available,
though be aware that the `ToLevel` mechanism does not support `max` or `imax` expressions.

Implementation note: this deriving handler was initially modeled after the `Repr` deriving handler, but
1. we need to account for universe levels,
2. the `ToExpr` class has two fields rather than one, and
3. we don't handle structures specially.
-/

namespace Lean.Elab.Deriving.ToExpr

open Lean Elab Parser.Term
open Meta Command Deriving

/--
Given `args := #[e₁, e₂, …, eₙ]`, constructs the syntax `Expr.app (… (Expr.app (Expr.app f e₁) e₂) …) eₙ`.
-/
def mkAppNTerm (f : Term) (args : Array Term) : MetaM Term :=
  args.foldlM (fun a b => ``(Expr.app $a $b)) f

/-- Fixes the output of `mkInductiveApp` to explicitly reference universe levels. -/
def updateIndType (indVal : InductiveVal) (t : Term) : TermElabM Term :=
  let levels := indVal.levelParams.toArray.map mkIdent
  match t with
  | `(@$f $args*) => `(@$f.{$levels,*} $args*)
  | _ => throwError "(internal error) expecting output of `mkInductiveApp`"

/--
Creates a term that evaluates to an expression representing the inductive type.
Uses `toExpr` and `toTypeExpr` for the arguments to the type constructor.
-/
def mkToTypeExpr (indVal : InductiveVal) (argNames : Array Name) : TermElabM Term := do
  let levels ← indVal.levelParams.toArray.mapM (fun u => `(Lean.toLevel.{$(mkIdent u)}))
  forallTelescopeReducing indVal.type fun xs _ => do
    let mut args : Array Term := #[]
    for argName in argNames, x in xs do
      let a := mkIdent argName
      if ← Meta.isType x then
        args := args.push <| ← ``(toTypeExpr $a)
      else
        args := args.push <| ← ``(toExpr $a)
    mkAppNTerm (← ``(Expr.const $(quote indVal.name) [$levels,*])) args

/--
Creates the body of the `toExpr` function for the `ToExpr` instance, which is a `match` expression
that calls `toExpr` and `toTypeExpr` to assemble an expression for a given term.
For recursive inductive types, `auxFunName` refers to the `ToExpr` instance for the current type.
For mutually recursive types, we rely on the local instances set up by `mkLocalInstanceLetDecls`.
-/
def mkToExprBody (header : Header) (indVal : InductiveVal) (auxFunName : Name) (levelInsts : Array Term) :
    TermElabM Term := do
  let discrs ← mkDiscrs header indVal
  let alts ← mkAlts
  `(match $[$discrs],* with $alts:matchAlt*)
where
  /-- Create the `match` cases, one per constructor. -/
  mkAlts : TermElabM (Array (TSyntax ``matchAlt)) := do
    let levels ← levelInsts.mapM fun inst => `($(inst).toLevel)
    let mut alts := #[]
    for ctorName in indVal.ctors do
      let ctorInfo ← getConstInfoCtor ctorName
      let alt ← forallTelescopeReducing ctorInfo.type fun xs _ => do
        let mut patterns := #[]
        -- add `_` pattern for indices, before the constructor's pattern
        for _ in [:indVal.numIndices] do
          patterns := patterns.push (← `(_))
        let mut ctorArgs := #[]
        let mut rhsArgs : Array Term := #[]
        let mkArg (x : Expr) (a : Term) : TermElabM Term := do
          if (← inferType x).isAppOf indVal.name then
            `($(mkIdent auxFunName) $levelInsts* $a)
          else if ← Meta.isType x then
            ``(toTypeExpr $a)
          else
            ``(toExpr $a)
        -- add `_` pattern for inductive parameters, which are inaccessible
        for i in [:ctorInfo.numParams] do
          let a := mkIdent header.argNames[i]!
          ctorArgs := ctorArgs.push (← `(_))
          rhsArgs := rhsArgs.push <| ← mkArg xs[i]! a
        for i in [:ctorInfo.numFields] do
          let a := mkIdent (← mkFreshUserName `a)
          ctorArgs := ctorArgs.push a
          rhsArgs := rhsArgs.push <| ← mkArg xs[ctorInfo.numParams + i]! a
        patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs:term*))
        let rhs : Term ← mkAppNTerm (← ``(Expr.const $(quote ctorInfo.name) [$levels,*])) rhsArgs
        `(matchAltExpr| | $[$patterns:term],* => $rhs)
      alts := alts.push alt
    return alts

/--
For nested and mutually recursive inductive types, we define `partial` instances,
and the strategy is to have local `ToExpr` instances in scope for the body of each instance.
This way, each instance can freely use `toExpr` and `toTypeExpr` for each of the types in `ctx`.

This is a modified copy of `Lean.Elab.Deriving.mkLocalInstanceLetDecls`,
since we need to include the `toTypeExpr` field in the `letDecl`.
Note that, for simplicity, each instance gets its own definition of each others' `toTypeExpr` fields.
These are very simple fields, so avoiding the duplication is not worth it.
-/
def mkLocalInstanceLetDecls (ctx : Deriving.Context) (argNames : Array Name) (levelInsts : Array Term) :
    TermElabM (Array (TSyntax ``Parser.Term.letDecl)) := do
  let mut letDecls := #[]
  for indVal in ctx.typeInfos, auxFunName in ctx.auxFunNames do
    let currArgNames ← mkInductArgNames indVal
    let numParams := indVal.numParams
    let currIndices := currArgNames[numParams:]
    let binders ← mkImplicitBinders currIndices
    let argNamesNew := argNames[:numParams] ++ currIndices
    let indType ← mkInductiveApp indVal argNamesNew
    let instName ← mkFreshUserName `localinst
    let toTypeExpr ← mkToTypeExpr indVal argNames
    -- Recall that mutually inductive types all use the same universe levels, hence we pass the same ToLevel instances to each aux function.
    let letDecl ← `(Parser.Term.letDecl| $(mkIdent instName):ident $binders:implicitBinder* : ToExpr $indType :=
      { toExpr := $(mkIdent auxFunName) $levelInsts*,
        toTypeExpr := $toTypeExpr })
    letDecls := letDecls.push letDecl
  return letDecls

open TSyntax.Compat in
/--
Makes a `toExpr` function for the given inductive type.
The implementation of each `toExpr` function for a (mutual) inductive type is given as a top-level private definition.
These are assembled into `ToExpr` instances in `mkInstanceCmds`.
For mutual/nested inductive types, each of the types' `ToExpr` instances are provided as local instances,
to wire together the recursion (necessitating these auxiliary definitions being `partial`).
-/
def mkAuxFunction (ctx : Deriving.Context) (i : Nat) : TermElabM Command := do
  let auxFunName := ctx.auxFunNames[i]!
  let indVal := ctx.typeInfos[i]!
  let header ← mkHeader ``ToExpr 1 indVal
  /- We make the `ToLevel` instances be explicit here so that we can pass the instances from the instances to the
    aux functions. This lets us ensure universe level variables are being lined up,
    without needing to use `ident.{u₁,…,uₙ}` syntax, which could conditionally be incorrect
    depending on the ambient CommandElabM scope state.
    TODO(kmill): deriving handlers should run in a scope with no `universes` or `variables`. -/
  let (toLevelInsts, levelBinders) := Array.unzip <| ← indVal.levelParams.toArray.mapM fun u => do
    let inst := mkIdent (← mkFreshUserName `inst)
    return (inst, ← `(explicitBinderF| ($inst : ToLevel.{$(mkIdent u)})))
  let mut body ← mkToExprBody header indVal auxFunName toLevelInsts
  if ctx.usePartial then
    let letDecls ← mkLocalInstanceLetDecls ctx header.argNames toLevelInsts
    body ← mkLet letDecls body
  /- We need to alter the last binder (the one for the "target") to have explicit universe levels
    so that the `ToLevel` instance arguments can use them. -/
  let addLevels binder :=
    match binder with
    | `(bracketedBinderF| ($a : $ty)) => do `(bracketedBinderF| ($a : $(← updateIndType indVal ty)))
    | _ => throwError "(internal error) expecting inst binder"
  let binders := header.binders.pop ++ levelBinders ++ #[← addLevels header.binders.back!]
  if ctx.usePartial then
    `(private partial def $(mkIdent auxFunName):ident $binders:bracketedBinder* : Expr := $body:term)
  else
    `(private def $(mkIdent auxFunName):ident $binders:bracketedBinder* : Expr := $body:term)

/--
Creates all the auxiliary functions (using `mkAuxFunction`) for the (mutual) inductive type(s).
Wraps the resulting definition commands in `mutual ... end`.
-/
def mkAuxFunctions (ctx : Deriving.Context) : TermElabM Syntax := do
  let mut auxDefs := #[]
  for i in [:ctx.typeInfos.size] do
    auxDefs := auxDefs.push (← mkAuxFunction ctx i)
  `(mutual $auxDefs:command* end)

open TSyntax.Compat in
/--
Assuming all of the auxiliary definitions exist,
creates all the `instance` commands for the `ToExpr` instances for the (mutual) inductive type(s).
This is a modified copy of `Lean.Elab.Deriving.mkInstanceCmds` to account for `ToLevel` instances.
-/
def mkInstanceCmds (ctx : Deriving.Context) (typeNames : Array Name) :
    TermElabM (Array Command) := do
  let mut instances := #[]
  for indVal in ctx.typeInfos, auxFunName in ctx.auxFunNames do
    if typeNames.contains indVal.name then
      let argNames ← mkInductArgNames indVal
      let binders ← mkImplicitBinders argNames
      let binders := binders ++ (← mkInstImplicitBinders ``ToExpr indVal argNames)
      let (toLevelInsts, levelBinders) := Array.unzip <| ← indVal.levelParams.toArray.mapM fun u => do
        let inst := mkIdent (← mkFreshUserName `inst)
        return (inst, ← `(instBinderF| [$inst : ToLevel.{$(mkIdent u)}]))
      let binders := binders ++ levelBinders
      let indType ← updateIndType indVal (← mkInductiveApp indVal argNames)
      let toTypeExpr ← mkToTypeExpr indVal argNames
      let instCmd ← `(instance $binders:implicitBinder* : ToExpr $indType where
        toExpr := $(mkIdent auxFunName) $toLevelInsts*
        toTypeExpr := $toTypeExpr)
      instances := instances.push instCmd
  return instances

/--
Returns all the commands necessary to construct the `ToExpr` instances.
-/
def mkToExprInstanceCmds (declNames : Array Name) : TermElabM (Array Syntax) := do
  let ctx ← mkContext "toExpr" declNames[0]!
  let cmds := #[← mkAuxFunctions ctx] ++ (← mkInstanceCmds ctx declNames)
  trace[Elab.Deriving.toExpr] "\n{cmds}"
  return cmds

/--
The main entry point to the `ToExpr` deriving handler.
-/
def mkToExprInstanceHandler (declNames : Array Name) : CommandElabM Bool := do
  if (← declNames.allM isInductive) && declNames.size > 0 then
    let cmds ← withFreshMacroScope <| liftTermElabM <| mkToExprInstanceCmds declNames
    -- Enable autoimplicits, used for universe levels.
    withScope (fun scope => { scope with opts := autoImplicit.set scope.opts true }) do
      elabCommand (mkNullNode cmds)
    return true
  else
    return false

builtin_initialize
  registerDerivingHandler ``Lean.ToExpr mkToExprInstanceHandler
  registerTraceClass `Elab.Deriving.toExpr

end Lean.Elab.Deriving.ToExpr
@@ -691,9 +691,6 @@ private def addProjections (r : ElabHeaderResult) (fieldInfos : Array StructFiel
  let env ← getEnv
  let env ← ofExceptKernelException (mkProjections env r.view.declName projNames.toList r.view.isClass)
  setEnv env
  for fieldInfo in fieldInfos do
    if fieldInfo.isSubobject then
      addDeclarationRangesFromSyntax fieldInfo.declName r.view.ref fieldInfo.ref

private def registerStructure (structName : Name) (infos : Array StructFieldInfo) : TermElabM Unit := do
  let fields ← infos.filterMapM fun info => do
@@ -778,14 +775,14 @@ private def setSourceInstImplicit (type : Expr) : Expr :=
/--
Creates a projection function to a non-subobject parent.
-/
private partial def mkCoercionToCopiedParent (levelParams : List Name) (params : Array Expr) (view : StructView) (source : Expr) (parent : StructParentInfo) (parentType : Expr) : MetaM StructureParentInfo := do
private partial def mkCoercionToCopiedParent (levelParams : List Name) (params : Array Expr) (view : StructView) (source : Expr) (parentStructName : Name) (parentType : Expr) : MetaM StructureParentInfo := do
  let isProp ← Meta.isProp parentType
  let env ← getEnv
  let structName := view.declName
  let sourceFieldNames := getStructureFieldsFlattened env structName
  let binfo := if view.isClass && isClass env parent.structName then BinderInfo.instImplicit else BinderInfo.default
  let binfo := if view.isClass && isClass env parentStructName then BinderInfo.instImplicit else BinderInfo.default
  let mut declType ← instantiateMVars (← mkForallFVars params (← mkForallFVars #[source] parentType))
  if view.isClass && isClass env parent.structName then
  if view.isClass && isClass env parentStructName then
    declType := setSourceInstImplicit declType
  declType := declType.inferImplicit params.size true
  let rec copyFields (parentType : Expr) : MetaM Expr := do
@@ -826,8 +823,7 @@ private partial def mkCoercionToCopiedParent (levelParams : List Name) (params :
    -- (Instances will get instance reducibility in `Lean.Elab.Command.addParentInstances`.)
    if !binfo.isInstImplicit && !(← Meta.isProp parentType) then
      setReducibleAttribute declName
    addDeclarationRangesFromSyntax declName view.ref parent.ref
    return { structName := parent.structName, subobject := false, projFn := declName }
    return { structName := parentStructName, subobject := false, projFn := declName }

private def mkRemainingProjections (levelParams : List Name) (params : Array Expr) (view : StructView)
    (parents : Array StructParentInfo) (fieldInfos : Array StructFieldInfo) : TermElabM (Array StructureParentInfo) := do
@@ -848,7 +844,7 @@ private def mkRemainingProjections (levelParams : List Name) (params : Array Exp
        pure { structName := parent.structName, subobject := true, projFn := info.declName }
      else
        let parent_type := (← instantiateMVars parent.type).replace fun e => parentFVarToConst[e]?
        mkCoercionToCopiedParent levelParams params view source parent parent_type)
        mkCoercionToCopiedParent levelParams params view source parent.structName parent_type)
    parentInfos := parentInfos.push parentInfo
    if let some fvar := parent.fvar? then
      parentFVarToConst := parentFVarToConst.insert fvar <|

@@ -45,4 +45,3 @@ import Lean.Elab.Tactic.BVDecide
import Lean.Elab.Tactic.BoolToPropSimps
import Lean.Elab.Tactic.Classical
import Lean.Elab.Tactic.Grind
import Lean.Elab.Tactic.Monotonicity

@@ -6,64 +6,22 @@ Authors: Leonardo de Moura
prelude
import Init.Grind.Tactics
import Lean.Meta.Tactic.Grind
import Lean.Elab.Command
import Lean.Elab.Tactic.Basic
import Lean.Elab.Tactic.Config

namespace Lean.Elab.Tactic
open Meta

declare_config_elab elabGrindConfig Grind.Config

open Command Term in
@[builtin_command_elab Lean.Parser.Command.grindPattern]
def elabGrindPattern : CommandElab := fun stx => do
  match stx with
  | `(grind_pattern $thmName:ident => $terms,*) => do
    liftTermElabM do
      let declName ← resolveGlobalConstNoOverload thmName
      discard <| addTermInfo thmName (← mkConstWithLevelParams declName)
      let info ← getConstInfo declName
      forallTelescope info.type fun xs _ => do
        let patterns ← terms.getElems.mapM fun term => do
          let pattern ← elabTerm term none
          synthesizeSyntheticMVarsUsingDefault
          let pattern ← instantiateMVars pattern
          let pattern ← Grind.preprocessPattern pattern
          return pattern.abstract xs
        Grind.addEMatchTheorem declName xs.size patterns.toList
  | _ => throwUnsupportedSyntax

def grind (mvarId : MVarId) (config : Grind.Config) (mainDeclName : Name) (fallback : Grind.Fallback) : MetaM Unit := do
  let mvarIds ← Grind.main mvarId config mainDeclName fallback
def grind (mvarId : MVarId) (mainDeclName : Name) : MetaM Unit := do
  let mvarIds ← Grind.main mvarId mainDeclName
  unless mvarIds.isEmpty do
    throwError "`grind` failed\n{goalsToMessageData mvarIds}"

private def elabFallback (fallback? : Option Term) : TermElabM (Grind.GoalM Unit) := do
  let some fallback := fallback? | return (pure ())
  let type := mkApp (mkConst ``Grind.GoalM) (mkConst ``Unit)
  let value ← withLCtx {} {} do Term.elabTermAndSynthesize fallback type
  let auxDeclName ← if let .const declName _ := value then
    pure declName
  else
    let auxDeclName ← Term.mkAuxName `_grind_fallback
    let decl := Declaration.defnDecl {
      name := auxDeclName
      levelParams := []
      type, value, hints := .opaque, safety := .safe
    }
    addAndCompile decl
    pure auxDeclName
  unsafe evalConst (Grind.GoalM Unit) auxDeclName

@[builtin_tactic Lean.Parser.Tactic.grind] def evalApplyRfl : Tactic := fun stx => do
  match stx with
  | `(tactic| grind $config:optConfig $[on_failure $fallback?]?) =>
    let fallback ← elabFallback fallback?
  | `(tactic| grind) =>
    logWarningAt stx "The `grind` tactic is experimental and still under development. Avoid using it in production projects"
    let declName := (← Term.getDeclName?).getD `_grind
    let config ← elabGrindConfig config
    withMainContext do liftMetaFinishingTactic (grind · config declName fallback)
    withMainContext do liftMetaFinishingTactic (grind · declName)
  | _ => throwUnsupportedSyntax

end Lean.Elab.Tactic

@@ -1,223 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.Tactic.Split
import Lean.Elab.RecAppSyntax
import Lean.Elab.Tactic.Basic
import Init.Internal.Order

namespace Lean.Meta.Monotonicity

open Lean Meta
open Lean.Order

partial def headBetaUnderLambda (f : Expr) : Expr := Id.run do
  let mut f := f.headBeta
  if f.isLambda then
    while f.bindingBody!.isHeadBetaTarget do
      f := f.updateLambda! f.bindingInfo! f.bindingDomain! f.bindingBody!.headBeta
  return f


/-- Environment extensions for monotonicity lemmas -/
builtin_initialize monotoneExt :
    SimpleScopedEnvExtension (Name × Array DiscrTree.Key) (DiscrTree Name) ←
  registerSimpleScopedEnvExtension {
    addEntry := fun dt (n, ks) => dt.insertCore ks n
    initial := {}
  }

builtin_initialize registerBuiltinAttribute {
  name := `partial_fixpoint_monotone
  descr := "monotonicity theorem"
  add := fun decl _ kind => MetaM.run' do
    let declTy := (← getConstInfo decl).type
    let (xs, _, targetTy) ← withReducible <| forallMetaTelescopeReducing declTy
    let_expr monotone α inst_α β inst_β f := targetTy |
      throwError "@[partial_fixpoint_monotone] attribute only applies to lemmas proving {.ofConstName ``monotone}"
    let f := f.headBeta
    let f ← if f.isLambda then pure f else etaExpand f
    let f := headBetaUnderLambda f
    lambdaBoundedTelescope f 1 fun _ e => do
      let key ← withReducible <| DiscrTree.mkPath e
      monotoneExt.add (decl, key) kind
}

/--
Finds tagged monotonicity theorems of the form `monotone (fun x => e)`.
-/
def findMonoThms (e : Expr) : MetaM (Array Name) := do
  (monotoneExt.getState (← getEnv)).getMatch e

private def defaultFailK (f : Expr) (monoThms : Array Name) : MetaM α :=
  let extraMsg := if monoThms.isEmpty then m!"" else
    m!"Tried to apply {.andList (monoThms.toList.map (m!"'{·}'"))}, but failed."
  throwError "Failed to prove monotonicity of:{indentExpr f}\n{extraMsg}"

private def applyConst (goal : MVarId) (name : Name) : MetaM (List MVarId) := do
  mapError (f := (m!"Could not apply {.ofConstName name}:{indentD ·}")) do
    goal.applyConst name (cfg := { synthAssignedInstances := false})

/--
Base case for solveMonoStep: Handles goals of the form
```
monotone (fun f => f.1.2 x y)
```

It's tricky to solve them compositionally from the outside in, so here we construct the proof
from the inside out.
-/
partial def solveMonoCall (α inst_α : Expr) (e : Expr) : MetaM (Option Expr) := do
  if e.isApp && !e.appArg!.hasLooseBVars then
    let some hmono ← solveMonoCall α inst_α e.appFn! | return none
    let hmonoType ← inferType hmono
    let_expr monotone _ _ _ inst _ := hmonoType | throwError "solveMonoCall {e}: unexpected type {hmonoType}"
    let some inst ← whnfUntil inst ``instOrderPi | throwError "solveMonoCall {e}: unexpected instance {inst}"
    let_expr instOrderPi γ δ inst ← inst | throwError "solveMonoCall {e}: whnfUntil failed?{indentExpr inst}"
    return ← mkAppOptM ``monotone_apply #[γ, δ, α, inst_α, inst, e.appArg!, none, hmono]

  if e.isProj then
    let some hmono ← solveMonoCall α inst_α e.projExpr! | return none
    let hmonoType ← inferType hmono
    let_expr monotone _ _ _ inst _ := hmonoType | throwError "solveMonoCall {e}: unexpected type {hmonoType}"
    let some inst ← whnfUntil inst ``instPartialOrderPProd | throwError "solveMonoCall {e}: unexpected instance {inst}"
    let_expr instPartialOrderPProd β γ inst_β inst_γ ← inst | throwError "solveMonoCall {e}: whnfUntil failed?{indentExpr inst}"
    let n := if e.projIdx! == 0 then ``monotone_pprod_fst else ``monotone_pprod_snd
    return ← mkAppOptM n #[β, γ, α, inst_β, inst_γ, inst_α, none, hmono]

  if e == .bvar 0 then
    let hmono ← mkAppOptM ``monotone_id #[α, inst_α]
    return some hmono

  return none


def solveMonoStep (failK : ∀ {α}, Expr → Array Name → MetaM α := @defaultFailK) (goal : MVarId) : MetaM (List MVarId) :=
  goal.withContext do
    trace[Elab.Tactic.monotonicity] "monotonicity at\n{goal}"
    let type ← goal.getType
    if type.isForall then
      let (_, goal) ← goal.intro1P
      return [goal]

    match_expr type with
    | monotone α inst_α β inst_β f =>
      -- Ensure f is not headed by a redex and headed by at least one lambda, and clean some
      -- redexes left by some of the lemmas we tend to apply
      let f ← instantiateMVars f
      let f := f.headBeta
      let f ← if f.isLambda then pure f else etaExpand f
      let f := headBetaUnderLambda f
      let e := f.bindingBody!

      -- No recursive calls left
      if !e.hasLooseBVars then
        return ← applyConst goal ``monotone_const

      -- NB: `e` is now an open term.

      -- Look through mdata
      if e.isMData then
        let f' := f.updateLambdaE! f.bindingDomain! e.mdataExpr!
        let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! f')
        goal.assign goal'
        return [goal'.mvarId!]

      -- Float letE to the environment
      if let .letE n t v b _nonDep := e then
        if t.hasLooseBVars || v.hasLooseBVars then
          failK f #[]
        let goal' ← withLetDecl n t v fun x => do
          let b' := f.updateLambdaE! f.bindingDomain! (b.instantiate1 x)
          let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! b')
          goal.assign (← mkLetFVars #[x] goal')
          pure goal'
        return [goal'.mvarId!]

      -- Float `letFun` to the environment.
      -- `applyConst` tends to reduce the redex
      match_expr e with
      | letFun γ _ v b =>
        if γ.hasLooseBVars || v.hasLooseBVars then
          failK f #[]
        let b' := f.updateLambdaE! f.bindingDomain! b
        let p ← mkAppOptM ``monotone_letFun #[α, β, γ, inst_α, inst_β, v, b']
        let new_goals ← mapError (f := (m!"Could not apply {p}:{indentD ·}")) do
          goal.apply p
        let [new_goal] := new_goals
          | throwError "Unexpected number of goals after {.ofConstName ``monotone_letFun}."
        let (_, new_goal) ←
          if b.isLambda then
            new_goal.intro b.bindingName!
          else
            new_goal.intro1
        return [new_goal]
      | _ => pure ()

      -- Handle lambdas, preserving the name of the binder
      if e.isLambda then
        let [new_goal] ← applyConst goal ``monotone_of_monotone_apply
          | throwError "Unexpected number of goals after {.ofConstName ``monotone_of_monotone_apply}."
        let (_, new_goal) ← new_goal.intro e.bindingName!
        return [new_goal]

      -- A recursive call directly here
      if e.isBVar then
        return ← applyConst goal ``monotone_id

      -- A recursive call
      if let some hmono ← solveMonoCall α inst_α e then
        trace[Elab.Tactic.monotonicity] "Found recursive call {e}:{indentExpr hmono}"
        unless ← goal.checkedAssign hmono do
          trace[Elab.Tactic.monotonicity] "Failed to assign {hmono} : {← inferType hmono} to goal"
          failK f #[]
        return []

      let monoThms ← withLocalDeclD `f f.bindingDomain! fun f =>
        -- The discrimination tree does not like open terms
        findMonoThms (e.instantiate1 f)
      trace[Elab.Tactic.monotonicity] "Found monoThms: {monoThms.map MessageData.ofConstName}"
      for monoThm in monoThms do
        let new_goals? ← try
          let new_goals ← applyConst goal monoThm
          trace[Elab.Tactic.monotonicity] "Succeeded with {.ofConstName monoThm}"
          pure (some new_goals)
        catch e =>
          trace[Elab.Tactic.monotonicity] "{e.toMessageData}"
          pure none
        if let some new_goals := new_goals? then
          return new_goals

      -- Split match-expressions
      if let some info := isMatcherAppCore? (← getEnv) e then
        let candidate ← id do
          let args := e.getAppArgs
          for i in [info.getFirstDiscrPos : info.getFirstDiscrPos + info.numDiscrs] do
            if args[i]!.hasLooseBVars then
              return false
          return true
        if candidate then
          -- We could be even more deliberate here and use the `lifter` lemmas
          -- for the match statements instead of the `split` tactic.
          -- For now using `splitMatch` works fine.
          return ← Split.splitMatch goal e

      failK f monoThms
    | _ =>
      throwError "Unexpected goal:{goal}"

partial def solveMono (failK : ∀ {α}, Expr → Array Name → MetaM α := defaultFailK) (goal : MVarId) : MetaM Unit := do
  let new_goals ← solveMonoStep failK goal
  new_goals.forM (solveMono failK)

open Elab Tactic in
@[builtin_tactic Lean.Order.monotonicity]
def evalMonotonicity : Tactic := fun _stx =>
  liftMetaTactic Lean.Meta.Monotonicity.solveMonoStep

end Lean.Meta.Monotonicity

builtin_initialize Lean.registerTraceClass `Elab.Tactic.monotonicity
@@ -63,7 +63,7 @@ def isNumeral? (e : Expr) : Option (Expr × Nat) :=
  if e.isConstOf ``Nat.zero then
    (mkConst ``Nat, 0)
  else if let Expr.app (Expr.app (Expr.app (Expr.const ``OfNat.ofNat ..) α ..)
      (Expr.lit (Literal.natVal n) ..) ..) .. := e.consumeMData then
      (Expr.lit (Literal.natVal n) ..) ..) .. := e then
    some (α, n)
  else
    none

@@ -4,7 +4,6 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
prelude
import Lean.Parser.Syntax
import Lean.Meta.Tactic.Simp.RegisterCommand
import Lean.Elab.Command
import Lean.Elab.SetOption

@@ -17,6 +17,14 @@ namespace Lean.Meta
private def ensureType (e : Expr) : MetaM Unit := do
  discard <| getLevel e

def throwLetTypeMismatchMessage {α} (fvarId : FVarId) : MetaM α := do
  let lctx ← getLCtx
  match lctx.find? fvarId with
  | some (LocalDecl.ldecl _ _ _ t v _ _) => do
    let vType ← inferType v
    throwError "invalid let declaration, term{indentExpr v}\nhas type{indentExpr vType}\nbut is expected to have type{indentExpr t}"
  | _ => unreachable!

private def checkConstant (constName : Name) (us : List Level) : MetaM Unit := do
  let cinfo ← getConstInfo constName
  unless us.length == cinfo.levelParams.length do
@@ -169,15 +177,6 @@ where
    catch _ =>
      return (a, b)

def throwLetTypeMismatchMessage {α} (fvarId : FVarId) : MetaM α := do
  let lctx ← getLCtx
  match lctx.find? fvarId with
  | some (LocalDecl.ldecl _ _ _ t v _ _) => do
    let vType ← inferType v
    let (vType, t) ← addPPExplicitToExposeDiff vType t
    throwError "invalid let declaration, term{indentExpr v}\nhas type{indentExpr vType}\nbut is expected to have type{indentExpr t}"
  | _ => unreachable!

/--
Return error message "has type{givenType}\nbut is expected to have type{expectedType}"
-/

@@ -66,31 +66,30 @@ structure GeneralizeIndicesSubgoal where
  numEqs : Nat

/--
Given a metavariable `mvarId` representing the goal
```
Ctx |- T
```
and an expression `e : I A j`, where `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, j' : J, h' : I A j' |- j == j' -> e == h' -> T
```
Remark: `(j == j' -> e == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target

If `varName?` is not none, it is used to name `h'`.
-/
def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := none) : MetaM GeneralizeIndicesSubgoal :=
Similar to `generalizeTargets` but customized for the `casesOn` motive.
Given a metavariable `mvarId` representing the
```
Ctx, h : I A j, D |- T
```
where `fvarId` is `h`s id, and the type `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, h : I A j, D, j' : J, h' : I A j' |- j == j' -> h == h' -> T
```
Remark: `(j == j' -> h == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target -/
def generalizeIndices (mvarId : MVarId) (fvarId : FVarId) : MetaM GeneralizeIndicesSubgoal :=
  mvarId.withContext do
    let lctx ← getLCtx
    let localInsts ← getLocalInstances
    mvarId.checkNotAssigned `generalizeIndices
    let type ← whnfD (← inferType e)
    let fvarDecl ← fvarId.getDecl
    let type ← whnf fvarDecl.type
    type.withApp fun f args => matchConstInduct f (fun _ => throwTacticEx `generalizeIndices mvarId "inductive type expected") fun val _ => do
      unless val.numIndices > 0 do throwTacticEx `generalizeIndices mvarId "indexed inductive type expected"
      unless args.size == val.numIndices + val.numParams do throwTacticEx `generalizeIndices mvarId "ill-formed inductive datatype"
@@ -99,10 +98,9 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
      let IAType ← inferType IA
      forallTelescopeReducing IAType fun newIndices _ => do
        let newType := mkAppN IA newIndices
        let varName ← if let some varName := varName? then pure varName else mkFreshUserName `x
        withLocalDeclD varName newType fun h' =>
        withLocalDeclD fvarDecl.userName newType fun h' =>
        withNewEqs indices newIndices fun newEqs newRefls => do
          let (newEqType, newRefl) ← mkEqAndProof e h'
          let (newEqType, newRefl) ← mkEqAndProof fvarDecl.toExpr h'
          let newRefls := newRefls.push newRefl
          withLocalDeclD `h newEqType fun newEq => do
            let newEqs := newEqs.push newEq
@@ -114,7 +112,7 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
    let auxType ← mkForallFVars newIndices auxType
    let newMVar ← mkFreshExprMVarAt lctx localInsts auxType MetavarKind.syntheticOpaque tag
    /- assign mvarId := newMVar indices h refls -/
    mvarId.assign (mkAppN (mkApp (mkAppN newMVar indices) e) newRefls)
    mvarId.assign (mkAppN (mkApp (mkAppN newMVar indices) fvarDecl.toExpr) newRefls)
    let (indicesFVarIds, newMVarId) ← newMVar.mvarId!.introNP newIndices.size
    let (fvarId, newMVarId) ← newMVarId.intro1P
    return {
@@ -124,29 +122,6 @@ def generalizeIndices' (mvarId : MVarId) (e : Expr) (varName? : Option Name := n
      numEqs := newEqs.size
    }

/--
Similar to `generalizeTargets` but customized for the `casesOn` motive.
Given a metavariable `mvarId` representing the
```
Ctx, h : I A j, D |- T
```
where `fvarId` is `h`s id, and the type `I A j` is an inductive datatype where `A` are parameters,
and `j` the indices. Generate the goal
```
Ctx, h : I A j, D, j' : J, h' : I A j' |- j == j' -> h == h' -> T
```
Remark: `(j == j' -> h == h')` is a "telescopic" equality.
Remark: `j` is sequence of terms, and `j'` a sequence of free variables.
The result contains the fields
- `mvarId`: the new goal
- `indicesFVarIds`: `j'` ids
- `fvarId`: `h'` id
- `numEqs`: number of equations in the target -/
def generalizeIndices (mvarId : MVarId) (fvarId : FVarId) : MetaM GeneralizeIndicesSubgoal :=
  mvarId.withContext do
    let fvarDecl ← fvarId.getDecl
    generalizeIndices' mvarId fvarDecl.toExpr fvarDecl.userName

structure CasesSubgoal extends InductionSubgoal where
  ctorName : Name

@@ -7,6 +7,7 @@ prelude
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.RevertAll
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Preprocessor
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
@@ -20,34 +21,19 @@ import Lean.Meta.Tactic.Grind.PP
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Ctor
import Lean.Meta.Tactic.Grind.Parser
import Lean.Meta.Tactic.Grind.EMatchTheorem
import Lean.Meta.Tactic.Grind.EMatch
import Lean.Meta.Tactic.Grind.Main


namespace Lean

/-! Trace options for `grind` users -/
builtin_initialize registerTraceClass `grind
builtin_initialize registerTraceClass `grind.assert
builtin_initialize registerTraceClass `grind.eqc
builtin_initialize registerTraceClass `grind.internalize
builtin_initialize registerTraceClass `grind.ematch
builtin_initialize registerTraceClass `grind.ematch.pattern
builtin_initialize registerTraceClass `grind.ematch.pattern.search
builtin_initialize registerTraceClass `grind.ematch.instance
builtin_initialize registerTraceClass `grind.ematch.instance.assignment
builtin_initialize registerTraceClass `grind.eq
builtin_initialize registerTraceClass `grind.issues
builtin_initialize registerTraceClass `grind.simp

/-! Trace options for `grind` developers -/
builtin_initialize registerTraceClass `grind.add
builtin_initialize registerTraceClass `grind.pre
builtin_initialize registerTraceClass `grind.debug
builtin_initialize registerTraceClass `grind.debug.proofs
builtin_initialize registerTraceClass `grind.debug.congr
builtin_initialize registerTraceClass `grind.debug.proof
builtin_initialize registerTraceClass `grind.debug.proj
builtin_initialize registerTraceClass `grind.debug.parent
builtin_initialize registerTraceClass `grind.debug.final
builtin_initialize registerTraceClass `grind.debug.forallPropagator
builtin_initialize registerTraceClass `grind.simp
builtin_initialize registerTraceClass `grind.congr
builtin_initialize registerTraceClass `grind.proof
builtin_initialize registerTraceClass `grind.proof.detail

end Lean

@@ -41,9 +41,8 @@ Furthermore, `grind` will not be able to infer that `HEq (a + a) (b + b)` even
-/

structure State where
  argMap : PHashMap (Expr × Nat) (List Expr) := {}
  canon : PHashMap Expr Expr := {}
  proofCanon : PHashMap Expr Expr := {}
  argMap : PHashMap (Expr × Nat) (List Expr) := {}
  canon : PHashMap Expr Expr := {}
  deriving Inhabited

/--
@@ -62,11 +61,10 @@ def canonElemCore (f : Expr) (i : Nat) (e : Expr) (isInst : Bool) : StateT State
  let cs := s.argMap.find? key |>.getD []
  for c in cs do
    if (← isDefEq e c) then
      -- We used to check `c.fvarsSubset e` because it is not
      -- in general safe to replace `e` with `c` if `c` has more free variables than `e`.
      -- However, we don't revert previously canonicalized elements in the `grind` tactic.
      modify fun s => { s with canon := s.canon.insert e c }
      return c
      if c.fvarsSubset e then
        -- It is not in general safe to replace `e` with `c` if `c` has more free variables than `e`.
        modify fun s => { s with canon := s.canon.insert e c }
        return c
      if isInst then
        if (← isDiagnosticsEnabled <&&> pure (c.fvarsSubset e) <&&> (withDefault <| isDefEq e c)) then
          -- TODO: consider storing this information in some structure that can be browsed later.
@@ -111,21 +109,26 @@ unsafe def canonImpl (e : Expr) : StateT State MetaM Expr := do
  visit e |>.run' mkPtrMap
where
  visit (e : Expr) : StateRefT (PtrMap Expr Expr) (StateT State MetaM) Expr := do
    unless e.isApp || e.isForall do return e
    -- Check whether it is cached
    if let some r := (← get).find? e then
      return r
    let e' ← match e with
      | .app .. => e.withApp fun f args => do
        if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
    match e with
    | .bvar .. => unreachable!
    -- Recall that `grind` treats `let`, `forall`, and `lambda` as atomic terms.
    | .letE .. | .forallE .. | .lam ..
    | .const .. | .lit .. | .mvar .. | .sort .. | .fvar ..
    -- Recall that the `grind` preprocessor uses the `foldProjs` preprocessing step.
    | .proj ..
    -- Recall that the `grind` preprocessor uses the `eraseIrrelevantMData` preprocessing step.
    | .mdata .. => return e
    -- We only visit applications
    | .app .. =>
      -- Check whether it is cached
      if let some r := (← get).find? e then
        return r
      e.withApp fun f args => do
        let e' ← if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
          -- We just canonize the proposition
          let prop := args[0]!
          let prop' ← visit prop
          if let some r := (← getThe State).proofCanon.find? prop' then
            pure r
          else
            let e' := if ptrEq prop prop' then e else mkAppN f (args.set! 0 prop')
            modifyThe State fun s => { s with proofCanon := s.proofCanon.insert prop' e' }
            pure e'
          pure <| if ptrEq prop prop' then mkAppN f (args.set! 0 prop') else e
        else
          let pinfos := (← getFunInfo f).paramInfo
          let mut modified := false
@@ -140,17 +143,12 @@ where
          args := args.set! i arg'
          modified := true
      pure <| if modified then mkAppN f args else e
    | .forallE _ d b _ =>
      -- Recall that we have `ForallProp.lean`.
      let d' ← visit d
      -- Remark: users may not want to convert `p → q` into `¬p ∨ q`
      let b' ← if b.hasLooseBVars then pure b else visit b
      pure <| e.updateForallE! d' b'
    | _ => unreachable!
    modify fun s => s.insert e e'
    return e'
      modify fun s => s.insert e e'
      return e'

/-- Canonicalizes nested types, type formers, and instances in `e`. -/
/--
Canonicalizes nested types, type formers, and instances in `e`.
-/
def canon (e : Expr) : StateT State MetaM Expr :=
  unsafe canonImpl e

@@ -11,29 +11,28 @@ namespace Lean.Meta.Grind
The `grind` tactic includes an auxiliary `cases` tactic that is not intended for direct use by users.
This method implements it.
This tactic is automatically applied when introducing local declarations with a type tagged with `[grind_cases]`.
It is also used for "case-splitting" on terms during the search.

It differs from the user-facing Lean `cases` tactic in the following ways:

- It avoids unnecessary `revert` and `intro` operations.

- It does not introduce new local declarations for each minor premise. Instead, the `grind` tactic preprocessor is responsible for introducing them.

- It assumes that the major premise (i.e., the parameter `fvarId`) is the latest local declaration in the current goal.

- If the major premise type is an indexed family, auxiliary declarations and (heterogeneous) equalities are introduced.
  However, these equalities are not resolved using `unifyEqs`. Instead, the `grind` tactic employs union-find and
  congruence closure to process these auxiliary equalities. This approach avoids applying substitution to propositions
  that have already been internalized by `grind`.
-/
def cases (mvarId : MVarId) (e : Expr) : MetaM (List MVarId) := mvarId.withContext do
def cases (mvarId : MVarId) (fvarId : FVarId) : MetaM (List MVarId) := mvarId.withContext do
  let tag ← mvarId.getTag
  let type ← whnf (← inferType e)
  let type ← whnf (← fvarId.getType)
  let .const declName _ := type.getAppFn | throwInductiveExpected type
  let .inductInfo _ ← getConstInfo declName | throwInductiveExpected type
  let recursorInfo ← mkRecursorInfo (mkCasesOnName declName)
  let k (mvarId : MVarId) (fvarId : FVarId) (indices : Array FVarId) : MetaM (List MVarId) := do
    let indicesExpr := indices.map mkFVar
    let recursor ← mkRecursorAppPrefix mvarId `grind.cases fvarId recursorInfo indicesExpr
    let mut recursor := mkApp (mkAppN recursor indicesExpr) (mkFVar fvarId)
  let k (mvarId : MVarId) (fvarId : FVarId) (indices : Array Expr) (clearMajor : Bool) : MetaM (List MVarId) := do
    let recursor ← mkRecursorAppPrefix mvarId `grind.cases fvarId recursorInfo indices
    let mut recursor := mkApp (mkAppN recursor indices) (mkFVar fvarId)
    let mut recursorType ← inferType recursor
    let mut mvarIdsNew := #[]
    for _ in [:recursorInfo.numMinors] do
@@ -42,22 +41,22 @@ def cases (mvarId : MVarId) (e : Expr) : MetaM (List MVarId) := mvarId.withConte
      recursorType := recursorTypeNew
      let mvar ← mkFreshExprSyntheticOpaqueMVar targetNew tag
      recursor := mkApp recursor mvar
      let mvarIdNew ← mvar.mvarId!.tryClearMany (indices.push fvarId)
      let mvarIdNew ← if clearMajor then
        mvar.mvarId!.clear fvarId
      else
        pure mvar.mvarId!
      mvarIdsNew := mvarIdsNew.push mvarIdNew
    mvarId.assign recursor
    return mvarIdsNew.toList
  if recursorInfo.numIndices > 0 then
    let s ← generalizeIndices' mvarId e
    let s ← generalizeIndices mvarId fvarId
    s.mvarId.withContext do
      k s.mvarId s.fvarId s.indicesFVarIds
  else if let .fvar fvarId := e then
    k mvarId fvarId #[]
      k s.mvarId s.fvarId (s.indicesFVarIds.map mkFVar) (clearMajor := false)
  else
    let mvarId ← mvarId.assert (← mkFreshUserName `x) type e
    let (fvarId, mvarId) ← mvarId.intro1
    mvarId.withContext do k mvarId fvarId #[]
    let indices ← getMajorTypeIndices mvarId `grind.cases recursorInfo type
    k mvarId fvarId indices (clearMajor := true)
where
  throwInductiveExpected {α} (type : Expr) : MetaM α := do
    throwTacticEx `grind.cases mvarId m!"(non-recursive) inductive type expected at {e}{indentExpr type}"
    throwTacticEx `grind.cases mvarId m!"(non-recursive) inductive type expected at {mkFVar fvarId}{indentExpr type}"

end Lean.Meta.Grind

@@ -41,10 +41,7 @@ This is an auxiliary function performed while merging equivalence classes.
private def removeParents (root : Expr) : GoalM ParentSet := do
let parents ← getParentsAndReset root
for parent in parents do
-- Recall that we may have `Expr.forallE` in `parents` because of `ForallProp.lean`
if (← pure parent.isApp <&&> isCongrRoot parent) then
trace[grind.debug.parent] "remove: {parent}"
modify fun s => { s with congrTable := s.congrTable.erase { e := parent } }
modify fun s => { s with congrTable := s.congrTable.erase { e := parent } }
return parents

/--
@@ -53,9 +50,7 @@ This is an auxiliary function performed while merging equivalence classes.
-/
private def reinsertParents (parents : ParentSet) : GoalM Unit := do
for parent in parents do
if (← pure parent.isApp <&&> isCongrRoot parent) then
trace[grind.debug.parent] "reinsert: {parent}"
addCongrTable parent
addCongrTable parent

/-- Closes the goal when `True` and `False` are in the same equivalence class. -/
private def closeGoalWithTrueEqFalse : GoalM Unit := do
@@ -74,26 +69,14 @@ private def closeGoalWithValuesEq (lhs rhs : Expr) : GoalM Unit := do
let falseProof ← mkEqMP pEqFalse hp
closeGoal falseProof

/--
Updates the modification time to `gmt` for the parents of `root`.
The modification time is used to decide which terms are considered during e-matching.
-/
private partial def updateMT (root : Expr) : GoalM Unit := do
let gmt := (← get).gmt
for parent in (← getParents root) do
let node ← getENode parent
if node.mt < gmt then
setENode parent { node with mt := gmt }
updateMT parent

private partial def addEqStep (lhs rhs proof : Expr) (isHEq : Bool) : GoalM Unit := do
trace[grind.eq] "{lhs} {if isHEq then "≡" else "="} {rhs}"
let lhsNode ← getENode lhs
let rhsNode ← getENode rhs
if isSameExpr lhsNode.root rhsNode.root then
-- `lhs` and `rhs` are already in the same equivalence class.
trace[grind.debug] "{← ppENodeRef lhs} and {← ppENodeRef rhs} are already in the same equivalence class"
return ()
trace[grind.eqc] "{← if isHEq then mkHEq lhs rhs else mkEq lhs rhs}"
let lhsRoot ← getENode lhsNode.root
let rhsRoot ← getENode rhsNode.root
let mut valueInconsistency := false
@@ -154,8 +137,6 @@ where
unless (← isInconsistent) do
for parent in parents do
propagateUp parent
unless (← isInconsistent) do
updateMT rhsRoot.self

updateRoots (lhs : Expr) (rootNew : Expr) : GoalM Unit := do
let rec loop (e : Expr) : GoalM Unit := do
@@ -186,29 +167,21 @@ where
if (← isInconsistent) then
resetNewEqs
return ()
checkSystem "grind"
let some { lhs, rhs, proof, isHEq } := (← popNextEq?) | return ()
addEqStep lhs rhs proof isHEq
processTodo

/-- Adds a new equality `lhs = rhs`. It assumes `lhs` and `rhs` have already been internalized. -/
def addEq (lhs rhs proof : Expr) : GoalM Unit := do
addEqCore lhs rhs proof false

/-- Adds a new heterogeneous equality `HEq lhs rhs`. It assumes `lhs` and `rhs` have already been internalized. -/
def addHEq (lhs rhs proof : Expr) : GoalM Unit := do
addEqCore lhs rhs proof true

/-- Internalizes `lhs` and `rhs`, and then adds equality `lhs = rhs`. -/
def addNewEq (lhs rhs proof : Expr) (generation : Nat) : GoalM Unit := do
internalize lhs generation
internalize rhs generation
addEq lhs rhs proof

/-- Adds a new `fact` justified by the given proof and using the given generation. -/
/--
Adds a new `fact` justified by the given proof and using the given generation.
-/
def add (fact : Expr) (proof : Expr) (generation := 0) : GoalM Unit := do
trace[grind.assert] "{fact}"
trace[grind.add] "{proof} : {fact}"
if (← isInconsistent) then return ()
resetNewEqs
let_expr Not p := fact
@@ -216,6 +189,7 @@ def add (fact : Expr) (proof : Expr) (generation := 0) : GoalM Unit := do
go p true
where
go (p : Expr) (isNeg : Bool) : GoalM Unit := do
trace[grind.add] "isNeg: {isNeg}, {p}"
match_expr p with
| Eq _ lhs rhs => goEq p lhs rhs isNeg false
| HEq _ lhs _ rhs => goEq p lhs rhs isNeg true
@@ -235,8 +209,10 @@ where
internalize rhs generation
addEqCore lhs rhs proof isHEq

/-- Adds a new hypothesis. -/
def addHypothesis (fvarId : FVarId) (generation := 0) : GoalM Unit := do
/--
Adds a new hypothesis.
-/
def addHyp (fvarId : FVarId) (generation := 0) : GoalM Unit := do
add (← fvarId.getType) (mkFVar fvarId) generation

end Lean.Meta.Grind
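As an aside, the merge discipline above (`removeParents`, `updateRoots`, `reinsertParents`) follows the standard congruence-closure recipe: before redirecting a root, the applications that mention it are pulled out of the congruence table, and reinserting them afterwards is what detects new congruences. Here is a minimal, illustrative Python sketch of that idea; all names and data structures are invented for the example and are not Lean's actual API:

```python
# Minimal congruence-closure sketch: merging classes removes parents from the
# congruence table, updates the root, then reinserts parents so that
# f(a) ~ f(b) is discovered once a ~ b.

class CongruenceClosure:
    def __init__(self):
        self.root = {}      # term -> parent pointer (representative when self)
        self.parents = {}   # representative -> application terms containing it
        self.table = {}     # canonical key -> application term

    def add(self, term):
        if term in self.root:
            return
        self.root[term] = term
        self.parents[term] = set()
        if isinstance(term, tuple):  # application: (fn, arg)
            fn, arg = term
            for sub in (fn, arg):
                self.add(sub)
                self.parents[self.find(sub)].add(term)
            self.table[self.key(term)] = term

    def find(self, term):
        while self.root[term] != term:
            term = self.root[term]
        return term

    def key(self, app):
        fn, arg = app
        return (self.find(fn), self.find(arg))

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        ps = self.parents[ra]
        for p in ps:                       # cf. removeParents
            self.table.pop(self.key(p), None)
        self.root[ra] = rb                 # cf. updateRoots
        for p in ps:                       # cf. reinsertParents
            k = self.key(p)
            if k in self.table and self.find(self.table[k]) != self.find(p):
                self.merge(p, self.table[k])   # key collision: new congruence
            else:
                self.table[k] = p
        self.parents[rb] |= ps

cc = CongruenceClosure()
cc.add(("f", "a")); cc.add(("f", "b"))
cc.merge("a", "b")
assert cc.find(("f", "a")) == cc.find(("f", "b"))
```

The sketch omits everything Lean-specific (proof terms, heterogeneous equality, modification times), but the remove/update/reinsert ordering is the same.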

@@ -9,16 +9,12 @@ import Lean.Meta.Tactic.Grind.Types
namespace Lean.Meta.Grind

private partial def propagateInjEqs (eqs : Expr) (proof : Expr) : GoalM Unit := do
-- Remark: we must use `shareCommon` before using `pushEq` and `pushHEq`.
-- This is needed because the result type of the injection theorem may allocate
match_expr eqs with
| And left right =>
propagateInjEqs left (.proj ``And 0 proof)
propagateInjEqs right (.proj ``And 1 proof)
| Eq _ lhs rhs =>
pushEq (← shareCommon lhs) (← shareCommon rhs) proof
| HEq _ lhs _ rhs =>
pushHEq (← shareCommon lhs) (← shareCommon rhs) proof
| Eq _ lhs rhs => pushEq lhs rhs proof
| HEq _ lhs _ rhs => pushHEq lhs rhs proof
| _ =>
trace[grind.issues] "unexpected injectivity theorem result type{indentExpr eqs}"
return ()

@@ -1,35 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Util
import Init.Simproc
import Lean.Meta.Tactic.Simp.Simproc

namespace Lean.Meta.Grind

/--
Returns `Grind.doNotSimp e`.
Recall that `Grind.doNotSimp` is an identity function, but the following simproc is used to prevent the term `e` from being simplified.
-/
def markAsDoNotSimp (e : Expr) : MetaM Expr :=
mkAppM ``Grind.doNotSimp #[e]

builtin_dsimproc_decl reduceDoNotSimp (Grind.doNotSimp _) := fun e => do
let_expr Grind.doNotSimp _ _ ← e | return .continue
return .done e

/-- Adds `reduceDoNotSimp` to `s` -/
def addDoNotSimp (s : Simprocs) : CoreM Simprocs := do
s.add ``reduceDoNotSimp (post := false)

/-- Erases `Grind.doNotSimp` annotations. -/
def eraseDoNotSimp (e : Expr) : CoreM Expr := do
let pre (e : Expr) := do
let_expr Grind.doNotSimp _ a := e | return .continue e
return .continue a
Core.transform e (pre := pre)

end Lean.Meta.Grind
@@ -1,354 +0,0 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Intro
import Lean.Meta.Tactic.Grind.DoNotSimp

namespace Lean.Meta.Grind
namespace EMatch
/-! This module implements a simple E-matching procedure as a backtracking search. -/

/-- We represent an `E-matching` problem as a list of constraints. -/
inductive Cnstr where
| /-- Matches pattern `pat` with term `e` -/
«match» (pat : Expr) (e : Expr)
| /-- Matches offset pattern `pat+k` with term `e` -/
offset (pat : Expr) (k : Nat) (e : Expr)
| /-- This constraint is used to encode multi-patterns. -/
«continue» (pat : Expr)
deriving Inhabited

/--
Internal "marker" for representing unassigned elements in the `assignment` field.
This is a small hack to avoid one extra level of indirection by using `Option Expr` at `assignment`.
-/
private def unassigned : Expr := mkConst (Name.mkSimple "[grind_unassigned]")

private def assignmentToMessageData (assignment : Array Expr) : Array MessageData :=
assignment.reverse.map fun e =>
if isSameExpr e unassigned then m!"_" else m!"{e}"

/--
Choice point for the backtracking search.
The state of the procedure contains a stack of choices.
-/
structure Choice where
/-- Constraints to be processed. -/
cnstrs : List Cnstr
/-- Maximum term generation found so far. -/
gen : Nat
/-- Partial assignment so far. Recall that pattern variables are encoded as de-Bruijn variables. -/
assignment : Array Expr
deriving Inhabited

/-- Context for the E-matching monad. -/
structure Context where
/-- `useMT` is `true` if we are using the mod-time optimization. It is always set to false for new `EMatchTheorem`s. -/
useMT : Bool := true
/-- `EMatchTheorem` being processed. -/
thm : EMatchTheorem := default
deriving Inhabited

/-- State for the E-matching monad -/
structure State where
/-- Choices that still have to be processed. -/
choiceStack : List Choice := []
deriving Inhabited

abbrev M := ReaderT Context $ StateRefT State GoalM

def M.run' (x : M α) : GoalM α :=
x {} |>.run' {}

/--
Assigns `bidx := e` in `c`. If `bidx` is already assigned in `c`, we check whether
`e` and `c.assignment[bidx]` are in the same equivalence class.
This function assumes `bidx < c.assignment.size`.
Recall that we initialize the assignment array with the number of theorem parameters.
-/
private def assign? (c : Choice) (bidx : Nat) (e : Expr) : OptionT GoalM Choice := do
if h : bidx < c.assignment.size then
let v := c.assignment[bidx]
if isSameExpr v unassigned then
return { c with assignment := c.assignment.set bidx e }
else
guard (← isEqv v e)
return c
else
-- `Choice` was not properly initialized
unreachable!

/--
Returns `true` if the function `pFn` of a pattern is equivalent to the function `eFn`.
Recall that we ignore universe levels in patterns.
-/
private def eqvFunctions (pFn eFn : Expr) : Bool :=
(pFn.isFVar && pFn == eFn)
|| (pFn.isConst && eFn.isConstOf pFn.constName!)

/-- Matches a pattern argument. See `matchArgs?`. -/
private def matchArg? (c : Choice) (pArg : Expr) (eArg : Expr) : OptionT GoalM Choice := do
if isPatternDontCare pArg then
return c
else if pArg.isBVar then
assign? c pArg.bvarIdx! eArg
else if let some pArg := groundPattern? pArg then
guard (← isEqv pArg eArg)
return c
else if let some (pArg, k) := isOffsetPattern? pArg then
assert! Option.isNone <| isOffsetPattern? pArg
assert! !isPatternDontCare pArg
return { c with cnstrs := .offset pArg k eArg :: c.cnstrs }
else
return { c with cnstrs := .match pArg eArg :: c.cnstrs }

private def Choice.updateGen (c : Choice) (gen : Nat) : Choice :=
{ c with gen := Nat.max gen c.gen }

private def pushChoice (c : Choice) : M Unit :=
modify fun s => { s with choiceStack := c :: s.choiceStack }

/--
Matches arguments of pattern `p` with term `e`. Returns `some` if successful,
and `none` otherwise. It may update `c`'s assignment and list of constraints to be
processed.
-/
private partial def matchArgs? (c : Choice) (p : Expr) (e : Expr) : OptionT GoalM Choice := do
if !p.isApp then return c -- Done
let pArg := p.appArg!
let eArg := e.appArg!
let c ← matchArg? c pArg eArg
matchArgs? c p.appFn! e.appFn!

/--
Matches pattern `p` with term `e` with respect to choice `c`.
We traverse the equivalence class of `e` looking for applications compatible with `p`.
For each candidate application, we match the arguments and may update `c`'s assignments and constraints.
We add the updated choices to the choice stack.
-/
private partial def processMatch (c : Choice) (p : Expr) (e : Expr) : M Unit := do
let maxGeneration ← getMaxGeneration
let pFn := p.getAppFn
let numArgs := p.getAppNumArgs
let mut curr := e
repeat
let n ← getENode curr
-- Remark: we use `<` because the instance generation is the maximum term generation + 1
if n.generation < maxGeneration
-- uses heterogeneous equality or is the root of its congruence class
&& (n.heqProofs || n.isCongrRoot)
&& eqvFunctions pFn curr.getAppFn
&& curr.getAppNumArgs == numArgs then
if let some c ← matchArgs? c p curr |>.run then
pushChoice (c.updateGen n.generation)
curr ← getNext curr
if isSameExpr curr e then break

/--
Matches offset pattern `pArg+k` with term `e` with respect to choice `c`.
-/
private partial def processOffset (c : Choice) (pArg : Expr) (k : Nat) (e : Expr) : M Unit := do
let maxGeneration ← getMaxGeneration
let mut curr := e
repeat
let n ← getENode curr
if n.generation < maxGeneration then
if let some (eArg, k') ← isOffset? curr |>.run then
if k' < k then
let c := c.updateGen n.generation
pushChoice { c with cnstrs := .offset pArg (k - k') eArg :: c.cnstrs }
else if k' == k then
if let some c ← matchArg? c pArg eArg |>.run then
pushChoice (c.updateGen n.generation)
else if k' > k then
let eArg' := mkNatAdd eArg (mkNatLit (k' - k))
let eArg' ← shareCommon (← canon eArg')
internalize eArg' (n.generation+1)
if let some c ← matchArg? c pArg eArg' |>.run then
pushChoice (c.updateGen n.generation)
else if let some k' ← evalNat curr |>.run then
if k' >= k then
let eArg' := mkNatLit (k' - k)
let eArg' ← shareCommon (← canon eArg')
internalize eArg' (n.generation+1)
if let some c ← matchArg? c pArg eArg' |>.run then
pushChoice (c.updateGen n.generation)
curr ← getNext curr
if isSameExpr curr e then break

/-- Processes `continue` constraint used to implement multi-patterns. -/
private def processContinue (c : Choice) (p : Expr) : M Unit := do
let some apps := (← getThe Goal).appMap.find? p.toHeadIndex
| return ()
let maxGeneration ← getMaxGeneration
for app in apps do
let n ← getENode app
if n.generation < maxGeneration
&& (n.heqProofs || n.isCongrRoot) then
if let some c ← matchArgs? c p app |>.run then
let gen := n.generation
let c := { c with gen := Nat.max gen c.gen }
modify fun s => { s with choiceStack := c :: s.choiceStack }

/-- Helper function for marking parts of `match`-equation theorem as "do-not-simplify" -/
private partial def annotateMatchEqnType (prop : Expr) : M Expr := do
if let .forallE n d b bi := prop then
withLocalDecl n bi (← markAsDoNotSimp d) fun x => do
mkForallFVars #[x] (← annotateMatchEqnType (b.instantiate1 x))
else
let_expr f@Eq α lhs rhs := prop | return prop
return mkApp3 f α (← markAsDoNotSimp lhs) rhs

/--
Stores new theorem instance in the state.
Recall that new instances are internalized later, after a full round of ematching.
-/
private def addNewInstance (origin : Origin) (proof : Expr) (generation : Nat) : M Unit := do
let proof ← instantiateMVars proof
if grind.debug.proofs.get (← getOptions) then
check proof
let mut prop ← inferType proof
if Match.isMatchEqnTheorem (← getEnv) origin.key then
prop ← annotateMatchEqnType prop
trace[grind.ematch.instance] "{← origin.pp}: {prop}"
addTheoremInstance proof prop (generation+1)

/--
After processing a (multi-)pattern, use the choice assignment to instantiate the proof.
Missing parameters are synthesized using type inference and type class synthesis.
-/
private partial def instantiateTheorem (c : Choice) : M Unit := withDefault do withNewMCtxDepth do
let thm := (← read).thm
unless (← markTheoremInstance thm.proof c.assignment) do
return ()
trace[grind.ematch.instance.assignment] "{← thm.origin.pp}: {assignmentToMessageData c.assignment}"
let proof ← thm.getProofWithFreshMVarLevels
let numParams := thm.numParams
assert! c.assignment.size == numParams
let (mvars, bis, _) ← forallMetaBoundedTelescope (← inferType proof) numParams
if mvars.size != thm.numParams then
trace[grind.issues] "unexpected number of parameters at {← thm.origin.pp}"
return ()
-- Apply assignment
for h : i in [:mvars.size] do
let v := c.assignment[numParams - i - 1]!
unless isSameExpr v unassigned do
let mvarId := mvars[i].mvarId!
unless (← isDefEq (← mvarId.getType) (← inferType v) <&&> mvarId.checkedAssign v) do
trace[grind.issues] "type error constructing proof for {← thm.origin.pp}\nwhen assigning metavariable {mvars[i]} with {indentExpr v}"
return ()
-- Synthesize instances
for mvar in mvars, bi in bis do
if bi.isInstImplicit && !(← mvar.mvarId!.isAssigned) then
let type ← inferType mvar
unless (← synthesizeInstance mvar type) do
trace[grind.issues] "failed to synthesize instance when instantiating {← thm.origin.pp}{indentExpr type}"
return ()
let proof := mkAppN proof mvars
if (← mvars.allM (·.mvarId!.isAssigned)) then
addNewInstance thm.origin proof c.gen
else
let mvars ← mvars.filterM fun mvar => return !(← mvar.mvarId!.isAssigned)
if let some mvarBad ← mvars.findM? fun mvar => return !(← isProof mvar) then
trace[grind.issues] "failed to instantiate {← thm.origin.pp}, failed to instantiate non propositional argument with type{indentExpr (← inferType mvarBad)}"
let proof ← mkLambdaFVars (binderInfoForMVars := .default) mvars (← instantiateMVars proof)
addNewInstance thm.origin proof c.gen
where
synthesizeInstance (x type : Expr) : MetaM Bool := do
let .some val ← trySynthInstance type | return false
isDefEq x val

/-- Process choice stack until we don't have more choices to be processed. -/
private def processChoices : M Unit := do
let maxGeneration ← getMaxGeneration
while !(← get).choiceStack.isEmpty do
checkSystem "ematch"
if (← checkMaxInstancesExceeded) then return ()
let c ← modifyGet fun s : State => (s.choiceStack.head!, { s with choiceStack := s.choiceStack.tail! })
if c.gen < maxGeneration then
match c.cnstrs with
| [] => instantiateTheorem c
| .match p e :: cnstrs => processMatch { c with cnstrs } p e
| .offset p k e :: cnstrs => processOffset { c with cnstrs } p k e
| .continue p :: cnstrs => processContinue { c with cnstrs } p

private def main (p : Expr) (cnstrs : List Cnstr) : M Unit := do
let some apps := (← getThe Goal).appMap.find? p.toHeadIndex
| return ()
let numParams := (← read).thm.numParams
let assignment := mkArray numParams unassigned
let useMT := (← read).useMT
let gmt := (← getThe Goal).gmt
for app in apps do
if (← checkMaxInstancesExceeded) then return ()
let n ← getENode app
if (n.heqProofs || n.isCongrRoot) &&
(!useMT || n.mt == gmt) then
if let some c ← matchArgs? { cnstrs, assignment, gen := n.generation } p app |>.run then
modify fun s => { s with choiceStack := [c] }
processChoices

def ematchTheorem (thm : EMatchTheorem) : M Unit := do
if (← checkMaxInstancesExceeded) then return ()
withReader (fun ctx => { ctx with thm }) do
let ps := thm.patterns
match ps, (← read).useMT with
| [p], _ => main p []
| p::ps, false => main p (ps.map (.continue ·))
| _::_, true => tryAll ps []
| _, _ => unreachable!
where
/--
When using the mod-time optimization with multi-patterns,
we must start ematching at each different pattern. That is,
if we have `[p₁, p₂, p₃]`, we must execute
- `main p₁ [.continue p₂, .continue p₃]`
- `main p₂ [.continue p₁, .continue p₃]`
- `main p₃ [.continue p₁, .continue p₂]`
-/
tryAll (ps : List Expr) (cs : List Cnstr) : M Unit := do
match ps with
| [] => return ()
| p::ps =>
main p (cs.reverse ++ (ps.map (.continue ·)))
tryAll ps (.continue p :: cs)

def ematchTheorems (thms : PArray EMatchTheorem) : M Unit := do
thms.forM ematchTheorem

end EMatch

open EMatch

/-- Performs one round of E-matching, and returns new instances. -/
def ematch : GoalM Unit := do
let go (thms newThms : PArray EMatchTheorem) : EMatch.M Unit := do
withReader (fun ctx => { ctx with useMT := true }) <| ematchTheorems thms
withReader (fun ctx => { ctx with useMT := false }) <| ematchTheorems newThms
if (← checkMaxInstancesExceeded <||> checkMaxEmatchExceeded) then
return ()
else
go (← get).thms (← get).newThms |>.run'
modify fun s => { s with
thms := s.thms ++ s.newThms
newThms := {}
gmt := s.gmt + 1
numEmatch := s.numEmatch + 1
}

/-- Performs one round of E-matching, and asserts new instances. -/
def ematchAndAssert? (goal : Goal) : GrindM (Option (List Goal)) := do
let numInstances := goal.numInstances
let goal ← GoalM.run' goal ematch
if goal.numInstances == numInstances then
return none
assertAll goal

def ematchStar (goal : Goal) : GrindM (List Goal) := do
iterate goal ematchAndAssert?

end Lean.Meta.Grind
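The module docstring above describes E-matching as a backtracking search over a list of constraints: a pattern is matched against candidate terms, variable bindings are accumulated in a partial assignment, and conflicting bindings prune the branch. The core loop can be sketched in ordinary Python (illustrative only; the term representation, the `eqv` oracle, and all names here are invented for the example and correspond only loosely to `Cnstr`/`Choice` above):

```python
# E-matching as backtracking search over a constraint list.
# Terms/patterns are tuples ("f", arg1, ...) or atoms; pattern variables
# are strings starting with "?".

def match_constraints(cnstrs, assignment, eqv):
    """Process constraints depth-first; yield every complete assignment.

    `eqv(a, b)` decides whether two ground terms are in the same
    equivalence class (plain equality in this toy version)."""
    if not cnstrs:
        yield dict(assignment)
        return
    (pat, term), rest = cnstrs[0], cnstrs[1:]
    if isinstance(pat, str) and pat.startswith("?"):
        bound = assignment.get(pat)
        if bound is None:
            # New binding; like Choice.assignment, conflicts prune later.
            yield from match_constraints(rest, {**assignment, pat: term}, eqv)
        elif eqv(bound, term):
            yield from match_constraints(rest, assignment, eqv)
    elif isinstance(pat, tuple) and isinstance(term, tuple) \
            and pat[0] == term[0] and len(pat) == len(term):
        # Same head symbol: push argument constraints (cf. matchArgs?).
        new = list(zip(pat[1:], term[1:])) + rest
        yield from match_constraints(new, assignment, eqv)
    elif not isinstance(pat, tuple) and eqv(pat, term):
        # Ground pattern constant.
        yield from match_constraints(rest, assignment, eqv)

def ematch(pattern, terms, eqv=lambda a, b: a == b):
    out = []
    for t in terms:
        out.extend(match_constraints([(pattern, t)], {}, eqv))
    return out

sols = ematch(("f", "?x", "a"), [("f", "b", "a"), ("f", "c", "d")])
assert sols == [{"?x": "b"}]
```

Unlike the real implementation, this sketch uses explicit recursion instead of a choice stack, and matches against concrete terms rather than equivalence classes, but the constraint-list structure is the same.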
|
||||
@@ -1,667 +0,0 @@
|
||||
/-
|
||||
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
Released under Apache 2.0 license as described in the file LICENSE.
|
||||
Authors: Leonardo de Moura
|
||||
-/
|
||||
prelude
|
||||
import Init.Grind.Util
|
||||
import Init.Grind.Tactics
|
||||
import Lean.HeadIndex
|
||||
import Lean.PrettyPrinter
|
||||
import Lean.Util.FoldConsts
|
||||
import Lean.Util.CollectFVars
|
||||
import Lean.Meta.Basic
|
||||
import Lean.Meta.InferType
|
||||
import Lean.Meta.Eqns
|
||||
import Lean.Meta.Tactic.Grind.Util
|
||||
|
||||
namespace Lean.Meta.Grind
|
||||
|
||||
def mkOffsetPattern (pat : Expr) (k : Nat) : Expr :=
|
||||
mkApp2 (mkConst ``Grind.offset) pat (mkRawNatLit k)
|
||||
|
||||
private def detectOffsets (pat : Expr) : MetaM Expr := do
|
||||
let pre (e : Expr) := do
|
||||
if e == pat then
|
||||
-- We only consider nested offset patterns
|
||||
return .continue e
|
||||
else match e with
|
||||
| .letE .. | .lam .. | .forallE .. => return .done e
|
||||
| _ =>
|
||||
let some (e, k) ← isOffset? e
|
||||
| return .continue e
|
||||
if k == 0 then return .continue e
|
||||
return .continue <| mkOffsetPattern e k
|
||||
Core.transform pat (pre := pre)
|
||||
|
||||
def isOffsetPattern? (pat : Expr) : Option (Expr × Nat) := Id.run do
|
||||
let_expr Grind.offset pat k := pat | none
|
||||
let .lit (.natVal k) := k | none
|
||||
return some (pat, k)
|
||||
|
||||
def preprocessPattern (pat : Expr) (normalizePattern := true) : MetaM Expr := do
|
||||
let pat ← instantiateMVars pat
|
||||
let pat ← unfoldReducible pat
|
||||
let pat ← if normalizePattern then normalize pat else pure pat
|
||||
let pat ← detectOffsets pat
|
||||
let pat ← foldProjs pat
|
||||
return pat
|
||||
|
||||
inductive Origin where
|
||||
/-- A global declaration in the environment. -/
|
||||
| decl (declName : Name)
|
||||
/-- A local hypothesis. -/
|
||||
| fvar (fvarId : FVarId)
|
||||
/--
|
||||
A proof term provided directly to a call to `grind` where `ref`
|
||||
is the provided grind argument. The `id` is a unique identifier for the call.
|
||||
-/
|
||||
| stx (id : Name) (ref : Syntax)
|
||||
| other
|
||||
deriving Inhabited, Repr
|
||||
|
||||
/-- A unique identifier corresponding to the origin. -/
|
||||
def Origin.key : Origin → Name
|
||||
| .decl declName => declName
|
||||
| .fvar fvarId => fvarId.name
|
||||
| .stx id _ => id
|
||||
| .other => `other
|
||||
|
||||
def Origin.pp [Monad m] [MonadEnv m] [MonadError m] (o : Origin) : m MessageData := do
|
||||
match o with
|
||||
| .decl declName => return MessageData.ofConst (← mkConstWithLevelParams declName)
|
||||
| .fvar fvarId => return mkFVar fvarId
|
||||
| .stx _ ref => return ref
|
||||
| .other => return "[unknown]"
|
||||
|
||||
/-- A theorem for heuristic instantiation based on E-matching. -/
|
||||
structure EMatchTheorem where
|
||||
/--
|
||||
It stores universe parameter names for universe polymorphic proofs.
|
||||
Recall that it is non-empty only when we elaborate an expression provided by the user.
|
||||
When `proof` is just a constant, we can use the universe parameter names stored in the declaration.
|
||||
-/
|
||||
levelParams : Array Name
|
||||
proof : Expr
|
||||
numParams : Nat
|
||||
patterns : List Expr
|
||||
/-- Contains all symbols used in `pattterns`. -/
|
||||
symbols : List HeadIndex
|
||||
origin : Origin
|
||||
deriving Inhabited
|
||||
|
||||
/-- Set of E-matching theorems. -/
|
||||
structure EMatchTheorems where
|
||||
/-- The key is a symbol from `EMatchTheorem.symbols`. -/
|
||||
private map : PHashMap Name (List EMatchTheorem) := {}
|
||||
/-- Set of theorem names that have been inserted using `insert`. -/
|
||||
private thmNames : PHashSet Name := {}
|
||||
deriving Inhabited
|
||||
|
||||
/--
|
||||
Inserts a `thm` with symbols `[s_1, ..., s_n]` to `s`.
|
||||
We add `s_1 -> { thm with symbols := [s_2, ..., s_n] }`.
|
||||
When `grind` internalizes a term containing symbol `s`, we
|
||||
process all theorems `thm` associated with key `s`.
|
||||
If their `thm.symbols` is empty, we say they are activated.
|
||||
Otherwise, we reinsert into `map`.
|
||||
-/
|
||||
def EMatchTheorems.insert (s : EMatchTheorems) (thm : EMatchTheorem) : EMatchTheorems := Id.run do
|
||||
let .const declName :: syms := thm.symbols
|
||||
| unreachable!
|
||||
let thm := { thm with symbols := syms }
|
||||
let { map, thmNames } := s
|
||||
let thmNames := thmNames.insert thm.origin.key
|
||||
if let some thms := map.find? declName then
|
||||
return { map := map.insert declName (thm::thms), thmNames }
|
||||
else
|
||||
return { map := map.insert declName [thm], thmNames }
|
||||
|
||||
/--
|
||||
Retrieves theorems from `s` associated with the given symbol. See `EMatchTheorem.insert`.
|
||||
The theorems are removed from `s`.
|
||||
-/
|
||||
@[inline]
|
||||
def EMatchTheorems.retrieve? (s : EMatchTheorems) (sym : Name) : Option (List EMatchTheorem × EMatchTheorems) :=
|
||||
if let some thms := s.map.find? sym then
|
||||
some (thms, { s with map := s.map.erase sym })
|
||||
else
|
||||
none
|
||||
|
||||
/-- Returns `true` if `declName` is the name of a theorem that was inserted using `insert`. -/
|
||||
def EMatchTheorems.containsTheoremName (s : EMatchTheorems) (declName : Name) : Bool :=
|
||||
s.thmNames.contains declName
|
||||
|
||||
def EMatchTheorem.getProofWithFreshMVarLevels (thm : EMatchTheorem) : MetaM Expr := do
|
||||
if thm.proof.isConst && thm.levelParams.isEmpty then
|
||||
let declName := thm.proof.constName!
|
||||
let info ← getConstInfo declName
|
||||
if info.levelParams.isEmpty then
|
||||
return thm.proof
|
||||
else
|
||||
mkConstWithFreshMVarLevels declName
|
||||
else if thm.levelParams.isEmpty then
|
||||
return thm.proof
|
||||
else
|
||||
let us ← thm.levelParams.mapM fun _ => mkFreshLevelMVar
|
||||
return thm.proof.instantiateLevelParamsArray thm.levelParams us
|
||||
|
||||
private builtin_initialize ematchTheoremsExt : SimpleScopedEnvExtension EMatchTheorem EMatchTheorems ←
|
||||
registerSimpleScopedEnvExtension {
|
||||
addEntry := EMatchTheorems.insert
|
||||
initial := {}
|
||||
}
|
||||
|
||||
-- TODO: create attribute?
|
||||
private def forbiddenDeclNames := #[``Eq, ``HEq, ``Iff, ``And, ``Or, ``Not]
|
||||
|
||||
private def isForbidden (declName : Name) := forbiddenDeclNames.contains declName
|
||||
|
||||
private def dontCare := mkConst (Name.mkSimple "[grind_dontcare]")
|
||||
|
||||
def mkGroundPattern (e : Expr) : Expr :=
|
||||
mkAnnotation `grind.ground_pat e
|
||||
|
||||
def groundPattern? (e : Expr) : Option Expr :=
|
||||
annotation? `grind.ground_pat e
|
||||
|
||||
private def isGroundPattern (e : Expr) : Bool :=
|
||||
groundPattern? e |>.isSome
|
||||
|
||||
def isPatternDontCare (e : Expr) : Bool :=
|
||||
e == dontCare
|
||||
|
||||
private def isAtomicPattern (e : Expr) : Bool :=
|
||||
e.isBVar || isPatternDontCare e || isGroundPattern e
|
||||
|
||||
partial def ppPattern (pattern : Expr) : MessageData := Id.run do
|
||||
if let some e := groundPattern? pattern then
|
||||
return m!"`[{e}]"
|
||||
else if isPatternDontCare pattern then
|
||||
return m!"?"
|
||||
else match pattern with
|
||||
| .bvar idx => return m!"#{idx}"
|
||||
| _ =>
|
||||
let mut r := m!"{pattern.getAppFn}"
|
||||
for arg in pattern.getAppArgs do
|
||||
let mut argFmt ← ppPattern arg
|
||||
if !isAtomicPattern arg then
|
||||
argFmt := MessageData.paren argFmt
|
||||
r := r ++ " " ++ argFmt
|
||||
return r
|
||||
|
||||
namespace NormalizePattern
|
||||
|
||||
structure State where
|
||||
symbols : Array HeadIndex := #[]
|
||||
symbolSet : Std.HashSet HeadIndex := {}
|
||||
bvarsFound : Std.HashSet Nat := {}
|
||||
|
||||
abbrev M := StateRefT State MetaM
|
||||
|
||||
private def saveSymbol (h : HeadIndex) : M Unit := do
|
||||
unless (← get).symbolSet.contains h do
|
||||
modify fun s => { s with symbols := s.symbols.push h, symbolSet := s.symbolSet.insert h }
|
||||
|
||||
private def foundBVar (idx : Nat) : M Bool :=
|
||||
return (← get).bvarsFound.contains idx
|
||||
|
||||
private def saveBVar (idx : Nat) : M Unit := do
|
||||
modify fun s => { s with bvarsFound := s.bvarsFound.insert idx }
|
||||
|
||||
private def getPatternFn? (pattern : Expr) : Option Expr :=
|
||||
if !pattern.isApp then
|
||||
none
|
||||
else match pattern.getAppFn with
|
||||
| f@(.const declName _) => if isForbidden declName then none else some f
|
||||
| f@(.fvar _) => some f
|
||||
| _ => none
|
||||
|
||||
/--
|
||||
Returns a bit-mask `mask` s.t. `mask[i]` is true if the the corresponding argument is
|
||||
- a type (that is not a proposition) or type former, or
|
||||
- a proof, or
|
||||
- an instance implicit argument
|
||||
|
||||
When `mask[i]`, we say the corresponding argument is a "support" argument.
|
||||
-/
|
||||
def getPatternSupportMask (f : Expr) (numArgs : Nat) : MetaM (Array Bool) := do
|
||||
forallBoundedTelescope (← inferType f) numArgs fun xs _ => do
|
||||
xs.mapM fun x => do
|
||||
if (← isProp x) then
|
||||
return false
|
||||
else if (← isTypeFormer x <||> isProof x) then
|
||||
return true
|
||||
else
|
||||
return (← x.fvarId!.getDecl).binderInfo matches .instImplicit
|
||||
|
||||
private partial def go (pattern : Expr) (root := false) : M Expr := do
  if root && !pattern.hasLooseBVars then
    throwError "invalid pattern, it does not have pattern variables"
  if let some (e, k) := isOffsetPattern? pattern then
    let e ← goArg e (isSupport := false)
    if e == dontCare then
      return dontCare
    else
      return mkOffsetPattern e k
  let some f := getPatternFn? pattern
    | throwError "invalid pattern, (non-forbidden) application expected{indentExpr pattern}"
  assert! f.isConst || f.isFVar
  saveSymbol f.toHeadIndex
  let mut args := pattern.getAppArgs
  let supportMask ← getPatternSupportMask f args.size
  for i in [:args.size] do
    let arg := args[i]!
    let isSupport := supportMask[i]?.getD false
    args := args.set! i (← goArg arg isSupport)
  return mkAppN f args
where
  goArg (arg : Expr) (isSupport : Bool) : M Expr := do
    if !arg.hasLooseBVars then
      if arg.hasMVar then
        pure dontCare
      else
        pure <| mkGroundPattern arg
    else match arg with
      | .bvar idx =>
        if isSupport && (← foundBVar idx) then
          pure dontCare
        else
          saveBVar idx
          pure arg
      | _ =>
        if isSupport then
          pure dontCare
        else if let some _ := getPatternFn? arg then
          go arg
        else
          pure dontCare

def main (patterns : List Expr) : MetaM (List Expr × List HeadIndex × Std.HashSet Nat) := do
  let (patterns, s) ← patterns.mapM go |>.run {}
  return (patterns, s.symbols.toList, s.bvarsFound)

def normalizePattern (e : Expr) : M Expr := do
  go e

end NormalizePattern

/--
Returns `true` if the free variables in `type` are not in `thmVars` or are in `fvarsFound`.
We use this function to check whether `type` is fully instantiated.
-/
private def checkTypeFVars (thmVars : FVarIdSet) (fvarsFound : FVarIdSet) (type : Expr) : Bool :=
  let typeFVars := (collectFVars {} type).fvarIds
  typeFVars.all fun fvarId => !thmVars.contains fvarId || fvarsFound.contains fvarId

/--
Given a type class instance type `instType`, returns `true` if the free variables in its input parameters
1- are not in `thmVars`, or
2- are in `fvarsFound`.
Remark: `fvarsFound` is a subset of `thmVars`.
-/
private def canBeSynthesized (thmVars : FVarIdSet) (fvarsFound : FVarIdSet) (instType : Expr) : MetaM Bool := do
  forallTelescopeReducing instType fun xs type => type.withApp fun classFn classArgs => do
    for x in xs do
      unless checkTypeFVars thmVars fvarsFound (← inferType x) do return false
    forallBoundedTelescope (← inferType classFn) type.getAppNumArgs fun params _ => do
      for param in params, classArg in classArgs do
        let paramType ← inferType param
        if !paramType.isAppOf ``semiOutParam && !paramType.isAppOf ``outParam then
          unless checkTypeFVars thmVars fvarsFound classArg do
            return false
      return true

/--
Auxiliary type for the `checkCoverage` function.
-/
inductive CheckCoverageResult where
  | /-- `checkCoverage` succeeded -/
    ok
  | /--
    `checkCoverage` failed because some of the theorem parameters are missing,
    `pos` contains their positions
    -/
    missing (pos : List Nat)

/--
After we process a set of patterns, we obtain the set of de Bruijn indices occurring in these patterns.
We say they are pattern variables. This function checks whether the set of pattern variables is sufficient for
instantiating the theorem with proof `thmProof`. The theorem has `numParams` parameters.
For the missing parameters:
1- we may be able to infer them using type inference or type class synthesis, or
2- they are propositions, and may become hypotheses of the instantiated theorem.

For type class instance parameters, we must check whether the free variables in class input parameters are available.
-/
private def checkCoverage (thmProof : Expr) (numParams : Nat) (bvarsFound : Std.HashSet Nat) : MetaM CheckCoverageResult := do
  if bvarsFound.size == numParams then return .ok
  forallBoundedTelescope (← inferType thmProof) numParams fun xs _ => do
    assert! numParams == xs.size
    let patternVars := bvarsFound.toList.map fun bidx => xs[numParams - bidx - 1]!.fvarId!
    -- `xs` as a `FVarIdSet`.
    let thmVars : FVarIdSet := RBTree.ofList <| xs.toList.map (·.fvarId!)
    -- Collect free variables occurring in `e`, and insert the ones that are in `thmVars` into `fvarsFound`
    let update (fvarsFound : FVarIdSet) (e : Expr) : FVarIdSet :=
      (collectFVars {} e).fvarIds.foldl (init := fvarsFound) fun s fvarId =>
        if thmVars.contains fvarId then s.insert fvarId else s
    -- Theorem variables found so far. We initialize it with the variables occurring in the patterns.
    -- Remark: `fvarsFound` is a subset of `thmVars`.
    let mut fvarsFound : FVarIdSet := RBTree.ofList patternVars
    for patternVar in patternVars do
      let type ← patternVar.getType
      fvarsFound := update fvarsFound type
    if fvarsFound.size == numParams then return .ok
    -- Now, we keep traversing the remaining variables, collecting new ones.
    -- `processed` contains the variables we have already processed.
    let mut processed : FVarIdSet := RBTree.ofList patternVars
    let mut modified := false
    repeat
      modified := false
      for x in xs do
        let fvarId := x.fvarId!
        unless processed.contains fvarId do
          let xType ← inferType x
          if fvarsFound.contains fvarId then
            -- Collect free vars in `x`'s type and mark it as processed.
            fvarsFound := update fvarsFound xType
            processed := processed.insert fvarId
            modified := true
          else if (← isProp xType) then
            -- If `x` is a proposition, and all theorem variables in `x`'s type have already been found,
            -- add it to `fvarsFound` and mark it as processed.
            if checkTypeFVars thmVars fvarsFound xType then
              fvarsFound := fvarsFound.insert fvarId
              processed := processed.insert fvarId
              modified := true
          else if (← fvarId.getDecl).binderInfo matches .instImplicit then
            -- If `x` is instance implicit, check whether
            -- we have found all free variables needed to synthesize the instance.
            if (← canBeSynthesized thmVars fvarsFound xType) then
              fvarsFound := fvarsFound.insert fvarId
              fvarsFound := update fvarsFound xType
              processed := processed.insert fvarId
              modified := true
      if fvarsFound.size == numParams then
        return .ok
      if !modified then
        break
    let mut pos := #[]
    for h : i in [:xs.size] do
      let fvarId := xs[i].fvarId!
      unless fvarsFound.contains fvarId do
        pos := pos.push i
    return .missing pos.toList
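-- Illustrative sketch (hypothetical example, not part of the original source):
-- for `theorem foo : ∀ (α : Type) (a b : α), a = b → b = a` with a pattern that
-- only mentions `a` and `b`, `checkCoverage` first marks `α` as covered because
-- it occurs in the types of `a` and `b`, and then marks the proposition `a = b`
-- as covered because all of its theorem variables have been found (it can
-- become a hypothesis of the instance), so the result is `.ok`.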

/--
Given a theorem with proof `proof` and `numParams` parameters, returns a message
containing the parameters at positions `paramPos`.
-/
private def ppParamsAt (proof : Expr) (numParams : Nat) (paramPos : List Nat) : MetaM MessageData := do
  forallBoundedTelescope (← inferType proof) numParams fun xs _ => do
    let mut msg := m!""
    let mut first := true
    for h : i in [:xs.size] do
      if paramPos.contains i then
        let x := xs[i]
        if first then first := false else msg := msg ++ "\n"
        msg := msg ++ m!"{x} : {← inferType x}"
    addMessageContextFull msg

/--
Creates an E-matching theorem for a theorem with proof `proof`, `numParams` parameters, and the given set of patterns.
Pattern variables are represented using de Bruijn indices.
-/
def mkEMatchTheoremCore (origin : Origin) (levelParams : Array Name) (numParams : Nat) (proof : Expr) (patterns : List Expr) : MetaM EMatchTheorem := do
  let (patterns, symbols, bvarFound) ← NormalizePattern.main patterns
  trace[grind.ematch.pattern] "{MessageData.ofConst proof}: {patterns.map ppPattern}"
  if let .missing pos ← checkCoverage proof numParams bvarFound then
    let pats : MessageData := m!"{patterns.map ppPattern}"
    throwError "invalid pattern(s) for `{← origin.pp}`{indentD pats}\nthe following theorem parameters cannot be instantiated:{indentD (← ppParamsAt proof numParams pos)}"
  return {
    proof, patterns, numParams, symbols
    levelParams, origin
  }

private def getProofFor (declName : Name) : CoreM Expr := do
  let .thmInfo info ← getConstInfo declName
    | throwError "`{declName}` is not a theorem"
  let us := info.levelParams.map mkLevelParam
  return mkConst declName us

/--
Creates an E-matching theorem for `declName` with `numParams` parameters, and the given set of patterns.
Pattern variables are represented using de Bruijn indices.
-/
def mkEMatchTheorem (declName : Name) (numParams : Nat) (patterns : List Expr) : MetaM EMatchTheorem := do
  mkEMatchTheoremCore (.decl declName) #[] numParams (← getProofFor declName) patterns

/--
Given a theorem with proof `proof` and type of the form `∀ (a_1 ... a_n), lhs = rhs`,
creates an E-matching pattern for it using `addEMatchTheorem n [lhs]`.
If `normalizePattern` is true, it applies the `grind` simplification theorems and simprocs to the pattern.
-/
def mkEMatchEqTheoremCore (origin : Origin) (levelParams : Array Name) (proof : Expr) (normalizePattern : Bool) (useLhs : Bool) : MetaM EMatchTheorem := do
  let (numParams, patterns) ← forallTelescopeReducing (← inferType proof) fun xs type => do
    let (lhs, rhs) ← match_expr type with
      | Eq _ lhs rhs => pure (lhs, rhs)
      | Iff lhs rhs => pure (lhs, rhs)
      | HEq _ lhs _ rhs => pure (lhs, rhs)
      | _ => throwError "invalid E-matching equality theorem, conclusion must be an equality{indentExpr type}"
    let pat := if useLhs then lhs else rhs
    let pat ← preprocessPattern pat normalizePattern
    return (xs.size, [pat.abstract xs])
  mkEMatchTheoremCore origin levelParams numParams proof patterns
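-- Illustrative sketch (hypothetical example, not part of the original source):
-- for an equation such as `theorem size_push (a : Array α) (x : α) :
-- (a.push x).size = a.size + 1`, calling `mkEMatchEqTheoremCore` with
-- `useLhs := true` would extract the left-hand side `(a.push x).size` as the
-- single E-matching pattern, abstracting `a` and `x` into pattern variables
-- represented by de Bruijn indices.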

/--
Given a theorem with name `declName` and type of the form `∀ (a_1 ... a_n), lhs = rhs`,
creates an E-matching pattern for it using `addEMatchTheorem n [lhs]`.

If `normalizePattern` is true, it applies the `grind` simplification theorems and simprocs to the
pattern.
-/
def mkEMatchEqTheorem (declName : Name) (normalizePattern := true) (useLhs : Bool := true) : MetaM EMatchTheorem := do
  mkEMatchEqTheoremCore (.decl declName) #[] (← getProofFor declName) normalizePattern useLhs

/--
Adds an E-matching theorem to the environment.
See `mkEMatchTheorem`.
-/
def addEMatchTheorem (declName : Name) (numParams : Nat) (patterns : List Expr) : MetaM Unit := do
  ematchTheoremsExt.add (← mkEMatchTheorem declName numParams patterns)

/--
Adds an E-matching equality theorem to the environment.
See `mkEMatchEqTheorem`.
-/
def addEMatchEqTheorem (declName : Name) : MetaM Unit := do
  ematchTheoremsExt.add (← mkEMatchEqTheorem declName)

/-- Returns the E-matching theorems registered in the environment. -/
def getEMatchTheorems : CoreM EMatchTheorems :=
  return ematchTheoremsExt.getState (← getEnv)

private inductive TheoremKind where
  | eqLhs | eqRhs | eqBoth | fwd | bwd | default
  deriving Inhabited, BEq

private def TheoremKind.toAttribute : TheoremKind → String
  | .eqLhs => "[grind =]"
  | .eqRhs => "[grind =_]"
  | .eqBoth => "[grind _=_]"
  | .fwd => "[grind →]"
  | .bwd => "[grind ←]"
  | .default => "[grind]"

private def TheoremKind.explainFailure : TheoremKind → String
  | .eqLhs => "failed to find pattern in the left-hand side of the theorem's conclusion"
  | .eqRhs => "failed to find pattern in the right-hand side of the theorem's conclusion"
  | .eqBoth => unreachable! -- `eqBoth` is a macro
  | .fwd => "failed to find patterns in the antecedents of the theorem"
  | .bwd => "failed to find patterns in the theorem's conclusion"
  | .default => "failed to find patterns"

/-- Returns the types of `xs` that are propositions. -/
private def getPropTypes (xs : Array Expr) : MetaM (Array Expr) :=
  xs.filterMapM fun x => do
    let type ← inferType x
    if (← isProp type) then return some type else return none

/-- State for the (pattern) `CollectorM` monad. -/
private structure Collector.State where
  /-- Patterns found so far. -/
  patterns : Array Expr := #[]
  done : Bool := false

private structure Collector.Context where
  proof : Expr
  xs : Array Expr

/-- Monad for collecting patterns for a theorem. -/
private abbrev CollectorM := ReaderT Collector.Context $ StateRefT Collector.State NormalizePattern.M

/-- Similar to `getPatternFn?`, but operates on expressions that do not contain loose de Bruijn variables. -/
private def isPatternFnCandidate (f : Expr) : CollectorM Bool := do
  match f with
  | .const declName _ => return !isForbidden declName
  | .fvar .. => return !(← read).xs.contains f
  | _ => return false

private def addNewPattern (p : Expr) : CollectorM Unit := do
  trace[grind.ematch.pattern.search] "found pattern: {ppPattern p}"
  let bvarsFound := (← getThe NormalizePattern.State).bvarsFound
  let done := (← checkCoverage (← read).proof (← read).xs.size bvarsFound) matches .ok
  if done then
    trace[grind.ematch.pattern.search] "found full coverage"
  modify fun s => { s with patterns := s.patterns.push p, done }

private partial def collect (e : Expr) : CollectorM Unit := do
  if (← get).done then return ()
  match e with
  | .app .. =>
    let f := e.getAppFn
    if (← isPatternFnCandidate f) then
      let saved ← getThe NormalizePattern.State
      try
        trace[grind.ematch.pattern.search] "candidate: {e}"
        let p := e.abstract (← read).xs
        unless p.hasLooseBVars do
          trace[grind.ematch.pattern.search] "skip, does not contain pattern variables"
          return ()
        let p ← NormalizePattern.normalizePattern p
        if saved.bvarsFound.size < (← getThe NormalizePattern.State).bvarsFound.size then
          addNewPattern p
          return ()
        trace[grind.ematch.pattern.search] "skip, no new variables covered"
        -- restore state and continue search
        set saved
      catch _ =>
        -- restore state and continue search
        trace[grind.ematch.pattern.search] "skip, exception during normalization"
        set saved
    let args := e.getAppArgs
    for arg in args, flag in (← NormalizePattern.getPatternSupportMask f args.size) do
      unless flag do
        collect arg
  | .forallE _ d b _ =>
    if (← pure e.isArrow <&&> isProp d <&&> isProp b) then
      collect d
      collect b
  | _ => return ()

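-- Illustrative sketch (hypothetical example, not part of the original source):
-- when searching `f (g a) b = a` for patterns, `collect` first tries the whole
-- application as a candidate; if normalizing it covers all remaining theorem
-- variables, the search stops with full coverage. Otherwise it recurses into
-- non-support subterms such as `g a`, accumulating patterns until
-- `checkCoverage` returns `.ok`.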
private def collectPatterns? (proof : Expr) (xs : Array Expr) (searchPlaces : Array Expr) : MetaM (Option (List Expr × List HeadIndex)) := do
  let go : CollectorM (Option (List Expr)) := do
    for place in searchPlaces do
      let place ← preprocessPattern place
      collect place
      if (← get).done then
        return some ((← get).patterns.toList)
    return none
  let (some ps, s) ← go { proof, xs } |>.run' {} |>.run {}
    | return none
  return some (ps, s.symbols.toList)

private def mkEMatchTheoremWithKind? (origin : Origin) (levelParams : Array Name) (proof : Expr) (kind : TheoremKind) : MetaM (Option EMatchTheorem) := do
  if kind == .eqLhs then
    return (← mkEMatchEqTheoremCore origin levelParams proof (normalizePattern := false) (useLhs := true))
  else if kind == .eqRhs then
    return (← mkEMatchEqTheoremCore origin levelParams proof (normalizePattern := false) (useLhs := false))
  let type ← inferType proof
  forallTelescopeReducing type fun xs type => do
    let searchPlaces ← match kind with
      | .fwd =>
        let ps ← getPropTypes xs
        if ps.isEmpty then
          throwError "invalid `grind` forward theorem, theorem `{← origin.pp}` does not have propositional hypotheses"
        pure ps
      | .bwd => pure #[type]
      | .default => pure <| #[type] ++ (← getPropTypes xs)
      | _ => unreachable!
    go xs searchPlaces
where
  go (xs : Array Expr) (searchPlaces : Array Expr) : MetaM (Option EMatchTheorem) := do
    let some (patterns, symbols) ← collectPatterns? proof xs searchPlaces
      | return none
    let numParams := xs.size
    trace[grind.ematch.pattern] "{← origin.pp}: {patterns.map ppPattern}"
    return some {
      proof, patterns, numParams, symbols
      levelParams, origin
    }

private def getKind (stx : Syntax) : TheoremKind :=
  if stx[1].isNone then
    .default
  else if stx[1][0].getKind == ``Parser.Attr.grindEq then
    .eqLhs
  else if stx[1][0].getKind == ``Parser.Attr.grindFwd then
    .fwd
  else if stx[1][0].getKind == ``Parser.Attr.grindEqRhs then
    .eqRhs
  else if stx[1][0].getKind == ``Parser.Attr.grindEqBoth then
    .eqBoth
  else
    .bwd

private def addGrindEqAttr (declName : Name) (attrKind : AttributeKind) (useLhs := true) : MetaM Unit := do
  if (← getConstInfo declName).isTheorem then
    ematchTheoremsExt.add (← mkEMatchEqTheorem declName (normalizePattern := true) (useLhs := useLhs)) attrKind
  else if let some eqns ← getEqnsFor? declName then
    unless useLhs do
      throwError "`{declName}` is a definition, you must only use the left-hand side for extracting patterns"
    for eqn in eqns do
      ematchTheoremsExt.add (← mkEMatchEqTheorem eqn) attrKind
  else
    throwError "`[grind_eq]` attribute can only be applied to equational theorems or function definitions"

private def addGrindAttr (declName : Name) (attrKind : AttributeKind) (thmKind : TheoremKind) : MetaM Unit := do
  if thmKind == .eqLhs then
    addGrindEqAttr declName attrKind (useLhs := true)
  else if thmKind == .eqRhs then
    addGrindEqAttr declName attrKind (useLhs := false)
  else if thmKind == .eqBoth then
    addGrindEqAttr declName attrKind (useLhs := true)
    addGrindEqAttr declName attrKind (useLhs := false)
  else if !(← getConstInfo declName).isTheorem then
    addGrindEqAttr declName attrKind
  else
    let some thm ← mkEMatchTheoremWithKind? (.decl declName) #[] (← getProofFor declName) thmKind
      | throwError "`@{thmKind.toAttribute} theorem {declName}` {thmKind.explainFailure}, consider using different options or the `grind_pattern` command"
    ematchTheoremsExt.add thm attrKind

builtin_initialize
  registerBuiltinAttribute {
    name := `grind
    descr :=
      "The `[grind]` attribute is used to annotate equational theorems and functions.\
       When applied to an equational theorem, it marks the theorem for use in heuristic instantiations by the `grind` tactic.\
       When applied to a function, it automatically annotates the equational theorems associated with that function.\
       The `grind` tactic utilizes annotated theorems to add instances of matching patterns into the local context during proof search.\
       For example, if a theorem `@[grind] theorem foo_idempotent : foo (foo x) = foo x` is annotated,\
       `grind` will add an instance of this theorem to the local context whenever it encounters the pattern `foo (foo x)`."
    applicationTime := .afterCompilation
    add := fun declName stx attrKind => do
      addGrindAttr declName attrKind (getKind stx) |>.run' {}
  }
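-- Usage sketch (illustrative; `foo` and `foo_idem` are hypothetical):
--
--   @[grind =] theorem foo_idem : foo (foo x) = foo x := ...
--
-- registers the left-hand side `foo (foo x)` as the E-matching pattern, so
-- `grind` instantiates `foo_idem` whenever it internalizes a term matching
-- `foo (foo x)`. The `[grind →]` and `[grind ←]` variants instead extract
-- patterns from the theorem's antecedents or conclusion, respectively.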

end Lean.Meta.Grind
@@ -1,34 +0,0 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Internalize
import Lean.Meta.Tactic.Grind.Simp

namespace Lean.Meta.Grind
/--
If `parent` is `p → q` (or `∀ h : p, q`) and `p` has been propagated to `True`,
simplifies `q` instantiated with `of_eq_true h₁`, internalizes the resulting `q'`,
and adds the equality `parent = q'` justified by `Lean.Grind.forall_propagator`.
-/
def propagateForallProp (parent : Expr) : GoalM Unit := do
  let .forallE n p q bi := parent | return ()
  trace[grind.debug.forallPropagator] "{parent}"
  unless (← isEqTrue p) do return ()
  trace[grind.debug.forallPropagator] "isEqTrue, {parent}"
  let h₁ ← mkEqTrueProof p
  let qh₁ := q.instantiate1 (mkApp2 (mkConst ``of_eq_true) p h₁)
  let r ← simp qh₁
  let q := mkLambda n bi p q
  let q' := r.expr
  internalize q' (← getGeneration parent)
  trace[grind.debug.forallPropagator] "q': {q'} for{indentExpr parent}"
  let h₂ ← r.getProof
  let h := mkApp5 (mkConst ``Lean.Grind.forall_propagator) p q q' h₁ h₂
  pushEq parent q' h

end Lean.Meta.Grind
@@ -6,10 +6,7 @@ Authors: Leonardo de Moura
prelude
import Init.Grind.Util
import Lean.Meta.LitValues
import Lean.Meta.Match.MatcherInfo
import Lean.Meta.Match.MatchEqsExt
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Util

namespace Lean.Meta.Grind

@@ -24,83 +21,20 @@ def addCongrTable (e : Expr) : GoalM Unit := do
    unless (← hasSameType f g) do
      trace[grind.issues] "found congruence between{indentExpr e}\nand{indentExpr e'}\nbut functions have different types"
      return ()
    trace[grind.debug.congr] "{e} = {e'}"
    trace[grind.congr] "{e} = {e'}"
    pushEqHEq e e' congrPlaceholderProof
    let node ← getENode e
    setENode e { node with congr := e' }
    setENode e { node with cgRoot := e' }
  else
    modify fun s => { s with congrTable := s.congrTable.insert { e } }

private def updateAppMap (e : Expr) : GoalM Unit := do
  let key := e.toHeadIndex
  modify fun s => { s with
    appMap := if let some es := s.appMap.find? key then
        s.appMap.insert key (e :: es)
      else
        s.appMap.insert key [e]
  }

mutual
/-- Internalizes the nested ground terms in the given pattern. -/
private partial def internalizePattern (pattern : Expr) (generation : Nat) : GoalM Expr := do
  if pattern.isBVar || isPatternDontCare pattern then
    return pattern
  else if let some e := groundPattern? pattern then
    let e ← shareCommon (← canon (← normalizeLevels (← unfoldReducible e)))
    internalize e generation
    return mkGroundPattern e
  else pattern.withApp fun f args => do
    return mkAppN f (← args.mapM (internalizePattern · generation))

private partial def activateTheorem (thm : EMatchTheorem) (generation : Nat) : GoalM Unit := do
  -- Recall that we use the proof as part of the key for a set of instances found so far.
  -- We don't want to use structural equality when comparing keys.
  let proof ← shareCommon thm.proof
  let thm := { thm with proof, patterns := (← thm.patterns.mapM (internalizePattern · generation)) }
  trace[grind.ematch] "activated `{thm.origin.key}`, {thm.patterns.map ppPattern}"
  modify fun s => { s with newThms := s.newThms.push thm }

/--
If `Config.matchEqs` is set to `true`, and `f` is a `match`-auxiliary function,
adds its equations to `newThms`.
-/
private partial def addMatchEqns (f : Expr) (generation : Nat) : GoalM Unit := do
  if !(← getConfig).matchEqs then return ()
  let .const declName _ := f | return ()
  if !(← isMatcher declName) then return ()
  if (← get).matchEqNames.contains declName then return ()
  modify fun s => { s with matchEqNames := s.matchEqNames.insert declName }
  for eqn in (← Match.getEquationsFor declName).eqnNames do
    -- We disable pattern normalization to prevent the `match`-expression from being reduced.
    activateTheorem (← mkEMatchEqTheorem eqn (normalizePattern := false)) generation

private partial def activateTheoremPatterns (fName : Name) (generation : Nat) : GoalM Unit := do
  if let some (thms, thmMap) := (← get).thmMap.retrieve? fName then
    modify fun s => { s with thmMap }
    let appMap := (← get).appMap
    for thm in thms do
      let symbols := thm.symbols.filter fun sym => !appMap.contains sym
      let thm := { thm with symbols }
      match symbols with
      | [] => activateTheorem thm generation
      | _ =>
        trace[grind.ematch] "reinsert `{thm.origin.key}`"
        modify fun s => { s with thmMap := s.thmMap.insert thm }

partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
  if (← alreadyInternalized e) then return ()
  trace[grind.internalize] "{e}"
  match e with
  | .bvar .. => unreachable!
  | .sort .. => return ()
  | .fvar .. | .letE .. | .lam .. =>
  | .fvar .. | .letE .. | .lam .. | .forallE .. =>
    mkENodeCore e (ctor := false) (interpreted := false) (generation := generation)
  | .forallE _ d _ _ =>
    mkENodeCore e (ctor := false) (interpreted := false) (generation := generation)
    if (← isProp d <&&> isProp e) then
      internalize d generation
      registerParent e d
    propagateUp e
  | .lit .. | .const .. =>
    mkENode e generation
  | .mvar ..
@@ -113,7 +47,6 @@ partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
    -- We do not want to internalize the components of a literal value.
    mkENode e generation
  else e.withApp fun f args => do
    addMatchEqns f generation
    if f.isConstOf ``Lean.Grind.nestedProof && args.size == 2 then
      -- We only internalize the proposition. We can skip the proof because of
      -- proof irrelevance
@@ -121,9 +54,7 @@ partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
      internalize c generation
      registerParent e c
    else
      if let .const fName _ := f then
        activateTheoremPatterns fName generation
      else
      unless f.isConst do
        internalize f generation
      registerParent e f
      for h : i in [: args.size] do
@@ -132,8 +63,6 @@ partial def internalize (e : Expr) (generation : Nat) : GoalM Unit := do
        registerParent e arg
      mkENode e generation
      addCongrTable e
      updateAppMap e
      propagateUp e
end

end Lean.Meta.Grind

@@ -1,154 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Assert
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
import Lean.Meta.Tactic.Grind.Core

namespace Lean.Meta.Grind

private inductive IntroResult where
  | done
  | newHyp (fvarId : FVarId) (goal : Goal)
  | newDepHyp (goal : Goal)
  | newLocal (fvarId : FVarId) (goal : Goal)
  deriving Inhabited

private def introNext (goal : Goal) (generation : Nat) : GrindM IntroResult := do
  let target ← goal.mvarId.getType
  if target.isArrow then
    goal.mvarId.withContext do
      let p := target.bindingDomain!
      if !(← isProp p) then
        let (fvarId, mvarId) ← goal.mvarId.intro1P
        return .newLocal fvarId { goal with mvarId }
      else
        let tag ← goal.mvarId.getTag
        let q := target.bindingBody!
        -- TODO: keep applying simp/eraseIrrelevantMData/canon/shareCommon until no progress
        let r ← simp p
        let fvarId ← mkFreshFVarId
        let lctx := (← getLCtx).mkLocalDecl fvarId target.bindingName! r.expr target.bindingInfo!
        let mvarNew ← mkFreshExprMVarAt lctx (← getLocalInstances) q .syntheticOpaque tag
        let mvarIdNew := mvarNew.mvarId!
        mvarIdNew.withContext do
          let h ← mkLambdaFVars #[mkFVar fvarId] mvarNew
          match r.proof? with
          | some he =>
            let hNew := mkAppN (mkConst ``Lean.Grind.intro_with_eq) #[p, r.expr, q, he, h]
            goal.mvarId.assign hNew
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
          | none =>
            -- `p` and `p'` are definitionally equal
            goal.mvarId.assign h
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
  else if target.isLet || target.isForall || target.isLetFun then
    let (fvarId, mvarId) ← goal.mvarId.intro1P
    mvarId.withContext do
      let localDecl ← fvarId.getDecl
      if (← isProp localDecl.type) then
        -- Add a non-dependent copy
        let mvarId ← mvarId.assert (← mkFreshUserName localDecl.userName) localDecl.type (mkFVar fvarId)
        return .newDepHyp { goal with mvarId }
      else
        let goal := { goal with mvarId }
        if target.isLet || target.isLetFun then
          let v := (← fvarId.getDecl).value
          let r ← simp v
          let x ← shareCommon (mkFVar fvarId)
          let goal ← GoalM.run' goal <| addNewEq x r.expr (← r.getProof) generation
          return .newLocal fvarId goal
        else
          return .newLocal fvarId goal
  else
    return .done

private def isCasesCandidate (type : Expr) : MetaM Bool := do
  let .const declName _ := type.getAppFn | return false
  isGrindCasesTarget declName

private def applyCases? (goal : Goal) (fvarId : FVarId) : MetaM (Option (List Goal)) := goal.mvarId.withContext do
  if (← isCasesCandidate (← fvarId.getType)) then
    let mvarIds ← cases goal.mvarId (mkFVar fvarId)
    return mvarIds.map fun mvarId => { goal with mvarId }
  else
    return none

private def applyInjection? (goal : Goal) (fvarId : FVarId) : MetaM (Option Goal) := do
  if let some mvarId ← injection? goal.mvarId fvarId then
    return some { goal with mvarId }
  else
    return none

/-- Introduces new hypotheses (and applies `by_contra`) until the goal is of the form `... ⊢ False`. -/
partial def intros (goal : Goal) (generation : Nat) : GrindM (List Goal) := do
  let rec go (goal : Goal) : StateRefT (Array Goal) GrindM Unit := do
    if goal.inconsistent then
      return ()
    match (← introNext goal generation) with
    | .done =>
      if let some mvarId ← goal.mvarId.byContra? then
        go { goal with mvarId }
      else
        modify fun s => s.push goal
    | .newHyp fvarId goal =>
      if let some goals ← applyCases? goal fvarId then
        goals.forM go
      else if let some goal ← applyInjection? goal fvarId then
        go goal
      else
        go (← GoalM.run' goal <| addHypothesis fvarId generation)
    | .newDepHyp goal =>
      go goal
    | .newLocal fvarId goal =>
      if let some goals ← applyCases? goal fvarId then
        goals.forM go
      else
        go goal
  let (_, goals) ← (go goal).run #[]
  return goals.toList

/-- Asserts a new fact `prop` with proof `proof` to the given `goal`. -/
|
||||
def assertAt (goal : Goal) (proof : Expr) (prop : Expr) (generation : Nat) : GrindM (List Goal) := do
|
||||
if (← isCasesCandidate prop) then
|
||||
let mvarId ← goal.mvarId.assert (← mkFreshUserName `h) prop proof
|
||||
let goal := { goal with mvarId }
|
||||
intros goal generation
|
||||
else
|
||||
let goal ← GoalM.run' goal do
|
||||
let r ← simp prop
|
||||
let prop' := r.expr
|
||||
let proof' ← mkEqMP (← r.getProof) proof
|
||||
add prop' proof' generation
|
||||
if goal.inconsistent then return [] else return [goal]
|
||||
|
||||
/-- Asserts next fact in the `goal` fact queue. -/
|
||||
def assertNext? (goal : Goal) : GrindM (Option (List Goal)) := do
|
||||
let some (fact, newFacts) := goal.newFacts.dequeue?
|
||||
| return none
|
||||
assertAt { goal with newFacts } fact.proof fact.prop fact.generation
|
||||
|
||||
partial def iterate (goal : Goal) (f : Goal → GrindM (Option (List Goal))) : GrindM (List Goal) := do
|
||||
go [goal] []
|
||||
where
|
||||
go (todo : List Goal) (result : List Goal) : GrindM (List Goal) := do
|
||||
match todo with
|
||||
| [] => return result
|
||||
| goal :: todo =>
|
||||
if let some goalsNew ← f goal then
|
||||
go (goalsNew ++ todo) result
|
||||
else
|
||||
go todo (goal :: result)
|
||||
|
||||
/-- Asserts all facts in the `goal` fact queue. -/
|
||||
partial def assertAll (goal : Goal) : GrindM (List Goal) := do
|
||||
iterate goal assertNext?
|
||||
|
||||
end Lean.Meta.Grind
|
||||
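For readers skimming the diff: `iterate` above is a generic worklist loop. A minimal, self-contained sketch of the same pattern, with a hypothetical `iterateList` and `Nat` goals standing in for `Goal` (not part of the patch):

```lean
-- Worklist pattern from `iterate`, specialized to plain data:
-- `f` either splits a goal into new goals (kept on the worklist)
-- or fails, in which case the goal is moved to the result list.
partial def iterateList (f : Nat → Option (List Nat)) :
    List Nat → List Nat → List Nat
  | [], result => result
  | goal :: todo, result =>
    match f goal with
    | some goalsNew => iterateList f (goalsNew ++ todo) result
    | none => iterateList f todo (goal :: result)

-- Splitting each `n > 1` into `n - 1` and `n - 2` eventually leaves
-- only goals `f` cannot split further.
#eval iterateList (fun n => if n > 1 then some [n - 1, n - 2] else none) [3] []
```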
@@ -24,9 +24,9 @@ private def checkEqc (root : ENode) : GoalM Unit := do
    if curr.isApp then
      if let some { e } := (← get).congrTable.find? { e := curr } then
        if (← hasSameType e.getAppFn curr.getAppFn) then
          assert! isSameExpr e (← getCongrRoot curr)
          assert! isSameExpr e (← getENode curr).cgRoot
      else
        assert! (← isCongrRoot curr)
        assert! isSameExpr curr (← getENode curr).cgRoot
    -- If the equivalence class does not have HEq proofs, then the types must be definitionally equal.
    unless root.heqProofs do
      assert! (← hasSameType curr root.self)
@@ -57,10 +57,6 @@ private def checkParents (e : Expr) : GoalM Unit := do
        if (← checkChild arg) then
          found := true
          break
      -- Recall that we have support for `Expr.forallE` propagation. See `ForallProp.lean`.
      if let .forallE _ d _ _ := parent then
        if (← checkChild d) then
          found := true
      unless found do
        assert! (← checkChild parent.getAppFn)
    else
@@ -84,7 +80,7 @@ private def checkProofs : GoalM Unit := do
    for a in eqc do
      for b in eqc do
        unless isSameExpr a b do
          let p ← mkEqHEqProof a b
          let p ← mkEqProof a b
          trace[grind.debug.proofs] "{a} = {b}"
          check p
          trace[grind.debug.proofs] "checked: {← inferType p}"
@@ -103,7 +99,4 @@ def checkInvariants (expensive := false) : GoalM Unit := do
  if expensive && grind.debug.proofs.get (← getOptions) then
    checkProofs

def Goal.checkInvariants (goal : Goal) (expensive := false) : GrindM Unit :=
  discard <| GoalM.run' goal <| Grind.checkInvariants expensive

end Lean.Meta.Grind
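The invariant checker in this hunk is meant to be run on every goal a strategy produces; a hedged usage sketch mirroring the call site in `initCore` (`checkAll` is an illustrative name, not in the source):

```lean
-- Run the expensive invariant checks on each goal; `assert!` failures
-- inside `checkInvariants` surface broken e-graph invariants early.
def checkAll (goals : List Goal) : GrindM Unit :=
  goals.forM fun goal => goal.checkInvariants (expensive := true)
```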
@@ -1,86 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Grind.RevertAll
import Lean.Meta.Tactic.Grind.PropagatorAttr
import Lean.Meta.Tactic.Grind.Proj
import Lean.Meta.Tactic.Grind.ForallProp
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Inv
import Lean.Meta.Tactic.Grind.Intro
import Lean.Meta.Tactic.Grind.EMatch
import Lean.Meta.Tactic.Grind.SimpUtil

namespace Lean.Meta.Grind

def mkMethods (fallback : Fallback) : CoreM Methods := do
  let builtinPropagators ← builtinPropagatorsRef.get
  return {
    fallback
    propagateUp := fun e => do
      propagateForallProp e
      let .const declName _ := e.getAppFn | return ()
      propagateProjEq e
      if let some prop := builtinPropagators.up[declName]? then
        prop e
    propagateDown := fun e => do
      let .const declName _ := e.getAppFn | return ()
      if let some prop := builtinPropagators.down[declName]? then
        prop e
  }

def GrindM.run (x : GrindM α) (mainDeclName : Name) (config : Grind.Config) (fallback : Fallback) : MetaM α := do
  let scState := ShareCommon.State.mk _
  let (falseExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``False)
  let (trueExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``True)
  let simprocs ← Grind.getSimprocs
  let simp ← Grind.getSimpContext
  x (← mkMethods fallback).toMethodsRef { mainDeclName, config, simprocs, simp } |>.run' { scState, trueExpr, falseExpr }

private def mkGoal (mvarId : MVarId) : GrindM Goal := do
  let trueExpr ← getTrueExpr
  let falseExpr ← getFalseExpr
  let thmMap ← getEMatchTheorems
  GoalM.run' { mvarId, thmMap } do
    mkENodeCore falseExpr (interpreted := true) (ctor := false) (generation := 0)
    mkENodeCore trueExpr (interpreted := true) (ctor := false) (generation := 0)

private def initCore (mvarId : MVarId) : GrindM (List Goal) := do
  mvarId.ensureProp
  -- TODO: abstract metavars
  mvarId.ensureNoMVar
  let mvarId ← mvarId.clearAuxDecls
  let mvarId ← mvarId.revertAll
  let mvarId ← mvarId.unfoldReducible
  let mvarId ← mvarId.betaReduce
  let goals ← intros (← mkGoal mvarId) (generation := 0)
  goals.forM (·.checkInvariants (expensive := true))
  return goals.filter fun goal => !goal.inconsistent

def all (goals : List Goal) (f : Goal → GrindM (List Goal)) : GrindM (List Goal) := do
  goals.foldlM (init := []) fun acc goal => return acc ++ (← f goal)

/-- A very simple strategy -/
private def simple (goals : List Goal) : GrindM (List Goal) := do
  all goals ematchStar

def main (mvarId : MVarId) (config : Grind.Config) (mainDeclName : Name) (fallback : Fallback) : MetaM (List MVarId) := do
  let go : GrindM (List MVarId) := do
    let goals ← initCore mvarId
    let goals ← simple goals
    let goals ← goals.filterMapM fun goal => do
      if goal.inconsistent then return none
      let goal ← GoalM.run' goal fallback
      if goal.inconsistent then return none
      if (← goal.mvarId.isAssigned) then return none
      return some goal
    trace[grind.debug.final] "{← ppGoals goals}"
    return goals.map (·.mvarId)
  go.run mainDeclName config fallback

end Lean.Meta.Grind
@@ -24,37 +24,30 @@ where
      let e' := mkApp2 (mkConst ``Lean.Grind.nestedProof) prop e
      modify fun s => s.insert e e'
      return e'
    -- Remark: we have to process `Expr.proj` since we only
    -- fold projections later during term internalization
    unless e.isApp || e.isForall || e.isProj do
      return e
    -- Check whether it is cached
    if let some r := (← get).find? e then
      return r
    let e' ← match e with
      | .app .. => e.withApp fun f args => do
        let mut modified := false
        let mut args := args
        for i in [:args.size] do
          let arg := args[i]!
          let arg' ← visit arg
          unless ptrEq arg arg' do
            args := args.set! i arg'
            modified := true
        if modified then
          pure <| mkAppN f args
        else
          pure e
      | .proj _ _ b =>
        pure <| e.updateProj! (← visit b)
      | .forallE _ d b _ =>
        -- Recall that we have `ForallProp.lean`.
        let d' ← visit d
        let b' ← if b.hasLooseBVars then pure b else visit b
        pure <| e.updateForallE! d' b'
      | _ => unreachable!
    modify fun s => s.insert e e'
    return e'
    else match e with
    | .bvar .. => unreachable!
    -- See comments on `Canon.lean` for why we do not visit these cases.
    | .letE .. | .forallE .. | .lam ..
    | .const .. | .lit .. | .mvar .. | .sort .. | .fvar ..
    | .proj ..
    | .mdata .. => return e
    -- We only visit applications
    | .app .. =>
      -- Check whether it is cached
      if let some r := (← get).find? e then
        return r
      e.withApp fun f args => do
        let mut modified := false
        let mut args := args
        for i in [:args.size] do
          let arg := args[i]!
          let arg' ← visit arg
          unless ptrEq arg arg' do
            args := args.set! i arg'
            modified := true
        let e' := if modified then mkAppN f args else e
        modify fun s => s.insert e e'
        return e'

/--
Wrap nested proofs `e` with `Lean.Grind.nestedProof`-applications.
@@ -56,11 +56,4 @@ def ppState : GoalM Format := do
    r := r ++ "\n" ++ "{" ++ (Format.joinSep (← eqc.mapM ppENodeRef) ", ") ++ "}"
  return r

def ppGoals (goals : List Goal) : GrindM Format := do
  let mut r := f!""
  for goal in goals do
    let (f, _) ← GoalM.run goal ppState
    r := r ++ Format.line ++ f
  return r

end Lean.Meta.Grind
172 src/Lean/Meta/Tactic/Grind/Preprocessor.lean Normal file
@@ -0,0 +1,172 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Canonicalizer
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Intro
import Lean.Meta.Tactic.Simp.Main
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.RevertAll
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Cases
import Lean.Meta.Tactic.Grind.Injection
import Lean.Meta.Tactic.Grind.Core
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Run

namespace Lean.Meta.Grind
namespace Preprocessor

structure State where
  goals : PArray Goal := {}
  deriving Inhabited

abbrev PreM := StateRefT State GrindM

def PreM.run (x : PreM α) : GrindM α := do
  x.run' {}

inductive IntroResult where
  | done
  | newHyp (fvarId : FVarId) (goal : Goal)
  | newDepHyp (goal : Goal)
  | newLocal (fvarId : FVarId) (goal : Goal)

def introNext (goal : Goal) : PreM IntroResult := do
  let target ← goal.mvarId.getType
  if target.isArrow then
    goal.mvarId.withContext do
      let p := target.bindingDomain!
      if !(← isProp p) then
        let (fvarId, mvarId) ← goal.mvarId.intro1P
        return .newLocal fvarId { goal with mvarId }
      else
        let tag ← goal.mvarId.getTag
        let q := target.bindingBody!
        -- TODO: keep applying simp/eraseIrrelevantMData/canon/shareCommon until no progress
        let r ← pre p
        let fvarId ← mkFreshFVarId
        let lctx := (← getLCtx).mkLocalDecl fvarId target.bindingName! r.expr target.bindingInfo!
        let mvarNew ← mkFreshExprMVarAt lctx (← getLocalInstances) q .syntheticOpaque tag
        let mvarIdNew := mvarNew.mvarId!
        mvarIdNew.withContext do
          let h ← mkLambdaFVars #[mkFVar fvarId] mvarNew
          match r.proof? with
          | some he =>
            let hNew := mkAppN (mkConst ``Lean.Grind.intro_with_eq) #[p, r.expr, q, he, h]
            goal.mvarId.assign hNew
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
          | none =>
            -- `p` and `p'` are definitionally equal
            goal.mvarId.assign h
            return .newHyp fvarId { goal with mvarId := mvarIdNew }
  else if target.isLet || target.isForall then
    let (fvarId, mvarId) ← goal.mvarId.intro1P
    mvarId.withContext do
      let localDecl ← fvarId.getDecl
      if (← isProp localDecl.type) then
        -- Add a non-dependent copy
        let mvarId ← mvarId.assert (← mkFreshUserName localDecl.userName) localDecl.type (mkFVar fvarId)
        return .newDepHyp { goal with mvarId }
      else
        return .newLocal fvarId { goal with mvarId }
  else
    return .done

def pushResult (goal : Goal) : PreM Unit :=
  modify fun s => { s with goals := s.goals.push goal }

def isCasesCandidate (fvarId : FVarId) : MetaM Bool := do
  let .const declName _ := (← fvarId.getType).getAppFn | return false
  isGrindCasesTarget declName

def applyCases? (goal : Goal) (fvarId : FVarId) : MetaM (Option (List Goal)) := goal.mvarId.withContext do
  if (← isCasesCandidate fvarId) then
    let mvarIds ← cases goal.mvarId fvarId
    return mvarIds.map fun mvarId => { goal with mvarId }
  else
    return none

def applyInjection? (goal : Goal) (fvarId : FVarId) : MetaM (Option Goal) := do
  if let some mvarId ← injection? goal.mvarId fvarId then
    return some { goal with mvarId }
  else
    return none

partial def loop (goal : Goal) : PreM Unit := do
  if goal.inconsistent then
    return ()
  match (← introNext goal) with
  | .done =>
    if let some mvarId ← goal.mvarId.byContra? then
      loop { goal with mvarId }
    else
      pushResult goal
  | .newHyp fvarId goal =>
    if let some goals ← applyCases? goal fvarId then
      goals.forM loop
    else if let some goal ← applyInjection? goal fvarId then
      loop goal
    else
      loop (← GoalM.run' goal <| addHyp fvarId)
  | .newDepHyp goal =>
    loop goal
  | .newLocal fvarId goal =>
    if let some goals ← applyCases? goal fvarId then
      goals.forM loop
    else
      loop goal

def ppGoals : PreM Format := do
  let mut r := f!""
  for goal in (← get).goals do
    let (f, _) ← GoalM.run goal ppState
    r := r ++ Format.line ++ f
  return r

def preprocess (mvarId : MVarId) : PreM State := do
  mvarId.ensureProp
  -- TODO: abstract metavars
  mvarId.ensureNoMVar
  let mvarId ← mvarId.clearAuxDecls
  let mvarId ← mvarId.revertAll
  mvarId.ensureNoMVar
  let mvarId ← mvarId.abstractNestedProofs (← getMainDeclName)
  let mvarId ← mvarId.unfoldReducible
  let mvarId ← mvarId.betaReduce
  loop (← mkGoal mvarId)
  if (← isTracingEnabledFor `grind.pre) then
    trace[grind.pre] (← ppGoals)
  for goal in (← get).goals do
    discard <| GoalM.run' goal <| checkInvariants (expensive := true)
  get

def preprocessAndProbe (mvarId : MVarId) (p : GoalM Unit) : PreM Unit := do
  let s ← preprocess mvarId
  s.goals.forM fun goal =>
    discard <| GoalM.run' goal p

end Preprocessor

open Preprocessor

def preprocessAndProbe (mvarId : MVarId) (mainDeclName : Name) (p : GoalM Unit) : MetaM Unit :=
  withoutModifyingMCtx do
    Preprocessor.preprocessAndProbe mvarId p |>.run |>.run mainDeclName

def preprocess (mvarId : MVarId) (mainDeclName : Name) : MetaM Preprocessor.State :=
  Preprocessor.preprocess mvarId |>.run |>.run mainDeclName

def main (mvarId : MVarId) (mainDeclName : Name) : MetaM (List MVarId) := do
  let go : GrindM (List MVarId) := do
    let s ← Preprocessor.preprocess mvarId |>.run
    let goals := s.goals.toList.filter fun goal => !goal.inconsistent
    return goals.map (·.mvarId)
  go.run mainDeclName

end Lean.Meta.Grind
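Assuming the public wrappers at the bottom of the new `Preprocessor.lean`, a caller could exercise the preprocessor like this (illustrative `countOpenGoals`, not part of the patch):

```lean
-- Preprocess a goal and count the subgoals that remain open,
-- i.e. were not closed by `intro`/`cases`/`injection`/simplification.
def countOpenGoals (mvarId : MVarId) (mainDeclName : Name) : MetaM Nat := do
  let s ← Lean.Meta.Grind.preprocess mvarId mainDeclName
  return (s.goals.toList.filter fun goal => !goal.inconsistent).length
```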
@@ -19,21 +19,17 @@ def propagateProjEq (parent : Expr) : GoalM Unit := do
  let .const declName _ := parent.getAppFn | return ()
  let some info ← getProjectionFnInfo? declName | return ()
  unless info.numParams + 1 == parent.getAppNumArgs do return ()
  -- It is wasteful to add equation if `parent` is not the root of its congruence class
  unless (← isCongrRoot parent) do return ()
  let arg := parent.appArg!
  let ctor ← getRoot arg
  unless ctor.isAppOf info.ctorName do return ()
  let parentNew ← if isSameExpr arg ctor then
    pure parent
  if isSameExpr arg ctor then
    let idx := info.numParams + info.i
    unless idx < ctor.getAppNumArgs do return ()
    let v := ctor.getArg! idx
    pushEq parent v (← mkEqRefl v)
  else
    let parentNew ← shareCommon (mkApp parent.appFn! ctor)
    internalize parentNew (← getGeneration parent)
    pure parentNew
  trace[grind.debug.proj] "{parentNew}"
  let idx := info.numParams + info.i
  unless idx < ctor.getAppNumArgs do return ()
  let v := ctor.getArg! idx
  pushEq parentNew v (← mkEqRefl v)
    let newProj := mkApp parent.appFn! ctor
    let newProj ← shareCommon newProj
    internalize newProj (← getGeneration parent)

end Lean.Meta.Grind
@@ -4,7 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Sorry -- TODO: remove
import Lean.Meta.Tactic.Grind.Types

namespace Lean.Meta.Grind
@@ -128,52 +128,6 @@ mutual
      let r := (← loop lhs rhs).get!
      if heq then mkHEqOfEq r else return r

  private partial def mkHCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let f := lhs.getAppFn
    let g := rhs.getAppFn
    let numArgs := lhs.getAppNumArgs
    assert! rhs.getAppNumArgs == numArgs
    let thm ← mkHCongrWithArity f numArgs
    assert! thm.argKinds.size == numArgs
    let rec loop (lhs rhs : Expr) (i : Nat) : GoalM Expr := do
      let i := i - 1
      if lhs.isApp then
        let proof ← loop lhs.appFn! rhs.appFn! i
        let a₁ := lhs.appArg!
        let a₂ := rhs.appArg!
        let k := thm.argKinds[i]!
        return mkApp3 proof a₁ a₂ (← mkEqProofCore a₁ a₂ (k matches .heq))
      else
        return thm.proof
    let proof ← loop lhs rhs numArgs
    if isSameExpr f g then
      mkEqOfHEqIfNeeded proof heq
    else
      /-
      `lhs` is of the form `f a_1 ... a_n`
      `rhs` is of the form `g b_1 ... b_n`
      `proof : HEq (f a_1 ... a_n) (f b_1 ... b_n)`
      We construct a proof for `HEq (f a_1 ... a_n) (g b_1 ... b_n)` using `Eq.ndrec`
      -/
      let motive ← withLocalDeclD (← mkFreshUserName `x) (← inferType f) fun x => do
        mkLambdaFVars #[x] (← mkHEq lhs (mkAppN x rhs.getAppArgs))
      let fEq ← mkEqProofCore f g false
      let proof ← mkEqNDRec motive proof fEq
      mkEqOfHEqIfNeeded proof heq

  private partial def mkEqCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let_expr f@Eq α₁ a₁ b₁ := lhs | unreachable!
    let_expr Eq α₂ a₂ b₂ := rhs | unreachable!
    let enodes := (← get).enodes
    let us := f.constLevels!
    if !isSameExpr α₁ α₂ then
      mkHCongrProof lhs rhs heq
    else if hasSameRoot enodes a₁ a₂ && hasSameRoot enodes b₁ b₂ then
      return mkApp7 (mkConst ``Grind.eq_congr us) α₁ a₁ b₁ a₂ b₂ (← mkEqProofCore a₁ a₂ false) (← mkEqProofCore b₁ b₂ false)
    else
      assert! hasSameRoot enodes a₁ b₂ && hasSameRoot enodes b₁ a₂
      return mkApp7 (mkConst ``Grind.eq_congr' us) α₁ a₁ b₁ a₂ b₂ (← mkEqProofCore a₁ b₂ false) (← mkEqProofCore b₁ a₂ false)

  /-- Constructs a congruence proof for `lhs` and `rhs`. -/
  private partial def mkCongrProof (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    let f := lhs.getAppFn
@@ -182,12 +136,36 @@ mutual
    assert! rhs.getAppNumArgs == numArgs
    if f.isConstOf ``Lean.Grind.nestedProof && g.isConstOf ``Lean.Grind.nestedProof && numArgs == 2 then
      mkNestedProofCongr lhs rhs heq
    else if f.isConstOf ``Eq && g.isConstOf ``Eq && numArgs == 3 then
      mkEqCongrProof lhs rhs heq
    else if (← isCongrDefaultProofTarget lhs rhs f g numArgs) then
      mkCongrDefaultProof lhs rhs heq
    else
      mkHCongrProof lhs rhs heq
      let thm ← mkHCongrWithArity f numArgs
      assert! thm.argKinds.size == numArgs
      let rec loop (lhs rhs : Expr) (i : Nat) : GoalM Expr := do
        let i := i - 1
        if lhs.isApp then
          let proof ← loop lhs.appFn! rhs.appFn! i
          let a₁ := lhs.appArg!
          let a₂ := rhs.appArg!
          let k := thm.argKinds[i]!
          return mkApp3 proof a₁ a₂ (← mkEqProofCore a₁ a₂ (k matches .heq))
        else
          return thm.proof
      let proof ← loop lhs rhs numArgs
      if isSameExpr f g then
        mkEqOfHEqIfNeeded proof heq
      else
        /-
        `lhs` is of the form `f a_1 ... a_n`
        `rhs` is of the form `g b_1 ... b_n`
        `proof : HEq (f a_1 ... a_n) (f b_1 ... b_n)`
        We construct a proof for `HEq (f a_1 ... a_n) (g b_1 ... b_n)` using `Eq.ndrec`
        -/
        let motive ← withLocalDeclD (← mkFreshUserName `x) (← inferType f) fun x => do
          mkLambdaFVars #[x] (← mkHEq lhs (mkAppN x rhs.getAppArgs))
        let fEq ← mkEqProofCore f g false
        let proof ← mkEqNDRec motive proof fEq
        mkEqOfHEqIfNeeded proof heq

  private partial def realizeEqProof (lhs rhs : Expr) (h : Expr) (flipped : Bool) (heq : Bool) : GoalM Expr := do
    let h ← if h == congrPlaceholderProof then
@@ -220,41 +198,42 @@ mutual
      -- `h' : lhs = target`
      mkTrans' h' h heq

  /--
  Returns a proof of `lhs = rhs` (`HEq lhs rhs`) if `heq = false` (`heq = true`).
  If `heq = false`, this function assumes that `lhs` and `rhs` have the same type.
  -/
  private partial def mkEqProofCore (lhs rhs : Expr) (heq : Bool) : GoalM Expr := do
    if isSameExpr lhs rhs then
      return (← mkRefl lhs heq)
    -- The equivalence class contains `HEq` proofs. So, we build a proof using HEq. Otherwise, we use `Eq`.
    let heqProofs := (← getRootENode lhs).heqProofs
    let n₁ ← getENode lhs
    let n₂ ← getENode rhs
    assert! isSameExpr n₁.root n₂.root
    let common ← findCommon lhs rhs
    let lhsEqCommon? ← mkProofTo lhs common none heqProofs
    let some lhsEqRhs ← mkProofFrom rhs common lhsEqCommon? heqProofs | unreachable!
    if heq == heqProofs then
      return lhsEqRhs
    else if heq then
      mkHEqOfEq lhsEqRhs
    else
      mkEqOfHEq lhsEqRhs

    let lhsEqCommon? ← mkProofTo lhs common none heq
    let some lhsEqRhs ← mkProofFrom rhs common lhsEqCommon? heq | unreachable!
    return lhsEqRhs
end

/--
Returns a proof that `a = b`.
Returns a proof that `a = b` (or `HEq a b`).
It assumes `a` and `b` are in the same equivalence class.
-/
@[export lean_grind_mk_eq_proof]
def mkEqProofImpl (a b : Expr) : GoalM Expr := do
  assert! (← hasSameType a b)
  mkEqProofCore a b (heq := false)
  let p ← go
  trace[grind.proof.detail] "{p}"
  return p
where
  go : GoalM Expr := do
    let n ← getRootENode a
    if !n.heqProofs then
      trace[grind.proof] "{a} = {b}"
      mkEqProofCore a b (heq := false)
    else
      if (← hasSameType a b) then
        trace[grind.proof] "{a} = {b}"
        mkEqOfHEq (← mkEqProofCore a b (heq := true))
      else
        trace[grind.proof] "{a} ≡ {b}"
        mkEqProofCore a b (heq := true)

@[export lean_grind_mk_heq_proof]
def mkHEqProofImpl (a b : Expr) : GoalM Expr :=
def mkHEqProof (a b : Expr) : GoalM Expr :=
  mkEqProofCore a b (heq := true)

end Lean.Meta.Grind
@@ -7,8 +7,6 @@ prelude
import Init.Grind
import Lean.Meta.Tactic.Grind.Proof
import Lean.Meta.Tactic.Grind.PropagatorAttr
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.Internalize

namespace Lean.Meta.Grind

@@ -99,8 +97,6 @@ builtin_grind_propagator propagateNotUp ↑Not := fun e => do
  else if (← isEqTrue a) then
    -- a = True → (Not a) = False
    pushEqFalse e <| mkApp2 (mkConst ``Lean.Grind.not_eq_of_eq_true) a (← mkEqTrueProof a)
  else if (← isEqv e a) then
    closeGoal <| mkApp2 (mkConst ``Lean.Grind.false_of_not_eq_self) a (← mkEqProof e a)

/--
Propagates truth values downwards for a negation expression `Not a` based on the truth value of `Not a`.
@@ -146,32 +142,4 @@ builtin_grind_propagator propagateHEqUp ↑HEq := fun e => do
  if (← isEqv a b) then
    pushEqTrue e <| mkApp2 (mkConst ``eq_true) e (← mkHEqProof a b)

/-- Propagates `ite` upwards -/
builtin_grind_propagator propagateIte ↑ite := fun e => do
  let_expr f@ite α c h a b := e | return ()
  if (← isEqTrue c) then
    pushEq e a <| mkApp6 (mkConst ``ite_cond_eq_true f.constLevels!) α c h a b (← mkEqTrueProof c)
  else if (← isEqFalse c) then
    pushEq e b <| mkApp6 (mkConst ``ite_cond_eq_false f.constLevels!) α c h a b (← mkEqFalseProof c)

/-- Propagates `dite` upwards -/
builtin_grind_propagator propagateDIte ↑dite := fun e => do
  let_expr f@dite α c h a b := e | return ()
  if (← isEqTrue c) then
    let h₁ ← mkEqTrueProof c
    let ah₁ := mkApp a (mkApp2 (mkConst ``of_eq_true) c h₁)
    let p ← simp ah₁
    let r := p.expr
    let h₂ ← p.getProof
    internalize r (← getGeneration e)
    pushEq e r <| mkApp8 (mkConst ``Grind.dite_cond_eq_true' f.constLevels!) α c h a b r h₁ h₂
  else if (← isEqFalse c) then
    let h₁ ← mkEqFalseProof c
    let bh₁ := mkApp b (mkApp2 (mkConst ``of_eq_false) c h₁)
    let p ← simp bh₁
    let r := p.expr
    let h₂ ← p.getProof
    internalize r (← getGeneration e)
    pushEq e r <| mkApp8 (mkConst ``Grind.dite_cond_eq_false' f.constLevels!) α c h a b r h₁ h₂

end Lean.Meta.Grind
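The `ite` propagator in this hunk pushes equalities justified by core conditional lemmas; their shape can be checked in isolation (a sketch, assuming the standard `ite_cond_eq_true` from Lean core):

```lean
-- `ite_cond_eq_true : c = True → (if c then a else b) = a`; this is the
-- proof term the propagator assembles via `mkApp6` when `c` is `EqTrue`.
example (c : Prop) [Decidable c] (a b : Nat) (h : c = True) :
    (if c then a else b) = a :=
  ite_cond_eq_true a b h
```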
53 src/Lean/Meta/Tactic/Grind/Run.lean Normal file
@@ -0,0 +1,53 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.PropagatorAttr
import Lean.Meta.Tactic.Grind.Proj

namespace Lean.Meta.Grind

def mkMethods : CoreM Methods := do
  let builtinPropagators ← builtinPropagatorsRef.get
  return {
    propagateUp := fun e => do
      let .const declName _ := e.getAppFn | return ()
      propagateProjEq e
      if let some prop := builtinPropagators.up[declName]? then
        prop e
    propagateDown := fun e => do
      let .const declName _ := e.getAppFn | return ()
      if let some prop := builtinPropagators.down[declName]? then
        prop e
  }

def GrindM.run (x : GrindM α) (mainDeclName : Name) : MetaM α := do
  let scState := ShareCommon.State.mk _
  let (falseExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``False)
  let (trueExpr, scState) := ShareCommon.State.shareCommon scState (mkConst ``True)
  let thms ← grindNormExt.getTheorems
  let simprocs := #[(← grindNormSimprocExt.getSimprocs)]
  let simp ← Simp.mkContext
    (config := { arith := true })
    (simpTheorems := #[thms])
    (congrTheorems := (← getSimpCongrTheorems))
  x (← mkMethods).toMethodsRef { mainDeclName, simprocs, simp } |>.run' { scState, trueExpr, falseExpr }

@[inline] def GoalM.run (goal : Goal) (x : GoalM α) : GrindM (α × Goal) :=
  goal.mvarId.withContext do StateRefT'.run x goal

@[inline] def GoalM.run' (goal : Goal) (x : GoalM Unit) : GrindM Goal :=
  goal.mvarId.withContext do StateRefT'.run' (x *> get) goal

def mkGoal (mvarId : MVarId) : GrindM Goal := do
  let trueExpr ← getTrueExpr
  let falseExpr ← getFalseExpr
  GoalM.run' { mvarId } do
    mkENodeCore falseExpr (interpreted := true) (ctor := false) (generation := 0)
    mkENodeCore trueExpr (interpreted := true) (ctor := false) (generation := 0)

end Lean.Meta.Grind
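`GoalM.run'` in the new `Run.lean` threads a `Goal` through `StateRefT'`, so any `GoalM Unit` action lifts to a goal transformer; an illustrative sketch (`withHyp` is a hypothetical name, `addHyp` is used as in `Preprocessor.loop`):

```lean
-- Lift a `GoalM` action to a `Goal → GrindM Goal` transformer.
def withHyp (goal : Goal) (fvarId : FVarId) : GrindM Goal :=
  GoalM.run' goal (addHyp fvarId)
```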
@@ -4,17 +4,18 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Lemmas
import Lean.Meta.Tactic.Assert
import Lean.Meta.Tactic.Simp.Main
import Lean.Meta.Tactic.Grind.Util
import Lean.Meta.Tactic.Grind.Types
import Lean.Meta.Tactic.Grind.DoNotSimp
import Lean.Meta.Tactic.Grind.MarkNestedProofs

namespace Lean.Meta.Grind

-- TODO: use congruence closure and decision procedures during pre-processing
-- TODO: implement `simp` discharger using preprocessor state

/-- Simplifies the given expression using the `grind` simprocs and normalization theorems. -/
def simpCore (e : Expr) : GrindM Simp.Result := do
def simp (e : Expr) : GrindM Simp.Result := do
  let simpStats := (← get).simpStats
  let (r, simpStats) ← Meta.simp e (← readThe Context).simp (← readThe Context).simprocs (stats := simpStats)
  modify fun s => { s with simpStats }
@@ -24,17 +25,14 @@ def simpCore (e : Expr) : GrindM Simp.Result := do
Simplifies `e` using `grind` normalization theorems and simprocs,
and then applies several other preprocessing steps.
-/
def simp (e : Expr) : GrindM Simp.Result := do
  let e ← instantiateMVars e
  let r ← simpCore e
def pre (e : Expr) : GrindM Simp.Result := do
  let r ← simp e
  let e' := r.expr
  let e' ← abstractNestedProofs e'
  let e' ← markNestedProofs e'
  let e' ← unfoldReducible e'
  let e' ← eraseIrrelevantMData e'
  let e' ← foldProjs e'
  let e' ← normalizeLevels e'
  let e' ← eraseDoNotSimp e'
  let e' ← canon e'
  let e' ← shareCommon e'
  trace[grind.simp] "{e}\n===>\n{e'}"
@@ -1,32 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Meta.Tactic.Simp.Simproc
import Lean.Meta.Tactic.Grind.Simp
import Lean.Meta.Tactic.Grind.DoNotSimp

namespace Lean.Meta.Grind

/-- Returns the array of simprocs used by `grind`. -/
protected def getSimprocs : MetaM (Array Simprocs) := do
  let s ← grindNormSimprocExt.getSimprocs
  let s ← addDoNotSimp s
  return #[s, (← Simp.getSEvalSimprocs)]

/-- Returns the simplification context used by `grind`. -/
protected def getSimpContext : MetaM Simp.Context := do
  let thms ← grindNormExt.getTheorems
  Simp.mkContext
    (config := { arith := true })
    (simpTheorems := #[thms])
    (congrTheorems := (← getSimpCongrTheorems))

@[export lean_grind_normalize]
def normalizeImp (e : Expr) : MetaM Expr := do
  let (r, _) ← Meta.simp e (← Grind.getSimpContext) (← Grind.getSimprocs)
  return r.expr

end Lean.Meta.Grind
@@ -4,10 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Tactics
import Init.Data.Queue
import Lean.Util.ShareCommon
import Lean.HeadIndex
import Lean.Meta.Basic
import Lean.Meta.CongrTheorems
import Lean.Meta.AbstractNestedProofs
@@ -15,7 +12,6 @@ import Lean.Meta.Tactic.Simp.Types
import Lean.Meta.Tactic.Util
import Lean.Meta.Tactic.Grind.Canon
import Lean.Meta.Tactic.Grind.Attr
import Lean.Meta.Tactic.Grind.EMatchTheorem

namespace Lean.Meta.Grind
@@ -49,10 +45,9 @@ register_builtin_option grind.debug.proofs : Bool := {

/-- Context for `GrindM` monad. -/
structure Context where
  simp : Simp.Context
  simprocs : Array Simp.Simprocs
  simp : Simp.Context
  simprocs : Array Simp.Simprocs
  mainDeclName : Name
  config : Grind.Config

/-- Key for the congruence theorem cache. -/
structure CongrTheoremCacheKey where
@@ -90,10 +85,6 @@ instance : Nonempty MethodsRef := MethodsRefPointed.property

abbrev GrindM := ReaderT MethodsRef $ ReaderT Context $ StateRefT State MetaM

/-- Returns the user-defined configuration options. -/
def getConfig : GrindM Grind.Config :=
  return (← readThe Context).config

/-- Returns the internalized `True` constant. -/
def getTrueExpr : GrindM Expr := do
  return (← get).trueExpr
@@ -108,10 +99,6 @@ def getMainDeclName : GrindM Name :=
@[inline] def getMethodsRef : GrindM MethodsRef :=
  read

/-- Returns the maximum term generation that is considered during E-matching. -/
def getMaxGeneration : GrindM Nat := do
  return (← getConfig).gen

/--
Abstracts nested proofs in `e`. This is a preprocessing step performed before internalization.
-/
@@ -173,12 +152,8 @@ structure ENode where
  next : Expr
  /-- Root (aka canonical representative) of the equivalence class -/
  root : Expr
  /--
  `congr` is the term `self` is congruent to.
  We say `self` is the congruence class root if `isSameExpr congr self`.
  This field is initialized to `self` even if `e` is not an application.
  -/
  congr : Expr
  /-- Root of the congruence class. This field is a don't-care if `e` is not an application. -/
  cgRoot : Expr
  /--
  When `e` was added to this equivalence class because of an equality `h : e = target`,
  then we store `target` here, and `h` at `proof?`.
@@ -207,88 +182,57 @@ structure ENode where
  generation : Nat := 0
  /-- Modification time -/
  mt : Nat := 0
  -- TODO: see Lean 3 implementation
  deriving Inhabited, Repr

def ENode.isCongrRoot (n : ENode) :=
  isSameExpr n.self n.congr
/-- New equality to be processed. -/
structure NewEq where
  lhs : Expr
  rhs : Expr
  proof : Expr
  isHEq : Bool

/--
Key for the `ENodeMap` and `ParentMap` maps.
We use pointer addresses and rely on the fact that all internalized expressions
have been hash-consed, i.e., we have applied `shareCommon`.
-/
private structure ENodeKey where
  expr : Expr
abbrev ENodes := PHashMap USize ENode

instance : Hashable ENodeKey where
  hash k := unsafe (ptrAddrUnsafe k.expr).toUInt64

instance : BEq ENodeKey where
  beq k₁ k₂ := isSameExpr k₁.expr k₂.expr

abbrev ENodeMap := PHashMap ENodeKey ENode

/--
Key for the congruence table.
We need access to the `enodes` to be able to retrieve the equivalence class roots.
-/
structure CongrKey (enodes : ENodeMap) where
structure CongrKey (enodes : ENodes) where
  e : Expr

private def hashRoot (enodes : ENodeMap) (e : Expr) : UInt64 :=
  if let some node := enodes.find? { expr := e } then
    unsafe (ptrAddrUnsafe node.root).toUInt64
private abbrev toENodeKey (e : Expr) : USize :=
  unsafe ptrAddrUnsafe e

private def hashRoot (enodes : ENodes) (e : Expr) : UInt64 :=
  if let some node := enodes.find? (toENodeKey e) then
    toENodeKey node.root |>.toUInt64
  else
    13
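Both key encodings above rely on hash-consing: after `shareCommon`, structurally equal terms are pointer-identical, so a raw address can stand in for the term as a map key. A hedged sketch of the idea (the `PtrMap` name is hypothetical, not from this diff):

```lean
-- Hypothetical sketch: after hash-consing, "same pointer" coincides with
-- "structurally equal", so a `USize` address is a valid (and very cheap) key.
abbrev PtrMap (β : Type) := Lean.PHashMap USize β

def PtrMap.insertExpr (m : PtrMap β) (e : Lean.Expr) (v : β) : PtrMap β :=
  unsafe m.insert (ptrAddrUnsafe e) v
```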
def hasSameRoot (enodes : ENodeMap) (a b : Expr) : Bool := Id.run do
  if isSameExpr a b then
private def hasSameRoot (enodes : ENodes) (a b : Expr) : Bool := Id.run do
  let ka := toENodeKey a
  let kb := toENodeKey b
  if ka == kb then
    return true
  else
    let some n1 := enodes.find? { expr := a } | return false
    let some n2 := enodes.find? { expr := b } | return false
    isSameExpr n1.root n2.root
    let some n1 := enodes.find? ka | return false
    let some n2 := enodes.find? kb | return false
    toENodeKey n1.root == toENodeKey n2.root
def congrHash (enodes : ENodeMap) (e : Expr) : UInt64 :=
  match_expr e with
  | Grind.nestedProof p _ => hashRoot enodes p
  | Eq _ lhs rhs => goEq lhs rhs
  | _ => go e 17
def congrHash (enodes : ENodes) (e : Expr) : UInt64 :=
  if e.isAppOfArity ``Lean.Grind.nestedProof 2 then
    -- We only hash the proposition
    hashRoot enodes (e.getArg! 0)
  else
    go e 17
where
  goEq (lhs rhs : Expr) : UInt64 :=
    let h₁ := hashRoot enodes lhs
    let h₂ := hashRoot enodes rhs
    if h₁ > h₂ then mixHash h₂ h₁ else mixHash h₁ h₂
  go (e : Expr) (r : UInt64) : UInt64 :=
    match e with
    | .app f a => go f (mixHash r (hashRoot enodes a))
    | _ => mixHash r (hashRoot enodes e)
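`goEq` sorts the two root hashes before mixing them, so `a = b` and `b = a` hash to the same value, matching the symmetric treatment of equations in `isCongruent.goEq`. The trick in isolation (standalone name is hypothetical):

```lean
-- Hypothetical standalone version of the order-independent hash used by
-- `congrHash.goEq`: sorting the operands before `mixHash` makes the result
-- symmetric, so both orientations of an equation share a hash bucket.
def symmMixHash (h₁ h₂ : UInt64) : UInt64 :=
  if h₁ > h₂ then mixHash h₂ h₁ else mixHash h₁ h₂
```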
/-- Returns `true` if `a` and `b` are congruent modulo the equivalence classes in `enodes`. -/
partial def isCongruent (enodes : ENodeMap) (a b : Expr) : Bool :=
  match_expr a with
  | Grind.nestedProof p₁ _ =>
    let_expr Grind.nestedProof p₂ _ := b | false
    hasSameRoot enodes p₁ p₂
  | Eq α₁ lhs₁ rhs₁ =>
    let_expr Eq α₂ lhs₂ rhs₂ := b | false
    if isSameExpr α₁ α₂ then
      goEq lhs₁ rhs₁ lhs₂ rhs₂
    else
      go a b
  | _ => go a b
partial def isCongruent (enodes : ENodes) (a b : Expr) : Bool :=
  if a.isAppOfArity ``Lean.Grind.nestedProof 2 && b.isAppOfArity ``Lean.Grind.nestedProof 2 then
    hasSameRoot enodes (a.getArg! 0) (b.getArg! 0)
  else
    go a b
where
  goEq (lhs₁ rhs₁ lhs₂ rhs₂ : Expr) : Bool :=
    (hasSameRoot enodes lhs₁ lhs₂ && hasSameRoot enodes rhs₁ rhs₂)
    || (hasSameRoot enodes lhs₁ rhs₂ && hasSameRoot enodes rhs₁ lhs₂)
  go (a b : Expr) : Bool :=
    if a.isApp && b.isApp then
      hasSameRoot enodes a.appArg! b.appArg! && go a.appFn! b.appFn!
@@ -303,58 +247,17 @@ instance : Hashable (CongrKey enodes) where
instance : BEq (CongrKey enodes) where
  beq k1 k2 := isCongruent enodes k1.e k2.e

abbrev CongrTable (enodes : ENodeMap) := PHashSet (CongrKey enodes)
abbrev CongrTable (enodes : ENodes) := PHashSet (CongrKey enodes)

-- Remark: we cannot use pointer addresses here because we have to traverse the tree.
abbrev ParentSet := RBTree Expr Expr.quickComp
abbrev ParentMap := PHashMap ENodeKey ParentSet

/--
The E-matching module instantiates theorems using the `EMatchTheorem` proof and a (partial) assignment.
We want to avoid instantiating the same theorem with the same assignment more than once.
Therefore, we store the (pre-)instance information in a set.
Recall that the proofs of activated theorems have been hash-consed.
The assignment contains internalized expressions, which have also been hash-consed.
-/
structure PreInstance where
  proof : Expr
  assignment : Array Expr

instance : Hashable PreInstance where
  hash i := Id.run do
    let mut r := unsafe (ptrAddrUnsafe i.proof >>> 3).toUInt64
    for v in i.assignment do
      r := mixHash r (unsafe (ptrAddrUnsafe v >>> 3).toUInt64)
    return r

instance : BEq PreInstance where
  beq i₁ i₂ := Id.run do
    unless isSameExpr i₁.proof i₂.proof do return false
    unless i₁.assignment.size == i₂.assignment.size do return false
    for v₁ in i₁.assignment, v₂ in i₂.assignment do
      unless isSameExpr v₁ v₂ do return false
    return true

abbrev PreInstanceSet := PHashSet PreInstance

/-- New fact to be processed. -/
structure NewFact where
  proof : Expr
  prop : Expr
  generation : Nat
  deriving Inhabited
abbrev ParentMap := PHashMap USize ParentSet
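The `>>> 3` in the `Hashable PreInstance` instance above discards the low bits of each pointer, which never vary for word-aligned allocations. A hedged illustration (helper name is hypothetical, not from the diff):

```lean
-- Hypothetical illustration: 8-byte-aligned addresses always end in three
-- zero bits, so shifting them out before mixing spreads keys across hash
-- buckets instead of wasting entropy on bits that are constant.
def alignedPtrHash (addr : USize) : UInt64 :=
  (addr >>> 3).toUInt64
```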
structure Goal where
  mvarId : MVarId
  enodes : ENodeMap := {}
  enodes : ENodes := {}
  parents : ParentMap := {}
  congrTable : CongrTable enodes := {}
  /--
  A mapping from each function application index (`HeadIndex`) to a list of applications with that index.
  Recall that the `HeadIndex` for a constant is its constant name, and for a free variable,
  it is its unique id.
  -/
  appMap : PHashMap HeadIndex (List Expr) := {}
  /-- Equations to be processed. -/
  newEqs : Array NewEq := #[]
  /-- `inconsistent := true` if `ENode`s for `True` and `False` are in the same equivalence class. -/
@@ -363,26 +266,6 @@ structure Goal where
  gmt : Nat := 0
  /-- Next unique index for creating ENodes -/
  nextIdx : Nat := 0
  /-- Active theorems for which we have performed at least one round of E-matching. -/
  thms : PArray EMatchTheorem := {}
  /-- Active theorems for which we have not performed any round of E-matching yet. -/
  newThms : PArray EMatchTheorem := {}
  /--
  Inactive global theorems. As we internalize terms, we activate theorems as we find their symbols.
  Local theorems provided by users are added directly into `newThms`.
  -/
  thmMap : EMatchTheorems
  /-- Number of theorem instances generated so far -/
  numInstances : Nat := 0
  /-- Number of E-matching rounds performed in this goal so far. -/
  -- Remark: it is always equal to `gmt` in the current implementation.
  numEmatch : Nat := 0
  /-- (Pre-)instances found so far. It includes instances that failed to be instantiated. -/
  preInstances : PreInstanceSet := {}
  /-- New facts to be processed. -/
  newFacts : Std.Queue NewFact := ∅
  /-- `match` auxiliary functions whose equations have already been created and activated. -/
  matchEqNames : PHashSet Name := {}
  deriving Inhabited
def Goal.admit (goal : Goal) : MetaM Unit :=
@@ -390,50 +273,26 @@ def Goal.admit (goal : Goal) : MetaM Unit :=

abbrev GoalM := StateRefT Goal GrindM

@[inline] def GoalM.run (goal : Goal) (x : GoalM α) : GrindM (α × Goal) :=
  goal.mvarId.withContext do StateRefT'.run x goal
abbrev Propagator := Expr → GoalM Unit

@[inline] def GoalM.run' (goal : Goal) (x : GoalM Unit) : GrindM Goal :=
  goal.mvarId.withContext do StateRefT'.run' (x *> get) goal
/-- Returns `true` if `e` is the internalized `True` expression. -/
def isTrueExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getTrueExpr)

/--
A helper function used to mark a theorem instance found by the E-matching module.
It returns `true` if it is a new instance and `false` otherwise.
-/
def markTheoremInstance (proof : Expr) (assignment : Array Expr) : GoalM Bool := do
  let k := { proof, assignment }
  if (← get).preInstances.contains k then
    return false
  modify fun s => { s with preInstances := s.preInstances.insert k }
  return true

/-- Adds a new fact `prop` with proof `proof` to the queue for processing. -/
def addNewFact (proof : Expr) (prop : Expr) (generation : Nat) : GoalM Unit := do
  modify fun s => { s with newFacts := s.newFacts.enqueue { proof, prop, generation } }

/-- Adds a new theorem instance produced using E-matching. -/
def addTheoremInstance (proof : Expr) (prop : Expr) (generation : Nat) : GoalM Unit := do
  addNewFact proof prop generation
  modify fun s => { s with numInstances := s.numInstances + 1 }

/-- Returns `true` if the maximum number of instances has been reached. -/
def checkMaxInstancesExceeded : GoalM Bool := do
  return (← get).numInstances >= (← getConfig).instances

/-- Returns `true` if the maximum number of E-matching rounds has been reached. -/
def checkMaxEmatchExceeded : GoalM Bool := do
  return (← get).numEmatch >= (← getConfig).ematch
/-- Returns `true` if `e` is the internalized `False` expression. -/
def isFalseExpr (e : Expr) : GrindM Bool :=
  return isSameExpr e (← getFalseExpr)

/--
Returns `some n` if `e` has already been internalized. Otherwise, returns `none`.
-/
def getENode? (e : Expr) : GoalM (Option ENode) :=
  return (← get).enodes.find? { expr := e }
  return (← get).enodes.find? (unsafe ptrAddrUnsafe e)

/-- Returns the node associated with `e`. It assumes `e` has already been internalized. -/
def getENode (e : Expr) : GoalM ENode := do
  let some n := (← get).enodes.find? { expr := e }
  let some n := (← get).enodes.find? (unsafe ptrAddrUnsafe e)
    | throwError "internal `grind` error, term has not been internalized{indentExpr e}"
  return n
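A plausible shape for the instantiation step of the E-matching loop, combining the helpers above (`checkMaxInstancesExceeded`, `markTheoremInstance`, `addTheoremInstance`); the wrapper name is hypothetical and not part of this diff:

```lean
-- Hypothetical sketch: only enqueue a theorem instance when the per-goal
-- budget allows it and the (proof, assignment) pair has not been seen before.
def tryAddInstance (proof prop : Expr) (assignment : Array Expr) (gen : Nat) : GoalM Unit := do
  if (← checkMaxInstancesExceeded) then return ()
  if (← markTheoremInstance proof assignment) then
    addTheoremInstance proof prop gen
```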
@@ -484,7 +343,7 @@ def getNext (e : Expr) : GoalM Expr :=

/-- Returns `true` if `e` has already been internalized. -/
def alreadyInternalized (e : Expr) : GoalM Bool :=
  return (← get).enodes.contains { expr := e }
  return (← get).enodes.contains (unsafe ptrAddrUnsafe e)

def getTarget? (e : Expr) : GoalM (Option Expr) := do
  let some n ← getENode? e | return none
@@ -529,8 +388,9 @@ information in the root (aka canonical representative) of `child`.
-/
def registerParent (parent : Expr) (child : Expr) : GoalM Unit := do
  let some childRoot ← getRoot? child | return ()
  let parents := if let some parents := (← get).parents.find? { expr := childRoot } then parents else {}
  modify fun s => { s with parents := s.parents.insert { expr := childRoot } (parents.insert parent) }
  let key := toENodeKey childRoot
  let parents := if let some parents := (← get).parents.find? key then parents else {}
  modify fun s => { s with parents := s.parents.insert key (parents.insert parent) }
/--
Returns the set of expressions `e` is a child of, or an expression in
@@ -538,7 +398,7 @@ Returns the set of expressions `e` is a child of, or an expression in
The information is only up to date if `e` is the root (aka canonical representative) of the equivalence class.
-/
def getParents (e : Expr) : GoalM ParentSet := do
  let some parents := (← get).parents.find? { expr := e } | return {}
  let some parents := (← get).parents.find? (toENodeKey e) | return {}
  return parents

/--
@@ -546,7 +406,7 @@ Similar to `getParents`, but also removes the entry `e ↦ parents` from the par
-/
def getParentsAndReset (e : Expr) : GoalM ParentSet := do
  let parents ← getParents e
  modify fun s => { s with parents := s.parents.erase { expr := e } }
  modify fun s => { s with parents := s.parents.erase (toENodeKey e) }
  return parents

/--
@@ -554,20 +414,21 @@ Copy `parents` to the parents of `root`.
`root` must be the root of its equivalence class.
-/
def copyParentsTo (parents : ParentSet) (root : Expr) : GoalM Unit := do
  let mut curr := if let some parents := (← get).parents.find? { expr := root } then parents else {}
  let key := toENodeKey root
  let mut curr := if let some parents := (← get).parents.find? key then parents else {}
  for parent in parents do
    curr := curr.insert parent
  modify fun s => { s with parents := s.parents.insert { expr := root } curr }
  modify fun s => { s with parents := s.parents.insert key curr }

def setENode (e : Expr) (n : ENode) : GoalM Unit :=
  modify fun s => { s with
    enodes := s.enodes.insert { expr := e } n
    enodes := s.enodes.insert (unsafe ptrAddrUnsafe e) n
    congrTable := unsafe unsafeCast s.congrTable
  }
def mkENodeCore (e : Expr) (interpreted ctor : Bool) (generation : Nat) : GoalM Unit := do
  setENode e {
    self := e, next := e, root := e, congr := e, size := 1
    self := e, next := e, root := e, cgRoot := e, size := 1
    flipped := false
    heqProofs := false
    hasLambdas := e.isLambda
@@ -587,46 +448,18 @@ def mkENode (e : Expr) (generation : Nat) : GoalM Unit := do
  let interpreted ← isInterpreted e
  mkENodeCore e interpreted ctor generation

/-- Returns `true` if `e` is the root of its congruence class. -/
def isCongrRoot (e : Expr) : GoalM Bool := do
  return (← getENode e).isCongrRoot

/-- Returns the root of the congruence class containing `e`. -/
partial def getCongrRoot (e : Expr) : GoalM Expr := do
  let n ← getENode e
  if isSameExpr n.congr e then return e
  getCongrRoot n.congr

/-- Returns `true` if the goal is inconsistent. -/
def isInconsistent : GoalM Bool :=
  return (← get).inconsistent

/--
Returns a proof that `a = b`.
It assumes `a` and `b` are in the same equivalence class, and have the same type.
Returns a proof that `a = b` (or `HEq a b`).
It assumes `a` and `b` are in the same equivalence class.
-/
-- Forward definition
@[extern "lean_grind_mk_eq_proof"]
opaque mkEqProof (a b : Expr) : GoalM Expr

/--
Returns a proof that `HEq a b`.
It assumes `a` and `b` are in the same equivalence class.
-/
-- Forward definition
@[extern "lean_grind_mk_heq_proof"]
opaque mkHEqProof (a b : Expr) : GoalM Expr

/--
Returns a proof that `a = b` if they have the same type. Otherwise, returns a proof of `HEq a b`.
It assumes `a` and `b` are in the same equivalence class.
-/
def mkEqHEqProof (a b : Expr) : GoalM Expr := do
  if (← hasSameType a b) then
    mkEqProof a b
  else
    mkHEqProof a b

/--
Returns a proof that `a = True`.
It assumes `a` and `True` are in the same equivalence class.
@@ -683,13 +516,9 @@ def forEachEqc (f : ENode → GoalM Unit) : GoalM Unit := do
  if isSameExpr n.self n.root then
    f n

abbrev Propagator := Expr → GoalM Unit
abbrev Fallback := GoalM Unit

structure Methods where
  propagateUp : Propagator := fun _ => return ()
  propagateDown : Propagator := fun _ => return ()
  fallback : Fallback := pure ()
  deriving Inhabited

def Methods.toMethodsRef (m : Methods) : MethodsRef :=
@@ -707,10 +536,6 @@ def propagateUp (e : Expr) : GoalM Unit := do
def propagateDown (e : Expr) : GoalM Unit := do
  (← getMethods).propagateDown e

def applyFallback : GoalM Unit := do
  let fallback : GoalM Unit := (← getMethods).fallback
  fallback

/-- Returns expressions in the given expression equivalence class. -/
partial def getEqc (e : Expr) : GoalM (List Expr) :=
  go e e []
@@ -100,14 +100,12 @@ def _root_.Lean.MVarId.clearAuxDecls (mvarId : MVarId) : MetaM MVarId := mvarId.
/--
In the `grind` tactic, during `Expr` internalization, we don't expect to find `Expr.mdata`.
This function ensures `Expr.mdata` is not found during internalization.
Recall that we do not internalize `Expr.lam` children.
Recall that we still have to process `Expr.forallE` because of `ForallProp.lean`.
Moreover, we may not want to reduce `p → q` to `¬p ∨ q` when `(p q : Prop)`.
Recall that we do not internalize `Expr.forallE` and `Expr.lam` components.
-/
def eraseIrrelevantMData (e : Expr) : CoreM Expr := do
  let pre (e : Expr) := do
    match e with
    | .letE .. | .lam .. => return .done e
    | .letE .. | .lam .. | .forallE .. => return .done e
    | .mdata _ e => return .continue e
    | _ => return .continue e
  Core.transform e (pre := pre)
@@ -118,14 +116,11 @@ Converts nested `Expr.proj`s into projection applications if possible.
def foldProjs (e : Expr) : MetaM Expr := do
  let post (e : Expr) := do
    let .proj structName idx s := e | return .done e
    let some info := getStructureInfo? (← getEnv) structName
      | trace[grind.issues] "found `Expr.proj` but `{structName}` is not marked as a structure{indentExpr e}"
        return .done e
    let some info := getStructureInfo? (← getEnv) structName | return .done e
    if h : idx < info.fieldNames.size then
      let fieldName := info.fieldNames[idx]
      return .done (← mkProjection s fieldName)
    else
      trace[grind.issues] "found `Expr.proj` with invalid field index `{idx}`{indentExpr e}"
      return .done e
  Meta.transform e (post := post)

@@ -140,11 +135,4 @@ def normalizeLevels (e : Expr) : CoreM Expr := do
  | _ => return .continue
  Core.transform e (pre := pre)

/--
Normalizes the given expression using the `grind` simplification theorems and simprocs.
This function is used for normalizing E-matching patterns. Note that it does not return a proof.
-/
@[extern "lean_grind_normalize"] -- forward definition
opaque normalize (e : Expr) : MetaM Expr

end Lean.Meta.Grind
@@ -123,15 +123,6 @@ unsafe def mkDelabAttribute : IO (KeyedDeclsAttribute Delab) :=
  } `Lean.PrettyPrinter.Delaborator.delabAttribute
@[builtin_init mkDelabAttribute] opaque delabAttribute : KeyedDeclsAttribute Delab

/--
`@[app_delab c]` registers a delaborator for applications with head constant `c`.
Such delaborators also apply to the constant `c` itself (known as a "nullary application").

This attribute should be applied to definitions of type `Lean.PrettyPrinter.Delaborator.Delab`.

When defining delaborators for constant applications, one should prefer this attribute over `@[delab app.c]`,
as `@[app_delab c]` first performs name resolution on `c` in the current scope.
-/
macro "app_delab" id:ident : attr => do
  match ← Macro.resolveGlobalName id.getId with
  | [] => Macro.throwErrorAt id s!"unknown declaration '{id.getId}'"
@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sebastian Ullrich, Leonardo de Moura, Gabriel Ebner, Mario Carneiro
-/
prelude
import Lean.Parser
import Lean.PrettyPrinter.Delaborator.Attributes
import Lean.PrettyPrinter.Delaborator.Basic
import Lean.PrettyPrinter.Delaborator.SubExpr

@@ -5,7 +5,6 @@ Authors: Leonardo de Moura, Marc Huisinga
-/
prelude
import Lean.Server.Completion.CompletionCollectors
import Std.Data.HashMap

namespace Lean.Server.Completion
open Lsp
@@ -14,15 +14,10 @@ namespace Lean
/--
We use the `ToExpr` type class to convert values of type `α` into
expressions that denote these values in Lean.

Examples:
Example:
```
toExpr true = .const ``Bool.true []

toTypeExpr Bool = .const ``Bool []
```

See also `ToLevel` for representing universe levels as `Level` expressions.
-/
class ToExpr (α : Type u) where
  /-- Convert a value `a : α` into an expression that denotes `a` -/
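The `ToExpr` docstring above can be exercised directly; a small sketch assuming the standard instances from core:

```lean
open Lean

-- `toExpr` quotes a value as the `Expr` that denotes it, and `toTypeExpr`
-- does the same for the type itself, per the docstring above.
#check (toExpr true : Expr)
#check (toTypeExpr Bool : Expr)
```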
@@ -5,8 +5,8 @@ Authors: Leonardo de Moura
-/
prelude
import Init.ShareCommon
import Std.Data.HashSet.Basic
import Std.Data.HashMap.Basic
import Std.Data.HashSet
import Std.Data.HashMap
import Lean.Data.PersistentHashMap
import Lean.Data.PersistentHashSet

@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Std.Data.HashMap
import Std.Data.HashSet

namespace Std

@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Josh Clune
-/
prelude
import Init.Data.Array
import Std.Tactic.BVDecide.LRAT.Internal.Formula.Class
import Std.Tactic.BVDecide.LRAT.Internal.Assignment
import Std.Sat.CNF.Basic
@@ -156,7 +156,7 @@ theorem or_congr (lhs rhs lhs' rhs' : Bool) (h1 : lhs' = lhs) (h2 : rhs' = rhs)

theorem cond_congr (discr lhs rhs discr' lhs' rhs' : Bool) (h1 : discr' = discr) (h2 : lhs' = lhs)
    (h3 : rhs' = rhs) :
    (bif discr' then lhs' else rhs') = (bif discr then lhs else rhs) := by
    (bif discr' = true then lhs' else rhs') = (bif discr = true then lhs else rhs) := by
  simp [*]

theorem false_of_eq_true_of_eq_false (h₁ : x = true) (h₂ : x = false) : False := by
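`bif` in the `cond_congr` statement above is the notation for `cond`; a quick sanity check of that correspondence:

```lean
-- `bif c then t else e` unfolds to `cond c t e`, so the two spellings are
-- definitionally equal.
example (b : Bool) : (bif b then 1 else 2) = cond b 1 2 := rfl
```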
@@ -5,6 +5,7 @@ Authors: Sofia Rodrigues
-/
prelude
import Std.Time.Internal
import Init.Data.Int
import Init.System.IO
import Std.Time.Time
import Std.Time.Date

@@ -4,7 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sofia Rodrigues
-/
prelude
import Init.Omega
import Init.Data.Int

namespace Std
namespace Time
Binary files changed (contents not shown):

- stage0/stdlib/Init.c (generated)
- stage0/stdlib/Init/Grind.c (generated)
- stage0/stdlib/Init/Grind/Lemmas.c (generated)
- stage0/stdlib/Init/Grind/Tactics.c (generated)
- stage0/stdlib/Init/Internal.c (generated)
- stage0/stdlib/Init/Internal/Order.c (generated)
- stage0/stdlib/Init/Internal/Order/Basic.c (generated)
- stage0/stdlib/Init/Internal/Order/Tactic.c (generated)
- stage0/stdlib/Lean/Class.c (generated)
- stage0/stdlib/Lean/Compiler/LCNF/FVarUtil.c (generated)
- stage0/stdlib/Lean/Data/PersistentHashSet.c (generated)
- stage0/stdlib/Lean/Elab/Calc.c (generated)
- stage0/stdlib/Lean/Elab/CheckTactic.c (generated)
- stage0/stdlib/Lean/Elab/Deriving.c (generated)
- stage0/stdlib/Lean/Elab/Deriving/ToExpr.c (generated)
- stage0/stdlib/Lean/Elab/Structure.c (generated)
- stage0/stdlib/Lean/Elab/Tactic.c (generated)
- stage0/stdlib/Lean/Elab/Tactic/Grind.c (generated)
- stage0/stdlib/Lean/Elab/Tactic/Monotonicity.c (generated)
- stage0/stdlib/Lean/Elab/Tactic/NormCast.c (generated)