Compare commits


1 Commit

Author SHA1 Message Date
Leonardo de Moura
829b515281 fix: zetaDelta at Sym/Pattern.lean
This PR fixes missing zetaDelta support at the pattern
matching/unification procedure in the new Sym framework.
2025-12-30 15:37:56 -08:00
2372 changed files with 5652 additions and 19902 deletions

View File

@@ -45,7 +45,3 @@ feat: add optional binder limit to `mkPatternFromTheorem`
This PR adds a `num?` parameter to `mkPatternFromTheorem` to control how many
leading quantifiers are stripped when creating a pattern.
```
## CI Log Retrieval
When CI jobs fail, investigate immediately - don't wait for other jobs to complete. Individual job logs are often available even while other jobs are still running. Try `gh run view <run-id> --log` or `gh run view <run-id> --log-failed`, or use `gh run view <run-id> --job=<job-id>` to target the specific failed job. Sleeping is fine when asked to monitor CI and no failures exist yet, but once any job fails, investigate that failure immediately.
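The `gh` invocations above can be wrapped in a tiny helper. This is a hypothetical sketch, not part of any repo tooling: it only prints, in order, the commands to try for a failed run (the run and job ids are placeholders), rather than invoking `gh` itself.

```shell
# Hypothetical helper: print the gh commands to try for a failed CI run.
# run_id and job_id are placeholders; gh is not actually invoked here.
inspect_failed_run() {
  local run_id="$1" job_id="${2:-}"
  echo "gh run view ${run_id} --log-failed"   # failed steps only
  echo "gh run view ${run_id} --log"          # full logs as fallback
  if [ -n "${job_id}" ]; then
    echo "gh run view ${run_id} --job=${job_id}"  # target one job
  fi
}

inspect_failed_run 123456789 42
```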

View File

@@ -13,54 +13,12 @@ These comments explain the scripts' behavior, which repositories get special han
## Arguments
- `version`: The version to release (e.g., v4.24.0)
## Release Notes (Required for -rc1 releases)
For first release candidates (`-rc1`), you must create release notes BEFORE the reference-manual toolchain bump PR can be merged.
**Steps to create release notes:**
1. Generate the release notes:
```bash
cd /path/to/lean4
python3 script/release_notes.py --since <previous_version> > /tmp/release-notes-<version>.md
```
Replace `<previous_version>` with the last stable release (e.g., `v4.27.0` when releasing `v4.28.0-rc1`).
2. Review `/tmp/release-notes-<version>.md` for common issues:
- **Unterminated code blocks**: Look for code fences that aren't closed. Fetch original PR with `gh pr view <number>` to repair.
- **Truncated descriptions**: Some may end mid-sentence. Complete them from the original PR.
- **Markdown issues**: Other syntax problems that could cause parsing errors.
3. Create the release notes file in the reference-manual repository:
- File path: `Manual/Releases/v<version>.lean` (e.g., `v4_28_0.lean`)
- Use Verso format with proper imports and `#doc (Manual)` block
- **Use `#` for headers, not `##`** (Verso uses level 1 for subsections)
- **Use plain ` ``` ` not ` ```lean `** (the latter executes code)
- **Wrap underscore identifiers in backticks**: `` `bv_decide` `` not `bv_decide`
4. Update `Manual/Releases.lean`:
- Add import: `import Manual.Releases.«v4_28_0»`
- Add include: `{include 0 Manual.Releases.«v4_28_0»}`
5. Build to verify: `lake build Manual.Releases.v4_28_0`
6. Create a **separate PR** for release notes (not bundled with toolchain bump):
```bash
git checkout -b v<version>-release-notes
gh pr create --title "doc: add v<version> release notes"
```
For subsequent RCs (`-rc2`, etc.) and stable releases, just update the version number in the existing release notes file title.
See `doc/dev/release_checklist.md` section "Writing the release notes" for full details.
## Process
1. Run `script/release_checklist.py {version}` to check the current status
2. **CRITICAL: If preliminary lean4 checks fail, STOP immediately and alert the user**
- Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes file exists
- Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes exist
- **IMPORTANT**: The release page is created AUTOMATICALLY by CI after pushing the tag - DO NOT create it manually
- **IMPORTANT**: For -rc1 releases, release notes must be created before proceeding
- Do NOT create any PRs or proceed with repository updates if these checks fail
3. Create a todo list tracking all repositories that need updates
4. **CRITICAL RULE: You can ONLY run `release_steps.py` for a repository if `release_checklist.py` explicitly says to do so**
@@ -103,15 +61,6 @@ Every time you run `release_checklist.py`, you MUST:
This summary should be provided EVERY time you run the checklist, not just after creating new PRs.
The user needs to see the complete picture of what's waiting for review.
## Nightly Infrastructure
The nightly build system uses branches and tags across two repositories:
- `leanprover/lean4` has **branches** `nightly` and `nightly-with-mathlib` tracking the latest nightly builds
- `leanprover/lean4-nightly` has **dated tags** like `nightly-2026-01-23`
When a nightly succeeds with mathlib, all three should point to the same commit. Don't confuse these: branches are in the main lean4 repo, dated tags are in lean4-nightly.
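The branch/tag invariant can be checked mechanically, since `git rev-parse` resolves branches and tags alike to a commit SHA. A throwaway local illustration (the repo, branch, and tag names here mirror the description above but are examples, not the real remotes):

```shell
# Scratch repo with a branch and a dated tag on the same commit.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=ci -c user.email=ci@example.com \
  commit -q --allow-empty -m "nightly build"
git branch nightly            # branch, as in leanprover/lean4
git tag nightly-2026-01-23    # dated tag, as in leanprover/lean4-nightly
# After a successful nightly, both refs resolve to the same SHA:
git rev-parse nightly nightly-2026-01-23
```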
## Error Handling
**CRITICAL**: If something goes wrong or a command fails:

View File

@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: actionlint
uses: raven-actions/actionlint@v2
with:

View File

@@ -67,13 +67,13 @@ jobs:
if: runner.os == 'macOS'
- name: Checkout
if: (!endsWith(matrix.os, '-with-cache'))
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
# the default is to use a virtual merge commit between the PR and master: just use the PR
ref: ${{ github.event.pull_request.head.sha }}
- name: Namespace Checkout
if: endsWith(matrix.os, '-with-cache')
uses: namespacelabs/nscloud-checkout-action@v8
uses: namespacelabs/nscloud-checkout-action@v7
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Open Nix shell once

View File

@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
# the default is to use a virtual merge commit between the PR and master: just use the PR
ref: ${{ github.event.pull_request.head.sha }}

View File

@@ -8,7 +8,7 @@ jobs:
check-stage0-on-queue:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0

View File

@@ -50,7 +50,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@v5
# don't schedule nightlies on forks
if: github.event_name == 'schedule' && github.repository == 'leanprover/lean4' || inputs.action == 'release nightly' || (startsWith(github.ref, 'refs/tags/') && github.repository == 'leanprover/lean4')
- name: Set Nightly
@@ -115,7 +115,7 @@ jobs:
CMAKE_MAJOR=$(grep -E "^set\(LEAN_VERSION_MAJOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_MINOR=$(grep -E "^set\(LEAN_VERSION_MINOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_PATCH=$(grep -E "^set\(LEAN_VERSION_PATCH " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | sed -nE 's/^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\).*/\1/p')
CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | grep -oE '[0-9]+')
# Expected values from tag parsing
TAG_MAJOR="${{ steps.set-release.outputs.LEAN_VERSION_MAJOR }}"
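The two extraction styles in this hunk behave identically on a well-formed line; the difference is that the `sed` variant anchors on the full `set(...)` syntax while `grep -oE '[0-9]+'` takes any digit run on the already-filtered line. A quick sketch against a sample line (not the real `src/CMakeLists.txt`):

```shell
line='set(LEAN_VERSION_IS_RELEASE 1)'
# Anchored sed capture: only matches the exact set(...) form.
echo "$line" | sed -nE 's/^set\(LEAN_VERSION_IS_RELEASE ([0-9]+)\).*/\1/p'
# Digit-run grep: simpler, but matches any digits on the line.
echo "$line" | grep -oE '[0-9]+'
# Both print: 1
```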
@@ -267,17 +267,14 @@ jobs:
"test": true,
// turn off custom allocator & symbolic functions to make LSAN do its magic
"CMAKE_PRESET": "sanitize",
// * `StackOverflow*` correctly triggers ubsan.
// * `reverse-ffi` fails to link in sanitizers.
// * `interactive` and `async_select_channel` fail nondeterministically, would need
// to be investigated.
// * 9366 is too close to timeout.
// * `bv_` sometimes times out calling into cadical even though we should be using
// the standard compile flags for it.
// * `grind_guide` always times out.
// * `pkg/|lake/` tests sometimes time out (likely even hang), related to Lake CI
// failures?
"CTEST_OPTIONS": "-E 'StackOverflow|reverse-ffi|interactive|async_select_channel|9366|run/bv_|grind_guide|pkg/|lake/'"
// `StackOverflow*` correctly triggers ubsan.
// `reverse-ffi` fails to link in sanitizers.
// `interactive` and `async_select_channel` fail nondeterministically, would need to
// be investigated.
// 9366 is too close to timeout.
// `bv_` sometimes times out calling into cadical even though we should be using the
// standard compile flags for it.
"CTEST_OPTIONS": "-E 'StackOverflow|reverse-ffi|interactive|async_select_channel|9366|run/bv_'"
},
{
"name": "macOS",
@@ -437,7 +434,7 @@ jobs:
with:
path: artifacts
- name: Release
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
files: artifacts/*/*
fail_on_unmatched_files: true
@@ -458,7 +455,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
# needed for tagging
fetch-depth: 0
@@ -483,7 +480,7 @@ jobs:
echo -e "\n*Full commit log*\n" >> diff.md
git log --oneline "$last_tag"..HEAD | sed 's/^/* /' >> diff.md
- name: Release Nightly
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
body_path: diff.md
prerelease: true

View File

@@ -6,7 +6,7 @@ jobs:
check-lean-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
- name: Verify .lean files start with a copyright header.
run: |

View File

@@ -71,7 +71,7 @@ jobs:
GH_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Release (short format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }}
# There are coredumps files here as well, but all in deeper subdirectories.
@@ -86,7 +86,7 @@ jobs:
- name: Release (SHA-suffixed format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }} (${{ steps.workflow-info.outputs.sourceHeadSha }})
# There are coredumps files here as well, but all in deeper subdirectories.
@@ -387,7 +387,7 @@ jobs:
# Checkout the Batteries repository with all branches
- name: Checkout Batteries repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
repository: leanprover-community/batteries
token: ${{ secrets.MATHLIB4_BOT }}
@@ -447,7 +447,7 @@ jobs:
# Checkout the mathlib4 repository with all branches
- name: Checkout mathlib4 repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
repository: leanprover-community/mathlib4-nightly-testing
token: ${{ secrets.MATHLIB4_BOT }}
@@ -530,7 +530,7 @@ jobs:
# Checkout the reference manual repository with all branches
- name: Checkout mathlib4 repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.reference-manual-ready.outputs.manual_ready == 'true'
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
repository: leanprover/reference-manual
token: ${{ secrets.MANUAL_PR_BOT }}

View File

@@ -27,7 +27,7 @@ jobs:
# This action should push to an otherwise protected branch, so it
# uses a deploy key with write permissions, as suggested at
# https://stackoverflow.com/a/76135647/946226
- uses: actions/checkout@v6
- uses: actions/checkout@v5
with:
ssh-key: ${{secrets.STAGE0_SSH_KEY}}
- run: echo "should_update_stage0=yes" >> "$GITHUB_ENV"

View File

@@ -218,11 +218,6 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.
# Writing the release notes
Release notes are only needed for the first release candidate (`-rc1`). For subsequent RCs and stable releases,
just update the version number in the title of the existing release notes file.
## Generating the release notes
Release notes are automatically generated from the commit history, using `script/release_notes.py`.
Run this as `script/release_notes.py --since v4.6.0`, where `v4.6.0` is the *previous* release version.
@@ -237,93 +232,4 @@ Some judgement is required here: ignore commits which look minor,
but manually add items to the release notes for significant PRs that were rebase-merged.
There can also be pre-written entries in `./releases_drafts`, which should be all incorporated in the release notes and then deleted from the branch.
## Reviewing and fixing the generated markdown
Before adding the release notes to the reference manual, carefully review the generated markdown for these common issues:
1. **Unterminated code blocks**: PR descriptions sometimes have unclosed code fences. Look for code blocks
that don't have a closing ` ``` `. If found, fetch the original PR description with `gh pr view <number>`
and repair the code block with the complete content.
2. **Truncated descriptions**: Some PR descriptions may end abruptly mid-sentence. Review these and complete
the descriptions based on the original PR.
3. **Markdown syntax issues**: Check for other markdown problems that could cause parsing errors.
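Check 1 can be roughly automated. A hypothetical helper (not part of the release scripts) that flags files with an odd number of line-leading ` ``` ` fences, which usually indicates an unterminated code block:

```shell
# Hypothetical check: an odd count of line-leading ``` fences suggests
# an unterminated code block. Indented fences are not counted.
check_fences() {
  local count
  count=$(grep -c '^```' "$1" || true)
  if [ $(( count % 2 )) -ne 0 ]; then
    echo "unterminated code block suspected in $1"
    return 1
  fi
  echo "fences balanced in $1"
}
```

Usage: `check_fences /tmp/release-notes-<version>.md` before handing the file to the reference manual.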
## Creating the release notes file
The release notes go in `Manual/Releases/v4_7_0.lean` in the reference-manual repository.
The file structure must follow the Verso format:
```lean
/-
Copyright (c) 2025 Lean FRO LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: <Your Name>
-/
import VersoManual
import Manual.Meta
import Manual.Meta.Markdown
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
#doc (Manual) "Lean 4.7.0-rc1 (YYYY-MM-DD)" =>
%%%
tag := "release-v4.7.0"
file := "v4.7.0"
%%%
<release notes content here>
```
**Important formatting rules for Verso:**
- Use `#` for section headers inside the document, not `##` (Verso uses header level 1 for subsections)
- Use plain ` ``` ` for code blocks, not ` ```lean ` (the latter will cause Lean to execute the code)
- Identifiers with underscores like `bv_decide` should be wrapped in backticks: `` `bv_decide` ``
(otherwise the underscore may be interpreted as markdown emphasis)
## Updating Manual/Releases.lean
After creating the release notes file, update `Manual/Releases.lean` to include it:
1. Add the import near the top with other version imports:
```lean
import Manual.Releases.«v4_7_0»
```
2. Add the include statement after the other includes:
```lean
{include 0 Manual.Releases.«v4_7_0»}
```
## Building and verifying
Build the release notes to check for errors:
```bash
lake build Manual.Releases.v4_7_0
```
Common errors and fixes:
- "Wrong header nesting - got ## but expected at most #": Change `##` to `#`
- "Tactic 'X' failed" or similar: Code is being executed; change ` ```lean ` to ` ``` `
- "'_'" errors: Underscore in identifier being parsed as emphasis; wrap in backticks
## Creating the PR
Create a separate PR for the release notes (don't bundle with the toolchain bump PR):
```bash
git checkout -b v4.7.0-release-notes
git add Manual/Releases/v4_7_0.lean Manual/Releases.lean
git commit -m "doc: add v4.7.0 release notes"
git push -u origin v4.7.0-release-notes
gh pr create --title "doc: add v4.7.0 release notes" --body "This PR adds the release notes for Lean v4.7.0."
```
See `./releases_drafts/README.md` for more information about pre-written release note entries.
See `./releases_drafts/README.md` for more information.

View File

@@ -810,7 +810,7 @@ Docstrings for constants should have the following structure:
The **short summary** should be 1-3 sentences (ideally 1) and provide
enough information for most readers to quickly decide whether the
constant is relevant to their task. The first (or only) sentence of
docstring is relevant to their task. The first (or only) sentence of
the short summary should be a *sentence fragment* in which the subject
is implied to be the documented item, written in present tense
indicative, or a *noun phrase* that characterizes the documented
@@ -1123,110 +1123,6 @@ infix:50 " ⇔ " => Bijection
recommended_spelling "bij" for "⇔" in [Bijection, «term_⇔_»]
```
#### Tactics
Docstrings for tactics should have the following structure:
* Short summary
* Details
* Variants
* Examples
Sometimes more than one declaration is needed to implement what the user
sees as a single tactic. In that case, only one declaration should have
the associated docstring, and the others should have the `tactic_alt`
attribute to mark them as an implementation detail.
The **short summary** should be 1-3 sentences (ideally 1) and provide
enough information for most readers to quickly decide whether the
tactic is relevant to their task. The first (or only) sentence of
the short summary should be a full sentence in which the subject
is an example invocation of the tactic, written in present tense
indicative. If the example tactic invocation names parameters, then the
short summary may refer to them. For the example invocation, prefer the
simplest or most typical example. Explain more complicated forms in the
variants section. If needed, abbreviate the invocation by naming part of
the syntax and expanding it in the next sentence. The summary should be
written as a single paragraph.
**Details**, if needed, may be 1-3 paragraphs that describe further
relevant information. They may insert links as needed. This section
should fully explain the scope of the tactic: its syntax format,
on which goals it works and what the resulting goal(s) look like. It
should be clear whether the tactic fails if it does not close the main
goal and whether it creates any side goals. The details may include
explanatory examples that can't necessarily be machine checked and
don't fit the format.
If the tactic is extensible using `macro_rules`, mention this in the
details, with a link to `lean-manual://section/tactic-macro-extension`
and give a one-line example. If the tactic provides an attribute or a
command that allows the user to extend its behavior, the documentation
on how to extend the tactic belongs to that attribute or command. In the
tactic docstring, use a single sentence to refer the reader to this
further documentation.
**Variants**, if needed, should be a bulleted list describing different
options and forms of the same tactic. The reader should be able to parse
and understand the parts of a tactic invocation they are hovering over,
using this list. Each list item should describe an individual variant
and take one of two formats: the **short summary** as above, or a
**named list item**. A named list item consists of a title in bold
followed by an indented short paragraph.
Variants should be explained from the perspective of the tactic's users, not
their implementers. A tactic that is implemented as a single Lean parser may
have multiple variants from the perspective of users, while a tactic that is
implemented as multiple parsers may have no variants, but merely an optional
part of the syntax.
**Examples** should start with the line `Examples:` (or `Example:` if
there's exactly one). The section should consist of a sequence of code
blocks, each showing a Lean declaration (usually with the `example`
keyword) that invokes the tactic. When the effect of the tactic is not
clear from the code, you can use code comments to describe this. Do
not include text between examples, because it can be unclear whether
the text refers to the code before or after the example.
##### Example
````
`rw [e]` uses the expression `e` as a rewrite rule on the main goal,
then tries to close the goal by "cheap" (reducible) `rfl`.
If `e` is a defined constant, then the equational theorems associated with `e`
are used. This provides a convenient way to unfold `e`. If `e` has parameters,
the tactic will try to fill these in by unification with the matching part of
the target. Parameters are only filled in once per rule, restricting which
later rewrites can be found. Parameters that are not filled in after
unification will create side goals. If the `rfl` fails to close the main goal,
no error is raised.
`rw` may fail to rewrite terms "under binders", such as `∀ x, ...` or `∃ x,
...`. `rw` can also fail with a "motive is type incorrect" error in the context
of dependent types. In these cases, consider using `simp only`.
* `rw [e₁, ... eₙ]` applies the given rules sequentially.
* `rw [← e]` or `rw [<- e]` applies the rewrite in the reverse direction.
* `rw [e] at l` rewrites with `e` at location(s) `l`.
* `rw (occs := .pos L) [e]`, where `L` is a literal list of natural numbers,
only rewrites the given occurrences in the target. Occurrences count from 1.
* `rw (occs := .neg L) [e]`, where `L` is a literal list of natural numbers,
skips rewriting the given occurrences in the target. Occurrences count from 1.
Examples:
```lean
example {a b : Nat} (h : a + a = b) : (a + a) + (a + a) = b + b := by rw [h]
```
```lean
example {f : Nat -> Nat} (h : ∀ x, f x = 1) (a b : Nat) : f a = f b := by
rw [h] -- `rw` instantiates `h` only once, so this is equivalent to: `rw [h a]`
-- goal: ⊢ 1 = f b
rw [h] -- equivalent to: `rw [h b]`
```
````
## Dictionary

View File

@@ -29,7 +29,7 @@ def main (args : List String) : IO Unit := do
if !msgs.toList.isEmpty then -- skip this file if there are parse errors
msgs.forM fun msg => msg.toString >>= IO.println
throw <| .userError "parse errors in file"
let `(header| $[module%$moduleTk?]? $[prelude%$preludeTk?]? $imps:import*) := header
let `(header| $[module%$moduleTk?]? $imps:import*) := header
| throw <| .userError s!"unexpected header syntax of {path}"
if moduleTk?.isSome then
continue
@@ -38,11 +38,11 @@ def main (args : List String) : IO Unit := do
let startPos := header.raw.getPos? |>.getD parserState.pos
let dummyEnv ← mkEmptyEnvironment
let (initCmd, parserState', msgs') :=
let (initCmd, parserState', _) :=
Parser.parseCommand inputCtx { env := dummyEnv, options := {} } parserState msgs
-- insert section if any trailing command (or error, which could be from an unknown command)
if !initCmd.isOfKind ``Parser.Command.eoi || msgs'.hasErrors then
-- insert section if any trailing command
if !initCmd.isOfKind ``Parser.Command.eoi then
let insertPos? :=
-- put below initial module docstring if any
guard (initCmd.isOfKind ``Parser.Command.moduleDoc) *> initCmd.getTailPos? <|>
@@ -57,21 +57,19 @@ def main (args : List String) : IO Unit := do
sec := "\n\n" ++ sec
if insertPos?.isNone then
sec := sec ++ "\n\n"
let insertPos := text.pos! insertPos
text := text.extract text.startPos insertPos ++ sec ++ text.extract insertPos text.endPos
text := text.extract 0 insertPos ++ sec ++ text.extract insertPos text.rawEndPos
-- prepend each import with `public `
for imp in imps.reverse do
let insertPos := imp.raw.getPos?.get!
let prfx := if doMeta then "public meta " else "public "
let insertPos := text.pos! insertPos
text := text.extract text.startPos insertPos ++ prfx ++ text.extract insertPos text.endPos
text := text.extract 0 insertPos ++ prfx ++ text.extract insertPos text.rawEndPos
-- insert `module` header
let mut initText := text.extract text.startPos (text.pos! startPos)
if !initText.trimAscii.isEmpty then
let mut initText := text.extract 0 startPos
if !initText.trim.isEmpty then
-- If there is a header comment, preserve it and put `module` in the line after
initText := initText.trimAsciiEnd.toString ++ "\n"
text := initText ++ "module\n\n" ++ text.extract (text.pos! startPos) text.endPos
initText := initText.trimRight ++ "\n"
text := initText ++ "module\n\n" ++ text.extract startPos text.rawEndPos
IO.FS.writeFile path text

View File

@@ -5,13 +5,12 @@ Authors: Mario Carneiro, Sebastian Ullrich
-/
module
prelude
public import Init.Prelude
public import Init.System.IO
public import Lean.Util.Path
import Lean.Environment
import Lean.ExtraModUses
import Lake.CLI.Main
import Lean.Parser.Module
import Lake.Load.Workspace
/-! # Shake: A Lean import minimizer
@@ -21,12 +20,84 @@ ensuring that every import is used to contribute some constant or other elaborat
recorded by `recordExtraModUse` and friends.
-/
/-- help string for the command line interface -/
def help : String := "Lean project tree shaking tool
Usage: lake exe shake [OPTIONS] <MODULE>...
Arguments:
<MODULE>
A module path like `Mathlib`. All files transitively reachable from the
provided module(s) will be checked.
Options:
--force
Skips the `lake build --no-build` sanity check
--keep-implied
Preserves existing imports that are implied by other imports and thus not technically needed
anymore
--keep-prefix
If an import `X` would be replaced in favor of a more specific import `X.Y...` it implies,
preserves the original import instead. More generally, prefers inserting `import X` even if it
was not part of the original imports as long as it was in the original transitive import closure
of the current module.
--keep-public
Preserves all `public` imports to avoid breaking changes for external downstream modules
--add-public
Adds new imports as `public` if they have been in the original public closure of that module.
In other words, public imports will not be removed from a module unless they are unused even
in the private scope, and those that are removed will be re-added as `public` in downstream
modules even if only needed in the private scope there. Unlike `--keep-public`, this may
introduce breaking changes but will still limit the number of inserted imports.
--explain
Gives constants explaining why each module is needed
--fix
Apply the suggested fixes directly. Make sure you have a clean checkout
before running this, so you can review the changes.
--gh-style
Outputs messages that can be parsed by `gh-problem-matcher-wrap`
Annotations:
The following annotations can be added to Lean files in order to configure the behavior of
`shake`. Only the substring `shake: ` directly followed by a directive is checked for, so multiple
directives can be mixed in one line such as `-- shake: keep-downstream, shake: keep-all`, and they
can be surrounded by arbitrary comments such as `-- shake: keep (metaprogram output dependency)`.
* `module -- shake: keep-downstream`:
Preserves this module in all (current) downstream modules, adding new imports of it if needed.
* `module -- shake: keep-all`:
Preserves all existing imports in this module as is. New imports now needed because of upstream
changes may still be added.
* `import X -- shake: keep`:
Preserves this specific import in the current module. The most common use case is to preserve a
public import that will be needed in downstream modules to make sense of the output of a
metaprogram defined in this module. For example, if a tactic is defined that may synthesize a
reference to a theorem when run, there is no way for `shake` to detect this by itself and the
module of that theorem should be publicly imported and annotated with `keep` in the tactic's
module.
```
public import X -- shake: keep (metaprogram output dependency)
...
elab \"my_tactic\" : tactic => do
... mkConst ``f -- `f`, defined in `X`, may appear in the output of this tactic
```
"
open Lean
namespace Lake.Shake
/-- The parsed CLI arguments for shake. -/
public structure Args where
/-- The parsed CLI arguments. See `help` for more information -/
structure Args where
help : Bool := false
keepImplied : Bool := false
keepPrefix : Bool := false
keepPublic : Bool := false
@@ -65,7 +136,7 @@ instance : Union Bitset where
instance : XorOp Bitset where
xor a b := { toNat := a.toNat ^^^ b.toNat }
def has (s : Bitset) (i : Nat) : Bool := s.toNat.testBit i
def has (s : Bitset) (i : Nat) : Bool := s {i}
end Bitset
@@ -166,19 +237,8 @@ structure State where
/-- Edits to be applied to the module imports. -/
edits : Edits := {}
-- Memoizations
reservedNames : Std.HashSet Name := Id.run do
let mut m := {}
for (c, _) in env.constants do
if isReservedName env c then
m := m.insert c
return m
indirectModUses : Std.HashMap Name (Array ModuleIdx) :=
indirectModUseExt.getState env
modNames : Array Name :=
env.header.moduleNames
def State.mods (s : State) := s.env.header.moduleData
def State.modNames (s : State) := s.env.header.moduleNames
/--
Given module `j`'s transitive dependencies, computes the union of `transImps` and the transitive
@@ -233,9 +293,9 @@ def isDeclMeta' (env : Environment) (declName : Name) : Bool :=
Given an `Expr` reference, returns the declaration name that should be considered the reference, if
any.
-/
def getDepConstName? (s : State) (ref : Name) : Option Name := do
def getDepConstName? (env : Environment) (ref : Name) : Option Name := do
-- Ignore references to reserved names, they can be re-generated in-place
guard <| !s.reservedNames.contains ref
guard <| !isReservedName env ref
-- `_simp_...` constants are similar, use base decl instead
return if ref.isStr && ref.getString!.startsWith "_simp_" then
ref.getPrefix
@@ -268,24 +328,22 @@ where
let env := s.env
Lean.Expr.foldConsts e deps fun c deps => Id.run do
let mut deps := deps
if let some c := getDepConstName? s c then
if let some c := getDepConstName? env c then
if let some j := env.getModuleIdxFor? c then
let k := { k with isMeta := k.isMeta && !isDeclMeta' env c }
if j != i then
deps := deps.union k {j}
for indMod in s.indirectModUses[c]?.getD #[] do
for indMod in (indirectModUseExt.getState env)[c]?.getD #[] do
if s.transDeps[i]!.has k indMod then
deps := deps.union k {indMod}
return deps
abbrev Explanations := Std.HashMap (ModuleIdx × NeedsKind) (Option (Name × Name))
/--
Calculates the same as `calcNeeds` but tracing each module to a use-def declaration pair or
`none` if merely a recorded extra use.
-/
def getExplanations (s : State) (i : ModuleIdx) : Explanations := Id.run do
let env := s.env
def getExplanations (env : Environment) (i : ModuleIdx) :
Std.HashMap (ModuleIdx × NeedsKind) (Option (Name × Name)) := Id.run do
let mut deps := default
for ci in env.header.moduleData[i]!.constants do
-- Added guard for cases like `structure` that are still exported even if private
@@ -306,25 +364,18 @@ def getExplanations (s : State) (i : ModuleIdx) : Explanations := Id.run do
where
/-- Accumulate the results from expression `e` into `deps`. -/
visitExpr (k : NeedsKind) name e deps :=
let env := s.env
Lean.Expr.foldConsts e deps fun c deps => Id.run do
let mut deps := deps
if let some c := getDepConstName? s c then
if let some c := getDepConstName? env c then
if let some j := env.getModuleIdxFor? c then
let k := { k with isMeta := k.isMeta && !isDeclMeta' env c }
deps := addExplanation j k name c deps
for indMod in s.indirectModUses[c]?.getD #[] do
if s.transDeps[i]!.has k indMod then
deps := addExplanation indMod k name (`_indirect ++ c) deps
if
if let some (some (name', _)) := deps[(j, k)]? then
decide (name.toString.length < name'.toString.length)
else true
then
deps := deps.insert (j, k) (name, c)
return deps
addExplanation (j : ModuleIdx) (k : NeedsKind) (use def_ : Name) (deps : Explanations) : Explanations :=
if
if let some (some (name', _)) := deps[(j, k)]? then
decide (use.toString.length < name'.toString.length)
else true
then
deps.insert (j, k) (use, def_)
else deps
partial def initStateFromEnv (env : Environment) : State := Id.run do
let mut s := { env }
@@ -491,7 +542,7 @@ def visitModule (pkg : Name) (srcSearchPath : SearchPath)
let mut imp : Import := { k with module := s.modNames[j]! }
let mut j := j
if args.trace then
IO.eprintln s!"`{imp}` is needed{if needs.has k j then " (calculated)" else ""}"
IO.eprintln s!"`{imp}` is needed"
if args.addPublic && !k.isExported &&
-- also add as public if previously `public meta`, which could be from automatic porting
(s.transDepsOrig[i]!.has { k with isExported := true } j || s.transDepsOrig[i]!.has { k with isExported := true, isMeta := true } j) then
@@ -580,7 +631,7 @@ def visitModule (pkg : Name) (srcSearchPath : SearchPath)
if toRemove.any fun imp => imp == decodeImport stx then
let pos := inputCtx.fileMap.toPosition stx.raw.getPos?.get!
println! "{path}:{pos.line}:{pos.column+1}: warning: unused import \
(use `lake shake --fix` to fix this, or `lake shake --update` to ignore)"
(use `lake exe shake --fix` to fix this, or `lake exe shake --update` to ignore)"
if !toAdd.isEmpty then
-- we put the insert message on the beginning of the last import line
let pos := inputCtx.fileMap.toPosition endHeader.offset
@@ -609,7 +660,7 @@ def visitModule (pkg : Name) (srcSearchPath : SearchPath)
modify fun s => { s with transDeps := s.transDeps.set! i newTransDepsI }
if args.explain then
let explanation := getExplanations s i
let explanation := getExplanations s.env i
let sanitize n := if n.hasMacroScopes then (sanitizeName n).run' { options := {} } else n
let run (imp : Import) := do
let j := s.env.getModuleIdx? imp.module |>.get!
@@ -625,31 +676,76 @@ def visitModule (pkg : Name) (srcSearchPath : SearchPath)
run j
for i in toAdd do run i
/-- Convert a list of module names to a bitset of module indexes -/
def toBitset (s : State) (ns : List Name) : Bitset :=
ns.foldl (init := ∅) fun c name =>
match s.env.getModuleIdxFor? name with
| some i => c ∪ {i}
| none => c
local instance : Ord Import where
compare :=
let _ := @lexOrd
compareOn fun imp => (!imp.isExported, imp.module.toString)
/--
Run the shake analysis with the given arguments.
/-- The main entry point. See `help` for more information on arguments. -/
public def main (args : List String) : IO UInt32 := do
initSearchPath (← findSysroot)
-- Parse the arguments
let rec parseArgs (args : Args) : List String → Args
| [] => args
| "--help" :: rest => parseArgs { args with help := true } rest
| "--keep-implied" :: rest => parseArgs { args with keepImplied := true } rest
| "--keep-prefix" :: rest => parseArgs { args with keepPrefix := true } rest
| "--keep-public" :: rest => parseArgs { args with keepPublic := true } rest
| "--add-public" :: rest => parseArgs { args with addPublic := true } rest
| "--force" :: rest => parseArgs { args with force := true } rest
| "--fix" :: rest => parseArgs { args with fix := true } rest
| "--explain" :: rest => parseArgs { args with explain := true } rest
| "--trace" :: rest => parseArgs { args with trace := true } rest
| "--gh-style" :: rest => parseArgs { args with githubStyle := true } rest
| "--" :: rest => { args with mods := args.mods ++ rest.map (·.toName) }
| other :: rest => parseArgs { args with mods := args.mods.push other.toName } rest
let args := parseArgs {} args
Assumes Lean's search path has already been properly configured.
-/
public def run (args : Args) (h : 0 < args.mods.size)
(srcSearchPath : SearchPath := {}) : IO UInt32 := do
-- Bail if `--help` is passed
if args.help then
IO.println help
IO.Process.exit 0
if !args.force then
if (← IO.Process.output { cmd := "lake", args := #["build", "--no-build"] }).exitCode != 0 then
IO.println "There are out of date oleans. Run `lake build` or `lake exe cache get` first"
IO.Process.exit 1
-- Determine default module(s) to run shake on
let defaultTargetModules : Array Name ← try
let (elanInstall?, leanInstall?, lakeInstall?) ← Lake.findInstall?
let config ← Lake.MonadError.runEIO <| Lake.mkLoadConfig { elanInstall?, leanInstall?, lakeInstall? }
let some workspace ← Lake.loadWorkspace config |>.toBaseIO
| throw <| IO.userError "failed to load Lake workspace"
let defaultTargetModules := workspace.root.defaultTargets.flatMap fun target =>
if let some lib := workspace.root.findLeanLib? target then
lib.roots
else if let some exe := workspace.root.findLeanExe? target then
#[exe.config.root]
else
#[]
pure defaultTargetModules
catch _ =>
pure #[]
let srcSearchPath ← getSrcSearchPath
-- the list of root modules
let mods := args.mods
let mods := if args.mods.isEmpty then defaultTargetModules else args.mods
-- Only submodules of `pkg` will be edited or have info reported on them
let pkg := mods[0].getRoot
let pkg := mods[0]!.components.head!
-- Load all the modules
let imps := mods.map ({ module := · })
let (_, s) ← importModulesCore imps (isExported := true) |>.run
let s := s.markAllExported
let mut env ← finalizeImport s (isModule := true) imps {} (leakEnv := true) (loadExts := false)
if env.header.moduleData.any (!·.isModule) then
throw <| .userError "`lake shake` only works with `module`s currently"
let mut env ← finalizeImport s (isModule := true) imps {} (leakEnv := false) (loadExts := false)
-- the one env ext we want to initialize
let is := indirectModUseExt.toEnvExtension.getState env
let newState ← indirectModUseExt.addImportedFn is.importedEntries { env := env, opts := {} }

View File
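The refactored `addExplanation` helper in the Shake.lean hunk above keeps, for each `(module, kind)` key, the use site with the shortest name. A minimal Python model of that bookkeeping (plain strings stand in for Lean `Name`s; this is illustrative, not the actual API):

```python
# Model of the "shortest explanation wins" bookkeeping in the
# `addExplanation` helper from the Shake.lean hunk above.
# The dict key plays the role of `(j, k)`.

def add_explanation(deps, key, use, definition):
    """Record (use, definition) under key, keeping the shortest use name."""
    current = deps.get(key)
    if current is None or len(use) < len(current[0]):
        deps[key] = (use, definition)
    return deps

deps = {}
add_explanation(deps, (3, "exported"), "Very.Long.Use.Site", "Foo.bar")
add_explanation(deps, (3, "exported"), "Short", "Foo.baz")
print(deps[(3, "exported")])  # ('Short', 'Foo.baz')
```

The shortest-name heuristic simply makes `--explain` output more readable; any recorded use site would be equally correct.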

@@ -3,3 +3,9 @@ name = "scripts"
[[lean_exe]]
name = "modulize"
root = "Modulize"
[[lean_exe]]
name = "shake"
root = "Shake"
# needed by `Lake.loadWorkspace`
supportInterpreter = true

View File

@@ -185,30 +185,6 @@ def get_release_notes(tag_name):
except Exception:
return None
def check_release_notes_file_exists(toolchain, github_token):
"""Check if the release notes file exists in the reference-manual repository.
For -rc1 releases, this checks that the release notes have been created.
For subsequent RCs and stable releases, release notes should already exist.
Returns tuple (exists: bool, is_rc1: bool) where is_rc1 indicates if this is
the first release candidate (when release notes need to be written).
"""
# Determine the release notes file path
# e.g., v4.28.0-rc1 -> Manual/Releases/v4_28_0.lean
base_version = strip_rc_suffix(toolchain.lstrip('v')) # "4.28.0"
file_name = f"v{base_version.replace('.', '_')}.lean" # "v4_28_0.lean"
file_path = f"Manual/Releases/{file_name}"
is_rc1 = toolchain.endswith("-rc1")
repo_url = "https://github.com/leanprover/reference-manual"
# Check if the file exists on main branch
content = get_branch_content(repo_url, "main", file_path, github_token)
return (content is not None, is_rc1)
def get_branch_content(repo_url, branch, file_path, github_token):
api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/contents/{file_path}?ref={branch}"
headers = {'Authorization': f'token {github_token}'} if github_token else {}
@@ -525,76 +501,6 @@ def check_proofwidgets4_release(repo_url, target_toolchain, github_token):
print(f" You will need to create and push a tag v0.0.{next_version}")
return False
def check_reference_manual_release_title(repo_url, toolchain, pr_branch, github_token):
"""Check if the reference-manual release notes title matches the release type.
For RC releases (e.g., v4.27.0-rc1), the title should contain the exact RC suffix.
For final releases (e.g., v4.27.0), the title should NOT contain any "-rc".
Returns True if check passes or is not applicable, False if title needs updating.
"""
is_rc = is_release_candidate(toolchain)
# For RC releases, get the base version and RC suffix
# e.g., "v4.27.0-rc1" -> version="4.27.0", rc_suffix="-rc1"
if is_rc:
parts = toolchain.lstrip('v').split('-', 1)
version = parts[0]
rc_suffix = '-' + parts[1] if len(parts) > 1 else ''
else:
version = toolchain.lstrip('v')
rc_suffix = ''
# Construct the release notes file path (e.g., Manual/Releases/v4_27_0.lean for v4.27.0)
file_name = f"v{version.replace('.', '_')}.lean" # "v4_27_0.lean"
file_path = f"Manual/Releases/{file_name}"
# Try to get the file from the PR branch first, then fall back to main branch
content = get_branch_content(repo_url, pr_branch, file_path, github_token)
if content is None:
# Try the default branch
content = get_branch_content(repo_url, "main", file_path, github_token)
if content is None:
print(f" ⚠️ Could not check release notes file: {file_path}")
return True # Don't block on this
# Look for the #doc line with the title
for line in content.splitlines():
if line.strip().startswith('#doc') and 'Manual' in line:
has_rc_in_title = '-rc' in line.lower()
if is_rc:
# For RC releases, title should contain the exact RC suffix (e.g., "-rc1")
# Use regex to match exact suffix followed by non-digit (to avoid -rc1 matching -rc10)
# Pattern matches the RC suffix followed by a non-digit or end-of-string context
# e.g., "-rc1" followed by space, quote, paren, or similar
exact_match = re.search(rf'{re.escape(rc_suffix)}(?![0-9])', line, re.IGNORECASE)
if exact_match:
print(f" ✅ Release notes title correctly shows {rc_suffix}")
return True
elif has_rc_in_title:
print(f" ❌ Release notes title shows wrong RC version (expected {rc_suffix})")
print(f" Update {file_path} to use '{rc_suffix}' in the title")
return False
else:
print(f" ❌ Release notes title missing RC suffix")
print(f" Update {file_path} to include '{rc_suffix}' in the title")
return False
else:
# For final releases, title should NOT contain -rc
if has_rc_in_title:
print(f" ❌ Release notes title still shows RC version")
print(f" Update {file_path} to remove '-rcN' from the title")
return False
else:
print(f" ✅ Release notes title is updated for final release")
return True
# If we didn't find the #doc line, don't block
print(f" ⚠️ Could not find release notes title in {file_path}")
return True
def run_mathlib_verify_version_tags(toolchain, verbose=False):
"""Run mathlib4's verify_version_tags.py script to validate the release tag.
@@ -738,27 +644,6 @@ def main():
else:
print(f" ✅ Release notes page title looks good ('{actual_title}').")
# Check if release notes file exists in reference-manual repository
# For -rc1 releases, this is when release notes need to be written
# For subsequent RCs and stable releases, they should already exist
release_notes_exists, is_rc1 = check_release_notes_file_exists(toolchain, github_token)
base_version = strip_rc_suffix(toolchain.lstrip('v'))
release_notes_file = f"Manual/Releases/v{base_version.replace('.', '_')}.lean"
if not release_notes_exists:
if is_rc1:
print(f" ❌ Release notes file not found: {release_notes_file}")
print(f" This is an -rc1 release, so release notes need to be written.")
print(f" Run `script/release_notes.py --since <previous_version>` to generate them.")
print(f" See doc/dev/release_checklist.md section 'Writing the release notes' for details.")
lean4_success = False
else:
print(f" ❌ Release notes file not found: {release_notes_file}")
print(f" Release notes should have been created for -rc1. Check the reference-manual repository.")
lean4_success = False
else:
print(f" ✅ Release notes file exists: {release_notes_file}")
repo_status["lean4"] = lean4_success
# If the release page doesn't exist, skip repository checks and master branch checks
@@ -824,11 +709,6 @@ def main():
print(f" ⚠️ CI: {ci_message}")
else:
print(f" ❓ CI: {ci_message}")
# For reference-manual, check that the release notes title has been updated
if name == "reference-manual":
pr_branch = f"bump_to_{toolchain}"
check_reference_manual_release_title(url, toolchain, pr_branch, github_token)
else:
print(f" ❌ PR with title '{pr_title}' does not exist")
print(f" Run `script/release_steps.py {toolchain} {name}` to create it")

View File
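The removed `check_reference_manual_release_title` above relies on a negative lookahead so that an exact RC suffix like `-rc1` does not match inside `-rc10`. A standalone sketch of just that check (the function name here is illustrative):

```python
import re

def title_has_exact_rc(line, rc_suffix):
    """True when `line` contains rc_suffix NOT followed by another digit,
    mirroring the negative lookahead in the removed
    check_reference_manual_release_title (so "-rc1" won't match "-rc10")."""
    return re.search(rf'{re.escape(rc_suffix)}(?![0-9])', line, re.IGNORECASE) is not None

print(title_has_exact_rc('#doc (Manual) "Lean 4.27.0-rc1 release notes"', "-rc1"))   # True
print(title_has_exact_rc('#doc (Manual) "Lean 4.27.0-rc10 release notes"', "-rc1"))  # False
```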

@@ -14,6 +14,13 @@ repositories:
bump-branch: true
dependencies: []
- name: verso
url: https://github.com/leanprover/verso
toolchain-tag: true
stable-branch: false
branch: main
dependencies: []
- name: lean4checker
url: https://github.com/leanprover/lean4checker
toolchain-tag: true
@@ -35,14 +42,6 @@ repositories:
branch: main
dependencies: []
- name: verso
url: https://github.com/leanprover/verso
toolchain-tag: true
stable-branch: false
branch: main
dependencies:
- plausible
- name: import-graph
url: https://github.com/leanprover-community/import-graph
toolchain-tag: true
@@ -143,15 +142,3 @@ repositories:
branch: master
dependencies:
- verso-web-components
- name: comparator
url: https://github.com/leanprover/comparator
toolchain-tag: true
stable-branch: false
branch: master
- name: lean4export
url: https://github.com/leanprover/lean4export
toolchain-tag: true
stable-branch: false
branch: master

View File

@@ -10,7 +10,7 @@ endif()
include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
set(LEAN_VERSION_MINOR 29)
set(LEAN_VERSION_MINOR 28)
set(LEAN_VERSION_PATCH 0)
set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")
@@ -40,10 +40,6 @@ find_program(LLD_PATH lld)
if(LLD_PATH)
string(APPEND LEAN_EXTRA_LINKER_FLAGS_DEFAULT " -fuse-ld=lld")
endif()
if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
# Create space in install names so they can be patched later in Nix.
string(APPEND LEAN_EXTRA_LINKER_FLAGS_DEFAULT " -headerpad_max_install_names")
endif()
set(LEAN_EXTRA_LINKER_FLAGS ${LEAN_EXTRA_LINKER_FLAGS_DEFAULT} CACHE STRING "Additional flags used by the linker")
set(LEAN_EXTRA_CXX_FLAGS "" CACHE STRING "Additional flags used by the C++ compiler. Unlike `CMAKE_CXX_FLAGS`, these will not be used to build e.g. cadical.")
@@ -456,14 +452,11 @@ if(LLVM AND ${STAGE} GREATER 0)
message(VERBOSE "leanshared linker flags: '${LEANSHARED_LINKER_FLAGS}' | lean extra cxx flags '${CMAKE_CXX_FLAGS}'")
endif()
# We always strip away unused declarations to reduce binary sizes as the time cost is small and the
# potential benefit can be huge, especially when stripping `meta import`s.
# get rid of unused parts of C++ stdlib
if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
string(APPEND LEANC_EXTRA_CC_FLAGS " -fdata-sections -ffunction-sections")
string(APPEND LEAN_EXTRA_LINKER_FLAGS " -Wl,-dead_strip")
string(APPEND TOOLCHAIN_SHARED_LINKER_FLAGS " -Wl,-dead_strip")
elseif(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Emscripten")
string(APPEND LEANC_EXTRA_CC_FLAGS " -fdata-sections -ffunction-sections")
string(APPEND LEAN_EXTRA_LINKER_FLAGS " -Wl,--gc-sections")
string(APPEND TOOLCHAIN_SHARED_LINKER_FLAGS " -Wl,--gc-sections")
endif()
if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
@@ -638,9 +631,6 @@ if(${STAGE} GREATER 1)
COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/lean/libleanrt.a" "${CMAKE_BINARY_DIR}/lib/lean/libleanrt.a"
COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/lean/libleancpp.a" "${CMAKE_BINARY_DIR}/lib/lean/libleancpp.a"
COMMAND cmake -E copy_if_different "${PREV_STAGE}/lib/temp/libleancpp_1.a" "${CMAKE_BINARY_DIR}/lib/temp/libleancpp_1.a")
add_dependencies(leanrt_initial-exec copy-leancpp)
add_dependencies(leanrt copy-leancpp)
add_dependencies(leancpp_1 copy-leancpp)
add_dependencies(leancpp copy-leancpp)
if(LLVM)
add_custom_target(copy-lean-h-bc
@@ -705,7 +695,7 @@ endif()
set(STDLIBS Init Std Lean Leanc)
if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Emscripten")
list(APPEND STDLIBS Lake LeanChecker)
list(APPEND STDLIBS Lake)
endif()
add_custom_target(make_stdlib ALL
@@ -768,12 +758,6 @@ if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Emscripten")
DEPENDS lake_shared
COMMAND $(MAKE) -f ${CMAKE_BINARY_DIR}/stdlib.make lake
VERBATIM)
add_custom_target(leanchecker ALL
WORKING_DIRECTORY ${LEAN_SOURCE_DIR}
DEPENDS lake_shared
COMMAND $(MAKE) -f ${CMAKE_BINARY_DIR}/stdlib.make leanchecker
VERBATIM)
endif()
if(PREV_STAGE)

View File

@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Prelude
public import Init.Notation
@@ -37,7 +38,6 @@ public import Init.Omega
public import Init.MacroTrace
public import Init.Grind
public import Init.GrindInstances
public import Init.Sym
public import Init.While
public import Init.Syntax
public import Init.Internal

View File

@@ -102,7 +102,7 @@ noncomputable def strongIndefiniteDescription {α : Sort u} (p : α → Prop) (h
⟨xp.val, fun _ => xp.property⟩)
(fun hp => ⟨choice h, fun h => absurd h hp⟩)
/-- The Hilbert epsilon function. -/
/-- the Hilbert epsilon Function -/
noncomputable def epsilon {α : Sort u} [h : Nonempty α] (p : α → Prop) : α :=
(strongIndefiniteDescription p h).val

View File

@@ -144,7 +144,7 @@ instance : ToBool Bool where
Converts the result of the monadic action `x` to a `Bool`. If it is `true`, returns it and ignores
`y`; otherwise, runs `y` and returns its result.
This is a monadic counterpart to the short-circuiting `||` operator, usually accessed via the `<||>`
This a monadic counterpart to the short-circuiting `||` operator, usually accessed via the `<||>`
operator.
-/
@[macro_inline] def orM {m : Type u → Type v} {β : Type u} [Monad m] [ToBool β] (x y : m β) : m β := do
@@ -161,7 +161,7 @@ recommended_spelling "orM" for "<||>" in [orM, «term_<||>_»]
Converts the result of the monadic action `x` to a `Bool`. If it is `true`, returns `y`; otherwise,
returns the original result of `x`.
This is a monadic counterpart to the short-circuiting `&&` operator, usually accessed via the `<&&>`
This a monadic counterpart to the short-circuiting `&&` operator, usually accessed via the `<&&>`
operator.
-/
@[macro_inline] def andM {m : Type u → Type v} {β : Type u} [Monad m] [ToBool β] (x y : m β) : m β := do

View File
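The `orM`/`andM` docstrings above describe short-circuiting at the monadic level: the second action only runs when the first result does not already decide the answer. A Python sketch using thunks for the deferred actions (the helper names are hypothetical):

```python
def or_m(x, y):
    """Short-circuiting monadic `or`: run x; if its result is true,
    return it without ever running y (models the behavior the orM
    docstring above describes; thunks stand in for monadic actions)."""
    b = x()
    return b if b else y()

def and_m(x, y):
    """Short-circuiting monadic `and`: run x; if its result is false,
    return it without running y (models andM)."""
    b = x()
    return y() if b else b

# y is never forced when x already decides the result:
print(or_m(lambda: True, lambda: 1 / 0))    # True (the 1/0 thunk never runs)
print(and_m(lambda: False, lambda: 1 / 0))  # False
```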

@@ -13,10 +13,6 @@ public import Init.SizeOf
public section
set_option linter.missingDocs true -- keep it documented
-- BEq instance for Option defined here so it's available early in the import chain
-- (before Init.Grind.Config and Init.MetaTypes which need BEq (Option Nat))
deriving instance BEq for Option
@[expose] section
universe u v w
@@ -341,7 +337,7 @@ inductive Exists {α : Sort u} (p : α → Prop) : Prop where
An indication of whether a loop's body terminated early that's used to compile the `for x in xs`
notation.
A collection's `ForIn` or `ForIn'` instance describes how to iterate over its elements. The monadic
A collection's `ForIn` or `ForIn'` instance describe's how to iterate over its elements. The monadic
action that represents the body of the loop returns a `ForInStep α`, where `α` is the local state
used to implement features such as `let mut`.
-/
@@ -514,12 +510,12 @@ abbrev SSuperset [HasSSubset α] (a b : α) := SSubset b a
/-- Notation type class for the union operation `∪`. -/
class Union (α : Type u) where
/-- `a ∪ b` is the union of `a` and `b`. -/
/-- `a ∪ b` is the union of`a` and `b`. -/
union : α α α
/-- Notation type class for the intersection operation `∩`. -/
class Inter (α : Type u) where
/-- `a ∩ b` is the intersection of `a` and `b`. -/
/-- `a ∩ b` is the intersection of`a` and `b`. -/
inter : α α α
/-- Notation type class for the set difference `\`. -/
@@ -542,10 +538,10 @@ infix:50 " ⊇ " => Superset
/-- Strict superset relation: `a ⊃ b` -/
infix:50 " ⊃ " => SSuperset
/-- `a ∪ b` is the union of `a` and `b`. -/
/-- `a ∪ b` is the union of`a` and `b`. -/
infixl:65 " ∪ " => Union.union
/-- `a ∩ b` is the intersection of `a` and `b`. -/
/-- `a ∩ b` is the intersection of`a` and `b`. -/
infixl:70 " ∩ " => Inter.inter
/--
@@ -1565,10 +1561,6 @@ instance {p q : Prop} [d : Decidable (p ↔ q)] : Decidable (p = q) :=
| isTrue h => isTrue (propext h)
| isFalse h => isFalse fun heq => h (heq ▸ Iff.rfl)
/-- Helper theorem for proving injectivity theorems -/
theorem Lean.injEq_helper {P Q R : Prop} :
(P → Q → R) → (P ∧ Q → R) := by intro h ⟨h₁, h₂⟩; exact h h₁ h₂
gen_injective_theorems% Array
gen_injective_theorems% BitVec
gen_injective_theorems% ByteArray

View File

@@ -125,22 +125,6 @@ instance instDecidableEmpEq (ys : Array α) : Decidable (#[] = ys) :=
| [] => isTrue rfl
| _ :: _ => isFalse (fun h => Array.noConfusion rfl (heq_of_eq h) (fun h => List.noConfusion rfl h))
@[inline]
def instDecidableEqEmpImpl (xs : Array α) : Decidable (xs = #[]) :=
decidable_of_iff xs.isEmpty <| by rcases xs with ⟨_ | _⟩ <;> simp [Array.isEmpty]
@[inline]
def instDecidableEmpEqImpl (xs : Array α) : Decidable (#[] = xs) :=
decidable_of_iff xs.isEmpty <| by rcases xs with ⟨_ | _⟩ <;> simp [Array.isEmpty]
@[csimp]
theorem instDecidableEqEmp_csimp : @instDecidableEqEmp = @instDecidableEqEmpImpl :=
Subsingleton.allEq _ _
@[csimp]
theorem instDecidableEmpEq_csimp : @instDecidableEmpEq = @instDecidableEmpEqImpl :=
Subsingleton.allEq _ _
theorem beq_eq_decide [BEq α] (xs ys : Array α) :
(xs == ys) = if h : xs.size = ys.size then
decide (∀ (i : Nat) (h' : i < xs.size), xs[i] == ys[i]'(h ▸ h')) else false := by

View File

@@ -115,8 +115,7 @@ theorem none_eq_getElem?_iff {xs : Array α} {i : Nat} : none = xs[i]? ↔ xs.si
theorem getElem?_eq_none {xs : Array α} (h : xs.size ≤ i) : xs[i]? = none := by
simp [h]
grind_pattern Array.getElem?_eq_none => xs.size, xs[i]? where
guard xs.size ≤ i
grind_pattern Array.getElem?_eq_none => xs.size, xs[i]?
@[simp] theorem getElem?_eq_getElem {xs : Array α} {i : Nat} (h : i < xs.size) : xs[i]? = some xs[i] :=
getElem?_pos ..

View File

@@ -159,17 +159,4 @@ theorem setWidth_neg_of_le {x : BitVec v} (h : w ≤ v) : BitVec.setWidth w (-x)
omega
omega
@[induction_eliminator, elab_as_elim]
theorem cons_induction {motive : (w : Nat) → BitVec w → Prop} (nil : motive 0 .nil)
(cons : ∀ {w : Nat} (b : Bool) (bv : BitVec w), motive w bv → motive (w + 1) (.cons b bv)) :
∀ {w : Nat} (x : BitVec w), motive w x := by
intros w x
induction w
case zero =>
simp only [BitVec.eq_nil x, nil]
case succ wl ih =>
rw [← cons_msb_setWidth x]
apply cons
apply ih
end BitVec

View File

@@ -67,9 +67,6 @@ theorem none_eq_getElem?_iff {l : BitVec w} : none = l[n]? ↔ w ≤ n := by
@[simp]
theorem getElem?_eq_none {l : BitVec w} (h : w ≤ n) : l[n]? = none := getElem?_eq_none_iff.mpr h
grind_pattern BitVec.getElem?_eq_none => l[n]? where
guard w ≤ n
theorem getElem?_eq (l : BitVec w) (i : Nat) :
l[i]? = if h : i < w then some l[i] else none := by
split <;> simp_all
@@ -3362,26 +3359,6 @@ theorem extractLsb'_concat {x : BitVec (w + 1)} {y : Bool} :
· simp
· simp [show i - 1 < t by omega]
theorem concat_extractLsb'_getLsb {x : BitVec (w + 1)} :
BitVec.concat (x.extractLsb' 1 w) (x.getLsb 0) = x := by
ext i hw
by_cases h : i = 0
· simp [h]
· simp [h, hw, show (1 + (i - 1)) = i by omega, getElem_concat]
@[elab_as_elim]
theorem concat_induction {motive : (w : Nat) → BitVec w → Prop} (nil : motive 0 .nil)
(concat : ∀ {w : Nat} (bv : BitVec w) (b : Bool), motive w bv → motive (w + 1) (bv.concat b)) :
∀ {w : Nat} (x : BitVec w), motive w x := by
intros w x
induction w
case zero =>
simp only [BitVec.eq_nil x, nil]
case succ wl ih =>
rw [← concat_extractLsb'_getLsb (x := x)]
apply concat
apply ih
/-! ### shiftConcat -/
@[grind =]
@@ -6403,6 +6380,73 @@ theorem cpopNatRec_add {x : BitVec w} {acc n : Nat} :
x.cpopNatRec n (acc + acc') = x.cpopNatRec n acc + acc' := by
rw [cpopNatRec_eq (acc := acc + acc'), cpopNatRec_eq (acc := acc), Nat.add_assoc]
theorem cpopNatRec_le {x : BitVec w} (n : Nat) :
x.cpopNatRec n acc ≤ acc + n := by
induction n generalizing acc
· case zero =>
simp
· case succ n ihn =>
have : (x.getLsbD n).toNat ≤ 1 := by cases x.getLsbD n <;> simp
specialize ihn (acc := acc + (x.getLsbD n).toNat)
simp
omega
@[simp]
theorem cpopNatRec_of_le {x : BitVec w} (k n : Nat) (hn : w ≤ n) :
x.cpopNatRec (n + k) acc = x.cpopNatRec n acc := by
induction k
· case zero =>
simp
· case succ k ihk =>
simp [show n + (k + 1) = (n + k) + 1 by omega, ihk, show w ≤ n + k by omega]
theorem cpopNatRec_zero_le (x : BitVec w) (n : Nat) :
x.cpopNatRec n 0 ≤ w := by
induction n
· case zero =>
simp
· case succ n ihn =>
by_cases hle : n ≤ w
· by_cases hx : x.getLsbD n
· have := cpopNatRec_le (x := x) (acc := 1) (by omega)
have := lt_of_getLsbD hx
simp [hx]
omega
· have := cpopNatRec_le (x := x) (acc := 0) (by omega)
simp [hx]
omega
simp [show w ≤ n by omega]
omega
@[simp]
theorem cpopNatRec_allOnes (h : n ≤ w) :
(allOnes w).cpopNatRec n acc = acc + n := by
induction n
· case zero =>
simp
· case succ n ihn =>
specialize ihn (by omega)
simp [show n < w by omega, ihn,
cpopNatRec_add (acc := acc) (acc' := 1)]
omega
@[simp]
theorem cpop_allOnes :
(allOnes w).cpop = BitVec.ofNat w w := by
simp [cpop, cpopNatRec_allOnes]
@[simp]
theorem cpop_zero :
(0#w).cpop = 0#w := by
simp [cpop]
theorem toNat_cpop_le (x : BitVec w) :
x.cpop.toNat ≤ w := by
have hlt := Nat.lt_two_pow_self (n := w)
have hle := cpopNatRec_zero_le (x := x) (n := w)
simp only [cpop, toNat_ofNat, ge_iff_le]
rw [Nat.mod_eq_of_lt (by omega)]
exact hle
@[simp]
theorem cpopNatRec_cons_of_le {x : BitVec w} {b : Bool} (hn : n ≤ w) :
@@ -6428,68 +6472,6 @@ theorem cpopNatRec_cons_of_lt {x : BitVec w} {b : Bool} (hn : w < n) :
· simp [show w = n by omega, getElem_cons,
cpopNatRec_add (acc := acc) (acc' := b.toNat), Nat.add_comm]
theorem cpopNatRec_le {x : BitVec w} (n : Nat) :
x.cpopNatRec n acc ≤ acc + n := by
induction n generalizing acc
· case zero =>
simp
· case succ n ihn =>
have : (x.getLsbD n).toNat ≤ 1 := by cases x.getLsbD n <;> simp
specialize ihn (acc := acc + (x.getLsbD n).toNat)
simp
omega
@[simp]
theorem cpopNatRec_of_le {x : BitVec w} (k n : Nat) (hn : w ≤ n) :
x.cpopNatRec (n + k) acc = x.cpopNatRec n acc := by
induction k
· case zero =>
simp
· case succ k ihk =>
simp [show n + (k + 1) = (n + k) + 1 by omega, ihk, show w ≤ n + k by omega]
@[simp]
theorem cpopNatRec_allOnes (h : n ≤ w) :
(allOnes w).cpopNatRec n acc = acc + n := by
induction n
· case zero =>
simp
· case succ n ihn =>
specialize ihn (by omega)
simp [show n < w by omega, ihn,
cpopNatRec_add (acc := acc) (acc' := 1)]
omega
@[simp]
theorem cpop_allOnes :
(allOnes w).cpop = BitVec.ofNat w w := by
simp [cpop, cpopNatRec_allOnes]
@[simp]
theorem cpop_zero :
(0#w).cpop = 0#w := by
simp [cpop]
theorem cpopNatRec_zero_le (x : BitVec w) (n : Nat) :
x.cpopNatRec n 0 ≤ w := by
induction x
· case nil => simp
· case cons w b bv ih =>
by_cases hle : n ≤ w
· have := cpopNatRec_cons_of_le (b := b) (x := bv) (n := n) (acc := 0) hle
omega
· rw [cpopNatRec_cons_of_lt (by omega)]
have : b.toNat ≤ 1 := by cases b <;> simp
omega
theorem toNat_cpop_le (x : BitVec w) :
x.cpop.toNat ≤ w := by
have hlt := Nat.lt_two_pow_self (n := w)
have hle := cpopNatRec_zero_le (x := x) (n := w)
simp only [cpop, toNat_ofNat, ge_iff_le]
rw [Nat.mod_eq_of_lt (by omega)]
exact hle
theorem cpopNatRec_concat_of_lt {x : BitVec w} {b : Bool} (hn : 0 < n) :
(concat x b).cpopNatRec n acc = b.toNat + x.cpopNatRec (n - 1) acc := by
induction n generalizing acc
@@ -6587,12 +6569,12 @@ theorem cpop_cast (x : BitVec w) (h : w = v) :
@[simp]
theorem toNat_cpop_append {x : BitVec w} {y : BitVec u} :
(x ++ y).cpop.toNat = x.cpop.toNat + y.cpop.toNat := by
induction x generalizing y
· case nil =>
simp
· case cons w b bv ih =>
simp [cons_append, ih]
omega
induction w generalizing u
· case zero =>
simp [cpop]
· case succ w ihw =>
rw [← cons_msb_setWidth x, toNat_cpop_cons, cons_append, cpop_cast, toNat_cast,
toNat_cpop_cons, ihw, Nat.add_assoc]
theorem cpop_append {x : BitVec w} {y : BitVec u} :
(x ++ y).cpop = x.cpop.setWidth (w + u) + y.cpop.setWidth (w + u) := by
@@ -6603,14 +6585,4 @@ theorem cpop_append {x : BitVec w} {y : BitVec u} :
simp only [toNat_cpop_append, toNat_add, toNat_setWidth, Nat.add_mod_mod, Nat.mod_add_mod]
rw [Nat.mod_eq_of_lt (by omega)]
theorem toNat_cpop_not {x : BitVec w} :
(~~~x).cpop.toNat = w - x.cpop.toNat := by
induction x
· case nil =>
simp
· case cons b x ih =>
have := toNat_cpop_le x
cases b
<;> (simp [ih]; omega)
end BitVec

View File
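The `cpopNatRec` lemmas in the hunks above (the `acc + n` bound, `allOnes`, and additivity over `++`) can be sanity-checked against a plain-integer model of "count the set bits among the low `n` bits":

```python
def cpop_nat_rec(x, n, acc=0):
    """Count the set bits among the n least-significant bits of x,
    starting from accumulator acc; a plain-integer model of
    BitVec.cpopNatRec from the hunks above, not the Lean API itself."""
    for i in range(n):
        acc += (x >> i) & 1
    return acc

w = 8
all_ones = (1 << w) - 1
print(cpop_nat_rec(all_ones, w))   # 8, matching cpopNatRec_allOnes
print(cpop_nat_rec(0b1011, w))     # 3

# toNat_cpop_append: popcount of an append splits additively
x, y, u = 0b101, 0b0011, 4
assert cpop_nat_rec((x << u) | y, 3 + u) == cpop_nat_rec(x, 3) + cpop_nat_rec(y, u)
```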

@@ -9,4 +9,3 @@ prelude
public import Init.Data.Char.Basic
public import Init.Data.Char.Lemmas
public import Init.Data.Char.Order
public import Init.Data.Char.Ordinal

View File

@@ -1,242 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module
prelude
public import Init.Data.Fin.OverflowAware
public import Init.Data.UInt.Basic
public import Init.Data.Function
import Init.Data.Char.Lemmas
import Init.Data.Char.Order
import Init.Grind
/-!
# Bijection between `Char` and `Fin Char.numCodePoints`
In this file, we construct a bijection between `Char` and `Fin Char.numCodePoints` and show that
it is compatible with various operations. Since `Fin` is simpler than `Char` due to being based
on natural numbers instead of `UInt32` and not having a hole in the middle (surrogate code points),
this is sometimes useful to simplify reasoning about `Char`.
We use these declarations in the construction of `Char` ranges, see the module
`Init.Data.Range.Polymorphic.Char`.
-/
set_option doc.verso true
public section
namespace Char
/-- The number of surrogate code points. -/
abbrev numSurrogates : Nat :=
-- 0xe000 - 0xd800
2048
/-- The size of the {name}`Char` type. -/
abbrev numCodePoints : Nat :=
-- 0x110000 - numSurrogates
1112064
/--
Packs {name}`Char` bijectively into {lean}`Fin Char.numCodePoints` by shifting code points which are
greater than the surrogate code points by the number of surrogate code points.
The inverse of this function is called {name (scope := "Init.Data.Char.Ordinal")}`Char.ofOrdinal`.
-/
def ordinal (c : Char) : Fin Char.numCodePoints :=
if h : c.val < 0xd800 then
⟨c.val.toNat, by grind [UInt32.lt_iff_toNat_lt]⟩
else
⟨c.val.toNat - Char.numSurrogates, by grind [UInt32.lt_iff_toNat_lt]⟩
/--
Unpacks {lean}`Fin Char.numCodePoints` bijectively to {name}`Char` by shifting code points which are
greater than the surrogate code points by the number of surrogate code points.
The inverse of this function is called {name}`Char.ordinal`.
-/
def ofOrdinal (f : Fin Char.numCodePoints) : Char :=
if h : (f : Nat) < 0xd800 then
⟨UInt32.ofNatLT f (by grind), by grind [UInt32.toNat_ofNatLT]⟩
else
⟨UInt32.ofNatLT (f + Char.numSurrogates) (by grind), by grind [UInt32.toNat_ofNatLT]⟩
/--
Computes the next {name}`Char`, skipping over surrogate code points (which are not valid
{name}`Char`s) as necessary.
This function is specified by its interaction with {name}`Char.ordinal`, see
{name (scope := "Init.Data.Char.Ordinal")}`Char.succ?_eq`.
-/
def succ? (c : Char) : Option Char :=
if h₀ : c.val < 0xd7ff then
some ⟨c.val + 1, by grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_add]⟩
else if h₁ : c.val = 0xd7ff then
some ⟨0xe000, by decide⟩
else if h₂ : c.val < 0x10ffff then
some ⟨c.val + 1, by
simp only [UInt32.lt_iff_toNat_lt, UInt32.reduceToNat, Nat.not_lt, UInt32.toNat_inj,
UInt32.isValidChar, Nat.isValidChar, UInt32.toNat_add, Nat.reducePow] at *
grind⟩
else none
/--
Computes the {name}`m`-th next {name}`Char`, skipping over surrogate code points (which are not
valid {name}`Char`s) as necessary.
This function is specified by its interaction with {name}`Char.ordinal`, see
{name (scope := "Init.Data.Char.Ordinal")}`Char.succMany?_eq`.
-/
def succMany? (m : Nat) (c : Char) : Option Char :=
c.ordinal.addNat? m |>.map Char.ofOrdinal
@[grind =]
theorem coe_ordinal {c : Char} :
(c.ordinal : Nat) =
if c.val < 0xd800 then
c.val.toNat
else
c.val.toNat - Char.numSurrogates := by
grind [Char.ordinal]
@[simp]
theorem ordinal_zero : '\x00'.ordinal = 0 := by
ext
simp [coe_ordinal]
@[grind =]
theorem val_ofOrdinal {f : Fin Char.numCodePoints} :
(Char.ofOrdinal f).val =
if h : (f : Nat) < 0xd800 then
UInt32.ofNatLT f (by grind)
else
UInt32.ofNatLT (f + Char.numSurrogates) (by grind) := by
grind [Char.ofOrdinal]
@[simp]
theorem ofOrdinal_ordinal {c : Char} : Char.ofOrdinal c.ordinal = c := by
ext
simp only [val_ofOrdinal, coe_ordinal, UInt32.ofNatLT_add]
split
· grind [UInt32.lt_iff_toNat_lt, UInt32.ofNatLT_toNat]
· rw [dif_neg]
simp only [← UInt32.toNat_inj, UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
grind [UInt32.toNat_lt, UInt32.lt_iff_toNat_lt]
· grind [UInt32.lt_iff_toNat_lt]
@[simp]
theorem ordinal_ofOrdinal {f : Fin Char.numCodePoints} : (Char.ofOrdinal f).ordinal = f := by
ext
simp [coe_ordinal, val_ofOrdinal]
split
· rw [if_pos, UInt32.toNat_ofNatLT]
simpa [UInt32.lt_iff_toNat_lt]
· rw [if_neg, UInt32.toNat_add, UInt32.toNat_ofNatLT, UInt32.toNat_ofNatLT, Nat.mod_eq_of_lt,
Nat.add_sub_cancel]
· grind
· simp only [UInt32.lt_iff_toNat_lt, UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow,
UInt32.reduceToNat, Nat.not_lt]
grind
@[simp]
theorem ordinal_comp_ofOrdinal : Char.ordinal ∘ Char.ofOrdinal = id := by
ext; simp
@[simp]
theorem ofOrdinal_comp_ordinal : Char.ofOrdinal ∘ Char.ordinal = id := by
ext; simp
@[simp]
theorem ordinal_inj {c d : Char} : c.ordinal = d.ordinal ↔ c = d :=
⟨fun h => by simpa using congrArg Char.ofOrdinal h, (· ▸ rfl)⟩
theorem ordinal_injective : Function.Injective Char.ordinal :=
fun _ _ => ordinal_inj.1
@[simp]
theorem ofOrdinal_inj {f g : Fin Char.numCodePoints} :
Char.ofOrdinal f = Char.ofOrdinal g ↔ f = g :=
⟨fun h => by simpa using congrArg Char.ordinal h, (· ▸ rfl)⟩
theorem ofOrdinal_injective : Function.Injective Char.ofOrdinal :=
fun _ _ => ofOrdinal_inj.1
theorem ordinal_le_of_le {c d : Char} (h : c ≤ d) : c.ordinal ≤ d.ordinal := by
simp only [le_def, UInt32.le_iff_toNat_le] at h
simp only [Fin.le_def, coe_ordinal, UInt32.lt_iff_toNat_lt, UInt32.reduceToNat]
grind
theorem ofOrdinal_le_of_le {f g : Fin Char.numCodePoints} (h : f ≤ g) :
Char.ofOrdinal f ≤ Char.ofOrdinal g := by
simp only [Fin.le_def] at h
simp only [le_def, val_ofOrdinal, UInt32.ofNatLT_add, UInt32.le_iff_toNat_le]
split
· simp only [UInt32.toNat_ofNatLT]
split
· simpa
· simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
grind
· simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
rw [dif_neg (by grind)]
simp only [UInt32.toNat_add, UInt32.toNat_ofNatLT, Nat.reducePow]
grind
theorem le_iff_ordinal_le {c d : Char} : c ≤ d ↔ c.ordinal ≤ d.ordinal :=
⟨ordinal_le_of_le, fun h => by simpa using ofOrdinal_le_of_le h⟩
theorem le_iff_ofOrdinal_le {f g : Fin Char.numCodePoints} :
f ≤ g ↔ Char.ofOrdinal f ≤ Char.ofOrdinal g :=
⟨ofOrdinal_le_of_le, fun h => by simpa using ordinal_le_of_le h⟩
theorem lt_iff_ordinal_lt {c d : Char} : c < d ↔ c.ordinal < d.ordinal := by
simp only [Std.lt_iff_le_and_not_ge, le_iff_ordinal_le]
theorem lt_iff_ofOrdinal_lt {f g : Fin Char.numCodePoints} :
f < g ↔ Char.ofOrdinal f < Char.ofOrdinal g := by
simp only [Std.lt_iff_le_and_not_ge, le_iff_ofOrdinal_le]
theorem succ?_eq {c : Char} : c.succ? = (c.ordinal.addNat? 1).map Char.ofOrdinal := by
fun_cases Char.succ? with
| case1 h =>
rw [Fin.addNat?_eq_some]
· simp only [coe_ordinal, Option.map_some, Option.some.injEq, Char.ext_iff, val_ofOrdinal,
UInt32.ofNatLT_add, UInt32.reduceOfNatLT]
split
· simp only [UInt32.ofNatLT_toNat, dite_eq_ite, left_eq_ite_iff, Nat.not_lt,
Nat.reduceLeDiff, UInt32.left_eq_add]
grind [UInt32.lt_iff_toNat_lt]
· grind
· simp [coe_ordinal]
grind [UInt32.lt_iff_toNat_lt]
| case2 =>
rw [Fin.addNat?_eq_some]
· simp [coe_ordinal, *, Char.ext_iff, val_ofOrdinal, numSurrogates]
· simp [coe_ordinal, *, numCodePoints]
| case3 =>
rw [Fin.addNat?_eq_some]
· simp only [coe_ordinal, Option.map_some, Option.some.injEq, Char.ext_iff, val_ofOrdinal,
UInt32.ofNatLT_add, UInt32.reduceOfNatLT]
split
· grind
· rw [dif_neg]
· simp only [← UInt32.toNat_inj, UInt32.toNat_add, UInt32.reduceToNat, Nat.reducePow,
UInt32.toNat_ofNatLT, Nat.mod_add_mod]
grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_inj]
· grind [UInt32.lt_iff_toNat_lt, UInt32.toNat_inj]
· grind [UInt32.lt_iff_toNat_lt]
| case4 =>
rw [eq_comm]
grind [UInt32.lt_iff_toNat_lt]
theorem map_ordinal_succ? {c : Char} : c.succ?.map ordinal = c.ordinal.addNat? 1 := by
simp [succ?_eq]
theorem succMany?_eq {m : Nat} {c : Char} :
c.succMany? m = (c.ordinal.addNat? m).map Char.ofOrdinal := by
rfl
end Char
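As a quick illustration of the bijection this file establishes (a hedged sketch, not part of the commit; it assumes the `Char.ordinal`/`Char.ofOrdinal` API above is in scope):

```lean
-- `ordinal` and `ofOrdinal` are mutually inverse, so `Char` is in bijection
-- with `Fin Char.numCodePoints`.
example (c : Char) : Char.ofOrdinal c.ordinal = c := Char.ofOrdinal_ordinal

-- The bijection is order-preserving in both directions.
example (c d : Char) : c ≤ d ↔ c.ordinal ≤ d.ordinal := Char.le_iff_ordinal_le
```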

View File

@@ -11,4 +11,3 @@ public import Init.Data.Fin.Log2
public import Init.Data.Fin.Iterate
public import Init.Data.Fin.Fold
public import Init.Data.Fin.Lemmas
public import Init.Data.Fin.OverflowAware

View File

@@ -1,51 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module
prelude
public import Init.Data.Fin.Basic
import Init.Data.Fin.Lemmas
set_option doc.verso true
public section
namespace Fin
/--
Overflow-aware addition of a natural number to an element of {lean}`Fin n`.
Examples:
* {lean}`(2 : Fin 3).addNat? 1 = (none : Option (Fin 3))`
* {lean}`(2 : Fin 4).addNat? 1 = (some 3 : Option (Fin 4))`
-/
@[inline]
protected def addNat? (i : Fin n) (m : Nat) : Option (Fin n) :=
if h : ↑i + m < n then some ⟨↑i + m, h⟩ else none
theorem addNat?_eq_some {i : Fin n} (h : ↑i + m < n) : i.addNat? m = some ⟨↑i + m, h⟩ := by
simp [Fin.addNat?, h]
theorem addNat?_eq_some_iff {i : Fin n} :
i.addNat? m = some j ↔ ↑i + m < n ∧ ↑j = ↑i + m := by
simp only [Fin.addNat?]
split <;> simp [Fin.ext_iff, eq_comm, *]
@[simp]
theorem addNat?_eq_none_iff {i : Fin n} : i.addNat? m = none ↔ n ≤ ↑i + m := by
simp only [Fin.addNat?]
split <;> simp_all [Nat.not_lt]
@[simp]
theorem addNat?_zero {i : Fin n} : i.addNat? 0 = some i := by
simp [addNat?_eq_some_iff]
@[grind =]
theorem addNat?_eq_dif {i : Fin n} :
i.addNat? m = if h : ↑i + m < n then some ⟨↑i + m, h⟩ else none := by
rfl
end Fin
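The behavior of `Fin.addNat?` matches the docstring's examples; a minimal sketch (not part of the commit, assuming this file is imported):

```lean
-- Overflow-aware: `none` signals that the sum would not fit in `Fin n`.
example : (2 : Fin 3).addNat? 1 = none := by decide
example : (2 : Fin 4).addNat? 1 = some 3 := by decide
```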

View File

@@ -113,8 +113,6 @@ theorem gcd_eq_right_iff_dvd (hb : 0 ≤ b) : gcd a b = b ↔ b ∣ a := by
theorem gcd_assoc (a b c : Int) : gcd (gcd a b) c = gcd a (gcd b c) := Nat.gcd_assoc ..
theorem gcd_left_comm (a b c : Int) : gcd a (gcd b c) = gcd b (gcd a c) := Nat.gcd_left_comm ..
theorem gcd_mul_left (m n k : Int) : gcd (m * n) (m * k) = m.natAbs * gcd n k := by
simp [gcd_eq_natAbs_gcd_natAbs, Nat.gcd_mul_left, natAbs_mul]

View File

@@ -10,7 +10,6 @@ public import Init.Classical
public import Init.Ext
set_option doc.verso true
set_option linter.missingDocs true
public section
@@ -350,24 +349,14 @@ abbrev PlausibleIterStep.casesOn {IsPlausibleStep : IterStep α β → Prop}
end IterStep
/--
The step function of an iterator in `Iter (α := α) β` or `IterM (α := α) m β`.
The typeclass providing the step function of an iterator in `Iter (α := α) β` or
`IterM (α := α) m β`.
In order to allow intrinsic termination proofs when iterating with the `step` function, the
step object is bundled with a proof that it is a "plausible" step for the given current iterator.
-/
class Iterator (α : Type w) (m : Type w → Type w') (β : outParam (Type w)) where
/--
A relation that governs the allowed steps from a given iterator.
The "plausible" steps are those which make sense for a given state; plausibility can ensure
properties such as the successor iterator being drawn from the same collection, that an iterator
resulting from a skip will return the same next value, or that the next item yielded is next one
in the original collection.
-/
IsPlausibleStep : IterM (α := α) m β → IterStep (IterM (α := α) m β) β → Prop
/--
Carries out a step of iteration.
-/
step : (it : IterM (α := α) m β) → m (Shrink <| PlausibleIterStep <| IsPlausibleStep it)
section Monadic
@@ -380,7 +369,7 @@ def IterM.mk {α : Type w} (it : α) (m : Type w → Type w') (β : Type w) :
IterM (α := α) m β :=
it
@[deprecated IterM.mk (since := "2025-12-01"), inline, expose, inherit_doc IterM.mk]
@[deprecated IterM.mk (since := "2025-12-01"), inline, expose]
def Iterators.toIterM := @IterM.mk
@[simp]
@@ -388,7 +377,6 @@ theorem IterM.mk_internalState {α m β} (it : IterM (α := α) m β) :
.mk it.internalState m β = it :=
rfl
set_option linter.missingDocs false in
@[deprecated IterM.mk_internalState (since := "2025-12-01")]
def Iterators.toIterM_internalState := @IterM.mk_internalState
@@ -471,10 +459,8 @@ number of steps.
-/
inductive IterM.IsPlausibleIndirectOutput {α β : Type w} {m : Type w → Type w'} [Iterator α m β]
: IterM (α := α) m β → β → Prop where
/-- The output value could plausibly be emitted in the next step. -/
| direct {it : IterM (α := α) m β} {out : β} : it.IsPlausibleOutput out →
it.IsPlausibleIndirectOutput out
/-- The output value could plausibly be emitted in a step after the next step. -/
| indirect {it it' : IterM (α := α) m β} {out : β} : it'.IsPlausibleSuccessorOf it →
it'.IsPlausibleIndirectOutput out → it.IsPlausibleIndirectOutput out
@@ -484,9 +470,7 @@ finitely many steps. This relation is reflexive.
-/
inductive IterM.IsPlausibleIndirectSuccessorOf {α β : Type w} {m : Type w → Type w'}
[Iterator α m β] : IterM (α := α) m β → IterM (α := α) m β → Prop where
/-- Every iterator is a plausible indirect successor of itself. -/
| refl (it : IterM (α := α) m β) : it.IsPlausibleIndirectSuccessorOf it
/-- The iterator is a plausible successor of one of the current iterator's successors. -/
| cons_right {it'' it' it : IterM (α := α) m β} (h' : it''.IsPlausibleIndirectSuccessorOf it')
(h : it'.IsPlausibleSuccessorOf it) : it''.IsPlausibleIndirectSuccessorOf it
@@ -611,10 +595,8 @@ number of steps.
-/
inductive Iter.IsPlausibleIndirectOutput {α β : Type w} [Iterator α Id β] :
Iter (α := α) β → β → Prop where
/-- The output value could plausibly be emitted in the next step. -/
| direct {it : Iter (α := α) β} {out : β} : it.IsPlausibleOutput out →
it.IsPlausibleIndirectOutput out
/-- The output value could plausibly be emitted in a step after the next step. -/
| indirect {it it' : Iter (α := α) β} {out : β} : it'.IsPlausibleSuccessorOf it →
it'.IsPlausibleIndirectOutput out → it.IsPlausibleIndirectOutput out
@@ -645,9 +627,7 @@ finitely many steps. This relation is reflexive.
-/
inductive Iter.IsPlausibleIndirectSuccessorOf {α : Type w} {β : Type w} [Iterator α Id β] :
Iter (α := α) β → Iter (α := α) β → Prop where
/-- Every iterator is a plausible indirect successor of itself. -/
| refl (it : Iter (α := α) β) : IsPlausibleIndirectSuccessorOf it it
/-- The iterator is a plausible indirect successor of one of the current iterator's successors. -/
| cons_right {it'' it' it : Iter (α := α) β} (h' : it''.IsPlausibleIndirectSuccessorOf it')
(h : it'.IsPlausibleSuccessorOf it) : it''.IsPlausibleIndirectSuccessorOf it
@@ -721,11 +701,6 @@ recursion over finite iterators. See also `IterM.finitelyManySteps` and `Iter.fi
-/
structure IterM.TerminationMeasures.Finite
(α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
/--
The wrapped iterator.
In the wrapper, its finiteness is used as a termination measure.
-/
it : IterM (α := α) m β
/--
@@ -852,11 +827,6 @@ recursion over productive iterators. See also `IterM.finitelyManySkips` and `Ite
-/
structure IterM.TerminationMeasures.Productive
(α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
/--
The wrapped iterator.
In the wrapper, its productivity is used as a termination measure.
-/
it : IterM (α := α) m β
/--
@@ -960,9 +930,6 @@ library.
-/
class LawfulDeterministicIterator (α : Type w) (m : Type w → Type w') [Iterator α m β]
where
/--
Every iterator with state `α` in monad `m` has exactly one plausible step.
-/
isPlausibleStep_eq_eq : ∀ it : IterM (α := α) m β, ∃ step, it.IsPlausibleStep = (· = step)
namespace Iterators
@@ -973,13 +940,14 @@ This structure provides a more convenient way to define `Finite α m` instances
-/
structure FinitenessRelation (α : Type w) (m : Type w → Type w') {β : Type w}
[Iterator α m β] where
/--
A well-founded relation such that if `it'` is a successor iterator of `it`, then `Rel it' it`.
/-
A well-founded relation such that if `it'` is a successor iterator of `it`, then
`Rel it' it`.
-/
Rel (it' it : IterM (α := α) m β) : Prop
/-- `Rel` is well-founded. -/
/- A proof that `Rel` is well-founded. -/
wf : WellFounded Rel
/-- If `it'` is a successor iterator of `it`, then `Rel it' it`. -/
/- A proof that if `it'` is a successor iterator of `it`, then `Rel it' it`. -/
subrelation : ∀ {it it'}, it'.IsPlausibleSuccessorOf it → Rel it' it
theorem Finite.of_finitenessRelation
@@ -999,13 +967,14 @@ This structure provides a more convenient way to define `Productive α m` instan
-/
structure ProductivenessRelation (α : Type w) (m : Type w → Type w') {β : Type w}
[Iterator α m β] where
/--
A well-founded relation such that if `it'` is obtained from `it` by skipping, then `Rel it' it`.
/-
A well-founded relation such that if `it'` is obtained from `it` by skipping, then
`Rel it' it`.
-/
Rel : (IterM (α := α) m β) → (IterM (α := α) m β) → Prop
/-- `Rel` is well-founded. -/
/- A proof that `Rel` is well-founded. -/
wf : WellFounded Rel
/-- If `it'` is obtained from `it` by skipping, then `Rel it' it`. -/
/- A proof that if `it'` is obtained from `it` by skipping, then `Rel it' it`. -/
subrelation : ∀ {it it'}, it'.IsPlausibleSkipSuccessorOf it → Rel it' it
theorem Productive.of_productivenessRelation

View File

@@ -9,8 +9,6 @@ prelude
public import Init.Data.Iterators.Consumers.Loop
public import Init.Data.Iterators.Consumers.Monadic.Access
set_option linter.missingDocs true
@[expose] public section
namespace Std

View File

@@ -8,8 +8,6 @@ module
prelude
public import Init.Data.Iterators.Basic
set_option linter.missingDocs true
public section
namespace Std
@@ -59,8 +57,8 @@ theorem IterM.not_isPlausibleNthOutputStep_yield {α β : Type w} {m : Type w
/--
`IteratorAccess α m` provides efficient implementations for random access or iterators that support
it. `it.nextAtIdx? n` either returns the step in which the `n`th value of `it` is emitted
(necessarily of the form `.yield _ _`) or `.done` if `it` terminates before emitting the `n`th
it. `it.nextAtIdx? n` either returns the step in which the `n`-th value of `it` is emitted
(necessarily of the form `.yield _ _`) or `.done` if `it` terminates before emitting the `n`-th
value.
For monadic iterators, the monadic effects of this operation may differ from manually iterating
@@ -70,11 +68,6 @@ is guaranteed to plausible in the sense of `IterM.IsPlausibleNthOutputStep`.
This class is experimental and users of the iterator API should not explicitly depend on it.
-/
class IteratorAccess (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
/--
`nextAtIdx? it n` either returns the step in which the `n`th value of `it` is emitted
(necessarily of the form `.yield _ _`) or `.done` if `it` terminates before emitting the `n`th
value.
-/
nextAtIdx? (it : IterM (α := α) m β) (n : Nat) :
m (PlausibleIterStep (it.IsPlausibleNthOutputStep n))

View File

@@ -11,8 +11,6 @@ public import Init.Data.Iterators.Consumers.Monadic.Total
public import Init.Data.Iterators.Internal.LawfulMonadLiftFunction
public import Init.WFExtrinsicFix
set_option linter.missingDocs true
@[expose] public section
/-!

View File

@@ -11,8 +11,6 @@ public import Init.Data.Iterators.Internal.LawfulMonadLiftFunction
public import Init.WFExtrinsicFix
public import Init.Data.Iterators.Consumers.Monadic.Total
set_option linter.missingDocs true
public section
/-!
@@ -72,9 +70,6 @@ provided by the standard library.
@[ext]
class IteratorLoop (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β]
(n : Type x → Type x') where
/--
Iteration over the iterator `it` in the manner expected by `for` loops.
-/
forIn : (_liftBind : ∀ (γ : Type w) (δ : Type x), (γ → n δ) → m γ → n δ) → (γ : Type x) →
(plausible_forInStep : β → γ → ForInStep γ → Prop) →
(it : IterM (α := α) m β) → γ →
@@ -87,9 +82,7 @@ end Typeclasses
structure IteratorLoop.WithWF (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β]
{γ : Type x} (PlausibleForInStep : β → γ → ForInStep γ → Prop)
(hwf : IteratorLoop.WellFounded α m PlausibleForInStep) where
/-- Internal implementation detail of the iterator library. -/
it : IterM (α := α) m β
/-- Internal implementation detail of the iterator library. -/
acc : γ
instance IteratorLoop.WithWF.instWellFoundedRelation
@@ -170,7 +163,6 @@ Asserts that a given `IteratorLoop` instance is equal to `IteratorLoop.defaultIm
-/
class LawfulIteratorLoop (α : Type w) (m : Type w → Type w') (n : Type x → Type x')
[Monad m] [Monad n] [Iterator α m β] [i : IteratorLoop α m n] where
/-- The implementation of `IteratorLoop.forIn` in `i` is equal to the default implementation. -/
lawful : ∀ lift [LawfulMonadLiftBindFunction lift] γ it init
(Pl : β → γ → ForInStep γ → Prop) (wf : IteratorLoop.WellFounded α m Pl)
(f : (b : β) → it.IsPlausibleIndirectOutput b → (c : γ) → n (Subtype (Pl b c))) :
@@ -227,7 +219,6 @@ instance IterM.instForInOfIteratorLoop {m : Type w → Type w'} {n : Type w →
haveI : ForIn' n (IterM (α := α) m β) β _ := IterM.instForIn'
instForInOfForIn'
/-- Internal implementation detail of the iterator library. -/
@[always_inline, inline]
def IterM.Partial.instForIn' {m : Type w → Type w'} {n : Type w → Type w''}
{α : Type w} {β : Type w} [Iterator α m β] [IteratorLoop α m n] [MonadLiftT m n] [Monad n] :
@@ -235,7 +226,6 @@ def IterM.Partial.instForIn' {m : Type w → Type w'} {n : Type w → Type w''}
forIn' it init f :=
haveI := @IterM.instForIn'; forIn' it.it init f
/-- Internal implementation detail of the iterator library. -/
@[always_inline, inline]
def IterM.Total.instForIn' {m : Type w → Type w'} {n : Type w → Type w''}
{α : Type w} {β : Type w} [Iterator α m β] [IteratorLoop α m n] [MonadLiftT m n] [Monad n]

View File

@@ -8,8 +8,6 @@ module
prelude
public import Init.Data.Iterators.Basic
set_option linter.missingDocs true
public section
namespace Std
@@ -18,9 +16,6 @@ namespace Std
A wrapper around an iterator that provides partial consumers. See `IterM.allowNontermination`.
-/
structure IterM.Partial {α : Type w} (m : Type w → Type w') (β : Type w) where
/--
The wrapped iterator, which was wrapped by `IterM.allowNontermination`.
-/
it : IterM (α := α) m β
/--

View File

@@ -9,19 +9,12 @@ prelude
public import Init.Data.Iterators.Basic
set_option doc.verso true
set_option linter.missingDocs true
public section
namespace Std
/--
A wrapper around an iterator that provides total consumers. See `IterM.ensureTermination`.
-/
structure IterM.Total {α : Type w} (m : Type w → Type w') (β : Type w) where
/--
The wrapped iterator, which was wrapped by `IterM.ensureTermination`.
-/
it : IterM (α := α) m β
/--

View File

@@ -8,8 +8,6 @@ module
prelude
public import Init.Data.Iterators.Basic
set_option linter.missingDocs true
public section
namespace Std
@@ -18,9 +16,6 @@ namespace Std
A wrapper around an iterator that provides partial consumers. See `Iter.allowNontermination`.
-/
structure Iter.Partial {α : Type w} (β : Type w) where
/--
The wrapped iterator, which was wrapped by `Iter.allowNontermination`.
-/
it : Iter (α := α) β
/--

View File

@@ -9,8 +9,6 @@ prelude
public import Init.Data.Stream
public import Init.Data.Iterators.Consumers.Access
set_option linter.missingDocs true
public section
namespace Std

View File

@@ -9,19 +9,12 @@ prelude
public import Init.Data.Iterators.Basic
set_option doc.verso true
set_option linter.missingDocs true
public section
namespace Std
/--
A wrapper around an iterator that provides total consumers. See `Iter.ensureTermination`.
-/
structure Iter.Total {α : Type w} (β : Type w) where
/--
The wrapped iterator, which was wrapped by `Iter.ensureTermination`.
-/
it : Iter (α := α) β
/--

View File

@@ -9,4 +9,3 @@ prelude
public import Init.Data.Iterators.Lemmas.Consumers.Monadic
public import Init.Data.Iterators.Lemmas.Consumers.Collect
public import Init.Data.Iterators.Lemmas.Consumers.Loop
public import Init.Data.Iterators.Lemmas.Consumers.Access

View File

@@ -1,26 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Consumers.Access
namespace Std.Iter
open Std.Iterators
public theorem atIdxSlow?_eq_match [Iterator α Id β] [Productive α Id]
{n : Nat} {it : Iter (α := α) β} :
it.atIdxSlow? n =
(match it.step.val with
| .yield it' out =>
match n with
| 0 => some out
| n + 1 => it'.atIdxSlow? n
| .skip it' => it'.atIdxSlow? n
| .done => none) := by
fun_induction it.atIdxSlow? n <;> simp_all
end Std.Iter

View File

@@ -72,7 +72,7 @@ def PostconditionT.liftWithProperty {α : Type w} {m : Type w → Type w'} {P :
⟨P, x⟩
/--
Given a function `f : α → β`, returns a function `PostconditionT m α → PostconditionT m β`,
Given a function `f : α → β`, returns a a function `PostconditionT m α → PostconditionT m β`,
turning `PostconditionT m` into a functor.
The postcondition of the `x.map f` states that the return value is the image under `f` of some
@@ -85,7 +85,7 @@ protected def PostconditionT.map {m : Type w → Type w'} [Functor m] {α : Type
(fun a => ⟨f a.val, _, rfl⟩) <$> x.operation
/--
Given a function `α → PostconditionT m β`, returns a function
Given a function `α → PostconditionT m β`, returns a a function
`PostconditionT m α → PostconditionT m β`, turning `PostconditionT m` into a monad.
-/
@[always_inline, inline, expose]
@@ -287,12 +287,6 @@ theorem PostconditionT.run_attachLift {m : Type w → Type w'} [Monad m] [MonadA
{x : m α} : (attachLift x).run = x := by
simp [attachLift, run_eq_map, WeaklyLawfulMonadAttach.map_attach]
@[simp]
theorem PostconditionT.operation_attachLift {m : Type w → Type w'} [Monad m] [MonadAttach m]
{α : Type w} {x : m α} : (attachLift x : PostconditionT m α).operation =
MonadAttach.attach x := by
rfl
instance {m : Type w → Type w'} {n : Type w → Type w''} [MonadLift m n] :
MonadLift (PostconditionT m) (PostconditionT n) where
monadLift x := ⟨_, monadLift x.operation⟩

View File

@@ -11,7 +11,7 @@ public import Init.Core
public section
/--
The `BEq α` and `Hashable α` instances on `α` are compatible. This means that `a == b` implies
The `BEq α` and `Hashable α` instances on `α` are compatible. This means that that `a == b` implies
`hash a = hash b`.
This is automatic if the `BEq` instance is lawful.

View File

@@ -169,10 +169,10 @@ Examples:
| a::as, b::bs, eqv => eqv a b && isEqv as bs eqv
| _, _, _ => false
@[simp, grind =] theorem isEqv_nil_nil : isEqv ([] : List α) [] eqv = true := rfl
@[simp, grind =] theorem isEqv_nil_cons : isEqv ([] : List α) (a::as) eqv = false := rfl
@[simp, grind =] theorem isEqv_cons_nil : isEqv (a::as : List α) [] eqv = false := rfl
@[grind =] theorem isEqv_cons₂ : isEqv (a::as) (b::bs) eqv = (eqv a b && isEqv as bs eqv) := rfl
@[simp] theorem isEqv_nil_nil : isEqv ([] : List α) [] eqv = true := rfl
@[simp] theorem isEqv_nil_cons : isEqv ([] : List α) (a::as) eqv = false := rfl
@[simp] theorem isEqv_cons_nil : isEqv (a::as : List α) [] eqv = false := rfl
theorem isEqv_cons₂ : isEqv (a::as) (b::bs) eqv = (eqv a b && isEqv as bs eqv) := rfl
/-! ## Lexicographic ordering -/
@@ -717,7 +717,6 @@ Examples:
* `["red", "green", "blue"].leftpad 3 "blank" = ["red", "green", "blue"]`
* `["red", "green", "blue"].leftpad 1 "blank" = ["red", "green", "blue"]`
-/
@[simp, grind =]
def leftpad (n : Nat) (a : α) (l : List α) : List α := replicate (n - length l) a ++ l
@@ -731,7 +730,6 @@ Examples:
* `["red", "green", "blue"].rightpad 3 "blank" = ["red", "green", "blue"]`
* `["red", "green", "blue"].rightpad 1 "blank" = ["red", "green", "blue"]`
-/
@[simp, grind =]
def rightpad (n : Nat) (a : α) (l : List α) : List α := l ++ replicate (n - length l) a
/-! ### reduceOption -/

View File

@@ -50,7 +50,7 @@ Users that want to use `mapM` with `Applicative` should use `mapA` instead.
Applies the monadic action `f` to every element in the list, left-to-right, and returns the list of
results.
This implementation is tail recursive. `List.mapM'` is a non-tail-recursive variant that may be
This implementation is tail recursive. `List.mapM'` is a a non-tail-recursive variant that may be
more convenient to reason about. `List.forM` is the variant that discards the results and
`List.mapA` is the variant that works with `Applicative`.
-/
@@ -107,7 +107,7 @@ Applies the monadic action `f` to the corresponding elements of two lists, left-
at the end of the shorter list. `zipWithM f as bs` is equivalent to `mapM id (zipWith f as bs)`
for lawful `Monad` instances.
This implementation is tail recursive. `List.zipWithM'` is a non-tail-recursive variant that may
This implementation is tail recursive. `List.zipWithM'` is a a non-tail-recursive variant that may
be more convenient to reason about.
-/
@[inline, expose]

View File

@@ -2941,6 +2941,9 @@ theorem getLast?_replicate {a : α} {n : Nat} : (replicate n a).getLast? = if n
/-! ### leftpad -/
-- We unfold `leftpad` and `rightpad` for verification purposes.
attribute [simp, grind =] leftpad rightpad
-- `length_leftpad` and `length_rightpad` are in `Init.Data.List.Nat.Basic`.
theorem leftpad_prefix {n : Nat} {a : α} {l : List α} :

View File

@@ -223,16 +223,6 @@ theorem testBit_lt_two_pow {x i : Nat} (lt : x < 2^i) : x.testBit i = false := b
exfalso
exact Nat.not_le_of_gt lt (ge_two_pow_of_testBit p)
theorem testBit_of_two_pow_le_and_two_pow_add_one_gt {n i : Nat}
(hle : 2^i ≤ n) (hgt : n < 2^(i + 1)) : n.testBit i = true := by
rcases exists_ge_and_testBit_of_ge_two_pow hle with ⟨i', _, _⟩
have : i = i' := by
false_or_by_contra
have : 2 ^ (i + 1) ≤ 2 ^ i' := Nat.pow_le_pow_of_le (by decide) (by omega)
have : n.testBit i' = false := testBit_lt_two_pow (by omega)
simp_all only [Bool.false_eq_true]
rwa [this]
theorem lt_pow_two_of_testBit (x : Nat) (p : ∀ i, i ≥ n → testBit x i = false) : x < 2^n := by
apply Decidable.by_contra
intro not_lt
@@ -241,10 +231,6 @@ theorem lt_pow_two_of_testBit (x : Nat) (p : ∀i, i ≥ n → testBit x i = fal
have test_false := p _ i_ge_n
simp [test_true] at test_false
theorem testBit_log2 {n : Nat} (h : n ≠ 0) : n.testBit n.log2 = true := by
have := log2_eq_iff (n := n) (k := n.log2) (by omega)
apply testBit_of_two_pow_le_and_two_pow_add_one_gt <;> omega
private theorem succ_mod_two : succ x % 2 = 1 - x % 2 := by
induction x with
| zero =>

View File

@@ -129,9 +129,6 @@ theorem gcd_assoc (m n k : Nat) : gcd (gcd m n) k = gcd m (gcd n k) :=
(Nat.dvd_trans (gcd_dvd_right m (gcd n k)) (gcd_dvd_right n k)))
instance : Std.Associative gcd := ⟨gcd_assoc⟩
theorem gcd_left_comm (m n k : Nat) : gcd m (gcd n k) = gcd n (gcd m k) := by
rw [← gcd_assoc, ← gcd_assoc, gcd_comm m n]
@[simp] theorem gcd_one_right (n : Nat) : gcd n 1 = 1 := (gcd_comm n 1).trans (gcd_one_left n)
theorem gcd_mul_left (m n k : Nat) : gcd (m * n) (m * k) = m * gcd n k := by

View File

@@ -10,7 +10,7 @@ import all Init.Data.Nat.Bitwise.Basic
public import Init.Data.Nat.MinMax
public import Init.Data.Nat.Log2
import all Init.Data.Nat.Log2
public import Init.Data.Nat.Power2.Basic
public import Init.Data.Nat.Power2
public import Init.Data.Nat.Mod
import Init.TacticsExtra
import Init.BinderPredicates

View File

@@ -6,5 +6,66 @@ Authors: Leonardo de Moura
module
prelude
public import Init.Data.Nat.Power2.Basic
public import Init.Data.Nat.Power2.Lemmas
public import Init.Data.Nat.Linear
public section
namespace Nat
theorem nextPowerOfTwo_dec {n power : Nat} (h₁ : power > 0) (h₂ : power < n) : n - power * 2 < n - power := by
have : power * 2 = power + power := by simp +arith
rw [this, Nat.sub_add_eq]
exact Nat.sub_lt (Nat.zero_lt_sub_of_lt h₂) h₁
/--
Returns the least power of two that's greater than or equal to `n`.
Examples:
* `Nat.nextPowerOfTwo 0 = 1`
* `Nat.nextPowerOfTwo 1 = 1`
* `Nat.nextPowerOfTwo 2 = 2`
* `Nat.nextPowerOfTwo 3 = 4`
* `Nat.nextPowerOfTwo 5 = 8`
-/
def nextPowerOfTwo (n : Nat) : Nat :=
go 1 (by decide)
where
go (power : Nat) (h : power > 0) : Nat :=
if power < n then
go (power * 2) (Nat.mul_pos h (by decide))
else
power
termination_by n - power
decreasing_by simp_wf; apply nextPowerOfTwo_dec <;> assumption
/--
A natural number `n` is a power of two if there exists some `k : Nat` such that `n = 2 ^ k`.
-/
def isPowerOfTwo (n : Nat) := ∃ k, n = 2 ^ k
theorem isPowerOfTwo_one : isPowerOfTwo 1 :=
⟨0, by decide⟩
theorem isPowerOfTwo_mul_two_of_isPowerOfTwo (h : isPowerOfTwo n) : isPowerOfTwo (n * 2) :=
have ⟨k, h⟩ := h
⟨k+1, by simp [h, Nat.pow_succ]⟩
theorem pos_of_isPowerOfTwo (h : isPowerOfTwo n) : n > 0 := by
have ⟨k, h⟩ := h
rw [h]
apply Nat.pow_pos
decide
theorem isPowerOfTwo_nextPowerOfTwo (n : Nat) : n.nextPowerOfTwo.isPowerOfTwo := by
apply isPowerOfTwo_go
apply isPowerOfTwo_one
where
isPowerOfTwo_go (power : Nat) (h₁ : power > 0) (h₂ : power.isPowerOfTwo) : (nextPowerOfTwo.go n power h₁).isPowerOfTwo := by
unfold nextPowerOfTwo.go
split
. exact isPowerOfTwo_go (power*2) (Nat.mul_pos h₁ (by decide)) (Nat.isPowerOfTwo_mul_two_of_isPowerOfTwo h₂)
. assumption
termination_by n - power
decreasing_by simp_wf; apply nextPowerOfTwo_dec <;> assumption
end Nat
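The recursion in `nextPowerOfTwo.go` doubles `power` until it reaches `n`; the following sketch (values taken from the docstring, not part of the commit) shows its behavior:

```lean
#eval Nat.nextPowerOfTwo 3  -- 4
#eval Nat.nextPowerOfTwo 5  -- 8
#eval Nat.nextPowerOfTwo 1  -- 1
```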

View File

@@ -1,71 +0,0 @@
/-
Copyright (c) 2022 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Data.Nat.Linear
public section
namespace Nat
theorem nextPowerOfTwo_dec {n power : Nat} (h₁ : power > 0) (h₂ : power < n) : n - power * 2 < n - power := by
have : power * 2 = power + power := by simp +arith
rw [this, Nat.sub_add_eq]
exact Nat.sub_lt (Nat.zero_lt_sub_of_lt h₂) h₁
/--
Returns the least power of two that's greater than or equal to `n`.
Examples:
* `Nat.nextPowerOfTwo 0 = 1`
* `Nat.nextPowerOfTwo 1 = 1`
* `Nat.nextPowerOfTwo 2 = 2`
* `Nat.nextPowerOfTwo 3 = 4`
* `Nat.nextPowerOfTwo 5 = 8`
-/
def nextPowerOfTwo (n : Nat) : Nat :=
go 1 (by decide)
where
go (power : Nat) (h : power > 0) : Nat :=
if power < n then
go (power * 2) (Nat.mul_pos h (by decide))
else
power
termination_by n - power
decreasing_by simp_wf; apply nextPowerOfTwo_dec <;> assumption
/--
A natural number `n` is a power of two if there exists some `k : Nat` such that `n = 2 ^ k`.
-/
def isPowerOfTwo (n : Nat) := ∃ k, n = 2 ^ k
theorem isPowerOfTwo_one : isPowerOfTwo 1 :=
⟨0, by decide⟩
theorem isPowerOfTwo_mul_two_of_isPowerOfTwo (h : isPowerOfTwo n) : isPowerOfTwo (n * 2) :=
have ⟨k, h⟩ := h
⟨k+1, by simp [h, Nat.pow_succ]⟩
theorem pos_of_isPowerOfTwo (h : isPowerOfTwo n) : n > 0 := by
have ⟨k, h⟩ := h
rw [h]
apply Nat.pow_pos
decide
theorem isPowerOfTwo_nextPowerOfTwo (n : Nat) : n.nextPowerOfTwo.isPowerOfTwo := by
apply isPowerOfTwo_go
apply isPowerOfTwo_one
where
isPowerOfTwo_go (power : Nat) (h₁ : power > 0) (h₂ : power.isPowerOfTwo) : (nextPowerOfTwo.go n power h₁).isPowerOfTwo := by
unfold nextPowerOfTwo.go
split
. exact isPowerOfTwo_go (power*2) (Nat.mul_pos h₁ (by decide)) (Nat.isPowerOfTwo_mul_two_of_isPowerOfTwo h₂)
. assumption
termination_by n - power
decreasing_by simp_wf; apply nextPowerOfTwo_dec <;> assumption
end Nat

View File

@@ -1,62 +0,0 @@
/-
Copyright (c) George Rennie. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: George Rennie
-/
module
prelude
import all Init.Data.Nat.Power2.Basic
public import Init.Data.Nat.Bitwise.Lemmas
public section
/-!
# Further lemmas about `Nat.isPowerOfTwo`, with the convenience of having bitwise lemmas available.
-/
namespace Nat
theorem not_isPowerOfTwo_zero : ¬isPowerOfTwo 0 := by
rw [isPowerOfTwo, not_exists]
intro x
have := one_le_pow x 2 (by decide)
omega
theorem and_sub_one_testBit_log2 {n : Nat} (h : n ≠ 0) (hpow2 : ¬n.isPowerOfTwo) :
(n &&& (n - 1)).testBit n.log2 := by
rw [testBit_and, Bool.and_eq_true]
constructor
· exact testBit_log2 (by omega)
· by_cases n = 2^n.log2
· rw [isPowerOfTwo, not_exists] at hpow2
have := hpow2 n.log2
trivial
· have := log2_eq_iff (n := n) (k := n.log2) (by omega)
have : (n - 1).log2 = n.log2 := by rw [log2_eq_iff] <;> omega
rw [this]
exact testBit_log2 (by omega)
theorem and_sub_one_eq_zero_iff_isPowerOfTwo {n : Nat} (h : n ≠ 0) :
(n &&& (n - 1)) = 0 ↔ n.isPowerOfTwo := by
constructor
· intro hbitwise
false_or_by_contra
rename_i hpow2
have := and_sub_one_testBit_log2 h hpow2
rwa [hbitwise, zero_testBit n.log2, Bool.false_eq_true] at this
· intro hpow2
rcases hpow2 with ⟨_, hpow2⟩
rw [hpow2, and_two_pow_sub_one_eq_mod, mod_self]
theorem ne_zero_and_sub_one_eq_zero_iff_isPowerOfTwo {n : Nat} :
((n ≠ 0) ∧ (n &&& (n - 1)) = 0) ↔ n.isPowerOfTwo := by
match h : n with
| 0 => simp [not_isPowerOfTwo_zero]
| n + 1 => simp; exact and_sub_one_eq_zero_iff_isPowerOfTwo (by omega)
@[inline]
instance {n : Nat} : Decidable n.isPowerOfTwo :=
decidable_of_iff _ ne_zero_and_sub_one_eq_zero_iff_isPowerOfTwo
end Nat
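With the `Decidable` instance above, power-of-two checks reduce to the `n &&& (n - 1) == 0` bit trick this file verifies; a minimal sketch (not part of the commit, assuming this file's definitions are in scope):

```lean
-- `decide` evaluates the bitwise characterization directly.
example : (64 : Nat).isPowerOfTwo := by decide
example : ¬ (12 : Nat).isPowerOfTwo := by decide
```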

View File

@@ -15,4 +15,3 @@ public import Init.Data.Option.Attach
public import Init.Data.Option.List
public import Init.Data.Option.Monadic
public import Init.Data.Option.Array
public import Init.Data.Option.Function

View File

@@ -1,26 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
module
prelude
public import Init.Data.Function
import Init.Data.Option.Lemmas
public section
namespace Option
theorem map_injective {f : α → β} (hf : Function.Injective f) :
Function.Injective (Option.map f) := by
intros a b hab
cases a <;> cases b
· simp
· simp at hab
· simp at hab
· simp only [map_some, some.injEq] at hab
simpa using hf hab
end Option

View File

@@ -307,20 +307,12 @@ theorem map_id' {x : Option α} : (x.map fun a => a) = x := congrFun map_id x
theorem map_id_apply' {α : Type u} {x : Option α} : Option.map (fun (a : α) => a) x = x := by simp
/-- See `Option.apply_get` for a version that can be rewritten in the reverse direction. -/
@[simp, grind =] theorem get_map {f : α → β} {o : Option α} {h : (o.map f).isSome} :
(o.map f).get h = f (o.get (by simpa using h)) := by
cases o with
| none => simp at h
| some a => simp
/-- See `Option.get_map` for a version that can be rewritten in the reverse direction. -/
theorem apply_get {f : α → β} {o : Option α} {h} :
f (o.get h) = (o.map f).get (by simp [h]) := by
cases o
· simp at h
· simp
@[simp] theorem map_map (h : β → γ) (g : α → β) (x : Option α) :
(x.map g).map h = x.map (h ∘ g) := by
cases x <;> simp only [map_none, map_some, ··]
@@ -740,11 +732,6 @@ theorem get_merge {o o' : Option α} {f : α → α → α} {i : α} [Std.Lawful
theorem elim_guard : (guard p a).elim b f = if p a then f a else b := by
cases h : p a <;> simp [*, guard]
@[simp]
theorem Option.elim_map {f : α → β} {g' : γ} {g : β → γ} (o : Option α) :
(o.map f).elim g' g = o.elim g' (g ∘ f) := by
cases o <;> simp
-- I don't see how to construct a good grind pattern to instantiate this.
@[simp] theorem getD_map (f : α → β) (x : α) (o : Option α) :
(o.map f).getD (f x) = f (getD o x) := by cases o <;> rfl

View File

@@ -46,7 +46,7 @@ theorem ne_of_cmp_ne_eq {α : Type u} {cmp : α → α → Ordering} [Std.ReflCm
end ReflCmp
/-- A typeclass for ordered types for which `compare a a = .eq` for all `a`. -/
/-- A typeclasses for ordered types for which `compare a a = .eq` for all `a`. -/
abbrev ReflOrd (α : Type u) [Ord α] := ReflCmp (compare : α → α → Ordering)
@[simp]

View File

@@ -10,10 +10,7 @@ public import Init.Data.Range.Polymorphic.Basic
public import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Range.Polymorphic.Stream
public import Init.Data.Range.Polymorphic.Lemmas
public import Init.Data.Range.Polymorphic.Map
public import Init.Data.Range.Polymorphic.Fin
public import Init.Data.Range.Polymorphic.Char
public import Init.Data.Range.Polymorphic.Nat
public import Init.Data.Range.Polymorphic.Int
public import Init.Data.Range.Polymorphic.BitVec

View File

@@ -1,79 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Markus Himmel
-/
module
prelude
public import Init.Data.Char.Ordinal
public import Init.Data.Range.Polymorphic.Fin
import Init.Data.Range.Polymorphic.Lemmas
import Init.Data.Range.Polymorphic.Map
import Init.Data.Char.Order
open Std Std.PRange Std.PRange.UpwardEnumerable
namespace Char
public instance : UpwardEnumerable Char where
  succ? := Char.succ?
  succMany? := Char.succMany?
@[simp]
public theorem pRangeSucc?_eq : PRange.succ? (α := Char) = Char.succ? := rfl
@[simp]
public theorem pRangeSuccMany?_eq : PRange.succMany? (α := Char) = Char.succMany? := rfl
public instance : Rxc.HasSize Char where
size lo hi := Rxc.HasSize.size lo.ordinal hi.ordinal
public instance : Rxo.HasSize Char where
size lo hi := Rxo.HasSize.size lo.ordinal hi.ordinal
public instance : Rxi.HasSize Char where
size hi := Rxi.HasSize.size hi.ordinal
public instance : Least? Char where
least? := some '\x00'
@[simp]
public theorem least?_eq : Least?.least? (α := Char) = some '\x00' := rfl
def map : Map Char (Fin Char.numCodePoints) where
toFun := Char.ordinal
injective := ordinal_injective
succ?_toFun := by simp [succ?_eq]
succMany?_toFun := by simp [succMany?_eq]
@[simp]
theorem toFun_map : map.toFun = Char.ordinal := rfl
instance : Map.PreservesLE map where
le_iff := by simp [le_iff_ordinal_le]
instance : Map.PreservesRxcSize map where
size_eq := rfl
instance : Map.PreservesRxoSize map where
size_eq := rfl
instance : Map.PreservesRxiSize map where
size_eq := rfl
instance : Map.PreservesLeast? map where
map_least? := by simp
public instance : LawfulUpwardEnumerable Char := .ofMap map
public instance : LawfulUpwardEnumerableLE Char := .ofMap map
public instance : LawfulUpwardEnumerableLT Char := .ofMap map
public instance : LawfulUpwardEnumerableLeast? Char := .ofMap map
public instance : Rxc.LawfulHasSize Char := .ofMap map
public instance : Rxc.IsAlwaysFinite Char := .ofMap map
public instance : Rxo.LawfulHasSize Char := .ofMap map
public instance : Rxo.IsAlwaysFinite Char := .ofMap map
public instance : Rxi.LawfulHasSize Char := .ofMap map
public instance : Rxi.IsAlwaysFinite Char := .ofMap map
end Char

View File

@@ -1,92 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
module
prelude
public import Init.Data.Range.Polymorphic.Instances
public import Init.Data.Fin.OverflowAware
import Init.Grind
public section
open Std Std.PRange
namespace Fin
instance : UpwardEnumerable (Fin n) where
succ? i := i.addNat? 1
succMany? m i := i.addNat? m
@[simp, grind =]
theorem pRangeSucc?_eq : PRange.succ? (α := Fin n) = (·.addNat? 1) := rfl
@[simp, grind =]
theorem pRangeSuccMany?_eq : PRange.succMany? m (α := Fin n) = (·.addNat? m) :=
rfl
instance : LawfulUpwardEnumerable (Fin n) where
ne_of_lt a b := by grind [UpwardEnumerable.LT]
succMany?_zero a := by simp
succMany?_add_one m a := by grind
instance : LawfulUpwardEnumerableLE (Fin n) where
le_iff x y := by
simp only [le_def, UpwardEnumerable.LE, pRangeSuccMany?_eq, Fin.addNat?_eq_dif,
Option.dite_none_right_eq_some, Option.some.injEq, val_inj, exists_prop]
exact fun h => ⟨y - x, by grind, by grind⟩
instance : Least? (Fin 0) where
least? := none
instance : LawfulUpwardEnumerableLeast? (Fin 0) where
least?_le a := False.elim (Nat.not_lt_zero _ a.isLt)
@[simp]
theorem least?_eq_of_zero : Least?.least? (α := Fin 0) = none := rfl
instance [NeZero n] : Least? (Fin n) where
least? := some 0
instance [NeZero n] : LawfulUpwardEnumerableLeast? (Fin n) where
least?_le a := ⟨0, rfl, (LawfulUpwardEnumerableLE.le_iff 0 a).1 (Fin.zero_le _)⟩
@[simp]
theorem least?_eq [NeZero n] : Least?.least? (α := Fin n) = some 0 := rfl
instance : LawfulUpwardEnumerableLT (Fin n) := inferInstance
instance : Rxc.HasSize (Fin n) where
size lo hi := hi + 1 - lo
@[grind =]
theorem rxcHasSize_eq :
Rxc.HasSize.size (α := Fin n) = fun (lo hi : Fin n) => (hi + 1 - lo : Nat) := rfl
instance : Rxc.LawfulHasSize (Fin n) where
size_eq_zero_of_not_le bound x := by grind
size_eq_one_of_succ?_eq_none lo hi := by grind
size_eq_succ_of_succ?_eq_some lo hi x := by grind
instance : Rxc.IsAlwaysFinite (Fin n) := inferInstance
instance : Rxo.HasSize (Fin n) := .ofClosed
instance : Rxo.LawfulHasSize (Fin n) := inferInstance
instance : Rxo.IsAlwaysFinite (Fin n) := inferInstance
instance : Rxi.HasSize (Fin n) where
size lo := n - lo
@[grind =]
theorem rxiHasSize_eq :
Rxi.HasSize.size (α := Fin n) = fun (lo : Fin n) => (n - lo : Nat) := rfl
instance : Rxi.LawfulHasSize (Fin n) where
size_eq_one_of_succ?_eq_none x := by grind
size_eq_succ_of_succ?_eq_some lo lo' := by grind
instance : Rxi.IsAlwaysFinite (Fin n) := inferInstance
end Fin

View File

@@ -1,195 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
module
prelude
public import Init.Data.Range.Polymorphic.Instances
public import Init.Data.Function
import Init.Data.Order.Lemmas
import Init.Data.Option.Function
public section
/-!
# Mappings between `UpwardEnumerable` types
In this file we build machinery for pulling back lawfulness properties for `UpwardEnumerable` along
injective functions that commute with the relevant operations.
-/
namespace Std
namespace PRange
namespace UpwardEnumerable
/--
An injective mapping between two types implementing `UpwardEnumerable` that commutes with `succ?`
and `succMany?`.
Having such a mapping means that all of the `Prop`-valued lawfulness classes around
`UpwardEnumerable` can be pulled back.
-/
structure Map (α : Type u) (β : Type v) [UpwardEnumerable α] [UpwardEnumerable β] where
toFun : α → β
injective : Function.Injective toFun
succ?_toFun (a : α) : succ? (toFun a) = (succ? a).map toFun
succMany?_toFun (n : Nat) (a : α) : succMany? n (toFun a) = (succMany? n a).map toFun
namespace Map
variable [UpwardEnumerable α] [UpwardEnumerable β]
theorem succ?_eq_none_iff (f : Map α β) {a : α} :
succ? a = none ↔ succ? (f.toFun a) = none := by
rw [← (Option.map_injective f.injective).eq_iff, Option.map_none, f.succ?_toFun]
theorem succ?_eq_some_iff (f : Map α β) {a b : α} :
succ? a = some b ↔ succ? (f.toFun a) = some (f.toFun b) := by
rw [← (Option.map_injective f.injective).eq_iff, Option.map_some, f.succ?_toFun]
theorem le_iff (f : Map α β) {a b : α} :
UpwardEnumerable.LE a b ↔ UpwardEnumerable.LE (f.toFun a) (f.toFun b) := by
simp only [UpwardEnumerable.LE, f.succMany?_toFun, Option.map_eq_some_iff]
refine ⟨fun ⟨n, hn⟩ => ⟨n, b, by simp [hn]⟩, fun ⟨n, c, hn⟩ => ⟨n, ?_⟩⟩
rw [hn.1, Option.some_inj, f.injective hn.2]
theorem lt_iff (f : Map α β) {a b : α} :
UpwardEnumerable.LT a b ↔ UpwardEnumerable.LT (f.toFun a) (f.toFun b) := by
simp only [UpwardEnumerable.LT, f.succMany?_toFun, Option.map_eq_some_iff]
refine ⟨fun ⟨n, hn⟩ => ⟨n, b, by simp [hn]⟩, fun ⟨n, c, hn⟩ => ⟨n, ?_⟩⟩
rw [hn.1, Option.some_inj, f.injective hn.2]
theorem succ?_toFun' (f : Map α β) : succ? ∘ f.toFun = Option.map f.toFun ∘ succ? := by
ext
simp [f.succ?_toFun]
/-- Compatibility class for `Map` and `≤`. -/
class PreservesLE [LE α] [LE β] (f : Map α β) where
le_iff : a ≤ b ↔ f.toFun a ≤ f.toFun b
/-- Compatibility class for `Map` and `<`. -/
class PreservesLT [LT α] [LT β] (f : Map α β) where
lt_iff : a < b ↔ f.toFun a < f.toFun b
/-- Compatibility class for `Map` and `Rxc.HasSize`. -/
class PreservesRxcSize [Rxc.HasSize α] [Rxc.HasSize β] (f : Map α β) where
size_eq : Rxc.HasSize.size a b = Rxc.HasSize.size (f.toFun a) (f.toFun b)
/-- Compatibility class for `Map` and `Rxo.HasSize`. -/
class PreservesRxoSize [Rxo.HasSize α] [Rxo.HasSize β] (f : Map α β) where
size_eq : Rxo.HasSize.size a b = Rxo.HasSize.size (f.toFun a) (f.toFun b)
/-- Compatibility class for `Map` and `Rxi.HasSize`. -/
class PreservesRxiSize [Rxi.HasSize α] [Rxi.HasSize β] (f : Map α β) where
size_eq : Rxi.HasSize.size b = Rxi.HasSize.size (f.toFun b)
/-- Compatibility class for `Map` and `Least?`. -/
class PreservesLeast? [Least? α] [Least? β] (f : Map α β) where
map_least? : Least?.least?.map f.toFun = Least?.least?
end UpwardEnumerable.Map
open UpwardEnumerable
variable [UpwardEnumerable α] [UpwardEnumerable β]
theorem LawfulUpwardEnumerable.ofMap [LawfulUpwardEnumerable β] (f : Map α β) :
LawfulUpwardEnumerable α where
ne_of_lt a b := by
simpa only [f.lt_iff, f.injective.ne_iff] using LawfulUpwardEnumerable.ne_of_lt _ _
succMany?_zero a := by
apply Option.map_injective f.injective
simpa [← f.succMany?_toFun] using LawfulUpwardEnumerable.succMany?_zero _
succMany?_add_one n a := by
apply Option.map_injective f.injective
rw [← f.succMany?_toFun, LawfulUpwardEnumerable.succMany?_add_one,
f.succMany?_toFun, Option.bind_map, Map.succ?_toFun', Option.map_bind]
instance [LE α] [LT α] [LawfulOrderLT α] [LE β] [LT β] [LawfulOrderLT β] (f : Map α β)
[f.PreservesLE] : f.PreservesLT where
lt_iff := by simp [lt_iff_le_and_not_ge, Map.PreservesLE.le_iff (f := f)]
theorem LawfulUpwardEnumerableLE.ofMap [LE α] [LE β] [LawfulUpwardEnumerableLE β] (f : Map α β)
[f.PreservesLE] : LawfulUpwardEnumerableLE α where
le_iff := by simp [Map.PreservesLE.le_iff (f := f), f.le_iff, LawfulUpwardEnumerableLE.le_iff]
theorem LawfulUpwardEnumerableLT.ofMap [LT α] [LT β] [LawfulUpwardEnumerableLT β] (f : Map α β)
[f.PreservesLT] : LawfulUpwardEnumerableLT α where
lt_iff := by simp [Map.PreservesLT.lt_iff (f := f), f.lt_iff, LawfulUpwardEnumerableLT.lt_iff]
theorem LawfulUpwardEnumerableLeast?.ofMap [Least? α] [Least? β] [LawfulUpwardEnumerableLeast? β]
(f : Map α β) [f.PreservesLeast?] : LawfulUpwardEnumerableLeast? α where
least?_le a := by
obtain ⟨l, hl, hl'⟩ := LawfulUpwardEnumerableLeast?.least?_le (f.toFun a)
have : (Least?.least? (α := α)).isSome := by
rw [← Option.isSome_map (f := f.toFun), Map.PreservesLeast?.map_least?,
hl, Option.isSome_some]
refine ⟨Option.get _ this, by simp, ?_⟩
rw [f.le_iff, Option.apply_get (f := f.toFun)]
simpa [Map.PreservesLeast?.map_least?, hl] using hl'
end PRange
open PRange PRange.UpwardEnumerable
variable [UpwardEnumerable α] [UpwardEnumerable β]
theorem Rxc.LawfulHasSize.ofMap [LE α] [LE β] [Rxc.HasSize α] [Rxc.HasSize β] [Rxc.LawfulHasSize β]
(f : Map α β) [f.PreservesLE] [f.PreservesRxcSize] : Rxc.LawfulHasSize α where
size_eq_zero_of_not_le a b := by
simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f)] using
Rxc.LawfulHasSize.size_eq_zero_of_not_le _ _
size_eq_one_of_succ?_eq_none lo hi := by
simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f),
f.succ?_eq_none_iff] using
Rxc.LawfulHasSize.size_eq_one_of_succ?_eq_none _ _
size_eq_succ_of_succ?_eq_some lo hi lo' := by
simpa [Map.PreservesRxcSize.size_eq (f := f), Map.PreservesLE.le_iff (f := f),
f.succ?_eq_some_iff] using
Rxc.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _ _
theorem Rxo.LawfulHasSize.ofMap [LT α] [LT β] [Rxo.HasSize α] [Rxo.HasSize β] [Rxo.LawfulHasSize β]
(f : Map α β) [f.PreservesLT] [f.PreservesRxoSize] : Rxo.LawfulHasSize α where
size_eq_zero_of_not_le a b := by
simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f)] using
Rxo.LawfulHasSize.size_eq_zero_of_not_le _ _
size_eq_one_of_succ?_eq_none lo hi := by
simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f),
f.succ?_eq_none_iff] using
Rxo.LawfulHasSize.size_eq_one_of_succ?_eq_none _ _
size_eq_succ_of_succ?_eq_some lo hi lo' := by
simpa [Map.PreservesRxoSize.size_eq (f := f), Map.PreservesLT.lt_iff (f := f),
f.succ?_eq_some_iff] using
Rxo.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _ _
theorem Rxi.LawfulHasSize.ofMap [Rxi.HasSize α] [Rxi.HasSize β] [Rxi.LawfulHasSize β]
(f : Map α β) [f.PreservesRxiSize] : Rxi.LawfulHasSize α where
size_eq_one_of_succ?_eq_none lo := by
simpa [Map.PreservesRxiSize.size_eq (f := f), f.succ?_eq_none_iff] using
Rxi.LawfulHasSize.size_eq_one_of_succ?_eq_none _
size_eq_succ_of_succ?_eq_some lo lo' := by
simpa [Map.PreservesRxiSize.size_eq (f := f), f.succ?_eq_some_iff] using
Rxi.LawfulHasSize.size_eq_succ_of_succ?_eq_some _ _
theorem Rxc.IsAlwaysFinite.ofMap [LE α] [LE β] [Rxc.IsAlwaysFinite β] (f : Map α β)
[f.PreservesLE] : Rxc.IsAlwaysFinite α where
finite init hi := by
obtain ⟨n, hn⟩ := Rxc.IsAlwaysFinite.finite (f.toFun init) (f.toFun hi)
exact ⟨n, by simpa [f.succMany?_toFun, Map.PreservesLE.le_iff (f := f)] using hn⟩
theorem Rxo.IsAlwaysFinite.ofMap [LT α] [LT β] [Rxo.IsAlwaysFinite β] (f : Map α β)
[f.PreservesLT] : Rxo.IsAlwaysFinite α where
finite init hi := by
obtain ⟨n, hn⟩ := Rxo.IsAlwaysFinite.finite (f.toFun init) (f.toFun hi)
exact ⟨n, by simpa [f.succMany?_toFun, Map.PreservesLT.lt_iff (f := f)] using hn⟩
theorem Rxi.IsAlwaysFinite.ofMap [Rxi.IsAlwaysFinite β] (f : Map α β) : Rxi.IsAlwaysFinite α where
finite init := by
obtain ⟨n, hn⟩ := Rxi.IsAlwaysFinite.finite (f.toFun init)
exact ⟨n, by simpa [f.succMany?_toFun] using hn⟩
end Std

View File

@@ -246,12 +246,8 @@ class InfinitelyUpwardEnumerable (α : Type u) [UpwardEnumerable α] where
This propositional typeclass ensures that `UpwardEnumerable.succ?` is injective.
-/
class LinearlyUpwardEnumerable (α : Type u) [UpwardEnumerable α] where
/-- The implementation of `UpwardEnumerable.succ?` for `α` is injective. -/
eq_of_succ?_eq : ∀ a b : α, UpwardEnumerable.succ? a = UpwardEnumerable.succ? b → a = b
/--
If a type is infinitely upwardly enumerable, then every element has a successor.
-/
theorem UpwardEnumerable.isSome_succ? {α : Type u} [UpwardEnumerable α]
[InfinitelyUpwardEnumerable α] {a : α} : (succ? a).isSome :=
InfinitelyUpwardEnumerable.isSome_succ? a

View File

@@ -157,7 +157,7 @@ Converts an 8-bit signed integer to a natural number, mapping all negative numbe
Use `Int8.toBitVec` to obtain the two's complement representation.
-/
@[suggest_for Int8.toNat, inline] def Int8.toNatClampNeg (i : Int8) : Nat := i.toInt.toNat
@[inline] def Int8.toNatClampNeg (i : Int8) : Nat := i.toInt.toNat
/-- Obtains the `Int8` whose 2's complement representation is the given `BitVec 8`. -/
@[inline] def Int8.ofBitVec (b : BitVec 8) : Int8 := b
@@ -510,7 +510,7 @@ Converts a 16-bit signed integer to a natural number, mapping all negative numbe
Use `Int16.toBitVec` to obtain the two's complement representation.
-/
@[suggest_for Int16.toNat, inline] def Int16.toNatClampNeg (i : Int16) : Nat := i.toInt.toNat
@[inline] def Int16.toNatClampNeg (i : Int16) : Nat := i.toInt.toNat
/-- Obtains the `Int16` whose 2's complement representation is the given `BitVec 16`. -/
@[inline] def Int16.ofBitVec (b : BitVec 16) : Int16 := b
@@ -880,7 +880,7 @@ Converts a 32-bit signed integer to a natural number, mapping all negative numbe
Use `Int32.toBitVec` to obtain the two's complement representation.
-/
@[suggest_for Int32.toNat, inline] def Int32.toNatClampNeg (i : Int32) : Nat := i.toInt.toNat
@[inline] def Int32.toNatClampNeg (i : Int32) : Nat := i.toInt.toNat
/-- Obtains the `Int32` whose 2's complement representation is the given `BitVec 32`. -/
@[inline] def Int32.ofBitVec (b : BitVec 32) : Int32 := b
@@ -1270,7 +1270,7 @@ Converts a 64-bit signed integer to a natural number, mapping all negative numbe
Use `Int64.toBitVec` to obtain the two's complement representation.
-/
@[suggest_for Int64.toNat, inline] def Int64.toNatClampNeg (i : Int64) : Nat := i.toInt.toNat
@[inline] def Int64.toNatClampNeg (i : Int64) : Nat := i.toInt.toNat
/-- Obtains the `Int64` whose 2's complement representation is the given `BitVec 64`. -/
@[inline] def Int64.ofBitVec (b : BitVec 64) : Int64 := b
@@ -1637,7 +1637,7 @@ Converts a word-sized signed integer to a natural number, mapping all negative n
Use `ISize.toBitVec` to obtain the two's complement representation.
-/
@[suggest_for ISize.toNat, inline] def ISize.toNatClampNeg (i : ISize) : Nat := i.toInt.toNat
@[inline] def ISize.toNatClampNeg (i : ISize) : Nat := i.toInt.toNat
/-- Obtains the `ISize` whose 2's complement representation is the given `BitVec`. -/
@[inline] def ISize.ofBitVec (b : BitVec System.Platform.numBits) : ISize := b

View File

@@ -148,7 +148,6 @@ theorem Subarray.copy_eq_toArray {s : Subarray α} :
s.copy = s.toArray :=
(rfl)
@[grind =]
theorem Subarray.sliceToArray_eq_toArray {s : Subarray α} :
Slice.toArray s = s.toArray :=
(rfl)

View File

@@ -119,13 +119,6 @@ public theorem forIn_toList {α : Type u} {s : Subarray α}
ForIn.forIn s.toList init f = ForIn.forIn s init f :=
Slice.forIn_toList
@[grind =]
public theorem forIn_eq_forIn_toList {α : Type u} {s : Subarray α}
{m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
{f : α → γ → m (ForInStep γ)} :
ForIn.forIn s init f = ForIn.forIn s.toList init f :=
forIn_toList.symm
@[simp]
public theorem forIn_toArray {α : Type u} {s : Subarray α}
{m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
@@ -174,22 +167,22 @@ public theorem Array.toSubarray_eq_min {xs : Array α} {lo hi : Nat} :
simp only [Array.toSubarray]
split <;> split <;> simp [Nat.min_eq_right (Nat.le_of_not_ge _), *]
@[simp, grind =]
@[simp]
public theorem Array.array_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).array = xs := by
simp [toSubarray_eq_min, Subarray.array]
@[simp, grind =]
@[simp]
public theorem Array.start_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).start = min lo (min hi xs.size) := by
simp [toSubarray_eq_min, Subarray.start]
@[simp, grind =]
@[simp]
public theorem Array.stop_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).stop = min hi xs.size := by
simp [toSubarray_eq_min, Subarray.stop]
public theorem Subarray.toList_eq {xs : Subarray α} :
theorem Subarray.toList_eq {xs : Subarray α} :
xs.toList = (xs.array.extract xs.start xs.stop).toList := by
let aslice := xs
obtain ⟨array, start, stop, h₁, h₂⟩ := xs
@@ -206,46 +199,45 @@ public theorem Subarray.toList_eq {xs : Subarray α} :
simp [Subarray.array, Subarray.start, Subarray.stop]
simp [this, ListSlice.toList_eq, lslice]
@[grind =]
public theorem Subarray.size_eq {xs : Subarray α} :
xs.size = xs.stop - xs.start := by
simp [Subarray.size]
@[simp, grind =]
@[simp]
public theorem Subarray.toArray_toList {xs : Subarray α} :
xs.toList.toArray = xs.toArray := by
simp [Std.Slice.toList, Subarray.toArray, Std.Slice.toArray]
@[simp, grind =]
@[simp]
public theorem Subarray.toList_toArray {xs : Subarray α} :
xs.toArray.toList = xs.toList := by
simp [Std.Slice.toList, Subarray.toArray, Std.Slice.toArray]
@[simp, grind =]
@[simp]
public theorem Subarray.length_toList {xs : Subarray α} :
xs.toList.length = xs.size := by
have : xs.start ≤ xs.stop := xs.internalRepresentation.start_le_stop
have : xs.stop ≤ xs.array.size := xs.internalRepresentation.stop_le_array_size
simp [Subarray.toList_eq, Subarray.size]; omega
@[simp, grind =]
@[simp]
public theorem Subarray.size_toArray {xs : Subarray α} :
xs.toArray.size = xs.size := by
simp [← Subarray.toArray_toList, Subarray.size, Slice.size, SliceSize.size, start, stop]
namespace Array
@[simp, grind =]
@[simp]
public theorem array_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].array = xs := by
simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp, grind =]
@[simp]
public theorem start_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].start = min lo (min hi xs.size) := by
simp [Std.Rco.Sliceable.mkSlice]
@[simp, grind =]
@[simp]
public theorem stop_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].stop = min hi xs.size := by
simp [Std.Rco.Sliceable.mkSlice]
@@ -254,14 +246,14 @@ public theorem mkSlice_rco_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
xs[lo...hi] = xs[(min lo (min hi xs.size))...(min hi xs.size)] := by
simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray_eq_min]
@[simp, grind =]
@[simp]
public theorem toList_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
rw [List.take_eq_take_min, List.drop_eq_drop_min]
simp [Std.Rco.Sliceable.mkSlice, Subarray.toList_eq, List.take_drop,
Nat.add_sub_of_le (Nat.min_le_right _ _)]
@[simp, grind =]
@[simp]
public theorem toArray_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.extract lo hi := by
simp only [← Subarray.toArray_toList, toList_mkSlice_rco]
@@ -274,12 +266,12 @@ public theorem toArray_mkSlice_rco {xs : Array α} {lo hi : Nat} :
· simp; omega
· simp; omega
@[simp, grind =]
@[simp]
public theorem size_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].size = min hi xs.size - lo := by
simp [← Subarray.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -288,7 +280,7 @@ public theorem mkSlice_rcc_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
xs[lo...=hi] = xs[(min lo (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp, grind =]
@[simp]
public theorem array_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].array = xs := by
simp [Std.Rcc.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@@ -333,7 +325,7 @@ public theorem stop_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].stop = xs.size := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
@[simp, grind =]
@[simp]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Array α} {lo : Nat} :
xs[lo...*] = xs[lo...xs.size] := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
@@ -352,7 +344,7 @@ public theorem toArray_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].toArray = xs.extract lo := by
simp
@[simp, grind =]
@[simp]
public theorem size_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].size = xs.size - lo := by
simp [← Subarray.length_toList]
@@ -372,7 +364,7 @@ public theorem stop_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].stop = min hi xs.size := by
simp [Std.Roo.Sliceable.mkSlice]
@[simp, grind =]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -416,11 +408,6 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]
@[grind =]
public theorem mkSlice_roc_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
simp
public theorem mkSlice_roc_eq_mkSlice_roo_min {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(min (lo + 1) (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -465,11 +452,6 @@ public theorem mkSlice_roi_eq_mkSlice_roo {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[lo<...xs.size] := by
simp [mkSlice_rci_eq_mkSlice_rco]
@[grind =]
public theorem mkSlice_roi_eq_mkSlice_rco {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...xs.size] := by
simp [mkSlice_rci_eq_mkSlice_rco]
public theorem mkSlice_roi_eq_mkSlice_roo_min {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[(min (lo + 1) xs.size)...xs.size] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -494,7 +476,7 @@ public theorem array_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].array = xs := by
simp [Std.Rio.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp, grind =]
@[simp]
public theorem start_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].start = 0 := by
simp [Std.Rio.Sliceable.mkSlice]
@@ -504,7 +486,7 @@ public theorem stop_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].stop = min hi xs.size := by
simp [Std.Rio.Sliceable.mkSlice]
@[simp, grind =]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Array α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -533,7 +515,7 @@ public theorem array_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].array = xs := by
simp [Std.Ric.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp, grind =]
@[simp]
public theorem start_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].start = 0 := by
simp [Std.Ric.Sliceable.mkSlice]
@@ -548,11 +530,6 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]
@[grind =]
public theorem mkSlice_ric_eq_mkSlice_rco {xs : Array α} {hi : Nat} :
xs[*...=hi] = xs[0...(hi + 1)] := by
simp
public theorem mkSlice_ric_eq_mkSlice_rio_min {xs : Array α} {hi : Nat} :
xs[*...=hi] = xs[*...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@@ -582,16 +559,11 @@ public theorem mkSlice_rii_eq_mkSlice_rio {xs : Array α} :
xs[*...*] = xs[*...xs.size] := by
simp [mkSlice_rci_eq_mkSlice_rco]
@[grind =]
public theorem mkSlice_rii_eq_mkSlice_rco {xs : Array α} :
xs[*...*] = xs[0...xs.size] := by
simp
public theorem mkSlice_rii_eq_mkSlice_rio_min {xs : Array α} :
xs[*...*] = xs[*...xs.size] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp, grind =]
@[simp]
public theorem toList_mkSlice_rii {xs : Array α} :
xs[*...*].toList = xs.toList := by
rw [mkSlice_rii_eq_mkSlice_rci, toList_mkSlice_rci, List.drop_zero]
@@ -601,7 +573,7 @@ public theorem toArray_mkSlice_rii {xs : Array α} :
xs[*...*].toArray = xs := by
simp
@[simp, grind =]
@[simp]
public theorem size_mkSlice_rii {xs : Array α} :
xs[*...*].size = xs.size := by
simp [← Subarray.length_toList]
@@ -611,12 +583,12 @@ public theorem array_mkSlice_rii {xs : Array α} :
xs[*...*].array = xs := by
simp
@[simp, grind =]
@[simp]
public theorem start_mkSlice_rii {xs : Array α} :
xs[*...*].start = 0 := by
simp
@[simp, grind =]
@[simp]
public theorem stop_mkSlice_rii {xs : Array α} :
xs[*...*].stop = xs.size := by
simp [Std.Rii.Sliceable.mkSlice]
@@ -627,7 +599,7 @@ section SubarraySlices
namespace Subarray
@[simp, grind =]
@[simp]
public theorem toList_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
simp only [Std.Rco.Sliceable.mkSlice, Std.Rco.HasRcoIntersection.intersection, toList_eq,
@@ -636,12 +608,12 @@ public theorem toList_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
rw [Nat.add_sub_cancel' (by omega)]
simp [Subarray.size, Array.length_toList, List.take_eq_take_min, Nat.add_comm xs.start]
@[simp, grind =]
@[simp]
public theorem toArray_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.toArray.extract lo hi := by
simp [← Subarray.toArray_toList, List.drop_take]
@[simp, grind =]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
@@ -657,7 +629,7 @@ public theorem toArray_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
xs[lo...=hi].toArray = xs.toArray.extract lo (hi + 1) := by
simp
@[simp, grind =]
@[simp]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Subarray α} {lo : Nat} :
xs[lo...*] = xs[lo...xs.size] := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
@@ -679,17 +651,12 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice,
Std.Roc.HasRcoIntersection.intersection, Std.Roo.HasRcoIntersection.intersection]
@[simp, grind =]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Roo.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[grind =]
public theorem mkSlice_roc_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
simp
@[simp]
public theorem toList_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
xs[lo<...hi].toList = (xs.toList.take hi).drop (lo + 1) := by
@@ -703,7 +670,8 @@ public theorem toArray_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
@[simp]
public theorem mkSlice_roc_eq_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
simp
simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Roc.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_roc {xs : Subarray α} {lo hi : Nat} :
@@ -721,11 +689,6 @@ public theorem mkSlice_roi_eq_mkSlice_rci {xs : Subarray α} {lo : Nat} :
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice,
Std.Roi.HasRcoIntersection.intersection, Std.Rci.HasRcoIntersection.intersection]
@[grind =]
public theorem mkSlice_roi_eq_mkSlice_rco {xs : Subarray α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...xs.size] := by
simp
@[simp]
public theorem toList_mkSlice_roi {xs : Subarray α} {lo : Nat} :
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
@@ -742,17 +705,12 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : Subarray α} {hi : Nat} :
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice,
Std.Ric.HasRcoIntersection.intersection, Std.Rio.HasRcoIntersection.intersection]
@[simp, grind =]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Subarray α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Rio.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[grind =]
public theorem mkSlice_ric_eq_mkSlice_rco {xs : Subarray α} {hi : Nat} :
xs[*...=hi] = xs[0...(hi + 1)] := by
simp
@[simp]
public theorem toList_mkSlice_rio {xs : Subarray α} {hi : Nat} :
xs[*...hi].toList = xs.toList.take hi := by
@@ -779,7 +737,7 @@ public theorem toArray_mkSlice_ric {xs : Subarray α} {hi : Nat} :
xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
simp
@[simp, grind =]
@[simp]
public theorem mkSlice_rii {xs : Subarray α} :
xs[*...*] = xs := by
simp [Std.Rii.Sliceable.mkSlice]

View File

@@ -47,28 +47,21 @@ public theorem toList_eq {xs : ListSlice α} :
simp only [Std.Slice.toList, toList_internalIter]
rfl
@[simp, grind =]
public theorem toArray_toList {xs : ListSlice α} :
xs.toList.toArray = xs.toArray := by
simp [Std.Slice.toArray, Std.Slice.toList]
@[simp, grind =]
public theorem toList_toArray {xs : ListSlice α} :
xs.toArray.toList = xs.toList := by
simp [Std.Slice.toArray, Std.Slice.toList]
@[simp, grind =]
@[simp]
public theorem length_toList {xs : ListSlice α} :
xs.toList.length = xs.size := by
simp [ListSlice.toList_eq, Std.Slice.size, Std.Slice.SliceSize.size, Iter.length_toList_eq_count,
toList_internalIter]; rfl
@[grind =]
public theorem size_eq_length_toList {xs : ListSlice α} :
xs.size = xs.toList.length :=
length_toList.symm
@[simp, grind =]
@[simp]
public theorem size_toArray {xs : ListSlice α} :
xs.toArray.size = xs.size := by
simp [← ListSlice.toArray_toList]
@@ -77,7 +70,7 @@ end ListSlice
namespace List
@[simp, grind =]
@[simp]
public theorem toList_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.take hi).drop lo := by
rw [List.take_eq_take_min, List.drop_eq_drop_min]
@@ -88,17 +81,17 @@ public theorem toList_mkSlice_rco {xs : List α} {lo hi : Nat} :
· have : min hi xs.length ≤ lo := by omega
simp [h, Nat.min_eq_right this]
@[simp, grind =]
@[simp]
public theorem toArray_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].toArray = ((xs.take hi).drop lo).toArray := by
simp [← ListSlice.toArray_toList]
@[simp, grind =]
@[simp]
public theorem size_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].size = min hi xs.length - lo := by
simp [← ListSlice.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -129,22 +122,12 @@ public theorem toArray_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo...*].toArray = (xs.drop lo).toArray := by
simp [← ListSlice.toArray_toList]
@[grind =]
public theorem toList_mkSlice_rci_eq_toList_mkSlice_rco {xs : List α} {lo : Nat} :
xs[lo...*].toList = xs[lo...xs.length].toList := by
simp
@[grind =]
public theorem toArray_mkSlice_rci_eq_toArray_mkSlice_rco {xs : List α} {lo : Nat} :
xs[lo...*].toArray = xs[lo...xs.length].toArray := by
simp
@[simp]
public theorem size_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo...*].size = xs.length - lo := by
simp [← ListSlice.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -169,11 +152,6 @@ public theorem mkSlice_roc_eq_mkSlice_roo {xs : List α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]
@[simp, grind =]
public theorem mkSlice_roc_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
simp
@[simp]
public theorem toList_mkSlice_roc {xs : List α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.take (hi + 1)).drop (lo + 1) := by
@@ -189,27 +167,11 @@ public theorem size_mkSlice_roc {xs : List α} {lo hi : Nat} :
xs[lo<...=hi].size = min (hi + 1) xs.length - (lo + 1) := by
simp [← ListSlice.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
public theorem toList_mkSlice_roi_eq_toList_mkSlice_roo {xs : List α} {lo : Nat} :
xs[lo<...*].toList = xs[lo<...xs.length].toList := by
simp
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_roo {xs : List α} {lo : Nat} :
xs[lo<...*].toArray = xs[lo<...xs.length].toArray := by
simp
public theorem toList_mkSlice_roi_eq_toList_mkSlice_rco {xs : List α} {lo : Nat} :
xs[lo<...*].toList = xs[(lo + 1)...xs.length].toList := by
simp
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_rco {xs : List α} {lo : Nat} :
xs[lo<...*].toArray = xs[(lo + 1)...xs.length].toArray := by
simp
@[simp]
public theorem toList_mkSlice_roi {xs : List α} {lo : Nat} :
xs[lo<...*].toList = xs.drop (lo + 1) := by
@@ -225,7 +187,7 @@ public theorem size_mkSlice_roi {xs : List α} {lo : Nat} :
xs[lo<...*].size = xs.length - (lo + 1) := by
simp [← ListSlice.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : List α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -250,11 +212,6 @@ public theorem mkSlice_ric_eq_mkSlice_rio {xs : List α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]
@[grind =]
public theorem mkSlice_ric_eq_mkSlice_rco {xs : List α} {hi : Nat} :
xs[*...=hi] = xs[0...(hi + 1)] := by
simp
@[simp]
public theorem toList_mkSlice_ric {xs : List α} {hi : Nat} :
xs[*...=hi].toList = xs.take (hi + 1) := by
@@ -270,19 +227,11 @@ public theorem size_mkSlice_ric {xs : List α} {hi : Nat} :
xs[*...=hi].size = min (hi + 1) xs.length := by
simp [← ListSlice.length_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_rii_eq_mkSlice_rci {xs : List α} :
xs[*...*] = xs[0...*] := by
simp [Std.Rii.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
public theorem toList_mkSlice_rii_eq_toList_mkSlice_rco {xs : List α} :
xs[*...*].toList = xs[0...xs.length].toList := by
simp
public theorem toArray_mkSlice_rii_eq_toArray_mkSlice_rco {xs : List α} :
xs[*...*].toArray = xs[0...xs.length].toArray := by
simp
@[simp]
public theorem toList_mkSlice_rii {xs : List α} :
xs[*...*].toList = xs := by
@@ -304,7 +253,7 @@ section ListSubslices
namespace ListSlice
@[simp, grind =]
@[simp]
public theorem toList_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
simp only [instSliceableListSliceNat_1, List.toList_mkSlice_rco, ListSlice.toList_eq (xs := xs)]
@@ -313,12 +262,12 @@ public theorem toList_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
· simp
· simp [List.take_take, Nat.min_comm]
@[simp, grind =]
@[simp]
public theorem toArray_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.toArray.extract lo hi := by
simp [← toArray_toList, List.drop_take]
@[simp, grind =]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -346,19 +295,9 @@ public theorem toArray_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
xs[lo...*].toArray = xs.toArray.extract lo := by
simp only [← toArray_toList, toList_mkSlice_rci]
rw (occs := [1]) [← List.take_length (l := List.drop lo xs.toList)]
simp [- toArray_toList]
@[grind =]
public theorem toList_mkSlice_rci_eq_toList_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
xs[lo...*].toList = xs[lo...xs.size].toList := by
simp [← length_toList, - Slice.length_toList_eq_size]
@[grind =]
public theorem toArray_mkSlice_rci_eq_toArray_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
xs[lo...*].toArray = xs[lo...xs.size].toArray := by
simp
@[simp, grind =]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -383,11 +322,6 @@ public theorem mkSlice_roc_eq_mkSlice_rcc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp, grind =]
public theorem mkSlice_roc_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.toList.take (hi + 1)).drop (lo + 1) := by
@@ -398,28 +332,11 @@ public theorem toArray_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi].toArray = xs.toArray.extract (lo + 1) (hi + 1) := by
simp [← toArray_toList, List.drop_take]
@[simp, grind =]
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
public theorem toList_mkSlice_roi_eq_toList_mkSlice_roo {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toList = xs[lo<...xs.size].toList := by
simp [← length_toList, - Slice.length_toList_eq_size]
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_roo {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toArray = xs[lo<...xs.size].toArray := by
simp only [mkSlice_roi_eq_mkSlice_rci, toArray_mkSlice_rci, size_toArray_eq_size,
mkSlice_roo_eq_mkSlice_rco, toArray_mkSlice_rco]
public theorem toList_mkSlice_roi_eq_toList_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toList = xs[(lo + 1)...xs.size].toList := by
simp [← length_toList, - Slice.length_toList_eq_size]
public theorem toArray_mkSlice_roi_eq_toArray_mkSlice_rco {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toArray = xs[(lo + 1)...xs.size].toArray := by
simp
@[simp]
public theorem toList_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
@@ -430,9 +347,9 @@ public theorem toArray_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toArray = xs.toArray.extract (lo + 1) := by
simp only [← toArray_toList, toList_mkSlice_roi]
rw (occs := [1]) [← List.take_length (l := List.drop (lo + 1) xs.toList)]
simp [- toArray_toList]
simp
@[simp, grind =]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : ListSlice α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@@ -457,11 +374,6 @@ public theorem mkSlice_ric_eq_mkSlice_rcc {xs : ListSlice α} {hi : Nat} :
xs[*...=hi] = xs[0...=hi] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[grind =]
public theorem mkSlice_ric_eq_mkSlice_rco {xs : ListSlice α} {hi : Nat} :
xs[*...=hi] = xs[0...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
xs[*...=hi].toList = xs.toList.take (hi + 1) := by
@@ -472,7 +384,7 @@ public theorem toArray_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
simp [← toArray_toList]
@[simp, grind =]
@[simp]
public theorem mkSlice_rii {xs : ListSlice α} :
xs[*...*] = xs := by
simp [Std.Rii.Sliceable.mkSlice]


@@ -40,7 +40,7 @@ class Rcc.Sliceable (α : Type u) (β : outParam (Type v)) (γ : outParam (Type
This typeclass indicates how to obtain slices of elements of {lit}`α` over ranges in the index type
{lit}`β`, the ranges being left-closed right-open.
The type of the resulting slices is {lit}`γ`.
The type of resulting the slices is {lit}`γ`.
-/
class Rco.Sliceable (α : Type u) (β : outParam (Type v)) (γ : outParam (Type w)) where
/--


@@ -123,6 +123,18 @@ opaque getUTF8Byte (s : @& String) (n : Nat) (h : n < s.utf8ByteSize) : UInt8
end String.Internal
/--
Creates a string that contains the characters in a list, in order.
Examples:
* `['L', '∃', '∀', 'N'].asString = "L∃∀N"`
* `[].asString = ""`
* `['a', 'a', 'a'].asString = "aaa"`
-/
@[extern "lean_string_mk", expose]
def String.ofList (data : List Char) : String :=
⟨List.utf8Encode data, .intro data rfl⟩
@[extern "lean_string_mk", expose, deprecated String.ofList (since := "2025-10-30")]
def String.mk (data : List Char) : String :=
⟨List.utf8Encode data, .intro data rfl⟩
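
A hedged sanity check of the docstring's examples (assuming `List.asString` delegates to the new `String.ofList`):

```lean
#eval String.ofList ['a', 'a', 'a']  -- per the docstring above: "aaa"
#eval String.ofList []               -- per the docstring above: ""
```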


@@ -1396,7 +1396,6 @@ scalar value.
public def IsUTF8FirstByte (c : UInt8) : Prop :=
c &&& 0x80 = 0 ∨ c &&& 0xe0 = 0xc0 ∨ c &&& 0xf0 = 0xe0 ∨ c &&& 0xf8 = 0xf0
@[inline]
public instance {c : UInt8} : Decidable c.IsUTF8FirstByte :=
inferInstanceAs <| Decidable (c &&& 0x80 = 0 ∨ c &&& 0xe0 = 0xc0 ∨ c &&& 0xf0 = 0xe0 ∨ c &&& 0xf8 = 0xf0)
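
With the `Decidable` instance above, concrete bytes can be checked with `decide`; a small sketch, assuming the enclosing namespace is open (`0x41` is ASCII `'A'`, a one-byte scalar; `0x80` is a continuation byte):

```lean
example : (0x41 : UInt8).IsUTF8FirstByte := by decide
example : ¬ (0x80 : UInt8).IsUTF8FirstByte := by decide
```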


@@ -119,7 +119,7 @@ instance (s : Slice) : Std.Iterator (ForwardSliceSearcher s) Id (SearchStep s) w
-- **Invariant 1:** we have already covered everything up until `stackPos - needlePos` (exclusive),
-- with matches and rejections.
-- **Invariant 2:** `stackPos - needlePos` is a valid position
-- **Invariant 3:** the range from `stackPos - needlePos` to `stackPos` (exclusive) is a
-- **Invariant 3:** the range from from `stackPos - needlePos` to `stackPos` (exclusive) is a
-- prefix of the pattern.
if h₁ : stackPos < s.rawEndPos then
let stackByte := s.getUTF8Byte stackPos h₁


@@ -20,7 +20,7 @@ functionality for searching for various kinds of pattern matches in slices to it
provide subslices according to matches etc. The key design principles behind this module are:
- Instead of providing one function per kind of pattern the API is generic over various kinds of
patterns. Thus it only provides e.g. one kind of function for looking for the position of the
first occurrence of a pattern. Currently the supported patterns are:
first occurence of a pattern. Currently the supported patterns are:
- {name}`Char`
- {lean}`Char → Bool`
- {name}`String` and {name}`String.Slice` (partially: doing non trivial searches backwards is not


@@ -796,8 +796,7 @@ theorem getElem?_eq_none {xs : Vector α n} (h : n ≤ i) : xs[i]? = none := by
-- This is a more aggressive pattern than for `List/Array.getElem?_eq_none`, because
-- `length/size` won't appear.
grind_pattern Vector.getElem?_eq_none => xs[i]? where
guard n ≤ i
grind_pattern Vector.getElem?_eq_none => xs[i]?
@[simp] theorem getElem?_eq_getElem {xs : Vector α n} {i : Nat} (h : i < n) : xs[i]? = some xs[i] :=
getElem?_pos ..


@@ -366,11 +366,9 @@ instance : GetElem? (List α) Nat α fun as i => i < as.length where
theorem none_eq_getElem?_iff {l : List α} {i : Nat} : none = l[i]? ↔ length l ≤ i := by
simp [eq_comm (a := none)]
@[grind =]
theorem getElem?_eq_none (h : length l ≤ i) : l[i]? = none := getElem?_eq_none_iff.mpr h
grind_pattern getElem?_eq_none => l.length, l[i]? where
guard l.length ≤ i
instance : LawfulGetElem (List α) Nat α fun as i => i < as.length where
getElem?_def as i h := by
split <;> simp_all


@@ -21,8 +21,6 @@ structure Config where
/-- If `suggestions` is `true`, `grind` will invoke the currently configured library suggestion engine on the current goal,
and add attempt to use the resulting suggestions as additional parameters to the `grind` tactic. -/
suggestions : Bool := false
/-- If `locals` is `true`, `grind` will add all definitions from the current file. -/
locals : Bool := false
/-- Maximum number of case-splits in a proof search branch. It does not include splits performed during normalization. -/
splits : Nat := 9
/-- Maximum number of E-matching (aka heuristic theorem instantiation) rounds before each case split. -/


@@ -766,7 +766,7 @@ def Poly.cancelVar (c : Int) (x : Var) (p : Poly) : Poly :=
(fun _ _ _ _ => a.toPoly_k.pow k)
(fun _ _ _ _ => a.toPoly_k.pow k)
(fun _ _ _ => a.toPoly_k.pow k)
a) = match a with
a) = match (generalizing := false) a with
| num n => Poly.num (n ^ k)
| .intCast n => .num (n^k)
| .natCast n => .num (n^k)


@@ -4,9 +4,12 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Classical
public section
namespace Lean.Grind
/-- A helper gadget for annotating nested proofs in goals. -/


@@ -132,18 +132,12 @@ structure Config where
Unused `have`s are still removed if `zeta` or `zetaUnused` are true.
-/
zetaHave : Bool := true
/--
If `locals` is `true`, `dsimp` will unfold all definitions from the current file.
For local theorems, use `+suggestions` instead.
-/
locals : Bool := false
deriving Inhabited, BEq
end DSimp
namespace Simp
@[inline]
def defaultMaxSteps := 100000
/--
@@ -303,11 +297,6 @@ structure Config where
and attempt to use the resulting suggestions as parameters to the `simp` tactic.
-/
suggestions : Bool := false
/--
If `locals` is `true`, `simp` will unfold all definitions from the current file.
For local theorems, use `+suggestions` instead.
-/
locals : Bool := false
deriving Inhabited, BEq
-- Configuration object for `simp_all`


@@ -360,7 +360,7 @@ recommended_spelling "smul" for "•" in [HSMul.hSMul, «term_•_»]
recommended_spelling "append" for "++" in [HAppend.hAppend, «term_++_»]
/-- when used as a unary operator -/
recommended_spelling "neg" for "-" in [Neg.neg, «term-_»]
recommended_spelling "inv" for "⁻¹" in [Inv.inv, «term_⁻¹»]
recommended_spelling "inv" for "⁻¹" in [Inv.inv]
recommended_spelling "dvd" for "∣" in [Dvd.dvd, «term_∣_»]
recommended_spelling "shiftLeft" for "<<<" in [HShiftLeft.hShiftLeft, «term_<<<_»]
recommended_spelling "shiftRight" for ">>>" in [HShiftRight.hShiftRight, «term_>>>_»]
@@ -523,7 +523,7 @@ macro_rules
| `(bif $c then $t else $e) => `(cond $c $t $e)
/--
Haskell-like pipe operator `<|`. `f <| x` means the same as `f x`,
Haskell-like pipe operator `<|`. `f <| x` means the same as the same as `f x`,
except that it parses `x` with lower precedence, which means that `f <| g <| x`
is interpreted as `f (g x)` rather than `(f g) x`.
-/
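
A minimal sketch of the parse the docstring describes:

```lean
-- `<|` associates to the right: `List.sum <| List.range 4` is `List.sum (List.range 4)`.
example : (List.sum <| List.range 4) = 6 := by decide
```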
@@ -557,7 +557,7 @@ macro_rules
| `($a |> $f) => `($f $a)
/--
Alternative syntax for `<|`. `f $ x` means the same as `f x`,
Alternative syntax for `<|`. `f $ x` means the same as the same as `f x`,
except that it parses `x` with lower precedence, which means that `f $ g $ x`
is interpreted as `f (g x)` rather than `(f g) x`.
-/
@@ -782,16 +782,9 @@ Position reporting for `#guard_msgs`:
-/
syntax guardMsgsPositions := &"positions" " := " guardMsgsPositionsArg
/--
Substring matching for `#guard_msgs`:
- `substring := true` checks that the docstring appears as a substring of the output.
- `substring := false` (the default) requires exact matching (modulo whitespace normalization).
-/
syntax guardMsgsSubstring := &"substring" " := " (&"true" <|> &"false")
set_option linter.missingDocs false in
syntax guardMsgsSpecElt :=
guardMsgsFilter <|> guardMsgsWhitespace <|> guardMsgsOrdering <|> guardMsgsPositions <|> guardMsgsSubstring
guardMsgsFilter <|> guardMsgsWhitespace <|> guardMsgsOrdering <|> guardMsgsPositions
set_option linter.missingDocs false in
syntax guardMsgsSpec := "(" guardMsgsSpecElt,* ")"
@@ -867,11 +860,6 @@ Position reporting:
`#guard_msgs` appears.
- `positions := false` does not report position info.
Substring matching:
- `substring := true` checks that the docstring appears as a substring of the output
(after whitespace normalization). This is useful when you only care about part of the message.
- `substring := false` (the default) requires exact matching (modulo whitespace normalization).
For example, `#guard_msgs (error, drop all) in cmd` means to check errors and drop
everything else.
@@ -885,13 +873,6 @@ The top-level command elaborator only runs the linters if `#guard_msgs` is not p
syntax (name := guardMsgsCmd)
(plainDocComment)? "#guard_msgs" (ppSpace guardMsgsSpec)? " in" ppLine command : command
/--
`#guard_panic in cmd` runs `cmd` and succeeds if the command produces a panic message.
This is useful for testing that a command panics without matching the exact (volatile) panic text.
-/
syntax (name := guardPanicCmd)
"#guard_panic" " in" ppLine command : command
/--
Format and print the info trees for a given command.
This is mostly useful for debugging info trees.


@@ -67,7 +67,7 @@ syntax unifConstraint := term patternIgnore(" =?= " <|> " ≟ ") term
syntax unifConstraintElem := colGe unifConstraint ", "?
syntax (docComment)? attrKind "unif_hint" (ppSpace ident)? (ppSpace bracketedBinder)*
" where " withPosition(unifConstraintElem*) patternIgnore(atomic("|" noWs "-") <|> "") ppSpace unifConstraint : command
" where " withPosition(unifConstraintElem*) patternIgnore(atomic("|" noWs "-") <|> "") unifConstraint : command
macro_rules
| `($[$doc?:docComment]? $kind:attrKind unif_hint $(n)? $bs* where $[$cs₁ ≟ $cs₂]* |- $t₁ ≟ $t₂) => do
@@ -120,7 +120,7 @@ calc
_ = z := pyz
```
It is also possible to write the *first* relation as `<lhs>\n _ = <rhs> :=
<proof>`. This is useful for aligning relation symbols, especially on longer
<proof>`. This is useful for aligning relation symbols, especially on longer:
identifiers:
```
calc abc


@@ -375,10 +375,6 @@ theorem congr {α : Sort u} {β : Sort v} {f₁ f₂ : α → β} {a₁ a₂ :
theorem congrFun {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x} (h : Eq f g) (a : α) : Eq (f a) (g a) :=
h ▸ rfl
/-- Similar to `congrFun` but `β` does not depend on `α`. -/
theorem congrFun' {α : Sort u} {β : Sort v} {f g : α → β} (h : Eq f g) (a : α) : Eq (f a) (g a) :=
h ▸ rfl
/-!
Initialize the Quotient Module, which effectively adds the following definitions:
```
@@ -907,7 +903,7 @@ instance [Inhabited α] : Inhabited (ULift α) where
Lifts a type or proposition to a higher universe level.
`PULift α` wraps a value of type `α`. It is a generalization of
`PLift` that allows lifting values whose type may live in `Sort s`.
`PLift` that allows lifting values who's type may live in `Sort s`.
It also subsumes `PLift`.
-/
-- The universe variable `r` is written first so that `ULift.{r} α` can be used
@@ -2810,8 +2806,6 @@ structure Char where
/-- The value must be a legal scalar value. -/
valid : val.isValidChar
grind_pattern Char.valid => self.val
private theorem isValidChar_UInt32 {n : Nat} (h : n.isValidChar) : LT.lt n UInt32.size :=
match h with
| Or.inl h => Nat.lt_trans h (of_decide_eq_true rfl)
@@ -3194,7 +3188,7 @@ Constructs a new empty array with initial capacity `0`.
Use `Array.emptyWithCapacity` to create an array with a greater initial capacity.
-/
@[expose, inline]
@[expose]
def Array.empty {α : Type u} : Array α := emptyWithCapacity 0
/--
@@ -3483,18 +3477,6 @@ structure String where ofByteArray ::
attribute [extern "lean_string_to_utf8"] String.toByteArray
attribute [extern "lean_string_from_utf8_unchecked"] String.ofByteArray
/--
Creates a string that contains the characters in a list, in order.
Examples:
* `String.ofList ['L', '∃', '∀', 'N'] = "L∃∀N"`
* `String.ofList [] = ""`
* `String.ofList ['a', 'a', 'a'] = "aaa"`
-/
@[extern "lean_string_mk"]
def String.ofList (data : List Char) : String :=
⟨List.utf8Encode data, .intro data rfl⟩
/--
Decides whether two strings are equal. Normally used via the `DecidableEq String` instance and the
`=` operator.
@@ -3539,7 +3521,7 @@ instance : DecidableEq String.Pos.Raw :=
/--
A region or slice of some underlying string.
A substring contains a string together with the start and end byte positions of a region of
A substring contains an string together with the start and end byte positions of a region of
interest. Actually extracting a substring requires copying and memory allocation, while many
substrings of the same underlying string may exist with very little overhead, and they are more
convenient than tracking the bounds by hand.


@@ -38,67 +38,6 @@ theorem eq_false_of_decide {p : Prop} {_ : Decidable p} (h : decide p = false) :
theorem implies_congr {p₁ p₂ : Sort u} {q₁ q₂ : Sort v} (h₁ : p₁ = p₂) (h₂ : q₁ = q₂) : (p₁ → q₁) = (p₂ → q₂) :=
h₁ ▸ h₂ ▸ rfl
theorem implies_congr_left {p₁ p₂ : Sort u} {q : Sort v} (h : p₁ = p₂) : (p₁ → q) = (p₂ → q) :=
h ▸ rfl
theorem implies_congr_right {p : Sort u} {q₁ q₂ : Sort v} (h : q₁ = q₂) : (p → q₁) = (p → q₂) :=
h ▸ rfl
namespace Lean
/--
`Arrow α β` is definitionally equal to `α → β`, but represented as a function
application rather than `Expr.forallE`.
This representation is useful for proof automation that builds nested implications
like `pₙ → ... → p₂ → p₁`. With `Expr.forallE`, each nesting level introduces a
binder that bumps de Bruijn indices in subterms, destroying sharing even with
hash-consing. For example, if `p₁` contains `#20`, then at depth 2 it becomes `#21`,
at depth 3 it becomes `#22`, etc., causing quadratic proof growth.
With `arrow`, both arguments are explicit (not under binders), so subterms remain
identical across nesting levels and can be shared, yielding linear-sized proofs.
-/
def Arrow (α : Sort u) (β : Sort v) : Sort (imax u v) := α → β
theorem arrow_congr {p₁ p₂ : Sort u} {q₁ q₂ : Sort v} (h₁ : p₁ = p₂) (h₂ : q₁ = q₂) : Arrow p₁ q₁ = Arrow p₂ q₂ :=
h₁ ▸ h₂ ▸ rfl
theorem arrow_congr_left {p₁ p₂ : Sort u} {q : Sort v} (h : p₁ = p₂) : Arrow p₁ q = Arrow p₂ q :=
h ▸ rfl
theorem arrow_congr_right {p : Sort u} {q₁ q₂ : Sort v} (h : q₁ = q₂) : Arrow p q₁ = Arrow p q₂ :=
h ▸ rfl
theorem true_arrow (p : Prop) : Arrow True p = p := by
simp [Arrow]; constructor
next => intro h; exact h .intro
next => intros; assumption
theorem true_arrow_congr_left (p q : Prop) : p = True → Arrow p q = q := by
intros; subst p; apply true_arrow
theorem true_arrow_congr_right (q q' : Prop) : q = q' → Arrow True q = q' := by
intros; subst q; apply true_arrow
theorem true_arrow_congr (p q q' : Prop) : p = True → q = q' → Arrow p q = q' := by
intros; subst p q; apply true_arrow
theorem false_arrow (p : Prop) : Arrow False p = True := by
simp [Arrow]; constructor
next => intros; exact .intro
next => intros; contradiction
theorem false_arrow_congr (p q : Prop) : p = False → Arrow p q = True := by
intros; subst p; apply false_arrow
theorem arrow_true (α : Sort u) : Arrow α True = True := by
simp [Arrow]; constructor <;> intros <;> exact .intro
theorem arrow_true_congr (α : Sort u) (p : Prop) : p = True → Arrow α p = True := by
intros; subst p; apply arrow_true
end Lean
theorem iff_congr {p₁ p₂ q₁ q₂ : Prop} (h₁ : p₁ ↔ p₂) (h₂ : q₁ ↔ q₂) : (p₁ ↔ q₁) ↔ (p₂ ↔ q₂) :=
Iff.of_eq (propext h₁ ▸ propext h₂ ▸ rfl)
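
Since `Arrow` was definitionally `α → β`, the removed congruence lemmas interoperated with ordinary implications; a sketch of the key definitional equality (assuming the removed definitions above were still in scope):

```lean
-- `Arrow` unfolds to the function space, so `rfl` closes this.
example (p q : Prop) : Lean.Arrow p q = (p → q) := rfl
```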


@@ -1,8 +0,0 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Sym.Lemmas


@@ -1,140 +0,0 @@
/-
Copyright (c) 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Data.Nat.Basic
public import Init.Data.Rat.Basic
public import Init.Data.Int.Basic
public import Init.Data.UInt.Basic
public import Init.Data.SInt.Basic
public section
namespace Lean.Sym
theorem ne_self (a : α) : (a ≠ a) = False := by simp
theorem not_true_eq : (¬ True) = False := by simp
theorem not_false_eq : (¬ False) = True := by simp
theorem ite_cond_congr {α : Sort u} (c : Prop) {inst : Decidable c} (a b : α)
(c' : Prop) {inst' : Decidable c'} (h : c = c') : @ite α c inst a b = @ite α c' inst' a b := by
simp [*]
theorem dite_cond_congr {α : Sort u} (c : Prop) {inst : Decidable c} (a : c → α) (b : ¬ c → α)
(c' : Prop) {inst' : Decidable c'} (h : c = c')
: @dite α c inst a b = @dite α c' inst' (fun h' => a (h.mpr_prop h')) (fun h' => b (h.mpr_not h')) := by
simp [*]
theorem cond_cond_eq_true {α : Sort u} (c : Bool) (a b : α) (h : c = true) : cond c a b = a := by
simp [*]
theorem cond_cond_eq_false {α : Sort u} (c : Bool) (a b : α) (h : c = false) : cond c a b = b := by
simp [*]
theorem cond_cond_congr {α : Sort u} (c : Bool) (a b : α) (c' : Bool) (h : c = c') : cond c a b = cond c' a b := by
simp [*]
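
These helpers take the evaluated condition as an explicit hypothesis so the engine can discharge it with `rfl`; a minimal sketch using `cond_cond_eq_true` from above:

```lean
example (a b : Nat) : cond true a b = a :=
  Lean.Sym.cond_cond_eq_true true a b rfl
```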
theorem Nat.lt_eq_true (a b : Nat) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int.lt_eq_true (a b : Int) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Rat.lt_eq_true (a b : Rat) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int8.lt_eq_true (a b : Int8) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int16.lt_eq_true (a b : Int16) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int32.lt_eq_true (a b : Int32) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Int64.lt_eq_true (a b : Int64) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt8.lt_eq_true (a b : UInt8) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt16.lt_eq_true (a b : UInt16) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt32.lt_eq_true (a b : UInt32) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem UInt64.lt_eq_true (a b : UInt64) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Fin.lt_eq_true (a b : Fin n) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem BitVec.lt_eq_true (a b : BitVec n) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem String.lt_eq_true (a b : String) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Char.lt_eq_true (a b : Char) (h : decide (a < b) = true) : (a < b) = True := by simp_all
theorem Nat.lt_eq_false (a b : Nat) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int.lt_eq_false (a b : Int) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Rat.lt_eq_false (a b : Rat) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int8.lt_eq_false (a b : Int8) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int16.lt_eq_false (a b : Int16) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int32.lt_eq_false (a b : Int32) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Int64.lt_eq_false (a b : Int64) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt8.lt_eq_false (a b : UInt8) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt16.lt_eq_false (a b : UInt16) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt32.lt_eq_false (a b : UInt32) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem UInt64.lt_eq_false (a b : UInt64) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Fin.lt_eq_false (a b : Fin n) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem BitVec.lt_eq_false (a b : BitVec n) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem String.lt_eq_false (a b : String) (h : decide (a < b) = false) : (a < b) = False := by simp_all
theorem Char.lt_eq_false (a b : Char) (h : decide (a < b) = false) : (a < b) = False := by simp_all
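
Each family lifts a kernel-evaluated `decide` to a propositional rewrite; for instance (a sketch using the `Nat` lemma above):

```lean
example : ((2 : Nat) < 3) = True :=
  Lean.Sym.Nat.lt_eq_true 2 3 rfl
```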
theorem Nat.le_eq_true (a b : Nat) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Int.le_eq_true (a b : Int) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Rat.le_eq_true (a b : Rat) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Int8.le_eq_true (a b : Int8) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Int16.le_eq_true (a b : Int16) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Int32.le_eq_true (a b : Int32) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Int64.le_eq_true (a b : Int64) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem UInt8.le_eq_true (a b : UInt8) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem UInt16.le_eq_true (a b : UInt16) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem UInt32.le_eq_true (a b : UInt32) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem UInt64.le_eq_true (a b : UInt64) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Fin.le_eq_true (a b : Fin n) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem BitVec.le_eq_true (a b : BitVec n) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem String.le_eq_true (a b : String) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Char.le_eq_true (a b : Char) (h : decide (a b) = true) : (a b) = True := by simp_all
theorem Nat.le_eq_false (a b : Nat) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Int.le_eq_false (a b : Int) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Rat.le_eq_false (a b : Rat) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Int8.le_eq_false (a b : Int8) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Int16.le_eq_false (a b : Int16) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Int32.le_eq_false (a b : Int32) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Int64.le_eq_false (a b : Int64) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem UInt8.le_eq_false (a b : UInt8) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem UInt16.le_eq_false (a b : UInt16) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem UInt32.le_eq_false (a b : UInt32) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem UInt64.le_eq_false (a b : UInt64) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Fin.le_eq_false (a b : Fin n) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem BitVec.le_eq_false (a b : BitVec n) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem String.le_eq_false (a b : String) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Char.le_eq_false (a b : Char) (h : decide (a b) = false) : (a b) = False := by simp_all
theorem Nat.eq_eq_true (a b : Nat) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int.eq_eq_true (a b : Int) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Rat.eq_eq_true (a b : Rat) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int8.eq_eq_true (a b : Int8) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int16.eq_eq_true (a b : Int16) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int32.eq_eq_true (a b : Int32) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Int64.eq_eq_true (a b : Int64) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt8.eq_eq_true (a b : UInt8) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt16.eq_eq_true (a b : UInt16) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt32.eq_eq_true (a b : UInt32) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem UInt64.eq_eq_true (a b : UInt64) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Fin.eq_eq_true (a b : Fin n) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem BitVec.eq_eq_true (a b : BitVec n) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem String.eq_eq_true (a b : String) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Char.eq_eq_true (a b : Char) (h : decide (a = b) = true) : (a = b) = True := by simp_all
theorem Nat.eq_eq_false (a b : Nat) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int.eq_eq_false (a b : Int) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Rat.eq_eq_false (a b : Rat) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int8.eq_eq_false (a b : Int8) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int16.eq_eq_false (a b : Int16) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int32.eq_eq_false (a b : Int32) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Int64.eq_eq_false (a b : Int64) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt8.eq_eq_false (a b : UInt8) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt16.eq_eq_false (a b : UInt16) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt32.eq_eq_false (a b : UInt32) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem UInt64.eq_eq_false (a b : UInt64) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Fin.eq_eq_false (a b : Fin n) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem BitVec.eq_eq_false (a b : BitVec n) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem String.eq_eq_false (a b : String) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Char.eq_eq_false (a b : Char) (h : decide (a = b) = false) : (a = b) = False := by simp_all
theorem Nat.dvd_eq_true (a b : Nat) (h : decide (a ∣ b) = true) : (a ∣ b) = True := by simp_all
theorem Int.dvd_eq_true (a b : Int) (h : decide (a ∣ b) = true) : (a ∣ b) = True := by simp_all
theorem Nat.dvd_eq_false (a b : Nat) (h : decide (a ∣ b) = false) : (a ∣ b) = False := by simp_all
theorem Int.dvd_eq_false (a b : Int) (h : decide (a ∣ b) = false) : (a ∣ b) = False := by simp_all
end Lean.Sym
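These lemmas rewrite a decided comparison into `True` or `False`. A hypothetical usage sketch (assuming the lemmas elaborate as written and are accessible under `Lean.Sym`): `decide (2 ≤ 5)` evaluates to `true` by kernel reduction, so `rfl` discharges the hypothesis.

```lean
-- Illustrative only; `Lean.Sym.Nat.le_eq_true` is the lemma from the diff above.
example : (2 ≤ 5) = True := Lean.Sym.Nat.le_eq_true 2 5 rfl
```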

View File

@@ -150,7 +150,7 @@ def parent (p : FilePath) : Option FilePath :=
/--
Extracts the last element of a path if it is a file or directory name.
Returns `none` if the last entry is a special name (such as `.` or `..`) or if the path is the root
Returns `none ` if the last entry is a special name (such as `.` or `..`) or if the path is the root
directory.
-/
def fileName (p : FilePath) : Option String :=
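As a quick sketch of the documented behavior, `fileName` returns the last path component when it is an ordinary name:

```lean
#eval (System.FilePath.mk "/tmp/archive.tar").fileName
-- some "archive.tar"
#eval (System.FilePath.mk "/tmp/archive.tar").parent
```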

View File

@@ -561,7 +561,7 @@ Waits for the task to finish, then returns its result.
return t.get
/--
Waits until any of the tasks in the list has finished, then returns its result.
Waits until any of the tasks in the list has finished, then return its result.
-/
@[extern "lean_io_wait_any"] opaque waitAny (tasks : @& List (Task α))
(h : tasks.length > 0 := by exact Nat.zero_lt_succ _) : BaseIO α :=
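A minimal sketch of racing two tasks with `waitAny` (assuming the declaration lives in the `IO` namespace, as in current Lean):

```lean
def firstDone : BaseIO Nat := do
  let t1 ← BaseIO.asTask (pure 10)
  let t2 ← BaseIO.asTask (pure 20)
  -- returns the result of whichever task finishes first
  IO.waitAny [t1, t2]
```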
@@ -679,7 +679,7 @@ File handles wrap the underlying operating system's file descriptors. There is n
to close a file: when the last reference to a file handle is dropped, the file is closed
automatically.
Handles have an associated read/write cursor that determines where reads and writes occur in the
Handles have an associated read/write cursor that determines the where reads and writes occur in the
file.
-/
opaque FS.Handle : Type := Unit
@@ -790,7 +790,7 @@ An exception is thrown if the file cannot be opened.
/--
Acquires an exclusive or shared lock on the handle. Blocks to wait for the lock if necessary.
Acquiring an exclusive lock while already possessing a shared lock will **not** reliably succeed: it
Acquiring a exclusive lock while already possessing a shared lock will **not** reliably succeed: it
works on Unix-like systems but not on Windows.
-/
@[extern "lean_io_prim_handle_lock"] opaque lock (h : @& Handle) (exclusive := true) : IO Unit
@@ -798,7 +798,7 @@ works on Unix-like systems but not on Windows.
Tries to acquire an exclusive or shared lock on the handle and returns `true` if successful. Will
not block if the lock cannot be acquired, but instead returns `false`.
Acquiring an exclusive lock while already possessing a shared lock will **not** reliably succeed: it
Acquiring a exclusive lock while already possessing a shared lock will **not** reliably succeed: it
works on Unix-like systems but not on Windows.
-/
@[extern "lean_io_prim_handle_try_lock"] opaque tryLock (h : @& Handle) (exclusive := true) : IO Bool
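A sketch of guarding a critical section with an exclusive lock; `unlock` is assumed to exist alongside `lock`/`tryLock`:

```lean
def withExclusiveLock (path : System.FilePath) (act : IO Unit) : IO Unit := do
  let h ← IO.FS.Handle.mk path .write
  h.lock (exclusive := true)  -- blocks until the lock is granted
  try act finally h.unlock
```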
@@ -1350,7 +1350,7 @@ def withTempFile [Monad m] [MonadFinally m] [MonadLiftT IO m] (f : Handle → Fi
removeFile path
/--
Creates a temporary directory in the most secure manner possible, providing its path to an `IO`
Creates a temporary directory in the most secure manner possible, providing a its path to an `IO`
action. Afterwards, all files in the temporary directory are recursively deleted, regardless of how
or when they were created.
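A sketch of the cleanup behavior described above, assuming the usual `IO.FS.withTempDir` signature:

```lean
def scratch : IO String :=
  IO.FS.withTempDir fun dir => do
    IO.FS.writeFile (dir / "note.txt") "hello"
    IO.FS.readFile (dir / "note.txt")
-- the directory and its contents are removed once the action returns
```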
@@ -1480,7 +1480,7 @@ possible to close the child's standard input before the process terminates, whic
@[extern "lean_io_process_spawn"] opaque spawn (args : SpawnArgs) : IO (Child args.toStdioConfig)
/--
Blocks until the child process has exited and returns its exit code.
Blocks until the child process has exited and return its exit code.
-/
@[extern "lean_io_process_child_wait"] opaque Child.wait {cfg : @& StdioConfig} : @& Child cfg → IO UInt32
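A sketch of spawning a process and blocking on its exit code (the command name is illustrative):

```lean
def runTrue : IO UInt32 := do
  let child ← IO.Process.spawn { cmd := "true" }
  child.wait
```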
@@ -1586,7 +1586,7 @@ end Process
/--
POSIX-style file permissions.
The `FileRight` structure describes these permissions for a file's owner, members of its designated
The `FileRight` structure describes these permissions for a file's owner, members of it's designated
group, and all others.
-/
structure AccessRight where
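A sketch of constructing 0o644-style permissions with these structures; the field names (`read`, `write`, `execution`, and `user`/`group`/`other` on `FileRight`) are assumed from current Lean, since the diff elides them:

```lean
def rw_r__r__ : IO.FileRight :=
  { user  := { read := true, write := true,  execution := false }
    group := { read := true, write := false, execution := false }
    other := { read := true, write := false, execution := false } }
```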
@@ -1863,7 +1863,7 @@ unsafe def Runtime.markPersistent (a : α) : BaseIO α := return a
set_option linter.unusedVariables false in
/--
Discards the passed owned reference. This leads to `a` and any object reachable from it never being
Discards the passed owned reference. This leads to `a` any any object reachable from it never being
freed. This can be a useful optimization for eliding deallocation time of big object graphs that are
kept alive close to the end of the process anyway (in which case calling `Runtime.markPersistent`
would be similarly costly to deallocation). It is still considered a safe operation as it cannot

View File

@@ -369,12 +369,6 @@ In this setting all definitions that are not opaque are unfolded.
-/
syntax (name := withUnfoldingAll) "with_unfolding_all " tacticSeq : tactic
/--
`with_unfolding_none tacs` executes `tacs` using the `.none` transparency setting.
In this setting no definitions are unfolded.
-/
syntax (name := withUnfoldingNone) "with_unfolding_none " tacticSeq : tactic
/-- `first | tac | ...` runs each `tac` until one succeeds, or else fails. -/
syntax (name := first) "first " withPosition((ppDedent(ppLine) colGe "| " tacticSeq)+) : tactic
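A small sketch of `first` falling through to the branch that succeeds:

```lean
example (h : a = b) : b = a := by
  first
  | rfl          -- fails: `a` and `b` are not definitionally equal
  | exact h.symm -- succeeds
```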
@@ -518,13 +512,14 @@ syntax location := withPosition(ppGroup(" at" (locationWildcard <|> locationHyp)
assuming these are definitionally equal.
* `change t' at h` will change hypothesis `h : t` to have type `t'`, assuming
assuming `t` and `t'` are definitionally equal.
-/
syntax (name := change) "change " term (location)? : tactic
/--
* `change a with b` will change occurrences of `a` to `b` in the goal,
assuming `a` and `b` are definitionally equal.
* `change a with b at h` similarly changes `a` to `b` in the type of hypothesis `h`.
-/
syntax (name := change) "change " term (location)? : tactic
@[tactic_alt change]
syntax (name := changeWith) "change " term " with " term (location)? : tactic
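A sketch of `change` exploiting definitional equality:

```lean
example (x : Nat) : x + 0 = x := by
  change x = x  -- `x + 0` reduces to `x`, so the goals are definitionally equal
  rfl
```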
/--
@@ -545,7 +540,7 @@ introducing new local definitions.
For example, given a local hypotheses if the form `h : let x := v; b x`, then `extract_lets z at h`
introduces a new local definition `z := v` and changes `h` to be `h : b z`.
-/
syntax (name := extractLets) "extract_lets" ppSpace optConfig (ppSpace colGt (ident <|> hole))* (location)? : tactic
syntax (name := extractLets) "extract_lets " optConfig (ppSpace colGt (ident <|> hole))* (location)? : tactic
/--
Lifts `let` and `have` expressions within a term as far out as possible.
@@ -904,13 +899,8 @@ The tactic supports all the same syntax variants and options as the `let` term.
-/
macro "let" c:letConfig d:letDecl : tactic => `(tactic| refine_lift let $c:letConfig $d:letDecl; ?_)
/--
`let rec f : t := e` adds a recursive definition `f` to the current goal.
The syntax is the same as term-mode `let rec`.
The tactic supports all the same syntax variants and options as the `let` term.
-/
@[tactic_name "let rec"]
/-- `let rec f : t := e` adds a recursive definition `f` to the current goal.
The syntax is the same as term-mode `let rec`. -/
syntax (name := letrec) withPosition(atomic("let " &"rec ") letRecDecls) : tactic
macro_rules
| `(tactic| let rec $d) => `(tactic| refine_lift let rec $d; ?_)
@@ -1216,6 +1206,22 @@ while `congr 2` produces the intended `⊢ x + y = y + x`.
syntax (name := congr) "congr" (ppSpace num)? : tactic
/--
In tactic mode, `if h : t then tac1 else tac2` can be used as alternative syntax for:
```
by_cases h : t
· tac1
· tac2
```
It performs case distinction on `h : t` or `h : ¬t` and `tac1` and `tac2` are the subproofs.
You can use `?_` or `_` for either subproof to delay the goal to after the tactic, but
if a tactic sequence is provided for `tac1` or `tac2` then it will require the goal to be closed
by the end of the block.
-/
syntax (name := tacDepIfThenElse)
ppRealGroup(ppRealFill(ppIndent("if " binderIdent " : " term " then") ppSpace matchRhsTacticSeq)
ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic
/--
In tactic mode, `if t then tac1 else tac2` is alternative syntax for:
@@ -1224,34 +1230,16 @@ by_cases t
· tac1
· tac2
```
It performs case distinction on `h† : t` or `h† : ¬t`, where `h†` is an anonymous hypothesis, and
`tac1` and `tac2` are the subproofs. (It doesn't actually use nondependent `if`, since this wouldn't
add anything to the context and hence would be useless for proving theorems. To actually insert an
`ite` application use `refine if t then ?_ else ?_`.)
The assumptions in each subgoal can be named. `if h : t then tac1 else tac2` can be used as
alternative syntax for:
```
by_cases h : t
· tac1
· tac2
```
It performs case distinction on `h : t` or `h : ¬t`.
You can use `?_` or `_` for either subproof to delay the goal to after the tactic, but
if a tactic sequence is provided for `tac1` or `tac2` then it will require the goal to be closed
by the end of the block.
It performs case distinction on `h† : t` or `h† : ¬t`, where `h†` is an anonymous
hypothesis, and `tac1` and `tac2` are the subproofs. (It doesn't actually use
nondependent `if`, since this wouldn't add anything to the context and hence would be
useless for proving theorems. To actually insert an `ite` application use
`refine if t then ?_ else ?_`.)
-/
syntax (name := tacIfThenElse)
ppRealGroup(ppRealFill(ppIndent("if " term " then") ppSpace matchRhsTacticSeq)
ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic
@[tactic_alt tacIfThenElse]
syntax (name := tacDepIfThenElse)
ppRealGroup(ppRealFill(ppIndent("if " binderIdent " : " term " then") ppSpace matchRhsTacticSeq)
ppDedent(ppSpace) ppRealFill("else " matchRhsTacticSeq)) : tactic
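A sketch of the dependent form, where each branch uses its hypothesis:

```lean
example (n : Nat) : 0 < n ∨ n = 0 := by
  if h : n = 0 then
    exact Or.inr h
  else
    exact Or.inl (Nat.pos_of_ne_zero h)
```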
/--
The tactic `nofun` is shorthand for `exact nofun`: it introduces the assumptions, then performs an
empty pattern match, closing the goal if the introduced pattern is impossible.
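For example, `nofun` introduces the `False` assumption and closes the goal with an empty match:

```lean
example : False → 0 = 1 := by nofun
```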

View File

@@ -58,9 +58,6 @@ syntax (name := attemptAll) "attempt_all " withPosition((ppDedent(ppLine) colGe
/-- Helper internal tactic for implementing the tactic `try?` with parallel execution. -/
syntax (name := attemptAllPar) "attempt_all_par " withPosition((ppDedent(ppLine) colGe "| " tacticSeq)+) : tactic
/-- Helper internal tactic for implementing the tactic `try?` with parallel execution, returning first success. -/
syntax (name := firstPar) "first_par " withPosition((ppDedent(ppLine) colGe "| " tacticSeq)+) : tactic
/-- Helper internal tactic used to implement `evalSuggest` in `try?` -/
syntax (name := tryResult) "try_suggestions " tactic* : tactic

View File

@@ -463,7 +463,7 @@ variable {motive : α → Sort v}
variable (h : α → Nat)
variable (F : (x : α) → ((y : α) → InvImage (· < ·) h y x → motive y) → motive x)
/-- Helper gadget that prevents reduction of `Nat.eager n` unless `n` evaluates to a ground term. -/
/-- Helper gadget that prevents reduction of `Nat.eager n` unless `n` evalutes to a ground term. -/
def Nat.eager (n : Nat) : Nat :=
if Nat.beq n n = true then n else n
@@ -474,8 +474,8 @@ A well-founded fixpoint operator specialized for `Nat`-valued measures. Given a
its higher order function argument `F` to invoke its argument only on values `y` that are smaller
than `x` with regard to `h`.
In contrast to `WellFounded.fix`, this fixpoint operator reduces on closed terms. (More precisely:
when `h x` evaluates to a ground value)
In contrast to to `WellFounded.fix`, this fixpoint operator reduces on closed terms. (More precisely:
when `h x` evalutes to a ground value)
-/
def Nat.fix : (x : α) → motive x :=

View File

@@ -28,8 +28,7 @@ builtin_initialize closedTermCacheExt : EnvExtension ClosedTermCache ←
{ s with map := s.map.insert e c, constNames := s.constNames.insert c, revExprs := e :: s.revExprs })
def cacheClosedTermName (env : Environment) (e : Expr) (n : Name) : Environment :=
closedTermCacheExt.modifyState env fun s =>
{ s with map := s.map.insert e n, constNames := s.constNames.insert n, revExprs := e :: s.revExprs }
closedTermCacheExt.modifyState env fun s => { s with map := s.map.insert e n, constNames := s.constNames.insert n }
def getClosedTermName? (env : Environment) (e : Expr) : Option Name :=
(closedTermCacheExt.getState env).map.find? e

View File

@@ -27,7 +27,6 @@ public import Lean.Compiler.IR.ToIR
public import Lean.Compiler.IR.ToIRType
public import Lean.Compiler.IR.Meta
public import Lean.Compiler.IR.Toposort
public import Lean.Compiler.IR.SimpleGroundExpr
-- The following imports are not required by the compiler. They are here to ensure that there
-- are no orphaned modules.
@@ -72,7 +71,6 @@ def compile (decls : Array Decl) : CompilerM (Array Decl) := do
logDecls `result decls
checkDecls decls
decls ← toposortDecls decls
decls.forM Decl.detectSimpleGround
addDecls decls
inferMeta decls
return decls

View File

@@ -44,7 +44,7 @@ def log (entry : LogEntry) : CompilerM Unit :=
def tracePrefixOptionName := `trace.compiler.ir
private def isLogEnabledFor (opts : Options) (optName : Name) : Bool :=
match opts.get? optName with
match opts.find optName with
| some (DataValue.ofBool v) => v
| _ => opts.getBool tracePrefixOptionName
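With `tracePrefixOptionName` set to `trace.compiler.ir`, individual IR passes can be toggled via options such as the following (a real option in current Lean, shown as a sketch):

```lean
set_option trace.compiler.ir.result true in
def double (n : Nat) : Nat := n + n
```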
@@ -186,7 +186,7 @@ def getDecl (n : Name) : CompilerM Decl := do
def findLocalDecl (n : Name) : CompilerM (Option Decl) :=
return declMapExt.getState (← getEnv) |>.find? n
/-- Returns the list of IR declarations in reverse declaration order. -/
/-- Returns the list of IR declarations in declaration order. -/
def getDecls (env : Environment) : List Decl :=
declMapExt.getEntries env

View File

@@ -12,7 +12,6 @@ public import Lean.Compiler.IR.NormIds
public import Lean.Compiler.IR.SimpCase
public import Lean.Compiler.IR.Boxing
public import Lean.Compiler.ModPkgExt
import Lean.Compiler.IR.SimpleGroundExpr
public section
@@ -77,26 +76,6 @@ def toCType : IRType → String
| IRType.struct _ _ => panic! "not implemented yet"
| IRType.union _ _ => panic! "not implemented yet"
def toHexDigit (c : Nat) : String :=
String.singleton c.digitChar
def quoteString (s : String) : String :=
let q := "\"";
let q := s.foldl
(fun q c => q ++
if c == '\n' then "\\n"
else if c == '\r' then "\\r"
else if c == '\t' then "\\t"
else if c == '\\' then "\\\\"
else if c == '\"' then "\\\""
else if c == '?' then "\\?" -- avoid trigraphs
else if c.toNat <= 31 then
"\\x" ++ toHexDigit (c.toNat / 16) ++ toHexDigit (c.toNat % 16)
-- TODO(Leo): we should use `\unnnn` for escaping unicode characters.
else String.singleton c)
q;
q ++ "\""
def throwInvalidExportName {α : Type} (n : Name) : M α :=
throw s!"invalid export name '{n}'"
@@ -122,160 +101,30 @@ def toCInitName (n : Name) : M String := do
def emitCInitName (n : Name) : M Unit :=
toCInitName n >>= emit
def ctorScalarSizeStr (usize : Nat) (ssize : Nat) : String :=
if usize == 0 then toString ssize
else if ssize == 0 then s!"sizeof(size_t)*{usize}"
else s!"sizeof(size_t)*{usize} + {ssize}"
structure GroundState where
auxCounter : Nat := 0
abbrev GroundM := StateT GroundState M
partial def emitGroundDecl (decl : Decl) (cppBaseName : String) : M Unit := do
let some ground := getSimpleGroundExpr (← getEnv) decl.name | unreachable!
discard <| compileGround ground |>.run {}
where
compileGround (e : SimpleGroundExpr) : GroundM Unit := do
let valueName ← compileGroundToValue e
let declPrefix := if isClosedTermName (← getEnv) decl.name then "static" else "LEAN_EXPORT"
emitLn <| s!"{declPrefix} const lean_object* {cppBaseName} = (const lean_object*)&{valueName};"
compileGroundToValue (e : SimpleGroundExpr) : GroundM String := do
match e with
| .ctor cidx objArgs usizeArgs scalarArgs =>
let val ← compileCtor cidx objArgs usizeArgs scalarArgs
mkValueCLit "lean_ctor_object" val
| .string data =>
let leanStringTag := 249
let header := mkHeader 0 0 leanStringTag
let size := data.utf8ByteSize + 1 -- null byte
let length := data.length
let data : String := quoteString data
mkValueCLit
"lean_string_object"
s!"\{.m_header = {header}, .m_size = {size}, .m_capacity = {size}, .m_length = {length}, .m_data = {data}}"
| .pap func args =>
let numFixed := args.size
let leanClosureTag := 245
let header := mkHeader s!"sizeof(lean_closure_object) + sizeof(void*)*{numFixed}" 0 leanClosureTag
let funPtr := s!"(void*){← toCName func}"
let arity := (← getDecl func).params.size
let args ← args.mapM groundArgToCLit
let argArray := String.intercalate "," args.toList
mkValueCLit
"lean_closure_object"
s!"\{.m_header = {header}, .m_fun = {funPtr}, .m_arity = {arity}, .m_num_fixed = {numFixed}, .m_objs = \{{argArray}} }"
| .nameMkStr args =>
let obj ← groundNameMkStrToCLit args
mkValueCLit "lean_ctor_object" obj
| .reference refDecl => findValueDecl refDecl
mkValueName (name : String) : String :=
name ++ "_value"
mkAuxValueName (name : String) (idx : Nat) : String :=
mkValueName name ++ s!"_aux_{idx}"
mkAuxDecl (type value : String) : GroundM String := do
let idx ← modifyGet fun s => (s.auxCounter, { s with auxCounter := s.auxCounter + 1 })
let name := mkAuxValueName cppBaseName idx
emitLn <| s!"static const {type} {name} = {value};"
return name
mkValueCLit (type value : String) : GroundM String := do
let valueName := mkValueName cppBaseName
emitLn <| s!"static const {type} {valueName} = {value};"
return valueName
groundNameMkStrToCLit (args : Array (Name × UInt64)) : GroundM String := do
assert! args.size > 0
if args.size == 1 then
let (ref, hash) := args[0]!
let hash := uint64ToByteArrayLE hash
compileCtor 1 #[.tagged 0, .reference ref] #[] hash
else
let (ref, hash) := args.back!
let args := args.pop
let lit ← groundNameMkStrToCLit args
let auxName ← mkAuxDecl "lean_ctor_object" lit
let hash := uint64ToByteArrayLE hash
compileCtor 1 #[.rawReference auxName, .reference ref] #[] hash
groundArgToCLit (a : SimpleGroundArg) : GroundM String := do
match a with
| .tagged val => return s!"((lean_object*)(((size_t)({val}) << 1) | 1))"
| .reference decl => return s!"((lean_object*)&{← findValueDecl decl})"
| .rawReference decl => return s!"((lean_object*)&{decl})"
findValueDecl (decl : Name) : GroundM String := do
let mut decl := decl
while true do
if let some (.reference ref) := getSimpleGroundExpr (← getEnv) decl then
decl := ref
else
break
return mkValueName (← toCName decl)
compileCtor (cidx : Nat) (objArgs : Array SimpleGroundArg) (usizeArgs : Array USize)
(scalarArgs : Array UInt8) : GroundM String := do
let header := mkCtorHeader objArgs.size usizeArgs.size scalarArgs.size cidx
let objArgs ← objArgs.mapM groundArgToCLit
let usizeArgs : Array String := usizeArgs.map fun val => s!"(lean_object*)(size_t)({val}ULL)"
assert! scalarArgs.size % 8 == 0
let scalarArgs : Array String := Id.run do
let chunks := scalarArgs.size / 8
let mut packed := Array.emptyWithCapacity chunks
for idx in 0...chunks do
let b1 := scalarArgs[idx * 8]!
let b2 := scalarArgs[idx * 8 + 1]!
let b3 := scalarArgs[idx * 8 + 2]!
let b4 := scalarArgs[idx * 8 + 3]!
let b5 := scalarArgs[idx * 8 + 4]!
let b6 := scalarArgs[idx * 8 + 5]!
let b7 := scalarArgs[idx * 8 + 6]!
let b8 := scalarArgs[idx * 8 + 7]!
let lit := s!"LEAN_SCALAR_PTR_LITERAL({b1}, {b2}, {b3}, {b4}, {b5}, {b6}, {b7}, {b8})"
packed := packed.push lit
return packed
let argArray := String.intercalate "," (objArgs ++ usizeArgs ++ scalarArgs).toList
return s!"\{.m_header = {header}, .m_objs = \{{argArray}}}"
mkCtorHeader (numObjs : Nat) (usize : Nat) (ssize : Nat) (tag : Nat) : String :=
let size := s!"sizeof(lean_ctor_object) + sizeof(void*)*{numObjs} + {ctorScalarSizeStr usize ssize}"
mkHeader size numObjs tag
mkHeader {α : Type} [ToString α] (csSz : α) (other : Nat) (tag : Nat) : String :=
s!"\{.m_rc = 0, .m_cs_sz = {csSz}, .m_other = {other}, .m_tag = {tag}}"
def emitFnDeclAux (decl : Decl) (cppBaseName : String) (isExternal : Bool) : M Unit := do
let ps := decl.params
let env ← getEnv
if isSimpleGroundDecl env decl.name then
emitGroundDecl decl cppBaseName
if ps.isEmpty then
if isExternal then emit "extern "
else if isClosedTermName env decl.name then emit "static "
else emit "LEAN_EXPORT "
else
if ps.isEmpty then
if isExternal then emit "extern "
else if isClosedTermName env decl.name then emit "static "
else emit "LEAN_EXPORT "
if !isExternal then emit "LEAN_EXPORT "
emit (toCType decl.resultType ++ " " ++ cppBaseName)
unless ps.isEmpty do
emit "("
-- We omit void parameters, note that they are guaranteed not to occur in boxed functions
let ps := ps.filter (fun p => !p.ty.isVoid)
-- We omit erased parameters for extern constants
let ps := if isExternC env decl.name then ps.filter (fun p => !p.ty.isErased) else ps
if ps.size > closureMaxArgs && isBoxedName decl.name then
emit "lean_object**"
else
if !isExternal then emit "LEAN_EXPORT "
emit (toCType decl.resultType ++ " " ++ cppBaseName)
unless ps.isEmpty do
emit "("
-- We omit void parameters, note that they are guaranteed not to occur in boxed functions
let ps := ps.filter (fun p => !p.ty.isVoid)
-- We omit erased parameters for extern constants
let ps := if isExternC env decl.name then ps.filter (fun p => !p.ty.isErased) else ps
if ps.size > closureMaxArgs && isBoxedName decl.name then
emit "lean_object**"
else
ps.size.forM fun i _ => do
if i > 0 then emit ", "
emit (toCType ps[i].ty)
emit ")"
emitLn ";"
ps.size.forM fun i _ => do
if i > 0 then emit ", "
emit (toCType ps[i].ty)
emit ")"
emitLn ";"
def emitFnDecl (decl : Decl) (isExternal : Bool) : M Unit := do
let cppBaseName ← toCName decl.name
@@ -288,9 +137,10 @@ def emitExternDeclAux (decl : Decl) (cNameStr : String) : M Unit := do
def emitFnDecls : M Unit := do
let env ← getEnv
let decls := getDecls env |>.reverse
let decls := getDecls env
let modDecls : NameSet := decls.foldl (fun s d => s.insert d.name) {}
let usedDecls := collectUsedDecls env decls
let usedDecls : NameSet := decls.foldl (fun s d => collectUsedDecls env d (s.insert d.name)) {}
let usedDecls := usedDecls.toList
usedDecls.forM fun n => do
let decl ← getDecl n;
match getExternNameFor env `c decl.name with
@@ -503,8 +353,10 @@ def emitArgs (ys : Array Arg) : M Unit :=
if i > 0 then emit ", "
emitArg ys[i]
def emitCtorScalarSize (usize : Nat) (ssize : Nat) : M Unit :=
emit <| ctorScalarSizeStr usize ssize
def emitCtorScalarSize (usize : Nat) (ssize : Nat) : M Unit := do
if usize == 0 then emit ssize
else if ssize == 0 then emit "sizeof(size_t)*"; emit usize
else emit "sizeof(size_t)*"; emit usize; emit " + "; emit ssize
def emitAllocCtor (c : CtorInfo) : M Unit := do
emit "lean_alloc_ctor("; emit c.cidx; emit ", "; emit c.size; emit ", "
@@ -583,18 +435,12 @@ def emitExternCall (f : FunId) (ps : Array Param) (extData : ExternAttrData) (ys
| some (ExternEntry.inline _ pat) => do emit (expandExternPattern pat (toStringArgs ys)); emitLn ";"
| _ => throw s!"failed to emit extern application '{f}'"
def emitLeanFunReference (f : FunId) : M Unit := do
if isSimpleGroundDecl (← getEnv) f then
emit s!"((lean_object*)({← toCName f}))"
else
emitCName f
def emitFullApp (z : VarId) (f : FunId) (ys : Array Arg) : M Unit := do
emitLhs z
let decl ← getDecl f
match decl with
| .fdecl (xs := ps) .. | .extern (xs := ps) (ext := { entries := [.opaque], .. }) .. =>
emitLeanFunReference f
emitCName f
if ys.size > 0 then
let (ys, _) := ys.zip ps |>.filter (fun (_, p) => !p.ty.isVoid) |>.unzip
emit "("; emitArgs ys; emit ")"
@@ -636,6 +482,26 @@ def emitUnbox (z : VarId) (t : IRType) (x : VarId) : M Unit := do
def emitIsShared (z : VarId) (x : VarId) : M Unit := do
emitLhs z; emit "!lean_is_exclusive("; emit x; emitLn ");"
def toHexDigit (c : Nat) : String :=
String.singleton c.digitChar
def quoteString (s : String) : String :=
let q := "\"";
let q := s.foldl
(fun q c => q ++
if c == '\n' then "\\n"
else if c == '\r' then "\\r"
else if c == '\t' then "\\t"
else if c == '\\' then "\\\\"
else if c == '\"' then "\\\""
else if c == '?' then "\\?" -- avoid trigraphs
else if c.toNat <= 31 then
"\\x" ++ toHexDigit (c.toNat / 16) ++ toHexDigit (c.toNat % 16)
-- TODO(Leo): we should use `\unnnn` for escaping unicode characters.
else String.singleton c)
q;
q ++ "\""
def emitNumLit (t : IRType) (v : Nat) : M Unit := do
if t.isObj then
if v < UInt32.size then
@@ -804,7 +670,7 @@ def emitDeclAux (d : Decl) : M Unit := do
let env ← getEnv
let (_, jpMap) := mkVarJPMaps d
withReader (fun ctx => { ctx with jpMap := jpMap }) do
unless hasInitAttr env d.name || isSimpleGroundDecl env d.name do
unless hasInitAttr env d.name do
match d with
| .fdecl (f := f) (xs := xs) (type := t) (body := b) .. =>
let baseName ← toCName f;
@@ -883,8 +749,7 @@ def emitDeclInit (d : Decl) : M Unit := do
if getBuiltinInitFnNameFor? env d.name |>.isSome then
emit "}"
| _ =>
if !isSimpleGroundDecl env d.name then
emitCName n; emit " = "; emitCInitName n; emitLn "();"; emitMarkPersistent d n
emitCName n; emit " = "; emitCInitName n; emitLn "();"; emitMarkPersistent d n
def emitInitFn : M Unit := do
let env ← getEnv

View File

@@ -31,7 +31,6 @@ time. These changes can likely be done similar to the ones in EmitC:
- function decls need to be fixed
- full applications need to be fixed
- tail calls need to be fixed
- closed term static initializers
-/
def leanMainFn := "_lean_main"
@@ -538,12 +537,14 @@ def emitFnDecls : M llvmctx Unit := do
let env ← getEnv
let decls := getDecls env
let modDecls : NameSet := decls.foldl (fun s d => s.insert d.name) {}
let usedDecls := collectUsedDecls env decls
usedDecls.forM fun n => do
let decl ← getDecl n;
let usedDecls : NameSet := decls.foldl (fun s d => collectUsedDecls env d (s.insert d.name)) {}
let usedDecls := usedDecls.toList
for n in usedDecls do
let decl ← getDecl n
match getExternNameFor env `c decl.name with
| some cName => emitExternDeclAux decl cName
| none => emitFnDecl decl (!modDecls.contains n)
return ()
def emitLhsSlot_ (x : VarId) : M llvmctx (LLVM.LLVMType llvmctx × LLVM.Value llvmctx) := do
let state ← get

View File

@@ -25,19 +25,10 @@ def usesModuleFrom (env : Environment) (modulePrefix : Name) : Bool :=
namespace CollectUsedDecls
structure State where
set : NameSet := {}
order : Array Name := #[]
abbrev M := ReaderT Environment (StateM State)
abbrev M := ReaderT Environment (StateM NameSet)
@[inline] def collect (f : FunId) : M Unit :=
modify fun { set, order } =>
let (contained, set) := set.containsThenInsert f
if !contained then
{ set, order := order.push f }
else
{ set, order }
modify fun s => s.insert f
partial def collectFnBody : FnBody → M Unit
| .vdecl _ _ v b =>
@@ -55,19 +46,14 @@ def collectInitDecl (fn : Name) : M Unit := do
| some initFn => collect initFn
| _ => pure ()
def collectDecl : Decl → M Unit
| .fdecl (f := f) (body := b) .. => collectInitDecl f *> CollectUsedDecls.collectFnBody b
| .extern (f := f) .. => collectInitDecl f
def collectDeclLoop (decls : List Decl) : M Unit := do
decls.forM fun decl => do
collectDecl decl
collect decl.name
def collectDecl : Decl → M NameSet
| .fdecl (f := f) (body := b) .. => collectInitDecl f *> CollectUsedDecls.collectFnBody b *> get
| .extern (f := f) .. => collectInitDecl f *> get
end CollectUsedDecls
def collectUsedDecls (env : Environment) (decls : List Decl) : Array Name :=
(CollectUsedDecls.collectDeclLoop decls env).run {} |>.snd.order
def collectUsedDecls (env : Environment) (decl : Decl) (used : NameSet := {}) : NameSet :=
(CollectUsedDecls.collectDecl decl env).run' used
abbrev VarTypeMap := Std.HashMap VarId IRType
abbrev JPParamsMap := Std.HashMap JoinPointId (Array Param)

View File

@@ -1,355 +0,0 @@
/-
Copyright (c) 2026 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Lean.Compiler.IR.CompilerM
public import Lean.EnvExtension
import Lean.Compiler.ClosedTermCache
/-!
This module contains logic for detecting simple ground expressions that can be extracted into
statically initializable variables. To do this it attempts to compile declarations into
a simple language of expressions, `SimpleGroundExpr`. If this attempt succeeds it stores the result
in an environment extension, accessible through `getSimpleGroundExpr`. Later on the code emission
step can reference this environment extension to generate static initializers for the respective
declaration.
-/
namespace Lean
namespace IR
/--
An argument to a `SimpleGroundExpr`. They get compiled to `lean_object*` in various ways.
-/
public inductive SimpleGroundArg where
/--
A simple tagged literal.
-/
| tagged (val : Nat)
/--
A reference to another declaration that was marked as a simple ground expression. This gets
compiled to a reference to the mangled version of the name.
-/
| reference (n : Name)
/--
A reference directly to a raw C name. This gets compiled to a reference to the name directly.
-/
| rawReference (s : String)
deriving Inhabited
/--
A simple ground expression that can be turned into a static initializer.
-/
public inductive SimpleGroundExpr where
/--
Represents a `lean_ctor_object`. Crucially the `scalarArgs` array must have a size that is a
multiple of 8.
-/
| ctor (cidx : Nat) (objArgs : Array SimpleGroundArg) (usizeArgs : Array USize) (scalarArgs : Array UInt8)
/--
A string literal, represented by a `lean_string_object`.
-/
| string (data : String)
/--
A partial application, represented by a `lean_closure_object`.
-/
| pap (func : FunId) (args : Array SimpleGroundArg)
/--
An application of `Lean.Name.mkStrX`. This expression is represented separately to ensure that
long name literals get extracted into statically initializable constants. Each argument carries
both the name of the string literal it references and the hash of the name up to that
point, which keeps emitting the literal as simple as possible.
-/
| nameMkStr (args : Array (Name × UInt64))
/--
A reference to another declaration that was marked as a simple ground expression. This gets
compiled to a reference to the mangled version of the name.
-/
| reference (n : Name)
deriving Inhabited
public structure SimpleGroundExtState where
constNames : PHashMap Name SimpleGroundExpr := {}
revNames : List Name := []
deriving Inhabited
builtin_initialize simpleGroundDeclExt : EnvExtension SimpleGroundExtState
registerEnvExtension (pure {}) (asyncMode := .sync)
(replay? := some fun oldState newState _ s =>
let newNames := newState.revNames.take (newState.revNames.length - oldState.revNames.length)
newNames.foldl (init := s) fun s n =>
let g := newState.constNames.find! n
{ s with constNames := s.constNames.insert n g, revNames := n :: s.revNames }
)
/--
Record `declName` as mapping to the simple ground expr `expr`.
-/
public def addSimpleGroundDecl (env : Environment) (declName : Name) (expr : SimpleGroundExpr) :
Environment :=
simpleGroundDeclExt.modifyState env fun s =>
{ s with constNames := s.constNames.insert declName expr, revNames := declName :: s.revNames }
/--
Attempt to fetch a `SimpleGroundExpr` associated with `declName` if it exists.
-/
public def getSimpleGroundExpr (env : Environment) (declName : Name) : Option SimpleGroundExpr :=
(simpleGroundDeclExt.getState env).constNames.find? declName
/--
Like `getSimpleGroundExpr` but recursively traverses `reference` exprs to get to actual ground
values.
-/
public def getSimpleGroundExprWithResolvedRefs (env : Environment) (declName : Name) :
Option SimpleGroundExpr := Id.run do
let mut declName := declName
while true do
let val := getSimpleGroundExpr env declName
match val with
| some (.reference ref) => declName := ref
| other => return other
return none
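The reference-chasing loop above can be modeled in isolation as follows (a sketch with hypothetical names; `Std.HashMap` stands in for the environment extension, and the iteration count is bounded instead of using `while true`):

```lean
-- Minimal standalone model of `getSimpleGroundExprWithResolvedRefs`.
inductive Gex where
  | string (s : String)
  | reference (n : String)

def resolve (m : Std.HashMap String Gex) (n : String) : Option Gex := Id.run do
  let mut n := n
  for _ in [0:m.size + 1] do  -- a chain can be at most `m.size` long
    match m[n]? with
    | some (.reference n') => n := n'  -- keep chasing the reference
    | other => return other            -- ground value (or dangling ref)
  return none
```

A dangling reference (no entry for the target name) yields `none`, matching the behavior of the real lookup.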
/--
Check if `declName` is recorded as being a `SimpleGroundExpr`.
-/
public def isSimpleGroundDecl (env : Environment) (declName : Name) : Bool :=
(simpleGroundDeclExt.getState env).constNames.contains declName
public def uint64ToByteArrayLE (n : UInt64) : Array UInt8 :=
#[
n.toUInt8,
(n >>> 0x08).toUInt8,
(n >>> 0x10).toUInt8,
(n >>> 0x18).toUInt8,
(n >>> 0x20).toUInt8,
(n >>> 0x28).toUInt8,
(n >>> 0x30).toUInt8,
(n >>> 0x38).toUInt8,
]
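A quick sanity check of the little-endian decomposition (expected bytes written in hex for readability; `#eval` would print them in decimal):

```lean
#eval uint64ToByteArrayLE 0x1122334455667788
-- least significant byte first:
-- #[0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11]
```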
inductive SimpleGroundValue where
| arg (arg : SimpleGroundArg)
| uint8 (val : UInt8)
| uint16 (val : UInt16)
| uint32 (val : UInt32)
| uint64 (val : UInt64)
| usize (val : USize)
deriving Inhabited
structure State where
groundMap : Std.HashMap VarId SimpleGroundValue := {}
abbrev M := StateRefT State $ OptionT CompilerM
/--
Attempt to compile `b` into a `SimpleGroundExpr`. If `b` is not compilable, return `none`.
The compiler currently supports the following patterns:
- String literals
- Partial applications with other simple expressions
- Constructor calls with other simple expressions
- `Name.mkStrX`, `Name.str._override`, and `Name.num._override`
- References to other declarations marked as simple ground expressions
-/
partial def compileToSimpleGroundExpr (b : FnBody) : CompilerM (Option SimpleGroundExpr) :=
compileFnBody b |>.run' {} |>.run
where
compileFnBody (b : FnBody) : M SimpleGroundExpr := do
match b with
| .vdecl id _ expr (.ret (.var id')) =>
guard <| id == id'
compileFinalExpr expr
| .vdecl id ty expr b => compileNonFinalExpr id ty expr b
| _ => failure
@[inline]
record (id : VarId) (val : SimpleGroundValue) : M Unit :=
modify fun s => { s with groundMap := s.groundMap.insert id val }
compileNonFinalExpr (id : VarId) (ty : IRType) (expr : Expr) (b : FnBody) : M SimpleGroundExpr := do
match expr with
| .fap c #[] =>
guard <| isSimpleGroundDecl (← getEnv) c
record id (.arg (.reference c))
compileFnBody b
| .lit v =>
match v with
| .num v =>
match ty with
| .tagged =>
guard <| v < 2^31
record id (.arg (.tagged v))
| .uint8 => record id (.uint8 (.ofNat v))
| .uint16 => record id (.uint16 (.ofNat v))
| .uint32 => record id (.uint32 (.ofNat v))
| .uint64 => record id (.uint64 (.ofNat v))
| .usize => record id (.usize (.ofNat v))
| _ => failure
compileFnBody b
| .str .. => failure
| .ctor i objArgs =>
if i.isScalar then
record id (.arg (.tagged i.cidx))
compileFnBody b
else
let objArgs ← compileArgs objArgs
let usizeArgs := Array.replicate i.usize 0
-- Round the scalar size up to a multiple of 8 bytes to match `lean_object*` alignment
let align (v a : Nat) : Nat :=
(v / a) * a + a * (if v % a != 0 then 1 else 0)
let alignedSsize := align i.ssize 8
let ssizeArgs := Array.replicate alignedSsize 0
compileSetChain id i objArgs usizeArgs ssizeArgs b
| _ => failure
compileSetChain (id : VarId) (info : CtorInfo) (objArgs : Array SimpleGroundArg) (usizeArgs : Array USize)
(scalarArgs : Array UInt8) (b : FnBody) : M SimpleGroundExpr := do
match b with
| .ret (.var id') =>
guard <| id == id'
return .ctor info.cidx objArgs usizeArgs scalarArgs
| .sset id' i offset y _ b =>
guard <| id == id'
let i := i - objArgs.size - usizeArgs.size
let offset := i * 8 + offset
let scalarArgs ←
match (← get).groundMap[y]! with
| .uint8 v =>
let scalarArgs := scalarArgs.set! offset v
pure scalarArgs
| .uint16 v =>
let scalarArgs := scalarArgs.set! offset v.toUInt8
let scalarArgs := scalarArgs.set! (offset + 1) (v >>> 0x08).toUInt8
pure scalarArgs
| .uint32 v =>
let scalarArgs := scalarArgs.set! offset v.toUInt8
let scalarArgs := scalarArgs.set! (offset + 1) (v >>> 0x08).toUInt8
let scalarArgs := scalarArgs.set! (offset + 2) (v >>> 0x10).toUInt8
let scalarArgs := scalarArgs.set! (offset + 3) (v >>> 0x18).toUInt8
pure scalarArgs
| .uint64 v =>
let scalarArgs := scalarArgs.set! offset v.toUInt8
let scalarArgs := scalarArgs.set! (offset + 1) (v >>> 0x08).toUInt8
let scalarArgs := scalarArgs.set! (offset + 2) (v >>> 0x10).toUInt8
let scalarArgs := scalarArgs.set! (offset + 3) (v >>> 0x18).toUInt8
let scalarArgs := scalarArgs.set! (offset + 4) (v >>> 0x20).toUInt8
let scalarArgs := scalarArgs.set! (offset + 5) (v >>> 0x28).toUInt8
let scalarArgs := scalarArgs.set! (offset + 6) (v >>> 0x30).toUInt8
let scalarArgs := scalarArgs.set! (offset + 7) (v >>> 0x38).toUInt8
pure scalarArgs
| _ => failure
compileSetChain id info objArgs usizeArgs scalarArgs b
| .uset id' i y b =>
guard <| id == id'
let i := i - objArgs.size
let .usize v := (← get).groundMap[y]! | failure
let usizeArgs := usizeArgs.set! i v
compileSetChain id info objArgs usizeArgs scalarArgs b
| _ => failure
compileFinalExpr (e : Expr) : M SimpleGroundExpr := do
match e with
| .lit v =>
match v with
| .str v => return .string v
| .num .. => failure
| .ctor i args =>
guard <| i.usize == 0 && i.ssize == 0 && !args.isEmpty
return .ctor i.cidx (← compileArgs args) #[] #[]
| .fap ``Name.num._override args =>
let pre ← compileArg args[0]!
let .tagged i ← compileArg args[1]! | failure
let name := Name.num (← interpNameLiteral pre) i
let hash := name.hash
return .ctor 2 #[pre, .tagged i] #[] (uint64ToByteArrayLE hash)
| .fap ``Name.str._override args =>
let pre ← compileArg args[0]!
let (ref, str) ← compileStrArg args[1]!
let name := Name.str (← interpNameLiteral pre) str
let hash := name.hash
return .ctor 1 #[pre, .reference ref] #[] (uint64ToByteArrayLE hash)
| .fap ``Name.mkStr1 args
| .fap ``Name.mkStr2 args
| .fap ``Name.mkStr3 args
| .fap ``Name.mkStr4 args
| .fap ``Name.mkStr5 args
| .fap ``Name.mkStr6 args
| .fap ``Name.mkStr7 args
| .fap ``Name.mkStr8 args =>
let mut nameAcc := Name.anonymous
let mut processedArgs := Array.emptyWithCapacity args.size
for arg in args do
let (ref, str) ← compileStrArg arg
nameAcc := .str nameAcc str
processedArgs := processedArgs.push (ref, nameAcc.hash)
return .nameMkStr processedArgs
| .pap c ys => return .pap c (← compileArgs ys)
| .fap c #[] =>
guard <| isSimpleGroundDecl (← getEnv) c
return .reference c
| _ => failure
compileArg (arg : Arg) : M SimpleGroundArg := do
match arg with
| .var var =>
let .arg arg := (← get).groundMap[var]! | failure
return arg
| .erased => return .tagged 0
compileArgs (args : Array Arg) : M (Array SimpleGroundArg) := do
args.mapM compileArg
compileStrArg (arg : Arg) : M (Name × String) := do
let .var var := arg | failure
let (.arg (.reference ref)) := (← get).groundMap[var]! | failure
let some (.string val) := getSimpleGroundExprWithResolvedRefs (← getEnv) ref | failure
return (ref, val)
interpStringLiteral (arg : SimpleGroundArg) : M String := do
let .reference ref := arg | failure
let some (.string val) := getSimpleGroundExprWithResolvedRefs (← getEnv) ref | failure
return val
interpNameLiteral (arg : SimpleGroundArg) : M Name := do
match arg with
| .tagged 0 => return .anonymous
| .reference ref =>
match getSimpleGroundExprWithResolvedRefs (← getEnv) ref with
| some (.ctor 1 #[pre, .reference ref] _ _) =>
let pre ← interpNameLiteral pre
let str ← interpStringLiteral (.reference ref)
return .str pre str
| some (.ctor 2 #[pre, .tagged i] _ _) =>
let pre ← interpNameLiteral pre
return .num pre i
| some (.nameMkStr args) =>
args.foldlM (init := .anonymous) fun acc (ref, _) => do
let part ← interpStringLiteral (.reference ref)
return .str acc part
| _ => failure
| _ => failure
/--
Detect whether `d` can be compiled to a `SimpleGroundExpr`. If it can, record the associated
`SimpleGroundExpr` in the environment for later processing by code emission.
-/
public def Decl.detectSimpleGround (d : Decl) : CompilerM Unit := do
let .fdecl (body := body) (xs := params) (type := type) .. := d | return ()
if type.isPossibleRef && params.isEmpty then
if let some groundExpr ← compileToSimpleGroundExpr body then
trace[compiler.ir.simple_ground] m!"Marked {d.name} as simple ground expr"
modifyEnv fun env => addSimpleGroundDecl env d.name groundExpr
builtin_initialize registerTraceClass `compiler.ir.simple_ground (inherited := true)
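With the trace class registered above, the detection result can presumably be inspected per declaration (hypothetical example; the exact trace wording follows the message in `Decl.detectSimpleGround`):

```lean
set_option trace.compiler.ir.simple_ground true in
def answer : String := "forty-two"
-- expected trace (roughly): Marked answer as simple ground expr
```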
end IR
end Lean

View File

@@ -7,7 +7,6 @@ module
prelude
public import Lean.Attributes
import Lean.Meta.RecExt
public section
@@ -34,8 +33,14 @@ private def isValidMacroInline (declName : Name) : CoreM Bool := do
unless info.all.length = 1 do
-- We do not allow `[macro_inline]` attributes on mutually recursive definitions
return false
if (← Meta.isRecursiveDefinition declName) then
-- It is recursive
let env ← getEnv
let isRec (declName' : Name) : Bool :=
isBRecOnRecursor env declName' ||
declName' == ``WellFounded.fix ||
declName' == ``WellFounded.Nat.fix ||
declName' == declName ++ `_unary -- Auxiliary declaration created by `WF` module
if Option.isSome <| info.value.find? fun e => e.isConst && isRec e.constName! then
-- It contains a `brecOn` or `WellFounded.fix` application. So, it should be recursive
return false
return true
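As a sketch of what this check rejects (hypothetical example, not from this diff): a directly recursive definition elaborates to a value containing a `brecOn` or `WellFounded.fix` application, which the scan above detects:

```lean
-- `len` is recursive, so its compiled value references `List.brecOn`
-- (or a `WellFounded.fix` auxiliary), and the attribute would be refused.
@[macro_inline] def len : List α → Nat
  | [] => 0
  | _ :: xs => len xs + 1
```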

View File

@@ -45,4 +45,3 @@ public import Lean.Compiler.LCNF.LambdaLifting
public import Lean.Compiler.LCNF.ReduceArity
public import Lean.Compiler.LCNF.Probing
public import Lean.Compiler.LCNF.Irrelevant
public import Lean.Compiler.LCNF.SplitSCC

View File

@@ -147,11 +147,18 @@ inductive Alt where
| alt (ctorName : Name) (params : Array Param) (code : Code)
| default (code : Code)
inductive FunDecl where
| mk (fvarId : FVarId) (binderName : Name) (params : Array Param) (type : Expr) (value : Code)
structure FunDecl where
fvarId : FVarId
binderName : Name
params : Array Param
type : Expr
value : Code
inductive Cases where
| mk (typeName : Name) (resultType : Expr) (discr : FVarId) (alts : Array Alt)
structure Cases where
typeName : Name
resultType : Expr
discr : FVarId
alts : Array Alt
deriving Inhabited
inductive Code where
@@ -166,57 +173,6 @@ inductive Code where
end
@[inline]
def FunDecl.fvarId : FunDecl → FVarId
| .mk (fvarId := fvarId) .. => fvarId
@[inline]
def FunDecl.binderName : FunDecl → Name
| .mk (binderName := binderName) .. => binderName
@[inline]
def FunDecl.params : FunDecl → Array Param
| .mk (params := params) .. => params
@[inline]
def FunDecl.type : FunDecl → Expr
| .mk (type := type) .. => type
@[inline]
def FunDecl.value : FunDecl → Code
| .mk (value := value) .. => value
@[inline]
def FunDecl.updateBinderName : FunDecl → Name → FunDecl
| .mk fvarId _ params type value, new =>
.mk fvarId new params type value
@[inline]
def FunDecl.toParam (decl : FunDecl) (borrow : Bool) : Param :=
match decl with
| .mk fvarId binderName _ type .. => ⟨fvarId, binderName, type, borrow⟩
@[inline]
def Cases.typeName : Cases → Name
| .mk (typeName := typeName) .. => typeName
@[inline]
def Cases.resultType : Cases → Expr
| .mk (resultType := resultType) .. => resultType
@[inline]
def Cases.discr : Cases → FVarId
| .mk (discr := discr) .. => discr
@[inline]
def Cases.alts : Cases → Array Alt
| .mk (alts := alts) .. => alts
@[inline]
def Cases.updateAlts : Cases → Array Alt → Cases
| .mk typeName resultType discr _, new =>
.mk typeName resultType discr new
deriving instance Inhabited for Alt
deriving instance Inhabited for FunDecl
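The payoff of turning these single-constructor inductives into structures shows up at the call sites: fields are read via projections and updated with structure-update syntax instead of full pattern matches. A minimal illustration of the idiom, using a hypothetical cut-down type:

```lean
structure Cases' where
  typeName : String
  alts : Array Nat

-- `{ c with alts }` copies `typeName` unchanged and swaps in the new `alts`,
-- replacing a manual `.mk typeName _ => .mk typeName alts` pattern match.
def replaceAlts (c : Cases') (alts : Array Nat) : Cases' :=
  { c with alts }
```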
@@ -325,18 +281,14 @@ private unsafe def updateAltImp (alt : Alt) (ps' : Array Param) (k' : Code) : Al
@[inline] private unsafe def updateAltsImp (c : Code) (alts : Array Alt) : Code :=
match c with
| .cases cs => if ptrEq cs.alts alts then c else .cases <| cs.updateAlts alts
| .cases cs => if ptrEq cs.alts alts then c else .cases { cs with alts }
| _ => unreachable!
@[implemented_by updateAltsImp] opaque Code.updateAlts! (c : Code) (alts : Array Alt) : Code
@[inline] private unsafe def updateCasesImp (c : Code) (resultType : Expr) (discr : FVarId) (alts : Array Alt) : Code :=
match c with
| .cases cs =>
if ptrEq cs.alts alts && ptrEq cs.resultType resultType && cs.discr == discr then
c
else
.cases <| ⟨cs.typeName, resultType, discr, alts⟩
| .cases cs => if ptrEq cs.alts alts && ptrEq cs.resultType resultType && cs.discr == discr then c else .cases { cs with discr, resultType, alts }
| _ => unreachable!
@[implemented_by updateCasesImp] opaque Code.updateCases! (c : Code) (resultType : Expr) (discr : FVarId) (alts : Array Alt) : Code
@@ -416,7 +368,7 @@ private unsafe def updateFunDeclCoreImp (decl: FunDecl) (type : Expr) (params :
if ptrEq type decl.type && ptrEq params decl.params && ptrEq value decl.value then
decl
else
⟨decl.fvarId, decl.binderName, params, type, value⟩
{ decl with type, params, value }
/--
Low-level update `FunDecl` function. It does not update the local context.
@@ -426,7 +378,7 @@ to be updated.
@[implemented_by updateFunDeclCoreImp] opaque FunDecl.updateCore (decl : FunDecl) (type : Expr) (params : Array Param) (value : Code) : FunDecl
def Cases.extractAlt! (cases : Cases) (ctorName : Name) : Alt × Cases :=
let found i := (cases.alts[i]!, cases.updateAlts (cases.alts.eraseIdx! i))
let found i := (cases.alts[i], { cases with alts := cases.alts.eraseIdx i })
if let some i := cases.alts.findFinIdx? fun | .alt ctorName' .. => ctorName == ctorName' | _ => false then
found i
else if let some i := cases.alts.findFinIdx? fun | .default _ => true | _ => false then

View File

@@ -48,7 +48,7 @@ where
if alts.isEmpty then
throwError "`Code.bind` failed, empty `cases` found"
let resultType ← mkCasesResultType alts
return .cases ⟨c.typeName, resultType, c.discr, alts⟩
return .cases { c with alts, resultType }
| .return fvarId => f fvarId
| .jmp fvarId .. =>
unless (← read).contains fvarId do

Some files were not shown because too many files have changed in this diff.