Compare commits

4 Commits

Author             SHA1        Message                                   Date
Leonardo de Moura  0dab5e4517  test: exposeNames                         2025-02-03 16:17:58 -08:00
Leonardo de Moura  18388f156d  fix: improve exposeNames                  2025-02-03 16:17:45 -08:00
Leonardo de Moura  b73a166259  feat: expose_names parser and elaborator  2025-02-03 15:56:14 -08:00
Leonardo de Moura  e109fd2ef3  feat: add exposeNames tactic              2025-02-03 15:43:26 -08:00
379 changed files with 2332 additions and 8374 deletions


@@ -175,8 +175,8 @@ jobs:
"os": "ubuntu-latest",
"check-level": 2,
"CMAKE_PRESET": "debug",
// exclude seriously slow/stackoverflowing tests
"CTEST_OPTIONS": "-E 'interactivetest|leanpkgtest|laketest|benchtest|bv_bitblast_stress|3807'"
// exclude seriously slow tests
"CTEST_OPTIONS": "-E 'interactivetest|leanpkgtest|laketest|benchtest|bv_bitblast_stress'"
},
// TODO: suddenly started failing in CI
/*{


@@ -34,7 +34,7 @@ jobs:
- name: Download artifact from the previous workflow.
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
id: download-artifact
uses: dawidd6/action-download-artifact@v8 # https://github.com/marketplace/actions/download-workflow-artifact
uses: dawidd6/action-download-artifact@v7 # https://github.com/marketplace/actions/download-workflow-artifact
with:
run_id: ${{ github.event.workflow_run.id }}
path: artifacts


@@ -199,7 +199,7 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.
- We do this for the same list of repositories as for stable releases, see above.
As above, there are dependencies between these, and so the process above is iterative.
It greatly helps if you can merge the `bump/v4.7.0` PRs yourself!
- It is essential for Mathlib and Batteries CI that you then create the next `bump/v4.8.0` branch
It is essential for Mathlib CI that you then create the next `bump/v4.8.0` branch
for the next development cycle.
Set the `lean-toolchain` file on this branch to the same `nightly` you used for this release.
- (Note: we're currently uncertain if we really want to do this step. Check with Kim Morrison if you're unsure.)


@@ -1,9 +0,0 @@
# The Lean standard library
This directory contains development information about the Lean standard library. The user-facing documentation of the standard library
is part of the [Lean Language Reference](https://lean-lang.org/doc/reference/latest/).
Here you will find
* the [standard library vision document](./vision.md), including the call for contributions,
* the [standard library style guide](./style.md), and
* the [standard library naming conventions](./naming.md).

File diff suppressed because one or more lines are too long

(Image diff: 68 KiB)


@@ -1,230 +0,0 @@
# Standard library naming conventions
The easiest way to access a result in the standard library is to correctly guess the name of the declaration (possibly with the help of identifier autocompletion). This is faster and has lower friction than more sophisticated search tools, so easily guessable names (which are still reasonably short) make Lean users more productive.
The guide that follows contains very few hard rules, many heuristics and a selection of examples. It cannot and does not present a deterministic algorithm for choosing good names in all situations. It is intended as a living document that gets clarified and expanded as situations arise during code reviews for the standard library. If applying one of the suggestions in this guide leads to nonsensical results in a certain situation, it is
probably safe to ignore the suggestion (or even better, suggest a way to improve the suggestion).
## Prelude
Identifiers use a mix of `UpperCamelCase`, `lowerCamelCase` and `snake_case`, used for types, data, and theorems, respectively.
Structure fields should be named such that the projections have the correct names.
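As a hypothetical illustration (the structure and field names below are invented, not library names), declaring fields in `lowerCamelCase` yields conventionally named projections:

```lean
structure Point where
  xCoord : Nat
  yCoord : Nat

-- The generated projections are `Point.xCoord : Point → Nat` and
-- `Point.yCoord : Point → Nat`, matching the convention for data.
```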
## Naming convention for types
When defining a type, i.e., a (possibly 0-ary) function whose codomain is `Sort u` for some `u`, it should be named in `UpperCamelCase`. Examples include `List` and `List.IsPrefix`.
When defining a predicate, prefix the name by `Is`, like in `List.IsPrefix`. The `Is` prefix may be omitted if
* the resulting name would be ungrammatical, or
* the predicate depends on additional data in a way where the `Is` prefix would be confusing (like `List.Pairwise`), or
* the name is an adjective (like `Std.Time.Month.Ordinal.Valid`)
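For instance, a hypothetical predicate on `Nat` (an invented example, not a library declaration) would follow the `Is` convention:

```lean
-- `IsEven` is a predicate (its codomain is `Prop`), so it is written in
-- `UpperCamelCase` with the `Is` prefix.
inductive IsEven : Nat → Prop where
  | zero : IsEven 0
  | addTwo : IsEven n → IsEven (n + 2)
```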
## Namespaces and generalized projection notation
Almost always, definitions and theorems relating to a type should be placed in a namespace with the same name as the type. For example, operations and theorems about lists should be placed in the `List` namespace, and operations and theorems about `Std.Time.PlainDate` should be placed in the `Std.Time.PlainDate` namespace.
Declarations in the root namespace will be relatively rare. The most common type of declaration in the root namespace are declarations about data and properties exported by notation type classes, as long as they are not about a specific type implementing that type class. For example, we have
```lean
theorem beq_iff_eq [BEq α] [LawfulBEq α] {a b : α} : a == b ↔ a = b := sorry
```
in the root namespace, but
```lean
theorem List.cons_beq_cons [BEq α] {a b : α} {l₁ l₂ : List α} :
    (a :: l₁ == b :: l₂) = (a == b && l₁ == l₂) := rfl
```
belongs in the `List` namespace.
Subtleties arise when multiple namespaces are in play. Generally, place your theorem in the most specific namespace that appears in one of the hypotheses of the theorem. The following names are both correct according to this convention:
```lean
theorem List.Sublist.reverse : l₁ <+ l₂ → l₁.reverse <+ l₂.reverse := sorry
theorem List.reverse_sublist : l₁.reverse <+ l₂.reverse ↔ l₁ <+ l₂ := sorry
```
Notice that the second theorem does not have a hypothesis of type `List.Sublist l` for some `l`, so the name `List.Sublist.reverse_iff` would be incorrect.
The advantage of placing results in a namespace like `List.Sublist` is that it enables generalized projection notation, i.e., given `h : l₁ <+ l₂`,
one can write `h.reverse` to obtain a proof of `l₁.reverse <+ l₂.reverse`. Thinking about which dot notations are convenient can act as a guideline
for deciding where to place a theorem, and is, on occasion, a good reason to duplicate a theorem into multiple namespaces.
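For example, using the `List.Sublist.reverse` theorem stated above, it is precisely the placement in the `List.Sublist` namespace that makes the dot notation resolve:

```lean
example {l₁ l₂ : List Nat} (h : List.Sublist l₁ l₂) :
    List.Sublist l₁.reverse l₂.reverse :=
  h.reverse  -- resolves to `List.Sublist.reverse` via the type of `h`
```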
### The `Std` namespace
New types that are added will usually be placed in the `Std` namespace and in the `Std/` source directory, unless there are good reasons to place
them elsewhere.
Inside the `Std` namespace, all internal declarations should be `private` or else have a name component that clearly marks them as internal, preferably
`Internal`.
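A minimal sketch of the two options (all declaration names below are invented for illustration):

```lean
namespace Std

-- Option 1: the implementation detail is `private`.
private def probeStep (i : Nat) : Nat := i * i

-- Option 2: the name itself marks the declaration as internal.
def Internal.bucketIndex (hash buckets : Nat) : Nat := hash % buckets

end Std
```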
## Naming convention for data
When defining data, i.e., a (possibly 0-ary) function whose codomain is not `Sort u`, but has type `Type u` for some `u`, it should be named in `lowerCamelCase`. Examples include `List.append` and `List.isPrefixOf`.
If your data is morally fully specified by its type, then use the naming procedure for theorems described below and convert the result to lower camel case.
If your function returns an `Option`, consider adding `?` as a suffix. If your function may panic, consider adding `!` as a suffix. In many cases, there will be multiple variants of a function; one returning an option, one that may panic and possibly one that takes a proof argument.
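A hypothetical family of lookup functions illustrating the suffix convention (`thirdElem` is an invented name):

```lean
-- Returns `none` if the list is too short.
def thirdElem? (l : List α) : Option α := l[2]?

-- Panics if the list is too short.
def thirdElem! [Inhabited α] (l : List α) : α := l[2]!

-- Takes a proof that the access is in bounds.
def thirdElem (l : List α) (h : 2 < l.length) : α := l[2]'h
```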
## Naming algorithm for theorems and some definitions
There is, in principle, a general algorithm for naming a theorem. The problem with this algorithm is that it produces very long and unwieldy names which need to be shortened. So choosing a name for a declaration can be thought of as consisting of a mechanical part and a creative part.
Usually the first part is to decide which namespace the result should live in, according to the guidelines described above.
Next, consider the type of your declaration as a tree. Inner nodes of this tree are function types or function applications. Leaves of the tree are 0-ary functions or bound variables.
As an example, consider the following result from the standard library:
```lean
example {α : Type u} {β : Type v} [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable α]
    [Inhabited β] {m : Std.HashMap α β} {a : α} {h' : a ∈ m} : m[a]? = some (m[a]'h') :=
  sorry
```
The correct namespace is clearly `Std.HashMap`. The corresponding tree looks like this:
![](naming-tree.svg)
The preferred spelling of a notation can be looked up by hovering over the notation.
Now traverse the tree and build a name according to the following rules:
* When encountering a function type, first turn the result type into a name, then all of the argument types from left to right, and join the names using `_of_`.
* When encountering a function that is neither an infix notation nor a structure projection, first put the function name and then the arguments, joined by an underscore.
* When encountering an infix notation, join the arguments using the name of the notation, separated by underscores.
* When encountering a structure projection, proceed as for normal functions, but put the name of the projection last.
* When encountering a name, put it in lower camel case.
* Skip bound variables and proofs.
* Type class arguments are also generally skipped.
When encountering a namespaced name, concatenate the name components in lower camel case.
Applying this algorithm to our example yields the name `Std.HashMap.getElem?_eq_optionSome_getElem_of_mem`.
From there, the name should be shortened, using the following heuristics:
* The namespace of functions can be omitted if it is clear from context or if the namespace is the current one. This is almost always the case.
* For infix operators, it is possible to leave out the RHS, or both the name of the notation and the RHS, if they are clear from context.
* Hypotheses can be left out if it is clear that they are required or if they appear in the conclusion.
Based on this, here are some possible names for our example:
1. `Std.HashMap.getElem?_eq`
2. `Std.HashMap.getElem?_eq_of_mem`
3. `Std.HashMap.getElem?_eq_some`
4. `Std.HashMap.getElem?_eq_some_of_mem`
5. `Std.HashMap.getElem?_eq_some_getElem`
6. `Std.HashMap.getElem?_eq_some_getElem_of_mem`
Choosing a good name among these then requires considering the context of the lemma. In this case it turns out that the first four options are underspecified as there is also a lemma relating `m[a]?` and `m[a]!` which could have the same name. This leaves the last two options, the first of which is shorter, and this is how the lemma is called in the Lean standard library.
Here are some additional examples:
```lean
example {x y : List α} (h : x <+: y) (hx : x ≠ []) :
    x.head hx = y.head (h.ne_nil hx) := sorry
```
Since we have an `IsPrefix` parameter, this should live in the `List.IsPrefix` namespace, and the algorithm suggests `List.IsPrefix.head_eq_head_of_ne_nil`, which is shortened to `List.IsPrefix.head`. Note here the difference between the namespace name (`IsPrefix`) and the recommended spelling of the corresponding notation (`prefix`).
```lean
example : l₁ <+: l₂ → reverse l₁ <:+ reverse l₂ := sorry
```
Again, this result should be in the `List.IsPrefix` namespace; the algorithm suggests `List.IsPrefix.reverse_prefix_reverse`, which becomes `List.IsPrefix.reverse`.
The following examples show how the traversal order often matters.
```lean
theorem Nat.mul_zero (n : Nat) : n * 0 = 0 := sorry
theorem Nat.zero_mul (n : Nat) : 0 * n = 0 := sorry
```
Here we see that one name may be a prefix of another name:
```lean
theorem Int.mul_ne_zero {a b : Int} (a0 : a ≠ 0) (b0 : b ≠ 0) : a * b ≠ 0 := sorry
theorem Int.mul_ne_zero_iff {a b : Int} : a * b ≠ 0 ↔ a ≠ 0 ∧ b ≠ 0 := sorry
```
It is usually a good idea to include the `iff` in a theorem name even if the name would still be unique without it. For example,
```lean
theorem List.head?_eq_none_iff : l.head? = none ↔ l = [] := sorry
```
is a good name: if the lemma was simply called `List.head?_eq_none`, users might try to `apply` it when the goal is `l.head? = none`, leading
to confusion.
The more common you expect (or want) a theorem to be, the shorter you should try to make the name. For example, we have both
```lean
theorem Std.HashMap.getElem?_eq_none_of_contains_eq_false {a : α} : m.contains a = false → m[a]? = none := sorry
theorem Std.HashMap.getElem?_eq_none {a : α} : ¬a ∈ m → m[a]? = none := sorry
```
As users of the hash map are encouraged to use `∈` rather than `contains`, the second lemma gets the shorter name.
## Special cases
There are certain special “keywords” that may appear in identifiers.
| Keyword | Meaning | Example |
| :---- | :---- | :---- |
| `def` | Unfold a definition. Avoid this for public APIs. | `Nat.max_def` |
| `refl` | Theorems of the form `a R a`, where R is a reflexive relation and `a` is an explicit parameter | `Nat.le_refl` |
| `rfl` | Like `refl`, but with `a` implicit | `Nat.le_rfl` |
| `irrefl` | Theorems of the form `¬a R a`, where R is an irreflexive relation | `Nat.lt_irrefl` |
| `symm` | Theorems of the form `a R b → b R a`, where R is a symmetric relation (compare `comm` below) | `Eq.symm` |
| `trans` | Theorems of the form `a R b → b R c → a R c`, where R is a transitive relation (R may carry data) | `Eq.trans` |
| `antisymm` | Theorems of the form `a R b → b R a → a = b`, where R is an antisymmetric relation | `Nat.le_antisymm` |
| `congr` | Theorems of the form `a R b → f a S f b`, where R and S are usually equivalence relations | `Std.HashMap.mem_congr` |
| `comm` | Theorems of the form `f a b = f b a` (compare `symm` above) | `Eq.comm`, `Nat.add_comm` |
| `assoc` | Theorems of the form `g (f a b) c = f a (g b c)` (note the order! In most cases, we have f = g) | `Nat.add_sub_assoc` |
| `distrib` | Theorems of the form `f (g a b) = g (f a) (f b)` | `Nat.add_left_distrib` |
| `self` | May be used if a variable appears multiple times in the conclusion | `List.mem_cons_self` |
| `inj` | Theorems of the form `f a = f b ↔ a = b`. | `Int.neg_inj`, `Nat.add_left_inj` |
| `cancel` | Theorems which have one of the forms `f a = f b → a = b` or `g (f a) = a`, where `f` and `g` usually involve a binary operator | `Nat.add_sub_cancel` |
| `cancel_iff` | Same as `inj`, but with different conventions for left and right (see below) | `Nat.add_right_cancel_iff` |
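To illustrate the distinction drawn between `symm` and `comm` above, both using standard library lemmas:

```lean
example {a b : Nat} (h : a = b) : b = a := h.symm        -- `symm`: a R b → b R a
example (a b : Nat) : a + b = b + a := Nat.add_comm a b  -- `comm`: f a b = f b a
```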
## Left and right
The keywords `left` and `right` are useful to disambiguate symmetric variants of theorems.
```lean
theorem imp_congr_left (h : a ↔ b) : (a → c) ↔ (b → c) := sorry
theorem imp_congr_right (h : a → (b ↔ c)) : (a → b) ↔ (a → c) := sorry
```
It is not always obvious which version of a theorem should be “left” and which should be “right”.
Heuristically, the theorem should name the side which is “more variable”, but there are exceptions. For some of the special keywords discussed in this section, there are conventions which should be followed, as laid out in the following examples:
```lean
theorem Nat.left_distrib (n m k : Nat) : n * (m + k) = n * m + n * k := sorry
theorem Nat.right_distrib (n m k : Nat) : (n + m) * k = n * k + m * k := sorry
theorem Nat.add_left_cancel {n m k : Nat} : n + m = n + k → m = k := sorry
theorem Nat.add_right_cancel {n m k : Nat} : n + m = k + m → n = k := sorry
theorem Nat.add_left_cancel_iff {m k n : Nat} : n + m = n + k ↔ m = k := sorry
theorem Nat.add_right_cancel_iff {m k n : Nat} : m + n = k + n ↔ m = k := sorry
theorem Nat.add_left_inj {m k n : Nat} : m + n = k + n ↔ m = k := sorry
theorem Nat.add_right_inj {m k n : Nat} : n + m = n + k ↔ m = k := sorry
```
Note in particular that the convention is opposite for `cancel_iff` and `inj`.
```lean
theorem Nat.add_sub_self_left (a b : Nat) : (a + b) - a = b := sorry
theorem Nat.add_sub_self_right (a b : Nat) : (a + b) - b = a := sorry
theorem Nat.add_sub_cancel (n m : Nat) : (n + m) - m = n := sorry
```
## Acronyms
For acronyms which are three letters or shorter, all letters should use the same case as dictated by the convention. For example, `IO` is a correct name for a type and the name `IO.Ref` may become `IORef` when used as part of a definition name and `ioRef` when used as part of a theorem name.
For acronyms which are at least four letters long, switch to lower case starting from the second letter. For example, `Json` is a correct name for a type, as is `JsonRPC`.
If an acronym is typically spelled using mixed case, this mixed spelling may be used in identifiers (for example `Std.Net.IPv4Addr`).


@@ -1,5 +1,3 @@
# Standard library style
Please take some time to familiarize yourself with the stylistic conventions of
the project and the specific part of the library you are planning to contribute
to. While the Lean compiler may not enforce strict formatting rules,
@@ -8,450 +6,5 @@ Attention to formatting is more than a cosmetic concern—it reflects the same
level of precision and care required to meet the deeper standards of the Lean 4
standard library.
Below we will give specific formatting prescriptions for various language constructs. Note that this style guide only applies to the Lean standard library, even though some examples in the guide are taken from other parts of the Lean code base.
## Basic whitespace rules
Syntactic elements (like `:`, `:=`, `|`, `::`) are surrounded by single spaces, with the exception of `,` and `;`, which are followed by a space but not preceded by one. Delimiters (like `()`, `{}`) do not have spaces on the inside, with the exceptions of subtype notation and structure instance notation.
Examples of correctly formatted function parameters:
* `{α : Type u}`
* `[BEq α]`
* `(cmp : α → α → Ordering)`
* `(hab : a = b)`
* `{d : { l : List ((n : Nat) × Vector Nat n) // l.length % 2 = 0 }}`
Examples of correctly formatted terms:
* `1 :: [2, 3]`
* `letI : Ord α := ⟨cmp⟩; True`
* `(⟨2, 3⟩ : Nat × Nat)`
* `((2, 3) : Nat × Nat)`
* `{ x with fst := f (4 + f 0), snd := 4, .. }`
* `match 1 with | 0 => 0 | _ => 0`
* `fun ⟨a, b⟩ _ _ => by cases hab <;> apply id; rw [hbc]`
Configure your editor to remove trailing whitespace. If you have set up Visual Studio Code for Lean development in the recommended way then the correct setting is applied automatically.
## Splitting terms across multiple lines
When splitting a term across multiple lines, increase indentation by two spaces starting from the second line. When splitting a function application, try to split at argument boundaries. If an argument itself needs to be split, increase indentation further as appropriate.
When splitting at an infix operator, the operator goes at the end of the first line, not at the beginning of the second line. When splitting at an infix operator, you may or may not increase indentation depth, depending on what is more readable.
When splitting an `if`-`then`-`else` expression, the `then` keyword wants to stay with the condition and the `else` keyword wants to stay with the alternative term. Otherwise, indent as if the `if` and `else` keywords were arguments to the same function.
When splitting a comma-separated bracketed sequence (i.e., anonymous constructor application, list/array/vector literal, tuple) it is allowed to indent subsequent lines for alignment, but indenting by two spaces is also allowed.
Do not orphan parentheses.
Correct:
```lean
def MacroScopesView.isPrefixOf (v₁ v₂ : MacroScopesView) : Bool :=
  v₁.name.isPrefixOf v₂.name &&
  v₁.scopes == v₂.scopes &&
  v₁.mainModule == v₂.mainModule &&
  v₁.imported == v₂.imported
```
Correct:
```lean
theorem eraseP_eq_iff {p} {l : List α} :
    l.eraseP p = l' ↔
      ((∀ a ∈ l, ¬ p a) ∧ l = l') ∨
        ∃ a l₁ l₂, (∀ b ∈ l₁, ¬ p b) ∧ p a ∧
          l = l₁ ++ a :: l₂ ∧ l' = l₁ ++ l₂ :=
  sorry
```
Correct:
```lean
example : Nat :=
  functionWithAVeryLongNameSoThatSomeArgumentsWillNotFit firstArgument secondArgument
    (firstArgumentWithAnEquallyLongNameAndThatFunctionDoesHaveMoreArguments firstArgument
      secondArgument)
    secondArgument
```
Correct:
```lean
theorem size_alter [LawfulBEq α] {k : α} {f : Option (β k) → Option (β k)} (h : m.WF) :
    (m.alter k f).size =
      if m.contains k && (f (m.get? k)).isNone then
        m.size - 1
      else if !m.contains k && (f (m.get? k)).isSome then
        m.size + 1
      else
        m.size := by
  simp_to_raw using Raw₀.size_alter
```
Correct:
```lean
theorem get?_alter [LawfulBEq α] {k k' : α} {f : Option (β k) → Option (β k)} (h : m.WF) :
    (m.alter k f).get? k' =
      if h : k == k' then
        cast (congrArg (Option ∘ β) (eq_of_beq h)) (f (m.get? k))
      else m.get? k' := by
  simp_to_raw using Raw₀.get?_alter
```
Correct:
```lean
example : Nat × Nat :=
  ⟨imagineThisWasALongTerm,
   imagineThisWasAnotherLongTerm⟩
```
Correct:
```lean
example : Nat × Nat :=
  ⟨imagineThisWasALongTerm,
    imagineThisWasAnotherLongTerm⟩
```
Correct:
```lean
example : Vector Nat 2 :=
  #v[imagineThisWasALongTerm,
    imagineThisWasAnotherLongTerm]
```
## Basic file structure
Every file should start with a copyright header, imports (in the standard library, this always includes a `prelude` declaration) and a module documentation string. There should not be a blank line between the copyright header and the imports. There should be a blank line between the imports and the module documentation string.
Correct:
```lean
/-
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro,
Yury Kudryashov
-/
prelude
import Init.Data.List.Pairwise
import Init.Data.List.Find

/-!
# Lemmas about `List.eraseP` and `List.erase`.
-/
```
Syntax that is not supposed to be user-facing must be scoped. New public syntax must always be discussed explicitly in an RFC.
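A sketch of how non-user-facing syntax can be scoped (the namespace and notation below are invented for illustration):

```lean
namespace Std.Internal

-- `scoped` keeps this notation from leaking to downstream users; it is only
-- available after `open Std.Internal` (or inside the namespace itself).
scoped infixl:50 " ≼ " => Nat.le

end Std.Internal
```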
## Top-level commands and declarations
All top-level commands are unindented. Sectioning commands like `section` and `namespace` do not increase the indentation level.
Attributes may be placed on the same line as the rest of the command or on a separate line.
Multi-line declaration headers are indented by four spaces starting from the second line. The colon that indicates the type of a declaration may not be placed at the start of a line or on its own line.
Declaration bodies are indented by two spaces. Short declaration bodies may be placed on the same line as the declaration type.
Correct:
```lean
theorem eraseP_eq_iff {p} {l : List α} :
    l.eraseP p = l' ↔
      ((∀ a ∈ l, ¬ p a) ∧ l = l') ∨
        ∃ a l₁ l₂, (∀ b ∈ l₁, ¬ p b) ∧ p a ∧
          l = l₁ ++ a :: l₂ ∧ l' = l₁ ++ l₂ :=
  sorry
```
Correct:
```lean
@[simp] theorem eraseP_nil : [].eraseP p = [] := rfl
```
Correct:
```lean
@[simp]
theorem eraseP_nil : [].eraseP p = [] := rfl
```
### Documentation comments
Note to external contributors: this is a section where the Lean style and the mathlib style are different.
Declarations should be documented as required by the `docBlame` linter, which may be activated in a file using
`set_option linter.missingDocs true` (we allow these to stay in the file).
Single-line documentation comments should go on the same line as `/--`/`-/`, while multi-line documentation strings
should have these delimiters on their own line, with the documentation comment itself unindented.
Documentation comments must be written in the indicative mood. Use American orthography.
Correct:
```lean
/-- Carries out a monadic action on each mapping in the hash map in some order. -/
@[inline] def forM (f : (a : α) → β a → m PUnit) (b : Raw α β) : m PUnit :=
  b.buckets.forM (AssocList.forM f)
```
Correct:
```lean
/--
Monadically computes a value by folding the given function over the mappings in the hash
map in some order.
-/
@[inline] def foldM (f : δ → (a : α) → β a → m δ) (init : δ) (b : Raw α β) : m δ :=
  b.buckets.foldlM (fun acc l => l.foldlM f acc) init
```
### Where clauses
The `where` keyword should be unindented, and all declarations bound by it should be indented with two spaces.
Blank lines before and after `where` and between declarations bound by `where` are optional and should be chosen
to maximize readability.
Correct:
```lean
@[simp] theorem partition_eq_filter_filter (p : α → Bool) (l : List α) :
    partition p l = (filter p l, filter (not p) l) := by
  simp [partition, aux]
where
  aux (l) {as bs} : partition.loop p l (as, bs) =
      (as.reverse ++ filter p l, bs.reverse ++ filter (not p) l) :=
    match l with
    | [] => by simp [partition.loop, filter]
    | a :: l => by cases pa : p a <;> simp [partition.loop, pa, aux, filter, append_assoc]
```
### Termination arguments
The `termination_by`, `decreasing_by`, `partial_fixpoint` keywords should be unindented. The associated terms should be indented like declaration bodies.
Correct:
```lean
@[inline] def multiShortOption (handle : Char → m PUnit) (opt : String) : m PUnit := do
  let rec loop (p : String.Pos) := do
    if h : opt.atEnd p then
      return
    else
      handle (opt.get' p h)
      loop (opt.next' p h)
  termination_by opt.utf8ByteSize - p.byteIdx
  decreasing_by
    simp [String.atEnd] at h
    apply Nat.sub_lt_sub_left h
    simp [String.lt_next opt p]
  loop 1
```
Correct:
```lean
def substrEq (s1 : String) (off1 : String.Pos) (s2 : String) (off2 : String.Pos) (sz : Nat) : Bool :=
  off1.byteIdx + sz ≤ s1.endPos.byteIdx && off2.byteIdx + sz ≤ s2.endPos.byteIdx && loop off1 off2 { byteIdx := off1.byteIdx + sz }
where
  loop (off1 off2 stop1 : Pos) :=
    if _h : off1.byteIdx < stop1.byteIdx then
      let c₁ := s1.get off1
      let c₂ := s2.get off2
      c₁ == c₂ && loop (off1 + c₁) (off2 + c₂) stop1
    else true
  termination_by stop1.1 - off1.1
  decreasing_by
    have := Nat.sub_lt_sub_left _h (Nat.add_lt_add_left c₁.utf8Size_pos off1.1)
    decreasing_tactic
```
Correct:
```lean
theorem div_add_mod (m n : Nat) : n * (m / n) + m % n = m := by
  rw [div_eq, mod_eq]
  have h : Decidable (0 < n ∧ n ≤ m) := inferInstance
  cases h with
  | isFalse h => simp [h]
  | isTrue h =>
    simp [h]
    have ih := div_add_mod (m - n) n
    rw [Nat.left_distrib, Nat.mul_one, Nat.add_assoc, Nat.add_left_comm, ih, Nat.add_comm, Nat.sub_add_cancel h.2]
decreasing_by apply div_rec_lemma; assumption
```
### Deriving
The `deriving` clause should be unindented.
Correct:
```lean
structure Iterator where
  array : ByteArray
  idx : Nat
deriving Inhabited
```
## Language constructs
### Pattern matching, induction etc.
Match arms are indented at the indentation level that the match statement would have if it was on its own line. If the match is implicit, then the arms should be indented as if the match was explicitly given. The content of match arms is indented two spaces, so that it appears on the same level as the match pattern.
Correct:
```lean
def alter [BEq α] {β : Type v} (a : α) (f : Option β → Option β) :
    AssocList α (fun _ => β) → AssocList α (fun _ => β)
  | nil => match f none with
    | none => nil
    | some b => AssocList.cons a b nil
  | cons k v l =>
    if k == a then
      match f v with
      | none => l
      | some b => cons a b l
    else
      cons k v (alter a f l)
```
Correct:
```lean
theorem eq_append_cons_of_mem {a : α} {xs : List α} (h : a ∈ xs) :
    ∃ as bs, xs = as ++ a :: bs ∧ a ∉ as := by
  induction xs with
  | nil => cases h
  | cons x xs ih =>
    simp at h
    cases h with
    | inl h => exact ⟨[], xs, by simp_all⟩
    | inr h =>
      by_cases h' : a = x
      · subst h'
        exact ⟨[], xs, by simp⟩
      · obtain ⟨as, bs, rfl, h⟩ := ih h
        exact ⟨x :: as, bs, rfl, by simp_all⟩
```
Aligning match arms is allowed, but not required.
Correct:
```lean
def mkEqTrans? (h₁? h₂? : Option Expr) : MetaM (Option Expr) :=
  match h₁?, h₂? with
  | none,    none    => return none
  | none,    some h  => return h
  | some h,  none    => return h
  | some h₁, some h₂ => mkEqTrans h₁ h₂
```
Correct:
```lean
def mkEqTrans? (h₁? h₂? : Option Expr) : MetaM (Option Expr) :=
  match h₁?, h₂? with
  | none, none => return none
  | none, some h => return h
  | some h, none => return h
  | some h₁, some h₂ => mkEqTrans h₁ h₂
```
### Structures
Note to external contributors: this is a section where the Lean style and the mathlib style are different.
When using structure instance syntax over multiple lines, the opening brace should go on the preceding line, while the closing brace should go on its own line. The rest of the syntax should be indented by one level. During structure updates, the `with` clause goes on the same line as the opening brace. Aligning at the assignment symbol is allowed but not required.
Correct:
```lean
def addConstAsync (env : Environment) (constName : Name) (kind : ConstantKind) (reportExts := true) :
    IO AddConstAsyncResult := do
  let sigPromise ← IO.Promise.new
  let infoPromise ← IO.Promise.new
  let extensionsPromise ← IO.Promise.new
  let checkedEnvPromise ← IO.Promise.new
  let asyncConst := {
    constInfo := {
      name := constName
      kind
      sig := sigPromise.result
      constInfo := infoPromise.result
    }
    exts? := guard reportExts *> some extensionsPromise.result
  }
  return {
    constName, kind
    mainEnv := { env with
      asyncConsts := env.asyncConsts.add asyncConst
      checked := checkedEnvPromise.result }
    asyncEnv := { env with
      asyncCtx? := some { declPrefix := privateToUserName constName.eraseMacroScopes }
    }
    sigPromise, infoPromise, extensionsPromise, checkedEnvPromise
  }
```
Correct:
```lean
instance [Inhabited α] : Inhabited (Descr α β σ) where
  default := {
    name := default
    mkInitial := default
    ofOLeanEntry := default
    toOLeanEntry := default
    addEntry := fun s _ => s
  }
```
## Tactic proofs
Tactic proofs are the most common thing to break during any kind of upgrade, so it is important to write them in a way that minimizes the likelihood of proofs breaking and that makes it easy to debug breakages if they do occur.
If there are multiple goals, either use a tactic combinator (like `all_goals`) to operate on all of them or a clearly specified subset, or use focus dots to work on goals one at a time. Using structured proofs (e.g., `induction … with`) is encouraged but not mandatory.
Squeeze non-terminal `simp`s (i.e., calls to `simp` which do not close the goal). Squeezing terminal `simp`s is generally discouraged, although there are exceptions (for example if squeezing yields a noticeable performance improvement).
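For instance, a squeezed non-terminal `simp` lists the lemmas it used (a toy example using `Nat.add_zero`):

```lean
example (n : Nat) (h : n = 5) : n + 0 = 5 := by
  simp only [Nat.add_zero]  -- non-terminal and squeezed: the goal `n = 5` remains
  exact h
```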
Do not over-golf proofs in ways that are likely to lead to hard-to-debug breakage. Examples of things to avoid include complex multi-goal manipulation using lots of tactic combinators, complex uses of the substitution operator (`▸`) and clever point-free expressions (possibly involving anonymous function notation for multiple arguments).
Do not under-golf proofs: for routine tasks, use the most powerful tactics available.
Do not use `erw`. Avoid using `rfl` after `simp` or `rw`, as this usually indicates a missing lemma that should be used instead of `rfl`.
Use `(d)simp` or `rw` instead of `delta` or `unfold`. Use `refine` instead of `refine'`. Use `haveI` and `letI` only if they are actually required.
Prefer highly automated tactics (like `grind` and `omega`) over low-level proofs, unless the automated tactic requires unacceptable additional imports or has bad performance. If you decide against using a highly automated tactic, leave a comment explaining the decision.
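For example, a routine arithmetic goal is better discharged by `omega` than by a chain of low-level lemma applications:

```lean
example (a b : Nat) (h : a ≤ b) : a ≤ b + 1 := by
  omega
```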
## `do` notation
Use early `return` statements to reduce nesting depth and make the non-exceptional control flow of a function easier to see.
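A sketch of the early-`return` style (the function and its checks are invented for illustration):

```lean
def describe (n : Nat) : String := Id.run do
  if n == 0 then
    return "zero"        -- early returns keep the main control flow flat
  if n % 2 == 0 then
    return "even"
  return "odd"
```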
Alternatives for `let` matches may be placed in the same line or in the next line, indented by two spaces. If the term that is
being matched on is itself more than one line and there is an alternative present, consider breaking immediately after `←` and indent
as far as necessary to ensure readability.
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ← findFunDecl? fvarId | throwError "unknown local function {fvarId.name}"
  return decl
```
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ←
      findFunDecl? fvarId
    | throwError "unknown local function {fvarId.name}"
  return decl
```
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ← findFunDecl?
      fvarId
    | throwError "unknown local function {fvarId.name}"
  return decl
```
A full style guide and naming convention are currently under construction and
will be added here soon.


@@ -13,17 +13,16 @@ as part of verified applications.
The standard library is a public API that contains the components listed in the
standard library outline below. Not all public APIs in the Lean distribution
are part of the standard library, and the standard library does not correspond
to a certain directory within the Lean source repository (like `Std`). For
example, the metaprogramming framework is not part of the standard library, but
basic types like `True` and `Nat` are.
to a certain directory within the Lean source repository. For example, the
metaprogramming framework is not part of the standard library.
The standard library is under active development. Our guiding principles are:
* Provide comprehensive, verified building blocks for real-world software.
* Build a public API of the highest quality with excellent internal consistency.
* Carefully optimize components that may be used in performance-critical software.
* Ensure smooth adoption and maintenance for users.
* Offer excellent documentation, example projects, and guides.
* Provide comprehensive, verified building blocks for real-world software.
* Build a public API of the highest quality with excellent internal consistency.
* Carefully optimize components that may be used in performance-critical software.
* Ensure smooth adoption and maintenance for users.
* Offer excellent documentation, example projects, and guides.
* Provide a reliable and extensible basis that libraries for software
development, software verification and mathematics can build on.
@@ -33,23 +32,23 @@ call for contributions below.
### Standard library outline
1. Core types and operations
1. Basic types
2. Numeric types, including floating point numbers
3. Containers
4. Strings and formatting
2. Language constructs
1. Ranges and iterators
2. Comparison, ordering, hashing and related type classes
3. Basic monad infrastructure
3. Libraries
1. Random numbers
2. Dates and times
4. Operating system abstractions
1. Concurrency and parallelism primitives
2. Asynchronous I/O
3. FFI helpers
4. Environment, file system, processes
1. Core types and operations
1. Basic types
2. Numeric types, including floating point numbers
3. Containers
4. Strings and formatting
2. Language constructs
1. Ranges and iterators
2. Comparison, ordering, hashing and related type classes
3. Basic monad infrastructure
3. Libraries
1. Random numbers
2. Dates and times
4. Operating system abstractions
1. Concurrency and parallelism primitives
2. Asynchronous I/O
3. FFI helpers
4. Environment, file system, processes
5. Locales
The material covered in the first three sections (core types and operations,

View File

@@ -1,10 +1,15 @@
v4.16.0
----------
## Highlights
### Unique `sorry`s
## Language
[#5757](https://github.com/leanprover/lean4/pull/5757) makes it harder to create "fake" theorems about definitions that
* [#3696](https://github.com/leanprover/lean4/pull/3696) makes all message constructors handle pretty printer errors.
* [#4460](https://github.com/leanprover/lean4/pull/4460) runs all linters for a single command (together) on a separate
thread from further elaboration, making a first step towards
parallelizing the elaborator.
* [#5757](https://github.com/leanprover/lean4/pull/5757) makes it harder to create "fake" theorems about definitions that
are stubbed-out with `sorry` by ensuring that each `sorry` is not
definitionally equal to any other. For example, this now fails:
```lean
@@ -29,9 +34,10 @@ definition" on `sorry` in the Infoview, which brings you to its origin.
The option `set_option pp.sorrySource true` causes the pretty printer to
show source position information on sorries.
### Separators in numeric literals
* [#6123](https://github.com/leanprover/lean4/pull/6123) ensures that the configuration in `Simp.Config` is used when
reducing terms and checking definitional equality in `simp`.
[#6204](https://github.com/leanprover/lean4/pull/6204) lets `_` be used in numeric literals as a separator. For
* [#6204](https://github.com/leanprover/lean4/pull/6204) lets `_` be used in numeric literals as a separator. For
example, `1_000_000`, `0xff_ff` or `0b_10_11_01_00`. New lexical syntax:
```text
numeral10 : [0-9]+ ("_"+ [0-9]+)*
@@ -41,50 +47,6 @@ numeral16 : "0" [xX] ("_"* hex_char+)+
float : numeral10 "." numeral10? [eE[+-]numeral10]
```
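For instance, each of these literals evaluates to the same value as its separator-free form (a quick illustration):

```lean
#eval 1_000_000 / 1_000  -- 1000
#eval 0xff_ff            -- 65535
#eval 0b_10_11_01_00     -- 180
```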
### Additional new features
* [#6300](https://github.com/leanprover/lean4/pull/6300) adds the `debug.proofAsSorry` option. When enabled, the proofs
of theorems are ignored and replaced with `sorry`.
* [#6362](https://github.com/leanprover/lean4/pull/6362) adds the `--error=kind` option (shorthand: `-Ekind`) to the
`lean` CLI. When set, messages of `kind` (e.g.,
`linter.unusedVariables`) will be reported as errors. This setting does
nothing in interactive contexts (e.g., the server).
* [#6366](https://github.com/leanprover/lean4/pull/6366) adds support for `Float32` and fixes a bug in the runtime.
### Library updates
The Lean 4 library saw many changes that improve arithmetic reasoning, enhance data structure APIs,
and refine library organization. Key changes include better support for bitwise operations, shifts,
and conversions, expanded lemmas for `Array`, `Vector`, and `List`, and improved ordering definitions.
Some modules have been reorganized for clarity, and internal refinements ensure greater consistency and correctness.
### Breaking changes
[#6330](https://github.com/leanprover/lean4/pull/6330) removes unnecessary parameters from the functional induction
principles. This is a breaking change; broken code can typically be adjusted
simply by passing fewer parameters.
_This highlights section was contributed by Violetta Sim._
For this release, 201 changes landed. In addition to the 74 feature additions and 44 fixes listed below there were 7 refactoring changes, 5 documentation improvements and 62 chores.
## Language
* [#3696](https://github.com/leanprover/lean4/pull/3696) makes all message constructors handle pretty printer errors.
* [#4460](https://github.com/leanprover/lean4/pull/4460) runs all linters for a single command (together) on a separate
thread from further elaboration, making a first step towards
parallelizing the elaborator.
* [#5757](https://github.com/leanprover/lean4/pull/5757), see the highlights section above for details.
* [#6123](https://github.com/leanprover/lean4/pull/6123) ensures that the configuration in `Simp.Config` is used when
reducing terms and checking definitional equality in `simp`.
* [#6204](https://github.com/leanprover/lean4/pull/6204), see the highlights section above for details.
* [#6270](https://github.com/leanprover/lean4/pull/6270) fixes a bug that could cause the `injectivity` tactic to fail in
reducible mode, which could cause unfolding lemma generation to fail
(used by tactics such as `unfold`). In particular,
@@ -108,13 +70,23 @@ speedups on problems with lots of variables.
* [#6295](https://github.com/leanprover/lean4/pull/6295) sets up simprocs for all the remaining operations defined in
`Init.Data.Fin.Basic`
* [#6300](https://github.com/leanprover/lean4/pull/6300), see the highlights section above for details.
* [#6300](https://github.com/leanprover/lean4/pull/6300) adds the `debug.proofAsSorry` option. When enabled, the proofs
of theorems are ignored and replaced with `sorry`.
* [#6330](https://github.com/leanprover/lean4/pull/6330), see the highlights section above for details.
* [#6330](https://github.com/leanprover/lean4/pull/6330) removes unnecessary parameters from the function induction
principles. This is a breaking change; broken code can typically be adjusted
simply by passing fewer parameters.
* [#6362](https://github.com/leanprover/lean4/pull/6362), see the highlights section above for details.
* [#6330](https://github.com/leanprover/lean4/pull/6330) removes unnecessary parameters from the function induction
principles. This is a breaking change; broken code can typically be adjusted
simply by passing fewer parameters.
* [#6366](https://github.com/leanprover/lean4/pull/6366), see the highlights section above for details.
* [#6362](https://github.com/leanprover/lean4/pull/6362) adds the `--error=kind` option (shorthand: `-Ekind`) to the
`lean` CLI. When set, messages of `kind` (e.g.,
`linter.unusedVariables`) will be reported as errors. This setting does
nothing in interactive contexts (e.g., the server).
* [#6366](https://github.com/leanprover/lean4/pull/6366) adds support for `Float32` and fixes a bug in the runtime.
* [#6375](https://github.com/leanprover/lean4/pull/6375) fixes a bug in the simplifier. It was producing terms with loose
bound variables when eliminating unused `let_fun` expressions.
@@ -308,7 +280,8 @@ tactic.
* [#6490](https://github.com/leanprover/lean4/pull/6490) adds basic configuration options for the `grind` tactic.
* [#6492](https://github.com/leanprover/lean4/pull/6492) fixes a bug in the theorem instantiation procedure in the (WIP)
`grind` tactic.
`grind` tactic. For example, it was missing the following instance in
one of the tests:
* [#6497](https://github.com/leanprover/lean4/pull/6497) fixes another theorem instantiation bug in the `grind` tactic.
It also moves new instances to be processed to `Goal`.
@@ -325,7 +298,8 @@ test.
`grind` tactic.
* [#6503](https://github.com/leanprover/lean4/pull/6503) adds a simple strategy to the (WIP) `grind` tactic. It just
keeps internalizing new theorem instances found by E-matching.
keeps internalizing new theorem instances found by E-matching. The
simple strategy can solve examples such as:
* [#6506](https://github.com/leanprover/lean4/pull/6506) adds the `monotonicity` tactic, intended to be used inside the
`partial_fixpoint` feature.
@@ -503,7 +477,30 @@ make use of all `export`s when pretty printing, but now it only uses
"horizontal exports" that put the names into an unrelated namespace,
which the dot notation feature in #6189 now incentivizes.
* [#5757](https://github.com/leanprover/lean4/pull/5757), aside from introducing labeled sorries, fixes the bug that the metadata attached to the pretty-printed representation of arguments with a borrow annotation (for example, the second argument of `String.append`) is inconsistent with the metadata attached to the regular arguments.
* [#5757](https://github.com/leanprover/lean4/pull/5757) makes it harder to create "fake" theorems about definitions that
are stubbed-out with `sorry` by ensuring that each `sorry` is not
definitionally equal to any other. For example, this now fails:
```lean
example : (sorry : Nat) = sorry := rfl -- fails
```
However, this still succeeds, since the `sorry` is a single
indeterminate `Nat`:
```lean
def f (n : Nat) : Nat := sorry
example : f 0 = f 1 := rfl -- succeeds
```
One can be more careful by putting parameters to the right of the colon:
```lean
def f : (n : Nat) → Nat := sorry
example : f 0 = f 1 := rfl -- fails
```
Most sources of synthetic sorries (recall: a sorry that originates from
the elaborator) are now unique, except for elaboration errors, since
making these unique tends to cause a confusing cascade of errors. In
general, however, such sorries are labeled. This enables "go to
definition" on `sorry` in the Infoview, which brings you to its origin.
The option `set_option pp.sorrySource true` causes the pretty printer to
show source position information on sorries.
## Documentation
@@ -563,3 +560,4 @@ the universe parameter.
* [#6363](https://github.com/leanprover/lean4/pull/6363) fixes errors at load time in the comparison mode of the Firefox
profiler.

View File

@@ -96,17 +96,7 @@ def is_merged_into_stable(repo_url, tag_name, stable_branch, github_token):
    tag_response = requests.get(f"{api_base}/git/refs/tags/{tag_name}", headers=headers)
    if tag_response.status_code != 200:
        return False

    # Handle both single object and array responses
    tag_data = tag_response.json()
    if isinstance(tag_data, list):
        # Find the exact matching tag in the list
        matching_tags = [tag for tag in tag_data if tag['ref'] == f'refs/tags/{tag_name}']
        if not matching_tags:
            return False
        tag_sha = matching_tags[0]['object']['sha']
    else:
        tag_sha = tag_data['object']['sha']
    tag_sha = tag_response.json()['object']['sha']
# Get commits on stable branch containing this SHA
commits_response = requests.get(
@@ -152,32 +142,6 @@ def extract_org_repo_from_url(repo_url):
return repo_url.replace("https://github.com/", "").rstrip("/")
return repo_url
def get_next_version(version):
    """Calculate the next version number, ignoring RC suffix."""
    # Strip v prefix and RC suffix if present
    base_version = strip_rc_suffix(version.lstrip('v'))
    major, minor, patch = map(int, base_version.split('.'))
    # Next version is always .0
    return f"v{major}.{minor + 1}.0"

def check_bump_branch_toolchain(url, bump_branch, github_token):
    """Check if the lean-toolchain file in bump branch starts with either 'leanprover/lean4:nightly-' or the next version."""
    content = get_branch_content(url, bump_branch, "lean-toolchain", github_token)
    if content is None:
        print(f" ❌ No lean-toolchain file found in {bump_branch} branch")
        return False
    # Extract the next version from the bump branch name (bump/v4.X.0)
    next_version = bump_branch.split('/')[1]
    if not (content.startswith("leanprover/lean4:nightly-") or
            content.startswith(f"leanprover/lean4:{next_version}")):
        print(f" ❌ Bump branch toolchain should use either nightly or {next_version}, but found: {content}")
        return False
    print(f" ✅ Bump branch correctly uses toolchain: {content}")
    return True
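For reference, the removed version-bump arithmetic can be checked in isolation. `strip_rc_suffix` is defined elsewhere in the script, so a minimal stand-in is used here:

```python
def strip_rc_suffix(version):
    # Minimal stand-in for the helper defined elsewhere: "4.7.0-rc1" -> "4.7.0"
    return version.split('-')[0]

def get_next_version(version):
    """Calculate the next version number, ignoring RC suffix."""
    base_version = strip_rc_suffix(version.lstrip('v'))
    major, minor, patch = map(int, base_version.split('.'))
    # Next version is always .0
    return f"v{major}.{minor + 1}.0"

print(get_next_version("v4.16.0-rc1"))  # v4.17.0
print(get_next_version("v4.16.2"))      # v4.17.0
```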
def main():
github_token = get_github_token()
@@ -219,6 +183,7 @@ def main():
print(f" ✅ Release notes look good.")
else:
previous_minor_version = version_minor - 1
previous_stable_branch = f"releases/v{version_major}.{previous_minor_version}.0"
previous_release = f"v{version_major}.{previous_minor_version}.0"
print(f" ❌ Release notes not published. Please run `script/release_notes.py --since {previous_release}` on branch `{branch_name}`.")
else:
@@ -236,7 +201,6 @@ def main():
branch = repo["branch"]
check_stable = repo["stable-branch"]
check_tag = repo.get("toolchain-tag", True)
check_bump = repo.get("bump-branch", False)
print(f"\nRepository: {name}")
@@ -256,54 +220,15 @@ def main():
if check_tag:
if not tag_exists(url, toolchain, github_token):
print(f" ❌ Tag {toolchain} does not exist. Run `script/push_repo_release_tag.py {extract_org_repo_from_url(url)} {branch} {toolchain}`.")
else:
print(f" ✅ Tag {toolchain} exists")
continue
print(f" ✅ Tag {toolchain} exists")
# Only check merging into stable if stable-branch is true and not a release candidate
if check_stable and not is_release_candidate(toolchain):
if not is_merged_into_stable(url, toolchain, "stable", github_token):
print(f" ❌ Tag {toolchain} is not merged into stable")
else:
print(f" ✅ Tag {toolchain} is merged into stable")
        # Check for bump branch if configured
        if check_bump:
            next_version = get_next_version(toolchain)
            bump_branch = f"bump/{next_version}"
            if branch_exists(url, bump_branch, github_token):
                print(f" ✅ Bump branch {bump_branch} exists")
                check_bump_branch_toolchain(url, bump_branch, github_token)
            else:
                print(f" ❌ Bump branch {bump_branch} does not exist")

    # Check lean4 master branch for next development cycle
    print("\nChecking lean4 master branch configuration...")
    next_version = get_next_version(toolchain)
    next_minor = int(next_version.split('.')[1])
    cmake_content = get_branch_content(lean_repo_url, "master", "src/CMakeLists.txt", github_token)
    if cmake_content is None:
        print(" ❌ Could not retrieve CMakeLists.txt from master")
    else:
        cmake_lines = cmake_content.splitlines()
        # Find the actual minor version in CMakeLists.txt
        for line in cmake_lines:
            if line.strip().startswith("set(LEAN_VERSION_MINOR "):
                actual_minor = int(line.split()[-1].rstrip(")"))
                version_minor_correct = actual_minor >= next_minor
                break
        else:
            version_minor_correct = False
        is_release_correct = any(
            l.strip().startswith("set(LEAN_VERSION_IS_RELEASE 0)")
            for l in cmake_lines
        )
        if not (version_minor_correct and is_release_correct):
            print(" ❌ lean4 needs a \"begin dev cycle\" PR")
        else:
            print(" ✅ lean4 master branch is configured for next development cycle")
continue
print(f" ✅ Tag {toolchain} is merged into stable")
if __name__ == "__main__":
main()

View File

@@ -65,24 +65,6 @@ def format_markdown_description(pr_number, description):
    link = f"[#{pr_number}](https://github.com/leanprover/lean4/pull/{pr_number})"
    return f"{link} {description}"
def count_commit_types(commits):
    counts = {
        'total': len(commits),
        'feat': 0,
        'fix': 0,
        'refactor': 0,
        'doc': 0,
        'chore': 0
    }
    for _, first_line, _ in commits:
        for commit_type in ['feat:', 'fix:', 'refactor:', 'doc:', 'chore:']:
            if first_line.startswith(commit_type):
                counts[commit_type.rstrip(':')] += 1
                break
    return counts
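The counting logic above can be exercised standalone. Commits are `(hash, first_line, full_message)` tuples, matching the unpacking in the loop; the entries below are illustrative only (note that a `test:` prefix is intentionally not counted):

```python
def count_commit_types(commits):
    # Tally commits by their conventional-commit prefix.
    counts = {'total': len(commits), 'feat': 0, 'fix': 0, 'refactor': 0, 'doc': 0, 'chore': 0}
    for _, first_line, _ in commits:
        for commit_type in ['feat:', 'fix:', 'refactor:', 'doc:', 'chore:']:
            if first_line.startswith(commit_type):
                counts[commit_type.rstrip(':')] += 1
                break
    return counts

commits = [
    ("0dab5e4", "test: exposeNames", ""),
    ("18388f1", "fix: improve exposeNames", ""),
    ("e109fd2", "feat: add exposeNames tactic", ""),
]
print(count_commit_types(commits))
# {'total': 3, 'feat': 1, 'fix': 1, 'refactor': 0, 'doc': 0, 'chore': 0}
```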
def main():
parser = argparse.ArgumentParser(description='Generate release notes from Git commits')
parser.add_argument('--since', required=True, help='Git tag to generate release notes since')
@@ -110,18 +92,14 @@ def main():
pr_number = check_pr_number(first_line)
if not pr_number:
sys.stderr.write(f"No PR number found in commit:\n{commit_hash}\n{first_line}\n")
sys.stderr.write(f"No PR number found in {first_line}\n")
continue
# Remove the first line from the full_message for further processing
body = full_message[len(first_line):].strip()
paragraphs = body.split('\n\n')
description = paragraphs[0] if len(paragraphs) > 0 else ""
# If there's a third paragraph and second ends with colon, include it
if len(paragraphs) > 1 and description.endswith(':'):
description = description + '\n\n' + paragraphs[1]
second_paragraph = paragraphs[0] if len(paragraphs) > 0 else ""
labels = fetch_pr_labels(pr_number)
@@ -131,7 +109,7 @@ def main():
report_errors = first_line.startswith("feat:") or first_line.startswith("fix:")
if not description.startswith("This PR "):
if not second_paragraph.startswith("This PR "):
if report_errors:
sys.stderr.write(f"No PR description found in commit:\n{commit_hash}\n{first_line}\n{body}\n\n")
fallback_description = re.sub(r":$", "", first_line.split(" ", 1)[1]).rsplit(" (#", 1)[0]
@@ -139,7 +117,7 @@ def main():
else:
continue
else:
markdown_description = format_markdown_description(pr_number, description.replace("This PR ", ""))
markdown_description = format_markdown_description(pr_number, second_paragraph.replace("This PR ", ""))
changelog_labels = [label for label in labels if label.startswith("changelog-")]
if len(changelog_labels) > 1:
@@ -154,13 +132,6 @@ def main():
for label in changelog_labels:
changelog[label].append((pr_number, markdown_description))
    # Add commit type counting
    counts = count_commit_types(commits)
    print(f"For this release, {counts['total']} changes landed. "
          f"In addition to the {counts['feat']} feature additions and {counts['fix']} fixes listed below "
          f"there were {counts['refactor']} refactoring changes, {counts['doc']} documentation improvements "
          f"and {counts['chore']} chores.\n")
section_order = sort_sections_order()
sorted_changelog = sorted(changelog.items(), key=lambda item: section_order.index(format_section_title(item[0])) if format_section_title(item[0]) in section_order else len(section_order))

View File

@@ -4,7 +4,6 @@ repositories:
toolchain-tag: true
stable-branch: true
branch: main
bump-branch: true
dependencies: []
- name: lean4checker
@@ -77,7 +76,6 @@ repositories:
toolchain-tag: true
stable-branch: true
branch: master
bump-branch: true
dependencies:
- Aesop
- ProofWidgets4

View File

@@ -10,7 +10,7 @@ endif()
include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
set(LEAN_VERSION_MINOR 18)
set(LEAN_VERSION_MINOR 16)
set(LEAN_VERSION_PATCH 0)
set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")

View File

@@ -39,4 +39,3 @@ import Init.While
import Init.Syntax
import Init.Internal
import Init.Try
import Init.BinderNameHint

View File

@@ -1,38 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Init.Prelude
import Init.Tactics
set_option linter.unusedVariables false in
/--
The expression `binderNameHint v binder e` defined to be `e`.
If it is used on the right-hand side of an equation that is used for rewriting by `rw` or `simp`,
and `v` is a local variable, and `binder` is an expression that (after beta-reduction) is a binder
(`fun w => …` or `∀ w, …`), then it will rename `v` to the name used in that binder, and remove
the `binderNameHint`.
A typical use of this gadget would be as follows; the gadget ensures that after rewriting, the local
variable is still `name`, and not `x`:
```
theorem all_eq_not_any_not (l : List α) (p : α → Bool) :
    l.all p = !l.any fun x => binderNameHint x p (!p x) := sorry

example (names : List String) : names.all (fun name => "Waldo".isPrefixOf name) = true := by
  rw [all_eq_not_any_not]
```
If `binder` is not a binder, then the name of `v` attains a macro scope. This only matters when the
resulting term is used in a non-hygienic way, e.g. in termination proofs for well-founded recursion.
This gadget is supported by `simp`, `dsimp` and `rw` in the right-hand-side of an equation, but not
in hypotheses or by other tactics.
-/
@[simp ]
def binderNameHint {α : Sort u} {β : Sort v} {γ : Sort w} (v : α) (binder : β) (e : γ) : γ := e

View File

@@ -9,49 +9,25 @@ import Init.RCases
import Init.ByCases
-- Mapping by a function with a left inverse is injective.
theorem map_inj_of_left_inverse [Functor m] [LawfulFunctor m] {f : α → β}
    (w : ∃ g : β → α, ∀ x, g (f x) = x) {x y : m α} :
    f <$> x = f <$> y ↔ x = y := by
  constructor
  · intro h
    rcases w with ⟨g, w⟩
    replace h := congrArg (g <$> ·) h
    simpa [w] using h
  · rintro rfl
    rfl
theorem map_inj_of_left_inverse [Applicative m] [LawfulApplicative m] {f : α → β}
    (w : ∃ g : β → α, ∀ x, g (f x) = x) {x y : m α}
    (h : f <$> x = f <$> y) : x = y := by
  rcases w with ⟨g, w⟩
  replace h := congrArg (g <$> ·) h
  simpa [w] using h
-- Mapping by an injective function is injective, as long as the domain is nonempty.
@[simp] theorem map_inj_right_of_nonempty [Functor m] [LawfulFunctor m] [Nonempty α] {f : α → β}
    (w : ∀ {x y}, f x = f y → x = y) {x y : m α} :
    f <$> x = f <$> y ↔ x = y := by
  constructor
  · intro h
    apply (map_inj_of_left_inverse ?_).mp h
    let ⟨a⟩ := ‹Nonempty α›
    refine ⟨?_, ?_⟩
    · intro b
      by_cases p : ∃ a, f a = b
      · exact Exists.choose p
      · exact a
    · intro b
      simp only [exists_apply_eq_apply, ↓reduceDIte]
      apply w
      apply Exists.choose_spec (p := fun a => f a = f b)
  · rintro rfl
    rfl
@[simp] theorem map_inj_right [Monad m] [LawfulMonad m]
    {f : α → β} (h : ∀ {x y : α}, f x = f y → x = y) {x y : m α} :
    f <$> x = f <$> y ↔ x = y := by
  by_cases hempty : Nonempty α
  · exact map_inj_right_of_nonempty h
  · constructor
    · intro h'
      have (z : m α) : z = (do let a ← z; let b ← pure (f a); x) := by
        conv => lhs; rw [← bind_pure z]
        congr; funext a
        exact (hempty ⟨a⟩).elim
      rw [this x, this y]
      rw [← bind_assoc, map_eq_pure_bind, h', map_eq_pure_bind, bind_assoc]
    · intro h'
      rw [h']
theorem map_inj_of_inj [Applicative m] [LawfulApplicative m] [Nonempty α] {f : α → β}
    (w : ∀ x y, f x = f y → x = y) {x y : m α}
    (h : f <$> x = f <$> y) : x = y := by
  apply map_inj_of_left_inverse ?_ h
  let ⟨a⟩ := ‹Nonempty α›
  refine ⟨?_, ?_⟩
  · intro b
    by_cases p : ∃ a, f a = b
    · exact Exists.choose p
    · exact a
  · intro b
    simp only [exists_apply_eq_apply, ↓reduceDIte]
    apply w
    apply Exists.choose_spec (p := fun a => f a = f b)

View File

@@ -593,9 +593,7 @@ set_option linter.unusedVariables.funArgs false in
be available and then calls `f` on the result.
`prio`, if provided, is the priority of the task.
If `sync` is set to true, `f` is executed on the current thread if `x` has already finished and
otherwise on the thread that `x` finished on. `prio` is ignored in this case. This should only be
done when executing `f` is cheap and non-blocking.
If `sync` is set to true, `f` is executed on the current thread if `x` has already finished.
-/
@[noinline, extern "lean_task_map"]
protected def map (f : α → β) (x : Task α) (prio := Priority.default) (sync := false) : Task β :=
@@ -609,9 +607,7 @@ for the value of `x` to be available and then calls `f` on the result,
resulting in a new task which is then run for a result.
`prio`, if provided, is the priority of the task.
If `sync` is set to true, `f` is executed on the current thread if `x` has already finished and
otherwise on the thread that `x` finished on. `prio` is ignored in this case. This should only be
done when executing `f` is cheap and non-blocking.
If `sync` is set to true, `f` is executed on the current thread if `x` has already finished.
-/
@[noinline, extern "lean_task_bind"]
protected def bind (x : Task α) (f : α → Task β) (prio := Priority.default) (sync := false) :

View File

@@ -26,4 +26,3 @@ import Init.Data.Array.Lex
import Init.Data.Array.Range
import Init.Data.Array.Erase
import Init.Data.Array.Zip
import Init.Data.Array.InsertIdx

View File

@@ -953,11 +953,7 @@ def eraseP (as : Array α) (p : α → Bool) : Array α :=
  | none => as
  | some i => as.eraseIdx i

/-- Insert element `a` at position `i`.

This function takes worst case O(n) time because
it has to swap the inserted element into place.
-/
/-- Insert element `a` at position `i`. -/
@[inline] def insertIdx (as : Array α) (i : Nat) (a : α) (_ : i ≤ as.size := by get_elem_tactic) : Array α :=
  let rec @[semireducible] -- This is otherwise irreducible because it uses well-founded recursion.
    loop (as : Array α) (j : Fin as.size) :=

View File

@@ -12,9 +12,9 @@ import Init.ByCases
namespace Array
private theorem rel_of_isEqvAux
    {r : α → α → Bool} {xs ys : Array α} (hsz : xs.size = ys.size) {i : Nat} (hi : i ≤ xs.size)
    (heqv : Array.isEqvAux xs ys hsz r i hi)
    {j : Nat} (hj : j < i) : r (xs[j]'(Nat.lt_of_lt_of_le hj hi)) (ys[j]'(Nat.lt_of_lt_of_le hj (hsz ▸ hi))) := by
    {r : α → α → Bool} {a b : Array α} (hsz : a.size = b.size) {i : Nat} (hi : i ≤ a.size)
    (heqv : Array.isEqvAux a b hsz r i hi)
    {j : Nat} (hj : j < i) : r (a[j]'(Nat.lt_of_lt_of_le hj hi)) (b[j]'(Nat.lt_of_lt_of_le hj (hsz ▸ hi))) := by
induction i with
| zero => contradiction
| succ i ih =>
@@ -27,8 +27,8 @@ private theorem rel_of_isEqvAux
subst hj'
exact heqv.left
private theorem isEqvAux_of_rel {r : α → α → Bool} {xs ys : Array α} (hsz : xs.size = ys.size) {i : Nat} (hi : i ≤ xs.size)
    (w : ∀ j, (hj : j < i) → r (xs[j]'(Nat.lt_of_lt_of_le hj hi)) (ys[j]'(Nat.lt_of_lt_of_le hj (hsz ▸ hi)))) : Array.isEqvAux xs ys hsz r i hi := by
private theorem isEqvAux_of_rel {r : α → α → Bool} {a b : Array α} (hsz : a.size = b.size) {i : Nat} (hi : i ≤ a.size)
    (w : ∀ j, (hj : j < i) → r (a[j]'(Nat.lt_of_lt_of_le hj hi)) (b[j]'(Nat.lt_of_lt_of_le hj (hsz ▸ hi)))) : Array.isEqvAux a b hsz r i hi := by
induction i with
| zero => simp [Array.isEqvAux]
| succ i ih =>
@@ -36,23 +36,23 @@ private theorem isEqvAux_of_rel {r : α → α → Bool} {xs ys : Array α} (hsz
exact ⟨w i (Nat.lt_add_one i), ih _ fun j hj => w j (Nat.lt_add_right 1 hj)⟩
-- This is private as the forward direction of `isEqv_iff_rel` may be used.
private theorem rel_of_isEqv {r : α → α → Bool} {xs ys : Array α} :
    Array.isEqv xs ys r → ∃ h : xs.size = ys.size, ∀ (i : Nat) (h' : i < xs.size), r (xs[i]) (ys[i]'(h ▸ h')) := by
private theorem rel_of_isEqv {r : α → α → Bool} {a b : Array α} :
    Array.isEqv a b r → ∃ h : a.size = b.size, ∀ (i : Nat) (h' : i < a.size), r (a[i]) (b[i]'(h ▸ h')) := by
  simp only [isEqv]
  split <;> rename_i h
  · exact fun h' => ⟨h, fun i => rel_of_isEqvAux h (Nat.le_refl ..) h'⟩
· intro; contradiction
theorem isEqv_iff_rel {xs ys : Array α} {r} :
    Array.isEqv xs ys r ↔ ∃ h : xs.size = ys.size, ∀ (i : Nat) (h' : i < xs.size), r (xs[i]) (ys[i]'(h ▸ h')) :=
theorem isEqv_iff_rel {a b : Array α} {r} :
    Array.isEqv a b r ↔ ∃ h : a.size = b.size, ∀ (i : Nat) (h' : i < a.size), r (a[i]) (b[i]'(h ▸ h')) :=
  ⟨rel_of_isEqv, fun ⟨h, w⟩ => by
    simp only [isEqv, h, ↓reduceDIte]
    exact isEqvAux_of_rel h (by simp [h]) w⟩
theorem isEqv_eq_decide (xs ys : Array α) (r) :
    Array.isEqv xs ys r =
      if h : xs.size = ys.size then decide (∀ (i : Nat) (h' : i < xs.size), r (xs[i]) (ys[i]'(h ▸ h'))) else false := by
  by_cases h : Array.isEqv xs ys r
theorem isEqv_eq_decide (a b : Array α) (r) :
    Array.isEqv a b r =
      if h : a.size = b.size then decide (∀ (i : Nat) (h' : i < a.size), r (a[i]) (b[i]'(h ▸ h'))) else false := by
  by_cases h : Array.isEqv a b r
  · simp only [h, Bool.true_eq]
    simp only [isEqv_iff_rel] at h
    obtain ⟨h, w⟩ := h
@@ -63,24 +63,24 @@ theorem isEqv_eq_decide (xs ys : Array α) (r) :
Bool.not_eq_true]
simpa [isEqv_iff_rel] using h'
@[simp] theorem isEqv_toList [BEq α] (xs ys : Array α) : (xs.toList.isEqv ys.toList r) = (xs.isEqv ys r) := by
@[simp] theorem isEqv_toList [BEq α] (a b : Array α) : (a.toList.isEqv b.toList r) = (a.isEqv b r) := by
simp [isEqv_eq_decide, List.isEqv_eq_decide]
theorem eq_of_isEqv [DecidableEq α] (xs ys : Array α) (h : Array.isEqv xs ys (fun x y => x = y)) : xs = ys := by
theorem eq_of_isEqv [DecidableEq α] (a b : Array α) (h : Array.isEqv a b (fun x y => x = y)) : a = b := by
have ⟨h, h'⟩ := rel_of_isEqv h
exact ext _ _ h (fun i lt _ => by simpa using h' i lt)
private theorem isEqvAux_self (r : α → α → Bool) (hr : ∀ a, r a a) (xs : Array α) (i : Nat) (h : i ≤ xs.size) :
    Array.isEqvAux xs xs rfl r i h = true := by
private theorem isEqvAux_self (r : α → α → Bool) (hr : ∀ a, r a a) (a : Array α) (i : Nat) (h : i ≤ a.size) :
    Array.isEqvAux a a rfl r i h = true := by
  induction i with
  | zero => simp [Array.isEqvAux]
  | succ i ih =>
    simp_all only [isEqvAux, Bool.and_self]
theorem isEqv_self_beq [BEq α] [ReflBEq α] (xs : Array α) : Array.isEqv xs xs (· == ·) = true := by
theorem isEqv_self_beq [BEq α] [ReflBEq α] (a : Array α) : Array.isEqv a a (· == ·) = true := by
simp [isEqv, isEqvAux_self]
theorem isEqv_self [DecidableEq α] (xs : Array α) : Array.isEqv xs xs (· = ·) = true := by
theorem isEqv_self [DecidableEq α] (a : Array α) : Array.isEqv a a (· = ·) = true := by
simp [isEqv, isEqvAux_self]
instance [DecidableEq α] : DecidableEq (Array α) :=
@@ -89,22 +89,22 @@ instance [DecidableEq α] : DecidableEq (Array α) :=
| true => isTrue (eq_of_isEqv a b h)
| false => isFalse fun h' => by subst h'; rw [isEqv_self] at h; contradiction
theorem beq_eq_decide [BEq α] (xs ys : Array α) :
    (xs == ys) = if h : xs.size = ys.size then
      decide (∀ (i : Nat) (h' : i < xs.size), xs[i] == ys[i]'(h ▸ h')) else false := by
theorem beq_eq_decide [BEq α] (a b : Array α) :
    (a == b) = if h : a.size = b.size then
      decide (∀ (i : Nat) (h' : i < a.size), a[i] == b[i]'(h ▸ h')) else false := by
simp [BEq.beq, isEqv_eq_decide]
@[simp] theorem beq_toList [BEq α] (xs ys : Array α) : (xs.toList == ys.toList) = (xs == ys) := by
@[simp] theorem beq_toList [BEq α] (a b : Array α) : (a.toList == b.toList) = (a == b) := by
simp [beq_eq_decide, List.beq_eq_decide]
end Array
namespace List
@[simp] theorem isEqv_toArray [BEq α] (as bs : List α) : (as.toArray.isEqv bs.toArray r) = (as.isEqv bs r) := by
@[simp] theorem isEqv_toArray [BEq α] (a b : List α) : (a.toArray.isEqv b.toArray r) = (a.isEqv b r) := by
simp [isEqv_eq_decide, Array.isEqv_eq_decide]
@[simp] theorem beq_toArray [BEq α] (as bs : List α) : (as.toArray == bs.toArray) = (as == bs) := by
@[simp] theorem beq_toArray [BEq α] (a b : List α) : (a.toArray == b.toArray) = (a == b) := by
simp [beq_eq_decide, Array.beq_eq_decide]
end List
@@ -114,7 +114,7 @@ namespace Array
instance [BEq α] [LawfulBEq α] : LawfulBEq (Array α) where
rfl := by simp [BEq.beq, isEqv_self_beq]
eq_of_beq := by
rintro _ _ h
rintro a b h
simpa using h
end Array
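Independent of the variable renaming in this hunk, the resulting `BEq`/`DecidableEq` behavior on arrays can be checked quickly (an illustrative sketch):

```lean
#eval #[1, 2, 3] == #[1, 2, 3]  -- true
#eval #[1, 2] == #[1, 2, 3]     -- false
```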

View File

@@ -1,129 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Array.Lemmas
import Init.Data.List.Nat.InsertIdx
/-!
# insertIdx
Proves various lemmas about `Array.insertIdx`.
-/
open Function
open Nat
namespace Array
universe u
variable {α : Type u}
section InsertIdx
variable {a : α}
@[simp] theorem toList_insertIdx (a : Array α) (i x) (h) :
    (a.insertIdx i x h).toList = a.toList.insertIdx i x := by
  rcases a with ⟨a⟩
  simp

@[simp]
theorem insertIdx_zero (s : Array α) (x : α) : s.insertIdx 0 x = #[x] ++ s := by
  cases s
  simp
@[simp] theorem size_insertIdx {as : Array α} (h : n ≤ as.size) : (as.insertIdx n a).size = as.size + 1 := by
cases as
simp [List.length_insertIdx, h]
theorem eraseIdx_insertIdx (i : Nat) (l : Array α) (h : i ≤ l.size) :
(l.insertIdx i a).eraseIdx i (by simp; omega) = l := by
cases l
simp_all
theorem insertIdx_eraseIdx_of_ge {as : Array α}
(w₁ : i < as.size) (w₂ : j ≤ (as.eraseIdx i).size) (h : i ≤ j) :
(as.eraseIdx i).insertIdx j a =
(as.insertIdx (j + 1) a (by simp at w₂; omega)).eraseIdx i (by simp_all; omega) := by
cases as
simpa using List.insertIdx_eraseIdx_of_ge _ _ _ (by simpa) (by simpa)
theorem insertIdx_eraseIdx_of_le {as : Array α}
(w₁ : i < as.size) (w₂ : j ≤ (as.eraseIdx i).size) (h : j ≤ i) :
(as.eraseIdx i).insertIdx j a =
(as.insertIdx j a (by simp at w₂; omega)).eraseIdx (i + 1) (by simp_all) := by
cases as
simpa using List.insertIdx_eraseIdx_of_le _ _ _ (by simpa) (by simpa)
theorem insertIdx_comm (a b : α) (i j : Nat) (l : Array α) (_ : i ≤ j) (_ : j ≤ l.size) :
(l.insertIdx i a).insertIdx (j + 1) b (by simpa) =
(l.insertIdx j b).insertIdx i a (by simp; omega) := by
cases l
simpa using List.insertIdx_comm a b i j _ (by simpa) (by simpa)
theorem mem_insertIdx {l : Array α} {h : i ≤ l.size} : a ∈ l.insertIdx i b h ↔ a = b ∨ a ∈ l := by
cases l
simpa using List.mem_insertIdx (by simpa)
@[simp]
theorem insertIdx_size_self (l : Array α) (x : α) : l.insertIdx l.size x = l.push x := by
cases l
simp
theorem getElem_insertIdx {as : Array α} {x : α} {i k : Nat} (w : i ≤ as.size) (h : k < (as.insertIdx i x).size) :
(as.insertIdx i x)[k] =
if h₁ : k < i then
as[k]'(by simp [size_insertIdx] at h; omega)
else
if h₂ : k = i then
x
else
as[k-1]'(by simp [size_insertIdx] at h; omega) := by
cases as
simp [List.getElem_insertIdx, w]
theorem getElem_insertIdx_of_lt {as : Array α} {x : α} {i k : Nat} (w : i ≤ as.size) (h : k < i) :
(as.insertIdx i x)[k]'(by simp; omega) = as[k] := by
simp [getElem_insertIdx, w, h]
theorem getElem_insertIdx_self {as : Array α} {x : α} {i : Nat} (w : i ≤ as.size) :
(as.insertIdx i x)[i]'(by simp; omega) = x := by
simp [getElem_insertIdx, w]
theorem getElem_insertIdx_of_gt {as : Array α} {x : α} {i k : Nat} (w : k ≤ as.size) (h : k > i) :
(as.insertIdx i x)[k]'(by simp; omega) = as[k - 1]'(by omega) := by
simp [getElem_insertIdx, w, h]
rw [dif_neg (by omega), dif_neg (by omega)]
theorem getElem?_insertIdx {l : Array α} {x : α} {i k : Nat} (h : i ≤ l.size) :
(l.insertIdx i x)[k]? =
if k < i then
l[k]?
else
if k = i then
if k ≤ l.size then some x else none
else
l[k-1]? := by
cases l
simp [List.getElem?_insertIdx, h]
theorem getElem?_insertIdx_of_lt {l : Array α} {x : α} {i k : Nat} (w : i ≤ l.size) (h : k < i) :
(l.insertIdx i x)[k]? = l[k]? := by
rw [getElem?_insertIdx, if_pos h]
theorem getElem?_insertIdx_self {l : Array α} {x : α} {i : Nat} (w : i ≤ l.size) :
(l.insertIdx i x)[i]? = some x := by
rw [getElem?_insertIdx, if_neg (by omega), if_pos rfl, if_pos w]
theorem getElem?_insertIdx_of_ge {l : Array α} {x : α} {i k : Nat} (w : i < k) (h : k ≤ l.size) :
(l.insertIdx i x)[k]? = l[k - 1]? := by
rw [getElem?_insertIdx, if_neg (by omega), if_neg (by omega)]
end InsertIdx
end Array
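For orientation, a small sanity check of the behavior the lemmas above describe (illustrative only, not part of the diff; the `i ≤ as.size` side condition is discharged by `insertIdx`'s default proof automation for these literals):

```lean
-- Inserting 3 at index 2 shifts the suffix of #[1, 2, 4] right by one.
#eval #[1, 2, 4].insertIdx 2 3  -- #[1, 2, 3, 4]
```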

View File

@@ -1440,10 +1440,9 @@ theorem _root_.List.filterMap_toArray (f : α → Option β) (l : List α) :
rcases l with ⟨l⟩
simp [h]
@[simp] theorem filterMap_eq_map (f : α → β) (w : stop = as.size) :
filterMap (some f) as 0 stop = map f as := by
subst w
cases as
@[simp] theorem filterMap_eq_map (f : α → β) : filterMap (some f) = map f := by
funext l
cases l
simp
theorem filterMap_some_fun : filterMap (some : α → Option α) = id := by
@@ -1471,10 +1470,10 @@ theorem size_filterMap_le (f : α → Option β) (l : Array α) :
simp [List.filterMap_length_eq_length]
@[simp]
theorem filterMap_eq_filter (p : α → Bool) (w : stop = as.size) :
filterMap (Option.guard (p ·)) as 0 stop = filter p as := by
subst w
cases as
theorem filterMap_eq_filter (p : α → Bool) :
filterMap (Option.guard (p ·)) = filter p := by
funext l
cases l
simp
theorem filterMap_filterMap (f : α → Option β) (g : β → Option γ) (l : Array α) :
@@ -1847,7 +1846,7 @@ theorem append_eq_filterMap_iff {f : α → Option β} :
theorem map_eq_append_iff {f : α → β} :
map f l = L₁ ++ L₂ ↔ ∃ l₁ l₂, l = l₁ ++ l₂ ∧ map f l₁ = L₁ ∧ map f l₂ = L₂ := by
simp only [← filterMap_eq_map, filterMap_eq_append_iff]
rw [← filterMap_eq_map, filterMap_eq_append_iff]
theorem append_eq_map_iff {f : α → β} :
L₁ ++ L₂ = map f l ↔ ∃ l₁ l₂, l = l₁ ++ l₂ ∧ map f l₁ = L₁ ∧ map f l₂ = L₂ := by
@@ -1878,8 +1877,7 @@ theorem append_eq_map_iff {f : α → β} :
rw [← flatten_map_toArray]
simp
-- We set this to lower priority so that `flatten_toArray_map` is applied first when relevant.
@[simp 500] theorem flatten_toArray (l : List (Array α)) :
theorem flatten_toArray (l : List (Array α)) :
l.toArray.flatten = (l.map Array.toList).flatten.toArray := by
apply ext'
simp
@@ -2141,8 +2139,7 @@ theorem flatMap_eq_foldl (f : α → Array β) (l : Array α) :
| nil => simp
| cons a l ih =>
intro l'
rw [List.foldl_cons, ih]
simp [toArray_append]
simp [ih ((l' ++ (f a).toList)), toArray_append]
/-! ### mkArray -/
@@ -3655,7 +3652,7 @@ theorem toListRev_toArray (l : List α) : l.toArray.toListRev = l.reverse := by
l.toArray.mapM f = List.toArray <$> l.mapM f := by
simp only [← mapM'_eq_mapM, mapM_eq_foldlM]
suffices ∀ init : Array β,
Array.foldlM (fun bs a => bs.push <$> f a) init l.toArray = (init ++ toArray ·) <$> mapM' f l by
foldlM (fun bs a => bs.push <$> f a) init l.toArray = (init ++ toArray ·) <$> mapM' f l by
simpa using this #[]
intro init
induction l generalizing init with

View File

@@ -8,9 +8,6 @@ import Init.Data.Array.Basic
import Init.Data.Nat.Lemmas
import Init.Data.Range
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace Array
/--

View File

@@ -434,57 +434,3 @@ theorem mapIdx_eq_mkArray_iff {l : Array α} {f : Nat → α → β} {b : β} :
simp [List.mapIdx_reverse]
end Array
namespace List
theorem mapFinIdxM_toArray [Monad m] [LawfulMonad m] (l : List α)
(f : (i : Nat) → α → (h : i < l.length) → m β) :
l.toArray.mapFinIdxM f = toArray <$> l.mapFinIdxM f := by
let rec go (i : Nat) (acc : Array β) (inv : i + acc.size = l.length) :
Array.mapFinIdxM.map l.toArray f i acc.size inv acc
= toArray <$> mapFinIdxM.go l f (l.drop acc.size) acc
(by simp [Nat.sub_add_cancel (Nat.le.intro (Nat.add_comm _ _ ▸ inv))]) := by
match i with
| 0 =>
rw [Nat.zero_add] at inv
simp only [Array.mapFinIdxM.map, inv, drop_length, mapFinIdxM.go, map_pure]
| k + 1 =>
conv => enter [2, 2, 3]; rw [← getElem_cons_drop l acc.size (by omega)]
simp only [Array.mapFinIdxM.map, mapFinIdxM.go, _root_.map_bind]
congr; funext x
conv => enter [1, 4]; rw [← Array.size_push _ x]
conv => enter [2, 2, 3]; rw [← Array.size_push _ x]
refine go k (acc.push x) _
simp only [Array.mapFinIdxM, mapFinIdxM]
exact go _ #[] _
theorem mapIdxM_toArray [Monad m] [LawfulMonad m] (l : List α)
(f : Nat → α → m β) :
l.toArray.mapIdxM f = toArray <$> l.mapIdxM f := by
let rec go (bs : List α) (acc : Array β) (inv : bs.length + acc.size = l.length) :
mapFinIdxM.go l (fun i a h => f i a) bs acc inv = mapIdxM.go f bs acc := by
match bs with
| [] => simp only [mapFinIdxM.go, mapIdxM.go]
| x :: xs => simp only [mapFinIdxM.go, mapIdxM.go, go]
unfold Array.mapIdxM
rw [mapFinIdxM_toArray]
simp only [mapFinIdxM, mapIdxM]
rw [go]
end List
namespace Array
theorem toList_mapFinIdxM [Monad m] [LawfulMonad m] (l : Array α)
(f : (i : Nat) → α → (h : i < l.size) → m β) :
toList <$> l.mapFinIdxM f = l.toList.mapFinIdxM f := by
rw [List.mapFinIdxM_toArray]
simp only [Functor.map_map, id_map']
theorem toList_mapIdxM [Monad m] [LawfulMonad m] (l : Array α)
(f : Nat → α → m β) :
toList <$> l.mapIdxM f = l.toList.mapIdxM f := by
rw [List.mapIdxM_toArray]
simp only [Functor.map_map, id_map']
end Array

View File

@@ -221,121 +221,6 @@ theorem forIn_pure_yield_eq_foldl [Monad m] [LawfulMonad m]
cases l
simp
end Array
namespace List
theorem filterM_toArray [Monad m] [LawfulMonad m] (l : List α) (p : α → m Bool) :
l.toArray.filterM p = toArray <$> l.filterM p := by
simp only [Array.filterM, filterM, foldlM_toArray, bind_pure_comp, Functor.map_map]
conv => lhs; rw [← reverse_nil]
generalize [] = acc
induction l generalizing acc with simp
| cons x xs ih =>
congr; funext b
cases b
· simp only [Bool.false_eq_true, reduceIte, pure_bind, cond_false]
exact ih acc
· simp only [reduceIte, reverse_cons, pure_bind, cond_true]
exact ih (x :: acc)
/-- Variant of `filterM_toArray` with a side condition for the stop position. -/
@[simp] theorem filterM_toArray' [Monad m] [LawfulMonad m] (l : List α) (p : α → m Bool) (w : stop = l.length) :
l.toArray.filterM p 0 stop = toArray <$> l.filterM p := by
subst w
rw [filterM_toArray]
theorem filterRevM_toArray [Monad m] [LawfulMonad m] (l : List α) (p : α → m Bool) :
l.toArray.filterRevM p = toArray <$> l.filterRevM p := by
simp [Array.filterRevM, filterRevM]
rw [← foldlM_reverse, foldlM_toArray, Array.filterM, filterM_toArray]
simp only [filterM, bind_pure_comp, Functor.map_map, reverse_toArray, reverse_reverse]
/-- Variant of `filterRevM_toArray` with a side condition for the start position. -/
@[simp] theorem filterRevM_toArray' [Monad m] [LawfulMonad m] (l : List α) (p : α → m Bool) (w : start = l.length) :
l.toArray.filterRevM p start 0 = toArray <$> l.filterRevM p := by
subst w
rw [filterRevM_toArray]
theorem filterMapM_toArray [Monad m] [LawfulMonad m] (l : List α) (f : α → m (Option β)) :
l.toArray.filterMapM f = toArray <$> l.filterMapM f := by
simp [Array.filterMapM, filterMapM]
conv => lhs; rw [← reverse_nil]
generalize [] = acc
induction l generalizing acc with simp [filterMapM.loop]
| cons x xs ih =>
congr; funext o
cases o
· simp only [pure_bind]; exact ih acc
· simp only [pure_bind]; rw [← List.reverse_cons]; exact ih _
/-- Variant of `filterMapM_toArray` with a side condition for the stop position. -/
@[simp] theorem filterMapM_toArray' [Monad m] [LawfulMonad m] (l : List α) (f : α → m (Option β)) (w : stop = l.length) :
l.toArray.filterMapM f 0 stop = toArray <$> l.filterMapM f := by
subst w
rw [filterMapM_toArray]
@[simp] theorem flatMapM_toArray [Monad m] [LawfulMonad m] (l : List α) (f : α → m (Array β)) :
l.toArray.flatMapM f = toArray <$> l.flatMapM (fun a => Array.toList <$> f a) := by
simp only [Array.flatMapM, bind_pure_comp, foldlM_toArray, flatMapM]
conv => lhs; arg 2; change [].reverse.flatten.toArray
generalize [] = acc
induction l generalizing acc with
| nil => simp only [foldlM_nil, flatMapM.loop, map_pure]
| cons x xs ih =>
simp only [foldlM_cons, bind_map_left, flatMapM.loop, _root_.map_bind]
congr; funext a
conv => lhs; rw [Array.toArray_append, flatten_concat, reverse_cons]
exact ih _
end List
namespace Array
@[congr] theorem filterM_congr [Monad m] {as bs : Array α} (w : as = bs)
{p : α → m Bool} {q : α → m Bool} (h : ∀ a, p a = q a) :
as.filterM p = bs.filterM q := by
subst w
simp [filterM, h]
@[congr] theorem filterRevM_congr [Monad m] {as bs : Array α} (w : as = bs)
{p : α → m Bool} {q : α → m Bool} (h : ∀ a, p a = q a) :
as.filterRevM p = bs.filterRevM q := by
subst w
simp [filterRevM, h]
@[congr] theorem filterMapM_congr [Monad m] {as bs : Array α} (w : as = bs)
{f : α → m (Option β)} {g : α → m (Option β)} (h : ∀ a, f a = g a) :
as.filterMapM f = bs.filterMapM g := by
subst w
simp [filterMapM, h]
@[congr] theorem flatMapM_congr [Monad m] {as bs : Array α} (w : as = bs)
{f : α → m (Array β)} {g : α → m (Array β)} (h : ∀ a, f a = g a) :
as.flatMapM f = bs.flatMapM g := by
subst w
simp [flatMapM, h]
theorem toList_filterM [Monad m] [LawfulMonad m] (a : Array α) (p : α → m Bool) :
toList <$> a.filterM p = a.toList.filterM p := by
rw [List.filterM_toArray]
simp only [Functor.map_map, id_map']
theorem toList_filterRevM [Monad m] [LawfulMonad m] (a : Array α) (p : α → m Bool) :
toList <$> a.filterRevM p = a.toList.filterRevM p := by
rw [List.filterRevM_toArray]
simp only [Functor.map_map, id_map']
theorem toList_filterMapM [Monad m] [LawfulMonad m] (a : Array α) (f : α → m (Option β)) :
toList <$> a.filterMapM f = a.toList.filterMapM f := by
rw [List.filterMapM_toArray]
simp only [Functor.map_map, id_map']
theorem toList_flatMapM [Monad m] [LawfulMonad m] (a : Array α) (f : α → m (Array β)) :
toList <$> a.flatMapM f = a.toList.flatMapM (fun a => toList <$> f a) := by
rw [List.flatMapM_toArray]
simp only [Functor.map_map, id_map']
/-! ### Recognizing higher order functions using a function that only depends on the value. -/
/--
@@ -375,20 +260,20 @@ and simplifies these to the function directly taking the value.
simp
rw [List.mapM_subtype hf]
@[simp] theorem filterMapM_subtype [Monad m] [LawfulMonad m] {p : α → Prop} {l : Array { x // p x }}
{f : { x // p x } → m (Option β)} {g : α → m (Option β)} (hf : ∀ x h, f ⟨x, h⟩ = g x) (w : stop = l.size) :
l.filterMapM f 0 stop = l.unattach.filterMapM g := by
subst w
rcases l with ⟨l⟩
simp
rw [List.filterMapM_subtype hf]
-- Without `filterMapM_toArray` relating `filterMapM` on `List` and `Array` we can't prove this yet:
-- @[simp] theorem filterMapM_subtype [Monad m] [LawfulMonad m] {p : α → Prop} {l : Array { x // p x }}
-- {f : { x // p x } → m (Option β)} {g : α → m (Option β)} (hf : ∀ x h, f ⟨x, h⟩ = g x) :
-- l.filterMapM f = l.unattach.filterMapM g := by
-- rcases l with ⟨l⟩
-- simp
-- rw [List.filterMapM_subtype hf]
@[simp] theorem flatMapM_subtype [Monad m] [LawfulMonad m] {p : α → Prop} {l : Array { x // p x }}
{f : { x // p x } → m (Array β)} {g : α → m (Array β)} (hf : ∀ x h, f ⟨x, h⟩ = g x) :
(l.flatMapM f) = l.unattach.flatMapM g := by
rcases l with ⟨l⟩
simp
rw [List.flatMapM_subtype]
simp [hf]
-- Without `flatMapM_toArray` relating `flatMapM` on `List` and `Array` we can't prove this yet:
-- @[simp] theorem flatMapM_subtype [Monad m] [LawfulMonad m] {p : α → Prop} {l : Array { x // p x }}
-- {f : { x // p x } → m (Array β)} {g : α → m (Array β)} (hf : ∀ x h, f ⟨x, h⟩ = g x) :
-- (l.flatMapM f) = l.unattach.flatMapM g := by
-- rcases l with ⟨l⟩
-- simp
-- rw [List.flatMapM_subtype hf]
end Array

View File

@@ -675,22 +675,6 @@ def ofBoolListLE : (bs : List Bool) → BitVec bs.length
| [] => 0#0
| b :: bs => concat (ofBoolListLE bs) b
/-! ## Overflow -/
/-- `uaddOverflow x y` returns `true` if addition of `x` and `y` results in *unsigned* overflow.
SMT-Lib name: `bvuaddo`.
-/
def uaddOverflow {w : Nat} (x y : BitVec w) : Bool := x.toNat + y.toNat ≥ 2 ^ w
/-- `saddOverflow x y` returns `true` if addition of `x` and `y` results in *signed* overflow,
treating `x` and `y` as 2's complement signed bitvectors.
SMT-Lib name: `bvsaddo`.
-/
def saddOverflow {w : Nat} (x y : BitVec w) : Bool :=
(x.toInt + y.toInt ≥ 2 ^ (w - 1)) || (x.toInt + y.toInt < - 2 ^ (w - 1))
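A quick sanity check of the two predicates (illustrative only, not part of the diff):

```lean
-- 200 + 100 = 300 ≥ 2^8, so the unsigned addition wraps.
#eval BitVec.uaddOverflow (200#8) (100#8)  -- true
-- As signed 8-bit values, 100 + 100 = 200 exceeds 2^7 - 1 = 127.
#eval BitVec.saddOverflow (100#8) (100#8)  -- true
```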
/- ### reverse -/
/-- Reverse the bits in a bitvector. -/

View File

@@ -6,7 +6,6 @@ Authors: Harun Khan, Abdalrhman M Mohamed, Joe Hendrix, Siddharth Bhat
prelude
import Init.Data.BitVec.Folds
import Init.Data.Nat.Mod
import Init.Data.Int.LemmasAux
/-!
# Bitblasting of bitvectors
@@ -1231,53 +1230,4 @@ theorem shiftRight_eq_ushiftRightRec (x : BitVec w₁) (y : BitVec w₂) :
· simp [of_length_zero]
· simp [ushiftRightRec_eq]
/-! ### Overflow definitions -/
/-- Unsigned addition overflows iff the final carry bit of the addition circuit is `true`. -/
theorem uaddOverflow_eq {w : Nat} (x y : BitVec w) :
uaddOverflow x y = (x.setWidth (w + 1) + y.setWidth (w + 1)).msb := by
simp [uaddOverflow, msb_add, msb_setWidth, carry]
theorem saddOverflow_eq {w : Nat} (x y : BitVec w) :
saddOverflow x y = (x.msb == y.msb && !((x + y).msb == x.msb)) := by
simp only [saddOverflow]
rcases w with _|w
· revert x y; decide
· have := le_toInt (x := x); have := toInt_lt (x := x)
have := le_toInt (x := y); have := toInt_lt (x := y)
simp only [← decide_or, msb_eq_toInt, decide_beq_decide, toInt_add, decide_not, decide_and,
decide_eq_decide]
rw_mod_cast [Int.bmod_neg_iff (by omega) (by omega)]
simp
omega
/- ### umod -/
theorem getElem_umod {n d : BitVec w} (hi : i < w) :
(n % d)[i]
= if d = 0#w then n[i]
else (divRec w { n := n, d := d } (DivModState.init w)).r[i] := by
by_cases hd : d = 0#w
· simp [hd]
· have := (BitVec.not_le (x := d) (y := 0#w)).mp
rw [← BitVec.umod_eq_divRec (by simp [hd, this])]
simp [hd]
theorem getLsbD_umod {n d : BitVec w}:
(n % d).getLsbD i
= if d = 0#w then n.getLsbD i
else (divRec w { n := n, d := d } (DivModState.init w)).r.getLsbD i := by
by_cases hi : i < w
· simp only [BitVec.getLsbD_eq_getElem hi, getElem_umod]
· simp [show w ≤ i by omega]
theorem getMsbD_umod {n d : BitVec w}:
(n % d).getMsbD i
= if d = 0#w then n.getMsbD i
else (divRec w { n := n, d := d } (DivModState.init w)).r.getMsbD i := by
by_cases hi : i < w
· rw [BitVec.getMsbD_eq_getLsbD, getLsbD_umod]
simp [BitVec.getMsbD_eq_getLsbD, hi]
· simp [show w ≤ i by omega]
end BitVec

View File

@@ -326,10 +326,6 @@ theorem getLsbD_ofNat (n : Nat) (x : Nat) (i : Nat) :
simp only [toNat_eq, toNat_ofNat, Nat.zero_mod] at h
rw [Nat.mod_eq_of_lt (by omega)]
@[simp] theorem toNat_mod_cancel_of_lt {x : BitVec n} (h : n < m) : x.toNat % (2 ^ m) = x.toNat := by
have : 2 ^ n < 2 ^ m := Nat.pow_lt_pow_of_lt (by omega) h
exact Nat.mod_eq_of_lt (by omega)
@[simp] theorem sub_sub_toNat_cancel {x : BitVec w} :
2 ^ w - (2 ^ w - x.toNat) = x.toNat := by
simp [Nat.sub_sub_eq_min, Nat.min_eq_right]
theorem eq_zero_or_eq_one (a : BitVec 1) : a = 0#1 ∨ a = 1#1 := by
theorem toInt_zero {w : Nat} : (0#w).toInt = 0 := by
simp [BitVec.toInt, show 0 < 2^w by exact Nat.two_pow_pos w]
/--
`x.toInt` is less than `2^(w-1)`.
We phrase the fact in terms of `2^w` to prevent a case split on `w=0` when the lemma is used.
-/
theorem toInt_lt {w : Nat} {x : BitVec w} : 2 * x.toInt < 2 ^ w := by
simp only [BitVec.toInt]
rcases w with _|w'
· omega
· rw [← Nat.two_pow_pred_add_two_pow_pred (by omega), Nat.two_mul, Nat.add_sub_cancel]
simp only [Nat.zero_lt_succ, Nat.mul_lt_mul_left, Int.natCast_mul, Int.Nat.cast_ofNat_Int]
norm_cast; omega
/--
`x.toInt` is greater than or equal to `-2^(w-1)`.
We phrase the fact in terms of `2^w` to prevent a case split on `w=0` when the lemma is used.
-/
theorem le_toInt {w : Nat} {x : BitVec w} : -2 ^ w ≤ 2 * x.toInt := by
simp only [BitVec.toInt]
rcases w with _|w'
· omega
· rw [← Nat.two_pow_pred_add_two_pow_pred (by omega), Nat.two_mul, Nat.add_sub_cancel]
simp only [Nat.zero_lt_succ, Nat.mul_lt_mul_left, Int.natCast_mul, Int.Nat.cast_ofNat_Int]
norm_cast; omega
/-! ### slt -/
/--
@@ -3739,11 +3711,6 @@ theorem and_one_eq_setWidth_ofBool_getLsbD {x : BitVec w} :
theorem replicate_zero {x : BitVec w} : x.replicate 0 = 0#0 := by
simp [replicate]
@[simp]
theorem replicate_one {w : Nat} {x : BitVec w} :
(x.replicate 1) = x.cast (by rw [Nat.mul_one]) := by
simp [replicate]
@[simp]
theorem replicate_succ {x : BitVec w} :
x.replicate (n + 1) =
@@ -3751,7 +3718,7 @@ theorem replicate_succ {x : BitVec w} :
simp [replicate]
@[simp]
theorem getLsbD_replicate {n w : Nat} {x : BitVec w} :
theorem getLsbD_replicate {n w : Nat} (x : BitVec w) :
(x.replicate n).getLsbD i =
(decide (i < w * n) && x.getLsbD (i % w)) := by
induction n generalizing x
@@ -3762,17 +3729,17 @@ theorem getLsbD_replicate {n w : Nat} {x : BitVec w} :
· simp only [hi, decide_true, Bool.true_and]
by_cases hi' : i < w * n
· simp [hi', ih]
· simp only [hi', reduceIte]
rw [Nat.sub_mul_eq_mod_of_lt_of_le (by omega) (by omega)]
· simp [hi', decide_false]
rw [Nat.sub_mul_eq_mod_of_lt_of_le] <;> omega
· rw [Nat.mul_succ] at hi
simp only [show ¬i < w * n by omega, decide_false, cond_false, hi, Bool.false_and]
apply BitVec.getLsbD_ge (x := x) (i := i - w * n) (ge := by omega)
@[simp]
theorem getElem_replicate {n w : Nat} {x : BitVec w} (h : i < w * n) :
theorem getElem_replicate {n w : Nat} (x : BitVec w) (h : i < w * n) :
(x.replicate n)[i] = if h' : w = 0 then false else x[i % w]'(@Nat.mod_lt i w (by omega)) := by
simp only [← getLsbD_eq_getElem, getLsbD_replicate]
cases w <;> simp; omega
by_cases h' : w = 0 <;> simp [h'] <;> omega
theorem append_assoc {x₁ : BitVec w₁} {x₂ : BitVec w₂} {x₃ : BitVec w₃} :
(x₁ ++ x₂) ++ x₃ = (x₁ ++ (x₂ ++ x₃)).cast (by omega) := by
@@ -4138,17 +4105,6 @@ theorem reverse_replicate {n : Nat} {x : BitVec w} :
conv => lhs; simp only [replicate_succ']
simp [reverse_append, ih]
@[simp]
theorem getMsbD_replicate {n w : Nat} {x : BitVec w} :
(x.replicate n).getMsbD i = (decide (i < w * n) && x.getMsbD (i % w)) := by
rw [← getLsbD_reverse, reverse_replicate, getLsbD_replicate, getLsbD_reverse]
@[simp]
theorem msb_replicate {n w : Nat} {x : BitVec w} :
(x.replicate n).msb = (decide (0 < n) && x.msb) := by
simp only [BitVec.msb, getMsbD_replicate, Nat.zero_mod]
cases n <;> cases w <;> simp
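For intuition, `replicate` tiles the bit pattern of `x`, so bit `i` of `x.replicate n` is bit `i % w` of `x` (an illustrative sketch, not part of the change):

```lean
-- 0b10 tiled three times is 0b101010 = 42.
#eval ((2#2).replicate 3).toNat        -- 42
-- Bit 3 of the result is bit 3 % 2 = 1 of 2#2, i.e. true.
#eval ((2#2).replicate 3).getLsbD 3    -- true
```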
/-! ### Decidable quantifiers -/
theorem forall_zero_iff {P : BitVec 0 → Prop} :

View File

@@ -22,12 +22,6 @@ instance : Hashable Bool where
| true => 11
| false => 13
instance : Hashable PEmpty.{u} where
hash x := nomatch x
instance : Hashable PUnit.{u} where
hash | .unit => 11
instance [Hashable α] : Hashable (Option α) where
hash
| none => 11

View File

@@ -14,4 +14,3 @@ import Init.Data.Int.LemmasAux
import Init.Data.Int.Order
import Init.Data.Int.Pow
import Init.Data.Int.Cooper
import Init.Data.Int.Linear

View File

@@ -225,7 +225,7 @@ attribute [local simp] subNatNat_self
@[local simp] protected theorem add_right_neg (a : Int) : a + -a = 0 := by
rw [Int.add_comm, Int.add_left_neg]
protected theorem neg_eq_of_add_eq_zero {a b : Int} (h : a + b = 0) : -a = b := by
@[simp] protected theorem neg_eq_of_add_eq_zero {a b : Int} (h : a + b = 0) : -a = b := by
rw [← Int.add_zero (-a), ← h, ← Int.add_assoc, Int.add_left_neg, Int.zero_add]
protected theorem eq_neg_of_eq_neg {a b : Int} (h : a = -b) : b = -a := by
@@ -328,20 +328,24 @@ theorem toNat_sub (m n : Nat) : toNat (m - n) = m - n := by
/- ## add/sub injectivity -/
@[simp]
protected theorem add_left_inj {i j : Int} (k : Int) : (i + k = j + k) ↔ i = j := by
apply Iff.intro
· intro p
rw [Int.add_sub_cancel i k, Int.add_sub_cancel j k, p]
· exact congrArg (· + k)
@[simp]
protected theorem add_right_inj {i j : Int} (k : Int) : (k + i = k + j) ↔ i = j := by
simp [Int.add_comm k, Int.add_left_inj]
simp [Int.add_comm k]
@[simp]
protected theorem sub_right_inj {i j : Int} (k : Int) : (k - i = k - j) ↔ i = j := by
simp [Int.sub_eq_add_neg, Int.neg_inj, Int.add_right_inj]
simp [Int.sub_eq_add_neg, Int.neg_inj]
@[simp]
protected theorem sub_left_inj {i j : Int} (k : Int) : (i - k = j - k) ↔ i = j := by
simp [Int.sub_eq_add_neg, Int.add_left_inj]
simp [Int.sub_eq_add_neg]
/- ## Ring properties -/

View File

@@ -38,11 +38,4 @@ namespace Int
simp [toNat]
split <;> simp_all <;> omega
theorem bmod_neg_iff {m : Nat} {x : Int} (h2 : -m ≤ x) (h1 : x < m) :
(x.bmod m) < 0 ↔ (-(m / 2) ≤ x ∧ x < 0) ∨ ((m + 1) / 2 ≤ x) := by
simp only [Int.bmod_def]
by_cases xpos : 0 ≤ x
· rw [Int.emod_eq_of_lt xpos (by omega)]; omega
· rw [Int.add_emod_self.symm, Int.emod_eq_of_lt (by omega) (by omega)]; omega
end Int

View File

@@ -1,276 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.ByCases
import Init.Data.Prod
import Init.Data.Int.Lemmas
import Init.Data.Int.LemmasAux
import Init.Data.RArray
namespace Int.Linear
/-! Helper definitions and theorems for constructing linear arithmetic proofs. -/
abbrev Var := Nat
abbrev Context := Lean.RArray Int
def Var.denote (ctx : Context) (v : Var) : Int :=
ctx.get v
inductive Expr where
| num (v : Int)
| var (i : Var)
| add (a b : Expr)
| sub (a b : Expr)
| neg (a : Expr)
| mulL (k : Int) (a : Expr)
| mulR (a : Expr) (k : Int)
deriving Inhabited, BEq
def Expr.denote (ctx : Context) : Expr → Int
| .add a b => Int.add (denote ctx a) (denote ctx b)
| .sub a b => Int.sub (denote ctx a) (denote ctx b)
| .neg a => Int.neg (denote ctx a)
| .num k => k
| .var v => v.denote ctx
| .mulL k e => Int.mul k (denote ctx e)
| .mulR e k => Int.mul (denote ctx e) k
inductive Poly where
| num (k : Int)
| add (k : Int) (v : Var) (p : Poly)
deriving BEq
def Poly.denote (ctx : Context) (p : Poly) : Int :=
match p with
| .num k => k
| .add k v p => Int.add (Int.mul k (v.denote ctx)) (denote ctx p)
def Poly.addConst (p : Poly) (k : Int) : Poly :=
match p with
| .num k' => .num (k+k')
| .add k' v' p => .add k' v' (addConst p k)
def Poly.insert (k : Int) (v : Var) (p : Poly) : Poly :=
match p with
| .num k' => .add k v (.num k')
| .add k' v' p =>
bif Nat.blt v v' then
.add k v <| .add k' v' p
else bif Nat.beq v v' then
if Int.add k k' == 0 then
p
else
.add (Int.add k k') v' p
else
.add k' v' (insert k v p)
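To see the cancellation branch in action (a hypothetical check, assuming this file's namespace is in scope; `rfl` holds by reduction of `insert` on these literals):

```lean
-- Inserting 2*x₁ and then (-2)*x₁ cancels back to the constant polynomial.
open Int.Linear in
example : ((Poly.num 0).insert 2 1).insert (-2) 1 = .num 0 := rfl
```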
def Poly.norm (p : Poly) : Poly :=
match p with
| .num k => .num k
| .add k v p => (norm p).insert k v
def Expr.toPoly' (e : Expr) :=
go 1 e (.num 0)
where
go (coeff : Int) : Expr → (Poly → Poly)
| .num k => bif k == 0 then id else (Poly.addConst · (Int.mul coeff k))
| .var v => (.add coeff v ·)
| .add a b => go coeff a ∘ go coeff b
| .sub a b => go coeff a ∘ go (-coeff) b
| .mulL k a
| .mulR a k => bif k == 0 then id else go (Int.mul coeff k) a
| .neg a => go (-coeff) a
def Expr.toPoly (e : Expr) : Poly :=
e.toPoly'.norm
inductive PolyCnstr where
| eq (p : Poly)
| le (p : Poly)
deriving BEq
def PolyCnstr.denote (ctx : Context) : PolyCnstr → Prop
| .eq p => p.denote ctx = 0
| .le p => p.denote ctx ≤ 0
def PolyCnstr.norm : PolyCnstr → PolyCnstr
| .eq p => .eq p.norm
| .le p => .le p.norm
inductive ExprCnstr where
| eq (p₁ p₂ : Expr)
| le (p₁ p₂ : Expr)
deriving Inhabited, BEq
def ExprCnstr.denote (ctx : Context) : ExprCnstr → Prop
| .eq e₁ e₂ => e₁.denote ctx = e₂.denote ctx
| .le e₁ e₂ => e₁.denote ctx ≤ e₂.denote ctx
def ExprCnstr.toPoly : ExprCnstr → PolyCnstr
| .eq e₁ e₂ => .eq (e₁.sub e₂).toPoly.norm
| .le e₁ e₂ => .le (e₁.sub e₂).toPoly.norm
attribute [local simp] Int.add_comm Int.add_assoc Int.add_left_comm Int.add_mul Int.mul_add
attribute [local simp] Poly.insert Poly.denote Poly.norm Poly.addConst
theorem Poly.denote_addConst (ctx : Context) (p : Poly) (k : Int) : (p.addConst k).denote ctx = p.denote ctx + k := by
induction p <;> simp [*]
attribute [local simp] Poly.denote_addConst
theorem Poly.denote_insert (ctx : Context) (k : Int) (v : Var) (p : Poly) :
(p.insert k v).denote ctx = p.denote ctx + k * v.denote ctx := by
induction p <;> simp [*]
next k' v' p' ih =>
by_cases h₁ : Nat.blt v v' <;> simp [*]
by_cases h₂ : Nat.beq v v' <;> simp [*]
by_cases h₃ : k + k' = 0 <;> simp [*, Nat.eq_of_beq_eq_true h₂]
rw [← Int.add_mul]
simp [*]
attribute [local simp] Poly.denote_insert
theorem Poly.denote_norm (ctx : Context) (p : Poly) : p.norm.denote ctx = p.denote ctx := by
induction p <;> simp [*]
attribute [local simp] Poly.denote_norm
private theorem sub_fold (a b : Int) : a.sub b = a - b := rfl
private theorem neg_fold (a : Int) : a.neg = -a := rfl
attribute [local simp] sub_fold neg_fold
attribute [local simp] ExprCnstr.denote ExprCnstr.toPoly PolyCnstr.denote Expr.denote
theorem Expr.denote_toPoly'_go (ctx : Context) (e : Expr) :
(toPoly'.go k e p).denote ctx = k * e.denote ctx + p.denote ctx := by
induction k, e using Expr.toPoly'.go.induct generalizing p with
| case1 k k' =>
simp only [toPoly'.go]
by_cases h : k' == 0
· simp [h, eq_of_beq h]
· simp [h, Var.denote]
| case2 k i => simp [toPoly'.go]
| case3 k a b iha ihb => simp [toPoly'.go, iha, ihb]
| case4 k a b iha ihb =>
simp [toPoly'.go, iha, ihb, Int.mul_sub]
rw [Int.sub_eq_add_neg, Int.neg_mul, Int.add_assoc]
| case5 k k' a ih
| case6 k a k' ih =>
simp only [toPoly'.go]
by_cases h : k' == 0
· simp [h, eq_of_beq h]
· simp [h, cond_false, Int.mul_assoc]
simp at ih
rw [ih]
rw [Int.mul_assoc, Int.mul_comm k']
| case7 k a ih => simp [toPoly'.go, ih]
theorem Expr.denote_toPoly (ctx : Context) (e : Expr) : e.toPoly.denote ctx = e.denote ctx := by
simp [toPoly, toPoly', Expr.denote_toPoly'_go]
attribute [local simp] Expr.denote_toPoly
theorem ExprCnstr.denote_toPoly (ctx : Context) (c : ExprCnstr) : c.toPoly.denote ctx = c.denote ctx := by
cases c <;> simp
· rw [Int.sub_eq_zero]
· constructor
· exact Int.le_of_sub_nonpos
· exact Int.sub_nonpos_of_le
instance : LawfulBEq Poly where
eq_of_beq {a} := by
induction a <;> intro b <;> cases b <;> simp_all! [BEq.beq]
· rename_i k₁ v₁ p₁ k₂ v₂ p₂ ih
intro _ _ h
exact ih h
rfl := by
intro a
induction a <;> simp! [BEq.beq]
· rename_i k v p ih
exact ih
instance : LawfulBEq PolyCnstr where
eq_of_beq {a b} := by
cases a <;> cases b <;> rename_i p₁ p₂ <;> simp_all! [BEq.beq]
· show (p₁ == p₂) = true → _
simp
· show (p₁ == p₂) = true → _
simp
rfl {a} := by
cases a <;> rename_i p <;> show (p == p) = true
<;> simp
theorem Expr.eq_of_toPoly_eq (ctx : Context) (e e' : Expr) (h : e.toPoly == e'.toPoly) : e.denote ctx = e'.denote ctx := by
have h := congrArg (Poly.denote ctx) (eq_of_beq h)
simp [Poly.norm] at h
assumption
theorem ExprCnstr.eq_of_toPoly_eq (ctx : Context) (c c' : ExprCnstr) (h : c.toPoly == c'.toPoly) : c.denote ctx = c'.denote ctx := by
have h := congrArg (PolyCnstr.denote ctx) (eq_of_beq h)
rw [denote_toPoly, denote_toPoly] at h
assumption
theorem ExprCnstr.eq_of_toPoly_eq_var (ctx : Context) (x y : Var) (c : ExprCnstr) (h : c.toPoly == .eq (.add 1 x (.add (-1) y (.num 0))))
: c.denote ctx = (x.denote ctx = y.denote ctx) := by
have h := congrArg (PolyCnstr.denote ctx) (eq_of_beq h)
rw [denote_toPoly] at h
rw [h]; simp
rw [← Int.sub_eq_add_neg, Int.sub_eq_zero]
theorem ExprCnstr.eq_of_toPoly_eq_const (ctx : Context) (x : Var) (k : Int) (c : ExprCnstr) (h : c.toPoly == .eq (.add 1 x (.num (-k))))
: c.denote ctx = (x.denote ctx = k) := by
have h := congrArg (PolyCnstr.denote ctx) (eq_of_beq h)
rw [denote_toPoly] at h
rw [h]; simp
rw [Int.add_comm, Int.sub_eq_add_neg, Int.sub_eq_zero]
def PolyCnstr.isUnsat : PolyCnstr → Bool
| .eq (.num k) => k != 0
| .eq _ => false
| .le (.num k) => k > 0
| .le _ => false
theorem PolyCnstr.eq_false_of_isUnsat (ctx : Context) (p : PolyCnstr) : p.isUnsat → p.denote ctx = False := by
unfold isUnsat <;> split <;> simp <;> try contradiction
apply Int.not_le_of_gt
theorem ExprCnstr.eq_false_of_isUnsat (ctx : Context) (c : ExprCnstr) (h : c.toPoly.isUnsat) : c.denote ctx = False := by
have := PolyCnstr.eq_false_of_isUnsat ctx (c.toPoly) h
rw [ExprCnstr.denote_toPoly] at this
assumption
def PolyCnstr.isValid : PolyCnstr → Bool
| .eq (.num k) => k == 0
| .eq _ => false
| .le (.num k) => k ≤ 0
| .le _ => false
theorem PolyCnstr.eq_true_of_isValid (ctx : Context) (p : PolyCnstr) : p.isValid → p.denote ctx = True := by
unfold isValid <;> split <;> simp
theorem ExprCnstr.eq_true_of_isValid (ctx : Context) (c : ExprCnstr) (h : c.toPoly.isValid) : c.denote ctx = True := by
have := PolyCnstr.eq_true_of_isValid ctx (c.toPoly) h
rw [ExprCnstr.denote_toPoly] at this
assumption
end Int.Linear
theorem Int.not_le_eq (a b : Int) : (¬a ≤ b) = (b + 1 ≤ a) := by
apply propext; constructor
· intro h; have h := Int.add_one_le_of_lt (Int.lt_of_not_ge h); assumption
· intro h; apply Int.not_le_of_gt; exact h
theorem Int.not_ge_eq (a b : Int) : (¬a ≥ b) = (a + 1 ≤ b) := by
apply Int.not_le_eq
theorem Int.not_lt_eq (a b : Int) : (¬a < b) = (b ≤ a) := by
apply propext; constructor
· intro h; simp [Int.not_lt] at h; assumption
· intro h; apply Int.not_le_of_gt; simp [Int.lt_add_one_iff, *]
theorem Int.not_gt_eq (a b : Int) : (¬a > b) = (a ≤ b) := by
apply Int.not_lt_eq
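Since each of these lemmas is stated as a propositional equality rather than an `Iff`, it can be used directly with `rw`; a usage sketch (assuming the lemmas above are in scope):

```lean
-- Rewriting a negated ≤ hypothesis into the equivalent strict bound.
example (a b : Int) (h : ¬ a ≤ b) : b + 1 ≤ a := by
  rw [Int.not_le_eq] at h
  exact h
```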

View File

@@ -128,7 +128,7 @@ Applies the monadic function `f` on every element `x` in the list, left-to-right
results `y` for which `f x` returns `some y`.
-/
@[inline]
def filterMapM {m : Type u → Type v} [Monad m] {α : Type w} {β : Type u} (f : α → m (Option β)) (as : List α) : m (List β) :=
def filterMapM {m : Type u → Type v} [Monad m] {α β : Type u} (f : α → m (Option β)) (as : List α) : m (List β) :=
let rec @[specialize] loop
| [], bs => pure bs.reverse
| a :: as, bs => do
@@ -161,7 +161,7 @@ foldlM f x₀ [a, b, c] = do
```
-/
@[specialize]
def foldlM {m : Type u → Type v} [Monad m] {s : Type u} {α : Type w} : (f : s → α → m s) → (init : s) → List α → m s
protected def foldlM {m : Type u → Type v} [Monad m] {s : Type u} {α : Type w} : (f : s → α → m s) → (init : s) → List α → m s
| _, s, [] => pure s
| f, s, a :: as => do
let s' ← f s a

View File

@@ -11,9 +11,6 @@ import Init.Data.List.Attach
# Lemmas about `List.mapM` and `List.forM`.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat

View File

@@ -52,7 +52,6 @@ theorem length_insertIdx_of_le_length (h : n ≤ length as) : length (insertIdx
theorem length_insertIdx_of_length_lt (h : length as < n) : length (insertIdx n a as) = length as := by
simp [length_insertIdx, h]
@[simp]
theorem eraseIdx_insertIdx (n : Nat) (l : List α) : (l.insertIdx n a).eraseIdx n = l := by
rw [eraseIdx_eq_modifyTailIdx, insertIdx, modifyTailIdx_modifyTailIdx_self]
exact modifyTailIdx_id _ _
@@ -151,7 +150,7 @@ theorem getElem_insertIdx_self {l : List α} {x : α} {n : Nat} (hn : n < (inser
· simp only [insertIdx_succ_cons, length_cons, length_insertIdx, Nat.add_lt_add_iff_right] at hn ih
simpa using ih hn
theorem getElem_insertIdx_of_gt {l : List α} {x : α} {n k : Nat} (hn : n < k)
theorem getElem_insertIdx_of_ge {l : List α} {x : α} {n k : Nat} (hn : n + 1 ≤ k)
(hk : k < (insertIdx n x l).length) :
(insertIdx n x l)[k] = l[k - 1]'(by simp [length_insertIdx] at hk; split at hk <;> omega) := by
induction l generalizing n k with
@@ -178,9 +177,6 @@ theorem getElem_insertIdx_of_gt {l : List α} {x : α} {n k : Nat} (hn : n < k)
| zero => omega
| succ k => simp
@[deprecated getElem_insertIdx_of_gt (since := "2025-02-04")]
abbrev getElem_insertIdx_of_ge := @getElem_insertIdx_of_gt
theorem getElem_insertIdx {l : List α} {x : α} {n k : Nat} (h : k < (insertIdx n x l).length) :
(insertIdx n x l)[k] =
if h₁ : k < n then
@@ -195,7 +191,7 @@ theorem getElem_insertIdx {l : List α} {x : α} {n k : Nat} (h : k < (insertIdx
· split <;> rename_i h₂
· subst h₂
rw [getElem_insertIdx_self h]
· rw [getElem_insertIdx_of_gt (by omega)]
· rw [getElem_insertIdx_of_ge (by omega)]
theorem getElem?_insertIdx {l : List α} {x : α} {n k : Nat} :
(insertIdx n x l)[k]? =
@@ -237,13 +233,10 @@ theorem getElem?_insertIdx_self {l : List α} {x : α} {n : Nat} :
rw [getElem?_insertIdx, if_neg (by omega)]
simp
theorem getElem?_insertIdx_of_gt {l : List α} {x : α} {n k : Nat} (h : n < k) :
theorem getElem?_insertIdx_of_ge {l : List α} {x : α} {n k : Nat} (h : n + 1 ≤ k) :
(insertIdx n x l)[k]? = l[k - 1]? := by
rw [getElem?_insertIdx, if_neg (by omega), if_neg (by omega)]
@[deprecated getElem?_insertIdx_of_gt (since := "2025-02-04")]
abbrev getElem?_insertIdx_of_ge := @getElem?_insertIdx_of_gt
end InsertIdx
end List

View File

@@ -16,8 +16,6 @@ These are in a separate file from most of the list lemmas
as they required importing more lemmas about natural numbers, and use `omega`.
-/
set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -29,12 +27,12 @@ open Nat
| succ n, [] => by simp [Nat.min_zero]
| succ n, _ :: l => by simp [Nat.succ_min_succ, length_take]
theorem length_take_le (i) (l : List α) : length (take i l) ≤ i := by simp [Nat.min_le_left]
theorem length_take_le (n) (l : List α) : length (take n l) ≤ n := by simp [Nat.min_le_left]
theorem length_take_le' (i) (l : List α) : length (take i l) ≤ l.length :=
theorem length_take_le' (n) (l : List α) : length (take n l) ≤ l.length :=
by simp [Nat.min_le_right]
theorem length_take_of_le (h : i ≤ length l) : length (take i l) = i := by simp [Nat.min_eq_left h]
theorem length_take_of_le (h : n ≤ length l) : length (take n l) = n := by simp [Nat.min_eq_left h]
/-- The `i`-th element of a list coincides with the `i`-th element of any of its prefixes of
length `> i`. Version designed to rewrite from the big list to the small list. -/
@@ -49,30 +47,30 @@ length `> i`. Version designed to rewrite from the small list to the big list. -
L[i]'(Nat.lt_of_lt_of_le h (length_take_le' _ _)) := by
rw [length_take, Nat.lt_min] at h; rw [getElem_take' L _ h.1]
theorem getElem?_take_eq_none {l : List α} {i j : Nat} (h : i ≤ j) :
(l.take i)[j]? = none :=
theorem getElem?_take_eq_none {l : List α} {n m : Nat} (h : n ≤ m) :
(l.take n)[m]? = none :=
getElem?_eq_none <| Nat.le_trans (length_take_le _ _) h
theorem getElem?_take {l : List α} {i j : Nat} :
(l.take i)[j]? = if j < i then l[j]? else none := by
theorem getElem?_take {l : List α} {n m : Nat} :
(l.take n)[m]? = if m < n then l[m]? else none := by
split
· next h => exact getElem?_take_of_lt h
· next h => exact getElem?_take_eq_none (Nat.le_of_not_lt h)
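The two branches of `getElem?_take` can be checked on a concrete list (illustrative instances, closed by `rfl`):

```lean
-- Index below the `take` bound: the element survives.
example : ([10, 20, 30].take 2)[1]? = some 20 := rfl
-- Index at or above the bound: the prefix is too short.
example : ([10, 20, 30].take 2)[2]? = none := rfl
```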
theorem head?_take {l : List α} {i : Nat} :
(l.take i).head? = if i = 0 then none else l.head? := by
theorem head?_take {l : List α} {n : Nat} :
(l.take n).head? = if n = 0 then none else l.head? := by
simp [head?_eq_getElem?, getElem?_take]
split
· rw [if_neg (by omega)]
· rw [if_pos (by omega)]
theorem head_take {l : List α} {i : Nat} (h : l.take i ≠ []) :
(l.take i).head h = l.head (by simp_all) := by
theorem head_take {l : List α} {n : Nat} (h : l.take n ≠ []) :
(l.take n).head h = l.head (by simp_all) := by
apply Option.some_inj.1
rw [← head?_eq_head, ← head?_eq_head, head?_take, if_neg]
simp_all
theorem getLast?_take {l : List α} : (l.take i).getLast? = if i = 0 then none else l[i - 1]?.or l.getLast? := by
theorem getLast?_take {l : List α} : (l.take n).getLast? = if n = 0 then none else l[n - 1]?.or l.getLast? := by
rw [getLast?_eq_getElem?, getElem?_take, length_take]
split
· rw [if_neg (by omega)]
@@ -85,8 +83,8 @@ theorem getLast?_take {l : List α} : (l.take i).getLast? = if i = 0 then none e
· rw [if_pos]
omega
theorem getLast_take {l : List α} (h : l.take i ≠ []) :
(l.take i).getLast h = l[i - 1]?.getD (l.getLast (by simp_all)) := by
theorem getLast_take {l : List α} (h : l.take n ≠ []) :
(l.take n).getLast h = l[n - 1]?.getD (l.getLast (by simp_all)) := by
rw [getLast_eq_getElem, getElem_take]
simp [length_take, Nat.min_def]
simp at h
@@ -96,49 +94,46 @@ theorem getLast_take {l : List α} (h : l.take i ≠ []) :
· rw [getElem?_eq_none (by omega), getLast_eq_getElem]
simp
theorem take_take : ∀ (i j) (l : List α), take i (take j l) = take (min i j) l
theorem take_take : ∀ (n m) (l : List α), take n (take m l) = take (min n m) l
| n, 0, l => by rw [Nat.min_zero, take_zero, take_nil]
| 0, m, l => by rw [Nat.zero_min, take_zero, take_zero]
| succ n, succ m, nil => by simp only [take_nil]
| succ n, succ m, a :: l => by
simp only [take, succ_min_succ, take_take n m l]
theorem take_set_of_le (a : α) {i j : Nat} (l : List α) (h : j ≤ i) :
(l.set i a).take j = l.take j :=
theorem take_set_of_lt (a : α) {n m : Nat} (l : List α) (h : m < n) :
(l.set n a).take m = l.take m :=
List.ext_getElem? fun i => by
rw [getElem?_take, getElem?_take]
split
· next h' => rw [getElem?_set_ne (by omega)]
· rfl
@[deprecated take_set_of_le (since := "2025-02-04")]
abbrev take_set_of_lt := @take_set_of_le
@[simp] theorem take_replicate (a : α) : ∀ i j : Nat, take i (replicate j a) = replicate (min i j) a
@[simp] theorem take_replicate (a : α) : ∀ n m : Nat, take n (replicate m a) = replicate (min n m) a
| n, 0 => by simp [Nat.min_zero]
| 0, m => by simp [Nat.zero_min]
| succ n, succ m => by simp [replicate_succ, succ_min_succ, take_replicate]
@[simp] theorem drop_replicate (a : α) : ∀ i j : Nat, drop i (replicate j a) = replicate (j - i) a
@[simp] theorem drop_replicate (a : α) : ∀ n m : Nat, drop n (replicate m a) = replicate (m - n) a
| n, 0 => by simp
| 0, m => by simp
| succ n, succ m => by simp [replicate_succ, succ_sub_succ, drop_replicate]
/-- Taking the first `i` elements in `l₁ ++ l₂` is the same as appending the first `i` elements
/-- Taking the first `n` elements in `l₁ ++ l₂` is the same as appending the first `n` elements
of `l₁` to the first `n - l₁.length` elements of `l₂`. -/
theorem take_append_eq_append_take {l₁ l₂ : List α} {i : Nat} :
take i (l₁ ++ l₂) = take i l₁ ++ take (i - l₁.length) l₂ := by
induction l₁ generalizing i
theorem take_append_eq_append_take {l₁ l₂ : List α} {n : Nat} :
take n (l₁ ++ l₂) = take n l₁ ++ take (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases i
· cases n
· simp [*]
· simp only [cons_append, take_succ_cons, length_cons, succ_eq_add_one, cons.injEq,
append_cancel_left_eq, true_and, *]
congr 1
omega
theorem take_append_of_le_length {l₁ l₂ : List α} {i : Nat} (h : i ≤ l₁.length) :
(l₁ ++ l₂).take i = l₁.take i := by
theorem take_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).take n = l₁.take n := by
simp [take_append_eq_append_take, Nat.sub_eq_zero_of_le h]
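Concretely, `take_append_eq_append_take` and `take_append_of_le_length` say (illustrative instances, checkable by `rfl`):

```lean
-- `take 3` crosses the boundary: all of `l₁` plus `3 - l₁.length = 1` element of `l₂`.
example : ([1, 2] ++ [3, 4, 5]).take 3 = [1, 2] ++ [3] := rfl
-- `take` within `l₁` leaves `l₂` untouched.
example : ([1, 2] ++ [3, 4, 5]).take 2 = [1, 2].take 2 := rfl
```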
/-- Taking the first `l₁.length + i` elements in `l₁ ++ l₂` is the same as appending the first
@@ -149,33 +144,29 @@ theorem take_append {l₁ l₂ : List α} (i : Nat) :
@[simp]
theorem take_eq_take :
∀ {l : List α} {i j : Nat}, l.take i = l.take j ↔ min i l.length = min j l.length
| [], i, j => by simp [Nat.min_zero]
∀ {l : List α} {m n : Nat}, l.take m = l.take n ↔ min m l.length = min n l.length
| [], m, n => by simp [Nat.min_zero]
| _ :: xs, 0, 0 => by simp
| x :: xs, i + 1, 0 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, 0, j + 1 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, i + 1, j + 1 => by simp [succ_min_succ, take_eq_take]
| x :: xs, m + 1, 0 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, 0, n + 1 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, m + 1, n + 1 => by simp [succ_min_succ, take_eq_take]
theorem take_add (l : List α) (i j : Nat) : l.take (i + j) = l.take i ++ (l.drop i).take j := by
suffices take (i + j) (take i l ++ drop i l) = take i l ++ take j (drop i l) by
theorem take_add (l : List α) (m n : Nat) : l.take (m + n) = l.take m ++ (l.drop m).take n := by
suffices take (m + n) (take m l ++ drop m l) = take m l ++ take n (drop m l) by
rw [take_append_drop] at this
assumption
rw [take_append_eq_append_take, take_of_length_le, append_right_inj]
· simp only [take_eq_take, length_take, length_drop]
omega
apply Nat.le_trans (m := i)
apply Nat.le_trans (m := m)
· apply length_take_le
· apply Nat.le_add_right
theorem take_one {l : List α} : l.take 1 = l.head?.toList := by
induction l <;> simp
theorem take_eq_append_getElem_of_pos {i} {l : List α} (h₁ : 0 < i) (h : i < l.length) : l.take i = l.take (i - 1) ++ [l[i - 1]] :=
match i, h₁ with
| i + 1, _ => take_succ_eq_append_getElem (by omega)
theorem dropLast_take {i : Nat} {l : List α} (h : i < l.length) :
(l.take i).dropLast = l.take (i - 1) := by
theorem dropLast_take {n : Nat} {l : List α} (h : n < l.length) :
(l.take n).dropLast = l.take (n - 1) := by
simp only [dropLast_eq_take, length_take, Nat.le_of_lt h, Nat.min_eq_left, take_take, sub_le]
@[deprecated map_eq_append_iff (since := "2024-09-05")] abbrev map_eq_append_split := @map_eq_append_iff
@@ -194,17 +185,17 @@ theorem take_eq_dropLast {l : List α} {i : Nat} (h : i + 1 = l.length) :
rw [ih]
simpa using h
theorem take_prefix_take_left (l : List α) {i j : Nat} (h : i ≤ j) : take i l <+: take j l := by
theorem take_prefix_take_left (l : List α) {m n : Nat} (h : m ≤ n) : take m l <+: take n l := by
rw [isPrefix_iff]
intro i w
rw [getElem?_take_of_lt, getElem_take, getElem?_eq_getElem]
simp only [length_take] at w
exact Nat.lt_of_lt_of_le (Nat.lt_of_lt_of_le w (Nat.min_le_left _ _)) h
theorem take_sublist_take_left (l : List α) {i j : Nat} (h : i ≤ j) : take i l <+ take j l :=
theorem take_sublist_take_left (l : List α) {m n : Nat} (h : m ≤ n) : take m l <+ take n l :=
(take_prefix_take_left l h).sublist
theorem take_subset_take_left (l : List α) {i j : Nat} (h : i ≤ j) : take i l ⊆ take j l :=
theorem take_subset_take_left (l : List α) {m n : Nat} (h : m ≤ n) : take m l ⊆ take n l :=
(take_sublist_take_left l h).subset
/-! ### drop -/
@@ -244,17 +235,17 @@ theorem getElem?_drop (L : List α) (i j : Nat) : (L.drop i)[j]? = L[i + j]? :=
apply Nat.lt_sub_of_add_lt h
theorem mem_take_iff_getElem {l : List α} {a : α} :
a ∈ l.take i ↔ ∃ (j : Nat) (hm : j < min i l.length), l[j] = a := by
a ∈ l.take n ↔ ∃ (i : Nat) (hm : i < min n l.length), l[i] = a := by
rw [mem_iff_getElem]
constructor
· rintro ⟨j, hm, rfl⟩
· rintro ⟨i, hm, rfl⟩
simp at hm
refine ⟨j, by omega, by rw [getElem_take]⟩
· rintro ⟨j, hm, rfl⟩
refine ⟨j, by simpa, by rw [getElem_take]⟩
refine ⟨i, by omega, by rw [getElem_take]⟩
· rintro ⟨i, hm, rfl⟩
refine ⟨i, by simpa, by rw [getElem_take]⟩
theorem mem_drop_iff_getElem {l : List α} {a : α} :
a ∈ l.drop i ↔ ∃ (j : Nat) (hm : j + i < l.length), l[i + j] = a := by
a ∈ l.drop n ↔ ∃ (i : Nat) (hm : i + n < l.length), l[n + i] = a := by
rw [mem_iff_getElem]
constructor
· rintro ⟨i, hm, rfl⟩
@@ -263,16 +254,16 @@ theorem mem_drop_iff_getElem {l : List α} {a : α} :
· rintro ⟨i, hm, rfl⟩
refine ⟨i, by simp; omega, by rw [getElem_drop]⟩
@[simp] theorem head?_drop (l : List α) (i : Nat) :
(l.drop i).head? = l[i]? := by
@[simp] theorem head?_drop (l : List α) (n : Nat) :
(l.drop n).head? = l[n]? := by
rw [head?_eq_getElem?, getElem?_drop, Nat.add_zero]
@[simp] theorem head_drop {l : List α} {i : Nat} (h : l.drop i ≠ []) :
(l.drop i).head h = l[i]'(by simp_all) := by
have w : i < l.length := length_lt_of_drop_ne_nil h
@[simp] theorem head_drop {l : List α} {n : Nat} (h : l.drop n ≠ []) :
(l.drop n).head h = l[n]'(by simp_all) := by
have w : n < l.length := length_lt_of_drop_ne_nil h
simp [getElem?_eq_getElem, h, w, head_eq_iff_head?_eq_some]
theorem getLast?_drop {l : List α} : (l.drop i).getLast? = if l.length ≤ i then none else l.getLast? := by
theorem getLast?_drop {l : List α} : (l.drop n).getLast? = if l.length ≤ n then none else l.getLast? := by
rw [getLast?_eq_getElem?, getElem?_drop]
rw [length_drop]
split
@@ -281,8 +272,8 @@ theorem getLast?_drop {l : List α} : (l.drop i).getLast? = if l.length ≤ i th
congr
omega
@[simp] theorem getLast_drop {l : List α} (h : l.drop i ≠ []) :
(l.drop i).getLast h = l.getLast (ne_nil_of_length_pos (by simp at h; omega)) := by
@[simp] theorem getLast_drop {l : List α} (h : l.drop n ≠ []) :
(l.drop n).getLast h = l.getLast (ne_nil_of_length_pos (by simp at h; omega)) := by
simp only [ne_eq, drop_eq_nil_iff] at h
apply Option.some_inj.1
simp only [← getLast?_eq_getLast, getLast?_drop, ite_eq_right_iff]
@@ -300,20 +291,20 @@ theorem drop_length_cons {l : List α} (h : l ≠ []) (a : α) :
rw [getLast_cons h₁]
exact ih h₁ y
/-- Dropping the elements up to `i` in `l₁ ++ l₂` is the same as dropping the elements up to `i`
in `l₁`, dropping the elements up to `i - l₁.length` in `l₂`, and appending them. -/
theorem drop_append_eq_append_drop {l₁ l₂ : List α} {i : Nat} :
drop i (l₁ ++ l₂) = drop i l₁ ++ drop (i - l₁.length) l₂ := by
induction l₁ generalizing i
/-- Dropping the elements up to `n` in `l₁ ++ l₂` is the same as dropping the elements up to `n`
in `l₁`, dropping the elements up to `n - l₁.length` in `l₂`, and appending them. -/
theorem drop_append_eq_append_drop {l₁ l₂ : List α} {n : Nat} :
drop n (l₁ ++ l₂) = drop n l₁ ++ drop (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases i
· cases n
· simp [*]
· simp only [cons_append, drop_succ_cons, length_cons, succ_eq_add_one, append_cancel_left_eq, *]
congr 1
omega
theorem drop_append_of_le_length {l₁ l₂ : List α} {i : Nat} (h : i ≤ l₁.length) :
(l₁ ++ l₂).drop i = l₁.drop i ++ l₂ := by
theorem drop_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).drop n = l₁.drop n ++ l₂ := by
simp [drop_append_eq_append_drop, Nat.sub_eq_zero_of_le h]
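The `drop` lemmas behave dually to their `take` counterparts (illustrative instances, checkable by `rfl`):

```lean
-- `drop 3` consumes all of `l₁` and `3 - l₁.length = 1` element of `l₂`.
example : ([1, 2] ++ [3, 4, 5]).drop 3 = [4, 5] := rfl
-- Dropping within `l₁` keeps `l₂` whole.
example : ([1, 2] ++ [3, 4, 5]).drop 1 = [1, 2].drop 1 ++ [3, 4, 5] := rfl
```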
/-- Dropping the elements up to `l₁.length + i` in `l₁ + l₂` is the same as dropping the elements
@@ -323,24 +314,24 @@ theorem drop_append {l₁ l₂ : List α} (i : Nat) : drop (l₁.length + i) (l
rw [drop_append_eq_append_drop, drop_eq_nil_of_le] <;>
simp [Nat.add_sub_cancel_left, Nat.le_add_right]
theorem set_eq_take_append_cons_drop (l : List α) (i : Nat) (a : α) :
l.set i a = if i < l.length then l.take i ++ a :: l.drop (i + 1) else l := by
theorem set_eq_take_append_cons_drop (l : List α) (n : Nat) (a : α) :
l.set n a = if n < l.length then l.take n ++ a :: l.drop (n + 1) else l := by
split <;> rename_i h
· ext1 j
by_cases h' : j < i
· ext1 m
by_cases h' : m < n
· rw [getElem?_append_left (by simp [length_take]; omega), getElem?_set_ne (by omega),
getElem?_take_of_lt h']
· by_cases h'' : j = i
· by_cases h'' : m = n
· subst h''
rw [getElem?_set_self _, getElem?_append_right, length_take,
Nat.min_eq_left (by omega), Nat.sub_self, getElem?_cons_zero]
rw [length_take]
exact Nat.min_le_left j l.length
· have h''' : i < j := by omega
exact Nat.min_le_left m l.length
· have h''' : n < m := by omega
rw [getElem?_set_ne (by omega), getElem?_append_right, length_take,
Nat.min_eq_left (by omega)]
· obtain ⟨k, rfl⟩ := Nat.exists_eq_add_of_lt h'''
have p : i + k + 1 - i = k + 1 := by omega
have p : n + k + 1 - n = k + 1 := by omega
rw [p]
rw [getElem?_cons_succ, getElem?_drop]
congr 1
@@ -350,39 +341,35 @@ theorem set_eq_take_append_cons_drop (l : List α) (i : Nat) (a : α) :
· rw [set_eq_of_length_le]
omega
theorem exists_of_set {i : Nat} {a' : α} {l : List α} (h : i < l.length) :
∃ l₁ l₂, l = l₁ ++ l[i] :: l₂ ∧ l₁.length = i ∧ l.set i a' = l₁ ++ a' :: l₂ := by
refine ⟨l.take i, l.drop (i + 1), by simp, length_take_of_le (Nat.le_of_lt h), ?_⟩
theorem exists_of_set {n : Nat} {a' : α} {l : List α} (h : n < l.length) :
∃ l₁ l₂, l = l₁ ++ l[n] :: l₂ ∧ l₁.length = n ∧ l.set n a' = l₁ ++ a' :: l₂ := by
refine ⟨l.take n, l.drop (n + 1), by simp, length_take_of_le (Nat.le_of_lt h), ?_⟩
simp [set_eq_take_append_cons_drop, h]
theorem drop_set_of_lt (a : α) {i j : Nat} (l : List α)
(hnm : i < j) : drop j (l.set i a) = l.drop j :=
theorem drop_set_of_lt (a : α) {n m : Nat} (l : List α)
(hnm : n < m) : drop m (l.set n a) = l.drop m :=
ext_getElem? fun k => by simpa only [getElem?_drop] using getElem?_set_ne (by omega)
theorem drop_take : ∀ (i j : Nat) (l : List α), drop i (take j l) = take (j - i) (drop i l)
theorem drop_take : ∀ (m n : Nat) (l : List α), drop n (take m l) = take (m - n) (drop n l)
| 0, _, _ => by simp
| _, 0, _ => by simp
| _, _, [] => by simp
| i+1, j+1, h :: t => by
simp [take_succ_cons, drop_succ_cons, drop_take i j t]
| m+1, n+1, h :: t => by
simp [take_succ_cons, drop_succ_cons, drop_take m n t]
congr 1
omega
@[simp] theorem drop_take_self : drop i (take i l) = [] := by
rw [drop_take]
simp
theorem take_reverse {α} {xs : List α} {i : Nat} :
xs.reverse.take i = (xs.drop (xs.length - i)).reverse := by
by_cases h : i ≤ xs.length
· induction xs generalizing i <;>
theorem take_reverse {α} {xs : List α} {n : Nat} :
xs.reverse.take n = (xs.drop (xs.length - n)).reverse := by
by_cases h : n ≤ xs.length
· induction xs generalizing n <;>
simp only [reverse_cons, drop, reverse_nil, Nat.zero_sub, length, take_nil]
next xs_hd xs_tl xs_ih =>
cases Nat.lt_or_eq_of_le h with
| inl h' =>
have h' := Nat.le_of_succ_le_succ h'
rw [take_append_of_le_length, xs_ih h']
rw [show xs_tl.length + 1 - i = succ (xs_tl.length - i) from _, drop]
rw [show xs_tl.length + 1 - n = succ (xs_tl.length - n) from _, drop]
· rwa [succ_eq_add_one, Nat.sub_add_comm]
· rwa [length_reverse]
| inr h' =>
@@ -392,14 +379,14 @@ theorem take_reverse {α} {xs : List α} {i : Nat} :
rw [this, take_length, reverse_cons]
rw [length_append, length_reverse]
rfl
· have w : xs.length - i = 0 := by omega
· have w : xs.length - n = 0 := by omega
rw [take_of_length_le, w, drop_zero]
simp
omega
theorem drop_reverse {α} {xs : List α} {i : Nat} :
xs.reverse.drop i = (xs.take (xs.length - i)).reverse := by
by_cases h : i ≤ xs.length
theorem drop_reverse {α} {xs : List α} {n : Nat} :
xs.reverse.drop n = (xs.take (xs.length - n)).reverse := by
by_cases h : n ≤ xs.length
· conv =>
rhs
rw [← reverse_reverse xs]
@@ -409,47 +396,47 @@ theorem drop_reverse {α} {xs : List α} {i : Nat} :
· simp only [length_reverse, reverse_reverse] at *
congr
omega
· have w : xs.length - i = 0 := by omega
· have w : xs.length - n = 0 := by omega
rw [drop_of_length_le, w, take_zero, reverse_nil]
simp
omega
theorem reverse_take {l : List α} {i : Nat} :
(l.take i).reverse = l.reverse.drop (l.length - i) := by
by_cases h : i ≤ l.length
theorem reverse_take {l : List α} {n : Nat} :
(l.take n).reverse = l.reverse.drop (l.length - n) := by
by_cases h : n ≤ l.length
· rw [drop_reverse]
congr
omega
· have w : l.length - i = 0 := by omega
· have w : l.length - n = 0 := by omega
rw [w, drop_zero, take_of_length_le]
omega
theorem reverse_drop {l : List α} {i : Nat} :
(l.drop i).reverse = l.reverse.take (l.length - i) := by
by_cases h : i ≤ l.length
theorem reverse_drop {l : List α} {n : Nat} :
(l.drop n).reverse = l.reverse.take (l.length - n) := by
by_cases h : n ≤ l.length
· rw [take_reverse]
congr
omega
· have w : l.length - i = 0 := by omega
· have w : l.length - n = 0 := by omega
rw [w, take_zero, drop_of_length_le, reverse_nil]
omega
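`take_reverse` and `drop_reverse` in action on a small list (illustrative instances, checkable by `rfl`):

```lean
-- Taking from the reversed list is dropping a prefix, then reversing.
example : [1, 2, 3, 4].reverse.take 2 = ([1, 2, 3, 4].drop (4 - 2)).reverse := rfl
-- Dually, dropping from the reversed list is taking a prefix, then reversing.
example : [1, 2, 3, 4].reverse.drop 3 = ([1, 2, 3, 4].take (4 - 3)).reverse := rfl
```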
theorem take_add_one {l : List α} {i : Nat} :
l.take (i + 1) = l.take i ++ l[i]?.toList := by
theorem take_add_one {l : List α} {n : Nat} :
l.take (n + 1) = l.take n ++ l[n]?.toList := by
simp [take_add, take_one]
theorem drop_eq_getElem?_toList_append {l : List α} {i : Nat} :
l.drop i = l[i]?.toList ++ l.drop (i + 1) := by
induction l generalizing i with
theorem drop_eq_getElem?_toList_append {l : List α} {n : Nat} :
l.drop n = l[n]?.toList ++ l.drop (n + 1) := by
induction l generalizing n with
| nil => simp
| cons hd tl ih =>
cases i
cases n
· simp
· simp only [drop_succ_cons, getElem?_cons_succ]
rw [ih]
theorem drop_sub_one {l : List α} {i : Nat} (h : 0 < i) :
l.drop (i - 1) = l[i - 1]?.toList ++ l.drop i := by
theorem drop_sub_one {l : List α} {n : Nat} (h : 0 < n) :
l.drop (n - 1) = l[n - 1]?.toList ++ l.drop n := by
rw [drop_eq_getElem?_toList_append]
congr
omega
@@ -462,24 +449,24 @@ theorem false_of_mem_take_findIdx {xs : List α} {p : α → Bool} (h : x ∈ xs
obtain ⟨i, h, rfl⟩ := h
exact not_of_lt_findIdx (by omega)
@[simp] theorem findIdx_take {xs : List α} {i : Nat} {p : α → Bool} :
(xs.take i).findIdx p = min i (xs.findIdx p) := by
induction xs generalizing i with
@[simp] theorem findIdx_take {xs : List α} {n : Nat} {p : α → Bool} :
(xs.take n).findIdx p = min n (xs.findIdx p) := by
induction xs generalizing n with
| nil => simp
| cons x xs ih =>
cases i
cases n
· simp
· simp only [take_succ_cons, findIdx_cons, ih, cond_eq_if]
split
· simp
· rw [Nat.add_min_add_right]
@[simp] theorem findIdx?_take {xs : List α} {i : Nat} {p : α → Bool} :
(xs.take i).findIdx? p = (xs.findIdx? p).bind (Option.guard (fun j => j < i)) := by
induction xs generalizing i with
@[simp] theorem findIdx?_take {xs : List α} {n : Nat} {p : α → Bool} :
(xs.take n).findIdx? p = (xs.findIdx? p).bind (Option.guard (fun i => i < n)) := by
induction xs generalizing n with
| nil => simp
| cons x xs ih =>
cases i
cases n
· simp
· simp only [take_succ_cons, findIdx?_cons]
split

View File

@@ -11,9 +11,6 @@ import Init.Data.Nat.Div.Basic
-/
set_option linter.missingDocs true -- keep it documented
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
open Decidable List
/--

View File

@@ -11,9 +11,6 @@ import Init.Data.Fin.Fold
# Theorems about `List.ofFn`
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
/--

View File

@@ -11,9 +11,6 @@ import Init.Data.List.Attach
# Lemmas about `List.Pairwise` and `List.Nodup`.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -172,7 +169,7 @@ theorem pairwise_flatten {L : List (List α)} :
simp only [flatten, pairwise_append, IH, mem_flatten, exists_imp, and_imp, forall_mem_cons,
pairwise_cons, and_assoc, and_congr_right_iff]
rw [and_comm, and_congr_left_iff]
intros; exact ⟨fun h l' b c d e => h c d e l' b, fun h c d e l' b => h l' b c d e⟩
intros; exact ⟨fun h a b c d e => h c d e a b, fun h c d e a b => h a b c d e⟩
@[deprecated pairwise_flatten (since := "2024-10-14")] abbrev pairwise_join := @pairwise_flatten
@@ -209,10 +206,10 @@ theorem pairwise_reverse {l : List α} :
simp
· exact ⟨fun _ => h, Or.inr h⟩
theorem Pairwise.drop {l : List α} {i : Nat} (h : List.Pairwise R l) : List.Pairwise R (l.drop i) :=
theorem Pairwise.drop {l : List α} {n : Nat} (h : List.Pairwise R l) : List.Pairwise R (l.drop n) :=
h.sublist (drop_sublist _ _)
theorem Pairwise.take {l : List α} {i : Nat} (h : List.Pairwise R l) : List.Pairwise R (l.take i) :=
theorem Pairwise.take {l : List α} {n : Nat} (h : List.Pairwise R l) : List.Pairwise R (l.take n) :=
h.sublist (take_sublist _ _)
theorem pairwise_iff_forall_sublist : l.Pairwise R ↔ (∀ {a b}, [a,b] <+ l → R a b) := by
@@ -234,9 +231,9 @@ theorem pairwise_iff_forall_sublist : l.Pairwise R ↔ (∀ {a b}, [a,b] <+ l
apply h; exact hab.cons _
theorem Pairwise.rel_of_mem_take_of_mem_drop
{l : List α} (h : l.Pairwise R) (hx : x ∈ l.take i) (hy : y ∈ l.drop i) : R x y := by
{l : List α} (h : l.Pairwise R) (hx : x ∈ l.take n) (hy : y ∈ l.drop n) : R x y := by
apply pairwise_iff_forall_sublist.mp h
rw [← take_append_drop i l, sublist_append_iff]
rw [← take_append_drop n l, sublist_append_iff]
refine ⟨[x], [y], rfl, by simpa, by simpa⟩
theorem Pairwise.rel_of_mem_append

View File

@@ -18,9 +18,6 @@ another.
The notation `~` is used for permutation equivalence.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
open Nat
namespace List
@@ -93,8 +90,8 @@ theorem Perm.append_left {t₁ t₂ : List α} : ∀ l : List α, t₁ ~ t₂
theorem Perm.append {l₁ l₂ t₁ t₂ : List α} (p₁ : l₁ ~ l₂) (p₂ : t₁ ~ t₂) : l₁ ++ t₁ ~ l₂ ++ t₂ :=
(p₁.append_right t₁).trans (p₂.append_left l₂)
theorem Perm.append_cons (a : α) {l₁ l₂ r₁ r₂ : List α} (p₁ : l₁ ~ l₂) (p₂ : r₁ ~ r₂) :
l₁ ++ a :: r₁ ~ l₂ ++ a :: r₂ := p₁.append (p₂.cons a)
theorem Perm.append_cons (a : α) {h₁ h₂ t₁ t₂ : List α} (p₁ : h₁ ~ h₂) (p₂ : t₁ ~ t₂) :
h₁ ++ a :: t₁ ~ h₂ ++ a :: t₂ := p₁.append (p₂.cons a)
@[simp] theorem perm_middle {a : α} : ∀ {l₁ l₂ : List α}, l₁ ++ a :: l₂ ~ a :: (l₁ ++ l₂)
| [], _ => .refl _
@@ -170,7 +167,7 @@ theorem Perm.singleton_eq (h : [a] ~ l) : [a] = l := singleton_perm.mp h
theorem singleton_perm_singleton {a b : α} : [a] ~ [b] ↔ a = b := by simp
theorem perm_cons_erase [DecidableEq α] {a : α} {l : List α} (h : a ∈ l) : l ~ a :: l.erase a :=
let ⟨_, _, _, e₁, e₂⟩ := exists_erase_eq h
let ⟨_l₁, _l₂, _, e₁, e₂⟩ := exists_erase_eq h
e₂ ▸ e₁ ▸ perm_middle
theorem Perm.filterMap (f : α → Option β) {l₁ l₂ : List α} (p : l₁ ~ l₂) :
@@ -219,7 +216,7 @@ theorem exists_perm_sublist {l₁ l₂ l₂' : List α} (s : l₁ <+ l₂) (p :
| .cons₂ _ (.cons _ s) => exact ⟨y :: _, .rfl, (s.cons₂ _).cons _⟩
| .cons₂ _ (.cons₂ _ s) => exact ⟨x :: y :: _, .swap .., (s.cons₂ _).cons₂ _⟩
| trans _ _ IH₁ IH₂ =>
let ⟨_, pm, sm⟩ := IH₁ s
let ⟨m₁, pm, sm⟩ := IH₁ s
let ⟨r₁, pr, sr⟩ := IH₂ sm
exact ⟨r₁, pr.trans pm, sr⟩
@@ -513,15 +510,15 @@ theorem Perm.eraseP (f : α → Bool) {l₁ l₂ : List α}
refine (IH₁ H).trans (IH₂ ((p₁.pairwise_iff ?_).1 H))
exact fun h h₁ h₂ => h h₂ h₁
theorem perm_insertIdx {α} (x : α) (l : List α) {i} (h : i ≤ l.length) :
insertIdx i x l ~ x :: l := by
induction l generalizing i with
theorem perm_insertIdx {α} (x : α) (l : List α) {n} (h : n ≤ l.length) :
insertIdx n x l ~ x :: l := by
induction l generalizing n with
| nil =>
cases i with
cases n with
| zero => rfl
| succ => cases h
| cons _ _ ih =>
cases i with
cases n with
| zero => simp [insertIdx]
| succ =>
simp only [insertIdx, modifyTailIdx]

View File

@@ -14,9 +14,6 @@ Most of the results are deferred to `Data.Init.List.Nat.Range`, where more resul
natural arithmetic are available.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -74,7 +71,7 @@ theorem mem_range' : ∀{n}, m ∈ range' s n step ↔ ∃ i < n, m = s + step *
rw [exists_comm]; simp [Nat.mul_succ, Nat.add_assoc, Nat.add_comm]
theorem getElem?_range' (s step) :
∀ {i j : Nat}, i < j → (range' s j step)[i]? = some (s + step * i)
∀ {m n : Nat}, m < n → (range' s n step)[m]? = some (s + step * m)
| 0, n + 1, _ => by simp [range'_succ]
| m + 1, n + 1, h => by
simp only [range'_succ, getElem?_cons_succ]
@@ -147,10 +144,10 @@ theorem range_loop_range' : ∀ s n : Nat, range.loop s (range' s n) = range' 0
theorem range_eq_range' (n : Nat) : range n = range' 0 n :=
(range_loop_range' n 0).trans <| by rw [Nat.zero_add]
theorem getElem?_range {i j : Nat} (h : i < j) : (range j)[i]? = some i := by
theorem getElem?_range {m n : Nat} (h : m < n) : (range n)[m]? = some m := by
simp [range_eq_range', getElem?_range' _ _ h]
@[simp] theorem getElem_range {i : Nat} (j) (h : j < (range i).length) : (range i)[j] = j := by
@[simp] theorem getElem_range {n : Nat} (m) (h : m < (range n).length) : (range n)[m] = m := by
simp [range_eq_range']
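For reference, `range' s n step` produces `n` elements starting at `s` with stride `step`, and `range n` is `range' 0 n`; a quick illustrative check:

```lean
#eval List.range' 2 4 3   -- [2, 5, 8, 11]: start 2, four elements, step 3
-- `getElem?_range`: every in-bounds index of `range n` returns itself.
example : (List.range 5)[3]? = some 3 := rfl
```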
theorem range_succ_eq_map (n : Nat) : range (n + 1) = 0 :: map succ (range n) := by
@@ -223,7 +220,7 @@ theorem zipIdx_eq_nil_iff {l : List α} {n : Nat} : List.zipIdx l n = [] ↔ l =
@[simp]
theorem getElem?_zipIdx :
∀ (l : List α) i j, (zipIdx l i)[j]? = l[j]?.map fun a => (a, i + j)
∀ (l : List α) n m, (zipIdx l n)[m]? = l[m]?.map fun a => (a, n + m)
| [], _, _ => rfl
| _ :: _, _, 0 => by simp
| _ :: l, n, m + 1 => by
@@ -303,7 +300,7 @@ theorem enumFrom_length : ∀ {n} {l : List α}, (enumFrom n l).length = l.lengt
@[deprecated getElem?_zipIdx (since := "2025-01-21"), simp]
theorem getElem?_enumFrom :
∀ i (l : List α) j, (enumFrom i l)[j]? = l[j]?.map fun a => (i + j, a)
∀ n (l : List α) m, (enumFrom n l)[m]? = l[m]?.map fun a => (n + m, a)
| _, [], _ => rfl
| _, _ :: _, 0 => by simp
| n, _ :: l, m + 1 => by

View File

@@ -11,9 +11,6 @@ import Init.Data.List.TakeDrop
# Lemmas about `List.Subset`, `List.Sublist`, `List.IsPrefix`, `List.IsSuffix`, and `List.IsInfix`.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -25,14 +22,14 @@ variable [BEq α]
@[simp] theorem isPrefixOf_cons₂_self [LawfulBEq α] {a : α} :
isPrefixOf (a::as) (a::bs) = isPrefixOf as bs := by simp [isPrefixOf_cons₂]
@[simp] theorem isPrefixOf_length_pos_nil {l : List α} (h : 0 < l.length) : isPrefixOf l [] = false := by
cases l <;> simp_all [isPrefixOf]
@[simp] theorem isPrefixOf_length_pos_nil {L : List α} (h : 0 < L.length) : isPrefixOf L [] = false := by
cases L <;> simp_all [isPrefixOf]
@[simp] theorem isPrefixOf_replicate {a : α} :
isPrefixOf l (replicate n a) = (decide (l.length ≤ n) && l.all (· == a)) := by
induction l generalizing n with
| nil => simp
| cons _ _ ih =>
| cons h t ih =>
cases n
· simp
· simp [replicate_succ, isPrefixOf_cons₂, ih, Nat.succ_le_succ_iff, Bool.and_left_comm]
@@ -571,9 +568,9 @@ theorem flatten_sublist_iff {L : List (List α)} {l} :
instance [DecidableEq α] (l₁ l₂ : List α) : Decidable (l₁ <+ l₂) :=
decidable_of_iff (l₁.isSublist l₂) isSublist_iff_sublist
protected theorem Sublist.drop : ∀ {l₁ l₂ : List α}, l₁ <+ l₂ → ∀ i, l₁.drop i <+ l₂.drop i
protected theorem Sublist.drop : ∀ {l₁ l₂ : List α}, l₁ <+ l₂ → ∀ n, l₁.drop n <+ l₂.drop n
| _, _, h, 0 => h
| _, _, h, i + 1 => by rw [← drop_tail, ← drop_tail]; exact h.tail.drop i
| _, _, h, n + 1 => by rw [← drop_tail, ← drop_tail]; exact h.tail.drop n
/-! ### IsPrefix / IsSuffix / IsInfix -/
@@ -607,10 +604,10 @@ theorem infix_refl (l : List α) : l <:+: l := prefix_rfl.isInfix
@[simp] theorem suffix_cons (a : α) : ∀ l, l <:+ a :: l := suffix_append [a]
theorem infix_cons : l₁ <:+: l₂ → l₁ <:+: a :: l₂ := fun ⟨l₁', l₂', h⟩ => ⟨a :: l₁', l₂', h ▸ rfl⟩
theorem infix_cons : l₁ <:+: l₂ → l₁ <:+: a :: l₂ := fun ⟨L₁, L₂, h⟩ => ⟨a :: L₁, L₂, h ▸ rfl⟩
theorem infix_concat : l₁ <:+: l₂ → l₁ <:+: concat l₂ a := fun ⟨l₁', l₂', h⟩ =>
⟨l₁', concat l₂' a, by simp [← h, concat_eq_append, append_assoc]⟩
theorem infix_concat : l₁ <:+: l₂ → l₁ <:+: concat l₂ a := fun ⟨L₁, L₂, h⟩ =>
⟨L₁, concat L₂ a, by simp [← h, concat_eq_append, append_assoc]⟩
theorem IsPrefix.trans : ∀ {l₁ l₂ l₃ : List α}, l₁ <+: l₂ → l₂ <+: l₃ → l₁ <+: l₃
| _, _, _, ⟨r₁, rfl⟩, ⟨r₂, rfl⟩ => ⟨r₁ ++ r₂, (append_assoc _ _ _).symm⟩
@@ -649,13 +646,13 @@ theorem eq_nil_of_infix_nil (h : l <:+: []) : l = [] := infix_nil.mp h
theorem eq_nil_of_prefix_nil (h : l <+: []) : l = [] := prefix_nil.mp h
theorem eq_nil_of_suffix_nil (h : l <:+ []) : l = [] := suffix_nil.mp h
theorem IsPrefix.ne_nil {xs ys : List α} (h : xs <+: ys) (hx : xs ≠ []) : ys ≠ [] := by
theorem IsPrefix.ne_nil {x y : List α} (h : x <+: y) (hx : x ≠ []) : y ≠ [] := by
rintro rfl; exact hx <| List.prefix_nil.mp h
theorem IsSuffix.ne_nil {xs ys : List α} (h : xs <:+ ys) (hx : xs ≠ []) : ys ≠ [] := by
theorem IsSuffix.ne_nil {x y : List α} (h : x <:+ y) (hx : x ≠ []) : y ≠ [] := by
rintro rfl; exact hx <| List.suffix_nil.mp h
theorem IsInfix.ne_nil {xs ys : List α} (h : xs <:+: ys) (hx : xs ≠ []) : ys ≠ [] := by
theorem IsInfix.ne_nil {x y : List α} (h : x <:+: y) (hx : x ≠ []) : y ≠ [] := by
rintro rfl; exact hx <| List.infix_nil.mp h
theorem IsInfix.length_le (h : l₁ <:+: l₂) : l₁.length ≤ l₂.length :=
@@ -667,10 +664,10 @@ theorem IsPrefix.length_le (h : l₁ <+: l₂) : l₁.length ≤ l₂.length :=
theorem IsSuffix.length_le (h : l₁ <:+ l₂) : l₁.length ≤ l₂.length :=
h.sublist.length_le
theorem IsPrefix.getElem {xs ys : List α} (h : xs <+: ys) {i} (hi : i < xs.length) :
xs[i] = ys[i]'(Nat.le_trans hi h.length_le) := by
theorem IsPrefix.getElem {x y : List α} (h : x <+: y) {n} (hn : n < x.length) :
x[n] = y[n]'(Nat.le_trans hn h.length_le) := by
obtain ⟨_, rfl⟩ := h
exact (List.getElem_append_left hi).symm
exact (List.getElem_append_left hn).symm
-- See `Init.Data.List.Nat.Sublist` for `IsSuffix.getElem`.
@@ -705,13 +702,13 @@ theorem IsSuffix.reverse : l₁ <:+ l₂ → reverse l₁ <+: reverse l₂ :=
theorem IsPrefix.reverse : l₁ <+: l₂ → reverse l₁ <:+ reverse l₂ :=
reverse_suffix.2
theorem IsPrefix.head {l₁ l₂ : List α} (h : l₁ <+: l₂) (hx : l₁ ≠ []) :
l₁.head hx = l₂.head (h.ne_nil hx) := by
cases l₁ <;> cases l₂ <;> simp only [head_cons, ne_eq, not_true_eq_false] at hx
theorem IsPrefix.head {x y : List α} (h : x <+: y) (hx : x ≠ []) :
x.head hx = y.head (h.ne_nil hx) := by
cases x <;> cases y <;> simp only [head_cons, ne_eq, not_true_eq_false] at hx
all_goals (obtain ⟨_, h⟩ := h; injection h)
theorem IsSuffix.getLast {l₁ l₂ : List α} (h : l₁ <:+ l₂) (hx : l₁ ≠ []) :
l₁.getLast hx = l₂.getLast (h.ne_nil hx) := by
theorem IsSuffix.getLast {x y : List α} (h : x <:+ y) (hx : x ≠ []) :
x.getLast hx = y.getLast (h.ne_nil hx) := by
rw [← head_reverse (by simpa), h.reverse.head,
head_reverse (by rintro h; simp only [reverse_eq_nil_iff] at h; simp_all)]
@@ -842,8 +839,8 @@ theorem isPrefix_iff : l₁ <+: l₂ ↔ ∀ i (h : i < l₁.length), l₂[i]? =
simp [Nat.succ_lt_succ_iff, eq_comm]
theorem isPrefix_iff_getElem {l₁ l₂ : List α} :
l₁ <+: l₂ ↔ ∃ (h : l₁.length ≤ l₂.length), ∀ i (hx : i < l₁.length),
l₁[i] = l₂[i]'(Nat.lt_of_lt_of_le hx h) where
l₁ <+: l₂ ↔ ∃ (h : l₁.length ≤ l₂.length), ∀ x (hx : x < l₁.length),
l₁[x] = l₂[x]'(Nat.lt_of_lt_of_le hx h) where
mp h := ⟨h.length_le, fun _ h' => h.getElem h'⟩
mpr h := by
obtain ⟨hl, h⟩ := h
@@ -954,40 +951,40 @@ theorem infix_of_mem_flatten : ∀ {L : List (List α)}, l ∈ L → l <:+: flat
theorem prefix_cons_inj (a) : a :: l₁ <+: a :: l₂ ↔ l₁ <+: l₂ :=
prefix_append_right_inj [a]
theorem take_prefix (i) (l : List α) : take i l <+: l :=
theorem take_prefix (n) (l : List α) : take n l <+: l :=
⟨_, take_append_drop _ _⟩
theorem drop_suffix (i) (l : List α) : drop i l <:+ l :=
theorem drop_suffix (n) (l : List α) : drop n l <:+ l :=
⟨_, take_append_drop _ _⟩
theorem take_sublist (i) (l : List α) : take i l <+ l :=
(take_prefix i l).sublist
theorem take_sublist (n) (l : List α) : take n l <+ l :=
(take_prefix n l).sublist
theorem drop_sublist (i) (l : List α) : drop i l <+ l :=
(drop_suffix i l).sublist
theorem drop_sublist (n) (l : List α) : drop n l <+ l :=
(drop_suffix n l).sublist
theorem take_subset (i) (l : List α) : take i l ⊆ l :=
(take_sublist i l).subset
theorem take_subset (n) (l : List α) : take n l ⊆ l :=
(take_sublist n l).subset
theorem drop_subset (i) (l : List α) : drop i l ⊆ l :=
(drop_sublist i l).subset
theorem drop_subset (n) (l : List α) : drop n l ⊆ l :=
(drop_sublist n l).subset
theorem mem_of_mem_take {l : List α} (h : a ∈ l.take i) : a ∈ l :=
take_subset _ _ h
theorem mem_of_mem_take {l : List α} (h : a ∈ l.take n) : a ∈ l :=
take_subset n l h
theorem mem_of_mem_drop {i} {l : List α} (h : a ∈ l.drop i) : a ∈ l :=
theorem mem_of_mem_drop {n} {l : List α} (h : a ∈ l.drop n) : a ∈ l :=
drop_subset _ _ h
theorem drop_suffix_drop_left (l : List α) {i j : Nat} (h : i ≤ j) : drop j l <:+ drop i l := by
theorem drop_suffix_drop_left (l : List α) {m n : Nat} (h : m ≤ n) : drop n l <:+ drop m l := by
rw [← Nat.sub_add_cancel h, Nat.add_comm, drop_drop]
apply drop_suffix
-- See `Init.Data.List.Nat.TakeDrop` for `take_prefix_take_left`.
theorem drop_sublist_drop_left (l : List α) {i j : Nat} (h : i ≤ j) : drop j l <+ drop i l :=
theorem drop_sublist_drop_left (l : List α) {m n : Nat} (h : m ≤ n) : drop n l <+ drop m l :=
(drop_suffix_drop_left l h).sublist
theorem drop_subset_drop_left (l : List α) {i j : Nat} (h : i ≤ j) : drop j l ⊆ drop i l :=
theorem drop_subset_drop_left (l : List α) {m n : Nat} (h : m ≤ n) : drop n l ⊆ drop m l :=
(drop_sublist_drop_left l h).subset
theorem takeWhile_prefix (p : α → Bool) : l.takeWhile p <+: l :=

View File

@@ -10,9 +10,6 @@ import Init.Data.List.Lemmas
# Lemmas about `List.take` and `List.drop`.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -23,19 +20,19 @@ Further results on `List.take` and `List.drop`, which rely on stronger automatio
are given in `Init.Data.List.TakeDrop`.
-/
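The basic splitting identity this file revolves around can be checked on a concrete list (an illustrative aside, not part of the diff):

```lean
-- `take i l ++ drop i l` reassembles the original list.
example : [1, 2, 3, 4].take 2 ++ [1, 2, 3, 4].drop 2 = [1, 2, 3, 4] := rfl
```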
theorem take_cons {l : List α} (h : 0 < i) : take i (a :: l) = a :: take (i - 1) l := by
cases i with
theorem take_cons {l : List α} (h : 0 < n) : take n (a :: l) = a :: take (n - 1) l := by
cases n with
| zero => exact absurd h (Nat.lt_irrefl _)
| succ i => rfl
| succ n => rfl
@[simp]
theorem drop_one : ∀ l : List α, drop 1 l = tail l
| [] | _ :: _ => rfl
@[simp] theorem take_append_drop : ∀ (i : Nat) (l : List α), take i l ++ drop i l = l
@[simp] theorem take_append_drop : ∀ (n : Nat) (l : List α), take n l ++ drop n l = l
| 0, _ => rfl
| _+1, [] => rfl
| i+1, x :: xs => congrArg (cons x) <| take_append_drop i xs
| n+1, x :: xs => congrArg (cons x) <| take_append_drop n xs
@[simp] theorem length_drop : ∀ (i : Nat) (l : List α), length (drop i l) = length l - i
| 0, _ => rfl
@@ -47,14 +44,14 @@ theorem drop_one : ∀ l : List α, drop 1 l = tail l
theorem drop_of_length_le {l : List α} (h : l.length ≤ i) : drop i l = [] :=
length_eq_zero.1 (length_drop .. ▸ Nat.sub_eq_zero_of_le h)
theorem length_lt_of_drop_ne_nil {l : List α} {i} (h : drop i l ≠ []) : i < l.length :=
theorem length_lt_of_drop_ne_nil {l : List α} {n} (h : drop n l ≠ []) : n < l.length :=
gt_of_not_le (mt drop_of_length_le h)
theorem take_of_length_le {l : List α} (h : l.length ≤ i) : take i l = l := by
have := take_append_drop i l
rw [drop_of_length_le h, append_nil] at this; exact this
theorem lt_length_of_take_ne_self {l : List α} {i} (h : l.take i ≠ l) : i < l.length :=
theorem lt_length_of_take_ne_self {l : List α} {n} (h : l.take n ≠ l) : n < l.length :=
gt_of_not_le (mt take_of_length_le h)
@[deprecated drop_of_length_le (since := "2024-07-07")] abbrev drop_length_le := @drop_of_length_le
@@ -70,66 +67,66 @@ theorem getElem_cons_drop : ∀ (l : List α) (i : Nat) (h : i < l.length),
| _::_, 0, _ => rfl
| _::_, i+1, h => getElem_cons_drop _ i (Nat.add_one_lt_add_one_iff.mp h)
theorem drop_eq_getElem_cons {i} {l : List α} (h : i < l.length) : drop i l = l[i] :: drop (i + 1) l :=
(getElem_cons_drop _ i h).symm
theorem drop_eq_getElem_cons {n} {l : List α} (h : n < l.length) : drop n l = l[n] :: drop (n + 1) l :=
(getElem_cons_drop _ n h).symm
@[simp]
theorem getElem?_take_of_lt {l : List α} {i j : Nat} (h : i < j) : (l.take j)[i]? = l[i]? := by
induction j generalizing l i with
theorem getElem?_take_of_lt {l : List α} {n m : Nat} (h : m < n) : (l.take n)[m]? = l[m]? := by
induction n generalizing l m with
| zero =>
exact absurd h (Nat.not_lt_of_le i.zero_le)
| succ _ hj =>
exact absurd h (Nat.not_lt_of_le m.zero_le)
| succ _ hn =>
cases l with
| nil => simp only [take_nil]
| cons hd tl =>
cases i
cases m
· simp
· simpa using hj (Nat.lt_of_succ_lt_succ h)
· simpa using hn (Nat.lt_of_succ_lt_succ h)
theorem getElem?_take_of_succ {l : List α} {i : Nat} : (l.take (i + 1))[i]? = l[i]? := by simp
theorem getElem?_take_of_succ {l : List α} {n : Nat} : (l.take (n + 1))[n]? = l[n]? := by simp
@[simp] theorem drop_drop (i : Nat) : ∀ (j) (l : List α), drop i (drop j l) = drop (j + i) l
| j, [] => by simp
@[simp] theorem drop_drop (n : Nat) : ∀ (m) (l : List α), drop n (drop m l) = drop (m + n) l
| m, [] => by simp
| 0, l => by simp
| j + 1, a :: l =>
| m + 1, a :: l =>
calc
drop i (drop (j + 1) (a :: l)) = drop i (drop j l) := rfl
_ = drop (j + i) l := drop_drop i j l
_ = drop ((j + 1) + i) (a :: l) := by rw [Nat.add_right_comm]; rfl
drop n (drop (m + 1) (a :: l)) = drop n (drop m l) := rfl
_ = drop (m + n) l := drop_drop n m l
_ = drop ((m + 1) + n) (a :: l) := by rw [Nat.add_right_comm]; rfl
theorem drop_add_one_eq_tail_drop (l : List α) : l.drop (i + 1) = (l.drop i).tail := by
rw [← drop_drop, drop_one]
theorem take_drop : ∀ (i j : Nat) (l : List α), take i (drop j l) = drop j (take (j + i) l)
| _, 0, _ => by simp
theorem take_drop : ∀ (m n : Nat) (l : List α), take n (drop m l) = drop m (take (m + n) l)
| 0, _, _ => by simp
| _, _, [] => by simp
| _, _+1, _ :: _ => by simpa [Nat.succ_add, take_succ_cons, drop_succ_cons] using take_drop ..
| _+1, _, _ :: _ => by simpa [Nat.succ_add, take_succ_cons, drop_succ_cons] using take_drop ..
@[simp]
theorem tail_drop (l : List α) (i : Nat) : (l.drop i).tail = l.drop (i + 1) := by
induction l generalizing i with
theorem tail_drop (l : List α) (n : Nat) : (l.drop n).tail = l.drop (n + 1) := by
induction l generalizing n with
| nil => simp
| cons hd tl hl =>
cases i
cases n
· simp
· simp [hl]
@[simp]
theorem drop_tail (l : List α) (i : Nat) : l.tail.drop i = l.drop (i + 1) := by
theorem drop_tail (l : List α) (n : Nat) : l.tail.drop n = l.drop (n + 1) := by
rw [Nat.add_comm, drop_drop, drop_one]
@[simp]
theorem drop_eq_nil_iff {l : List α} {i : Nat} : l.drop i = [] ↔ l.length ≤ i := by
theorem drop_eq_nil_iff {l : List α} {k : Nat} : l.drop k = [] ↔ l.length ≤ k := by
refine ⟨fun h => ?_, drop_eq_nil_of_le⟩
induction i generalizing l with
induction k generalizing l with
| zero =>
simp only [drop] at h
simp [h]
| succ i hi =>
| succ k hk =>
cases l
· simp
· simp only [drop] at h
simpa [Nat.succ_le_succ_iff] using hi h
simpa [Nat.succ_le_succ_iff] using hk h
@[deprecated drop_eq_nil_iff (since := "2024-09-10")] abbrev drop_eq_nil_iff_le := @drop_eq_nil_iff
@@ -149,33 +146,33 @@ theorem take_eq_nil_of_eq_nil : ∀ {as : List α} {i}, as = [] → as.take i =
theorem ne_nil_of_take_ne_nil {as : List α} {i : Nat} (h : as.take i ≠ []) : as ≠ [] :=
mt take_eq_nil_of_eq_nil h
theorem set_take {l : List α} {i j : Nat} {a : α} :
(l.set j a).take i = (l.take i).set j a := by
induction i generalizing l j with
theorem set_take {l : List α} {n m : Nat} {a : α} :
(l.set m a).take n = (l.take n).set m a := by
induction n generalizing l m with
| zero => simp
| succ _ hi =>
| succ _ hn =>
cases l with
| nil => simp
| cons hd tl => cases j <;> simp_all
| cons hd tl => cases m <;> simp_all
theorem drop_set {l : List α} {i j : Nat} {a : α} :
(l.set j a).drop i = if j < i then l.drop i else (l.drop i).set (j - i) a := by
induction i generalizing l j with
theorem drop_set {l : List α} {n m : Nat} {a : α} :
(l.set m a).drop n = if m < n then l.drop n else (l.drop n).set (m - n) a := by
induction n generalizing l m with
| zero => simp
| succ _ hi =>
| succ _ hn =>
cases l with
| nil => simp
| cons hd tl =>
cases j
cases m
· simp_all
· simp only [hi, set_cons_succ, drop_succ_cons, succ_lt_succ_iff]
· simp only [hn, set_cons_succ, drop_succ_cons, succ_lt_succ_iff]
congr 2
exact (Nat.add_sub_add_right ..).symm
theorem set_drop {l : List α} {i j : Nat} {a : α} :
(l.drop i).set j a = (l.set (i + j) a).drop i := by
rw [drop_set, if_neg, add_sub_self_left]
exact (Nat.not_lt).2 (le_add_right ..)
theorem set_drop {l : List α} {n m : Nat} {a : α} :
(l.drop n).set m a = (l.set (n + m) a).drop n := by
rw [drop_set, if_neg, add_sub_self_left n m]
exact (Nat.not_lt).2 (le_add_right n m)
theorem take_concat_get (l : List α) (i : Nat) (h : i < l.length) :
(l.take i).concat l[i] = l.take (i+1) :=
@@ -186,9 +183,6 @@ theorem take_concat_get (l : List α) (i : Nat) (h : i < l.length) :
(l.take i) ++ [l[i]] = l.take (i+1) := by
simpa using take_concat_get l i h
theorem take_succ_eq_append_getElem {i} {l : List α} (h : i < l.length) : l.take (i + 1) = l.take i ++ [l[i]] :=
(take_append_getElem _ _ h).symm
@[simp] theorem take_append_getLast (l : List α) (h : l ≠ []) :
(l.take (l.length - 1)) ++ [l.getLast h] = l := by
rw [getLast_eq_getElem]
@@ -207,7 +201,7 @@ theorem take_succ_eq_append_getElem {i} {l : List α} (h : i < l.length) : l.tak
theorem take_cons_succ : (a::as).take (i+1) = a :: as.take i := rfl
@[deprecated take_of_length_le (since := "2024-07-25")]
theorem take_all_of_le {l : List α} {i} (h : length l ≤ i) : take i l = l :=
theorem take_all_of_le {n} {l : List α} (h : length l ≤ n) : take n l = l :=
take_of_length_le h
theorem drop_left : ∀ l₁ l₂ : List α, drop (length l₁) (l₁ ++ l₂) = l₂
@@ -215,7 +209,7 @@ theorem drop_left : ∀ l₁ l₂ : List α, drop (length l₁) (l₁ ++ l₂) =
| _ :: l₁, l₂ => drop_left l₁ l₂
@[simp]
theorem drop_left' {l₁ l₂ : List α} {i} (h : length l₁ = i) : drop i (l₁ ++ l₂) = l₂ := by
theorem drop_left' {l₁ l₂ : List α} {n} (h : length l₁ = n) : drop n (l₁ ++ l₂) = l₂ := by
rw [← h]; apply drop_left
theorem take_left : ∀ l₁ l₂ : List α, take (length l₁) (l₁ ++ l₂) = l₁
@@ -223,24 +217,24 @@ theorem take_left : ∀ l₁ l₂ : List α, take (length l₁) (l₁ ++ l₂) =
| a :: l₁, l₂ => congrArg (cons a) (take_left l₁ l₂)
@[simp]
theorem take_left' {l₁ l₂ : List α} {i} (h : length l₁ = i) : take i (l₁ ++ l₂) = l₁ := by
theorem take_left' {l₁ l₂ : List α} {n} (h : length l₁ = n) : take n (l₁ ++ l₂) = l₁ := by
rw [← h]; apply take_left
theorem take_succ {l : List α} {i : Nat} : l.take (i + 1) = l.take i ++ l[i]?.toList := by
induction l generalizing i with
theorem take_succ {l : List α} {n : Nat} : l.take (n + 1) = l.take n ++ l[n]?.toList := by
induction l generalizing n with
| nil =>
simp only [take_nil, Option.toList, getElem?_nil, append_nil]
| cons hd tl hl =>
cases i
cases n
· simp only [take, Option.toList, getElem?_cons_zero, nil_append]
· simp only [take, hl, getElem?_cons_succ, cons_append]
@[deprecated "Deprecated without replacement." (since := "2024-07-25")]
theorem drop_sizeOf_le [SizeOf α] (l : List α) (i : Nat) : sizeOf (l.drop i) ≤ sizeOf l := by
induction l generalizing i with
theorem drop_sizeOf_le [SizeOf α] (l : List α) (n : Nat) : sizeOf (l.drop n) ≤ sizeOf l := by
induction l generalizing n with
| nil => rw [drop_nil]; apply Nat.le_refl
| cons _ _ lih =>
induction i with
induction n with
| zero => apply Nat.le_refl
| succ n =>
exact Trans.trans (lih _) (Nat.le_add_left _ _)
@@ -252,18 +246,18 @@ theorem dropLast_eq_take (l : List α) : l.dropLast = l.take (l.length - 1) := b
induction l generalizing x <;> simp_all [dropLast]
@[simp] theorem map_take (f : α → β) :
∀ (l : List α) (i : Nat), (l.take i).map f = (l.map f).take i
∀ (L : List α) (i : Nat), (L.take i).map f = (L.map f).take i
| [], i => by simp
| _, 0 => by simp
| _ :: tl, n + 1 => by dsimp; rw [map_take f tl n]
| h :: t, n + 1 => by dsimp; rw [map_take f t n]
@[simp] theorem map_drop (f : α → β) :
∀ (l : List α) (i : Nat), (l.drop i).map f = (l.map f).drop i
∀ (L : List α) (i : Nat), (L.drop i).map f = (L.map f).drop i
| [], i => by simp
| l, 0 => by simp
| _ :: tl, n + 1 => by
| L, 0 => by simp
| h :: t, n + 1 => by
dsimp
rw [map_drop f tl]
rw [map_drop f t]
/-! ### takeWhile and dropWhile -/
@@ -396,7 +390,7 @@ theorem dropWhile_append {xs ys : List α} :
if (xs.dropWhile p).isEmpty then ys.dropWhile p else xs.dropWhile p ++ ys := by
induction xs with
| nil => simp
| cons _ _ ih =>
| cons h t ih =>
simp only [cons_append, dropWhile_cons]
split <;> simp_all
@@ -431,23 +425,23 @@ theorem dropWhile_replicate (p : α → Bool) :
simp only [dropWhile_replicate_eq_filter_not, filter_replicate]
split <;> simp_all
theorem take_takeWhile {l : List α} (p : α → Bool) i :
(l.takeWhile p).take i = (l.take i).takeWhile p := by
induction l generalizing i with
theorem take_takeWhile {l : List α} (p : α → Bool) n :
(l.takeWhile p).take n = (l.take n).takeWhile p := by
induction l generalizing n with
| nil => simp
| cons x xs ih =>
by_cases h : p x <;> cases i <;> simp [takeWhile_cons, h, ih, take_succ_cons]
by_cases h : p x <;> cases n <;> simp [takeWhile_cons, h, ih, take_succ_cons]
@[simp] theorem all_takeWhile {l : List α} : (l.takeWhile p).all p = true := by
induction l with
| nil => rfl
| cons h _ ih => by_cases p h <;> simp_all
| cons h t ih => by_cases p h <;> simp_all
@[simp] theorem any_dropWhile {l : List α} :
(l.dropWhile p).any (fun x => !p x) = !l.all p := by
induction l with
| nil => rfl
| cons h _ ih => by_cases p h <;> simp_all
| cons h t ih => by_cases p h <;> simp_all
theorem replace_takeWhile [BEq α] [LawfulBEq α] {l : List α} {p : α → Bool} (h : p a = p b) :
(l.takeWhile p).replace a b = (l.replace a b).takeWhile p := by
@@ -463,7 +457,7 @@ theorem replace_takeWhile [BEq α] [LawfulBEq α] {l : List α} {p : α → Bool
/-! ### splitAt -/
@[simp] theorem splitAt_eq (i : Nat) (l : List α) : splitAt i l = (l.take i, l.drop i) := by
@[simp] theorem splitAt_eq (n : Nat) (l : List α) : splitAt n l = (l.take n, l.drop n) := by
rw [splitAt, splitAt_go, reverse_nil, nil_append]
split <;> simp_all [take_of_length_le, drop_of_length_le]
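As the `splitAt_eq` lemma just above states, `splitAt` packages `take` and `drop` into a single traversal; a concrete check (an illustrative aside, not part of the diff):

```lean
-- `splitAt n l = (l.take n, l.drop n)`, evaluated on a closed list.
example : [1, 2, 3, 4].splitAt 2 = ([1, 2], [3, 4]) := rfl
```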

View File

@@ -7,7 +7,6 @@ prelude
import Init.Data.List.Impl
import Init.Data.List.Nat.Erase
import Init.Data.List.Monadic
import Init.Data.List.Nat.InsertIdx
import Init.Data.Array.Lex.Basic
/-! ### Lemmas about `List.toArray`.
@@ -15,20 +14,17 @@ import Init.Data.Array.Lex.Basic
We prefer to pull `List.toArray` outwards past `Array` operations.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace Array
@[simp] theorem toList_set (xs : Array α) (i x h) :
(xs.set i x).toList = xs.toList.set i x := rfl
@[simp] theorem toList_set (a : Array α) (i x h) :
(a.set i x).toList = a.toList.set i x := rfl
theorem swap_def (xs : Array α) (i j : Nat) (hi hj) :
xs.swap i j hi hj = (xs.set i xs[j]).set j xs[i] (by simpa using hj) := by
theorem swap_def (a : Array α) (i j : Nat) (hi hj) :
a.swap i j hi hj = (a.set i a[j]).set j a[i] (by simpa using hj) := by
simp [swap]
@[simp] theorem toList_swap (xs : Array α) (i j : Nat) (hi hj) :
(xs.swap i j hi hj).toList = (xs.toList.set i xs[j]).set j xs[i] := by simp [swap_def]
@[simp] theorem toList_swap (a : Array α) (i j : Nat) (hi hj) :
(a.swap i j hi hj).toList = (a.toList.set i a[j]).set j a[i] := by simp [swap_def]
end Array
@@ -36,16 +32,16 @@ namespace List
open Array
theorem toArray_inj {as bs : List α} (h : as.toArray = bs.toArray) : as = bs := by
cases as with
theorem toArray_inj {a b : List α} (h : a.toArray = b.toArray) : a = b := by
cases a with
| nil => simpa using h
| cons a as =>
cases bs with
cases b with
| nil => simp at h
| cons b bs => simpa using h
@[simp] theorem size_toArrayAux {as : List α} {xs : Array α} :
(as.toArrayAux xs).size = xs.size + as.length := by
@[simp] theorem size_toArrayAux {a : List α} {b : Array α} :
(a.toArrayAux b).size = b.size + a.length := by
simp [size]
-- This is not a `@[simp]` lemma because it is pushing `toArray` inwards.
@@ -368,8 +364,8 @@ theorem isPrefixOfAux_toArray_zero [BEq α] (l₁ l₂ : List α) (hle : l₁.le
rw [ih]
simp_all
theorem zipWithAux_toArray_succ (as : List α) (bs : List β) (f : α → β → γ) (i : Nat) (xs : Array γ) :
zipWithAux as.toArray bs.toArray f (i + 1) xs = zipWithAux as.tail.toArray bs.tail.toArray f i xs := by
theorem zipWithAux_toArray_succ (as : List α) (bs : List β) (f : α → β → γ) (i : Nat) (cs : Array γ) :
zipWithAux as.toArray bs.toArray f (i + 1) cs = zipWithAux as.tail.toArray bs.tail.toArray f i cs := by
rw [zipWithAux]
conv => rhs; rw [zipWithAux]
simp only [size_toArray, getElem_toArray, length_tail, getElem_tail]
@@ -380,16 +376,16 @@ theorem zipWithAux_toArray_succ (as : List α) (bs : List β) (f : α → β →
rw [dif_neg (by omega)]
· rw [dif_neg (by omega)]
theorem zipWithAux_toArray_succ' (as : List α) (bs : List β) (f : α → β → γ) (i : Nat) (xs : Array γ) :
zipWithAux as.toArray bs.toArray f (i + 1) xs = zipWithAux (as.drop (i+1)).toArray (bs.drop (i+1)).toArray f 0 xs := by
induction i generalizing as bs xs with
theorem zipWithAux_toArray_succ' (as : List α) (bs : List β) (f : α → β → γ) (i : Nat) (cs : Array γ) :
zipWithAux as.toArray bs.toArray f (i + 1) cs = zipWithAux (as.drop (i+1)).toArray (bs.drop (i+1)).toArray f 0 cs := by
induction i generalizing as bs cs with
| zero => simp [zipWithAux_toArray_succ]
| succ i ih =>
rw [zipWithAux_toArray_succ, ih]
simp
theorem zipWithAux_toArray_zero (f : α → β → γ) (as : List α) (bs : List β) (xs : Array γ) :
zipWithAux as.toArray bs.toArray f 0 xs = xs ++ (List.zipWith f as bs).toArray := by
theorem zipWithAux_toArray_zero (f : α → β → γ) (as : List α) (bs : List β) (cs : Array γ) :
zipWithAux as.toArray bs.toArray f 0 cs = cs ++ (List.zipWith f as bs).toArray := by
rw [Array.zipWithAux]
match as, bs with
| [], _ => simp
@@ -406,8 +402,8 @@ theorem zipWithAux_toArray_zero (f : α → β → γ) (as : List α) (bs : List
Array.zip as.toArray bs.toArray = (List.zip as bs).toArray := by
simp [Array.zip, zipWith_toArray, zip]
theorem zipWithAll_go_toArray (as : List α) (bs : List β) (f : Option α → Option β → γ) (i : Nat) (xs : Array γ) :
zipWithAll.go f as.toArray bs.toArray i xs = xs ++ (List.zipWithAll f (as.drop i) (bs.drop i)).toArray := by
theorem zipWithAll_go_toArray (as : List α) (bs : List β) (f : Option α → Option β → γ) (i : Nat) (cs : Array γ) :
zipWithAll.go f as.toArray bs.toArray i cs = cs ++ (List.zipWithAll f (as.drop i) (bs.drop i)).toArray := by
unfold zipWithAll.go
split <;> rename_i h
· rw [zipWithAll_go_toArray]
@@ -503,12 +499,12 @@ abbrev _root_.Array.mkArray_eq_toArray_replicate := @toArray_replicate
theorem flatMap_toArray_cons {β} (f : α → Array β) (a : α) (as : List α) :
(a :: as).toArray.flatMap f = f a ++ as.toArray.flatMap f := by
simp [Array.flatMap]
suffices ∀ xs, List.foldl (fun ys a => ys ++ f a) (f a ++ xs) as =
f a ++ List.foldl (fun ys a => ys ++ f a) xs as by
suffices ∀ cs, List.foldl (fun bs a => bs ++ f a) (f a ++ cs) as =
f a ++ List.foldl (fun bs a => bs ++ f a) cs as by
erw [empty_append] -- Why doesn't this work via `simp`?
simpa using this #[]
intro xs
induction as generalizing xs <;> simp_all
intro cs
induction as generalizing cs <;> simp_all
@[simp] theorem flatMap_toArray {β} (f : α → Array β) (as : List α) :
as.toArray.flatMap f = (as.flatMap (fun a => (f a).toList)).toArray := by
@@ -559,59 +555,4 @@ decreasing_by
rw [idxOf?_eq_map_finIdxOf?_val]
split <;> simp_all
private theorem insertIdx_loop_toArray (i : Nat) (l : List α) (j : Nat) (hj : j < l.toArray.size) (h : i ≤ j) :
insertIdx.loop i l.toArray ⟨j, hj⟩ = (l.take i ++ l[j] :: (l.take j).drop i ++ l.drop (j + 1)).toArray := by
rw [insertIdx.loop]
simp only [size_toArray] at hj
split <;> rename_i h'
· simp only at h'
have w : j - 1 + 1 = j := by omega
simp only [append_assoc, cons_append]
rw [insertIdx_loop_toArray _ _ _ _ (by omega)]
simp only [swap_toArray, w, append_assoc, cons_append, mk.injEq]
rw [take_set_of_le _ _ (by omega), drop_eq_getElem_cons (i := j) (by simpa), getElem_set_self,
drop_set_of_lt _ _ (by omega), drop_set_of_lt _ _ (by omega), getElem_set_ne (by omega),
getElem_set_self, take_set_of_le (j := j - 1) _ _ (by omega),
take_set_of_le (j := j - 1) _ _ (by omega), take_eq_append_getElem_of_pos (by omega) hj,
drop_append_of_le_length (by simp; omega)]
simp only [append_assoc, cons_append, nil_append, append_cancel_right_eq]
cases i with
| zero => simp
| succ i => rw [take_set_of_le _ _ (by omega)]
· simp only [Nat.not_lt] at h'
have : i = j := by omega
subst this
simp
@[simp] theorem insertIdx_toArray (l : List α) (i : Nat) (a : α) (h : i ≤ l.toArray.size):
l.toArray.insertIdx i a = (l.insertIdx i a).toArray := by
rw [Array.insertIdx]
rw [insertIdx_loop_toArray (h := h)]
ext j h₁ h₂
· simp at h
simp [length_insertIdx, h]
omega
· simp [length_insertIdx] at h₁ h₂
simp [getElem_insertIdx]
split <;> rename_i h₃
· rw [getElem_append_left (by simp; split at h₂ <;> omega)]
simp only [getElem_take]
rw [getElem_append_left]
· rw [getElem_append_right (by simp; omega)]
rw [getElem_cons]
simp
split <;> rename_i h₄
· rw [dif_pos (by omega)]
· rw [dif_neg (by omega)]
congr
omega
@[simp] theorem insertIdxIfInBounds_toArray (l : List α) (i : Nat) (a : α) :
l.toArray.insertIdxIfInBounds i a = (l.insertIdx i a).toArray := by
rw [Array.insertIdxIfInBounds]
split <;> rename_i h'
· simp
· simp only [size_toArray, Nat.not_le] at h'
rw [List.insertIdx_of_length_lt (h := h')]
end List

View File

@@ -6,21 +6,18 @@ Authors: Henrik Böving
prelude
import Init.Data.List.Basic
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
/--
Auxiliary definition for `List.toArray`.
`List.toArrayAux as r = r ++ as.toArray`
-/
@[inline_if_reduce]
def List.toArrayAux : List α → Array α → Array α
| nil, xs => xs
| cons a as, xs => toArrayAux as (xs.push a)
| nil, r => r
| cons a as, r => toArrayAux as (r.push a)
/-- Convert a `List α` into an `Array α`. This is O(n) in the length of the list. -/
-- This function is exported to C, where it is called by `Array.mk`
-- (the constructor) to implement this functionality.
@[inline, match_pattern, pp_nodot, export lean_list_to_array]
def List.toArrayImpl (xs : List α) : Array α :=
xs.toArrayAux (Array.mkEmpty xs.length)
def List.toArrayImpl (as : List α) : Array α :=
as.toArrayAux (Array.mkEmpty as.length)
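For reference (an illustrative aside, not part of the diff), `toArrayAux` pushes the list's elements left to right onto the accumulator, so `toArray` agrees with the array literal notation:

```lean
-- Array literals are themselves built from `List.toArray`, so this holds by `rfl`.
example : [1, 2, 3].toArray = #[1, 2, 3] := rfl
```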

View File

@@ -11,9 +11,6 @@ import Init.Data.Function
# Lemmas about `List.zip`, `List.zipWith`, `List.zipWithAll`, and `List.unzip`.
-/
-- set_option linter.listName true -- Enforce naming conventions for `List`/`Array`/`Vector` variables.
-- set_option linter.indexVariables true -- Enforce naming conventions for index variables.
namespace List
open Nat
@@ -23,7 +20,7 @@ open Nat
/-! ### zipWith -/
theorem zipWith_comm (f : α → β → γ) :
∀ (as : List α) (bs : List β), zipWith f as bs = zipWith (fun b a => f a b) bs as
∀ (la : List α) (lb : List β), zipWith f la lb = zipWith (fun b a => f a b) lb la
| [], _ => List.zipWith_nil_right.symm
| _ :: _, [] => rfl
| _ :: as, _ :: bs => congrArg _ (zipWith_comm f as bs)
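As a concrete reminder alongside `zipWith_comm` above (an illustrative aside, not part of the diff), `zipWith` combines elements pointwise and truncates to the shorter input:

```lean
-- The trailing `30` in the longer list is discarded.
example : List.zipWith (· + ·) [1, 2] [10, 20, 30] = [11, 22] := rfl
```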
@@ -60,7 +57,7 @@ theorem getElem?_zipWith' {f : α → β → γ} {i : Nat} :
(zipWith f l₁ l₂)[i]? = (l₁[i]?.map f).bind fun g => l₂[i]?.map g := by
induction l₁ generalizing l₂ i with
| nil => rw [zipWith] <;> simp
| cons _ _ =>
| cons head tail =>
cases l₂
· simp
· cases i <;> simp_all
@@ -125,25 +122,25 @@ theorem map_zipWith {δ : Type _} (f : α → β) (g : γ → δ → α) (l : Li
· simp
· simp [hl]
theorem take_zipWith : (zipWith f l l').take i = zipWith f (l.take i) (l'.take i) := by
induction l generalizing l' i with
theorem take_zipWith : (zipWith f l l').take n = zipWith f (l.take n) (l'.take n) := by
induction l generalizing l' n with
| nil => simp
| cons hd tl hl =>
cases l'
· simp
· cases i
· cases n
· simp
· simp [hl]
@[deprecated take_zipWith (since := "2024-07-26")] abbrev zipWith_distrib_take := @take_zipWith
theorem drop_zipWith : (zipWith f l l').drop i = zipWith f (l.drop i) (l'.drop i) := by
induction l generalizing l' i with
theorem drop_zipWith : (zipWith f l l').drop n = zipWith f (l.drop n) (l'.drop n) := by
induction l generalizing l' n with
| nil => simp
| cons hd tl hl =>
· cases l'
· simp
· cases i
· cases n
· simp
· simp [hl]
@@ -155,17 +152,17 @@ theorem tail_zipWith : (zipWith f l l').tail = zipWith f l.tail l'.tail := by
@[deprecated tail_zipWith (since := "2024-07-28")] abbrev zipWith_distrib_tail := @tail_zipWith
theorem zipWith_append (f : α → β → γ) (l l₁' : List α) (l' l₂' : List β)
(h : l.length = l'.length) :
zipWith f (l ++ l₁') (l' ++ l₂') = zipWith f l l' ++ zipWith f l₁' l₂' := by
induction l generalizing l' with
theorem zipWith_append (f : α → β → γ) (l la : List α) (l' lb : List β)
(h : l.length = l'.length) :
zipWith f (l ++ la) (l' ++ lb) = zipWith f l l' ++ zipWith f la lb := by
induction l generalizing l' with
| nil =>
have : l' = [] := eq_nil_of_length_eq_zero (by simpa using h.symm)
have : l' = [] := eq_nil_of_length_eq_zero (by simpa using h.symm)
simp [this]
| cons hl tl ih =>
cases l' with
cases l' with
| nil => simp at h
| cons _ _ =>
| cons head tail =>
simp only [length_cons, Nat.succ.injEq] at h
simp [ih _ h]
@@ -202,7 +199,7 @@ theorem zipWith_eq_append_iff {f : α → β → γ} {l₁ : List α} {l₂ : Li
· simp only [zipWith_nil_right, nil_eq, append_eq_nil_iff, exists_and_left, and_imp]
rintro rfl rfl
exact ⟨[], x₁ :: l₁, [], by simp⟩
· rintro ⟨_, _, _, _, h₁, _, h₃, rfl, rfl⟩
· rintro ⟨w, x, y, z, h₁, _, h₃, rfl, rfl⟩
simp only [nil_eq, append_eq_nil_iff] at h₃
obtain ⟨rfl, rfl⟩ := h₃
simp
@@ -210,21 +207,21 @@ theorem zipWith_eq_append_iff {f : α → β → γ} {l₁ : List α} {l₂ : Li
simp only [zipWith_cons_cons]
rw [cons_eq_append_iff]
constructor
· rintro (⟨rfl, rfl⟩ | ⟨_, rfl, h⟩)
· rintro (⟨rfl, rfl⟩ | ⟨l₁'', rfl, h⟩)
· exact ⟨[], x₁ :: l₁, [], x₂ :: l₂, by simp⟩
· rw [ih₁] at h
obtain ⟨ws, xs, ys, zs, h, rfl, rfl, h', rfl⟩ := h
refine ⟨x₁ :: ws, xs, x₂ :: ys, zs, by simp [h, h']⟩
· rintro ⟨_, _, _, _, h₁, h₂, h₃, rfl, rfl⟩
obtain ⟨w, x, y, z, h, rfl, rfl, h', rfl⟩ := h
refine ⟨x₁ :: w, x, x₂ :: y, z, by simp [h, h']⟩
· rintro ⟨w, x, y, z, h₁, h₂, h₃, rfl, rfl⟩
rw [cons_eq_append_iff] at h₂
rw [cons_eq_append_iff] at h₃
obtain (⟨rfl, rfl⟩ | ⟨_, rfl, rfl⟩) := h₂
obtain (⟨rfl, rfl⟩ | ⟨w', rfl, rfl⟩) := h₂
· simp only [zipWith_nil_left, true_and, nil_eq, reduceCtorEq, false_and, exists_const,
or_false]
obtain (⟨rfl, rfl⟩ | ⟨_, rfl, rfl⟩) := h₃
obtain (⟨rfl, rfl⟩ | ⟨y', rfl, rfl⟩) := h₃
· simp
· simp_all
· obtain (⟨rfl, rfl⟩ | ⟨_, rfl, rfl⟩) := h₃
· obtain (⟨rfl, rfl⟩ | ⟨y', rfl, rfl⟩) := h₃
· simp_all
· simp_all [zipWith_append, Nat.succ_inj']
@@ -277,9 +274,9 @@ theorem zip_map_right (f : β → γ) (l₁ : List α) (l₂ : List β) :
theorem zip_append :
∀ {l₁ r₁ : List α} {l₂ r₂ : List β} (_h : length l₁ = length l₂),
zip (l₁ ++ r₁) (l₂ ++ r₂) = zip l₁ l₂ ++ zip r₁ r₂
| [], _, _, _, h => by simp only [eq_nil_of_length_eq_zero h.symm]; rfl
| _, _, [], _, h => by simp only [eq_nil_of_length_eq_zero h]; rfl
| _ :: _, _, _ :: _, _, h => by
| [], r₁, l₂, r₂, h => by simp only [eq_nil_of_length_eq_zero h.symm]; rfl
| l₁, r₁, [], r₂, h => by simp only [eq_nil_of_length_eq_zero h]; rfl
| a :: l₁, r₁, b :: l₂, r₂, h => by
simp only [cons_append, zip_cons_cons, zip_append (Nat.succ.inj h)]
theorem zip_map' (f : α → β) (g : α → γ) :
@@ -451,9 +448,9 @@ theorem unzip_zip {l₁ : List α} {l₂ : List β} (h : length l₁ = length l
· rw [unzip_zip_left (Nat.le_of_eq h)]
· rw [unzip_zip_right (Nat.le_of_eq h.symm)]
theorem zip_of_prod {l : List α} {l' : List β} {xs : List (α × β)} (hl : xs.map Prod.fst = l)
(hr : xs.map Prod.snd = l') : xs = l.zip l' := by
rw [← hl, ← hr, ← zip_unzip xs, ← unzip_fst, ← unzip_snd, zip_unzip, zip_unzip]
theorem zip_of_prod {l : List α} {l' : List β} {lp : List (α × β)} (hl : lp.map Prod.fst = l)
(hr : lp.map Prod.snd = l') : lp = l.zip l' := by
rw [← hl, ← hr, ← zip_unzip lp, ← unzip_fst, ← unzip_snd, zip_unzip, zip_unzip]
theorem tail_zip_fst {l : List (α × β)} : l.unzip.1.tail = l.tail.unzip.1 := by
simp

View File

@@ -780,19 +780,16 @@ protected theorem max_def {n m : Nat} : max n m = if n ≤ m then m else n := rf
/-! # Auxiliary theorems for well-founded recursion -/
protected theorem ne_zero_of_lt (h : b < a) : a ≠ 0 := by
theorem not_eq_zero_of_lt (h : b < a) : a ≠ 0 := by
cases a
exact absurd h (Nat.not_lt_zero _)
apply Nat.noConfusion
@[deprecated Nat.ne_zero_of_lt (since := "2025-02-06")]
theorem not_eq_zero_of_lt (h : b < a) : a ≠ 0 := Nat.ne_zero_of_lt h
theorem pred_lt_of_lt {n m : Nat} (h : m < n) : pred n < n :=
pred_lt (Nat.ne_zero_of_lt h)
pred_lt (not_eq_zero_of_lt h)
theorem sub_one_lt_of_lt {n m : Nat} (h : m < n) : n - 1 < n :=
sub_one_lt (Nat.ne_zero_of_lt h)
sub_one_lt (not_eq_zero_of_lt h)
/-! # pred theorems -/
@@ -857,7 +854,7 @@ theorem zero_lt_sub_of_lt (h : i < a) : 0 < a - i := by
theorem sub_succ_lt_self (a i : Nat) (h : i < a) : a - (i + 1) < a - i := by
rw [Nat.add_succ, Nat.sub_succ]
apply Nat.pred_lt
apply Nat.ne_zero_of_lt
apply Nat.not_eq_zero_of_lt
apply Nat.zero_lt_sub_of_lt
assumption

View File

@@ -637,8 +637,8 @@ theorem PolyCnstr.eq_false_of_isUnsat (ctx : Context) {c : PolyCnstr} : c.isUnsa
simp [isUnsat]
by_cases he : eq = true <;> simp [he, denote, Poly.denote_eq, Poly.denote_le, -and_imp]
· intro
| Or.inl h₁, h₂ => simp [Poly.of_isZero, h₁]; have := Nat.ne_zero_of_lt (Poly.of_isNonZero ctx h₂); simp [this.symm]
| Or.inr h₁, h₂ => simp [Poly.of_isZero, h₂]; have := Nat.ne_zero_of_lt (Poly.of_isNonZero ctx h₁); simp [this]
| Or.inl h₁, h₂ => simp [Poly.of_isZero, h₁]; have := Nat.not_eq_zero_of_lt (Poly.of_isNonZero ctx h₂); simp [this.symm]
| Or.inr h₁, h₂ => simp [Poly.of_isZero, h₂]; have := Nat.not_eq_zero_of_lt (Poly.of_isNonZero ctx h₁); simp [this]
· intro ⟨h₁, h₂⟩
simp [Poly.of_isZero, h₂]
exact Poly.of_isNonZero ctx h₁

View File

@@ -15,4 +15,3 @@ import Init.Data.Vector.OfFn
import Init.Data.Vector.Range
import Init.Data.Vector.Erase
import Init.Data.Vector.Monadic
import Init.Data.Vector.InsertIdx

View File

@@ -7,7 +7,6 @@ Authors: Shreyas Srinivas, François G. Dorais, Kim Morrison
prelude
import Init.Data.Array.Lemmas
import Init.Data.Array.MapIdx
import Init.Data.Array.InsertIdx
import Init.Data.Range
import Init.Data.Stream
@@ -356,19 +355,6 @@ instance [BEq α] : BEq (Vector α n) where
have : Inhabited (Vector α (n-1)) := ⟨v.pop⟩
panic! "index out of bounds"
/-- Insert an element into a vector using a `Nat` index and a tactic provided proof. -/
@[inline] def insertIdx (v : Vector α n) (i : Nat) (x : α) (h : i ≤ n := by get_elem_tactic) :
Vector α (n+1) :=
⟨v.toArray.insertIdx i x (v.size_toArray.symm ▸ h), by simp [Array.size_insertIdx]⟩
/-- Insert an element into a vector using a `Nat` index. Panics if the index is out of bounds. -/
@[inline] def insertIdx! (v : Vector α n) (i : Nat) (x : α) : Vector α (n+1) :=
if _ : i ≤ n then
v.insertIdx i x
else
have : Inhabited (Vector α (n+1)) := ⟨v.push x⟩
panic! "index out of bounds"
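The two definitions above fix the length at the type level: inserting into a `Vector α n` yields a `Vector α (n+1)`. A minimal sketch of how the API was meant to be used (hypothetical usage; only the signatures shown in this hunk are assumed):

```lean
-- Hypothetical usage of the `insertIdx` API in this hunk:
-- inserting at a valid index `i ≤ n` produces a vector one element longer.
example (v : Vector Nat 3) (h : 2 ≤ 3) : Vector Nat 4 :=
  v.insertIdx 2 0 h
```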
/-- Delete the first element of a vector. Returns the empty vector if the input vector is empty. -/
@[inline] def tail (v : Vector α n) : Vector α (n-1) :=
if _ : 0 < n then

View File

@@ -1,126 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Vector.Lemmas
import Init.Data.Array.InsertIdx
/-!
# insertIdx
Proves various lemmas about `Vector.insertIdx`.
-/
open Function
open Nat
namespace Vector
universe u
variable {α : Type u}
section InsertIdx
variable {a : α}
@[simp]
theorem insertIdx_zero (s : Vector α n) (x : α) : s.insertIdx 0 x = (#v[x] ++ s).cast (by omega) := by
cases s
simp
theorem eraseIdx_insertIdx (i : Nat) (l : Vector α n) (h : i ≤ n) :
(l.insertIdx i a).eraseIdx i = l := by
rcases l with ⟨l, rfl⟩
simp_all [Array.eraseIdx_insertIdx]
theorem insertIdx_eraseIdx_of_ge {as : Vector α n}
(w₁ : i < n) (w₂ : j ≤ n - 1) (h : i ≤ j) :
(as.eraseIdx i).insertIdx j a =
((as.insertIdx (j + 1) a).eraseIdx i).cast (by omega) := by
rcases as with ⟨as, rfl⟩
simpa using Array.insertIdx_eraseIdx_of_ge (by simpa) (by simpa) (by simpa)
theorem insertIdx_eraseIdx_of_le {as : Vector α n}
(w₁ : i < n) (w₂ : j ≤ n - 1) (h : j ≤ i) :
(as.eraseIdx i).insertIdx j a =
((as.insertIdx j a).eraseIdx (i + 1)).cast (by omega) := by
rcases as with ⟨as, rfl⟩
simpa using Array.insertIdx_eraseIdx_of_le (by simpa) (by simpa) (by simpa)
theorem insertIdx_comm (a b : α) (i j : Nat) (l : Vector α n) (_ : i ≤ j) (_ : j ≤ n) :
(l.insertIdx i a).insertIdx (j + 1) b =
(l.insertIdx j b).insertIdx i a := by
rcases l with ⟨l, rfl⟩
simpa using Array.insertIdx_comm a b i j _ (by simpa) (by simpa)
theorem mem_insertIdx {l : Vector α n} {h : i ≤ n} : a ∈ l.insertIdx i b h ↔ a = b ∨ a ∈ l := by
rcases l with ⟨l, rfl⟩
simpa using Array.mem_insertIdx
@[simp]
theorem insertIdx_size_self (l : Vector α n) (x : α) : l.insertIdx n x = l.push x := by
rcases l with ⟨l, rfl⟩
simp
theorem getElem_insertIdx {as : Vector α n} {x : α} {i k : Nat} (w : i ≤ n) (h : k < n + 1) :
(as.insertIdx i x)[k] =
if h₁ : k < i then
as[k]
else
if h₂ : k = i then
x
else
as[k-1] := by
rcases as with ⟨as, rfl⟩
simp [Array.getElem_insertIdx, w]
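The dependent `if` chain above splits on the position `k` relative to the insertion index `i`. On a concrete two-element vector the three cases read as follows (a sketch; the `#v[...]` vector-literal notation is assumed):

```lean
-- Illustration of the case split in `getElem_insertIdx`, for `v = #v[a, b]`
-- and insertion of `x` at index 1:
--   (v.insertIdx 1 x)[0] = a   -- k < i: element unchanged
--   (v.insertIdx 1 x)[1] = x   -- k = i: the inserted element
--   (v.insertIdx 1 x)[2] = b   -- k > i: shifted, reads v[k - 1]
```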
theorem getElem_insertIdx_of_lt {as : Vector α n} {x : α} {i k : Nat} (w : i ≤ n) (h : k < i) :
(as.insertIdx i x)[k] = as[k] := by
rcases as with ⟨as, rfl⟩
simp [Array.getElem_insertIdx, w, h]
theorem getElem_insertIdx_self {as : Vector α n} {x : α} {i : Nat} (w : i ≤ n) :
(as.insertIdx i x)[i] = x := by
rcases as with ⟨as, rfl⟩
simp [Array.getElem_insertIdx, w]
theorem getElem_insertIdx_of_gt {as : Vector α n} {x : α} {i k : Nat} (w : k ≤ n) (h : k > i) :
(as.insertIdx i x)[k] = as[k - 1] := by
rcases as with ⟨as, rfl⟩
simp [Array.getElem_insertIdx, w, h]
rw [dif_neg (by omega), dif_neg (by omega)]
theorem getElem?_insertIdx {l : Vector α n} {x : α} {i k : Nat} (h : i ≤ n) :
(l.insertIdx i x)[k]? =
if k < i then
l[k]?
else
if k = i then
if k ≤ l.size then some x else none
else
l[k-1]? := by
rcases l with ⟨l, rfl⟩
simp [Array.getElem?_insertIdx, h]
theorem getElem?_insertIdx_of_lt {l : Vector α n} {x : α} {i k : Nat} (w : i ≤ n) (h : k < i) :
(l.insertIdx i x)[k]? = l[k]? := by
rcases l with ⟨l, rfl⟩
rw [getElem?_insertIdx, if_pos h]
theorem getElem?_insertIdx_self {l : Vector α n} {x : α} {i : Nat} (w : i ≤ n) :
(l.insertIdx i x)[i]? = some x := by
rcases l with ⟨l, rfl⟩
rw [getElem?_insertIdx, if_neg (by omega), if_pos rfl, if_pos w]
theorem getElem?_insertIdx_of_ge {l : Vector α n} {x : α} {i k : Nat} (w : i < k) (h : k ≤ n) :
(l.insertIdx i x)[k]? = l[k - 1]? := by
rcases l with ⟨l, rfl⟩
rw [getElem?_insertIdx, if_neg (by omega), if_neg (by omega)]
end InsertIdx
end Vector

View File

@@ -95,13 +95,6 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
(Vector.mk a h).eraseIdx! i = Vector.mk (a.eraseIdx i) (by simp [h, hi]) := by
simp [Vector.eraseIdx!, hi]
@[simp] theorem insertIdx_mk (a : Array α) (h : a.size = n) (i x) (h') :
(Vector.mk a h).insertIdx i x h' = Vector.mk (a.insertIdx i x) (by simp [h, h']) := rfl
@[simp] theorem insertIdx!_mk (a : Array α) (h : a.size = n) (i x) (hi : i ≤ n) :
(Vector.mk a h).insertIdx! i x = Vector.mk (a.insertIdx i x) (by simp [h, hi]) := by
simp [Vector.insertIdx!, hi]
@[simp] theorem cast_mk (a : Array α) (h : a.size = n) (h' : n = m) :
(Vector.mk a h).cast h' = Vector.mk a (by simp [h, h']) := rfl
@@ -279,13 +272,6 @@ abbrev zipWithIndex_mk := @zipIdx_mk
(a.eraseIdx! i).toArray = a.toArray.eraseIdx! i := by
cases a; simp_all [Array.eraseIdx!]
@[simp] theorem toArray_insertIdx (a : Vector α n) (i x) (h) :
(a.insertIdx i x h).toArray = a.toArray.insertIdx i x (by simp [h]) := rfl
@[simp] theorem toArray_insertIdx! (a : Vector α n) (i x) (hi : i ≤ n) :
(a.insertIdx! i x).toArray = a.toArray.insertIdx! i x := by
cases a; simp_all [Array.insertIdx!]
@[simp] theorem toArray_cast (a : Vector α n) (h : n = m) :
(a.cast h).toArray = a.toArray := rfl
@@ -508,13 +494,6 @@ theorem toList_eraseIdx (a : Vector α n) (i) (h) :
(a.eraseIdx! i).toList = a.toList.eraseIdx i := by
cases a; simp_all [Array.eraseIdx!]
theorem toList_insertIdx (a : Vector α n) (i x) (h) :
(a.insertIdx i x h).toList = a.toList.insertIdx i x := by simp
theorem toList_insertIdx! (a : Vector α n) (i x) (hi : i ≤ n) :
(a.insertIdx! i x).toList = a.toList.insertIdx i x := by
cases a; simp_all [Array.insertIdx!]
theorem toList_cast (a : Vector α n) (h : n = m) :
(a.cast h).toList = a.toList := rfl

View File

@@ -363,25 +363,4 @@ theorem mapIdx_eq_mkVector_iff {l : Vector α n} {f : Nat → α → β} {b : β
rcases l with ⟨l, rfl⟩
simp [Array.mapIdx_reverse]
theorem toArray_mapFinIdxM [Monad m] [LawfulMonad m]
(a : Vector α n) (f : (i : Nat) → α → (h : i < n) → m β) :
toArray <$> a.mapFinIdxM f = a.toArray.mapFinIdxM
(fun i x h => f i x (size_toArray a ▸ h)) := by
let rec go (i j : Nat) (inv : i + j = n) (bs : Vector β (n - i)) :
toArray <$> mapFinIdxM.map a f i j inv bs
= Array.mapFinIdxM.map a.toArray (fun i x h => f i x (size_toArray a ▸ h))
i j (size_toArray _ ▸ inv) bs.toArray := by
match i with
| 0 => simp only [mapFinIdxM.map, map_pure, Array.mapFinIdxM.map, Nat.sub_zero]
| k + 1 =>
simp only [mapFinIdxM.map, map_bind, Array.mapFinIdxM.map, getElem_toArray]
conv => lhs; arg 2; intro; rw [go]
rfl
simp only [mapFinIdxM, Array.mapFinIdxM, size_toArray]
exact go _ _ _ _
theorem toArray_mapIdxM [Monad m] [LawfulMonad m] (a : Vector α n) (f : Nat → α → m β) :
toArray <$> a.mapIdxM f = a.toArray.mapIdxM f := by
exact toArray_mapFinIdxM _ _
end Vector

View File

@@ -19,10 +19,11 @@ open Nat
/-! ## Monadic operations -/
@[simp] theorem map_toArray_inj [Monad m] [LawfulMonad m]
{v₁ : m (Vector α n)} {v₂ : m (Vector α n)} :
toArray <$> v₁ = toArray <$> v₂ ↔ v₁ = v₂ :=
_root_.map_inj_right (by simp)
theorem map_toArray_inj [Monad m] [LawfulMonad m] [Nonempty α]
{v₁ : m (Vector α n)} {v₂ : m (Vector α n)} (w : toArray <$> v₁ = toArray <$> v₂) :
v₁ = v₂ := by
apply map_inj_of_inj ?_ w
simp
/-! ### mapM -/
@@ -38,10 +39,11 @@ open Nat
unfold mapM.go
simp
@[simp] theorem mapM_append [Monad m] [LawfulMonad m]
-- The `[Nonempty β]` hypothesis should be avoidable by unfolding `mapM` directly.
@[simp] theorem mapM_append [Monad m] [LawfulMonad m] [Nonempty β]
(f : α → m β) {l₁ : Vector α n} {l₂ : Vector α n'} :
(l₁ ++ l₂).mapM f = (return (← l₁.mapM f) ++ (← l₂.mapM f)) := by
apply map_toArray_inj.mp
apply map_toArray_inj
suffices toArray <$> (l₁ ++ l₂).mapM f = (return (← toArray <$> l₁.mapM f) ++ (← toArray <$> l₂.mapM f)) by
rw [this]
simp only [bind_pure_comp, Functor.map_map, bind_map_left, map_bind, toArray_append]

View File

@@ -101,9 +101,9 @@ theorem Bool.false_of_not_eq_self {a : Bool} (h : (!a) = a) : False := by
by_cases a <;> simp_all
/- The following two helper theorems are used to case-split `a = b` representing `iff`. -/
theorem of_eq_eq_true {a b : Prop} (h : (a = b) = True) : (a → b) ∧ (¬ a → ¬ b) := by
theorem of_eq_eq_true {a b : Prop} (h : (a = b) = True) : (¬a ∨ b) ∧ (¬b ∨ a) := by
by_cases a <;> by_cases b <;> simp_all
theorem of_eq_eq_false {a b : Prop} (h : (a = b) = False) : (a → ¬b) ∧ (¬ a → b) := by
theorem of_eq_eq_false {a b : Prop} (h : (a = b) = False) : (¬a ∨ ¬b) ∧ (b ∨ a) := by
by_cases a <;> by_cases b <;> simp_all
/-! Forall -/

View File

@@ -67,8 +67,6 @@ structure Config where
ext : Bool := true
/-- If `verbose` is `false`, additional diagnostics information is not collected. -/
verbose : Bool := true
/-- If `clean` is `true`, `grind` uses `expose_names` and only generates accessible names. -/
clean : Bool := true
deriving Inhabited, BEq
end Lean.Grind

View File

@@ -749,14 +749,6 @@ The top-level command elaborator only runs the linters if `#guard_msgs` is not p
syntax (name := guardMsgsCmd)
(docComment)? "#guard_msgs" (ppSpace guardMsgsSpec)? " in" ppLine command : command
/--
Format and print the info trees for a given command.
This is mostly useful for debugging info trees.
-/
syntax (name := infoTreesCmd)
"#info_trees" " in" ppLine command : command
namespace Parser
/--

View File

@@ -2503,9 +2503,6 @@ instance (p₁ p₂ : String.Pos) : Decidable (LE.le p₁ p₂) :=
instance (p₁ p₂ : String.Pos) : Decidable (LT.lt p₁ p₂) :=
inferInstanceAs (Decidable (LT.lt p₁.byteIdx p₂.byteIdx))
instance : Min String.Pos := minOfLe
instance : Max String.Pos := maxOfLe
/-- A `String.Pos` pointing at the end of this string. -/
@[inline] def String.endPos (s : String) : String.Pos where
byteIdx := utf8ByteSize s
@@ -2612,13 +2609,7 @@ structure Array (α : Type u) where
attribute [extern "lean_array_to_list"] Array.toList
attribute [extern "lean_array_mk"] Array.mk
/--
Converts a `List α` into an `Array α`. (This is preferred over the synonym `Array.mk`.)
At runtime, this constructor is implemented by `List.toArrayImpl` and is O(n) in the length of the
list.
-/
@[match_pattern]
@[inherit_doc Array.mk, match_pattern]
abbrev List.toArray (xs : List α) : Array α := .mk xs
/-- Construct a new empty array with initial capacity `c`. -/

View File

@@ -238,12 +238,7 @@ protected def TaskState.toString : TaskState → String
instance : ToString TaskState := TaskState.toString
/--
Returns current state of the `Task` in the Lean runtime's task manager.
Note that for tasks derived from `Promise`s, `waiting` and `running` should be considered
equivalent.
-/
/-- Returns current state of the `Task` in the Lean runtime's task manager. -/
@[extern "lean_io_get_task_state"] opaque getTaskState : @& Task α → BaseIO TaskState
/-- Check if the task has finished execution, at which point calling `Task.get` will return immediately. -/

View File

@@ -21,13 +21,14 @@ private structure PromiseImpl (α : Type) : Type where
Typical usage is as follows:
1. `let promise ← Promise.new` creates a promise
2. `promise.result? : Task (Option α)` can now be passed around
3. `promise.result?.get` blocks until the promise is resolved
2. `promise.result : Task α` can now be passed around
3. `promise.result.get` blocks until the promise is resolved
4. `promise.resolve a` resolves the promise
5. `promise.result?.get` now returns `some a`
5. `promise.result.get` now returns `a`
If the promise is dropped without ever being resolved, `promise.result?.get` will return `none`.
See `Promise.result!/resultD` for other ways to handle this case.
Every promise must eventually be resolved.
Otherwise the memory used for the promise will be leaked,
and any tasks depending on the promise's result will wait forever.
-/
def Promise (α : Type) : Type := PromiseImpl α
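The numbered workflow in the docstring can be sketched as follows (this assumes the `IO.Promise` namespace and the `result?` variant described in the longer docstring in this diff; the other side of the comparison uses `result` instead):

```lean
-- Sketch of the promise workflow described in the docstring above.
def promiseDemo : BaseIO (Option Nat) := do
  let promise ← IO.Promise.new   -- step 1: create
  promise.resolve 42             -- step 4: resolve
  return promise.result?.get     -- step 5: now `some 42`
```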
@@ -46,32 +47,12 @@ Only the first call to this function has an effect.
@[extern "lean_io_promise_resolve"]
opaque Promise.resolve (value : α) (promise : @& Promise α) : BaseIO Unit
/--
Like `Promise.result`, but resolves to `none` if the promise is dropped without ever being resolved.
-/
@[extern "lean_io_promise_result_opt"]
opaque Promise.result? (promise : @& Promise α) : Task (Option α)
-- SU: not planning to make this public without a lot more thought and motivation
@[extern "lean_option_get_or_block"]
private opaque Option.getOrBlock! [Nonempty α] : Option α → α
/--
The result task of a `Promise`.
The task blocks until `Promise.resolve` is called. If the promise is dropped without ever being
resolved, evaluating the task will panic and, when not using fatal panics, block forever. Use
`Promise.result?` to handle this case explicitly.
The task blocks until `Promise.resolve` is called.
-/
def Promise.result! (promise : @& Promise α) : Task α :=
let _ : Nonempty α := promise.h
promise.result?.map (sync := true) Option.getOrBlock!
@[inherit_doc Promise.result!, deprecated Promise.result! (since := "2025-02-05")]
def Promise.result := @Promise.result!
/--
Like `Promise.result`, but resolves to `dflt` if the promise is dropped without ever being resolved.
-/
def Promise.resultD (promise : Promise α) (dflt : α) : Task α :=
promise.result?.map (sync := true) (·.getD dflt)
@[extern "lean_io_promise_result"]
opaque Promise.result (promise : Promise α) : Task α :=
have : Nonempty α := promise.h
Classical.choice inferInstance

View File

@@ -1603,68 +1603,11 @@ using `show_term`.
macro (name := by?) tk:"by?" t:tacticSeq : term => `(show_term%$tk by%$tk $t)
/--
`expose_names` renames all inaccessible variables with accessible names, making them available
for reference in generated tactics. However, this renaming introduces machine-generated names
that are not fully under user control. `expose_names` is primarily intended as a preamble for
auto-generated end-game tactic scripts. It is also useful as an alternative to
`set_option tactic.hygienic false`. If explicit control over renaming is needed in the
middle of a tactic script, consider using structured tactic scripts with
`match .. with`, `induction .. with`, or `intro` with explicit user-defined names,
as well as tactics such as `next`, `case`, and `rename_i`.
`expose_names` creates a new goal whose local context has been "exposed" so that every local declaration has a clear,
accessible name. If no local declarations require renaming, the original goal is returned unchanged.
-/
syntax (name := exposeNames) "expose_names" : tactic
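A minimal sketch of the tactic in action (hypothetical goal; only the `expose_names` syntax declared above is assumed):

```lean
-- `intro _` introduces inaccessible hypotheses; `expose_names` renames them
-- so that generated tactic scripts can refer to them.
example : ∀ (n : Nat), n = n → True := by
  intro _ _
  expose_names
  trivial
```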
/--
Close fixed-width `BitVec` and `Bool` goals by obtaining a proof from an external SAT solver and
verifying it inside Lean. The solvable goals are currently limited to
- the Lean equivalent of [`QF_BV`](https://smt-lib.org/logics-all.shtml#QF_BV)
- automatically splitting up `structure`s that contain information about `BitVec` or `Bool`
```lean
example : ∀ (a b : BitVec 64), (a &&& b) + (a ^^^ b) = a ||| b := by
intros
bv_decide
```
If `bv_decide` encounters an unknown definition it will be treated like an unconstrained `BitVec`
variable. Sometimes this enables solving goals despite not understanding the definition because
the precise properties of the definition do not matter in the specific proof.
If `bv_decide` fails to close a goal it provides a counter-example, containing assignments for all
terms that were considered as variables.
In order to avoid calling a SAT solver every time, the proof can be cached with `bv_decide?`.
If solving your problem relies inherently on using associativity or commutativity, consider enabling
the `bv.ac_nf` option.
Note: `bv_decide` uses `ofReduceBool` and thus trusts the correctness of the code generator.
Note: include `import Std.Tactic.BVDecide`
-/
macro (name := bvDecideMacro) (priority:=low) "bv_decide" optConfig : tactic =>
Macro.throwError "to use `bv_decide`, please include `import Std.Tactic.BVDecide`"
/--
Suggest a proof script for a `bv_decide` tactic call. Useful for caching LRAT proofs.
Note: include `import Std.Tactic.BVDecide`
-/
macro (name := bvTraceMacro) (priority:=low) "bv_decide?" optConfig : tactic =>
Macro.throwError "to use `bv_decide?`, please include `import Std.Tactic.BVDecide`"
/--
Run the normalization procedure of `bv_decide` only. Sometimes this is enough to solve basic
`BitVec` goals already.
Note: include `import Std.Tactic.BVDecide`
-/
macro (name := bvNormalizeMacro) (priority:=low) "bv_normalize" optConfig : tactic =>
Macro.throwError "to use `bv_normalize`, please include `import Std.Tactic.BVDecide`"
end Tactic
namespace Attr
@@ -1719,14 +1662,6 @@ If there are several with the same priority, it is uses the "most recent one". E
-/
syntax (name := simp) "simp" (Tactic.simpPre <|> Tactic.simpPost)? patternIgnore("" <|> "<- ")? (ppSpace prio)? : attr
/--
Theorems tagged with the `wf_preprocess` attribute are used during the processing of functions defined
by well-founded recursion. They are applied to the function's body to add additional hypotheses,
such as replacing `if c then _ else _` with `if h : c then _ else _` or `xs.map` with
`xs.attach.map`. Also see `wfParam`.
-/
syntax (name := wf_preprocess) "wf_preprocess" (Tactic.simpPre <|> Tactic.simpPost)? patternIgnore("" <|> "<- ")? (ppSpace prio)? : attr
/-- The possible `norm_cast` kinds: `elim`, `move`, or `squash`. -/
syntax normCastLabel := &"elim" <|> &"move" <|> &"squash"

View File

@@ -15,29 +15,10 @@ structure Config where
main := true
/-- If `name` is `true`, all functions in the same namespace are considered for function induction, unfolding, etc. -/
name := true
/-- If `lib` is `true`, uses `libSearch` results. -/
lib := true
/-- If `targetOnly` is `true`, `try?` collects information using the goal target only. -/
targetOnly := false
/-- Maximum number of suggestions. -/
max := 8
/-- If `missing` is `true`, allows the construction of partial solutions where some of the subgoals are `sorry`. -/
missing := false
/-- If `only` is `true`, generates solutions using `grind only` and `simp only`. -/
only := true
/-- If `harder` is `true`, more expensive tactics and operations are tried. -/
harder := false
/--
If `merge` is `true`, it tries to compress suggestions such as
```
induction a
· grind only [= f]
· grind only [→ g]
```
as
```
induction a <;> grind only [= f, → g]
```
-/
merge := true
deriving Inhabited
end Lean.Try
@@ -46,10 +27,4 @@ namespace Lean.Parser.Tactic
syntax (name := tryTrace) "try?" optConfig : tactic
/-- Helper internal tactic for implementing the tactic `try?`. -/
syntax (name := attemptAll) "attempt_all " withPosition((ppDedent(ppLine) colGe "| " tacticSeq)+) : tactic
/-- Helper internal tactic used to implement `evalSuggest` in `try?` -/
syntax (name := tryResult) "try_suggestions " tactic* : tactic
end Lean.Parser.Tactic

View File

@@ -77,9 +77,12 @@ def addDecl (decl : Declaration) : CoreM Unit := do
async.commitConst async.asyncEnv (some info)
setEnv async.mainEnv
let checkAct ← Core.wrapAsyncAsSnapshot fun _ => do
setEnv async.asyncEnv
doAdd
async.commitCheckEnv (← getEnv)
try
setEnv async.asyncEnv
doAdd
async.commitCheckEnv (← getEnv)
finally
async.commitFailure
let t ← BaseIO.mapTask (fun _ => checkAct) env.checked
let endRange? := (← getRef).getTailPos?.map fun pos => ⟨pos, pos⟩
Core.logSnapshotTask { range? := endRange?, task := t }

View File

@@ -208,7 +208,7 @@ Specialize `decl` using
- `levelParamsNew`: the universe level parameters for the new declaration.
-/
def mkSpecDecl (decl : Decl) (us : List Level) (argMask : Array (Option Arg)) (params : Array Param) (decls : Array CodeDecl) (levelParamsNew : List Name) : SpecializeM Decl := do
let nameNew := decl.name.appendCore `_at_ |>.appendCore (← read).declName |>.appendCore `spec |>.appendIndexAfter (← get).decls.size
let nameNew := decl.name ++ `_at_ ++ (← read).declName.eraseMacroScopes ++ (`spec).appendIndexAfter (← get).decls.size
/-
Recall that we have just retrieved `decl` using `getDecl?`, and it may have free variable identifiers that overlap with the free-variables
in `params` and `decls` (i.e., the "closure").

View File

@@ -302,6 +302,8 @@ def mkFreshUserName (n : Name) : CoreM Name :=
protected def withIncRecDepth [Monad m] [MonadControlT CoreM m] (x : m α) : m α :=
controlAt CoreM fun runInBase => withIncRecDepth (runInBase x)
builtin_initialize interruptExceptionId : InternalExceptionId ← registerInternalExceptionId `interrupt
/--
Throws an internal interrupt exception if cancellation has been requested. The exception is not
caught by `try catch` but is intended to be caught by `Command.withLoggingExceptions` at the top
@@ -311,7 +313,7 @@ the exception has been thrown.
@[inline] def checkInterrupted : CoreM Unit := do
if let some tk := (← read).cancelTk? then
if (← tk.isSet) then
throwInterruptException
throw <| .internal interruptExceptionId
register_builtin_option debug.moduleNameAtTimeout : Bool := {
defValue := true
@@ -430,8 +432,7 @@ def wrapAsyncAsSnapshot (act : Unit → CoreM Unit) (desc : String := by exact d
withTraceNode `Elab.async (fun _ => return desc) do
act ()
catch e =>
unless e.isInterrupt do
logError e.toMessageData
logError e.toMessageData
finally
addTraceAsMessages
get
@@ -578,6 +579,11 @@ def ImportM.runCoreM (x : CoreM α) : ImportM α := do
def Exception.isRuntime (ex : Exception) : Bool :=
ex.isMaxHeartbeat || ex.isMaxRecDepth
/-- Returns `true` if the exception is an interrupt generated by `checkInterrupted`. -/
def Exception.isInterrupt : Exception → Bool
| Exception.internal id _ => id == Core.interruptExceptionId
| _ => false
/--
Custom `try-catch` for all monads based on `CoreM`. We usually don't want to catch "runtime
exceptions" these monads, but on `CommandElabM` or, in specific cases, using `tryCatchRuntimeEx`.

View File

@@ -27,7 +27,7 @@ offsets. For diagnostics, one-based `Lean.Position`s are used internally.
structure Position where
line : Nat
character : Nat
deriving Inhabited, BEq, Ord, Hashable, ToJson, FromJson, Repr
deriving Inhabited, BEq, Ord, Hashable, ToJson, FromJson
instance : ToString Position := ⟨fun p =>
"(" ++ toString p.line ++ ", " ++ toString p.character ++ ")"⟩
@@ -38,7 +38,7 @@ instance : LE Position := leOfOrd
structure Range where
start : Position
«end» : Position
deriving Inhabited, BEq, Hashable, ToJson, FromJson, Ord, Repr
deriving Inhabited, BEq, Hashable, ToJson, FromJson, Ord
instance : LT Range := ltOfOrd
instance : LE Range := leOfOrd
@@ -426,9 +426,5 @@ structure WorkDoneProgressOptions where
workDoneProgress := false
deriving ToJson, FromJson
structure ResolveSupport where
properties : Array String
deriving FromJson, ToJson
end Lsp
end Lean

View File

@@ -30,7 +30,6 @@ structure CompletionClientCapabilities where
structure TextDocumentClientCapabilities where
completion? : Option CompletionClientCapabilities := none
codeAction? : Option CodeActionClientCapabilities := none
inlayHint? : Option InlayHintClientCapabilities := none
deriving ToJson, FromJson
structure ShowDocumentClientCapabilities where
@@ -82,7 +81,6 @@ structure ServerCapabilities where
foldingRangeProvider : Bool := false
semanticTokensProvider? : Option SemanticTokensOptions := none
codeActionProvider? : Option CodeActionOptions := none
inlayHintProvider? : Option InlayHintOptions := none
deriving ToJson, FromJson
end Lsp

View File

@@ -129,6 +129,10 @@ structure CodeAction extends WorkDoneProgressParams, PartialResultParams where
data? : Option Json := none
deriving ToJson, FromJson
structure ResolveSupport where
properties : Array String
deriving FromJson, ToJson
structure CodeActionLiteralSupportValueSet where
/-- The code action kind values the client supports. When this
property exists the client also guarantees that it will

View File

@@ -350,9 +350,10 @@ def SemanticTokenType.toNat (tokenType : SemanticTokenType) : Nat :=
tokenType.toCtorIdx
-- sanity check
example {v : SemanticTokenType} : open SemanticTokenType in
names[v.toNat]?.map (toString <| toJson ·) = some (toString <| toJson v) := by
cases v <;> native_decide
-- TODO: restore after update-stage0
--example {v : SemanticTokenType} : open SemanticTokenType in
-- names[v.toNat]?.map (toString <| toJson ·) = some (toString <| toJson v) := by
-- cases v <;> native_decide
/--
The semantic token modifiers included by default in the LSP specification.
@@ -444,82 +445,5 @@ structure RenameParams extends TextDocumentPositionParams where
structure PrepareRenameParams extends TextDocumentPositionParams
deriving FromJson, ToJson
structure InlayHintParams extends WorkDoneProgressParams where
textDocument : TextDocumentIdentifier
range : Range
deriving FromJson, ToJson
inductive InlayHintTooltip
| plaintext (text : String)
| markdown (markup : MarkupContent)
instance : FromJson InlayHintTooltip where
fromJson?
| .str s => .ok <| .plaintext s
| j@(.obj _) => do return .markdown (← fromJson? j)
| j => .error s!"invalid inlay hint tooltip {j}"
instance : ToJson InlayHintTooltip where
toJson
| .plaintext text => toJson text
| .markdown markup => toJson markup
structure InlayHintLabelPart where
value : String
tooltip? : Option InlayHintTooltip := none
location? : Option Location := none
command? : Option Command := none
deriving FromJson, ToJson
inductive InlayHintLabel
| name (n : String)
| parts (p : Array InlayHintLabelPart)
instance : FromJson InlayHintLabel where
fromJson?
| .str s => .ok <| .name s
| j@(.arr _) => do return .parts (← fromJson? j)
| j => .error s!"invalid inlay hint label {j}"
instance : ToJson InlayHintLabel where
toJson
| .name n => toJson n
| .parts p => toJson p
inductive InlayHintKind where
| type
| parameter
instance : FromJson InlayHintKind where
fromJson?
| 1 => .ok .type
| 2 => .ok .parameter
| j => .error s!"unknown inlay hint kind {j}"
instance : ToJson InlayHintKind where
toJson
| .type => 1
| .parameter => 2
structure InlayHint where
position : Position
label : InlayHintLabel
kind? : Option InlayHintKind := none
textEdits? : Option (Array TextEdit) := none
tooltip? : Option (InlayHintTooltip) := none
paddingLeft? : Option Bool := none
paddingRight? : Option Bool := none
data? : Option Json := none
deriving FromJson, ToJson
structure InlayHintClientCapabilities where
dynamicRegistration? : Option Bool := none
resolveSupport? : Option ResolveSupport := none
deriving FromJson, ToJson
structure InlayHintOptions extends WorkDoneProgressOptions where
resolveProvider? : Option Bool := none
deriving FromJson, ToJson
end Lsp
end Lean

View File

@@ -89,31 +89,15 @@ def utf8PosToLspPos (text : FileMap) (pos : String.Pos) : Lsp.Position :=
def utf8RangeToLspRange (text : FileMap) (range : String.Range) : Lsp.Range :=
{ start := text.utf8PosToLspPos range.start, «end» := text.utf8PosToLspPos range.stop }
def lspRangeToUtf8Range (text : FileMap) (range : Lsp.Range) : String.Range :=
{ start := text.lspPosToUtf8Pos range.start, stop := text.lspPosToUtf8Pos range.end }
end FileMap
def DeclarationRange.ofFilePositions (text : FileMap) (pos : Position) (endPos : Position)
: DeclarationRange := {
pos,
charUtf16 := text.leanPosToLspPos pos |>.character
endPos,
endCharUtf16 := text.leanPosToLspPos endPos |>.character
}
def DeclarationRange.ofStringPositions (text : FileMap) (pos : String.Pos) (endPos : String.Pos)
: DeclarationRange :=
.ofFilePositions text (text.toPosition pos) (text.toPosition endPos)
end Lean
/--
Convert the Lean `DeclarationRange` to an LSP `Range` by turning the 1-indexed line numbering into a
0-indexed line numbering and converting the character offset within the line to a UTF-16 indexed
offset.
-/
def DeclarationRange.toLspRange (r : DeclarationRange) : Lsp.Range := {
def Lean.DeclarationRange.toLspRange (r : Lean.DeclarationRange) : Lsp.Range := {
start := ⟨r.pos.line - 1, r.charUtf16⟩
«end» := ⟨r.endPos.line - 1, r.endCharUtf16⟩
}
end Lean

View File

@@ -55,4 +55,3 @@ import Lean.Elab.MatchExpr
import Lean.Elab.Tactic.Doc
import Lean.Elab.Time
import Lean.Elab.RecommendedSpelling
import Lean.Elab.InfoTrees

View File

@@ -148,20 +148,17 @@ private partial def elabChoiceAux (cmds : Array Syntax) (i : Nat) : CommandElabM
open Lean.Parser.Term
private def typelessBinder? : Syntax → Option (Array (TSyntax [`ident, `Lean.Parser.Term.hole]) × BinderInfo)
| `(bracketedBinderF|($ids*)) => some (ids, .default)
| `(bracketedBinderF|{$ids*}) => some (ids, .implicit)
| `(bracketedBinderF|⦃$ids*⦄) => some (ids, .strictImplicit)
| `(bracketedBinderF|[$id:ident]) => some (#[id], .instImplicit)
| _ => none
private def typelessBinder? : Syntax → Option (Array (TSyntax [`ident, `Lean.Parser.Term.hole]) × Bool)
| `(bracketedBinderF|($ids*)) => some (ids, true)
| `(bracketedBinderF|{$ids*}) => some (ids, false)
| _ => none
/-- If `id` is an identifier, return true if `ids` contains `id`. -/
private def containsId (ids : Array (TSyntax [`ident, ``Parser.Term.hole])) (id : TSyntax [`ident, ``Parser.Term.hole]) : Bool :=
id.raw.isIdent && ids.any fun id' => id'.raw.getId == id.raw.getId
/--
Auxiliary method for processing binder annotation update commands:
`variable (α)`, `variable {α}`, `variable ⦃α⦄`, and `variable [α]`.
Auxiliary method for processing binder annotation update commands: `variable (α)` and `variable {α}`.
The argument `binder` is the binder of the `variable` command.
The method returns an array containing the "residue", that is, variables that do not correspond to updates.
Recall that a `bracketedBinder` can be of the form `(x y)`.
@@ -172,7 +169,7 @@ private def containsId (ids : Array (TSyntax [`ident, ``Parser.Term.hole])) (id
The second `variable` command updates the binder annotation for `α`, and returns "residue" `γ`.
-/
private def replaceBinderAnnotation (binder : TSyntax ``Parser.Term.bracketedBinder) : CommandElabM (Array (TSyntax ``Parser.Term.bracketedBinder)) := do
let some (binderIds, binderInfo) := typelessBinder? binder | return #[binder]
let some (binderIds, explicit) := typelessBinder? binder | return #[binder]
let varDecls := (← getScope).varDecls
let mut varDeclsNew := #[]
let mut binderIds := binderIds
@@ -180,22 +177,23 @@ private def replaceBinderAnnotation (binder : TSyntax ``Parser.Term.bracketedBin
let mut modifiedVarDecls := false
-- Go through declarations in reverse to respect shadowing
for varDecl in varDecls.reverse do
let (ids, ty?, binderInfo') ← match varDecl with
let (ids, ty?, explicit') ← match varDecl with
| `(bracketedBinderF|($ids* $[: $ty?]? $(annot?)?)) =>
if annot?.isSome then
for binderId in binderIds do
if containsId ids binderId then
throwErrorAt binderId "cannot update binder annotation of variables with default values/tactics"
pure (ids, ty?, .default)
pure (ids, ty?, true)
| `(bracketedBinderF|{$ids* $[: $ty?]?}) =>
pure (ids, ty?, .implicit)
| `(bracketedBinderF|⦃$ids* $[: $ty?]?⦄) =>
pure (ids, ty?, .strictImplicit)
| `(bracketedBinderF|[$id : $ty]) =>
pure (#[id], some ty, .instImplicit)
pure (ids, ty?, false)
| `(bracketedBinderF|[$id : $_]) =>
for binderId in binderIds do
if binderId.raw.isIdent && binderId.raw.getId == id.getId then
throwErrorAt binderId "cannot change the binder annotation of the previously declared local instance `{id.getId}`"
varDeclsNew := varDeclsNew.push varDecl; continue
| _ =>
varDeclsNew := varDeclsNew.push varDecl; continue
if binderInfo == binderInfo' then
if explicit == explicit' then
-- no update, ensure we don't have redundant annotations.
for binderId in binderIds do
if containsId ids binderId then
@@ -205,55 +203,35 @@ private def replaceBinderAnnotation (binder : TSyntax ``Parser.Term.bracketedBin
-- `binderIds` and `ids` are disjoint
varDeclsNew := varDeclsNew.push varDecl
else
let mkBinder (id : TSyntax [`ident, ``Parser.Term.hole]) (binderInfo : BinderInfo) : CommandElabM (TSyntax ``Parser.Term.bracketedBinder) :=
match binderInfo with
| .default => `(bracketedBinderF| ($id $[: $ty?]?))
| .implicit => `(bracketedBinderF| {$id $[: $ty?]?})
| .strictImplicit => `(bracketedBinderF| {{$id $[: $ty?]?}})
| .instImplicit => do
let some ty := ty?
| throwErrorAt binder "cannot update binder annotation of variable '{id}' to instance implicit:\n\
variable was originally declared without an explicit type"
`(bracketedBinderF| [$(id) : $ty])
let mkBinder (id : TSyntax [`ident, ``Parser.Term.hole]) (explicit : Bool) : CommandElabM (TSyntax ``Parser.Term.bracketedBinder) :=
if explicit then
`(bracketedBinderF| ($id $[: $ty?]?))
else
`(bracketedBinderF| {$id $[: $ty?]?})
for id in ids.reverse do
if let some idx := binderIds.findFinIdx? fun binderId => binderId.raw.isIdent && binderId.raw.getId == id.raw.getId then
binderIds := binderIds.eraseIdx idx
modifiedVarDecls := true
let newBinder ← mkBinder id binderInfo
if binderInfo.isInstImplicit then
-- We elaborate the new binder to make sure it's valid as instance implicit
try
runTermElabM fun _ => Term.withSynthesize <| Term.withAutoBoundImplicit <|
Term.elabBinder newBinder fun _ => pure ()
catch e =>
throwErrorAt binder m!"cannot update binder annotation of variable '{id}' to instance implicit:\n\
{e.toMessageData}"
varDeclsNew := varDeclsNew.push (← mkBinder id binderInfo)
varDeclsNew := varDeclsNew.push (← mkBinder id explicit)
else
varDeclsNew := varDeclsNew.push (← mkBinder id binderInfo')
varDeclsNew := varDeclsNew.push (← mkBinder id explicit')
if modifiedVarDecls then
modifyScope fun scope => { scope with varDecls := varDeclsNew.reverse }
if binderIds.size != binderIdsIniSize then
binderIds.mapM fun binderId =>
match binderInfo with
| .default => `(bracketedBinderF| ($binderId))
| .implicit => `(bracketedBinderF| {$binderId})
| .strictImplicit => `(bracketedBinderF| {{$binderId}})
| .instImplicit => throwUnsupportedSyntax
if explicit then
`(bracketedBinderF| ($binderId))
else
`(bracketedBinderF| {$binderId})
else
return #[binder]
@[builtin_command_elab «variable»] def elabVariable : CommandElab
| `(variable%$tk $binders*) => do
| `(variable $binders*) => do
let binders ← binders.flatMapM replaceBinderAnnotation
-- Try to elaborate `binders` for sanity checking
runTermElabM fun _ => Term.withSynthesize <| Term.withAutoBoundImplicit <|
Term.elabBinders binders fun xs => do
-- Determine the set of auto-implicits for this variable command and add an inlay hint
-- for them. We will only actually add the auto-implicits to a type when the variables
-- declared here are used in some other declaration, but this is nonetheless the right
-- place to display the inlay hint.
let _ ← Term.addAutoBoundImplicits xs (tk.getTailPos? (canonicalOnly := true))
Term.elabBinders binders fun _ => pure ()
-- Remark: if we want to produce error messages when variables shadow existing ones, here is the place to do it.
for binder in binders do
let varUIds ← (← getBracketedBinderIds binder) |>.mapM (withFreshMacroScope ∘ MonadQuotation.addMacroScope)
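The binder-annotation update behavior implemented above can be illustrated with a small sketch (an assumed example following the doc comment of `replaceBinderAnnotation`, not code from this patch):

```lean
-- Sketch (assumed example): `variable` commands that update annotations.
variable (α : Type) (a : α) {β : Type}

-- `variable {α}` updates `α` from explicit to implicit and leaves `a` alone;
-- `variable (β γ)` updates `β` to explicit and returns the "residue" `γ`
-- as a fresh declaration.
variable {α}
variable (β γ)

-- With the richer `BinderInfo`-based version, strict-implicit updates such
-- as `variable ⦃α⦄` and instance-implicit updates are accepted as well.
```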

View File

@@ -489,38 +489,38 @@ partial def elabCommand (stx : Syntax) : CommandElabM Unit := do
return oldSnap
let oldCmds? := oldSnap?.map fun old =>
if old.newStx.isOfKind nullKind then old.newStx.getArgs else #[old.newStx]
let cmdPromises ← cmds.mapM fun _ => IO.Promise.new
snap.new.resolve <| .ofTyped {
diagnostics := .empty
macroDecl := decl
newStx := stxNew
newNextMacroScope := nextMacroScope
hasTraces
next := Array.zipWith (fun cmdPromise cmd =>
{ range? := cmd.getRange?, task := cmdPromise.resultD default }) cmdPromises cmds
: MacroExpandedSnapshot
}
-- After the first command whose syntax tree changed, we must disable
-- incremental reuse
let mut reusedCmds := true
let opts ← getOptions
-- For each command, associate it with new promise and old snapshot, if any, and
-- elaborate recursively
for cmd in cmds, cmdPromise in cmdPromises, i in [0:cmds.size] do
let oldCmd? := oldCmds?.bind (·[i]?)
withReader ({ · with snap? := some {
new := cmdPromise
old? := do
guard reusedCmds
let old ← oldSnap?
return { stx := (← oldCmd?), val := (← old.next[i]?) }
} }) do
elabCommand cmd
-- Resolve promise for commands not supporting incrementality; waiting for
-- `withAlwaysResolvedPromises` to do this could block reporting by later
-- commands
cmdPromise.resolve default
reusedCmds := reusedCmds && oldCmd?.any (·.eqWithInfoAndTraceReuse opts cmd)
Language.withAlwaysResolvedPromises cmds.size fun cmdPromises => do
snap.new.resolve <| .ofTyped {
diagnostics := .empty
macroDecl := decl
newStx := stxNew
newNextMacroScope := nextMacroScope
hasTraces
next := Array.zipWith (fun cmdPromise cmd =>
{ range? := cmd.getRange?, task := cmdPromise.result }) cmdPromises cmds
: MacroExpandedSnapshot
}
-- After the first command whose syntax tree changed, we must disable
-- incremental reuse
let mut reusedCmds := true
let opts ← getOptions
-- For each command, associate it with new promise and old snapshot, if any, and
-- elaborate recursively
for cmd in cmds, cmdPromise in cmdPromises, i in [0:cmds.size] do
let oldCmd? := oldCmds?.bind (·[i]?)
withReader ({ · with snap? := some {
new := cmdPromise
old? := do
guard reusedCmds
let old ← oldSnap?
return { stx := (← oldCmd?), val := (← old.next[i]?) }
} }) do
elabCommand cmd
-- Resolve promise for commands not supporting incrementality; waiting for
-- `withAlwaysResolvedPromises` to do this could block reporting by later
-- commands
cmdPromise.resolve default
reusedCmds := reusedCmds && oldCmd?.any (·.eqWithInfoAndTraceReuse opts cmd)
else
elabCommand stxNew
| _ =>

View File

@@ -105,13 +105,12 @@ def elabAxiom (modifiers : Modifiers) (stx : Syntax) : CommandElabM Unit := do
let scopeLevelNames ← Term.getLevelNames
let ⟨shortName, declName, allUserLevelNames⟩ ← Term.expandDeclId (← getCurrNamespace) scopeLevelNames declId modifiers
addDeclarationRangesForBuiltin declName modifiers.stx stx
Term.withAutoBoundImplicit do
Term.withAutoBoundImplicitForbiddenPred (fun n => shortName == n) do
Term.withDeclName declName <| Term.withLevelNames allUserLevelNames <| Term.elabBinders binders.getArgs fun xs => do
Term.applyAttributesAt declName modifiers.attrs AttributeApplicationTime.beforeElaboration
let type ← Term.elabType typeStx
Term.synthesizeSyntheticMVarsNoPostponing
let xs ← Term.addAutoBoundImplicits xs (declId.getTailPos? (canonicalOnly := true))
let xs ← Term.addAutoBoundImplicits xs
let type ← instantiateMVars type
let type ← mkForallFVars xs type
let type ← mkForallFVars vars type (usedOnly := true)

View File

@@ -15,7 +15,15 @@ def getDeclarationRange? [Monad m] [MonadFileMap m] (stx : Syntax) : m (Option D
let some range := stx.getRange?
| return none
let fileMap ← getFileMap
return some <| .ofStringPositions fileMap range.start range.stop
--let range := fileMap.utf8RangeToLspRange
let pos := fileMap.toPosition range.start
let endPos := fileMap.toPosition range.stop
return some {
pos := pos
charUtf16 := fileMap.leanPosToLspPos pos |>.character
endPos := endPos
endCharUtf16 := fileMap.leanPosToLspPos endPos |>.character
}
/--
For most builtin declarations, the selection range is just its name, which is stored in the second position.

View File

@@ -185,7 +185,7 @@ def mkDefViewOfOpaque (modifiers : Modifiers) (stx : Syntax) : CommandElabM DefV
def mkDefViewOfExample (modifiers : Modifiers) (stx : Syntax) : DefView :=
-- leading_parser "example " >> declSig >> declVal
let (binders, type) := expandOptDeclSig stx[1]
let id := mkIdentFrom stx[0] `_example (canonical := true)
let id := mkIdentFrom stx `_example
let declId := mkNode ``Parser.Command.declId #[id, mkNullNode]
{ ref := stx, headerRef := mkNullNode stx.getArgs[:2], kind := DefKind.example, modifiers := modifiers,
declId := declId, binders := binders, type? := type, value := stx[2] }

View File

@@ -26,10 +26,12 @@ private def throwForInFailure (forInInstance : Expr) : TermElabM Expr :=
@[builtin_term_elab forInMacro] def elabForIn : TermElab := fun stx expectedType? => do
match stx with
| `(for_in% $col $init $body) =>
match (← isLocalIdent? col) with
| none => elabTerm (← `(let col := $col; for_in% col $init $body)) expectedType?
| some colFVar =>
tryPostponeIfNoneOrMVar expectedType?
let colE ← elabTerm col none
let m ← getMonadForIn expectedType?
let colType ← inferType colE
let colType ← inferType colFVar
let elemType ← mkFreshExprMVar (mkSort (mkLevelSucc (← mkFreshLevelMVar)))
let forInInstance ← try
mkAppM ``ForIn #[m, colType, elemType]
@@ -40,7 +42,7 @@ private def throwForInFailure (forInInstance : Expr) : TermElabM Expr :=
let forInFn ← mkConst ``forIn
elabAppArgs forInFn
(namedArgs := #[{ name := `m, val := Arg.expr m}, { name := `α, val := Arg.expr elemType }, { name := `self, val := Arg.expr inst }])
(args := #[Arg.expr colE, Arg.stx init, Arg.stx body])
(args := #[Arg.stx col, Arg.stx init, Arg.stx body])
(expectedType? := expectedType?)
(explicit := false) (ellipsis := false) (resultIsOutParamSupport := false)
| .undef => tryPostpone; throwForInFailure forInInstance
@@ -50,10 +52,12 @@ private def throwForInFailure (forInInstance : Expr) : TermElabM Expr :=
@[builtin_term_elab forInMacro'] def elabForIn' : TermElab := fun stx expectedType? => do
match stx with
| `(for_in'% $col $init $body) =>
match (← isLocalIdent? col) with
| none => elabTerm (← `(let col := $col; for_in'% col $init $body)) expectedType?
| some colFVar =>
tryPostponeIfNoneOrMVar expectedType?
let colE ← elabTerm col none
let m ← getMonadForIn expectedType?
let colType ← inferType colE
let colType ← inferType colFVar
let elemType ← mkFreshExprMVar (mkSort (mkLevelSucc (← mkFreshLevelMVar)))
let forInInstance ←
try
@@ -66,7 +70,7 @@ private def throwForInFailure (forInInstance : Expr) : TermElabM Expr :=
let forInFn ← mkConst ``forIn'
elabAppArgs forInFn
(namedArgs := #[{ name := `m, val := Arg.expr m}, { name := `α, val := Arg.expr elemType}, { name := `self, val := Arg.expr inst }])
(args := #[Arg.expr colE, Arg.stx init, Arg.stx body])
(args := #[Arg.expr colFVar, Arg.stx init, Arg.stx body])
(expectedType? := expectedType?)
(explicit := false) (ellipsis := false) (resultIsOutParamSupport := false)
| .undef => tryPostpone; throwForInFailure forInInstance
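The let-binding trick in the hunks above can be sketched from the user's side (an assumed example; the `for_in%` rewriting is described in the diff, the surrounding names are illustrative):

```lean
-- Sketch (assumed): when the collection is not already a local identifier,
-- `for_in% (xs ++ ys) init body` is re-elaborated as
--   let col := xs ++ ys; for_in% col init body
-- so the `ForIn` instance search runs `inferType` on a local fvar.
def sumTwo (xs ys : List Nat) : Id Nat := do
  let mut acc := 0
  for x in xs ++ ys do  -- elaborated via `for_in%`
    acc := acc + x
  return acc
```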

View File

@@ -143,14 +143,13 @@ def runFrontend
(ileanFileName? : Option String := none)
(jsonOutput : Bool := false)
(errorOnKinds : Array Name := #[])
(plugins : Array System.FilePath := #[])
: IO (Environment × Bool) := do
let startTime := (← IO.monoNanosNow).toFloat / 1000000000
let inputCtx := Parser.mkInputContext input fileName
let opts := Language.Lean.internal.cmdlineSnapshots.setIfNotSet opts true
let ctx := { inputCtx with }
let processor := Language.Lean.process
let snap ← processor (fun _ => pure <| .ok { mainModuleName, opts, trustLevel, plugins }) none ctx
let snap ← processor (fun _ => pure <| .ok { mainModuleName, opts, trustLevel }) none ctx
let snaps := Language.toSnapshotTree snap
let severityOverrides := errorOnKinds.foldl (·.insert · .error) {}
let hasErrors ← snaps.runAndReport opts jsonOutput severityOverrides

View File

@@ -18,11 +18,10 @@ def headerToImports (header : Syntax) : Array Import :=
{ module := id, runtimeOnly := runtime }
def processHeader (header : Syntax) (opts : Options) (messages : MessageLog)
(inputCtx : Parser.InputContext) (trustLevel : UInt32 := 0)
(plugins : Array System.FilePath := #[]) (leakEnv := false)
(inputCtx : Parser.InputContext) (trustLevel : UInt32 := 0) (leakEnv := false)
: IO (Environment × MessageLog) := do
try
let env ← importModules (leakEnv := leakEnv) (headerToImports header) opts trustLevel plugins
let env ← importModules (leakEnv := leakEnv) (headerToImports header) opts trustLevel
pure (env, messages)
catch e =>
let env ← mkEmptyEnvironment

View File

@@ -194,7 +194,7 @@ private def elabCtors (indFVars : Array Expr) (params : Array Expr) (r : ElabHea
return type
let type ← elabCtorType
Term.synthesizeSyntheticMVarsNoPostponing
let ctorParams ← Term.addAutoBoundImplicits ctorParams (ctorView.declId.getTailPos? (canonicalOnly := true))
let ctorParams ← Term.addAutoBoundImplicits ctorParams
let except (mvarId : MVarId) := ctorParams.any fun ctorParam => ctorParam.isMVar && ctorParam.mvarId! == mvarId
/-
We convert metavariables in the resulting type into extra parameters. Otherwise, we would not be able to elaborate

View File

@@ -1,64 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Marc Huisinga
-/
prelude
import Lean.Meta.Basic
namespace Lean.Elab
open Lean
structure InlayHintLinkLocation where
module : Name
range : String.Range
structure InlayHintLabelPart where
value : String
tooltip? : Option String := none
location? : Option InlayHintLinkLocation := none
inductive InlayHintLabel
| name (n : String)
| parts (p : Array InlayHintLabelPart)
inductive InlayHintKind where
| type
| parameter
structure InlayHintTextEdit where
range : String.Range
newText : String
deriving BEq
structure InlayHintInfo where
position : String.Pos
label : InlayHintLabel
kind? : Option InlayHintKind := none
textEdits : Array InlayHintTextEdit := #[]
tooltip? : Option String := none
paddingLeft : Bool := false
paddingRight : Bool := false
structure InlayHint extends InlayHintInfo where
lctx : LocalContext
deferredResolution : InlayHintInfo → MetaM InlayHintInfo := fun i => .pure i
deriving TypeName
namespace InlayHint
def toCustomInfo (i : InlayHint) : CustomInfo := {
stx := .missing
value := .mk i
}
def ofCustomInfo? (c : CustomInfo) : Option InlayHint :=
c.value.get? InlayHint
def resolveDeferred (i : InlayHint) : MetaM InlayHint := do
let info := i.toInlayHintInfo
let info ← i.deferredResolution info
return { i with toInlayHintInfo := info }
end InlayHint

View File

@@ -1,25 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Lean.Elab.Command
open Lean Elab Command
namespace Lean.Elab.Tactic.InfoTrees
@[builtin_command_elab infoTreesCmd, inherit_doc guardMsgsCmd]
def elabInfoTrees : CommandElab
| `(command| #info_trees%$tk in $cmd) => do
if ! (← getInfoState).enabled then
logError "Info trees are disabled, can not use `#info_trees`."
else
elabCommand cmd
let infoTrees ← getInfoTrees
for t in infoTrees do
logInfoAt tk (← t.format)
| _ => throwUnsupportedSyntax
end Lean.Elab.Tactic.InfoTrees
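The elaborator above suggests the following usage (a hypothetical example; the command elaborates the nested command and then logs the formatted info trees at the `#info_trees` token):

```lean
-- Hypothetical usage of the `#info_trees` command defined above:
#info_trees in
example : True := trivial
-- logs the info trees recorded while elaborating the `example`
```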

View File

@@ -30,7 +30,7 @@ private def mkUserNameFor (e : Expr) : TermElabM Name := do
/--
Remark: if the discriminant is `Syntax.missing`, we abort the elaboration of the `match`-expression.
Remark: if the discriminat is `Syntax.missing`, we abort the elaboration of the `match`-expression.
This can happen due to error recovery. Example
```
example : (p p) → p := fun h => match
@@ -56,11 +56,6 @@ private def elabAtomicDiscr (discr : Syntax) : TermElabM Expr := do
let term := discr[1]
elabTerm term none
/-- Creates syntax for a fresh `h : `-notation annotation for a discriminant, copying source
information from an existing annotation `stx`. -/
private def mkFreshDiscrIdentFrom (stx : Syntax) : CoreM Ident :=
return mkIdentFrom stx (← mkFreshUserName `h)
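The `h : `-notation these helpers manage can be illustrated briefly (an assumed example of the annotation, not code from this patch):

```lean
-- The `h : discr` annotation on a match discriminant, e.g.:
example (n : Nat) : Nat :=
  match h : n with  -- each branch gets a hypothesis `h : n = <pattern>`
  | 0     => 0
  | m + 1 => m
-- `mkFreshDiscrIdentFrom` manufactures such an `h` identifier when needed.
```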
structure Discr where
expr : Expr
/-- `some h` if discriminant is annotated with the `h : ` notation. -/
@@ -83,11 +78,11 @@ private partial def elabMatchTypeAndDiscrs (discrStxs : Array Syntax) (matchOptM
-- motive := leading_parser atomic ("(" >> nonReservedSymbol "motive" >> " := ") >> termParser >> ")"
let matchTypeStx := matchOptMotive[0][3]
let matchType ← elabType matchTypeStx
let (discrs, isDep) ← elabDiscrsWithMatchType matchType
let (discrs, isDep) ← elabDiscrsWitMatchType matchType
return { discrs := discrs, matchType := matchType, isDep := isDep, alts := matchAltViews }
where
/-- Easy case: elaborate discriminant when the match-type has been explicitly provided by the user. -/
elabDiscrsWithMatchType (matchType : Expr) : TermElabM (Array Discr × Bool) := do
elabDiscrsWitMatchType (matchType : Expr) : TermElabM (Array Discr × Bool) := do
let mut discrs := #[]
let mut i := 0
let mut matchType := matchType
@@ -110,10 +105,6 @@ where
markIsDep (r : ElabMatchTypeAndDiscrsResult) :=
{ r with isDep := true }
expandDiscrIdent : Syntax → MetaM Syntax
| stx@`(_) => mkFreshDiscrIdentFrom stx
| stx => return stx
/-- Elaborate discriminants inferring the match-type -/
elabDiscrs (i : Nat) (discrs : Array Discr) : TermElabM ElabMatchTypeAndDiscrsResult := do
if h : i < discrStxs.size then
@@ -121,12 +112,7 @@ where
let discr ← elabAtomicDiscr discrStx
let discr ← instantiateMVars discr
let userName ← mkUserNameFor discr
let h? ←
if discrStx[0].isNone then
pure none
else
let h ← expandDiscrIdent discrStx[0][0]
pure (some h)
let h? := if discrStx[0].isNone then none else some discrStx[0][0]
let discrs := discrs.push { expr := discr, h? }
let mut result ← elabDiscrs (i + 1) discrs
let matchTypeBody ← kabstract result.matchType discr
@@ -930,7 +916,7 @@ where
| none => return { expr := i : Discr }
| some h =>
-- If the discriminant that introduced this index is annotated with `h : discr`, then we should annotate the new discriminant too.
let h ← mkFreshDiscrIdentFrom h
let h := mkIdentFrom h (← mkFreshUserName `h)
return { expr := i, h? := h : Discr }
let discrs := indDiscrs ++ discrs
let indexFVarIds := indices.filterMap fun | .fvar fvarId .. => some fvarId | _ => none
@@ -1208,7 +1194,7 @@ Remark the `optIdent` must be `none` at `matchDiscr`. They are expanded by `expa
-/
private def elabMatchCore (stx : Syntax) (expectedType? : Option Expr) : TermElabM Expr := do
let expectedType ← waitExpectedTypeAndDiscrs stx expectedType?
let discrStxs := getDiscrs stx
let discrStxs := (getDiscrs stx).map fun d => d
let gen? := getMatchGeneralizing? stx
let altViews := getMatchAlts stx
let matchOptMotive := getMatchOptMotive stx

View File

@@ -197,7 +197,7 @@ private def elabHeaders (views : Array DefView)
type ← cleanupOfNat type
let (binderIds, xs) := xs.unzip
-- TODO: add forbidden predicate using `shortDeclName` from `views`
let xs ← addAutoBoundImplicits xs (view.declId.getTailPos? (canonicalOnly := true))
let xs ← addAutoBoundImplicits xs
type ← mkForallFVars' xs type
type ← instantiateMVarsProfiling type
let levelNames ← getLevelNames
@@ -215,14 +215,6 @@ private def elabHeaders (views : Array DefView)
return newHeader
if let some snap := view.headerSnap? then
let (tacStx?, newTacTask?) ← mkTacTask view.value tacPromise
let bodySnap :=
-- Only use first line of body as range when we have incremental tactics as otherwise we
-- would cover their progress
{ range? := if newTacTask?.isSome then
view.ref.getPos?.map fun pos => ⟨pos, pos⟩
else
getBodyTerm? view.value |>.getD view.value |>.getRange?
task := bodyPromise.resultD default }
snap.new.resolve <| some {
diagnostics :=
(← Language.Snapshot.Diagnostics.ofMessageLog (← Core.getAndEmptyMessageLog))
@@ -231,7 +223,7 @@ private def elabHeaders (views : Array DefView)
tacStx?
tacSnap? := newTacTask?
bodyStx := view.value
bodySnap
bodySnap := mkBodyTask view.value bodyPromise
}
newHeader := { newHeader with
-- We should only forward the promise if we are actually waiting on the
@@ -253,6 +245,12 @@ where
guard whereDeclsOpt.isNone
return body
/-- Creates snapshot task with appropriate range from body syntax and promise. -/
mkBodyTask (body : Syntax) (new : IO.Promise (Option BodyProcessedSnapshot)) :
Language.SnapshotTask (Option BodyProcessedSnapshot) :=
let rangeStx := getBodyTerm? body |>.getD body
{ range? := rangeStx.getRange?, task := new.result }
/--
If `body` allows for incremental tactic reporting and reuse, creates a snapshot task out of the
passed promise with appropriate range, otherwise immediately resolves the promise to a dummy
@@ -263,7 +261,7 @@ where
:= do
if let some e := getBodyTerm? body then
if let `(by $tacs*) := e then
return (e, some { range? := mkNullNode tacs |>.getRange?, task := tacPromise.resultD default })
return (e, some { range? := mkNullNode tacs |>.getRange?, task := tacPromise.result })
tacPromise.resolve default
return (none, none)
@@ -983,21 +981,6 @@ private def levelMVarToParamHeaders (views : Array DefView) (headers : Array Def
let newHeaders ← (process).run' 1
newHeaders.mapM fun header => return { header with type := (← instantiateMVarsProfiling header.type) }
/--
Ensures that all declarations given by `preDefs` have distinct names.
Remark: we wait to perform this check until the pre-definition phase because we must account for
auxiliary declarations introduced by `where` and `let rec`.
-/
private def checkAllDeclNamesDistinct (preDefs : Array PreDefinition) : TermElabM Unit := do
let mut names : Std.HashMap Name Syntax := {}
for preDef in preDefs do
let userName := privateToUserName preDef.declName
if let some dupStx := names[userName]? then
let errorMsg := m!"'mutual' block contains two declarations of the same name '{userName}'"
Lean.logErrorAt dupStx errorMsg
throwErrorAt preDef.ref errorMsg
names := names.insert userName preDef.ref
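The duplicate-name check above rejects, for instance, the following block (an assumed example; the error text is taken from the message constructed in the function):

```lean
-- Rejected by `checkAllDeclNamesDistinct` (assumed example):
mutual
  def foo (n : Nat) : Nat := n
  def foo (n : Nat) : Nat := n + 1
end
-- error: 'mutual' block contains two declarations of the same name 'foo'
```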
def elabMutualDef (vars : Array Expr) (sc : Command.Scope) (views : Array DefView) : TermElabM Unit :=
if isExample views then
withoutModifyingEnv do
@@ -1007,45 +990,44 @@ def elabMutualDef (vars : Array Expr) (sc : Command.Scope) (views : Array DefVie
else
go
where
go := do
let bodyPromises ← views.mapM fun _ => IO.Promise.new
let tacPromises ← views.mapM fun _ => IO.Promise.new
let scopeLevelNames ← getLevelNames
let headers ← elabHeaders views bodyPromises tacPromises
let headers ← levelMVarToParamHeaders views headers
let allUserLevelNames := getAllUserLevelNames headers
withFunLocalDecls headers fun funFVars => do
for view in views, funFVar in funFVars do
addLocalVarInfo view.declId funFVar
let values ←
try
let values ← elabFunValues headers vars sc
Term.synthesizeSyntheticMVarsNoPostponing
values.mapM (instantiateMVarsProfiling ·)
catch ex =>
logException ex
headers.mapM fun header => withRef header.declId <| mkLabeledSorry header.type (synthetic := true) (unique := true)
let headers ← headers.mapM instantiateMVarsAtHeader
let letRecsToLift ← getLetRecsToLift
let letRecsToLift ← letRecsToLift.mapM instantiateMVarsAtLetRecToLift
checkLetRecsToLiftTypes funFVars letRecsToLift
(if headers.all (·.kind.isTheorem) && !deprecated.oldSectionVars.get (← getOptions) then withHeaderSecVars vars sc headers else withUsed vars headers values letRecsToLift) fun vars => do
let preDefs ← MutualClosure.main vars headers funFVars values letRecsToLift
checkAllDeclNamesDistinct preDefs
for preDef in preDefs do
trace[Elab.definition] "{preDef.declName} : {preDef.type} :=\n{preDef.value}"
let preDefs ← withLevelNames allUserLevelNames <| levelMVarToParamTypesPreDecls preDefs
let preDefs ← instantiateMVarsAtPreDecls preDefs
let preDefs ← shareCommonPreDefs preDefs
let preDefs ← fixLevelParams preDefs scopeLevelNames allUserLevelNames
for preDef in preDefs do
trace[Elab.definition] "after eraseAuxDiscr, {preDef.declName} : {preDef.type} :=\n{preDef.value}"
addPreDefinitions preDefs
processDeriving headers
for view in views, header in headers do
-- NOTE: this should be the full `ref`, and thus needs to be done after any snapshotting
-- that depends only on a part of the ref
addDeclarationRangesForBuiltin header.declName view.modifiers.stx view.ref
go :=
withAlwaysResolvedPromises views.size fun bodyPromises =>
withAlwaysResolvedPromises views.size fun tacPromises => do
let scopeLevelNames ← getLevelNames
let headers ← elabHeaders views bodyPromises tacPromises
let headers ← levelMVarToParamHeaders views headers
let allUserLevelNames := getAllUserLevelNames headers
withFunLocalDecls headers fun funFVars => do
for view in views, funFVar in funFVars do
addLocalVarInfo view.declId funFVar
let values ←
try
let values ← elabFunValues headers vars sc
Term.synthesizeSyntheticMVarsNoPostponing
values.mapM (instantiateMVarsProfiling ·)
catch ex =>
logException ex
headers.mapM fun header => withRef header.declId <| mkLabeledSorry header.type (synthetic := true) (unique := true)
let headers ← headers.mapM instantiateMVarsAtHeader
let letRecsToLift ← getLetRecsToLift
let letRecsToLift ← letRecsToLift.mapM instantiateMVarsAtLetRecToLift
checkLetRecsToLiftTypes funFVars letRecsToLift
(if headers.all (·.kind.isTheorem) && !deprecated.oldSectionVars.get (← getOptions) then withHeaderSecVars vars sc headers else withUsed vars headers values letRecsToLift) fun vars => do
let preDefs ← MutualClosure.main vars headers funFVars values letRecsToLift
for preDef in preDefs do
trace[Elab.definition] "{preDef.declName} : {preDef.type} :=\n{preDef.value}"
let preDefs ← withLevelNames allUserLevelNames <| levelMVarToParamTypesPreDecls preDefs
let preDefs ← instantiateMVarsAtPreDecls preDefs
let preDefs ← shareCommonPreDefs preDefs
let preDefs ← fixLevelParams preDefs scopeLevelNames allUserLevelNames
for preDef in preDefs do
trace[Elab.definition] "after eraseAuxDiscr, {preDef.declName} : {preDef.type} :=\n{preDef.value}"
addPreDefinitions preDefs
processDeriving headers
for view in views, header in headers do
-- NOTE: this should be the full `ref`, and thus needs to be done after any snapshotting
-- that depends only on a part of the ref
addDeclarationRangesForBuiltin header.declName view.modifiers.stx view.ref
processDeriving (headers : Array DefViewElabHeader) := do
@@ -1062,46 +1044,46 @@ namespace Command
def elabMutualDef (ds : Array Syntax) : CommandElabM Unit := do
let opts ← getOptions
let headerPromises ← ds.mapM fun _ => IO.Promise.new
let snap? := (← read).snap?
let mut views := #[]
let mut defs := #[]
let mut reusedAllHeaders := true
for h : i in [0:ds.size], headerPromise in headerPromises do
let d := ds[i]
let modifiers ← elabModifiers d[0]
if ds.size > 1 && modifiers.isNonrec then
throwErrorAt d "invalid use of 'nonrec' modifier in 'mutual' block"
let mut view ← mkDefView modifiers d[1]
let fullHeaderRef := mkNullNode #[d[0], view.headerRef]
withAlwaysResolvedPromises ds.size fun headerPromises => do
let snap? := (← read).snap?
let mut views := #[]
let mut defs := #[]
let mut reusedAllHeaders := true
for h : i in [0:ds.size], headerPromise in headerPromises do
let d := ds[i]
let modifiers ← elabModifiers d[0]
if ds.size > 1 && modifiers.isNonrec then
throwErrorAt d "invalid use of 'nonrec' modifier in 'mutual' block"
let mut view ← mkDefView modifiers d[1]
let fullHeaderRef := mkNullNode #[d[0], view.headerRef]
if let some snap := snap? then
view := { view with headerSnap? := some {
old? := do
-- transitioning from `Context.snap?` to `DefView.headerSnap?` invariant: if the
-- elaboration context and state are unchanged, and the syntax of this as well as all
-- previous headers is unchanged, then the elaboration result for this header (which
-- includes state from elaboration of previous headers!) should be unchanged.
guard reusedAllHeaders
let old ← snap.old?
-- blocking wait, `HeadersParsedSnapshot` (and hopefully others) should be quick
let old ← old.val.get.toTyped? DefsParsedSnapshot
let oldParsed ← old.defs[i]?
guard <| fullHeaderRef.eqWithInfoAndTraceReuse opts oldParsed.fullHeaderRef
-- no syntax guard to store, we already did the necessary checks
return ⟨.missing, oldParsed.headerProcessedSnap⟩
new := headerPromise
} }
defs := defs.push {
fullHeaderRef
headerProcessedSnap := { range? := d.getRange?, task := headerPromise.result }
}
reusedAllHeaders := reusedAllHeaders && view.headerSnap?.any (·.old?.isSome)
views := views.push view
if let some snap := snap? then
view := { view with headerSnap? := some {
old? := do
-- transitioning from `Context.snap?` to `DefView.headerSnap?` invariant: if the
-- elaboration context and state are unchanged, and the syntax of this as well as all
-- previous headers is unchanged, then the elaboration result for this header (which
-- includes state from elaboration of previous headers!) should be unchanged.
guard reusedAllHeaders
let old ← snap.old?
-- blocking wait, `HeadersParsedSnapshot` (and hopefully others) should be quick
let old ← old.val.get.toTyped? DefsParsedSnapshot
let oldParsed ← old.defs[i]?
guard <| fullHeaderRef.eqWithInfoAndTraceReuse opts oldParsed.fullHeaderRef
-- no syntax guard to store, we already did the necessary checks
return ⟨.missing, oldParsed.headerProcessedSnap⟩
new := headerPromise
} }
defs := defs.push {
fullHeaderRef
headerProcessedSnap := { range? := d.getRange?, task := headerPromise.resultD default }
}
reusedAllHeaders := reusedAllHeaders && view.headerSnap?.any (·.old?.isSome)
views := views.push view
if let some snap := snap? then
-- no non-fatal diagnostics at this point
snap.new.resolve <| .ofTyped { defs, diagnostics := .empty : DefsParsedSnapshot }
let sc ← getScope
runTermElabM fun vars => Term.elabMutualDef vars sc views
-- no non-fatal diagnostics at this point
snap.new.resolve <| .ofTyped { defs, diagnostics := .empty : DefsParsedSnapshot }
let sc ← getScope
runTermElabM fun vars => Term.elabMutualDef vars sc views
builtin_initialize
registerTraceClass `Elab.definition.mkClosure

View File

@@ -288,9 +288,7 @@ private def elabHeadersAux (views : Array InductiveView) (i : Nat) (acc : Array
let typeStx ← view.type?.getDM `(Sort _)
let type ← Term.elabType typeStx
Term.synthesizeSyntheticMVarsNoPostponing
let inlayHintPos? := view.binders.getTailPos? (canonicalOnly := true)
<|> view.declId.getTailPos? (canonicalOnly := true)
let indices ← Term.addAutoBoundImplicits #[] inlayHintPos?
let indices ← Term.addAutoBoundImplicits #[]
let type ← mkForallFVars indices type
if view.allowIndices then
unless (← isTypeFormerType type) do
@@ -299,7 +297,7 @@ private def elabHeadersAux (views : Array InductiveView) (i : Nat) (acc : Array
unless (← whnfD type).isSort do
throwErrorAt typeStx "invalid resulting type, expecting 'Type _' or 'Prop'"
return (type, indices.size)
let params ← Term.addAutoBoundImplicits params (view.declId.getTailPos? (canonicalOnly := true))
let params ← Term.addAutoBoundImplicits params
trace[Elab.inductive] "header params: {params}, type: {type}"
let levelNames ← Term.getLevelNames
return acc.push { lctx := (← getLCtx), localInsts := (← getLocalInstances), levelNames, params, type, view }
@@ -962,26 +960,6 @@ private def elabInductiveViews (vars : Array Expr) (elabs : Array InductiveElabS
mkInjectiveTheorems e.view.declName
return res
/-- Ensures that there are no conflicts among or between the type and constructor names defined in `elabs`. -/
private def checkNoInductiveNameConflicts (elabs : Array InductiveElabStep1) : TermElabM Unit := do
let throwErrorsAt (init cur : Syntax) (msg : MessageData) : TermElabM Unit := do
logErrorAt init msg
throwErrorAt cur msg
-- Maps names of inductive types to `true` and those of constructors to `false`, along with syntax refs
let mut uniqueNames : Std.HashMap Name (Bool × Syntax) := {}
for { view, .. } in elabs do
let typeDeclName := privateToUserName view.declName
if let some (prevNameIsType, prevRef) := uniqueNames[typeDeclName]? then
let declKinds := if prevNameIsType then "multiple inductive types" else "an inductive type and a constructor"
throwErrorsAt prevRef view.declId m!"cannot define {declKinds} with the same name '{typeDeclName}'"
uniqueNames := uniqueNames.insert typeDeclName (true, view.declId)
for ctor in view.ctors do
let ctorName := privateToUserName ctor.declName
if let some (prevNameIsType, prevRef) := uniqueNames[ctorName]? then
let declKinds := if prevNameIsType then "an inductive type and a constructor" else "multiple constructors"
throwErrorsAt prevRef ctor.declId m!"cannot define {declKinds} with the same name '{ctorName}'"
uniqueNames := uniqueNames.insert ctorName (false, ctor.declId)
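For context, the `checkNoInductiveNameConflicts` function removed above rejected mutual blocks that reuse a declaration name. A hypothetical sketch (names invented for illustration) of what it guarded against:

```lean
-- With the check in place, this was reported at both `declId`s with
-- "cannot define multiple inductive types with the same name 'Fruit'".
mutual
  inductive Fruit where
    | apple
  inductive Fruit where  -- second type with the same name
    | banana
end
```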
private def applyComputedFields (indViews : Array InductiveView) : CommandElabM Unit := do
if indViews.all (·.computedFields.isEmpty) then return
@@ -1038,7 +1016,6 @@ def elabInductives (inductives : Array (Modifiers × Syntax)) : CommandElabM Uni
let (elabs, res) ← runTermElabM fun vars => do
let elabs ← inductives.mapM fun (modifiers, stx) => mkInductiveView modifiers stx
elabs.forM fun e => checkValidInductiveModifier e.view.modifiers
checkNoInductiveNameConflicts elabs
let res ← elabInductiveViews vars elabs
pure (elabs, res)
elabInductiveViewsPostprocessing (elabs.map (·.view)) res

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.Transform
import Lean.Meta.Match.MatcherApp.Basic
import Lean.Elab.Tactic.Simp
open Lean Meta
namespace Lean.Elab.WF
builtin_initialize wfPreprocessSimpExtension : SimpExtension ←
registerSimpAttr `wf_preprocess
"(not yet functional)"
end Lean.Elab.WF

View File

@@ -13,7 +13,6 @@ import Lean.Elab.PreDefinition.WF.Rel
import Lean.Elab.PreDefinition.WF.Fix
import Lean.Elab.PreDefinition.WF.Unfold
import Lean.Elab.PreDefinition.WF.Ite
import Lean.Elab.PreDefinition.WF.AutoAttach
import Lean.Elab.PreDefinition.WF.GuessLex
namespace Lean.Elab

View File

@@ -562,7 +562,7 @@ private def elabFieldTypeValue (view : StructFieldView) : TermElabM (Option Expr
| some valStx =>
Term.synthesizeSyntheticMVarsNoPostponing
-- TODO: add forbidden predicate using `shortDeclName` from `view`
let params ← Term.addAutoBoundImplicits params (view.nameId.getTailPos? (canonicalOnly := true))
let params ← Term.addAutoBoundImplicits params
let value ← Term.withoutAutoBoundImplicit <| Term.elabTerm valStx none
registerFailedToInferFieldType view.name (← inferType value) view.nameId
registerFailedToInferDefaultValue view.name value valStx
@@ -572,7 +572,7 @@ private def elabFieldTypeValue (view : StructFieldView) : TermElabM (Option Expr
let type ← Term.elabType typeStx
registerFailedToInferFieldType view.name type typeStx
Term.synthesizeSyntheticMVarsNoPostponing
let params ← Term.addAutoBoundImplicits params (view.nameId.getTailPos? (canonicalOnly := true))
let params ← Term.addAutoBoundImplicits params
match view.value? with
| none =>
let type ← mkForallFVars params type

View File

@@ -132,7 +132,7 @@ where
if let some tk := (← read).cancelTk? then
if tk.isSet then
cleanup
throwInterruptException
throw <| .internal Core.interruptExceptionId
x
/--

View File

@@ -13,7 +13,6 @@ import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.EmbeddedConstraint
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.AC
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.Structures
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.IntToBitVec
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.TypeAnalysis
/-!
This module contains the implementation of `bv_normalize`, the preprocessing tactic for `bv_decide`.
@@ -52,9 +51,6 @@ where
trace[Meta.Tactic.bv] m!"Running preprocessing pipeline on:\n{g}"
let cfg ← PreProcessM.getConfig
if cfg.structures || cfg.enums then
g := (← typeAnalysisPass.run g).get!
if cfg.structures then
let some g' ← structuresPass.run g | return none
g := g'
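As a hedged usage sketch of the pipeline this module drives (assuming a working SAT solver is configured, since `bv_decide` shells out to one):

```lean
-- `bv_decide` first runs the `bv_normalize` preprocessing pipeline shown
-- above, then bit-blasts the remaining goal and hands it to a SAT solver.
example (x y : BitVec 8) : x &&& y = y &&& x := by
  bv_decide
```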

View File

@@ -16,39 +16,17 @@ namespace Frontend.Normalize
open Lean.Meta
/--
Contains the result of the type analysis to be used in the structures and enums pass.
-/
structure TypeAnalysis where
/--
Structures that are interesting for the structures pass.
-/
interestingStructures : Std.HashSet Name := {}
/--
Enum inductives that are interesting for the enums pass.
-/
interestingEnums : Std.HashSet Name := {}
/--
Other types that we've seen that are not interesting, currently only used as a cache.
-/
uninteresting : Std.HashSet Name := {}
structure PreProcessState where
/--
Contains `FVarId` that we already know are in `bv_normalize` simp normal form and thus don't
need to be processed again when we visit the next time.
-/
rewriteCache : Std.HashSet FVarId := {}
/--
Analysis results for the structure and enum pass if required.
-/
typeAnalysis : TypeAnalysis := {}
abbrev PreProcessM : Type → Type := ReaderT BVDecideConfig <| StateRefT PreProcessState MetaM
namespace PreProcessM
@[inline]
def getConfig : PreProcessM BVDecideConfig := read
@[inline]
@@ -59,38 +37,6 @@ def checkRewritten (fvar : FVarId) : PreProcessM Bool := do
def rewriteFinished (fvar : FVarId) : PreProcessM Unit := do
modify (fun s => { s with rewriteCache := s.rewriteCache.insert fvar })
@[inline]
def getTypeAnalysis : PreProcessM TypeAnalysis := do
return (← get).typeAnalysis
@[inline]
def lookupInterestingStructure (n : Name) : PreProcessM (Option Bool) := do
let s ← getTypeAnalysis
if s.uninteresting.contains n then
return some false
else if s.interestingStructures.contains n then
return some true
else
return none
@[inline]
def modifyTypeAnalysis (f : TypeAnalysis → TypeAnalysis) :
PreProcessM Unit := do
modify fun s => { s with typeAnalysis := f s.typeAnalysis }
@[inline]
def markInterestingStructure (n : Name) : PreProcessM Unit := do
modifyTypeAnalysis (fun s => { s with interestingStructures := s.interestingStructures.insert n })
@[inline]
def markInterestingEnum (n : Name) : PreProcessM Unit := do
modifyTypeAnalysis (fun s => { s with interestingEnums := s.interestingEnums.insert n })
@[inline]
def markUninterestingType (n : Name) : PreProcessM Unit := do
modifyTypeAnalysis (fun s => { s with uninteresting := s.uninteresting.insert n })
@[inline]
def run (cfg : BVDecideConfig) (goal : MVarId) (x : PreProcessM α) : MetaM α := do
let hyps ← goal.withContext do getPropHyps
ReaderT.run x cfg |>.run' { rewriteCache := Std.HashSet.empty hyps.size }
@@ -107,7 +53,6 @@ structure Pass where
namespace Pass
@[inline]
def run (pass : Pass) (goal : MVarId) : PreProcessM (Option MVarId) := do
withTraceNode `bv (fun _ => return m!"Running pass: {pass.name} on\n{goal}") do
pass.run' goal

View File

@@ -27,11 +27,43 @@ namespace Frontend.Normalize
open Lean.Meta
/--
Contains a cache for interesting and uninteresting types such that we don't duplicate work in the
structures pass.
-/
structure InterestingStructures where
interesting : Std.HashSet Name := {}
uninteresting : Std.HashSet Name := {}
private abbrev M := StateRefT InterestingStructures MetaM
namespace M
@[inline]
def lookup (n : Name) : M (Option Bool) := do
let s ← get
if s.uninteresting.contains n then
return some false
else if s.interesting.contains n then
return some true
else
return none
@[inline]
def markInteresting (n : Name) : M Unit := do
modify (fun s => {s with interesting := s.interesting.insert n })
@[inline]
def markUninteresting (n : Name) : M Unit := do
modify (fun s => {s with uninteresting := s.uninteresting.insert n })
end M
partial def structuresPass : Pass where
name := `structures
run' goal := do
let interesting := (← PreProcessM.getTypeAnalysis).interestingStructures
if interesting.isEmpty then return goal
let (_, { interesting, .. }) ← checkContext goal |>.run {}
let goals ← goal.casesRec fun decl => do
if decl.isLet || decl.isImplementationDetail then
return false
@@ -45,7 +77,6 @@ where
postprocess (goal : MVarId) (interesting : Std.HashSet Name) : PreProcessM (Option MVarId) := do
goal.withContext do
let mut relevantLemmas : SimpTheoremsArray := #[]
relevantLemmas ← relevantLemmas.addTheorem (.decl ``ne_eq) (← mkConstWithLevelParams ``ne_eq)
for const in interesting do
let constInfo ← getConstInfoInduct const
let ctorName := (← getConstInfoCtor constInfo.ctors.head!).name
@@ -63,5 +94,50 @@ where
let some (_, newGoal) := result? | return none
return newGoal
checkContext (goal : MVarId) : M Unit := do
goal.withContext do
for decl in ← getLCtx do
if !decl.isLet && !decl.isImplementationDetail then
discard <| typeInteresting decl.type
constInterestingCached (n : Name) : M Bool := do
if let some cached ← M.lookup n then
return cached
let interesting ← constInteresting n
if interesting then
M.markInteresting n
return true
else
M.markUninteresting n
return false
constInteresting (n : Name) : M Bool := do
let env ← getEnv
if !isStructure env n then
return false
let constInfo ← getConstInfoInduct n
if constInfo.isRec then
return false
let ctorTyp := (← getConstInfoCtor constInfo.ctors.head!).type
let analyzer state arg := do
return state || (← typeInteresting (← arg.fvarId!.getType))
forallTelescope ctorTyp fun args _ => args.foldlM (init := false) analyzer
typeInteresting (expr : Expr) : M Bool := do
match_expr expr with
| BitVec n => return (← getNatValue? n).isSome
| UInt8 => return true
| UInt16 => return true
| UInt32 => return true
| UInt64 => return true
| USize => return true
| Bool => return true
| _ =>
let some const := expr.getAppFn.constName? | return false
constInterestingCached const
end Frontend.Normalize
end Lean.Elab.Tactic.BVDecide
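As a hedged illustration of what `typeInteresting`/`constInteresting` above accept: a non-recursive structure is interesting as soon as one field has a supported type (the fold combines field results with `||`), while a structure with no such field is cached as uninteresting. Example types are invented for illustration:

```lean
-- Interesting: `header` and `flag` match cases of `typeInteresting`.
structure Packet where
  header : BitVec 16
  flag   : Bool

-- Uninteresting: neither `String` nor `Nat` is matched, so this type
-- lands in the `uninteresting` cache.
structure PacketMeta where
  name  : String
  count : Nat
```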

View File

@@ -1,73 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.Basic
/-!
This file implements the type analysis pass for the structures and enum inductives passes. It figures
out which types, occurring either directly or transitively (e.g. by being contained in a
structure), qualify for further treatment by the structures or enums pass.
-/
namespace Lean.Elab.Tactic.BVDecide
namespace Frontend.Normalize
open Lean.Meta
partial def typeAnalysisPass : Pass where
name := `typeAnalysis
run' goal := do
checkContext goal
return goal
where
checkContext (goal : MVarId) : PreProcessM Unit := do
goal.withContext do
for decl in ← getLCtx do
if !decl.isLet && !decl.isImplementationDetail then
discard <| typeInteresting decl.type
constInteresting (n : Name) : PreProcessM Bool := do
let analysis ← PreProcessM.getTypeAnalysis
if analysis.interestingStructures.contains n || analysis.interestingEnums.contains n then
return true
else if analysis.uninteresting.contains n then
return false
let env ← getEnv
if isStructure env n then
let constInfo ← getConstInfoInduct n
if constInfo.isRec then
PreProcessM.markUninterestingType n
return false
let ctorTyp := (← getConstInfoCtor constInfo.ctors.head!).type
let analyzer state arg := do return state || (← typeInteresting (← arg.fvarId!.getType))
let interesting ← forallTelescope ctorTyp fun args _ => args.foldlM (init := false) analyzer
if interesting then
PreProcessM.markInterestingStructure n
return interesting
else if isEnumType n then
PreProcessM.markInterestingEnum n
return true
else
PreProcessM.markUninterestingType n
return false
typeInteresting (expr : Expr) : PreProcessM Bool := do
match_expr expr with
| BitVec n => return (← getNatValue? n).isSome
| UInt8 => return true
| UInt16 => return true
| UInt32 => return true
| UInt64 => return true
| USize => return true
| Bool => return true
| _ =>
let some const := expr.getAppFn.constName? | return false
constInteresting const
end Frontend.Normalize
end Lean.Elab.Tactic.BVDecide

View File

@@ -224,26 +224,26 @@ where
guard <| state.term.meta.core.traceState.traces.size == 0
guard <| traceState.traces.size == 0
return old.val.get
let promise ← IO.Promise.new
-- Store new unfolding in the snapshot tree
snap.new.resolve {
stx := stx'
diagnostics := .empty
inner? := none
finished := .pure {
Language.withAlwaysResolvedPromise fun promise => do
-- Store new unfolding in the snapshot tree
snap.new.resolve {
stx := stx'
diagnostics := .empty
state? := (← Tactic.saveState)
inner? := none
finished := .pure {
diagnostics := .empty
state? := (← Tactic.saveState)
}
next := #[{ range? := stx'.getRange?, task := promise.result }]
}
next := #[{ range? := stx'.getRange?, task := promise.resultD default }]
}
-- Update `tacSnap?` to old unfolding
withTheReader Term.Context ({ · with tacSnap? := some {
new := promise
old? := do
let old ← old?
return ⟨old.stx, (← old.next.get? 0)⟩
} }) do
evalTactic stx'
-- Update `tacSnap?` to old unfolding
withTheReader Term.Context ({ · with tacSnap? := some {
new := promise
old? := do
let old ← old?
return ⟨old.stx, (← old.next.get? 0)⟩
} }) do
evalTactic stx'
return
evalTactic stx'
catch ex => handleEx s failures ex (expandEval s ms evalFns)

View File

@@ -86,41 +86,41 @@ where
-- snapshots than necessary.
if let some range := range? then
range? := some { range with stop := range.stop.byteIdx + tac.getTrailingSize }
let next ← IO.Promise.new
let finished ← IO.Promise.new
let inner ← IO.Promise.new
snap.new.resolve {
desc := tac.getKind.toString
diagnostics := .empty
stx := tac
inner? := some { range?, task := inner.resultD default }
finished := { range?, task := finished.resultD default }
next := #[{ range? := stxs.getRange?, task := next.resultD default }]
}
-- Run `tac` in a fresh info tree state and store resulting state in snapshot for
-- incremental reporting, then add back saved trees. Here we rely on `evalTactic`
-- producing at most one info tree as otherwise `getInfoTreeWithContext?` would panic.
let trees ← getResetInfoTrees
try
let (_, state) ← withRestoreOrSaveFull reusableResult?
-- set up nested reuse; `evalTactic` will check for `isIncrementalElab`
(tacSnap? := some { old? := oldInner?, new := inner }) do
Term.withReuseContext tac do
evalTactic tac
finished.resolve {
diagnostics := (← Language.Snapshot.Diagnostics.ofMessageLog
(← Core.getAndEmptyMessageLog))
infoTree? := (← Term.getInfoTreeWithContext?)
state? := state
}
finally
modifyInfoState fun s => { s with trees := trees ++ s.trees }
withAlwaysResolvedPromise fun next => do
withAlwaysResolvedPromise fun finished => do
withAlwaysResolvedPromise fun inner => do
snap.new.resolve {
desc := tac.getKind.toString
diagnostics := .empty
stx := tac
inner? := some { range?, task := inner.result }
finished := { range?, task := finished.result }
next := #[{ range? := stxs.getRange?, task := next.result }]
}
-- Run `tac` in a fresh info tree state and store resulting state in snapshot for
-- incremental reporting, then add back saved trees. Here we rely on `evalTactic`
-- producing at most one info tree as otherwise `getInfoTreeWithContext?` would panic.
let trees ← getResetInfoTrees
try
let (_, state) ← withRestoreOrSaveFull reusableResult?
-- set up nested reuse; `evalTactic` will check for `isIncrementalElab`
(tacSnap? := some { old? := oldInner?, new := inner }) do
Term.withReuseContext tac do
evalTactic tac
finished.resolve {
diagnostics := (← Language.Snapshot.Diagnostics.ofMessageLog
(← Core.getAndEmptyMessageLog))
infoTree? := (← Term.getInfoTreeWithContext?)
state? := state
}
finally
modifyInfoState fun s => { s with trees := trees ++ s.trees }
withTheReader Term.Context ({ · with tacSnap? := some {
new := next
old? := oldNext?
} }) do
goOdd stxs
withTheReader Term.Context ({ · with tacSnap? := some {
new := next
old? := oldNext?
} }) do
goOdd stxs
-- `stx[0]` is the next separator, if any
goOdd stx := do
if stx.getNumArgs == 0 then

View File

@@ -23,10 +23,13 @@ private def mkKey (e : Expr) (simp : Bool) : MetaM (Array Key) := do
mkPath lhs
else if let some (lhs, _) := type.iff? then
mkPath lhs
else if let some (_, lhs, rhs) := type.ne? then
mkPath (← mkEq lhs rhs)
else if let some (_, lhs, _) := type.ne? then
mkPath lhs
else if let some p := type.not? then
mkPath p
match p.eq? with
| some (_, lhs, _) =>
mkPath lhs
| _ => mkPath p
else
mkPath type
else

View File

@@ -156,12 +156,13 @@ private def elabFallback (fallback? : Option Term) : TermElabM (Grind.GoalM Unit
pure auxDeclName
unsafe evalConst (Grind.GoalM Unit) auxDeclName
def evalGrindCore
private def evalGrindCore
(ref : Syntax)
(config : Grind.Config)
(config : TSyntax `Lean.Parser.Tactic.optConfig)
(only : Option Syntax)
(params : Option (Syntax.TSepArray `Lean.Parser.Tactic.grindParam ","))
(fallback? : Option Term)
(trace : Bool)
: TacticM Grind.Trace := do
let fallback ← elabFallback fallback?
let only := only.isSome
@@ -169,32 +170,16 @@ def evalGrindCore
if Grind.grind.warning.get (← getOptions) then
logWarningAt ref "The `grind` tactic is experimental and still under development. Avoid using it in production projects"
let declName := (← Term.getDeclName?).getD `_grind
let mut config ← elabGrindConfig config
if trace then
config := { config with trace }
withMainContext do
let result ← grind (← getMainGoal) config only params declName fallback
replaceMainGoal []
return result
/-- Position for the `[..]` child syntax in the `grind` tactic. -/
def grindParamsPos := 3
/-- Position for the `only` child syntax in the `grind` tactic. -/
def grindOnlyPos := 2
def isGrindOnly (stx : TSyntax `tactic) : Bool :=
stx.raw.getKind == ``Parser.Tactic.grind && !stx.raw[grindOnlyPos].isNone
def setGrindParams (stx : TSyntax `tactic) (params : Array Syntax) : TSyntax `tactic :=
if params.isEmpty then
stx.raw.setArg grindParamsPos (mkNullNode)
else
let paramsStx := #[mkAtom "[", (mkAtom ",").mkSep params, mkAtom "]"]
stx.raw.setArg grindParamsPos (mkNullNode paramsStx)
def getGrindParams (stx : TSyntax `tactic) : Array Syntax :=
stx.raw[grindParamsPos][1].getSepArgs
def mkGrindOnly
(config : TSyntax ``Lean.Parser.Tactic.optConfig)
private def mkGrindOnly
(config : TSyntax `Lean.Parser.Tactic.optConfig)
(fallback? : Option Term)
(trace : Grind.Trace)
: MetaM (TSyntax `tactic) := do
@@ -237,22 +222,23 @@ def mkGrindOnly
`(tactic| grind $config:optConfig only on_failure $fallback)
else
`(tactic| grind $config:optConfig only)
return setGrindParams result params
if params.isEmpty then
return result
else
let paramsStx := #[mkAtom "[", (mkAtom ",").mkSep params, mkAtom "]"]
return result.raw.setArg 3 (mkNullNode paramsStx)
@[builtin_tactic Lean.Parser.Tactic.grind] def evalGrind : Tactic := fun stx => do
match stx with
| `(tactic| grind $config:optConfig $[only%$only]? $[ [$params:grindParam,*] ]? $[on_failure $fallback?]?) =>
let config ← elabGrindConfig config
discard <| evalGrindCore stx config only params fallback?
discard <| evalGrindCore stx config only params fallback? false
| _ => throwUnsupportedSyntax
@[builtin_tactic Lean.Parser.Tactic.grindTrace] def evalGrindTrace : Tactic := fun stx => do
match stx with
| `(tactic| grind?%$tk $configStx:optConfig $[only%$only]? $[ [$params:grindParam,*] ]? $[on_failure $fallback?]?) =>
let config ← elabGrindConfig configStx
let config := { config with trace := true }
let trace ← evalGrindCore stx config only params fallback?
let stx ← mkGrindOnly configStx fallback? trace
| `(tactic| grind?%$tk $config:optConfig $[only%$only]? $[ [$params:grindParam,*] ]? $[on_failure $fallback?]?) =>
let trace ← evalGrindCore stx config only params fallback? true
let stx ← mkGrindOnly config fallback? trace
Tactic.TryThis.addSuggestion tk stx (origSpan? := ← getRef)
| _ => throwUnsupportedSyntax
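Usage sketch for the two elaborators above (hypothetical goal; `grind?` behaves like `grind` but additionally emits a `grind only [...]` suggestion via `mkGrindOnly` and `Tactic.TryThis.addSuggestion`):

```lean
-- `grind` closes simple congruence/equality goals like this one.
example (a b : Nat) (h : a = b) : b = a := by
  grind
-- Replacing `grind` with `grind?` proves the goal the same way and then
-- suggests a trimmed `grind only [...]` invocation in the editor.
```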

View File

@@ -278,30 +278,30 @@ where
if let some tacSnap := (← readThe Term.Context).tacSnap? then
-- incrementality: create a new promise for each alternative, resolve current snapshot to
-- them, eventually put each of them back in `Context.tacSnap?` in `applyAltStx`
let finished ← IO.Promise.new
let altPromises ← altStxs.mapM fun _ => IO.Promise.new
tacSnap.new.resolve {
-- save all relevant syntax here for comparison with next document version
stx := mkNullNode altStxs
diagnostics := .empty
inner? := none
finished := { range? := none, task := finished.resultD default }
next := Array.zipWith
(fun stx prom => { range? := stx.getRange?, task := prom.resultD default })
altStxs altPromises
}
goWithIncremental <| altPromises.mapIdx fun i prom => {
old? := do
let old ← tacSnap.old?
-- waiting is fine here: this is the old version of the snapshot resolved above
-- immediately at the beginning of the tactic
let old := old.val.get
-- use old version of `mkNullNode altsSyntax` as guard, will be compared with new
-- version and picked apart in `applyAltStx`
return ⟨old.stx, (← old.next[i]?)⟩
new := prom
}
finished.resolve { diagnostics := .empty, state? := (← saveState) }
withAlwaysResolvedPromise fun finished => do
withAlwaysResolvedPromises altStxs.size fun altPromises => do
tacSnap.new.resolve {
-- save all relevant syntax here for comparison with next document version
stx := mkNullNode altStxs
diagnostics := .empty
inner? := none
finished := { range? := none, task := finished.result }
next := Array.zipWith
(fun stx prom => { range? := stx.getRange?, task := prom.result })
altStxs altPromises
}
goWithIncremental <| altPromises.mapIdx fun i prom => {
old? := do
let old ← tacSnap.old?
-- waiting is fine here: this is the old version of the snapshot resolved above
-- immediately at the beginning of the tactic
let old := old.val.get
-- use old version of `mkNullNode altsSyntax` as guard, will be compared with new
-- version and picked apart in `applyAltStx`
return ⟨old.stx, (← old.next[i]?)⟩
new := prom
}
finished.resolve { diagnostics := .empty, state? := (← saveState) }
return
goWithIncremental #[]

View File

@@ -15,8 +15,6 @@ open Meta
def rewriteTarget (stx : Syntax) (symm : Bool) (config : Rewrite.Config := {}) : TacticM Unit := do
Term.withSynthesize <| withMainContext do
let e ← elabTerm stx none true
if e.hasSyntheticSorry then
throwAbortTactic
let r ← (← getMainGoal).rewrite (← getMainTarget) e symm (config := config)
let mvarId' ← (← getMainGoal).replaceTargetEq r.eNew r.eqProof
replaceMainGoal (mvarId' :: r.mvarIds)
@@ -27,8 +25,6 @@ def rewriteLocalDecl (stx : Syntax) (symm : Bool) (fvarId : FVarId) (config : Re
-- See issues #2711 and #2727.
let rwResult ← Term.withSynthesize <| withMainContext do
let e ← elabTerm stx none true
if e.hasSyntheticSorry then
throwAbortTactic
let localDecl ← fvarId.getDecl
(← getMainGoal).rewrite localDecl.type e symm (config := config)
let replaceResult ← (← getMainGoal).replaceLocalDecl fvarId rwResult.eNew rwResult.eqProof

Some files were not shown because too many files have changed in this diff