Compare commits

..

99 Commits

Author SHA1 Message Date
Kim Morrison
e4a4bfa41b chore: update some Array doc-strings 2025-01-29 12:11:50 +11:00
Kim Morrison
9b5fadd275 chore: remove unnecessary simp priorities (#6812)
This PR removes simp priorities that are not needed. Some of these will
probably cause complaints from the `simpNF` linter downstream in
Batteries, which I will re-address separately.
2025-01-28 23:50:33 +00:00
Kim Morrison
7e8af0fc9d feat: rename List.enum(From) to List.zipIdx, and Array/Vector.zipWithIndex to zipIdx (#6800)
This PR uniformizes the naming of `enum`/`enumFrom` (on `List`) and
`zipWithIndex` (on `Array` and `Vector`), replacing all with `zipIdx`. At
the same time, we generalize to add an optional `Nat` parameter for the
initial value of the index (which previously existed, only for `List`,
as the separate function `enumFrom`).
2025-01-28 23:34:30 +00:00
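The unified API described above might be used as follows (a sketch; the pair order and the optional start index follow the PR description):

```lean
-- After the rename, `zipIdx` pairs each element with its index and takes an
-- optional starting index (exact output shape assumed from the PR description).
#eval ['a', 'b', 'c'].zipIdx     -- each element paired with its index, from 0
#eval ['a', 'b', 'c'].zipIdx 5   -- indices start at 5
```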
Kim Morrison
f10d0d07d9 feat: lemmas about BitVec.setWidth (#6808)
This PR adds simp lemmas replacing `BitVec.setWidth'` with `setWidth`,
and conditionally simplifying `setWidth v (setWidth w v)`.

---------

Co-authored-by: Tobias Grosser <tobias@grosser.es>
2025-01-28 23:33:45 +00:00
Joachim Breitner
312759e369 fix: injectivity theorems for more prelude inductives (#6826)
This PR adds injectivity theorems for inductives that did not get them
automatically (because they are defined too early) and were also not yet
added manually later.

It also adds a test case to notice when new ones fall through.

It does not add them for clearly meta-programming related types that are
not yet defined in `Init/Core.lean`, and uses `#guard_msgs` as an
allowlist.

---------

Co-authored-by: Kim Morrison <scott.morrison@gmail.com>
2025-01-28 23:09:28 +00:00
Vlad Tsyrklevich
1d9439752c feat: teach bv_normalize that (x >> x) = 0 (#6818)
This PR adds a BitVec lemma that `(x >> x) = 0` and plumbs it through to
bv_normalize. I also move some theorems I found useful to the top of the
ushiftRight section.
2025-01-28 20:56:21 +00:00
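The new rewrite should let `bv_normalize` close goals of this shape directly (a sketch based on the PR description):

```lean
-- `x >>> x = 0` holds for any bitvector: shifting right by `x.toNat` positions
-- clears all bits, since `x.toNat < 2 ^ x.toNat`.
example (x : BitVec 8) : x >>> x = 0 := by
  bv_normalize
```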
Leonardo de Moura
9f5a9a071a feat: add %reset_grind_attrs (#6824)
This PR introduces the auxiliary command `%reset_grind_attrs` for
debugging purposes. It is particularly useful for writing self-contained
tests.
2025-01-28 18:29:02 +00:00
Leonardo de Moura
26bc8c5b2a feat: builtin case splits for grind (#6822)
This PR adds a few builtin case-splits for `grind`. They are similar to
builtin `simp` theorems. They reduce the noise in the tactics produced
by `grind?`.
2025-01-28 17:30:36 +00:00
Kim Morrison
eea2d49078 chore: lower List/Array/Vector.mem_map simp priority (#6815)
This PR lowers the simp priority of `List/Array/Vector.mem_map`, as
downstream in Mathlib many lemmas currently need their priority raised
to fire before this.
2025-01-28 12:23:24 +00:00
Markus Himmel
f9d3deaafe perf: improve elaboration performance of Std.Data.DHashMap.Internal.RawLemmas (#6814)
This PR optimizes the proofs in the internal file
`Std/Data/DHashMap/Internal/RawLemmas.lean` so that the file is quicker
to elaborate.
2025-01-28 11:04:42 +00:00
Sebastian Ullrich
e4364e747f chore: temporarily disable async in server (#6813)
... pending an interruption bug fix and further testing
2025-01-28 10:42:17 +00:00
Eric Wieser
6aa6407af1 perf: use C23's free_sized when available (#6598)
See https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2699.htm for an
explanation of this feature.

---------

Co-authored-by: Chris Kennelly <ckennelly@google.com>
2025-01-28 10:17:15 +00:00
Eric Wieser
9247206c0e doc: clarify that lean_initialize_runtime_module is implied by lean_initialize (#6677)
Calling both runs the initializers twice, resulting in a memory leak
2025-01-28 10:12:59 +00:00
Vlad Tsyrklevich
c7c1e091c9 feat: add BitVec comparison lemmas to bv_normalize (#6799)
This PR adds a number of simple comparison lemmas against the top/bottom
element of `BitVec`. They are then applied to teach bv_normalize that
`(a<1) = (a==0)` and to remove an intermediate proof that is no longer
necessary along the way.
2025-01-28 08:48:11 +00:00
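Stated propositionally, the new `(a<1) = (a==0)` rewrite corresponds to goals like the following (a sketch; that `bv_normalize` closes it on its own is an assumption):

```lean
-- `a < 1` can only hold for the bottom element, so it is equivalent to `a = 0`.
example (a : BitVec 8) : a < 1 ↔ a = 0 := by
  bv_normalize
```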
Leonardo de Moura
20c616503a feat: add grind? (#6810)
This PR implements a basic `grind?` tactic companion for `grind`. We
will add more bells and whistles later.
2025-01-28 04:17:25 +00:00
Kim Morrison
104b3519d7 feat: add Fin.ofNat'_zero (#6806)
This PR adds a simp lemma for `Fin.ofNat'`.
2025-01-28 02:00:01 +00:00
Kim Morrison
d8fcfead97 feat: add LawfulMonad helper simp lemmas (#6805)
This PR adds helper lemmas in the `LawfulMonad` namespace, which
sometimes fire via `simp` when the original versions taking
`LawfulApplicative` or `Functor` do not fire.
2025-01-28 01:59:55 +00:00
Kim Morrison
d0b947bf52 chore: add @[simp] to Option.not_mem_none (#6804)
This PR improves simp lemma confluence.
2025-01-28 01:59:47 +00:00
Mac Malone
5f0fea60a6 refactor: lake: deprecate -U (#6798)
This PR deprecates the `-U` shorthand for the `--update` option.

It is likely the `-U` option will be used for something different in the
future, so deprecating it now seems wise.
2025-01-28 01:54:55 +00:00
Mac Malone
3e54597db4 feat: lake query (#6323)
This PR adds a new Lake CLI command, `lake query`, that both builds
targets and outputs their results. It can produce raw text or
JSON-formatted output (with `--json` / `-J`).

This PR removes the `lean.` prefix from the module import facets (for
ease-of-use in the `lake query` CLI). It also renames the package
`deps` facet to `transDeps`. The new `deps` facet just returns the
package's direct dependencies.
2025-01-28 01:43:03 +00:00
Kim Morrison
eb1c9b9ab2 chore: two BitVec lemmas that help simp confluence (#6807)
This PR adds two simple `BitVec` lemmas which improve `simp` local
confluence.
2025-01-28 01:12:05 +00:00
Kim Morrison
4d66e7bdc0 feat: add List.modifyHead_dropLast (#6803)
This PR adds the simp lemma `List.modifyHead_dropLast`. This is one of
many small PRs that will improve simp lemma confluence.
2025-01-28 00:25:58 +00:00
Kim Morrison
f8660485d7 feat: Option.elim_pmap, improving simp confluence (#6802)
This PR adds the simp lemma `Option.elim_pmap`. This is one of many
small PRs that will improve simp lemma confluence.
2025-01-28 00:21:10 +00:00
Leonardo de Moura
64766f8724 fix: offset constraint propagation in grind (#6801)
This PR fixes a bug in the exhaustive offset constraint propagation
module used in `grind`.
2025-01-27 23:43:31 +00:00
Marc Huisinga
f64bce6ef1 fix: auto-completion performance regression (#6794)
This PR fixes a significant auto-completion performance regression that
was introduced in #5666, i.e. v4.14.0.

#5666 introduced tactic docstrings, which were attempted to be collected
for every single completion item. This is slow for hundreds of thousands
of completion items. To fix this, this PR moves the docstring
computation into the completion item resolution, which is only called
when users select a specific completion item in the UI.

A downside of this approach is that we currently can't test completion
item resolution, so we lose a few tests that cover docstrings in
completions in this PR.
2025-01-27 21:15:09 +00:00
Marc Huisinga
0160aa1a89 test: identifier completion benchmark (#6796)
Adds a basic identifier completion benchmark so that bugs like the one
in #6794 are caught earlier.
2025-01-27 19:31:32 +00:00
Joachim Breitner
3418d6db8e fix: more robust equational theorems generation for partial_fixpoint (#6790)
This PR fixes an issue with the generation of equational theorems from
`partial_fixpoint` when case-splitting is necessary. Fixes #6786.
2025-01-27 14:00:55 +00:00
Vlad Tsyrklevich
3aea0fd810 feat: add comparison lemmas to bv_normalize (#6788)
This PR teaches bv_normalize that `!(x < x)` and `!(x < 0)`.
2025-01-27 13:44:44 +00:00
Joachim Breitner
4ca98dcca2 doc: typos in partial_fixpoint related docstrings (#6787)
H/T David
2025-01-27 09:43:09 +00:00
Markus Himmel
55b0bed5df doc: standard library vision and call for contributions (#6762) 2025-01-27 09:07:02 +00:00
Henrik Böving
d86a408944 feat: bv_decide can reason about equality of structures (#6740)
This PR extends `bv_decide`'s structure reasoning support for also
reasoning about equalities of supported structures.
2025-01-27 08:11:43 +00:00
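A goal of the kind this enables might look as follows (a sketch; the structure is illustrative, and `bv_decide` requires an external SAT solver at elaboration time):

```lean
structure Flags where
  lo : BitVec 4
  hi : BitVec 4

-- Equality of a non-recursive structure over supported types can now be
-- reduced to equalities of its bitvector fields (illustrative example).
example (a b : Flags) (h1 : a.lo = b.lo) (h2 : a.hi = b.hi) : a = b := by
  bv_decide
```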
Leonardo de Moura
69a73a18fb feat: grind? infrastructure (#6785)
This PR adds infrastructure for the `grind?` tactic. It also adds the
new modifier `usr` which allows users to write `grind only [usr
thmName]` to instruct `grind` to only use theorem `thmName`, but using
the patterns specified with the command `grind_pattern`.
2025-01-27 01:31:25 +00:00
Leonardo de Moura
98bd162ad4 feat: close goals using match-expression conditions in grind (#6783)
This PR adds support for closing goals using `match`-expression
conditions that are known to be true in the `grind` tactic state.
`grind` can now solve goals such as:
```lean
def f : List Nat → List Nat → Nat
  | _, 1 :: _ :: _ => 1
  | _, _ :: _ => 2
  | _, _  => 0

example : z = a :: as → y = z → f x y > 0
```
Without `grind`, we would use the `split` tactic. The first two goals,
corresponding to the first two alternatives, are closed using `simp`,
and the third using the `match`-expression condition produced by
`split`. The proof would proceed as follows.
```lean
example : z = a :: as → y = z → f x y > 0 := by
  intros
  unfold f
  split
  next => simp
  next => simp
  next h =>
    /-
    ...
    _ : z = a :: as
    _ : y = z
    ...
    h : ∀ (head : Nat) (tail : List Nat), y = head :: tail → False
    |- 0 > 0
    -/
    subst_vars
    /-
    ...
    h : ∀ (head : Nat) (tail : List Nat), a :: as = head :: tail → False
    |- 0 > 0
    -/
    have : False := h a as rfl
    contradiction
```
Here is the same proof using `grind`.
```lean
example : z = a :: as → y = z → f x y > 0 := by
  grind [f.eq_def]
```
2025-01-26 17:13:11 +00:00
Joachim Breitner
ba95dbc36b feat: zetaUnused option (implementation) (#6755)
This PR implements the `zetaUnused` simp and reduction option (added in
#6754).

True by default, and implied by `zeta`, this can be turned off to make
simp even more careful about preserving the expression structure,
including unused let and have expressions.

Breaking change: The `split` tactic no longer removes unused let and
have expressions as a side-effect, in rare cases this may break proofs.
`dsimp only` can be used to remove unused have and let expressions.
2025-01-26 11:14:12 +00:00
Mac Malone
6278839534 refactor: lake: all targets produce jobs (#6780)
This PR makes all targets and all `fetch` calls produce a `Job` of some
value. As part of this change, facet definitions (e.g., `library_data`,
`module_data`, `package_data`) and Lake type families (e.g.,
`FamilyOut`) should no longer include `Job` in their types (as this is
now implicit).
2025-01-26 05:03:07 +00:00
Leonardo de Moura
849a252b20 fix: case split on data in grind (#6781)
This PR fixes the support for case splitting on data in the `grind`
tactic. The following example works now:
```lean
inductive C where
  | a | b | c

def f : C → Nat
  | .a => 2
  | .b => 3
  | .c => 4

example : f x > 1 := by
  grind [
      f, -- instructs `grind` to use `f`-equation theorems, 
      C -- instructs `grind` to case-split on free variables of type `C`
  ]
```
2025-01-26 02:14:08 +00:00
Leonardo de Moura
ca56c5ecc0 feat: improve support for match-expressions in grind (#6779)
This PR improves the support for `match`-expressions in the `grind`
tactic.
2025-01-26 00:50:29 +00:00
Leonardo de Moura
d10666731c fix: assignment for offset constraints in grind (#6778)
This PR fixes the assignment produced by `grind` to satisfy the offset
constraints in a goal.
2025-01-25 23:21:53 +00:00
Leonardo de Moura
6dbb54d221 fix: offset terms internalization (#6777)
This PR fixes a bug in the internalization of offset terms in the
`grind` tactic. For example, `grind` was failing to solve the following
example because of this bug.
```lean
example (f : Nat → Nat) : f (a + 1) = 1 → a = 0 → f 1 = 1 := by
  grind
```
2025-01-25 21:14:48 +00:00
Cameron Zwarich
cc260dd231 feat: support for csimp theorems in toLCNF (#6757)
This PR adds support for applying `csimp` theorems in `toLCNF`.
2025-01-25 21:07:08 +00:00
Leonardo de Moura
9565334c0e fix: Grind.MatchCond in checkParents (#6776)
This PR fixes the `checkParents` sanity checker used in `grind`. It did
not have support for checking the auxiliary gadget `Grind.MatchCond`.
2025-01-25 19:53:26 +00:00
Markus Himmel
2fa38e6ceb fix: suggest correct trace option name in partial_fixpoint error message (#6774)
This PR fixes a `partial_fixpoint` error message to suggest the option
`trace.Elab.Tactic.monotonicity` rather than the nonexistent
`trace.Elab.Tactic.partial_monotonicity`.
2025-01-25 14:42:15 +00:00
Markus Himmel
056d1dbeef fix: typo in partial_fixpoint docstring (#6775)
This PR fixes a typo in the `partial_fixpoint` hover docstring.
2025-01-25 14:41:52 +00:00
Rob23oba
e8bbba06b7 fix: fix builtin simproc Nat.reduceAnd (#6773)
This PR fixes a typo that prevented `Nat.reduceAnd` from working
correctly.

Closes #6772
2025-01-25 12:57:24 +00:00
Mac Malone
58c7a4f15e feat: lake: lift FetchM into JobM (and vice versa) (#6771)
This PR enables `FetchM` to be run from `JobM` / `SpawnM` and
vice-versa. This allows calls of `fetch` to asynchronously depend on the
outputs of other jobs.
2025-01-25 03:59:47 +00:00
Mac Malone
c8be581bc8 refactor: lake: use Job for all builtin facets (#6418)
This PR alters all builtin Lake facets to produce `Job` objects.
2025-01-25 02:53:21 +00:00
Siddharth
c6e244d811 feat: BitVec.shift x (n#w) -> shift x (n % 2^w) (#6767)
This PR adds lemmas to rewrite
`BitVec.shiftLeft,shiftRight,sshiftRight'` by a `BitVec.ofNat` into a
shift-by-natural number. This will be used to canonicalize shifts by
constant bitvectors into shift by constant numbers, which have further
rewrites on them if the number is a power of two.
2025-01-24 17:12:34 +00:00
Siddharth
044bf85fe9 feat: commute BitVec.extractLsb(')? with bitwise ops (#6747)
This PR adds the ability to push `BitVec.extractLsb` and
`BitVec.extractLsb'` with bitwise operations. This is useful for
constant-folding extracts.
2025-01-24 15:23:30 +00:00
Siddharth
1059e25ca2 feat: BitVec.shiftLeft in terms of extractLsb' (#6743)
This PR adds rewrites that normalize left shifts by extracting bits and
concatenating zeroes. If the shift amount is larger than the bit-width,
then the resulting bitvector is zero.

```lean
theorem shiftLeft_eq_zero {x : BitVec w} {n : Nat} (hn : w ≤ n) : x <<< n = 0#w

theorem shiftLeft_eq_concat_of_lt {x : BitVec w} {n : Nat} (hn : n < w) :
    x <<< n = ((x.extractLsb' 0 (w-n)).append (BitVec.zero n)).cast (by omega)
```
2025-01-24 15:14:50 +00:00
Leonardo de Moura
c70f4064b4 fix: heterogenenous equality support in match conditions within grind (#6761)
This PR fixes issues in `grind` when processing `match`-expressions with
indexed families.
2025-01-24 04:08:29 +00:00
Sebastian Ullrich
757899a7d1 chore: run test suite with Elab.async for more coverage 2025-01-23 19:07:31 -07:00
Sebastian Ullrich
a901e34362 perf: avoid cross-thread environment extension state synchronization for now 2025-01-23 19:07:31 -07:00
Sebastian Ullrich
bab10cc2b5 feat: asynchronous kernel checking 2025-01-23 19:07:31 -07:00
Sebastian Ullrich
d26dbe73d5 fix: do not double-report snapshotTasks after wrapAsyncAsSnapshot 2025-01-23 19:07:31 -07:00
Sebastian Ullrich
214093e6c4 fix: prevent Task.get deadlocks from threadpool starvation (#6758)
This PR prevents deadlocks from non-cyclical task waits that may
otherwise occur during parallel elaboration with small threadpool sizes.
2025-01-23 23:01:39 +00:00
Lean stage0 autoupdater
ebda2d4d25 chore: update stage0 2025-01-23 15:28:44 +00:00
Joachim Breitner
7e03920bbb feat: zetaUnused option (option only) (#6754)
This PR adds the `+zetaUnused` option.

Implementation to follow after the stage0 update.
2025-01-23 14:37:41 +00:00
Jon Eugster
d033804190 doc: remove duplicated sentence in Lean.Syntax.node (#6752) 2025-01-23 11:43:48 +00:00
Joachim Breitner
56733b953e refactor: TerminationArgument → TerminationMeasure (#6727)
This PR aligns the terminology of the code with the one used in the
reference manual, as developed with and refined by @david-christiansen.
2025-01-23 10:41:38 +00:00
Lean stage0 autoupdater
c073da20ce chore: update stage0 2025-01-23 08:33:17 +00:00
damiano
d8bcd6a32e doc: correspondence ModuleIdx <--> Environment.moduleNames (#6749)
This PR documents the equality between the `ModuleIdx` of a module and
the index of the same module in the `moduleNames` array.

I asked about this in the Office hours and it was confirmed that this is
a current feature and one that is likely not to change!
2025-01-23 07:47:38 +00:00
Leonardo de Moura
f35a602070 feat: use cast to "fix" types in the E-matching module within grind (#6750)
This PR adds support for fixing type mismatches using `cast` while
instantiating quantifiers in the E-matching module used by the grind
tactic.
2025-01-23 03:36:20 +00:00
Leonardo de Moura
14841ad1ed fix: bugs in grind (#6748)
This PR fixes a few bugs in the `grind` tactic: missing issues, bad
error messages, incorrect threshold in the canonicalizer, and bug in the
ground pattern internalizer.
2025-01-22 21:59:58 +00:00
Siddharth
5f3c0daf3d feat: BitVec.ushiftRight in terms of extractLsb' (#6745)
This PR supports rewriting `ushiftRight` in terms of `extractLsb'`. This
is the companion PR to #6743 which adds the similar lemmas about
`shiftLeft`.


```lean
theorem ushiftRight_eq_zero {x : BitVec w} {n : Nat} (hn : w ≤ n) :
    x >>> n = 0#w

theorem ushiftRight_eq_extractLsb'_of_lt {x : BitVec w} {n : Nat} (hn : n < w) :
    x >>> n = ((0#n) ++ (x.extractLsb' n (w - n))).cast (by omega)
```
2025-01-22 19:14:20 +00:00
Siddharth
6befda831d feat: add twoPow multiplication lemmas (#6742)
This PR adds lemmas that show what happens when multiplying an arbitrary
term by `twoPow`, as well as multiplying two `twoPow`s together.

This will be followed up by a PR that uses these to build a simproc to
canonicalize `twoPow w i * x` and `x * twoPow w i`.
2025-01-22 19:05:17 +00:00
Leonardo de Moura
6595ca8f29 feat: improve equation theorem support in grind (#6746)
This PR ensures that conditional equation theorems for function
definitions are handled correctly in `grind`. We use the same
infrastructure built for `match`-expression equations. Recall that in
both cases, these theorems are conditional when there are overlapping
patterns.
2025-01-22 18:41:09 +00:00
Sebastian Ullrich
91e261da38 chore: disable Elab.async on the cmdline for now (#6722)
Avoids build time overhead until the option is proven to speed up
average projects. Adds Init.Prelude (many tiny declarations, "worst
case") and Init.List.Sublist (many nontrivial theorems, "best case")
under -DElab.async=true as new benchmarks for tracking.
2025-01-22 18:25:47 +00:00
Henrik Böving
6ebce42142 perf: fast path for multiplication with constants in bv_decide (#6739)
This PR adds a fast path for bitblasting multiplication with constants
in `bv_decide`.

While the circuit generated is the same (as the AIG already performs
constant folding) this avoids calling out to the shift and addition
bitblaster unless required. Thus the overall time to generate the
circuit is reduced. Inspired by
[bitwuzla](25d77f819c/src/lib/bitblast/bitblaster.h (L454)).
2025-01-22 10:32:47 +00:00
Henrik Böving
b6db90a316 doc: mention subscript j in the lexical structure (#6738)
This PR updates our lexical structure documentation to mention the newly
supported ⱼ which lives in a separate unicode block and is thus not
captured by the current ranges.
2025-01-22 09:10:31 +00:00
Henrik Böving
7706b876f6 feat: bv_decide support for structures of supported types (#6724)
This PR adds support for `bv_decide` to automatically split up
non-recursive structures that contain information about supported types.
It can be controlled using the new `structures` field in the `bv_decide`
config.
2025-01-22 09:01:43 +00:00
Leonardo de Moura
9b74c07767 feat: lazy ite branch internalization in grind (#6737)
This PR ensures that the branches of an `if-then-else` term are
internalized only after establishing the truth value of the condition.
This change makes its behavior consistent with the `match`-expression
and dependent `if-then-else` behavior in `grind`.
This feature is particularly important for recursive functions defined
by well-founded recursion and `if-then-else`. Without lazy
`if-then-else` branch internalization, the equation theorem for the
recursive function would unfold until reaching the generation-depth
threshold before performing any case analysis. See the new tests for an
example.
2025-01-22 05:22:31 +00:00
Leonardo de Moura
533af01dab feat: improve grind canonicalizer (#6736)
This PR ensures the canonicalizer used in `grind` does not waste time
checking whether terms with different types are definitionally equal.
2025-01-22 03:59:45 +00:00
Leonardo de Moura
de31faa470 feat: case splitting match-expressions with overlapping patterns in grind (#6735)
This PR adds support for case splitting on `match`-expressions with
overlapping patterns to the `grind` tactic. `grind` can now solve
examples such as:
```lean
inductive S where
  | mk1 (n : Nat)
  | mk2 (n : Nat) (s : S)
  | mk3 (n : Bool)
  | mk4 (s1 s2 : S)

def g (x y : S) :=
  match x, y with
  | .mk1 a, _ => a + 2
  | _, .mk2 1 (.mk4 _ _) => 3
  | .mk3 _, .mk4 _ _ => 4
  | _, _ => 5

example : g a b > 1 := by
  grind [g.eq_def]
```
2025-01-22 02:59:42 +00:00
Leonardo de Moura
3881f21df1 fix: redundant information in the offset constraint module (#6734)
This PR ensures there are no redundant entries in the offset constraint
model produced by `grind`.
2025-01-21 22:19:24 +00:00
Leonardo de Moura
c9a03c7613 feat: overlapping match patterns in grind (#6733)
This PR adds better support for overlapping `match` patterns in `grind`.
`grind` can now solve examples such as
```lean
inductive S where
  | mk1 (n : Nat)
  | mk2 (n : Nat) (s : S)
  | mk3 (n : Bool)
  | mk4 (s1 s2 : S)

def f (x y : S) :=
  match x, y with
  | .mk1 _, _ => 2
  | _, .mk2 1 (.mk4 _ _) => 3
  | .mk3 _, _ => 4
  | _, _ => 5

example : b = .mk2 y1 y2 → y1 = 2 → a = .mk4 y3 y4 → f a b = 5 := by
  unfold f
  grind (splits := 0)
```

---------

Co-authored-by: Leonardo de Moura <leodemoura@amazon.com>
2025-01-21 22:05:15 +00:00
Luisa Cicolini
0c2fb34c82 chore: remove useless Nat.mul_one from proof (#6728)
This PR removes the theorem `Nat.mul_one` to simplify a rewrite in the
proof of `BitVec.getMsbD_rotateLeft_of_lt`.
2025-01-21 17:00:19 +00:00
Martin Dvořák
eb30249b11 doc: make description of pp.analyze more precise (#6726)
As @nomeata told me, it should be "try to (...)" because even with
`pp.analyze` roundtripping often fails.
2025-01-21 15:03:48 +00:00
Paul Reichert
31929c0acd feat: lemmas for HashMap.alter and .modify (#6620)
This PR adds lemmas about `HashMap.alter` and `HashMap.modify`. These lemmas
describe the interaction of `alter` and `modify` with the read methods of
the `HashMap`. The additions affect the `HashMap`, the `DHashMap`, and their
respective raw versions. Moreover, the raw versions of `alter` and `modify`
are defined.

---------

Co-authored-by: Paul Reichert <6992158+datokrat@users.noreply.github.com>
2025-01-21 12:34:19 +00:00
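The operations the lemmas describe can be sketched as follows (signatures assumed: `alter` takes `Option β → Option β`, `modify` takes `β → β`):

```lean
-- A tiny map to exercise `alter` and `modify` (illustrative usage).
def m : Std.HashMap String Nat := Std.HashMap.empty.insert "a" 1

#eval (m.alter "a" fun v? => v?.map (· + 1)).get? "a"  -- update an existing entry
#eval (m.alter "b" fun _ => some 7).get? "b"           -- insert at a missing key
#eval (m.modify "a" (· * 10)).get? "a"                 -- modify if present
```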
Joachim Breitner
3569797377 feat: functional cases theorem for non-recursive functions (#6261)
This PR adds `foo.fun_cases`, an automatically generated theorem that
splits the goal according to the branching structure of `foo`, much like
the Functional Induction Principle, but for all functions (not just
recursive ones), and without providing inductive hypotheses.

The design isn't quite final yet as to which function parameters should
become targets of the motive and which should become parameters of the
theorem, but the current version has already proven useful, so we start
with this and iterate later.
2025-01-21 10:16:42 +00:00
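For a simple non-recursive function, the generated theorem has roughly this shape (a hand-written analogue, not the exact generated statement):

```lean
def foo (n : Nat) : Nat := if n = 0 then 1 else n

-- `foo.fun_cases` would split a goal about `foo n` along the `if` branches,
-- much like this hand-written version:
theorem foo_cases (motive : Nat → Prop)
    (isZero : ∀ n, n = 0 → motive n)
    (isNonzero : ∀ n, ¬n = 0 → motive n) :
    ∀ n, motive n := fun n =>
  if h : n = 0 then isZero n h else isNonzero n h
```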
Joachim Breitner
7b813d4f5d feat: partial_fixpoint: partial functions with equations (#6355)
This PR adds the ability to define possibly non-terminating functions
and still be able to reason about them equationally, as long as they are
tail-recursive or monadic.

Typical uses of this feature are
```lean
def ack : (n m : Nat) → Option Nat
  | 0,   y   => some (y+1)
  | x+1, 0   => ack x 1
  | x+1, y+1 => do ack x (← ack (x+1) y)
partial_fixpoint

def whileSome (f : α → Option α) (x : α) : α :=
  match f x with
  | none => x
  | some x' => whileSome f x'
partial_fixpoint

def computeLfp {α : Type u} [DecidableEq α] (f : α → α) (x : α) : α :=
  let next := f x
  if x ≠ next then
    computeLfp f next
  else
    x
partial_fixpoint

noncomputable def geom : Distr Nat := do
  let head ← coin
  if head then
    return 0
  else
    let n ← geom
    return (n + 1)
partial_fixpoint
```

This PR contains

* The necessary fragment of domain theory, up to (a variant of)
Knaster–Tarski theorem (merged as
https://github.com/leanprover/lean4/pull/6477)
* A tactic to solve monotonicity goals compositionally (a bit like
mathlib’s `fun_prop`) (merged as
https://github.com/leanprover/lean4/pull/6506)
* An attribute to extend that tactic (merged as
https://github.com/leanprover/lean4/pull/6506)
* A “derecursifier” that uses that machinery to define recursive
functions, including support for dependent functions and mutual
recursion.
* Fixed-point induction principles (technical, tedious to use)
* For `Option`-valued functions: Partial correctness induction theorems
that hide all the domain theory

This is heavily inspired by [Isabelle’s `partial_function`
command](https://isabelle.in.tum.de/doc/codegen.pdf).
2025-01-21 09:54:30 +00:00
Luisa Cicolini
edeae18f5e feat: add BitVec reverse definition, getLsbD_reverse, getMsbD_reverse, reverse_append, reverse_replicate and Nat.mod_sub_eq_sub_mod (#6476)
This PR defines `reverse` for bitvectors and implements a first subset
of theorems (`getLsbD_reverse, getMsbD_reverse, reverse_append,
reverse_replicate, reverse_cast, msb_reverse`). We also include some
necessary related theorems (`cons_append, cons_append_append,
append_assoc, replicate_append_self, replicate_succ'`) and deprecate
theorems `replicate_zero_eq` and `replicate_succ_eq`.

---------

Co-authored-by: Alex Keizer <alex@keizer.dev>
Co-authored-by: Kim Morrison <kim@tqft.net>
2025-01-21 08:44:50 +00:00
Kim Morrison
91bae2e064 feat: align {List/Array/Vector}.{attach,attachWith,pmap} lemmas (#6723)
This PR completes the alignment of
{List/Array/Vector}.{attach,attachWith,pmap} lemmas. I had to fill in a
number of gaps in the List API.
2025-01-21 06:36:36 +00:00
Luisa Cicolini
f9e904af50 feat: add BitVec.[getMsbD_mul, getElem_udiv, getLsbD_udiv, getMsbD_udiv] (#6674)
This PR adds theorems `BitVec.[getMsbD_mul, getElem_udiv, getLsbD_udiv,
getMsbD_udiv]`

---------

Co-authored-by: Siddharth <siddu.druid@gmail.com>
2025-01-21 03:59:27 +00:00
Henrik Böving
8375d00d8c fix: allow ⱼ in identifiers (#6679)
This PR changes the identifier parser to allow the ⱼ unicode
character, which was previously missed because it lives by itself in a
Unicode block with Coptic characters.
2025-01-21 03:51:51 +00:00
Kim Morrison
16bd7ea455 chore: deprecate List.iota (#6708)
This PR deprecates `List.iota`, which we make no essential use of. `iota
n` can be replaced with `(range' 1 n).reverse`. The verification lemmas
for `range'` already have better coverage than those for `iota`.
Any downstream projects using it (I am not aware of any) are encouraged
to adopt this replacement.
2025-01-21 02:32:35 +00:00
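The replacement suggested in the PR description:

```lean
#eval List.iota 5                -- [5, 4, 3, 2, 1] (now deprecated)
#eval (List.range' 1 5).reverse  -- [5, 4, 3, 2, 1]
```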
Cameron Zwarich
c54287fb0d feat: add proper erasure of type dependencies in LCNF (#6678)
This PR modifies LCNF.toMonoType to use a more refined type erasure
scheme, which distinguishes between irrelevant/erased information
(represented by lcErased) and erased type dependencies (represented by
lcAny). This corresponds to the irrelevant/object distinction in the old
code generator.
2025-01-21 02:07:16 +00:00
Cameron Zwarich
e3771e3ad6 fix: don't generate code for decls with an implemented_by attribute (#6680)
This PR makes the new code generator skip generating code for decls with
an `implemented_by` attribute, just like the old code generator.
2025-01-21 02:06:41 +00:00
Sebastian Ullrich
4935829abe feat: generalize infoview.maxTraceChildren to the cmdline (#6716)
This PR renames the option `infoview.maxTraceChildren` to
`maxTraceChildren` and applies it to the cmdline driver and language
server clients lacking an info view as well. It also implements the
common idiom of the option value `0` meaning "unlimited".
2025-01-21 02:06:24 +00:00
Leonardo de Moura
778333c667 fix: match equality generation (#6719)
This PR fixes a bug in the equational theorem generator for
`match`-expressions. See new test for an example.

Signed-off-by: Leonardo de Moura <leodemoura@amazon.com>
Co-authored-by: Leonardo de Moura <leodemoura@amazon.com>
2025-01-21 02:05:37 +00:00
Leonardo de Moura
189f5d41fb feat: case splitting in grind (#6717)
This PR introduces a new feature that allows users to specify which
inductive datatypes the `grind` tactic should perform case splits on.
The configuration option `splitIndPred` is now set to `false` by
default. The attribute `[grind cases]` is used to mark inductive
datatypes and predicates that `grind` may case split on during the
search. Additionally, the attribute `[grind cases eager]` can be used to
mark datatypes and predicates for case splitting both during
pre-processing and the search.

Users can also write `grind [HasType]` or `grind [cases HasType]` to
instruct `grind` to perform case splitting on the inductive predicate
`HasType` in a specific instance. Similarly, `grind [-Or]` can be used
to instruct `grind` not to case split on disjunctions.

Co-authored-by: Leonardo de Moura <leodemoura@amazon.com>
2025-01-20 22:44:56 +00:00
Vlad Tsyrklevich
c07f64a621 doc: Fix (and expand) docstrings for bmod/bdiv (#6713)
The current text is missing a negative sign on the bottom of the
interval that `Int.bmod` can return. While I'm here, I added
illustrative example outputs to match the docs for `tdiv`/`ediv`/`fdiv`, etc.
2025-01-20 13:03:34 +00:00
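The balanced remainder can be illustrated as follows (a sketch; values computed from the usual definition of `Int.bmod`, worth checking against the corrected docstring):

```lean
-- `Int.bmod x n` returns a representative of `x` mod `n` centered around zero.
#eval Int.bmod 7 5   -- 2
#eval Int.bmod 8 5   -- -2, rather than the non-negative remainder 3
```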
Kim Morrison
22117f21e3 feat: align List/Array/Vector.count theorems (#6712)
This PR aligns `List`/`Array`/`Vector` theorems for `countP` and
`count`.
2025-01-20 10:20:16 +00:00
Sofia Rodrigues
1d03cd6a6b fix: negative timestamps and PlainDateTimes before 1970 (#6668)
This PR fixes negative timestamps and `PlainDateTime`s before 1970.
2025-01-20 07:52:13 +00:00
Kim Morrison
ac6a29ee83 feat: complete alignment of {List,Array,Vector}.{mapIdx,mapFinIdx} (#6701)
This PR completes aligning `mapIdx` and `mapFinIdx` across
`List/Array/Vector`.
2025-01-20 04:06:37 +00:00
Kim Morrison
57f0006c9b feat: align {List/Array/Vector}.{foldl, foldr, foldlM, foldrM} lemmas (#6707)
This PR completes aligning lemmas for `List` / `Array` / `Vector` about
`foldl`, `foldr`, and their monadic versions.
2025-01-20 04:05:31 +00:00
Lean stage0 autoupdater
e40e0892c1 chore: update stage0 2025-01-20 03:43:04 +00:00
Leonardo de Moura
1fcdd7ad9a feat: add [grind cases] and [grind cases eager] attributes (#6705)
This PR adds the attributes `[grind cases]` and `[grind cases eager]`
for controlling case splitting in `grind`. They will replace the
`[grind_cases]` and the configuration option `splitIndPred`.

After updating stage0, we will push the second part of this PR.
2025-01-20 03:01:40 +00:00
Leonardo de Moura
9b7bd58c14 feat: add [grind ←=] attribute (#6702)
This PR adds support for equality backward reasoning to `grind`. We can
illustrate the new feature with the following example. Suppose we have a
theorem:
```lean
theorem inv_eq {a b : α} (w : a * b = 1) : inv a = b
```
and we want to instantiate the theorem whenever we are trying to prove
`inv t = s` for some terms `t` and `s`.
The attribute `[grind ←]` is not applicable in this case because, by
default, `=` is not eligible for E-matching. The new attribute `[grind
←=]` instructs `grind` to use the equality and consider disequalities in
the `grind` proof state as candidates for E-matching.
2025-01-20 01:16:01 +00:00
Leonardo de Moura
a062eea204 feat: beta reduction in grind (#6700)
This PR adds support for beta reduction in the `grind` tactic. `grind`
can now solve goals such as
```lean
example (f : Nat → Nat) : f = (fun x : Nat => x + 5) → f 2 > 5 := by
  grind
```
2025-01-19 21:29:24 +00:00
558 changed files with 17641 additions and 3514 deletions

View File

@@ -26,7 +26,7 @@
"displayName": "Sanitize build config",
"cacheVariables": {
"LEAN_EXTRA_CXX_FLAGS": "-fsanitize=address,undefined",
"LEANC_EXTRA_FLAGS": "-fsanitize=address,undefined -fsanitize-link-c++-runtime",
"LEANC_EXTRA_CC_FLAGS": "-fsanitize=address,undefined -fsanitize-link-c++-runtime",
"SMALL_ALLOCATOR": "OFF",
"BSYMBOLIC": "OFF"
},

View File

@@ -590,9 +590,9 @@ This table should be read as follows:
* No other proofs were attempted, either because the parameter has a type without a non-trivial ``WellFounded`` instance (parameter 3), or because it is already clear that no decreasing measure can be found.
Lean will print the termination argument it found if ``set_option showInferredTerminationBy true`` is set.
Lean will print the termination measure it found if ``set_option showInferredTerminationBy true`` is set.
If Lean does not find the termination argument, or if you want to be explicit, you can append a `termination_by` clause to the function definition, after the function's body, but before the `where` clause if present. It is of the form
If Lean does not find the termination measure, or if you want to be explicit, you can append a `termination_by` clause to the function definition, after the function's body, but before the `where` clause if present. It is of the form
```
termination_by e
```
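As a quick illustration (the function below is a sketch, not taken from the manual), the measure `n` decreases at each recursive call:
```lean
-- `n` shrinks by 2 on every call, so `n` itself serves as the termination measure.
def halve (n : Nat) : Nat :=
  if n < 2 then 0 else halve (n - 2) + 1
termination_by n
```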
@@ -672,7 +672,7 @@ def num_consts_lst : List Term → Nat
end
```
In a set of mutually recursive functions, either all or none of the functions must have an explicit termination argument (``termination_by``). A change of the default termination tactic (``decreasing_by``) only affects the proofs about the recursive calls of that function, not the other functions in the group.
In a set of mutually recursive functions, either all or none of the functions must have an explicit termination measure (``termination_by``). A change of the default termination tactic (``decreasing_by``) only affects the proofs about the recursive calls of that function, not the other functions in the group.
```
mutual

View File

@@ -140,7 +140,7 @@ lean_object * initialize_C(uint8_t builtin, lean_object *);
...
lean_initialize_runtime_module();
//lean_initialize(); // necessary if you (indirectly) access the `Lean` package
//lean_initialize(); // necessary (and replaces `lean_initialize_runtime_module`) if you (indirectly) access the `Lean` package
lean_object * res;
// use same default as for Lean executables

View File

@@ -61,7 +61,7 @@ Parts of atomic names can be escaped by enclosing them in pairs of French double
letterlike_symbols: [℀-⅏]
escaped_ident_part: "«" [^«»\r\n\t]* "»"
atomic_ident_rest: atomic_ident_start | [0-9'ⁿ] | subscript
subscript: [₀-₉ₐ-ₜᵢ-ᵪ]
subscript: [₀-₉ₐ-ₜᵢ-ᵪ]
```
String Literals

10
doc/std/style.md Normal file
View File

@@ -0,0 +1,10 @@
Please take some time to familiarize yourself with the stylistic conventions of
the project and the specific part of the library you are planning to contribute
to. While the Lean compiler may not enforce strict formatting rules,
consistently formatted code is much easier for others to read and maintain.
Attention to formatting is more than a cosmetic concern—it reflects the same
level of precision and care required to meet the deeper standards of the Lean 4
standard library.
A full style guide and naming convention are currently under construction and
will be added here soon.

97
doc/std/vision.md Normal file
View File

@@ -0,0 +1,97 @@
# The Lean 4 standard library
Maintainer team (in alphabetical order): Henrik Böving, Markus Himmel
(community contact & external contribution coordinator), Kim Morrison, Paul
Reichert, Sofia Rodrigues.
The Lean 4 standard library is a core part of the Lean distribution, providing
essential building blocks for functional programming, verified software
development, and software verification. Unlike the standard libraries of most
other languages, many of its components are formally verified and can be used
as part of verified applications.
The standard library is a public API that contains the components listed in the
standard library outline below. Not all public APIs in the Lean distribution
are part of the standard library, and the standard library does not correspond
to a certain directory within the Lean source repository. For example, the
metaprogramming framework is not part of the standard library.
The standard library is under active development. Our guiding principles are:
* Provide comprehensive, verified building blocks for real-world software.
* Build a public API of the highest quality with excellent internal consistency.
* Carefully optimize components that may be used in performance-critical software.
* Ensure smooth adoption and maintenance for users.
* Offer excellent documentation, example projects, and guides.
* Provide a reliable and extensible basis that libraries for software
development, software verification and mathematics can build on.
The standard library is principally developed by the Lean FRO. Community
contributions are welcome. If you would like to contribute, please refer to the
call for contributions below.
### Standard library outline
1. Core types and operations
   1. Basic types
   2. Numeric types, including floating point numbers
   3. Containers
   4. Strings and formatting
2. Language constructs
   1. Ranges and iterators
   2. Comparison, ordering, hashing and related type classes
   3. Basic monad infrastructure
3. Libraries
   1. Random numbers
   2. Dates and times
4. Operating system abstractions
   1. Concurrency and parallelism primitives
   2. Asynchronous I/O
   3. FFI helpers
   4. Environment, file system, processes
   5. Locales
The material covered in the first three sections (core types and operations,
language constructs and libraries) will be verified, with the exception of
floating point numbers and the parts of the libraries that interface with the
operating system (e.g., sources of operating system randomness or time zone
database access).
### Call for contributions
Thank you for taking an interest in contributing to the Lean standard library!
There are two main ways for community members to contribute to the Lean
standard library: by contributing experience reports or by contributing code
and lemmas.
**If you are using Lean for software verification or verified software
development:** hearing about your experiences using Lean and its standard
library for software verification is extremely valuable to us. We are committed
to building a standard library suitable for real-world applications and your
input will directly influence the continued evolution of the Lean standard
library. Please reach out to the standard library maintainer team via Zulip
(either in a public thread in the #lean4 channel or via direct message). Even
just a link to your code helps. Thanks!
**If you have code that you believe could enhance the Lean 4 standard
library:** we encourage you to initiate a discussion in the #lean4 channel on
Zulip. This is the most effective way to receive preliminary feedback on your
contribution. The Lean standard library has a very precise scope and it has
very high quality standards, so at the moment we are mostly interested in
contributions that expand upon existing material rather than introducing novel
concepts.
**If you would like to contribute code to the standard library but don't know
what to work on:** we are always excited to meet motivated community members
who would like to contribute, and there is always impactful work that is
suitable for new contributors. Please reach out to Markus Himmel on Zulip to
discuss possible contributions.
As laid out in the [project-wide External Contribution
Guidelines](../../CONTRIBUTING.md),
PRs are much more likely to be merged if they are preceded by an RFC or if you
discussed your planned contribution with a member of the standard library
maintainer team. When in doubt, introducing yourself is always a good idea.
All code in the standard library is expected to strictly adhere to the
[standard library coding conventions](./style.md).

View File

@@ -136,6 +136,23 @@ theorem seqLeft_eq_bind [Monad m] [LawfulMonad m] (x : m α) (y : m β) : x <* y
theorem Functor.map_unit [Monad m] [LawfulMonad m] {a : m PUnit} : (fun _ => PUnit.unit) <$> a = a := by
simp [map]
/--
This is just a duplicate of `LawfulApplicative.map_pure`,
but sometimes applies when that doesn't.
It is named with a prime to avoid conflict with the inherited field `LawfulMonad.map_pure`.
-/
@[simp] theorem LawfulMonad.map_pure' [Monad m] [LawfulMonad m] {a : α} :
(f <$> pure a : m β) = pure (f a) := by
simp only [map_pure]
/--
This is just a duplicate of `Functor.map_map`, but sometimes applies when that doesn't.
-/
@[simp] theorem LawfulMonad.map_map {m} [Monad m] [LawfulMonad m] {x : m α} :
g <$> f <$> x = (fun a => g (f a)) <$> x := by
simp only [Functor.map_map]
/--
An alternative constructor for `LawfulMonad` which has more
defaultable fields in the common case.

View File

@@ -516,8 +516,17 @@ The tasks have an overridden representation in the runtime.
structure Task (α : Type u) : Type u where
/-- `Task.pure (a : α)` constructs a task that is already resolved with value `a`. -/
pure ::
/-- If `task : Task α` then `task.get : α` blocks the current thread until the
value is available, and then returns the result of the task. -/
/--
Blocks the current thread until the given task has finished execution, and then returns the result
of the task. If the current thread is itself executing a (non-dedicated) task, the maximum
threadpool size is temporarily increased by one while waiting so as to ensure the process cannot
be deadlocked by threadpool starvation. Note that when the current thread is unblocked, more tasks
than the configured threadpool size may temporarily be running at the same time until sufficiently
many tasks have finished.
`Task.map` and `Task.bind` should be preferred over `Task.get` for setting up task dependencies
where possible as they do not require temporarily growing the threadpool in this way.
-/
get : α
deriving Inhabited, Nonempty
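The preference for `Task.map`/`Task.bind` mentioned in the doc-string can be sketched as follows (the helper name is illustrative):
```lean
-- Combining two tasks without blocking: `bind`/`map` record the dependency,
-- whereas calling `.get` on each task here would block the current thread
-- and temporarily grow the threadpool.
def sumOfTasks (t₁ t₂ : Task Nat) : Task Nat :=
  t₁.bind fun a => t₂.map fun b => a + b
```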
@@ -1375,21 +1384,43 @@ instance {p q : Prop} [d : Decidable (p ↔ q)] : Decidable (p = q) :=
| isTrue h => isTrue (propext h)
| isFalse h => isFalse fun heq => h (heq Iff.rfl)
gen_injective_theorems% Prod
gen_injective_theorems% PProd
gen_injective_theorems% MProd
gen_injective_theorems% Subtype
gen_injective_theorems% Fin
gen_injective_theorems% Array
gen_injective_theorems% Sum
gen_injective_theorems% PSum
gen_injective_theorems% Option
gen_injective_theorems% List
gen_injective_theorems% Except
gen_injective_theorems% BitVec
gen_injective_theorems% Char
gen_injective_theorems% DoResultBC
gen_injective_theorems% DoResultPR
gen_injective_theorems% DoResultPRBC
gen_injective_theorems% DoResultSBC
gen_injective_theorems% EStateM.Result
gen_injective_theorems% Except
gen_injective_theorems% Fin
gen_injective_theorems% ForInStep
gen_injective_theorems% Lean.Name
gen_injective_theorems% Lean.Syntax
gen_injective_theorems% BitVec
gen_injective_theorems% List
gen_injective_theorems% MProd
gen_injective_theorems% NonScalar
gen_injective_theorems% Option
gen_injective_theorems% PLift
gen_injective_theorems% PNonScalar
gen_injective_theorems% PProd
gen_injective_theorems% Prod
gen_injective_theorems% PSigma
gen_injective_theorems% PSum
gen_injective_theorems% Sigma
gen_injective_theorems% String
gen_injective_theorems% String.Pos
gen_injective_theorems% Substring
gen_injective_theorems% Subtype
gen_injective_theorems% Sum
gen_injective_theorems% Task
gen_injective_theorems% Thunk
gen_injective_theorems% UInt16
gen_injective_theorems% UInt32
gen_injective_theorems% UInt64
gen_injective_theorems% UInt8
gen_injective_theorems% ULift
gen_injective_theorems% USize
theorem Nat.succ.inj {m n : Nat} : m.succ = n.succ → m = n :=
fun x => Nat.noConfusion x id

View File

@@ -6,6 +6,7 @@ Authors: Joachim Breitner, Mario Carneiro
prelude
import Init.Data.Array.Mem
import Init.Data.Array.Lemmas
import Init.Data.Array.Count
import Init.Data.List.Attach
namespace Array
@@ -142,10 +143,16 @@ theorem pmap_eq_map_attach {p : α → Prop} (f : ∀ a, p a → β) (l H) :
cases l
simp [List.pmap_eq_map_attach]
@[simp]
theorem pmap_eq_attachWith {p q : α → Prop} (f : ∀ a, p a → q a) (l H) :
pmap (fun a h => ⟨a, f a h⟩) l H = l.attachWith q (fun x h => f x (H x h)) := by
cases l
simp [List.pmap_eq_attachWith]
theorem attach_map_coe (l : Array α) (f : α → β) :
(l.attach.map fun (i : {i // i ∈ l}) => f i) = l.map f := by
cases l
simp [List.attach_map_coe]
simp
theorem attach_map_val (l : Array α) (f : α → β) : (l.attach.map fun i => f i.val) = l.map f :=
attach_map_coe _ _
@@ -172,6 +179,12 @@ theorem mem_attach (l : Array α) : ∀ x, x ∈ l.attach
rcases this with ⟨_, _, m, rfl⟩
exact m
@[simp]
theorem mem_attachWith (l : Array α) {q : α → Prop} (H) (x : {x // q x}) :
x ∈ l.attachWith q H ↔ x.1 ∈ l := by
cases l
simp
@[simp]
theorem mem_pmap {p : α → Prop} {f : ∀ a, p a → β} {l H b} :
b ∈ pmap f l H ↔ ∃ (a : _) (h : a ∈ l), f a (H a h) = b := by
@@ -223,16 +236,16 @@ theorem attachWith_ne_empty_iff {l : Array α} {P : α → Prop} {H : ∀ a ∈
cases l; simp
@[simp]
theorem getElem?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Array α} (h : ∀ a ∈ l, p a) (n : Nat) :
(pmap f l h)[n]? = Option.pmap f l[n]? fun x H => h x (mem_of_getElem? H) := by
theorem getElem?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Array α} (h : ∀ a ∈ l, p a) (i : Nat) :
(pmap f l h)[i]? = Option.pmap f l[i]? fun x H => h x (mem_of_getElem? H) := by
cases l; simp
@[simp]
theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Array α} (h : ∀ a ∈ l, p a) {n : Nat}
(hn : n < (pmap f l h).size) :
(pmap f l h)[n] =
f (l[n]'(@size_pmap _ _ p f l h ▸ hn))
(h _ (getElem_mem (@size_pmap _ _ p f l h ▸ hn))) := by
theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Array α} (h : ∀ a ∈ l, p a) {i : Nat}
(hi : i < (pmap f l h).size) :
(pmap f l h)[i] =
f (l[i]'(@size_pmap _ _ p f l h ▸ hi))
(h _ (getElem_mem (@size_pmap _ _ p f l h ▸ hi))) := by
cases l; simp
@[simp]
@@ -256,6 +269,18 @@ theorem getElem_attach {xs : Array α} {i : Nat} (h : i < xs.attach.size) :
xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
getElem_attachWith h
@[simp] theorem pmap_attach (l : Array α) {p : {x // x ∈ l} → Prop} (f : ∀ a, p a → β) (H) :
pmap f l.attach H =
l.pmap (P := fun a => ∃ h : a ∈ l, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
ext <;> simp
@[simp] theorem pmap_attachWith (l : Array α) {p : {x // q x} → Prop} (f : ∀ a, p a → β) (H₁ H₂) :
pmap f (l.attachWith q H₁) H₂ =
l.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
ext <;> simp
theorem foldl_pmap (l : Array α) {P : α → Prop} (f : (a : α) → P a → β)
(H : ∀ (a : α), a ∈ l → P a) (g : γ → β → γ) (x : γ) :
(l.pmap f H).foldl g x = l.attach.foldl (fun acc a => g acc (f a.1 (H _ a.2))) x := by
@@ -313,11 +338,7 @@ theorem attachWith_map {l : Array α} (f : α → β) {P : β → Prop} {H : ∀
(l.map f).attachWith P H = (l.attachWith (P ∘ f) (fun _ h => H _ (mem_map_of_mem f h))).map
fun ⟨x, h⟩ => ⟨f x, h⟩ := by
cases l
ext
· simp
· simp only [List.map_toArray, List.attachWith_toArray, List.getElem_toArray,
List.getElem_attachWith, List.getElem_map, Function.comp_apply]
erw [List.getElem_attachWith] -- Why is `erw` needed here?
simp [List.attachWith_map]
theorem map_attachWith {l : Array α} {P : α → Prop} {H : ∀ (a : α), a ∈ l → P a}
(f : { x // P x } → β) :
@@ -347,7 +368,23 @@ theorem attach_filter {l : Array α} (p : α → Bool) :
simp [List.attach_filter, List.map_filterMap, Function.comp_def]
-- We are still missing here `attachWith_filterMap` and `attachWith_filter`.
-- Also missing are `filterMap_attach`, `filter_attach`, `filterMap_attachWith` and `filter_attachWith`.
@[simp]
theorem filterMap_attachWith {q : α → Prop} {l : Array α} {f : {x // q x} → Option β} (H)
(w : stop = (l.attachWith q H).size) :
(l.attachWith q H).filterMap f 0 stop = l.attach.filterMap (fun ⟨x, h⟩ => f ⟨x, H _ h⟩) := by
subst w
cases l
simp [Function.comp_def]
@[simp]
theorem filter_attachWith {q : α → Prop} {l : Array α} {p : {x // q x} → Bool} (H)
(w : stop = (l.attachWith q H).size) :
(l.attachWith q H).filter p 0 stop =
(l.attach.filter (fun ⟨x, h⟩ => p ⟨x, H _ h⟩)).map (fun ⟨x, h⟩ => ⟨x, H _ h⟩) := by
subst w
cases l
simp [Function.comp_def, List.filter_map]
theorem pmap_pmap {p : α → Prop} {q : β → Prop} (g : ∀ a, p a → β) (f : ∀ b, q b → γ) (l H₁ H₂) :
pmap f (pmap g l H₁) H₂ =
@@ -427,16 +464,48 @@ theorem reverse_attach (xs : Array α) :
@[simp] theorem back?_attachWith {P : α → Prop} {xs : Array α}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).back? = xs.back?.pbind (fun a h => some ⟨a, H _ (mem_of_back?_eq_some h)⟩) := by
(xs.attachWith P H).back? = xs.back?.pbind (fun a h => some ⟨a, H _ (mem_of_back? h)⟩) := by
cases xs
simp
@[simp]
theorem back?_attach {xs : Array α} :
xs.attach.back? = xs.back?.pbind fun a h => some ⟨a, mem_of_back?_eq_some h⟩ := by
xs.attach.back? = xs.back?.pbind fun a h => some ⟨a, mem_of_back? h⟩ := by
cases xs
simp
@[simp]
theorem countP_attach (l : Array α) (p : α → Bool) :
l.attach.countP (fun a : {x // x ∈ l} => p a) = l.countP p := by
cases l
simp [Function.comp_def]
@[simp]
theorem countP_attachWith {p : α → Prop} (l : Array α) (H : ∀ a ∈ l, p a) (q : α → Bool) :
(l.attachWith p H).countP (fun a : {x // p x} => q a) = l.countP q := by
cases l
simp
@[simp]
theorem count_attach [DecidableEq α] (l : Array α) (a : {x // x ∈ l}) :
l.attach.count a = l.count a := by
rcases l with ⟨l⟩
simp only [List.attach_toArray, List.attachWith_mem_toArray, List.count_toArray]
rw [List.map_attach, List.count_eq_countP]
simp only [Subtype.beq_iff]
rw [List.countP_pmap, List.countP_attach (p := (fun x => x == a.1)), List.count]
@[simp]
theorem count_attachWith [DecidableEq α] {p : α → Prop} (l : Array α) (H : ∀ a ∈ l, p a) (a : {x // p x}) :
(l.attachWith p H).count a = l.count a := by
cases l
simp
@[simp] theorem countP_pmap {p : α → Prop} (g : ∀ a, p a → β) (f : β → Bool) (l : Array α) (H₁) :
(l.pmap g H₁).countP f =
l.attach.countP (fun ⟨a, m⟩ => f (g a (H₁ a m))) := by
simp [pmap_eq_map_attach, countP_map, Function.comp_def]
/-! ## unattach
`Array.unattach` is the (one-sided) inverse of `Array.attach`. It is a synonym for `Array.map Subtype.val`.
@@ -455,7 +524,7 @@ and is ideally subsequently simplified away by `unattach_attach`.
If not, usually the right approach is `simp [Array.unattach, -Array.map_subtype]` to unfold.
-/
def unattach {α : Type _} {p : α → Prop} (l : Array { x // p x }) := l.map (·.val)
def unattach {α : Type _} {p : α → Prop} (l : Array { x // p x }) : Array α := l.map (·.val)
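Since `unattach` is just `map (·.val)`, composing it with `attach` should simplify away via the `unattach_attach` simp lemma mentioned above; a small sketch:
```lean
-- `attach` pairs each element with a membership proof; `unattach` drops the proofs.
example (l : Array Nat) : l.attach.unattach = l := by simp
```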
@[simp] theorem unattach_nil {p : α → Prop} : (#[] : Array { x // p x }).unattach = #[] := rfl
@[simp] theorem unattach_push {p : α → Prop} {a : { x // p x }} {l : Array { x // p x }} :
@@ -578,4 +647,16 @@ and simplifies these to the function directly taking the value.
cases l₂
simp
@[simp] theorem unattach_flatten {p : α → Prop} {l : Array (Array { x // p x })} :
l.flatten.unattach = (l.map unattach).flatten := by
unfold unattach
cases l using array₂_induction
simp only [flatten_toArray, List.map_map, Function.comp_def, List.map_id_fun', id_eq,
List.map_toArray, List.map_flatten, map_subtype, map_id_fun', List.unattach_toArray, mk.injEq]
simp only [List.unattach]
@[simp] theorem unattach_mkArray {p : α → Prop} {n : Nat} {x : { x // p x }} :
(Array.mkArray n x).unattach = Array.mkArray n x.1 := by
simp [unattach]
end Array

View File

@@ -579,9 +579,18 @@ def foldr {α : Type u} {β : Type v} (f : α → β → β) (init : β) (as : A
/-- Sum of an array.
`Array.sum #[a, b, c] = a + (b + (c + 0))` -/
@[inline]
def sum {α} [Add α] [Zero α] : Array α → α :=
foldr (· + ·) 0
@[inline]
def countP {α : Type u} (p : α → Bool) (as : Array α) : Nat :=
as.foldr (init := 0) fun a acc => bif p a then acc + 1 else acc
@[inline]
def count {α : Type u} [BEq α] (a : α) (as : Array α) : Nat :=
countP (· == a) as
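Based on the definitions above, the new counting functions behave like their `List` counterparts; a quick sketch:
```lean
#eval #[1, 2, 3, 2].count 2          -- 2
#eval #[1, 2, 3, 2].countP (· > 1)   -- 3
```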
@[inline]
def map {α : Type u} {β : Type v} (f : α → β) (as : Array α) : Array β :=
Id.run <| as.mapM f
@@ -596,8 +605,10 @@ def mapIdx {α : Type u} {β : Type v} (f : Nat → α → β) (as : Array α) :
Id.run <| as.mapIdxM f
/-- Turns `#[a, b]` into `#[(a, 0), (b, 1)]`. -/
def zipWithIndex (arr : Array α) : Array (α × Nat) :=
arr.mapIdx fun i a => (a, i)
def zipIdx (arr : Array α) (start := 0) : Array (α × Nat) :=
arr.mapIdx fun i a => (a, i + start)
@[deprecated zipIdx (since := "2025-01-21")] abbrev zipWithIndex := @zipIdx
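Following the definition above, the optional `start` parameter offsets every index; a quick sketch:
```lean
#eval #["a", "b", "c"].zipIdx    -- #[("a", 0), ("b", 1), ("c", 2)]
#eval #["a", "b"].zipIdx 5       -- #[("a", 5), ("b", 6)]
```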
@[inline]
def find? {α : Type u} (p : α → Bool) (as : Array α) : Option α :=
@@ -845,12 +856,19 @@ it has to backshift all elements at positions greater than `i`. -/
def eraseIdx! (a : Array α) (i : Nat) : Array α :=
if h : i < a.size then a.eraseIdx i h else panic! "invalid index"
/-- Remove a specified element from an array, or do nothing if it is not present.
This function takes worst case O(n) time because
it has to backshift all later elements. -/
def erase [BEq α] (as : Array α) (a : α) : Array α :=
match as.indexOf? a with
| none => as
| some i => as.eraseIdx i
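As the doc-string notes, only the first matching occurrence is removed; a quick sketch:
```lean
#eval #[1, 2, 3, 2].erase 2   -- #[1, 3, 2]
#eval #[1, 3].erase 2         -- #[1, 3] (element absent, array unchanged)
```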
/-- Erase the first element that satisfies the predicate `p`. -/
/-- Erase the first element that satisfies the predicate `p`.
This function takes worst case O(n) time because
it has to backshift all later elements. -/
def eraseP (as : Array α) (p : α → Bool) : Array α :=
match as.findIdx? p with
| none => as

View File

@@ -0,0 +1,270 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Array.Lemmas
import Init.Data.List.Nat.Count
/-!
# Lemmas about `Array.countP` and `Array.count`.
-/
namespace Array
open Nat
/-! ### countP -/
section countP
variable (p q : α → Bool)
@[simp] theorem countP_empty : countP p #[] = 0 := rfl
@[simp] theorem countP_push_of_pos (l) (pa : p a) : countP p (l.push a) = countP p l + 1 := by
rcases l with ⟨l⟩
simp_all
@[simp] theorem countP_push_of_neg (l) (pa : ¬p a) : countP p (l.push a) = countP p l := by
rcases l with ⟨l⟩
simp_all
theorem countP_push (a : α) (l) : countP p (l.push a) = countP p l + if p a then 1 else 0 := by
rcases l with ⟨l⟩
simp_all
@[simp] theorem countP_singleton (a : α) : countP p #[a] = if p a then 1 else 0 := by
simp [countP_push]
theorem size_eq_countP_add_countP (l) : l.size = countP p l + countP (fun a => ¬p a) l := by
cases l
simp [List.length_eq_countP_add_countP (p := p)]
theorem countP_eq_size_filter (l) : countP p l = (filter p l).size := by
cases l
simp [List.countP_eq_length_filter]
theorem countP_eq_size_filter' : countP p = size ∘ filter p := by
funext l
apply countP_eq_size_filter
theorem countP_le_size : countP p l ≤ l.size := by
simp only [countP_eq_size_filter]
apply size_filter_le
@[simp] theorem countP_append (l₁ l₂) : countP p (l₁ ++ l₂) = countP p l₁ + countP p l₂ := by
cases l₁
cases l₂
simp
@[simp] theorem countP_pos_iff {p} : 0 < countP p l ↔ ∃ a ∈ l, p a := by
cases l
simp
@[simp] theorem one_le_countP_iff {p} : 1 ≤ countP p l ↔ ∃ a ∈ l, p a :=
countP_pos_iff
@[simp] theorem countP_eq_zero {p} : countP p l = 0 ↔ ∀ a ∈ l, ¬p a := by
cases l
simp
@[simp] theorem countP_eq_size {p} : countP p l = l.size ↔ ∀ a ∈ l, p a := by
cases l
simp
theorem countP_mkArray (p : α → Bool) (a : α) (n : Nat) :
countP p (mkArray n a) = if p a then n else 0 := by
simp [← List.toArray_replicate, List.countP_replicate]
theorem boole_getElem_le_countP (p : α → Bool) (l : Array α) (i : Nat) (h : i < l.size) :
(if p l[i] then 1 else 0) ≤ l.countP p := by
cases l
simp [List.boole_getElem_le_countP]
theorem countP_set (p : α → Bool) (l : Array α) (i : Nat) (a : α) (h : i < l.size) :
(l.set i a).countP p = l.countP p - (if p l[i] then 1 else 0) + (if p a then 1 else 0) := by
cases l
simp [List.countP_set, h]
theorem countP_filter (l : Array α) :
countP p (filter q l) = countP (fun a => p a && q a) l := by
cases l
simp [List.countP_filter]
@[simp] theorem countP_true : (countP fun (_ : α) => true) = size := by
funext l
simp
@[simp] theorem countP_false : (countP fun (_ : α) => false) = Function.const _ 0 := by
funext l
simp
@[simp] theorem countP_map (p : β → Bool) (f : α → β) (l : Array α) :
countP p (map f l) = countP (p ∘ f) l := by
cases l
simp
theorem size_filterMap_eq_countP (f : α → Option β) (l : Array α) :
(filterMap f l).size = countP (fun a => (f a).isSome) l := by
cases l
simp [List.length_filterMap_eq_countP]
theorem countP_filterMap (p : β → Bool) (f : α → Option β) (l : Array α) :
countP p (filterMap f l) = countP (fun a => ((f a).map p).getD false) l := by
cases l
simp [List.countP_filterMap]
@[simp] theorem countP_flatten (l : Array (Array α)) :
countP p l.flatten = (l.map (countP p)).sum := by
cases l using array₂_induction
simp [List.countP_flatten, Function.comp_def]
theorem countP_flatMap (p : β → Bool) (l : Array α) (f : α → Array β) :
countP p (l.flatMap f) = sum (map (countP p ∘ f) l) := by
cases l
simp [List.countP_flatMap, Function.comp_def]
@[simp] theorem countP_reverse (l : Array α) : countP p l.reverse = countP p l := by
cases l
simp [List.countP_reverse]
variable {p q}
theorem countP_mono_left (h : ∀ x ∈ l, p x → q x) : countP p l ≤ countP q l := by
cases l
simpa using List.countP_mono_left (by simpa using h)
theorem countP_congr (h : ∀ x ∈ l, p x ↔ q x) : countP p l = countP q l :=
Nat.le_antisymm
(countP_mono_left fun x hx => (h x hx).1)
(countP_mono_left fun x hx => (h x hx).2)
end countP
/-! ### count -/
section count
variable [BEq α]
@[simp] theorem count_empty (a : α) : count a #[] = 0 := rfl
theorem count_push (a b : α) (l : Array α) :
count a (l.push b) = count a l + if b == a then 1 else 0 := by
simp [count, countP_push]
theorem count_eq_countP (a : α) (l : Array α) : count a l = countP (· == a) l := rfl
theorem count_eq_countP' {a : α} : count a = countP (· == a) := by
funext l
apply count_eq_countP
theorem count_le_size (a : α) (l : Array α) : count a l ≤ l.size := countP_le_size _
theorem count_le_count_push (a b : α) (l : Array α) : count a l ≤ count a (l.push b) := by
simp [count_push]
@[simp] theorem count_singleton (a b : α) : count a #[b] = if b == a then 1 else 0 := by
simp [count_eq_countP]
@[simp] theorem count_append (a : α) : ∀ l₁ l₂, count a (l₁ ++ l₂) = count a l₁ + count a l₂ :=
countP_append _
@[simp] theorem count_flatten (a : α) (l : Array (Array α)) :
count a l.flatten = (l.map (count a)).sum := by
cases l using array₂_induction
simp [List.count_flatten, Function.comp_def]
@[simp] theorem count_reverse (a : α) (l : Array α) : count a l.reverse = count a l := by
cases l
simp
theorem boole_getElem_le_count (a : α) (l : Array α) (i : Nat) (h : i < l.size) :
(if l[i] == a then 1 else 0) ≤ l.count a := by
rw [count_eq_countP]
apply boole_getElem_le_countP (· == a)
theorem count_set (a b : α) (l : Array α) (i : Nat) (h : i < l.size) :
(l.set i a).count b = l.count b - (if l[i] == b then 1 else 0) + (if a == b then 1 else 0) := by
simp [count_eq_countP, countP_set, h]
variable [LawfulBEq α]
@[simp] theorem count_push_self (a : α) (l : Array α) : count a (l.push a) = count a l + 1 := by
simp [count_push]
@[simp] theorem count_push_of_ne (h : b ≠ a) (l : Array α) : count a (l.push b) = count a l := by
simp_all [count_push, h]
theorem count_singleton_self (a : α) : count a #[a] = 1 := by simp
@[simp]
theorem count_pos_iff {a : α} {l : Array α} : 0 < count a l ↔ a ∈ l := by
simp only [count, countP_pos_iff, beq_iff_eq, exists_eq_right]
@[simp] theorem one_le_count_iff {a : α} {l : Array α} : 1 ≤ count a l ↔ a ∈ l :=
count_pos_iff
theorem count_eq_zero_of_not_mem {a : α} {l : Array α} (h : a ∉ l) : count a l = 0 :=
Decidable.byContradiction fun h' => h <| count_pos_iff.1 (Nat.pos_of_ne_zero h')
theorem not_mem_of_count_eq_zero {a : α} {l : Array α} (h : count a l = 0) : a ∉ l :=
fun h' => Nat.ne_of_lt (count_pos_iff.2 h') h.symm
theorem count_eq_zero {l : Array α} : count a l = 0 ↔ a ∉ l :=
⟨not_mem_of_count_eq_zero, count_eq_zero_of_not_mem⟩
theorem count_eq_size {l : Array α} : count a l = l.size ↔ ∀ b ∈ l, a = b := by
rw [count, countP_eq_size]
refine ⟨fun h b hb => Eq.symm ?_, fun h b hb => ?_⟩
· simpa using h b hb
· rw [h b hb, beq_self_eq_true]
@[simp] theorem count_mkArray_self (a : α) (n : Nat) : count a (mkArray n a) = n := by
simp [← List.toArray_replicate]
theorem count_mkArray (a b : α) (n : Nat) : count a (mkArray n b) = if b == a then n else 0 := by
simp [← List.toArray_replicate, List.count_replicate]
theorem filter_beq (l : Array α) (a : α) : l.filter (· == a) = mkArray (count a l) a := by
cases l
simp [List.filter_beq]
theorem filter_eq {α} [DecidableEq α] (l : Array α) (a : α) : l.filter (· = a) = mkArray (count a l) a :=
filter_beq l a
theorem mkArray_count_eq_of_count_eq_size {l : Array α} (h : count a l = l.size) :
mkArray (count a l) a = l := by
cases l
rw [← toList_inj]
simp [List.replicate_count_eq_of_count_eq_length (by simpa using h)]
@[simp] theorem count_filter {l : Array α} (h : p a) : count a (filter p l) = count a l := by
cases l
simp [List.count_filter, h]
theorem count_le_count_map [DecidableEq β] (l : Array α) (f : α → β) (x : α) :
count x l ≤ count (f x) (map f l) := by
cases l
simp [List.count_le_count_map, countP_map]
theorem count_filterMap {α} [BEq β] (b : β) (f : α → Option β) (l : Array α) :
count b (filterMap f l) = countP (fun a => f a == some b) l := by
cases l
simp [List.count_filterMap, countP_filterMap]
theorem count_flatMap {α} [BEq β] (l : Array α) (f : α → Array β) (x : β) :
count x (l.flatMap f) = sum (map (count x ∘ f) l) := by
simp [count_eq_countP, countP_flatMap, Function.comp_def]
-- FIXME these theorems can be restored once `List.erase` and `Array.erase` have been related.
-- theorem count_erase (a b : α) (l : Array α) : count a (l.erase b) = count a l - if b == a then 1 else 0 := by
-- sorry
-- @[simp] theorem count_erase_self (a : α) (l : Array α) :
-- count a (l.erase a) = count a l - 1 := by rw [count_erase, if_pos (by simp)]
-- @[simp] theorem count_erase_of_ne (ab : a ≠ b) (l : Array α) : count a (l.erase b) = count a l := by
-- rw [count_erase, if_neg (by simpa using ab.symm), Nat.sub_zero]
end count

File diff suppressed because it is too large

View File

@@ -5,6 +5,7 @@ Authors: Mario Carneiro, Kim Morrison
-/
prelude
import Init.Data.Array.Lemmas
import Init.Data.Array.Attach
import Init.Data.List.MapIdx
namespace Array
@@ -47,9 +48,11 @@ theorem mapFinIdx_spec (as : Array α) (f : (i : Nat) → α → (h : i < as.siz
(a.mapFinIdx f).size = a.size :=
(mapFinIdx_spec (p := fun _ _ _ => True) (hs := fun _ _ => trivial)).1
@[simp] theorem size_zipWithIndex (as : Array α) : as.zipWithIndex.size = as.size :=
@[simp] theorem size_zipIdx (as : Array α) (k : Nat) : (as.zipIdx k).size = as.size :=
Array.size_mapFinIdx _ _
@[deprecated size_zipIdx (since := "2025-01-21")] abbrev size_zipWithIndex := @size_zipIdx
@[simp] theorem getElem_mapFinIdx (a : Array α) (f : (i : Nat) → α → (h : i < a.size) → β) (i : Nat)
(h : i < (mapFinIdx a f).size) :
(a.mapFinIdx f)[i] = f i (a[i]'(by simp_all)) (by simp_all) :=
@@ -111,3 +114,323 @@ namespace List
ext <;> simp
end List
namespace Array
/-! ### zipIdx -/
@[simp] theorem getElem_zipIdx (a : Array α) (k : Nat) (i : Nat) (h : i < (a.zipIdx k).size) :
(a.zipIdx k)[i] = (a[i]'(by simp_all), i + k) := by
simp [zipIdx]
@[deprecated getElem_zipIdx (since := "2025-01-21")]
abbrev getElem_zipWithIndex := @getElem_zipIdx
@[simp] theorem zipIdx_toArray {l : List α} {k : Nat} :
l.toArray.zipIdx k = (l.zipIdx k).toArray := by
ext i hi₁ hi₂ <;> simp [Nat.add_comm]
@[deprecated zipIdx_toArray (since := "2025-01-21")]
abbrev zipWithIndex_toArray := @zipIdx_toArray
@[simp] theorem toList_zipIdx (a : Array α) (k : Nat) :
(a.zipIdx k).toList = a.toList.zipIdx k := by
rcases a with ⟨a⟩
simp
@[deprecated toList_zipIdx (since := "2025-01-21")]
abbrev toList_zipWithIndex := @toList_zipIdx
theorem mk_mem_zipIdx_iff_le_and_getElem?_sub {k i : Nat} {x : α} {l : Array α} :
(x, i) ∈ zipIdx l k ↔ k ≤ i ∧ l[i - k]? = some x := by
rcases l with ⟨l⟩
simp [List.mk_mem_zipIdx_iff_le_and_getElem?_sub]
/-- Variant of `mk_mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mk_mem_zipIdx_iff_getElem? {x : α} {i : Nat} {l : Array α} :
(x, i) ∈ l.zipIdx ↔ l[i]? = x := by
rw [mk_mem_zipIdx_iff_le_and_getElem?_sub]
simp
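For orientation, the indexing convention these lemmas encode — element first, index second, with the index starting at the offset `k` (which the PR notes defaults to `0`). A small illustrative sketch, assuming these `Array.zipIdx` definitions are in scope:

```lean
-- zipIdx pairs each element with its index, offset by `k`
-- (cf. `getElem_zipIdx`: `(a.zipIdx k)[i] = (a[i], i + k)`).
#eval #["a", "b", "c"].zipIdx     -- #[("a", 0), ("b", 1), ("c", 2)]
#eval #["a", "b", "c"].zipIdx 5   -- #[("a", 5), ("b", 6), ("c", 7)]
```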
theorem mem_zipIdx_iff_le_and_getElem?_sub {x : α × Nat} {l : Array α} {k : Nat} :
x ∈ zipIdx l k ↔ k ≤ x.2 ∧ l[x.2 - k]? = some x.1 := by
cases x
simp [mk_mem_zipIdx_iff_le_and_getElem?_sub]
/-- Variant of `mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mem_zipIdx_iff_getElem? {x : α × Nat} {l : Array α} :
x ∈ l.zipIdx ↔ l[x.2]? = some x.1 := by
rw [mk_mem_zipIdx_iff_getElem?]
@[deprecated mk_mem_zipIdx_iff_getElem? (since := "2025-01-21")]
abbrev mk_mem_zipWithIndex_iff_getElem? := @mk_mem_zipIdx_iff_getElem?
@[deprecated mem_zipIdx_iff_getElem? (since := "2025-01-21")]
abbrev mem_zipWithIndex_iff_getElem? := @mem_zipIdx_iff_getElem?
/-! ### mapFinIdx -/
@[congr] theorem mapFinIdx_congr {xs ys : Array α} (w : xs = ys)
(f : (i : Nat) → α → (h : i < xs.size) → β) :
mapFinIdx xs f = mapFinIdx ys (fun i a h => f i a (by simp [w]; omega)) := by
subst w
rfl
@[simp]
theorem mapFinIdx_empty {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx #[] f = #[] :=
rfl
theorem mapFinIdx_eq_ofFn {as : Array α} {f : (i : Nat) → α → (h : i < as.size) → β} :
as.mapFinIdx f = Array.ofFn fun i : Fin as.size => f i as[i] i.2 := by
cases as
simp [List.mapFinIdx_eq_ofFn]
theorem mapFinIdx_append {K L : Array α} {f : (i : Nat) → α → (h : i < (K ++ L).size) → β} :
(K ++ L).mapFinIdx f =
K.mapFinIdx (fun i a h => f i a (by simp; omega)) ++
L.mapFinIdx (fun i a h => f (i + K.size) a (by simp; omega)) := by
cases K
cases L
simp [List.mapFinIdx_append]
@[simp]
theorem mapFinIdx_push {l : Array α} {a : α} {f : (i : Nat) → α → (h : i < (l.push a).size) → β} :
mapFinIdx (l.push a) f =
(mapFinIdx l (fun i a h => f i a (by simp; omega))).push (f l.size a (by simp)) := by
simp [← append_singleton, mapFinIdx_append]
theorem mapFinIdx_singleton {a : α} {f : (i : Nat) → α → (h : i < 1) → β} :
#[a].mapFinIdx f = #[f 0 a (by simp)] := by
simp
theorem mapFinIdx_eq_zipIdx_map {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f = l.zipIdx.attach.map
fun ⟨⟨x, i⟩, m⟩ =>
f i x (by simp [mk_mem_zipIdx_iff_getElem?, getElem?_eq_some_iff] at m; exact m.1) := by
ext <;> simp
@[deprecated mapFinIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapFinIdx_eq_zipWithIndex_map := @mapFinIdx_eq_zipIdx_map
@[simp]
theorem mapFinIdx_eq_empty_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f = #[] ↔ l = #[] := by
cases l
simp
theorem mapFinIdx_ne_empty_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f ≠ #[] ↔ l ≠ #[] := by
simp
theorem exists_of_mem_mapFinIdx {b : β} {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β}
(h : b ∈ l.mapFinIdx f) : ∃ (i : Nat) (h : i < l.size), f i l[i] h = b := by
rcases l with l
exact List.exists_of_mem_mapFinIdx (by simpa using h)
@[simp] theorem mem_mapFinIdx {b : β} {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} :
b ∈ l.mapFinIdx f ↔ ∃ (i : Nat) (h : i < l.size), f i l[i] h = b := by
rcases l with l
simp
theorem mapFinIdx_eq_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f = l' ↔ ∃ h : l'.size = l.size, ∀ (i : Nat) (h : i < l.size), l'[i] = f i l[i] h := by
rcases l with l
rcases l' with l'
simpa using List.mapFinIdx_eq_iff
@[simp] theorem mapFinIdx_eq_singleton_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} {b : β} :
l.mapFinIdx f = #[b] ↔ ∃ (a : α) (w : l = #[a]), f 0 a (by simp [w]) = b := by
rcases l with l
simp
theorem mapFinIdx_eq_append_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} {l₁ l₂ : Array β} :
l.mapFinIdx f = l₁ ++ l₂ ↔
∃ (l₁' : Array α) (l₂' : Array α) (w : l = l₁' ++ l₂'),
l₁'.mapFinIdx (fun i a h => f i a (by simp [w]; omega)) = l₁ ∧
l₂'.mapFinIdx (fun i a h => f (i + l₁'.size) a (by simp [w]; omega)) = l₂ := by
rcases l with l
rcases l₁ with l₁
rcases l₂ with l₂
simp only [List.mapFinIdx_toArray, List.append_toArray, mk.injEq, List.mapFinIdx_eq_append_iff,
toArray_eq_append_iff]
constructor
· rintro ⟨l₁, l₂, rfl, rfl, rfl⟩
refine ⟨l₁.toArray, l₂.toArray, by simp_all⟩
· rintro ⟨l₁, l₂, rfl, h₁, h₂⟩
simp [← toList_inj] at h₁ h₂
obtain rfl := h₁
obtain rfl := h₂
refine ⟨l₁, l₂, by simp_all⟩
theorem mapFinIdx_eq_push_iff {l : Array α} {b : β} {f : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f = l₂.push b ↔
∃ (l₁ : Array α) (a : α) (w : l = l₁.push a),
l₁.mapFinIdx (fun i a h => f i a (by simp [w]; omega)) = l₂ ∧ b = f (l.size - 1) a (by simp [w]) := by
rw [push_eq_append, mapFinIdx_eq_append_iff]
constructor
· rintro ⟨l₁, l₂, rfl, rfl, h₂⟩
simp only [mapFinIdx_eq_singleton_iff, Nat.zero_add] at h₂
obtain ⟨a, rfl, rfl⟩ := h₂
exact ⟨l₁, a, by simp⟩
· rintro ⟨l₁, a, rfl, rfl, rfl⟩
exact ⟨l₁, #[a], by simp⟩
theorem mapFinIdx_eq_mapFinIdx_iff {l : Array α} {f g : (i : Nat) → α → (h : i < l.size) → β} :
l.mapFinIdx f = l.mapFinIdx g ↔ ∀ (i : Nat) (h : i < l.size), f i l[i] h = g i l[i] h := by
rw [eq_comm, mapFinIdx_eq_iff]
simp
@[simp] theorem mapFinIdx_mapFinIdx {l : Array α}
{f : (i : Nat) → α → (h : i < l.size) → β}
{g : (i : Nat) → β → (h : i < (l.mapFinIdx f).size) → γ} :
(l.mapFinIdx f).mapFinIdx g = l.mapFinIdx (fun i a h => g i (f i a h) (by simpa using h)) := by
simp [mapFinIdx_eq_iff]
theorem mapFinIdx_eq_mkArray_iff {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} {b : β} :
l.mapFinIdx f = mkArray l.size b ↔ ∀ (i : Nat) (h : i < l.size), f i l[i] h = b := by
rcases l with l
rw [← toList_inj]
simp [List.mapFinIdx_eq_replicate_iff]
@[simp] theorem mapFinIdx_reverse {l : Array α} {f : (i : Nat) → α → (h : i < l.reverse.size) → β} :
l.reverse.mapFinIdx f = (l.mapFinIdx (fun i a h => f (l.size - 1 - i) a (by simp; omega))).reverse := by
rcases l with l
simp [List.mapFinIdx_reverse]
/-! ### mapIdx -/
@[simp]
theorem mapIdx_empty {f : Nat → α → β} : mapIdx f #[] = #[] :=
rfl
@[simp] theorem mapFinIdx_eq_mapIdx {l : Array α} {f : (i : Nat) → α → (h : i < l.size) → β} {g : Nat → α → β}
(h : ∀ (i : Nat) (h : i < l.size), f i l[i] h = g i l[i]) :
l.mapFinIdx f = l.mapIdx g := by
simp_all [mapFinIdx_eq_iff]
theorem mapIdx_eq_mapFinIdx {l : Array α} {f : Nat → α → β} :
l.mapIdx f = l.mapFinIdx (fun i a _ => f i a) := by
simp [mapFinIdx_eq_mapIdx]
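For concreteness, a small worked example of the `mapIdx` convention used throughout this section (a sketch with the output traced by hand: `f` receives the index first, the element second):

```lean
-- `mapIdx f` applies `f i a` to the element `a` at index `i`.
#eval #[10, 20, 30].mapIdx (fun i a => a + i)  -- #[10, 21, 32]
```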
theorem mapIdx_eq_zipIdx_map {l : Array α} {f : Nat → α → β} :
l.mapIdx f = l.zipIdx.map fun ⟨a, i⟩ => f i a := by
ext <;> simp
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapIdx_eq_zipWithIndex_map := @mapIdx_eq_zipIdx_map
theorem mapIdx_append {K L : Array α} :
(K ++ L).mapIdx f = K.mapIdx f ++ L.mapIdx fun i => f (i + K.size) := by
rcases K with K
rcases L with L
simp [List.mapIdx_append]
@[simp]
theorem mapIdx_push {l : Array α} {a : α} :
mapIdx f (l.push a) = (mapIdx f l).push (f l.size a) := by
simp [← append_singleton, mapIdx_append]
theorem mapIdx_singleton {a : α} : mapIdx f #[a] = #[f 0 a] := by
simp
@[simp]
theorem mapIdx_eq_empty_iff {l : Array α} : mapIdx f l = #[] ↔ l = #[] := by
rcases l with l
simp
theorem mapIdx_ne_empty_iff {l : Array α} :
mapIdx f l ≠ #[] ↔ l ≠ #[] := by
simp
theorem exists_of_mem_mapIdx {b : β} {l : Array α}
(h : b ∈ mapIdx f l) : ∃ (i : Nat) (h : i < l.size), f i l[i] = b := by
rw [mapIdx_eq_mapFinIdx] at h
simpa [Fin.exists_iff] using exists_of_mem_mapFinIdx h
@[simp] theorem mem_mapIdx {b : β} {l : Array α} :
b ∈ mapIdx f l ↔ ∃ (i : Nat) (h : i < l.size), f i l[i] = b := by
constructor
· intro h
exact exists_of_mem_mapIdx h
· rintro ⟨i, h, rfl⟩
rw [mem_iff_getElem]
exact ⟨i, by simpa using h, by simp⟩
theorem mapIdx_eq_push_iff {l : Array α} {b : β} :
mapIdx f l = l₂.push b ↔
∃ (a : α) (l₁ : Array α), l = l₁.push a ∧ mapIdx f l₁ = l₂ ∧ f l₁.size a = b := by
rw [mapIdx_eq_mapFinIdx, mapFinIdx_eq_push_iff]
simp only [mapFinIdx_eq_mapIdx, exists_and_left, exists_prop]
constructor
· rintro ⟨l₁, rfl, a, rfl, rfl⟩
exact ⟨a, l₁, by simp⟩
· rintro ⟨a, l₁, rfl, rfl, rfl⟩
exact ⟨l₁, rfl, a, by simp⟩
@[simp] theorem mapIdx_eq_singleton_iff {l : Array α} {f : Nat → α → β} {b : β} :
mapIdx f l = #[b] ↔ ∃ (a : α), l = #[a] ∧ f 0 a = b := by
rcases l with l
simp [List.mapIdx_eq_singleton_iff]
theorem mapIdx_eq_append_iff {l : Array α} {f : Nat → α → β} {l₁ l₂ : Array β} :
mapIdx f l = l₁ ++ l₂ ↔
∃ (l₁' : Array α) (l₂' : Array α), l = l₁' ++ l₂' ∧
l₁'.mapIdx f = l₁ ∧
l₂'.mapIdx (fun i => f (i + l₁'.size)) = l₂ := by
rcases l with l
rcases l₁ with l₁
rcases l₂ with l₂
simp only [List.mapIdx_toArray, List.append_toArray, mk.injEq, List.mapIdx_eq_append_iff,
toArray_eq_append_iff]
constructor
· rintro ⟨l₁, l₂, rfl, rfl, rfl⟩
exact ⟨l₁.toArray, l₂.toArray, by simp⟩
· rintro ⟨l₁, l₂, rfl, h₁, h₂⟩
simp only [List.mapIdx_toArray, mk.injEq, size_toArray] at h₁ h₂
obtain rfl := h₁
obtain rfl := h₂
exact ⟨l₁, l₂, by simp⟩
theorem mapIdx_eq_iff {l : Array α} : mapIdx f l = l' ↔ ∀ i : Nat, l'[i]? = l[i]?.map (f i) := by
rcases l with l
rcases l' with l'
simp [List.mapIdx_eq_iff]
theorem mapIdx_eq_mapIdx_iff {l : Array α} :
mapIdx f l = mapIdx g l ↔ ∀ i : Nat, (h : i < l.size) → f i l[i] = g i l[i] := by
rcases l with l
simp [List.mapIdx_eq_mapIdx_iff]
@[simp] theorem mapIdx_set {l : Array α} {i : Nat} {h : i < l.size} {a : α} :
(l.set i a).mapIdx f = (l.mapIdx f).set i (f i a) (by simpa) := by
rcases l with l
simp [List.mapIdx_set]
@[simp] theorem mapIdx_setIfInBounds {l : Array α} {i : Nat} {a : α} :
(l.setIfInBounds i a).mapIdx f = (l.mapIdx f).setIfInBounds i (f i a) := by
rcases l with l
simp [List.mapIdx_set]
@[simp] theorem back?_mapIdx {l : Array α} {f : Nat → α → β} :
(mapIdx f l).back? = (l.back?).map (f (l.size - 1)) := by
rcases l with l
simp [List.getLast?_mapIdx]
@[simp] theorem mapIdx_mapIdx {l : Array α} {f : Nat → α → β} {g : Nat → β → γ} :
(l.mapIdx f).mapIdx g = l.mapIdx (fun i => g i ∘ f i) := by
simp [mapIdx_eq_iff]
theorem mapIdx_eq_mkArray_iff {l : Array α} {f : Nat → α → β} {b : β} :
mapIdx f l = mkArray l.size b ↔ ∀ (i : Nat) (h : i < l.size), f i l[i] = b := by
rcases l with l
rw [← toList_inj]
simp [List.mapIdx_eq_replicate_iff]
@[simp] theorem mapIdx_reverse {l : Array α} {f : Nat → α → β} :
l.reverse.mapIdx f = (mapIdx (fun i => f (l.size - 1 - i)) l).reverse := by
rcases l with l
simp [List.mapIdx_reverse]
end Array

View File

@@ -379,7 +379,8 @@ SMT-Lib name: `extract`.
def extractLsb (hi lo : Nat) (x : BitVec n) : BitVec (hi - lo + 1) := extractLsb' lo _ x
/--
A version of `setWidth` that requires a proof, but is a noop.
A version of `setWidth` that requires a proof the new width is at least as large,
and is a computational noop.
-/
def setWidth' {n w : Nat} (le : n ≤ w) (x : BitVec n) : BitVec w :=
x.toNat#'(by
@@ -669,4 +670,11 @@ def ofBoolListLE : (bs : List Bool) → BitVec bs.length
| [] => 0#0
| b :: bs => concat (ofBoolListLE bs) b
/- ### reverse -/
/-- Reverse the bits in a bitvector. -/
def reverse : {w : Nat} → BitVec w → BitVec w
| 0, x => x
| w + 1, x => concat (reverse (x.truncate w)) (x.msb)
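Unfolding the recursion above by hand: each step peels off the most significant bit and makes it the new least significant bit. A small sanity check, with the value traced manually from the definition (assuming this definition is in scope):

```lean
-- reverse (0b1101#4):
--   msb = 1, truncate to 0b101; reverse 0b101 = 0b101 (a palindrome),
--   so the result is concat 0b101 1 = 0b1011#4.
#eval (0b1101#4).reverse  -- expected 0xb#4 (= 0b1011 = 11)
```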
end BitVec

View File

@@ -631,6 +631,13 @@ theorem getLsbD_mul (x y : BitVec w) (i : Nat) :
· simp
· omega
theorem getMsbD_mul (x y : BitVec w) (i : Nat) :
(x * y).getMsbD i = (mulRec x y w).getMsbD i := by
simp only [mulRec_eq_mul_signExtend_setWidth]
rw [setWidth_setWidth_of_le]
· simp
· omega
theorem getElem_mul {x y : BitVec w} {i : Nat} (h : i < w) :
(x * y)[i] = (mulRec x y w)[i] := by
simp [mulRec_eq_mul_signExtend_setWidth]
@@ -1084,6 +1091,21 @@ theorem divRec_succ' (m : Nat) (args : DivModArgs w) (qr : DivModState w) :
divRec m args input := by
simp [divRec_succ, divSubtractShift]
theorem getElem_udiv (n d : BitVec w) (hy : 0#w < d) (i : Nat) (hi : i < w) :
(n / d)[i] = (divRec w {n, d} (DivModState.init w)).q[i] := by
rw [udiv_eq_divRec (by assumption)]
theorem getLsbD_udiv (n d : BitVec w) (hy : 0#w < d) (i : Nat) :
(n / d).getLsbD i = (decide (i < w) && (divRec w {n, d} (DivModState.init w)).q.getLsbD i) := by
by_cases hi : i < w
· simp [udiv_eq_divRec (by assumption)]
omega
· simp_all
theorem getMsbD_udiv (n d : BitVec w) (hd : 0#w < d) (i : Nat) :
(n / d).getMsbD i = (decide (i < w) && (divRec w {n, d} (DivModState.init w)).q.getMsbD i) := by
simp [getMsbD_eq_getLsbD, getLsbD_udiv, udiv_eq_divRec (by assumption)]
/- ### Arithmetic shift right (sshiftRight) recurrence -/
/--

View File

@@ -378,6 +378,16 @@ theorem getElem_ofBool {b : Bool} : (ofBool b)[0] = b := by simp
@[simp] theorem msb_ofBool (b : Bool) : (ofBool b).msb = b := by
cases b <;> simp [BitVec.msb]
@[simp] theorem one_eq_zero_iff : 1#w = 0#w ↔ w = 0 := by
constructor
· intro h
cases w
· rfl
· replace h := congrArg BitVec.toNat h
simp at h
· rintro rfl
simp
/-! ### msb -/
@[simp] theorem msb_zero : (0#w).msb = false := by simp [BitVec.msb, getMsbD]
@@ -595,12 +605,6 @@ theorem zeroExtend_eq_setWidth {v : Nat} {x : BitVec w} :
(x.setWidth v).toFin = Fin.ofNat' (2^v) x.toNat := by
ext; simp
theorem setWidth'_eq {x : BitVec w} (h : w ≤ v) : x.setWidth' h = x.setWidth v := by
apply eq_of_toNat_eq
rw [toNat_setWidth, toNat_setWidth']
rw [Nat.mod_eq_of_lt]
exact Nat.lt_of_lt_of_le x.isLt (Nat.pow_le_pow_right (Nat.zero_lt_two) h)
@[simp] theorem setWidth_eq (x : BitVec n) : setWidth n x = x := by
apply eq_of_toNat_eq
let ⟨x, lt_n⟩ := x
@@ -655,10 +659,10 @@ theorem getElem?_setWidth (m : Nat) (x : BitVec n) (i : Nat) :
simp [getLsbD, toNat_setWidth']
@[simp] theorem getMsbD_setWidth' (ge : m ≥ n) (x : BitVec n) (i : Nat) :
getMsbD (setWidth' ge x) i = (decide (i ≥ m - n) && getMsbD x (i - (m - n))) := by
getMsbD (setWidth' ge x) i = (decide (m - n ≤ i) && getMsbD x (i + n - m)) := by
simp only [getMsbD, getLsbD_setWidth', gt_iff_lt]
by_cases h₁ : decide (i < m) <;> by_cases h₂ : decide (i ≥ m - n) <;> by_cases h₃ : decide (i - (m - n) < n) <;>
by_cases h₄ : n - 1 - (i - (m - n)) = m - 1 - i
by_cases h₁ : decide (i < m) <;> by_cases h₂ : decide (m - n ≤ i) <;> by_cases h₃ : decide (i + n - m < n) <;>
by_cases h₄ : n - 1 - (i + n - m) = m - 1 - i
all_goals
simp only [h₁, h₂, h₃, h₄]
simp_all only [ge_iff_le, decide_eq_true_eq, Nat.not_le, Nat.not_lt, Bool.true_and,
@@ -671,7 +675,7 @@ theorem getElem?_setWidth (m : Nat) (x : BitVec n) (i : Nat) :
getLsbD (setWidth m x) i = (decide (i < m) && getLsbD x i) := by
simp [getLsbD, toNat_setWidth, Nat.testBit_mod_two_pow]
theorem getMsbD_setWidth {m : Nat} {x : BitVec n} {i : Nat} :
@[simp] theorem getMsbD_setWidth {m : Nat} {x : BitVec n} {i : Nat} :
getMsbD (setWidth m x) i = (decide (m - n ≤ i) && getMsbD x (i + n - m)) := by
unfold setWidth
by_cases h : n ≤ m <;> simp only [h]
@@ -685,6 +689,15 @@ theorem getMsbD_setWidth {m : Nat} {x : BitVec n} {i : Nat} :
· simp [h']
omega
-- This is a simp lemma as there is only a runtime difference between `setWidth'` and `setWidth`,
-- and for verification purposes they are equivalent.
@[simp]
theorem setWidth'_eq {x : BitVec w} (h : w ≤ v) : x.setWidth' h = x.setWidth v := by
apply eq_of_toNat_eq
rw [toNat_setWidth, toNat_setWidth']
rw [Nat.mod_eq_of_lt]
exact Nat.lt_of_lt_of_le x.isLt (Nat.pow_le_pow_right (Nat.zero_lt_two) h)
@[simp] theorem getMsbD_setWidth_add {x : BitVec w} (h : k ≤ i) :
(x.setWidth (w + k)).getMsbD i = x.getMsbD (i - k) := by
by_cases h : w = 0
@@ -755,6 +768,22 @@ theorem setWidth_one {x : BitVec w} :
rw [Nat.mod_mod_of_dvd]
exact Nat.pow_dvd_pow_iff_le_right'.mpr h
/--
Iterated `setWidth` agrees with the second `setWidth`
except in the case the first `setWidth` is a non-trivial truncation,
and the second `setWidth` is a non-trivial extension.
-/
-- Note that in the special cases `v = u` or `v = w`,
-- `simp` can discharge the side condition itself.
@[simp] theorem setWidth_setWidth {x : BitVec u} {w v : Nat} (h : ¬ (v < u ∧ v < w)) :
setWidth w (setWidth v x) = setWidth w x := by
ext
simp_all only [getLsbD_setWidth, decide_true, Bool.true_and, Bool.and_iff_right_iff_imp,
decide_eq_true_eq]
intro h
replace h := lt_of_getLsbD h
omega
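The side condition rules out exactly the lossy case described in the doc-string: truncating below both widths destroys high bits, so the composite differs from a single `setWidth`. A hand-checked illustration of both sides (a sketch, assuming `#eval` on these small bitvectors):

```lean
-- v = 4 < u = 8 and v = 4 < w = 8: the side condition fails, and indeed
#eval ((0xFF#8).setWidth 4).setWidth 8   -- 0x0f#8: the high nibble is lost
#eval (0xFF#8).setWidth 8                -- 0xff#8
-- Extending first (v = 12) satisfies the condition:
#eval ((0xFF#8).setWidth 12).setWidth 8  -- 0xff#8, same as a single setWidth
```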
/-! ## extractLsb -/
@[simp]
@@ -905,6 +934,16 @@ instance : Std.LawfulCommIdentity (α := BitVec n) (· ||| · ) (0#n) where
ext i h
simp [h]
theorem extractLsb'_or {x y : BitVec w} {start len : Nat} :
(x ||| y).extractLsb' start len = (x.extractLsb' start len) ||| (y.extractLsb' start len) := by
ext i hi
simp [hi]
theorem extractLsb_or {x : BitVec w} {hi lo : Nat} :
(x ||| y).extractLsb lo hi = (x.extractLsb lo hi) ||| (y.extractLsb lo hi) := by
ext k hk
simp [hk, show k ≤ lo - hi by omega]
/-! ### and -/
@[simp] theorem toNat_and (x y : BitVec v) :
@@ -978,6 +1017,16 @@ instance : Std.LawfulCommIdentity (α := BitVec n) (· &&& · ) (allOnes n) wher
ext i h
simp [h]
theorem extractLsb'_and {x y : BitVec w} {start len : Nat} :
(x &&& y).extractLsb' start len = (x.extractLsb' start len) &&& (y.extractLsb' start len) := by
ext i hi
simp [hi]
theorem extractLsb_and {x : BitVec w} {hi lo : Nat} :
(x &&& y).extractLsb lo hi = (x.extractLsb lo hi) &&& (y.extractLsb lo hi) := by
ext k hk
simp [hk, show k ≤ lo - hi by omega]
/-! ### xor -/
@[simp] theorem toNat_xor (x y : BitVec v) :
@@ -1043,6 +1092,16 @@ instance : Std.LawfulCommIdentity (α := BitVec n) (· ^^^ · ) (0#n) where
ext i
simp
theorem extractLsb'_xor {x y : BitVec w} {start len : Nat} :
(x ^^^ y).extractLsb' start len = (x.extractLsb' start len) ^^^ (y.extractLsb' start len) := by
ext i hi
simp [hi]
theorem extractLsb_xor {x : BitVec w} {hi lo : Nat} :
(x ^^^ y).extractLsb lo hi = (x.extractLsb lo hi) ^^^ (y.extractLsb lo hi) := by
ext k hk
simp [hk, show k ≤ lo - hi by omega]
/-! ### not -/
theorem not_def {x : BitVec v} : ~~~x = allOnes v ^^^ x := rfl
@@ -1134,6 +1193,10 @@ theorem not_not {b : BitVec w} : ~~~(~~~b) = b := by
ext i h
simp [h]
@[simp] theorem and_not_self (x : BitVec n) : x &&& ~~~x = 0 := by
ext i
simp_all
theorem not_eq_comm {x y : BitVec w} : ~~~ x = y ↔ x = ~~~ y := by
constructor
· intro h
@@ -1149,6 +1212,31 @@ theorem getMsb_not {x : BitVec w} :
@[simp] theorem msb_not {x : BitVec w} : (~~~x).msb = (decide (0 < w) && !x.msb) := by
simp [BitVec.msb]
/--
Negating `x` and then extracting [start..start+len) is the same as extracting and then negating,
as long as the range [start..start+len) is in bounds.
Note that for an out-of-bounds index, `extractLsb'` returns `false` for that bit,
which is why the operations do not commute in general.
-/
theorem extractLsb'_not_of_lt {x : BitVec w} {start len : Nat} (h : start + len < w) :
(~~~ x).extractLsb' start len = ~~~ (x.extractLsb' start len) := by
ext i hi
simp [hi]
omega
/--
Negating `x` and then extracting [lo:hi] is the same as extracting and then negating.
For the extraction to be well-behaved,
we need the range [lo:hi] to be a valid closed interval inside the bitvector:
1. `lo ≤ hi` for the interval to be a well-formed closed interval.
2. `hi < w`, for the interval to be contained inside the bitvector.
-/
theorem extractLsb_not_of_lt {x : BitVec w} {hi lo : Nat} (hlo : lo ≤ hi) (hhi : hi < w) :
(~~~ x).extractLsb hi lo = ~~~ (x.extractLsb hi lo) := by
ext k hk
simp [hk, show k ≤ hi - lo by omega]
omega
/-! ### cast -/
@[simp] theorem not_cast {x : BitVec w} (h : w = w') : ~~~(x.cast h) = (~~~x).cast h := by
@@ -1243,7 +1331,7 @@ theorem shiftLeftZeroExtend_eq {x : BitVec w} :
apply eq_of_toNat_eq
rw [shiftLeftZeroExtend, setWidth]
split
· simp
· simp only [toNat_ofNatLt, toNat_shiftLeft, toNat_setWidth']
rw [Nat.mod_eq_of_lt]
rw [Nat.shiftLeft_eq, Nat.pow_add]
exact Nat.mul_lt_mul_of_pos_right x.isLt (Nat.two_pow_pos _)
@@ -1267,11 +1355,15 @@ theorem shiftLeftZeroExtend_eq {x : BitVec w} :
@[simp] theorem getMsbD_shiftLeftZeroExtend (x : BitVec m) (n : Nat) :
getMsbD (shiftLeftZeroExtend x n) i = getMsbD x i := by
have : m + n - m ≤ i + n := by omega
have : i + n + m - (m + n) = i := by omega
simp_all [shiftLeftZeroExtend_eq]
@[simp] theorem msb_shiftLeftZeroExtend (x : BitVec w) (i : Nat) :
(shiftLeftZeroExtend x i).msb = x.msb := by
simp [shiftLeftZeroExtend_eq, BitVec.msb]
have : w + i - w ≤ i := by omega
have : i + w - (w + i) = 0 := by omega
simp_all [shiftLeftZeroExtend_eq, BitVec.msb]
theorem shiftLeft_add {w : Nat} (x : BitVec w) (n m : Nat) :
x <<< (n + m) = (x <<< n) <<< m := by
@@ -1313,8 +1405,20 @@ theorem getElem_shiftLeft' {x : BitVec w₁} {y : BitVec w₂} {i : Nat} (h : i
(x <<< y)[i] = (!decide (i < y.toNat) && x.getLsbD (i - y.toNat)) := by
simp
@[simp] theorem shiftLeft_eq_zero {x : BitVec w} {n : Nat} (hn : w ≤ n) : x <<< n = 0#w := by
ext i hi
simp [hn, hi]
omega
theorem shiftLeft_ofNat_eq {x : BitVec w} {k : Nat} : x <<< (BitVec.ofNat w k) = x <<< (k % 2^w) := rfl
/-! ### ushiftRight -/
@[simp] theorem ushiftRight_eq' (x : BitVec w₁) (y : BitVec w₂) :
x >>> y = x >>> y.toNat := by rfl
theorem ushiftRight_ofNat_eq {x : BitVec w} {k : Nat} : x >>> (BitVec.ofNat w k) = x >>> (k % 2^w) := rfl
@[simp, bv_toNat] theorem toNat_ushiftRight (x : BitVec n) (i : Nat) :
(x >>> i).toNat = x.toNat >>> i := rfl
@@ -1438,11 +1542,9 @@ theorem msb_ushiftRight {x : BitVec w} {n : Nat} :
case succ nn ih =>
simp [BitVec.ushiftRight_eq, getMsbD_ushiftRight, BitVec.msb, ih, show nn + 1 > 0 by omega]
/-! ### ushiftRight reductions from BitVec to Nat -/
@[simp]
theorem ushiftRight_eq' (x : BitVec w) (y : BitVec w₂) :
x >>> y = x >>> y.toNat := by rfl
theorem ushiftRight_self (n : BitVec w) : n >>> n.toNat = 0#w := by
simp [BitVec.toNat_eq, Nat.shiftRight_eq_div_pow, Nat.lt_two_pow_self, Nat.div_eq_of_lt]
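This is the fact the head commit plumbs into `bv_normalize` (`x >>> x = 0`): it rests on `Nat.lt_two_pow_self` (`n < 2 ^ n`), so any value shifted right by itself vanishes. A small sketch assuming `decide` can evaluate these small instances:

```lean
-- Shifting a bitvector right by its own value always yields zero.
example : (5#8) >>> (5#8) = 0#8 := by decide
example : (255#8) >>> (255#8).toNat = 0#8 := by decide
```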
/-! ### sshiftRight -/
@@ -1541,6 +1643,9 @@ theorem sshiftRight_or_distrib (x y : BitVec w) (n : Nat) :
<;> by_cases w ≤ i
<;> simp [*]
theorem sshiftRight'_ofNat_eq_sshiftRight {x : BitVec w} {k : Nat} : x.sshiftRight' (BitVec.ofNat w k) = x.sshiftRight (k % 2^w) := rfl
/-- The msb after arithmetic shifting right equals the original msb. -/
@[simp]
theorem msb_sshiftRight {n : Nat} {x : BitVec w} :
@@ -1821,8 +1926,9 @@ theorem getElem_append {x : BitVec n} {y : BitVec m} (h : i < n + m) :
@[simp] theorem getMsbD_append {x : BitVec n} {y : BitVec m} :
getMsbD (x ++ y) i = if n ≤ i then getMsbD y (i - n) else getMsbD x i := by
simp only [append_def]
have : i + m - (n + m) = i - n := by omega
by_cases h : n ≤ i
· simp [h]
· simp_all
· simp [h]
theorem msb_append {x : BitVec w} {y : BitVec v} :
@@ -1941,6 +2047,25 @@ theorem msb_shiftLeft {x : BitVec w} {n : Nat} :
(x <<< n).msb = x.getMsbD n := by
simp [BitVec.msb]
theorem ushiftRight_eq_extractLsb'_of_lt {x : BitVec w} {n : Nat} (hn : n < w) :
x >>> n = ((0#n) ++ (x.extractLsb' n (w - n))).cast (by omega) := by
ext i hi
simp only [getLsbD_ushiftRight, getLsbD_cast, getLsbD_append, getLsbD_extractLsb', getLsbD_zero,
Bool.if_false_right, Bool.and_self_left, Bool.iff_and_self, decide_eq_true_eq]
intros h
have := lt_of_getLsbD h
omega
theorem shiftLeft_eq_concat_of_lt {x : BitVec w} {n : Nat} (hn : n < w) :
x <<< n = (x.extractLsb' 0 (w - n) ++ 0#n).cast (by omega) := by
ext i hi
simp only [getLsbD_shiftLeft, hi, decide_true, Bool.true_and, getLsbD_cast, getLsbD_append,
getLsbD_zero, getLsbD_extractLsb', Nat.zero_add, Bool.if_false_left]
by_cases hi' : i < n
· simp [hi']
· simp [hi']
omega
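These two lemmas say a shift is just a re-arranged slice: a right shift pads with zeros above the surviving high bits, a left shift pads with zeros below the surviving low bits. Numerically (a sketch, outputs traced by hand):

```lean
#eval (0b1011#4) >>> 2  -- 0x2#4 (= 0b0010): zeros above, old high bits below
#eval (0b1011#4) <<< 2  -- 0xc#4 (= 0b1100): old low bits above, zeros below
```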
/-! ### rev -/
theorem getLsbD_rev (x : BitVec w) (i : Fin w) :
@@ -2053,6 +2178,32 @@ theorem eq_msb_cons_setWidth (x : BitVec (w+1)) : x = (cons x.msb (x.setWidth w)
ext i
simp [cons]
theorem cons_append (x : BitVec w₁) (y : BitVec w₂) (a : Bool) :
(cons a x) ++ y = (cons a (x ++ y)).cast (by omega) := by
apply eq_of_toNat_eq
simp only [toNat_append, toNat_cons, toNat_cast]
rw [Nat.shiftLeft_add, Nat.shiftLeft_or_distrib, Nat.or_assoc]
theorem cons_append_append (x : BitVec w₁) (y : BitVec w₂) (z : BitVec w₃) (a : Bool) :
(cons a x) ++ y ++ z = (cons a (x ++ y ++ z)).cast (by omega) := by
ext i h
simp only [cons, getLsbD_append, getLsbD_cast, getLsbD_ofBool, cast_cast]
by_cases h₀ : i < w₁ + w₂ + w₃
· simp only [h₀, reduceIte]
by_cases h₁ : i < w₃
· simp [h₁]
· simp only [h₁, reduceIte]
by_cases h₂ : i - w₃ < w₂
· simp [h₂]
· simp [h₂]
omega
· simp only [show ¬i - w₃ - w₂ < w₁ by omega, reduceIte, show i - w₃ - w₂ - w₁ = 0 by omega,
decide_true, Bool.true_and, h₀, show i - (w₁ + w₂ + w₃) = 0 by omega]
by_cases h₂ : i < w₃
· simp [h₂]; omega
· simp [h₂]; omega
/-! ### concat -/
@[simp] theorem toNat_concat (x : BitVec w) (b : Bool) :
@@ -2590,6 +2741,40 @@ theorem not_lt_iff_le {x y : BitVec w} : (¬ x < y) ↔ y ≤ x := by
constructor <;>
(intro h; simp only [lt_def, Nat.not_lt, le_def] at h ⊢; omega)
@[simp]
theorem not_lt_zero {x : BitVec w} : ¬x < 0#w := of_decide_eq_false rfl
@[simp]
theorem le_zero_iff {x : BitVec w} : x ≤ 0#w ↔ x = 0#w := by
constructor
· intro h
have : x ≥ 0 := not_lt_iff_le.mp not_lt_zero
exact Eq.symm (BitVec.le_antisymm this h)
· simp_all
@[simp]
theorem lt_one_iff {x : BitVec w} (h : 0 < w) : x < 1#w ↔ x = 0#w := by
constructor
· intro h₂
rw [lt_def, toNat_ofNat, Int.ofNat_lt, Int.ofNat_emod, Int.ofNat_one, Int.natCast_pow,
Int.ofNat_two, @Int.emod_eq_of_lt 1 (2^w) (by omega) (by omega)] at h₂
simp [toNat_eq, show x.toNat = 0 by omega]
· simp_all
@[simp]
theorem not_allOnes_lt {x : BitVec w} : ¬allOnes w < x := by
have : 2^w ≠ 0 := Ne.symm (NeZero.ne' (2^w))
rw [BitVec.not_lt, le_def, Nat.le_iff_lt_add_one, toNat_allOnes, Nat.sub_one_add_one this]
exact isLt x
@[simp]
theorem allOnes_le_iff {x : BitVec w} : allOnes w ≤ x ↔ x = allOnes w := by
constructor
· intro h
have : x ≤ allOnes w := not_lt_iff_le.mp not_allOnes_lt
exact Eq.symm (BitVec.le_antisymm h this)
· simp_all
/-! ### udiv -/
theorem udiv_def {x y : BitVec n} : x / y = BitVec.ofNat n (x.toNat / y.toNat) := by
@@ -3053,7 +3238,7 @@ theorem getMsbD_rotateLeft_of_lt {n w : Nat} {x : BitVec w} (hi : r < w):
· simp only [h₁, decide_true, Bool.true_and]
have h₂ : (r + n) < 2 * (w + 1) := by omega
congr 1
rw [← Nat.sub_mul_eq_mod_of_lt_of_le (n := 1) (by omega) (by omega), Nat.mul_one]
rw [← Nat.sub_mul_eq_mod_of_lt_of_le (n := 1) (by omega) (by omega)]
omega
· simp [h₁]
@@ -3302,6 +3487,11 @@ theorem mul_twoPow_eq_shiftLeft (x : BitVec w) (i : Nat) :
apply Nat.pow_dvd_pow 2 (by omega)
simp [Nat.mul_mod, hpow]
theorem twoPow_mul_eq_shiftLeft (x : BitVec w) (i : Nat) :
(twoPow w i) * x = x <<< i := by
rw [BitVec.mul_comm, mul_twoPow_eq_shiftLeft]
theorem twoPow_zero {w : Nat} : twoPow w 0 = 1#w := by
apply eq_of_toNat_eq
simp
@@ -3311,6 +3501,12 @@ theorem shiftLeft_eq_mul_twoPow (x : BitVec w) (n : Nat) :
ext i
simp [getLsbD_shiftLeft, Fin.is_lt, decide_true, Bool.true_and, mul_twoPow_eq_shiftLeft]
/-- 2^i * 2^j = 2^(i + j) with bitvectors as well -/
theorem twoPow_mul_twoPow_eq {w : Nat} (i j : Nat) : twoPow w i * twoPow w j = twoPow w (i + j) := by
apply BitVec.eq_of_toNat_eq
simp only [toNat_mul, toNat_twoPow]
rw [← Nat.mul_mod, Nat.pow_add]
/--
The unsigned division of `x` by `2^k` equals shifting `x` right by `k`,
when `k` is less than the bitwidth `w`.
@@ -3373,11 +3569,11 @@ theorem and_one_eq_setWidth_ofBool_getLsbD {x : BitVec w} :
ext (_ | i) h <;> simp [Bool.and_comm]
@[simp]
theorem replicate_zero_eq {x : BitVec w} : x.replicate 0 = 0#0 := by
theorem replicate_zero {x : BitVec w} : x.replicate 0 = 0#0 := by
simp [replicate]
@[simp]
theorem replicate_succ_eq {x : BitVec w} :
theorem replicate_succ {x : BitVec w} :
x.replicate (n + 1) =
(x ++ replicate n x).cast (by rw [Nat.mul_succ]; omega) := by
simp [replicate]
@@ -3389,7 +3585,7 @@ theorem getLsbD_replicate {n w : Nat} (x : BitVec w) :
induction n generalizing x
case zero => simp
case succ n ih =>
simp only [replicate_succ_eq, getLsbD_cast, getLsbD_append]
simp only [replicate_succ, getLsbD_cast, getLsbD_append]
by_cases hi : i < w * (n + 1)
· simp only [hi, decide_true, Bool.true_and]
by_cases hi' : i < w * n
@@ -3406,6 +3602,33 @@ theorem getElem_replicate {n w : Nat} (x : BitVec w) (h : i < w * n) :
simp only [← getLsbD_eq_getElem, getLsbD_replicate]
by_cases h' : w = 0 <;> simp [h'] <;> omega
theorem append_assoc {x₁ : BitVec w₁} {x₂ : BitVec w₂} {x₃ : BitVec w₃} :
(x₁ ++ x₂) ++ x₃ = (x₁ ++ (x₂ ++ x₃)).cast (by omega) := by
induction w₁ generalizing x₂ x₃
case zero => simp
case succ n ih =>
specialize @ih (setWidth n x₁)
rw [← cons_msb_setWidth x₁, cons_append_append, ih, cons_append]
ext j h
simp [getLsbD_cons, show n + w₂ + w₃ = n + (w₂ + w₃) by omega]
theorem replicate_append_self {x : BitVec w} :
x ++ x.replicate n = (x.replicate n ++ x).cast (by omega) := by
induction n with
| zero => simp
| succ n ih =>
rw [replicate_succ]
conv => lhs; rw [ih]
simp only [cast_cast, cast_eq]
rw [← cast_append_left]
· rw [append_assoc]; congr
· rw [Nat.add_comm, Nat.mul_add, Nat.mul_one]; omega
theorem replicate_succ' {x : BitVec w} :
x.replicate (n + 1) =
(replicate n x ++ x).cast (by rw [Nat.mul_succ]) := by
simp [replicate_append_self]
/-! ### intMin -/
/-- The bitvector of width `w` that has the smallest value when interpreted as an integer. -/
@@ -3691,6 +3914,57 @@ theorem toInt_abs_eq_natAbs_of_ne_intMin {x : BitVec w} (hx : x ≠ intMin w) :
x.abs.toInt = x.toInt.natAbs := by
simp [toInt_abs_eq_natAbs, hx]
/-! ### Reverse -/
theorem getLsbD_reverse {i : Nat} {x : BitVec w} :
(x.reverse).getLsbD i = x.getMsbD i := by
induction w generalizing i
case zero => simp
case succ n ih =>
simp only [reverse, truncate_eq_setWidth, getLsbD_concat]
rcases i with rfl | i
· rfl
· simp only [Nat.add_one_ne_zero, reduceIte, Nat.add_one_sub_one, ih]
rw [getMsbD_setWidth]
simp only [show n - (n + 1) = 0 by omega, Nat.zero_le, decide_true, Bool.true_and]
congr; omega
theorem getMsbD_reverse {i : Nat} {x : BitVec w} :
(x.reverse).getMsbD i = x.getLsbD i := by
simp only [getMsbD_eq_getLsbD, getLsbD_reverse]
by_cases hi : i < w
· simp only [hi, decide_true, show w - 1 - i < w by omega, Bool.true_and]
congr; omega
· simp [hi, show i ≥ w by omega]
theorem msb_reverse {x : BitVec w} :
(x.reverse).msb = x.getLsbD 0 :=
by rw [BitVec.msb, getMsbD_reverse]
theorem reverse_append {x : BitVec w} {y : BitVec v} :
(x ++ y).reverse = (y.reverse ++ x.reverse).cast (by omega) := by
ext i h
simp only [getLsbD_append, getLsbD_reverse]
by_cases hi : i < v
· by_cases hw : w ≤ i
· simp [getMsbD_append, getLsbD_cast, getLsbD_append, getLsbD_reverse, hw]
· simp [getMsbD_append, getLsbD_cast, getLsbD_append, getLsbD_reverse, hw, show i < w by omega]
· by_cases hw : w ≤ i
· simp [getMsbD_append, getLsbD_cast, getLsbD_append, hw, show ¬ i < w by omega, getLsbD_reverse]
· simp [getMsbD_append, getLsbD_cast, getLsbD_append, hw, show i < w by omega, getLsbD_reverse]
@[simp]
theorem reverse_cast {w v : Nat} (h : w = v) (x : BitVec w) :
(x.cast h).reverse = x.reverse.cast h := by
subst h; simp
theorem reverse_replicate {n : Nat} {x : BitVec w} :
(x.replicate n).reverse = (x.reverse).replicate n := by
induction n with
| zero => rfl
| succ n ih =>
conv => lhs; simp only [replicate_succ']
simp [reverse_append, ih]
/-! ### Decidable quantifiers -/
@@ -3906,4 +4180,10 @@ abbrev shiftLeft_zero_eq := @shiftLeft_zero
@[deprecated ushiftRight_zero (since := "2024-10-27")]
abbrev ushiftRight_zero_eq := @ushiftRight_zero
@[deprecated replicate_zero (since := "2025-01-08")]
abbrev replicate_zero_eq := @replicate_zero
@[deprecated replicate_succ (since := "2025-01-08")]
abbrev replicate_succ_eq := @replicate_succ
end BitVec

View File

@@ -620,3 +620,12 @@ but may be used locally.
-/
def boolRelToRel : Coe (α → α → Bool) (α → α → Prop) where
coe r := fun a b => Eq (r a b) true
/-! ### subtypes -/
@[simp] theorem Subtype.beq_iff {α : Type u} [DecidableEq α] {p : α → Prop} {x y : {a : α // p a}} :
(x == y) = (x.1 == y.1) := by
cases x
cases y
rw [Bool.eq_iff_iff]
simp [beq_iff_eq]

View File

@@ -13,6 +13,8 @@ import Init.Omega
namespace Fin
@[simp] theorem ofNat'_zero (n : Nat) [NeZero n] : Fin.ofNat' n 0 = 0 := rfl
@[deprecated Fin.pos (since := "2024-11-11")]
theorem size_pos (i : Fin n) : 0 < n := i.pos

View File

@@ -257,7 +257,7 @@ theorem ofNat_fdiv : ∀ m n : Nat, ↑(m / n) = fdiv ↑m ↑n
# `bmod` ("balanced" mod)
Balanced mod (and balanced div) are a division and modulus pair such
that `b * (Int.bdiv a b) + Int.bmod a b = a` and `b/2 ≤ Int.bmod a b <
that `b * (Int.bdiv a b) + Int.bmod a b = a` and `-b/2 ≤ Int.bmod a b <
b/2` for all `a : Int` and `b > 0`.
This is used in Omega as well as signed bitvectors.
@@ -266,10 +266,26 @@ This is used in Omega as well as signed bitvectors.
/--
Balanced modulus. This version of Integer modulus uses the
balanced rounding convention, which guarantees that
`m/2 ≤ bmod x m < m/2` for `m ≠ 0` and `bmod x m` is congruent
`-m/2 ≤ bmod x m < m/2` for `m ≠ 0` and `bmod x m` is congruent
to `x` modulo `m`.
If `m = 0`, then `bmod x m = x`.
Examples:
```
#eval (7 : Int).bmod 0 -- 7
#eval (0 : Int).bmod 7 -- 0
#eval (12 : Int).bmod 6 -- 0
#eval (12 : Int).bmod 7 -- -2
#eval (12 : Int).bmod 8 -- -4
#eval (12 : Int).bmod 9 -- 3
#eval (-12 : Int).bmod 6 -- 0
#eval (-12 : Int).bmod 7 -- 2
#eval (-12 : Int).bmod 8 -- -4
#eval (-12 : Int).bmod 9 -- -3
```
-/
def bmod (x : Int) (m : Nat) : Int :=
let r := x % m
@@ -281,6 +297,22 @@ def bmod (x : Int) (m : Nat) : Int :=
/--
Balanced division. This returns the unique integer so that
`b * (Int.bdiv a b) + Int.bmod a b = a`.
Examples:
```
#eval (7 : Int).bdiv 0 -- 0
#eval (0 : Int).bdiv 7 -- 0
#eval (12 : Int).bdiv 6 -- 2
#eval (12 : Int).bdiv 7 -- 2
#eval (12 : Int).bdiv 8 -- 2
#eval (12 : Int).bdiv 9 -- 1
#eval (-12 : Int).bdiv 6 -- -2
#eval (-12 : Int).bdiv 7 -- -2
#eval (-12 : Int).bdiv 8 -- -1
#eval (-12 : Int).bdiv 9 -- -1
```
-/
def bdiv (x : Int) (m : Nat) : Int :=
if m = 0 then

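The two functions are tied together by the identity `b * (Int.bdiv a b) + Int.bmod a b = a` quoted above; a quick concrete check (ours, not part of the diff):

```lean
-- e.g. a = 12, b = 8: bdiv = 2, bmod = -4, and 8 * 2 + (-4) = 12
#guard (8 : Int) * ((12 : Int).bdiv 8) + (12 : Int).bmod 8 = 12
-- e.g. a = -12, b = 7: bdiv = -2, bmod = 2, and 7 * (-2) + 2 = -12
#guard (7 : Int) * ((-12 : Int).bdiv 7) + (-12 : Int).bmod 7 = -12
```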
View File

@@ -111,6 +111,14 @@ theorem pmap_eq_map_attach {p : α → Prop} (f : ∀ a, p a → β) (l H) :
pmap f l H = l.attach.map fun x => f x.1 (H _ x.2) := by
rw [attach, attachWith, map_pmap]; exact pmap_congr_left l fun _ _ _ _ => rfl
@[simp]
theorem pmap_eq_attachWith {p q : α → Prop} (f : ∀ a, p a → q a) (l H) :
pmap (fun a h => ⟨a, f a h⟩) l H = l.attachWith q (fun x h => f x (H x h)) := by
induction l with
| nil => rfl
| cons a l ih =>
simp [pmap, attachWith, ih]
theorem attach_map_coe (l : List α) (f : α → β) :
(l.attach.map fun (i : {i // i ∈ l}) => f i) = l.map f := by
rw [attach, attachWith, map_pmap]; exact pmap_eq_map _ _ _ _
@@ -136,10 +144,23 @@ theorem attachWith_map_subtype_val {p : α → Prop} (l : List α) (H : ∀ a
@[simp]
theorem mem_attach (l : List α) : ∀ x, x ∈ l.attach
| ⟨a, h⟩ => by
have := mem_map.1 (by rw [attach_map_subtype_val] <;> exact h)
have := mem_map.1 (by rw [attach_map_subtype_val]; exact h)
rcases this with ⟨_, _, m, rfl⟩
exact m
@[simp]
theorem mem_attachWith (l : List α) {q : α → Prop} (H) (x : {x // q x}) :
x ∈ l.attachWith q H ↔ x.1 ∈ l := by
induction l with
| nil => simp
| cons a l ih =>
simp [ih]
constructor
· rintro (_ | _) <;> simp_all
· rintro (h | h)
· simp [← h]
· simp_all
@[simp]
theorem mem_pmap {p : α → Prop} {f : ∀ a, p a → β} {l H b} :
b ∈ pmap f l H ↔ ∃ (a : _) (h : a ∈ l), f a (H a h) = b := by
@@ -266,6 +287,18 @@ theorem getElem_attach {xs : List α} {i : Nat} (h : i < xs.attach.length) :
xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
getElem_attachWith h
@[simp] theorem pmap_attach (l : List α) {p : {x // x ∈ l} → Prop} (f : ∀ a, p a → β) (H) :
pmap f l.attach H =
l.pmap (P := fun a => ∃ h : a ∈ l, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
apply ext_getElem <;> simp
@[simp] theorem pmap_attachWith (l : List α) {p : {x // q x} → Prop} (f : ∀ a, p a → β) (H₁ H₂) :
pmap f (l.attachWith q H₁) H₂ =
l.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
apply ext_getElem <;> simp
@[simp] theorem head?_pmap {P : α → Prop} (f : (a : α) → P a → β) (xs : List α)
(H : ∀ (a : α), a ∈ xs → P a) :
(xs.pmap f H).head? = xs.attach.head?.map fun ⟨a, m⟩ => f a (H a m) := by
@@ -431,7 +464,25 @@ theorem attach_filter {l : List α} (p : α → Bool) :
split <;> simp
-- We are still missing here `attachWith_filterMap` and `attachWith_filter`.
-- Also missing are `filterMap_attach`, `filter_attach`, `filterMap_attachWith` and `filter_attachWith`.
@[simp]
theorem filterMap_attachWith {q : α → Prop} {l : List α} {f : {x // q x} → Option β} (H) :
(l.attachWith q H).filterMap f = l.attach.filterMap (fun ⟨x, h⟩ => f ⟨x, H _ h⟩) := by
induction l with
| nil => rfl
| cons x xs ih =>
simp only [attachWith_cons, filterMap_cons]
split <;> simp_all [Function.comp_def]
@[simp]
theorem filter_attachWith {q : α → Prop} {l : List α} {p : {x // q x} → Bool} (H) :
(l.attachWith q H).filter p =
(l.attach.filter (fun ⟨x, h⟩ => p ⟨x, H _ h⟩)).map (fun ⟨x, h⟩ => ⟨x, H _ h⟩) := by
induction l with
| nil => rfl
| cons x xs ih =>
simp only [attachWith_cons, filter_cons]
split <;> simp_all [Function.comp_def, filter_map]
theorem pmap_pmap {p : α → Prop} {q : β → Prop} (g : ∀ a, p a → β) (f : ∀ b, q b → γ) (l H₁ H₂) :
pmap f (pmap g l H₁) H₂ =
@@ -520,7 +571,7 @@ theorem reverse_attach (xs : List α) :
@[simp] theorem getLast?_attachWith {P : α → Prop} {xs : List α}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).getLast? = xs.getLast?.pbind (fun a h => some ⟨a, H _ (mem_of_getLast?_eq_some h)⟩) := by
(xs.attachWith P H).getLast? = xs.getLast?.pbind (fun a h => some ⟨a, H _ (mem_of_getLast? h)⟩) := by
rw [getLast?_eq_head?_reverse, reverse_attachWith, head?_attachWith]
simp
@@ -531,7 +582,7 @@ theorem reverse_attach (xs : List α) :
@[simp]
theorem getLast?_attach {xs : List α} :
xs.attach.getLast? = xs.getLast?.pbind fun a h => some ⟨a, mem_of_getLast?_eq_some h⟩ := by
xs.attach.getLast? = xs.getLast?.pbind fun a h => some ⟨a, mem_of_getLast? h⟩ := by
rw [getLast?_eq_head?_reverse, reverse_attach, head?_map, head?_attach]
simp
@@ -560,6 +611,11 @@ theorem count_attachWith [DecidableEq α] {p : α → Prop} (l : List α) (H :
(l.attachWith p H).count a = l.count a :=
Eq.trans (countP_congr fun _ _ => by simp [Subtype.ext_iff]) <| countP_attachWith _ _ _
@[simp] theorem countP_pmap {p : α → Prop} (g : ∀ a, p a → β) (f : β → Bool) (l : List α) (H₁) :
(l.pmap g H₁).countP f =
l.attach.countP (fun ⟨a, m⟩ => f (g a (H₁ a m))) := by
simp [pmap_eq_map_attach, countP_map, Function.comp_def]
/-! ## unattach
`List.unattach` is the (one-sided) inverse of `List.attach`. It is a synonym for `List.map Subtype.val`.
@@ -578,7 +634,7 @@ and is ideally subsequently simplified away by `unattach_attach`.
If not, usually the right approach is `simp [List.unattach, -List.map_subtype]` to unfold.
-/
def unattach {α : Type _} {p : α → Prop} (l : List { x // p x }) := l.map (·.val)
def unattach {α : Type _} {p : α → Prop} (l : List { x // p x }) : List α := l.map (·.val)
@[simp] theorem unattach_nil {p : α → Prop} : ([] : List { x // p x }).unattach = [] := rfl
@[simp] theorem unattach_cons {p : α → Prop} {a : { x // p x }} {l : List { x // p x }} :

View File

@@ -43,7 +43,7 @@ The operations are organized as follow:
`countP`, `count`, and `lookup`.
* Logic: `any`, `all`, `or`, and `and`.
* Zippers: `zipWith`, `zip`, `zipWithAll`, and `unzip`.
* Ranges and enumeration: `range`, `iota`, `enumFrom`, and `enum`.
* Ranges and enumeration: `range`, `zipIdx`.
* Minima and maxima: `min?` and `max?`.
* Other functions: `intersperse`, `intercalate`, `eraseDups`, `eraseReps`, `span`, `splitBy`,
`removeAll`
@@ -74,7 +74,7 @@ namespace List
@[simp] theorem length_nil : length ([] : List α) = 0 :=
rfl
@[simp 1100] theorem length_singleton (a : α) : length [a] = 1 := rfl
@[simp] theorem length_singleton (a : α) : length [a] = 1 := rfl
@[simp] theorem length_cons {α} (a : α) (as : List α) : (cons a as).length = as.length + 1 :=
rfl
@@ -352,8 +352,8 @@ def headD : (as : List α) → (fallback : α) → α
| [], fallback => fallback
| a::_, _ => a
@[simp 1100] theorem headD_nil : @headD α [] d = d := rfl
@[simp 1100] theorem headD_cons : @headD α (a::l) d = a := rfl
@[simp] theorem headD_nil : @headD α [] d = d := rfl
@[simp] theorem headD_cons : @headD α (a::l) d = a := rfl
/-! ### tail -/
@@ -393,8 +393,8 @@ def tailD (list fallback : List α) : List α :=
| [] => fallback
| _ :: tl => tl
@[simp 1100] theorem tailD_nil : @tailD α [] l' = l' := rfl
@[simp 1100] theorem tailD_cons : @tailD α (a::l) l' = l := rfl
@[simp] theorem tailD_nil : @tailD α [] l' = l' := rfl
@[simp] theorem tailD_cons : @tailD α (a::l) l' = l := rfl
/-! ## Basic `List` operations.
@@ -1520,35 +1520,61 @@ def range' : (start len : Nat) → (step : Nat := 1) → List Nat
`O(n)`. `iota n` is the numbers from `1` to `n` inclusive, in decreasing order.
* `iota 5 = [5, 4, 3, 2, 1]`
-/
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
def iota : Nat → List Nat
| 0 => []
| m@(n+1) => m :: iota n
set_option linter.deprecated false in
@[simp] theorem iota_zero : iota 0 = [] := rfl
set_option linter.deprecated false in
@[simp] theorem iota_succ : iota (i+1) = (i+1) :: iota i := rfl
/-! ### zipIdx -/
/--
`O(|l|)`. `zipIdx l` zips a list with its indices, optionally starting from a given index.
* `zipIdx [a, b, c] = [(a, 0), (b, 1), (c, 2)]`
* `zipIdx [a, b, c] 5 = [(a, 5), (b, 6), (c, 7)]`
-/
def zipIdx : List α → (n : Nat := 0) → List (α × Nat)
| [], _ => nil
| x :: xs, n => (x, n) :: zipIdx xs (n + 1)
@[simp] theorem zipIdx_nil : ([] : List α).zipIdx i = [] := rfl
@[simp] theorem zipIdx_cons : (a::as).zipIdx i = (a, i) :: as.zipIdx (i+1) := rfl
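Concretely, the new function behaves as the docstring describes (illustrative checks of ours, not part of the diff):

```lean
-- zipIdx pairs each element with its index, second component carries the index
#guard ['a', 'b', 'c'].zipIdx = [('a', 0), ('b', 1), ('c', 2)]
-- the optional argument sets the starting index
#guard ['a', 'b', 'c'].zipIdx 5 = [('a', 5), ('b', 6), ('c', 7)]
```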
/-! ### enumFrom -/
/--
`O(|l|)`. `enumFrom n l` is like `enum` but it allows you to specify the initial index.
* `enumFrom 5 [a, b, c] = [(5, a), (6, b), (7, c)]`
-/
@[deprecated "Use `zipIdx` instead; note the signature change." (since := "2025-01-21")]
def enumFrom : Nat → List α → List (Nat × α)
| _, [] => nil
| n, x :: xs => (n, x) :: enumFrom (n + 1) xs
@[simp] theorem enumFrom_nil : ([] : List α).enumFrom i = [] := rfl
@[simp] theorem enumFrom_cons : (a::as).enumFrom i = (i, a) :: as.enumFrom (i+1) := rfl
set_option linter.deprecated false in
@[deprecated zipIdx_nil (since := "2025-01-21"), simp]
theorem enumFrom_nil : ([] : List α).enumFrom i = [] := rfl
set_option linter.deprecated false in
@[deprecated zipIdx_cons (since := "2025-01-21"), simp]
theorem enumFrom_cons : (a::as).enumFrom i = (i, a) :: as.enumFrom (i+1) := rfl
/-! ### enum -/
set_option linter.deprecated false in
/--
`O(|l|)`. `enum l` pairs up each element with its index in the list.
* `enum [a, b, c] = [(0, a), (1, b), (2, c)]`
-/
@[deprecated "Use `zipIdx` instead; note the signature change." (since := "2025-01-21")]
def enum : List α → List (Nat × α) := enumFrom 0
@[simp] theorem enum_nil : ([] : List α).enum = [] := rfl
set_option linter.deprecated false in
@[deprecated zipIdx_nil (since := "2025-01-21"), simp]
theorem enum_nil : ([] : List α).enum = [] := rfl
/-! ## Minima and maxima -/
@@ -1848,12 +1874,14 @@ def unzipTR (l : List (α × β)) : List α × List β :=
/-! ### iota -/
/-- Tail-recursive version of `List.iota`. -/
@[deprecated "Use `List.range' 1 n` instead of `iota n`." (since := "2025-01-20")]
def iotaTR (n : Nat) : List Nat :=
let rec go : Nat → List Nat → List Nat
| 0, r => r.reverse
| m@(n+1), r => go n (m::r)
go n []
set_option linter.deprecated false in
@[csimp]
theorem iota_eq_iotaTR : @iota = @iotaTR :=
have aux (n : Nat) (r : List Nat) : iotaTR.go n r = r.reverse ++ iota n := by

View File

@@ -254,6 +254,7 @@ theorem findM?_eq_findSomeM? [Monad m] [LawfulMonad m] (p : α → m Bool) (as :
| [], b, _ => pure b
| a::as', b, h => do
have : a ∈ as := by
clear f
have ⟨bs, h⟩ := h
subst h
exact mem_append_right _ (Mem.head ..)

View File

@@ -40,7 +40,7 @@ protected theorem countP_go_eq_add (l) : countP.go p l n = n + countP.go p l 0 :
theorem countP_cons (a : α) (l) : countP p (a :: l) = countP p l + if p a then 1 else 0 := by
by_cases h : p a <;> simp [h]
theorem countP_singleton (a : α) : countP p [a] = if p a then 1 else 0 := by
@[simp] theorem countP_singleton (a : α) : countP p [a] = if p a then 1 else 0 := by
simp [countP_cons]
theorem length_eq_countP_add_countP (l) : length l = countP p l + countP (fun a => ¬p a) l := by

View File

@@ -6,6 +6,7 @@ Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, M
-/
prelude
import Init.Data.List.Pairwise
import Init.Data.List.Find
/-!
# Lemmas about `List.eraseP` and `List.erase`.
@@ -572,4 +573,19 @@ protected theorem IsPrefix.eraseIdx {l l' : List α} (h : l <+: l') (k : Nat) :
-- See also `mem_eraseIdx_iff_getElem` and `mem_eraseIdx_iff_getElem?` in
-- `Init/Data/List/Nat/Basic.lean`.
theorem erase_eq_eraseIdx [BEq α] [LawfulBEq α] (l : List α) (a : α) (i : Nat) (w : l.indexOf a = i) :
l.erase a = l.eraseIdx i := by
subst w
rw [erase_eq_iff]
by_cases h : a ∈ l
· right
obtain ⟨as, bs, rfl, h'⟩ := eq_append_cons_of_mem h
refine ⟨as, bs, h', by simp, ?_⟩
rw [indexOf_append, if_neg h', indexOf_cons_self, eraseIdx_append_of_length_le] <;>
simp
· left
refine ⟨h, ?_⟩
rw [eq_comm, eraseIdx_eq_self]
exact Nat.le_of_eq (indexOf_eq_length h).symm
end List

View File

@@ -822,28 +822,28 @@ theorem findIdx?_flatten {l : List (List α)} {p : α → Bool} :
simp only [replicate, findIdx?_cons, Nat.zero_add, findIdx?_succ, zero_lt_succ, true_and]
split <;> simp_all
theorem findIdx?_eq_findSome?_enum {xs : List α} {p : α → Bool} :
xs.findIdx? p = xs.enum.findSome? fun ⟨i, a⟩ => if p a then some i else none := by
theorem findIdx?_eq_findSome?_zipIdx {xs : List α} {p : α → Bool} :
xs.findIdx? p = xs.zipIdx.findSome? fun ⟨a, i⟩ => if p a then some i else none := by
induction xs with
| nil => simp
| cons x xs ih =>
simp only [findIdx?_cons, Nat.zero_add, findIdx?_succ, enum]
simp only [findIdx?_cons, Nat.zero_add, findIdx?_succ, zipIdx]
split
· simp_all
· simp_all only [enumFrom_cons, ite_false, Option.isNone_none, findSome?_cons_of_isNone, reduceCtorEq]
simp [Function.comp_def, map_fst_add_enum_eq_enumFrom, findSome?_map]
· simp_all only [zipIdx_cons, ite_false, Option.isNone_none, findSome?_cons_of_isNone, reduceCtorEq]
rw [← map_snd_add_zipIdx_eq_zipIdx (n := 1) (k := 0)]
simp [Function.comp_def, findSome?_map]
theorem findIdx?_eq_fst_find?_enum {xs : List α} {p : α → Bool} :
xs.findIdx? p = (xs.enum.find? fun ⟨_, x⟩ => p x).map (·.1) := by
theorem findIdx?_eq_fst_find?_zipIdx {xs : List α} {p : α → Bool} :
xs.findIdx? p = (xs.zipIdx.find? fun ⟨x, _⟩ => p x).map (·.2) := by
induction xs with
| nil => simp
| cons x xs ih =>
simp only [findIdx?_cons, Nat.zero_add, findIdx?_start_succ, enum_cons]
simp only [findIdx?_cons, Nat.zero_add, findIdx?_start_succ, zipIdx_cons]
split
· simp_all
· simp only [Option.map_map, enumFrom_eq_map_enum, Bool.false_eq_true, not_false_eq_true,
find?_cons_of_neg, find?_map, *]
congr
· rw [ih, ← map_snd_add_zipIdx_eq_zipIdx (n := 1) (k := 0)]
simp [Function.comp_def, *]
-- See also `findIdx_le_findIdx`.
theorem findIdx?_eq_none_of_findIdx?_eq_none {xs : List α} {p q : α → Bool} (w : ∀ x ∈ xs, p x → q x) :
@@ -884,14 +884,68 @@ theorem IsInfix.findIdx?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l
List.findIdx? p l₂ = none → List.findIdx? p l₁ = none :=
h.sublist.findIdx?_eq_none
/-! ### indexOf -/
/-! ### indexOf
The verification API for `indexOf` is still incomplete.
The lemmas below should be made consistent with those for `findIdx` (and proved using them).
-/
theorem indexOf_cons [BEq α] :
(x :: xs : List α).indexOf y = bif x == y then 0 else xs.indexOf y + 1 := by
dsimp [indexOf]
simp [findIdx_cons]
@[simp] theorem indexOf_cons_self [BEq α] [ReflBEq α] {l : List α} : (a :: l).indexOf a = 0 := by
simp [indexOf_cons]
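For example (illustrative checks of ours, not part of the diff), `indexOf` returns the first matching position, and the list's length when the element is absent, which is the behavior the two lemmas below capture:

```lean
#guard [2, 1, 3].indexOf 1 = 1
#guard [2, 1, 3].indexOf 5 = 3  -- absent element: the whole length is returned
```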
theorem indexOf_append [BEq α] [LawfulBEq α] {l₁ l₂ : List α} {a : α} :
(l₁ ++ l₂).indexOf a = if a ∈ l₁ then l₁.indexOf a else l₂.indexOf a + l₁.length := by
rw [indexOf, findIdx_append]
split <;> rename_i h
· rw [if_pos]
simpa using h
· rw [if_neg]
simpa using h
theorem indexOf_eq_length [BEq α] [LawfulBEq α] {l : List α} (h : a ∉ l) : l.indexOf a = l.length := by
induction l with
| nil => rfl
| cons x xs ih =>
simp only [mem_cons, not_or] at h
simp only [indexOf_cons, cond_eq_if, beq_iff_eq]
split <;> simp_all
theorem indexOf_lt_length [BEq α] [LawfulBEq α] {l : List α} (h : a ∈ l) : l.indexOf a < l.length := by
induction l with
| nil => simp at h
| cons x xs ih =>
simp only [mem_cons] at h
obtain rfl | h := h
· simp
· simp only [indexOf_cons, cond_eq_if, beq_iff_eq, length_cons]
specialize ih h
split
· exact zero_lt_succ xs.length
· exact Nat.add_lt_add_right ih 1
/-! ### indexOf?
The verification API for `indexOf?` is still incomplete.
The lemmas below should be made consistent with those for `findIdx?` (and proved using them).
-/
@[simp] theorem indexOf?_eq_none_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
l.indexOf? a = none ↔ a ∉ l := by
simp only [indexOf?, findIdx?_eq_none_iff, beq_eq_false_iff_ne, ne_eq]
constructor
· intro w h
specialize w _ h
simp at w
· rintro w x h rfl
contradiction
/-! ### lookup -/
section lookup
variable [BEq α] [LawfulBEq α]

View File

@@ -316,14 +316,35 @@ theorem insertIdxTR_go_eq : ∀ n l, insertIdxTR.go a n l acc = acc.toList ++ in
/-! ## Ranges and enumeration -/
/-! ### zipIdx -/
/-- Tail recursive version of `List.zipIdx`. -/
def zipIdxTR (l : List α) (n : Nat := 0) : List (α × Nat) :=
let arr := l.toArray
(arr.foldr (fun a (n, acc) => (n-1, (a, n-1) :: acc)) (n + arr.size, [])).2
@[csimp] theorem zipIdx_eq_zipIdxTR : @zipIdx = @zipIdxTR := by
funext α l n; simp [zipIdxTR, -Array.size_toArray]
let f := fun (a : α) (n, acc) => (n-1, (a, n-1) :: acc)
let rec go : ∀ l n, l.foldr f (n + l.length, []) = (n, zipIdx l n)
| [], n => rfl
| a::as, n => by
rw [← show _ + as.length = n + (a::as).length from Nat.succ_add .., foldr, go as]
simp [zipIdx, f]
rw [← Array.foldr_toList]
simp +zetaDelta [go]
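Since `zipIdxTR` is installed as the `@[csimp]` replacement, it must agree with `zipIdx` on every input; a small concrete check (ours, not part of the diff):

```lean
-- the accumulator-based fold produces the same pairs as the naive recursion
#guard List.zipIdxTR [10, 20, 30] 5 = [10, 20, 30].zipIdx 5
```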
/-! ### enumFrom -/
/-- Tail recursive version of `List.enumFrom`. -/
@[deprecated zipIdxTR (since := "2025-01-21")]
def enumFromTR (n : Nat) (l : List α) : List (Nat × α) :=
let arr := l.toArray
(arr.foldr (fun a (n, acc) => (n-1, (n-1, a) :: acc)) (n + arr.size, [])).2
@[csimp] theorem enumFrom_eq_enumFromTR : @enumFrom = @enumFromTR := by
set_option linter.deprecated false in
@[deprecated zipIdx_eq_zipIdxTR (since := "2025-01-21"), csimp]
theorem enumFrom_eq_enumFromTR : @enumFrom = @enumFromTR := by
funext α n l; simp [enumFromTR, -Array.size_toArray]
let f := fun (a : α) (n, acc) => (n-1, (n-1, a) :: acc)
let rec go : ∀ l n, l.foldr f (n + l.length, []) = (n, enumFrom n l)

View File

@@ -379,7 +379,7 @@ theorem eq_nil_iff_forall_not_mem {l : List α} : l = [] ↔ ∀ a, a ∉ l := b
theorem eq_of_mem_singleton : a ∈ [b] → a = b
| .head .. => rfl
@[simp 1100] theorem mem_singleton {a b : α} : a ∈ [b] ↔ a = b :=
@[simp] theorem mem_singleton {a b : α} : a ∈ [b] ↔ a = b :=
⟨eq_of_mem_singleton, (by simp [·])⟩
theorem forall_mem_cons {p : α → Prop} {a : α} {l : List α} :
@@ -1046,7 +1046,9 @@ theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : List α) : map f l =
theorem map_singleton (f : α → β) (a : α) : map f [a] = [f a] := rfl
@[simp] theorem mem_map {f : α → β} : ∀ {l : List α}, b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b
-- We use a lower priority here as there are more specific lemmas in downstream libraries
-- which should be able to fire first.
@[simp 500] theorem mem_map {f : α → β} : ∀ {l : List α}, b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b
| [] => by simp
| _ :: l => by simp [mem_map (l := l), eq_comm (a := b)]
@@ -1556,7 +1558,7 @@ theorem getElem_of_append {l : List α} (eq : l = l₁ ++ a :: l₂) (h : l₁.l
rw [← getElem?_eq_getElem, eq, getElem?_append_right (h ▸ Nat.le_refl _), h]
simp
@[simp 1100] theorem singleton_append : [x] ++ l = x :: l := rfl
@[simp] theorem singleton_append : [x] ++ l = x :: l := rfl
theorem append_inj :
∀ {s₁ s₂ t₁ t₂ : List α}, s₁ ++ t₁ = s₂ ++ t₂ → length s₁ = length s₂ → s₁ = s₂ ∧ t₁ = t₂
@@ -2546,20 +2548,24 @@ theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : List
simp only [filterMap_cons, foldr_cons]
cases f a <;> simp [ih]
theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
theorem foldl_map_hom (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
(l.map g).foldl f' (g a) = g (l.foldl f a) := by
induction l generalizing a
· simp
· simp [*, h]
theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
@[deprecated foldl_map_hom (since := "2025-01-20")] abbrev foldl_map' := @foldl_map_hom
theorem foldr_map_hom (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : List α)
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
(l.map g).foldr f' (g a) = g (l.foldr f a) := by
induction l generalizing a
· simp
· simp [*, h]
@[deprecated foldr_map_hom (since := "2025-01-20")] abbrev foldr_map' := @foldr_map_hom
@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l l' : List α) :
(l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
induction l <;> simp [*]
@@ -2746,10 +2752,12 @@ theorem getLast?_eq_some_iff {xs : List α} {a : α} : xs.getLast? = some a ↔
rw [getLast?_eq_head?_reverse, head?_isSome]
simp
theorem mem_of_getLast?_eq_some {xs : List α} {a : α} (h : xs.getLast? = some a) : a ∈ xs := by
theorem mem_of_getLast? {xs : List α} {a : α} (h : xs.getLast? = some a) : a ∈ xs := by
obtain ⟨ys, rfl⟩ := getLast?_eq_some_iff.1 h
exact mem_concat_self ys a
@[deprecated mem_of_getLast? (since := "2024-10-21")] abbrev mem_of_getLast?_eq_some := @mem_of_getLast?
@[simp] theorem getLast_reverse {l : List α} (h : l.reverse ≠ []) :
l.reverse.getLast h = l.head (by simp_all) := by
simp [getLast_eq_head_reverse]
@@ -2959,7 +2967,7 @@ theorem dropLast_append {l₁ l₂ : List α} :
theorem dropLast_append_cons : dropLast (l₁ ++ b :: l₂) = l₁ ++ dropLast (b :: l₂) := by
simp
@[simp 1100] theorem dropLast_concat : dropLast (l₁ ++ [b]) = l₁ := by simp
@[simp] theorem dropLast_concat : dropLast (l₁ ++ [b]) = l₁ := by simp
@[simp] theorem dropLast_replicate (n) (a : α) : dropLast (replicate n a) = replicate (n - 1) a := by
match n with
@@ -3125,7 +3133,7 @@ variable [LawfulBEq α]
| Or.inr h' => exact h'
else rw [insert_of_not_mem h, mem_cons]
@[simp 1100] theorem mem_insert_self (a : α) (l : List α) : a l.insert a :=
@[simp] theorem mem_insert_self (a : α) (l : List α) : a l.insert a :=
mem_insert_iff.2 (Or.inl rfl)
theorem mem_insert_of_mem {l : List α} (h : a l) : a l.insert b :=

View File

@@ -17,12 +17,13 @@ namespace List
/-! ### mapIdx -/
/--
Given a list `as = [a₀, a₁, ...]` and a function `f : Fin as.length → α → β`, returns the list
`[f 0 a₀, f 1 a₁, ...]`.
-/
@[inline] def mapFinIdx (as : List α) (f : (i : Nat) → α → (h : i < as.length) → β) : List β := go as #[] (by simp) where
@[inline] def mapFinIdx (as : List α) (f : (i : Nat) → α → (h : i < as.length) → β) : List β :=
go as #[] (by simp)
where
/-- Auxiliary for `mapFinIdx`:
`mapFinIdx.go [a₀, a₁, ...] acc = acc.toList ++ [f 0 a₀, f 1 a₁, ...]` -/
@[specialize] go : (bs : List α) → (acc : Array β) → bs.length + acc.size = as.length → List β
@@ -43,6 +44,12 @@ Given a function `f : Nat → α → β` and `as : List α`, `as = [a₀, a₁,
/-! ### mapFinIdx -/
@[congr] theorem mapFinIdx_congr {xs ys : List α} (w : xs = ys)
(f : (i : Nat) → α → (h : i < xs.length) → β) :
mapFinIdx xs f = mapFinIdx ys (fun i a h => f i a (by simp [w]; omega)) := by
subst w
rfl
@[simp]
theorem mapFinIdx_nil {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx [] f = [] :=
rfl
@@ -125,16 +132,19 @@ theorem mapFinIdx_singleton {a : α} {f : (i : Nat) → α → (h : i < 1) →
[a].mapFinIdx f = [f 0 a (by simp)] := by
simp
theorem mapFinIdx_eq_enum_map {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f = l.enum.attach.map
fun ⟨⟨i, x⟩, m⟩ =>
f i x (by rw [mk_mem_enum_iff_getElem?, getElem?_eq_some_iff] at m; exact m.1) := by
theorem mapFinIdx_eq_zipIdx_map {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f = l.zipIdx.attach.map
fun ⟨⟨x, i⟩, m⟩ =>
f i x (by rw [mk_mem_zipIdx_iff_getElem?, getElem?_eq_some_iff] at m; exact m.1) := by
apply ext_getElem <;> simp
@[deprecated mapFinIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapFinIdx_eq_zipWithIndex_map := @mapFinIdx_eq_zipIdx_map
@[simp]
theorem mapFinIdx_eq_nil_iff {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f = [] ↔ l = [] := by
rw [mapFinIdx_eq_enum_map, map_eq_nil_iff, attach_eq_nil_iff, enum_eq_nil_iff]
rw [mapFinIdx_eq_zipIdx_map, map_eq_nil_iff, attach_eq_nil_iff, zipIdx_eq_nil_iff]
theorem mapFinIdx_ne_nil_iff {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f ≠ [] ↔ l ≠ [] := by
@@ -142,10 +152,10 @@ theorem mapFinIdx_ne_nil_iff {l : List α} {f : (i : Nat) → α → (h : i < l.
theorem exists_of_mem_mapFinIdx {b : β} {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β}
(h : b ∈ l.mapFinIdx f) : ∃ (i : Nat) (h : i < l.length), f i l[i] h = b := by
rw [mapFinIdx_eq_enum_map] at h
rw [mapFinIdx_eq_zipIdx_map] at h
replace h := exists_of_mem_map h
simp only [mem_attach, true_and, Subtype.exists, Prod.exists, mk_mem_enum_iff_getElem?] at h
obtain ⟨i, b, h, rfl⟩ := h
simp only [mem_attach, true_and, Subtype.exists, Prod.exists, mk_mem_zipIdx_iff_getElem?] at h
obtain ⟨b, i, h, rfl⟩ := h
rw [getElem?_eq_some_iff] at h
obtain ⟨h', rfl⟩ := h
exact ⟨i, h', rfl⟩
@@ -188,6 +198,49 @@ theorem mapFinIdx_eq_iff {l : List α} {f : (i : Nat) → α → (h : i < l.leng
· rintro ⟨h, w⟩
apply ext_getElem <;> simp_all
@[simp] theorem mapFinIdx_eq_singleton_iff {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} {b : β} :
l.mapFinIdx f = [b] ↔ ∃ (a : α) (w : l = [a]), f 0 a (by simp [w]) = b := by
simp [mapFinIdx_eq_cons_iff]
theorem mapFinIdx_eq_append_iff {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f = l₁ ++ l₂ ↔
∃ (l₁' : List α) (l₂' : List α) (w : l = l₁' ++ l₂'),
l₁'.mapFinIdx (fun i a h => f i a (by simp [w]; omega)) = l₁ ∧
l₂'.mapFinIdx (fun i a h => f (i + l₁'.length) a (by simp [w]; omega)) = l₂ := by
rw [mapFinIdx_eq_iff]
constructor
· intro ⟨h, w⟩
simp only [length_append] at h
refine ⟨l.take l₁.length, l.drop l₁.length, by simp, ?_⟩
constructor
· apply ext_getElem
· simp
omega
· intro i hi₁ hi₂
simp only [getElem_mapFinIdx, getElem_take]
specialize w i (by omega)
rw [getElem_append_left hi₂] at w
exact w.symm
· apply ext_getElem
· simp
omega
· intro i hi₁ hi₂
simp only [getElem_mapFinIdx, getElem_take]
simp only [length_take, getElem_drop]
have : l₁.length ≤ l.length := by omega
simp only [Nat.min_eq_left this, Nat.add_comm]
specialize w (i + l₁.length) (by omega)
rw [getElem_append_right (by omega)] at w
simpa using w.symm
· rintro ⟨l₁', l₂', rfl, rfl, rfl⟩
refine ⟨by simp, fun i h => ?_⟩
rw [getElem_append]
split <;> rename_i h'
· simp [getElem_append_left (by simpa using h')]
· simp only [length_mapFinIdx, Nat.not_lt] at h'
have : i - l₁'.length + l₁'.length = i := by omega
simp [getElem_append_right h', this]
theorem mapFinIdx_eq_mapFinIdx_iff {l : List α} {f g : (i : Nat) → α → (h : i < l.length) → β} :
l.mapFinIdx f = l.mapFinIdx g ↔ ∀ (i : Nat) (h : i < l.length), f i l[i] h = g i l[i] h := by
rw [eq_comm, mapFinIdx_eq_iff]
@@ -281,17 +334,19 @@ theorem mapIdx_eq_mapFinIdx {l : List α} {f : Nat → α → β} :
l.mapIdx f = l.mapFinIdx (fun i a _ => f i a) := by
simp [mapFinIdx_eq_mapIdx]
theorem mapIdx_eq_enum_map {l : List α} :
l.mapIdx f = l.enum.map (Function.uncurry f) := by
theorem mapIdx_eq_zipIdx_map {l : List α} {f : Nat → α → β} :
l.mapIdx f = l.zipIdx.map (fun ⟨a, i⟩ => f i a) := by
ext1 i
simp only [getElem?_mapIdx, Option.map, getElem?_map, getElem?_enum]
simp only [getElem?_mapIdx, Option.map, getElem?_map, getElem?_zipIdx]
split <;> simp
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapIdx_eq_enum_map := @mapIdx_eq_zipIdx_map
@[simp]
theorem mapIdx_cons {l : List α} {a : α} :
mapIdx f (a :: l) = f 0 a :: mapIdx (fun i => f (i + 1)) l := by
simp [mapIdx_eq_enum_map, enum_eq_zip_range, map_uncurry_zip_eq_zipWith,
range_succ_eq_map, zipWith_map_left]
simp [mapIdx_eq_zipIdx_map, List.zipIdx_succ]
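A concrete instance of `mapIdx_cons` peeling off the head (illustrative check of ours, not part of the diff):

```lean
#guard [10, 20, 30].mapIdx (fun i a => a + i) = [10, 21, 32]
-- the cons lemma: apply f 0 to the head, shift the index for the tail
#guard [10, 20, 30].mapIdx (fun i a => a + i)
         = (10 + 0) :: [20, 30].mapIdx (fun i a => a + (i + 1))
```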
theorem mapIdx_append {K L : List α} :
(K ++ L).mapIdx f = K.mapIdx f ++ L.mapIdx fun i => f (i + K.length) := by
@@ -308,7 +363,7 @@ theorem mapIdx_singleton {a : α} : mapIdx f [a] = [f 0 a] := by
@[simp]
theorem mapIdx_eq_nil_iff {l : List α} : List.mapIdx f l = [] ↔ l = [] := by
rw [List.mapIdx_eq_enum_map, List.map_eq_nil_iff, List.enum_eq_nil_iff]
rw [List.mapIdx_eq_zipIdx_map, List.map_eq_nil_iff, List.zipIdx_eq_nil_iff]
theorem mapIdx_ne_nil_iff {l : List α} :
List.mapIdx f l ≠ [] ↔ l ≠ [] := by
@@ -338,6 +393,10 @@ theorem mapIdx_eq_cons_iff' {l : List α} {b : β} :
l.head?.map (f 0) = some b ∧ l.tail?.map (mapIdx fun i => f (i + 1)) = some l₂ := by
cases l <;> simp
@[simp] theorem mapIdx_eq_singleton_iff {l : List α} {f : Nat → α → β} {b : β} :
mapIdx f l = [b] ↔ ∃ (a : α), l = [a] ∧ f 0 a = b := by
simp [mapIdx_eq_cons_iff]
theorem mapIdx_eq_iff {l : List α} : mapIdx f l = l' ↔ ∀ i : Nat, l'[i]? = l[i]?.map (f i) := by
constructor
· intro w i
@@ -346,6 +405,19 @@ theorem mapIdx_eq_iff {l : List α} : mapIdx f l = l' ↔ ∀ i : Nat, l'[i]? =
ext1 i
simp [w]
theorem mapIdx_eq_append_iff {l : List α} :
mapIdx f l = l₁ ++ l₂ ↔
∃ (l₁' : List α) (l₂' : List α), l = l₁' ++ l₂' ∧
mapIdx f l₁' = l₁ ∧
mapIdx (fun i => f (i + l₁'.length)) l₂' = l₂ := by
rw [mapIdx_eq_mapFinIdx, mapFinIdx_eq_append_iff]
simp only [mapFinIdx_eq_mapIdx, exists_and_left, exists_prop]
constructor
· rintro ⟨l₁, rfl, l₂, rfl, h⟩
refine ⟨l₁, l₂, by simp_all⟩
· rintro ⟨l₁, l₂, rfl, rfl, rfl⟩
refine ⟨l₁, rfl, l₂, by simp_all⟩
theorem mapIdx_eq_mapIdx_iff {l : List α} :
mapIdx f l = mapIdx g l ↔ ∀ i : Nat, (h : i < l.length) → f i l[i] = g i l[i] := by
constructor

View File

@@ -37,14 +37,14 @@ theorem find?_eq_some_iff_getElem {xs : List α} {p : α → Bool} {b : α} :
theorem findIdx?_eq_some_le_of_findIdx?_eq_some {xs : List α} {p q : α → Bool} (w : ∀ x ∈ xs, p x → q x) {i : Nat}
(h : xs.findIdx? p = some i) : ∃ j, j ≤ i ∧ xs.findIdx? q = some j := by
simp only [findIdx?_eq_findSome?_enum] at h
simp only [findIdx?_eq_findSome?_zipIdx] at h
rw [findSome?_eq_some_iff] at h
simp only [Option.ite_none_right_eq_some, Option.some.injEq, ite_eq_right_iff, reduceCtorEq,
imp_false, Bool.not_eq_true, Prod.forall, exists_and_right, Prod.exists] at h
obtain ⟨h, h₁, b, es, h₂, hb, rfl, h₃⟩ := h
rw [enum_eq_enumFrom, enumFrom_eq_append_iff] at h₂
rw [zipIdx_eq_append_iff] at h₂
obtain ⟨l₁', l₂', rfl, rfl, h₂⟩ := h₂
rw [eq_comm, enumFrom_eq_cons_iff] at h₂
rw [eq_comm, zipIdx_eq_cons_iff] at h₂
obtain ⟨a, as, rfl, h₂, rfl⟩ := h₂
simp only [Nat.zero_add, Prod.mk.injEq] at h₂
obtain ⟨rfl, rfl⟩ := h₂

View File

@@ -76,6 +76,12 @@ theorem eraseIdx_modifyHead_zero {f : α → α} {l : List α} :
@[simp] theorem modifyHead_id : modifyHead (id : α → α) = id := by funext l; cases l <;> simp
@[simp] theorem modifyHead_dropLast {l : List α} {f : α → α} :
l.dropLast.modifyHead f = (l.modifyHead f).dropLast := by
rcases l with _|⟨a, l⟩
· simp
· rcases l with _|⟨b, l⟩ <;> simp
/-! ### modifyTailIdx -/
@[simp] theorem modifyTailIdx_id : ∀ n (l : List α), l.modifyTailIdx id n = l

View File

@@ -195,24 +195,32 @@ theorem erase_range : (range n).erase i = range (min n i) ++ range' (i + 1) (n -
/-! ### iota -/
section
set_option linter.deprecated false
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem iota_eq_reverse_range' : ∀ n : Nat, iota n = reverse (range' 1 n)
| 0 => rfl
| n + 1 => by simp [iota, range'_concat, iota_eq_reverse_range' n, reverse_append, Nat.add_comm]
@[simp] theorem length_iota (n : Nat) : length (iota n) = n := by simp [iota_eq_reverse_range']
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem length_iota (n : Nat) : length (iota n) = n := by simp [iota_eq_reverse_range']
@[simp] theorem iota_eq_nil {n : Nat} : iota n = [] ↔ n = 0 := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem iota_eq_nil {n : Nat} : iota n = [] ↔ n = 0 := by
cases n <;> simp
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem iota_ne_nil {n : Nat} : iota n ≠ [] ↔ n ≠ 0 := by
cases n <;> simp
@[simp]
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem mem_iota {m n : Nat} : m ∈ iota n ↔ 0 < m ∧ m ≤ n := by
simp [iota_eq_reverse_range', Nat.add_comm, Nat.lt_succ]
omega
@[simp] theorem iota_inj : iota n = iota n' ↔ n = n' := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem iota_inj : iota n = iota n' ↔ n = n' := by
constructor
· intro h
have h' := congrArg List.length h
@@ -221,6 +229,7 @@ theorem mem_iota {m n : Nat} : m ∈ iota n ↔ 0 < m ∧ m ≤ n := by
· rintro rfl
simp
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem iota_eq_cons_iff : iota n = a :: xs ↔ n = a ∧ 0 < n ∧ xs = iota (n - 1) := by
simp [iota_eq_reverse_range']
simp [range'_eq_append_iff, reverse_eq_iff]
@@ -234,6 +243,7 @@ theorem iota_eq_cons_iff : iota n = a :: xs ↔ n = a ∧ 0 < n ∧ xs = iota (n
rw [eq_comm, range'_eq_singleton]
omega
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem iota_eq_append_iff : iota n = xs ++ ys ↔ ∃ k, k ≤ n ∧ xs = (range' (k + 1) (n - k)).reverse ∧ ys = iota k := by
simp only [iota_eq_reverse_range']
rw [reverse_eq_append_iff]
@@ -245,42 +255,52 @@ theorem iota_eq_append_iff : iota n = xs ++ ys ↔ ∃ k, k ≤ n ∧ xs = (rang
· rintro ⟨k, h, rfl, rfl⟩
exact ⟨k, by simp; omega⟩
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem pairwise_gt_iota (n : Nat) : Pairwise (· > ·) (iota n) := by
simpa only [iota_eq_reverse_range', pairwise_reverse] using pairwise_lt_range' 1 n
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem nodup_iota (n : Nat) : Nodup (iota n) :=
(pairwise_gt_iota n).imp Nat.ne_of_gt
@[simp] theorem head?_iota (n : Nat) : (iota n).head? = if n = 0 then none else some n := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem head?_iota (n : Nat) : (iota n).head? = if n = 0 then none else some n := by
cases n <;> simp
@[simp] theorem head_iota (n : Nat) (h) : (iota n).head h = n := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem head_iota (n : Nat) (h) : (iota n).head h = n := by
cases n with
| zero => simp at h
| succ n => simp
@[simp] theorem tail_iota (n : Nat) : (iota n).tail = iota (n - 1) := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem tail_iota (n : Nat) : (iota n).tail = iota (n - 1) := by
cases n <;> simp
@[simp] theorem reverse_iota : reverse (iota n) = range' 1 n := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem reverse_iota : reverse (iota n) = range' 1 n := by
induction n with
| zero => simp
| succ n ih =>
rw [iota_succ, reverse_cons, ih, range'_1_concat, Nat.add_comm]
@[simp] theorem getLast?_iota (n : Nat) : (iota n).getLast? = if n = 0 then none else some 1 := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem getLast?_iota (n : Nat) : (iota n).getLast? = if n = 0 then none else some 1 := by
rw [getLast?_eq_head?_reverse]
simp [head?_range']
@[simp] theorem getLast_iota (n : Nat) (h) : (iota n).getLast h = 1 := by
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem getLast_iota (n : Nat) (h) : (iota n).getLast h = 1 := by
rw [getLast_eq_head_reverse]
simp
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20")]
theorem find?_iota_eq_none {n : Nat} {p : Nat → Bool} :
(iota n).find? p = none ↔ ∀ i, 0 < i → i ≤ n → !p i := by
simp
@[simp] theorem find?_iota_eq_some {n : Nat} {i : Nat} {p : Nat → Bool} :
@[deprecated "Use `(List.range' 1 n).reverse` instead of `iota n`." (since := "2025-01-20"), simp]
theorem find?_iota_eq_some {n : Nat} {i : Nat} {p : Nat → Bool} :
(iota n).find? p = some i ↔ p i ∧ i ∈ iota n ∧ ∀ j, i < j → j ≤ n → !p j := by
rw [find?_eq_some_iff_append]
simp only [iota_eq_reverse_range', reverse_eq_append_iff, reverse_cons, append_assoc, cons_append,
@@ -317,25 +337,168 @@ theorem find?_iota_eq_none {n : Nat} {p : Nat → Bool} :
· omega
· omega
/-! ### enumFrom -/
end
/-! ### zipIdx -/
@[simp]
theorem zipIdx_singleton (x : α) (k : Nat) : zipIdx [x] k = [(x, k)] :=
rfl
@[simp] theorem head?_zipIdx (l : List α) (k : Nat) :
(zipIdx l k).head? = l.head?.map fun a => (a, k) := by
simp [head?_eq_getElem?]
@[simp] theorem getLast?_zipIdx (l : List α) (k : Nat) :
(zipIdx l k).getLast? = l.getLast?.map fun a => (a, k + l.length - 1) := by
simp [getLast?_eq_getElem?]
cases l <;> simp; omega
theorem mk_add_mem_zipIdx_iff_getElem? {k i : Nat} {x : α} {l : List α} :
(x, k + i) ∈ zipIdx l k ↔ l[i]? = some x := by
simp [mem_iff_getElem?, and_left_comm]
theorem mk_mem_zipIdx_iff_le_and_getElem?_sub {k i : Nat} {x : α} {l : List α} :
(x, i) ∈ zipIdx l k ↔ k ≤ i ∧ l[i - k]? = some x := by
if h : k ≤ i then
rcases Nat.exists_eq_add_of_le h with ⟨i, rfl⟩
simp [mk_add_mem_zipIdx_iff_getElem?, Nat.add_sub_cancel_left]
else
have : ∀ m, k + m ≠ i := by rintro _ rfl; simp at h
simp [h, mem_iff_get?, this]
/-- Variant of `mk_mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mk_mem_zipIdx_iff_getElem? {i : Nat} {x : α} {l : List α} : (x, i) ∈ zipIdx l ↔ l[i]? = x := by
simp [mk_mem_zipIdx_iff_le_and_getElem?_sub]
theorem mem_zipIdx_iff_le_and_getElem?_sub {x : α × Nat} {l : List α} {k : Nat} :
x ∈ zipIdx l k ↔ k ≤ x.2 ∧ l[x.2 - k]? = some x.1 := by
cases x
simp [mk_mem_zipIdx_iff_le_and_getElem?_sub]
/-- Variant of `mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mem_zipIdx_iff_getElem? {x : α × Nat} {l : List α} : x ∈ zipIdx l ↔ l[x.2]? = some x.1 := by
cases x
simp [mk_mem_zipIdx_iff_le_and_getElem?_sub]
theorem le_snd_of_mem_zipIdx {x : α × Nat} {k : Nat} {l : List α} (h : x ∈ zipIdx l k) :
k ≤ x.2 :=
(mk_mem_zipIdx_iff_le_and_getElem?_sub.1 h).1
theorem snd_lt_add_of_mem_zipIdx {x : α × Nat} {l : List α} {k : Nat} (h : x ∈ zipIdx l k) :
x.2 < k + length l := by
rcases mem_iff_get.1 h with ⟨i, rfl⟩
simpa using i.isLt
theorem snd_lt_of_mem_zipIdx {x : α × Nat} {l : List α} {k : Nat} (h : x ∈ l.zipIdx k) : x.2 < l.length + k := by
simpa [Nat.add_comm] using snd_lt_add_of_mem_zipIdx h
theorem map_zipIdx (f : α → β) (l : List α) (k : Nat) :
map (Prod.map f id) (zipIdx l k) = zipIdx (l.map f) k := by
induction l generalizing k <;> simp_all
theorem fst_mem_of_mem_zipIdx {x : α × Nat} {l : List α} {k : Nat} (h : x ∈ zipIdx l k) : x.1 ∈ l :=
zipIdx_map_fst k l ▸ mem_map_of_mem _ h
theorem fst_eq_of_mem_zipIdx {x : α × Nat} {l : List α} {k : Nat} (h : x ∈ zipIdx l k) :
x.1 = l[x.2 - k]'(by have := le_snd_of_mem_zipIdx h; have := snd_lt_add_of_mem_zipIdx h; omega) := by
induction l generalizing k with
| nil => cases h
| cons hd tl ih =>
cases h with
| head h => simp
| tail h m =>
specialize ih m
have : x.2 - k = x.2 - (k + 1) + 1 := by
have := le_snd_of_mem_zipIdx m
omega
simp [this, ih]
theorem mem_zipIdx {x : α} {i : Nat} {xs : List α} {k : Nat} (h : (x, i) ∈ xs.zipIdx k) :
k ≤ i ∧ i < k + xs.length ∧
x = xs[i - k]'(by have := le_snd_of_mem_zipIdx h; have := snd_lt_add_of_mem_zipIdx h; omega) :=
⟨le_snd_of_mem_zipIdx h, snd_lt_add_of_mem_zipIdx h, fst_eq_of_mem_zipIdx h⟩
/-- Variant of `mem_zipIdx` specialized at `k = 0`. -/
theorem mem_zipIdx' {x : α} {i : Nat} {xs : List α} (h : (x, i) ∈ xs.zipIdx) :
i < xs.length ∧ x = xs[i]'(by have := le_snd_of_mem_zipIdx h; have := snd_lt_add_of_mem_zipIdx h; omega) :=
by simpa using ⟨snd_lt_add_of_mem_zipIdx h, fst_eq_of_mem_zipIdx h⟩
theorem zipIdx_map (l : List α) (k : Nat) (f : α → β) :
zipIdx (l.map f) k = (zipIdx l k).map (Prod.map f id) := by
induction l with
| nil => rfl
| cons hd tl IH =>
rw [map_cons, zipIdx_cons', zipIdx_cons', map_cons, map_map, IH, map_map]
rfl
theorem zipIdx_append (xs ys : List α) (k : Nat) :
zipIdx (xs ++ ys) k = zipIdx xs k ++ zipIdx ys (k + xs.length) := by
induction xs generalizing ys k with
| nil => simp
| cons x xs IH =>
rw [cons_append, zipIdx_cons, IH, cons_append, zipIdx_cons, length, Nat.add_right_comm,
Nat.add_assoc]
theorem zipIdx_eq_cons_iff {l : List α} {k : Nat} :
zipIdx l k = x :: l' ↔ ∃ a as, l = a :: as ∧ x = (a, k) ∧ l' = zipIdx as (k + 1) := by
rw [zipIdx_eq_zip_range', zip_eq_cons_iff]
constructor
· rintro ⟨l₁, l₂, rfl, h, rfl⟩
rw [range'_eq_cons_iff] at h
obtain ⟨rfl, -, rfl⟩ := h
exact ⟨x.1, l₁, by simp [zipIdx_eq_zip_range']⟩
· rintro ⟨a, as, rfl, rfl, rfl⟩
refine ⟨as, range' (k+1) as.length, ?_⟩
simp [zipIdx_eq_zip_range', range'_succ]
theorem zipIdx_eq_append_iff {l : List α} {k : Nat} :
zipIdx l k = l₁ ++ l₂ ↔
∃ l₁' l₂', l = l₁' ++ l₂' ∧ l₁ = zipIdx l₁' k ∧ l₂ = zipIdx l₂' (k + l₁'.length) := by
rw [zipIdx_eq_zip_range', zip_eq_append_iff]
constructor
· rintro ⟨w, x, y, z, h, rfl, h', rfl, rfl⟩
rw [range'_eq_append_iff] at h'
obtain ⟨k, -, rfl, rfl⟩ := h'
simp only [length_range'] at h
obtain rfl := h
refine ⟨w, x, rfl, ?_⟩
simp only [zipIdx_eq_zip_range', length_append, true_and]
congr
omega
· rintro ⟨l₁', l₂', rfl, rfl, rfl⟩
simp only [zipIdx_eq_zip_range']
refine ⟨l₁', l₂', range' k l₁'.length, range' (k + l₁'.length) l₂'.length, ?_⟩
simp [Nat.add_comm]
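For reference, the `zipIdx` convention these lemmas describe pairs each element with its index, element first, with an optional starting offset (a sketch, assuming the `zipIdx` signature introduced in this PR):

```lean
#guard ["a", "b", "c"].zipIdx == [("a", 0), ("b", 1), ("c", 2)]
-- the optional argument shifts every index
#guard ["a", "b"].zipIdx 5 == [("a", 5), ("b", 6)]
```

Note the contrast with the old `enumFrom`, which stored the index first.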
/-! ### enumFrom -/
section
set_option linter.deprecated false
@[deprecated zipIdx_singleton (since := "2025-01-21"), simp]
theorem enumFrom_singleton (x : α) (n : Nat) : enumFrom n [x] = [(n, x)] :=
rfl
@[simp] theorem head?_enumFrom (n : Nat) (l : List α) :
@[deprecated head?_zipIdx (since := "2025-01-21"), simp]
theorem head?_enumFrom (n : Nat) (l : List α) :
(enumFrom n l).head? = l.head?.map fun a => (n, a) := by
simp [head?_eq_getElem?]
@[simp] theorem getLast?_enumFrom (n : Nat) (l : List α) :
@[deprecated getLast?_zipIdx (since := "2025-01-21"), simp]
theorem getLast?_enumFrom (n : Nat) (l : List α) :
(enumFrom n l).getLast? = l.getLast?.map fun a => (n + l.length - 1, a) := by
simp [getLast?_eq_getElem?]
cases l <;> simp; omega
@[deprecated mk_add_mem_zipIdx_iff_getElem? (since := "2025-01-21")]
theorem mk_add_mem_enumFrom_iff_getElem? {n i : Nat} {x : α} {l : List α} :
(n + i, x) ∈ enumFrom n l ↔ l[i]? = some x := by
simp [mem_iff_get?]
@[deprecated mk_mem_zipIdx_iff_le_and_getElem?_sub (since := "2025-01-21")]
theorem mk_mem_enumFrom_iff_le_and_getElem?_sub {n i : Nat} {x : α} {l : List α} :
(i, x) ∈ enumFrom n l ↔ n ≤ i ∧ l[i - n]? = x := by
if h : n ≤ i then
@@ -345,22 +508,27 @@ theorem mk_mem_enumFrom_iff_le_and_getElem?_sub {n i : Nat} {x : α} {l : List
have : ∀ k, n + k ≠ i := by rintro k rfl; simp at h
simp [h, mem_iff_get?, this]
@[deprecated le_snd_of_mem_zipIdx (since := "2025-01-21")]
theorem le_fst_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) :
n ≤ x.1 :=
(mk_mem_enumFrom_iff_le_and_getElem?_sub.1 h).1
@[deprecated snd_lt_add_of_mem_zipIdx (since := "2025-01-21")]
theorem fst_lt_add_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) :
x.1 < n + length l := by
rcases mem_iff_get.1 h with ⟨i, rfl⟩
simpa using i.isLt
@[deprecated map_zipIdx (since := "2025-01-21")]
theorem map_enumFrom (f : α → β) (n : Nat) (l : List α) :
map (Prod.map id f) (enumFrom n l) = enumFrom n (map f l) := by
induction l generalizing n <;> simp_all
@[deprecated fst_mem_of_mem_zipIdx (since := "2025-01-21")]
theorem snd_mem_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) : x.2 ∈ l :=
enumFrom_map_snd n l ▸ mem_map_of_mem _ h
@[deprecated fst_eq_of_mem_zipIdx (since := "2025-01-21")]
theorem snd_eq_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) :
x.2 = l[x.1 - n]'(by have := le_fst_of_mem_enumFrom h; have := fst_lt_add_of_mem_enumFrom h; omega) := by
induction l generalizing n with
@@ -375,11 +543,13 @@ theorem snd_eq_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x
omega
simp [this, ih]
@[deprecated mem_zipIdx (since := "2025-01-21")]
theorem mem_enumFrom {x : α} {i j : Nat} {xs : List α} (h : (i, x) ∈ xs.enumFrom j) :
j ≤ i ∧ i < j + xs.length ∧
x = xs[i - j]'(by have := le_fst_of_mem_enumFrom h; have := fst_lt_add_of_mem_enumFrom h; omega) :=
⟨le_fst_of_mem_enumFrom h, fst_lt_add_of_mem_enumFrom h, snd_eq_of_mem_enumFrom h⟩
@[deprecated zipIdx_map (since := "2025-01-21")]
theorem enumFrom_map (n : Nat) (l : List α) (f : α → β) :
enumFrom n (l.map f) = (enumFrom n l).map (Prod.map id f) := by
induction l with
@@ -388,6 +558,7 @@ theorem enumFrom_map (n : Nat) (l : List α) (f : α → β) :
rw [map_cons, enumFrom_cons', enumFrom_cons', map_cons, map_map, IH, map_map]
rfl
@[deprecated zipIdx_append (since := "2025-01-21")]
theorem enumFrom_append (xs ys : List α) (n : Nat) :
enumFrom n (xs ++ ys) = enumFrom n xs ++ enumFrom (n + xs.length) ys := by
induction xs generalizing ys n with
@@ -396,6 +567,7 @@ theorem enumFrom_append (xs ys : List α) (n : Nat) :
rw [cons_append, enumFrom_cons, IH, cons_append, enumFrom_cons, length, Nat.add_right_comm,
Nat.add_assoc]
@[deprecated zipIdx_eq_cons_iff (since := "2025-01-21")]
theorem enumFrom_eq_cons_iff {l : List α} {n : Nat} :
l.enumFrom n = x :: l' ↔ ∃ a as, l = a :: as ∧ x = (n, a) ∧ l' = enumFrom (n + 1) as := by
rw [enumFrom_eq_zip_range', zip_eq_cons_iff]
@@ -408,6 +580,7 @@ theorem enumFrom_eq_cons_iff {l : List α} {n : Nat} :
refine ⟨range' (n+1) as.length, as, ?_⟩
simp [enumFrom_eq_zip_range', range'_succ]
@[deprecated zipIdx_eq_append_iff (since := "2025-01-21")]
theorem enumFrom_eq_append_iff {l : List α} {n : Nat} :
l.enumFrom n = l₁ ++ l₂ ↔
∃ l₁' l₂', l = l₁' ++ l₂' ∧ l₁ = l₁'.enumFrom n ∧ l₂ = l₂'.enumFrom (n + l₁'.length) := by
@@ -427,89 +600,113 @@ theorem enumFrom_eq_append_iff {l : List α} {n : Nat} :
refine ⟨range' n l₁'.length, range' (n + l₁'.length) l₂'.length, l₁', l₂', ?_⟩
simp [Nat.add_comm]
end
/-! ### enum -/
@[simp]
section
set_option linter.deprecated false
@[deprecated zipIdx_eq_nil_iff (since := "2025-01-21"), simp]
theorem enum_eq_nil_iff {l : List α} : List.enum l = [] ↔ l = [] := enumFrom_eq_nil
@[deprecated enum_eq_nil_iff (since := "2024-11-04")]
@[deprecated zipIdx_eq_nil_iff (since := "2024-11-04")]
theorem enum_eq_nil {l : List α} : List.enum l = [] ↔ l = [] := enum_eq_nil_iff
@[simp] theorem enum_singleton (x : α) : enum [x] = [(0, x)] := rfl
@[deprecated zipIdx_singleton (since := "2025-01-21"), simp]
theorem enum_singleton (x : α) : enum [x] = [(0, x)] := rfl
@[simp] theorem enum_length : (enum l).length = l.length :=
@[deprecated length_zipIdx (since := "2025-01-21"), simp]
theorem enum_length : (enum l).length = l.length :=
enumFrom_length
@[simp]
@[deprecated getElem?_zipIdx (since := "2025-01-21"), simp]
theorem getElem?_enum (l : List α) (n : Nat) : (enum l)[n]? = l[n]?.map fun a => (n, a) := by
rw [enum, getElem?_enumFrom, Nat.zero_add]
@[simp]
@[deprecated getElem_zipIdx (since := "2025-01-21"), simp]
theorem getElem_enum (l : List α) (i : Nat) (h : i < l.enum.length) :
l.enum[i] = (i, l[i]'(by simpa [enum_length] using h)) := by
simp [enum]
@[simp] theorem head?_enum (l : List α) :
@[deprecated head?_zipIdx (since := "2025-01-21"), simp] theorem head?_enum (l : List α) :
l.enum.head? = l.head?.map fun a => (0, a) := by
simp [head?_eq_getElem?]
@[simp] theorem getLast?_enum (l : List α) :
@[deprecated getLast?_zipIdx (since := "2025-01-21"), simp]
theorem getLast?_enum (l : List α) :
l.enum.getLast? = l.getLast?.map fun a => (l.length - 1, a) := by
simp [getLast?_eq_getElem?]
@[simp] theorem tail_enum (l : List α) : (enum l).tail = enumFrom 1 l.tail := by
@[deprecated tail_zipIdx (since := "2025-01-21"), simp]
theorem tail_enum (l : List α) : (enum l).tail = enumFrom 1 l.tail := by
simp [enum]
@[deprecated mk_mem_zipIdx_iff_getElem? (since := "2025-01-21")]
theorem mk_mem_enum_iff_getElem? {i : Nat} {x : α} {l : List α} : (i, x) ∈ enum l ↔ l[i]? = x := by
simp [enum, mk_mem_enumFrom_iff_le_and_getElem?_sub]
@[deprecated mem_zipIdx_iff_getElem? (since := "2025-01-21")]
theorem mem_enum_iff_getElem? {x : Nat × α} {l : List α} : x ∈ enum l ↔ l[x.1]? = some x.2 :=
mk_mem_enum_iff_getElem?
@[deprecated snd_lt_of_mem_zipIdx (since := "2025-01-21")]
theorem fst_lt_of_mem_enum {x : Nat × α} {l : List α} (h : x ∈ enum l) : x.1 < length l := by
simpa using fst_lt_add_of_mem_enumFrom h
@[deprecated fst_mem_of_mem_zipIdx (since := "2025-01-21")]
theorem snd_mem_of_mem_enum {x : Nat × α} {l : List α} (h : x ∈ enum l) : x.2 ∈ l :=
snd_mem_of_mem_enumFrom h
@[deprecated fst_eq_of_mem_zipIdx (since := "2025-01-21")]
theorem snd_eq_of_mem_enum {x : Nat × α} {l : List α} (h : x ∈ enum l) :
x.2 = l[x.1]'(fst_lt_of_mem_enum h) :=
snd_eq_of_mem_enumFrom h
@[deprecated mem_zipIdx (since := "2025-01-21")]
theorem mem_enum {x : α} {i : Nat} {xs : List α} (h : (i, x) ∈ xs.enum) :
i < xs.length ∧ x = xs[i]'(fst_lt_of_mem_enum h) :=
by simpa using mem_enumFrom h
@[deprecated map_zipIdx (since := "2025-01-21")]
theorem map_enum (f : α → β) (l : List α) : map (Prod.map id f) (enum l) = enum (map f l) :=
map_enumFrom f 0 l
@[simp] theorem enum_map_fst (l : List α) : map Prod.fst (enum l) = range l.length := by
@[deprecated zipIdx_map_snd (since := "2025-01-21"), simp]
theorem enum_map_fst (l : List α) : map Prod.fst (enum l) = range l.length := by
simp only [enum, enumFrom_map_fst, range_eq_range']
@[simp]
@[deprecated zipIdx_map_fst (since := "2025-01-21"), simp]
theorem enum_map_snd (l : List α) : map Prod.snd (enum l) = l :=
enumFrom_map_snd _ _
@[deprecated zipIdx_map (since := "2025-01-21")]
theorem enum_map (l : List α) (f : α → β) : (l.map f).enum = l.enum.map (Prod.map id f) :=
enumFrom_map _ _ _
@[deprecated zipIdx_append (since := "2025-01-21")]
theorem enum_append (xs ys : List α) : enum (xs ++ ys) = enum xs ++ enumFrom xs.length ys := by
simp [enum, enumFrom_append]
@[deprecated zipIdx_eq_zip_range' (since := "2025-01-21")]
theorem enum_eq_zip_range (l : List α) : l.enum = (range l.length).zip l :=
zip_of_prod (enum_map_fst _) (enum_map_snd _)
@[simp]
@[deprecated unzip_zipIdx_eq_prod (since := "2025-01-21"), simp]
theorem unzip_enum_eq_prod (l : List α) : l.enum.unzip = (range l.length, l) := by
simp only [enum_eq_zip_range, unzip_zip, length_range]
@[deprecated zipIdx_eq_cons_iff (since := "2025-01-21")]
theorem enum_eq_cons_iff {l : List α} :
l.enum = x :: l' ↔ ∃ a as, l = a :: as ∧ x = (0, a) ∧ l' = enumFrom 1 as := by
rw [enum, enumFrom_eq_cons_iff]
@[deprecated zipIdx_eq_append_iff (since := "2025-01-21")]
theorem enum_eq_append_iff {l : List α} :
l.enum = l₁ ++ l₂ ↔
∃ l₁' l₂', l = l₁' ++ l₂' ∧ l₁ = l₁'.enum ∧ l₂ = l₂'.enumFrom l₁'.length := by
simp [enum, enumFrom_eq_append_iff]
end
end List

View File

@@ -204,17 +204,97 @@ theorem getLast?_range (n : Nat) : (range n).getLast? = if n = 0 then none else
| zero => simp at h
| succ n => simp [getLast?_range, getLast_eq_iff_getLast_eq_some]
/-! ### enumFrom -/
/-! ### zipIdx -/
@[simp]
theorem zipIdx_eq_nil_iff {l : List α} {n : Nat} : List.zipIdx l n = [] ↔ l = [] := by
cases l <;> simp
@[simp] theorem length_zipIdx : ∀ {l : List α} {n}, (zipIdx l n).length = l.length
| [], _ => rfl
| _ :: _, _ => congrArg Nat.succ length_zipIdx
@[simp]
theorem getElem?_zipIdx :
∀ (l : List α) n m, (zipIdx l n)[m]? = l[m]?.map fun a => (a, n + m)
| [], _, _ => rfl
| _ :: _, _, 0 => by simp
| _ :: l, n, m + 1 => by
simp only [zipIdx_cons, getElem?_cons_succ]
exact (getElem?_zipIdx l (n + 1) m).trans <| by rw [Nat.add_right_comm]; rfl
@[simp]
theorem getElem_zipIdx (l : List α) (n) (i : Nat) (h : i < (l.zipIdx n).length) :
(l.zipIdx n)[i] = (l[i]'(by simpa [length_zipIdx] using h), n + i) := by
simp only [length_zipIdx] at h
rw [getElem_eq_getElem?_get]
simp only [getElem?_zipIdx, getElem?_eq_getElem h]
simp
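`getElem?_zipIdx` can be checked on a concrete list (a hedged example, assuming the new `zipIdx`):

```lean
-- the index stored at position `m` is the offset `n` plus `m`
#guard (["a", "b", "c"].zipIdx 10)[1]? == some ("b", 11)
```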
@[simp]
theorem tail_zipIdx (l : List α) (n : Nat) : (zipIdx l n).tail = zipIdx l.tail (n + 1) := by
induction l generalizing n with
| nil => simp
| cons _ l ih => simp [ih, zipIdx_cons]
theorem map_snd_add_zipIdx_eq_zipIdx (l : List α) (n k : Nat) :
map (Prod.map id (· + n)) (zipIdx l k) = zipIdx l (n + k) :=
ext_getElem? fun i => by simp [(· ∘ ·), Nat.add_comm, Nat.add_left_comm]; rfl
theorem zipIdx_cons' (n : Nat) (x : α) (xs : List α) :
zipIdx (x :: xs) n = (x, n) :: (zipIdx xs n).map (Prod.map id (· + 1)) := by
rw [zipIdx_cons, Nat.add_comm, map_snd_add_zipIdx_eq_zipIdx]
@[simp]
theorem zipIdx_map_snd (n) :
∀ (l : List α), map Prod.snd (zipIdx l n) = range' n l.length
| [] => rfl
| _ :: _ => congrArg (cons _) (zipIdx_map_snd _ _)
@[simp]
theorem zipIdx_map_fst : ∀ (n) (l : List α), map Prod.fst (zipIdx l n) = l
| _, [] => rfl
| _, _ :: _ => congrArg (cons _) (zipIdx_map_fst _ _)
theorem zipIdx_eq_zip_range' (l : List α) {n : Nat} : l.zipIdx n = l.zip (range' n l.length) :=
zip_of_prod (zipIdx_map_fst _ _) (zipIdx_map_snd _ _)
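`zipIdx_eq_zip_range'` says a `zipIdx` is just a `zip` with an arithmetic range; concretely (a sketch under the same assumptions as above):

```lean
-- `List.range' 2 3 = [2, 3, 4]`, so both sides pair elements
-- with consecutive indices starting at 2
#guard ["a", "b", "c"].zipIdx 2 == ["a", "b", "c"].zip (List.range' 2 3)
```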
@[simp]
theorem unzip_zipIdx_eq_prod (l : List α) {n : Nat} :
(l.zipIdx n).unzip = (l, range' n l.length) := by
simp only [zipIdx_eq_zip_range', unzip_zip, length_range']
/-- Replace `zipIdx` with a starting index `n+1` with `zipIdx` starting from `n`,
followed by a `map` increasing the indices by one. -/
theorem zipIdx_succ (l : List α) (n : Nat) :
l.zipIdx (n + 1) = (l.zipIdx n).map (fun ⟨a, i⟩ => (a, i + 1)) := by
induction l generalizing n with
| nil => rfl
| cons _ _ ih => simp only [zipIdx_cons, ih (n + 1), map_cons]
/-- Replace `zipIdx` with a starting index with `zipIdx` starting from 0,
followed by a `map` increasing the indices. -/
theorem zipIdx_eq_map_add (l : List α) (n : Nat) :
l.zipIdx n = l.zipIdx.map (fun ⟨a, i⟩ => (a, n + i)) := by
induction l generalizing n with
| nil => rfl
| cons _ _ ih => simp [ih (n+1), zipIdx_succ, Nat.add_assoc, Nat.add_comm 1]
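`zipIdx_eq_map_add` reduces any starting index to the `k = 0` case plus a `map`; for instance (hedged, assuming the new `zipIdx`):

```lean
-- shifting the start by 1 is the same as adding 1 to every index
#guard ["a", "b"].zipIdx 1 == (["a", "b"].zipIdx.map fun (a, i) => (a, 1 + i))
```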
/-! ### enumFrom -/
section
set_option linter.deprecated false
@[deprecated zipIdx_eq_nil_iff (since := "2025-01-21"), simp]
theorem enumFrom_eq_nil {n : Nat} {l : List α} : List.enumFrom n l = [] ↔ l = [] := by
cases l <;> simp
@[simp] theorem enumFrom_length : ∀ {n} {l : List α}, (enumFrom n l).length = l.length
@[deprecated length_zipIdx (since := "2025-01-21"), simp]
theorem enumFrom_length : ∀ {n} {l : List α}, (enumFrom n l).length = l.length
| _, [] => rfl
| _, _ :: _ => congrArg Nat.succ enumFrom_length
@[simp]
@[deprecated getElem?_zipIdx (since := "2025-01-21"), simp]
theorem getElem?_enumFrom :
∀ n (l : List α) m, (enumFrom n l)[m]? = l[m]?.map fun a => (n + m, a)
| _, [], _ => rfl
@@ -223,7 +303,7 @@ theorem getElem?_enumFrom :
simp only [enumFrom_cons, getElem?_cons_succ]
exact (getElem?_enumFrom (n + 1) l m).trans <| by rw [Nat.add_right_comm]; rfl
@[simp]
@[deprecated getElem_zipIdx (since := "2025-01-21"), simp]
theorem getElem_enumFrom (l : List α) (n) (i : Nat) (h : i < (l.enumFrom n).length) :
(l.enumFrom n)[i] = (n + i, l[i]'(by simpa [enumFrom_length] using h)) := by
simp only [enumFrom_length] at h
@@ -231,53 +311,66 @@ theorem getElem_enumFrom (l : List α) (n) (i : Nat) (h : i < (l.enumFrom n).len
simp only [getElem?_enumFrom, getElem?_eq_getElem h]
simp
@[simp]
@[deprecated tail_zipIdx (since := "2025-01-21"), simp]
theorem tail_enumFrom (l : List α) (n : Nat) : (enumFrom n l).tail = enumFrom (n + 1) l.tail := by
induction l generalizing n with
| nil => simp
| cons _ l ih => simp [ih, enumFrom_cons]
@[deprecated map_snd_add_zipIdx_eq_zipIdx (since := "2025-01-21"), simp]
theorem map_fst_add_enumFrom_eq_enumFrom (l : List α) (n k : Nat) :
map (Prod.map (· + n) id) (enumFrom k l) = enumFrom (n + k) l :=
ext_getElem? fun i => by simp [(· ∘ ·), Nat.add_comm, Nat.add_left_comm]; rfl
@[deprecated map_snd_add_zipIdx_eq_zipIdx (since := "2025-01-21"), simp]
theorem map_fst_add_enum_eq_enumFrom (l : List α) (n : Nat) :
map (Prod.map (· + n) id) (enum l) = enumFrom n l :=
map_fst_add_enumFrom_eq_enumFrom l _ _
@[deprecated zipIdx_cons' (since := "2025-01-21"), simp]
theorem enumFrom_cons' (n : Nat) (x : α) (xs : List α) :
enumFrom n (x :: xs) = (n, x) :: (enumFrom n xs).map (Prod.map (· + 1) id) := by
rw [enumFrom_cons, Nat.add_comm, map_fst_add_enumFrom_eq_enumFrom]
@[simp]
@[deprecated zipIdx_map_snd (since := "2025-01-21"), simp]
theorem enumFrom_map_fst (n) :
∀ (l : List α), map Prod.fst (enumFrom n l) = range' n l.length
| [] => rfl
| _ :: _ => congrArg (cons _) (enumFrom_map_fst _ _)
@[simp]
@[deprecated zipIdx_map_fst (since := "2025-01-21"), simp]
theorem enumFrom_map_snd : ∀ (n) (l : List α), map Prod.snd (enumFrom n l) = l
| _, [] => rfl
| _, _ :: _ => congrArg (cons _) (enumFrom_map_snd _ _)
@[deprecated zipIdx_eq_zip_range' (since := "2025-01-21")]
theorem enumFrom_eq_zip_range' (l : List α) {n : Nat} : l.enumFrom n = (range' n l.length).zip l :=
zip_of_prod (enumFrom_map_fst _ _) (enumFrom_map_snd _ _)
@[simp]
@[deprecated unzip_zipIdx_eq_prod (since := "2025-01-21"), simp]
theorem unzip_enumFrom_eq_prod (l : List α) {n : Nat} :
(l.enumFrom n).unzip = (range' n l.length, l) := by
simp only [enumFrom_eq_zip_range', unzip_zip, length_range']
end
/-! ### enum -/
section
set_option linter.deprecated false
@[deprecated zipIdx_cons (since := "2025-01-21")]
theorem enum_cons : (a::as).enum = (0, a) :: as.enumFrom 1 := rfl
@[deprecated zipIdx_cons (since := "2025-01-21")]
theorem enum_cons' (x : α) (xs : List α) :
enum (x :: xs) = (0, x) :: (enum xs).map (Prod.map (· + 1) id) :=
enumFrom_cons' _ _ _
@[deprecated "These are now both `l.zipIdx 0`" (since := "2025-01-21")]
theorem enum_eq_enumFrom {l : List α} : l.enum = l.enumFrom 0 := rfl
@[deprecated "Use the reverse direction of `map_snd_add_zipIdx_eq_zipIdx` instead" (since := "2025-01-21")]
theorem enumFrom_eq_map_enum (l : List α) (n : Nat) :
enumFrom n l = (enum l).map (Prod.map (· + n) id) := by
induction l generalizing n with
@@ -288,4 +381,6 @@ theorem enumFrom_eq_map_enum (l : List α) (n : Nat) :
intro a b _
exact (succ_add a n).symm
end
end List

View File

@@ -73,14 +73,14 @@ termination_by xs => xs.length
/--
Given an ordering relation `le : α → α → Bool`,
construct the reverse lexicographic ordering on `Nat × α`,
which first compares the second components using `le`,
construct the lexicographic ordering on `α × Nat`,
which first compares the first components using `le`,
but if these are equivalent (in the sense `le a.2 b.2 && le b.2 a.2`)
then compares the first components using `≤`.
then compares the second components using `≤`.
This function is only used in stating the stability properties of `mergeSort`.
-/
def enumLE (le : α → α → Bool) (a b : Nat × α) : Bool :=
if le a.2 b.2 then if le b.2 a.2 then a.1 ≤ b.1 else true else false
def zipIdxLE (le : α → α → Bool) (a b : α × Nat) : Bool :=
if le a.1 b.1 then if le b.1 a.1 then a.2 ≤ b.2 else true else false
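To see why the reordered comparison captures stability, here is a local restatement (the name `zipIdxLE'` and the explicit `decide` are ours, to keep the sketch self-contained):

```lean
def zipIdxLE' (le : α → α → Bool) (a b : α × Nat) : Bool :=
  -- compare payloads first; when they tie under `le`,
  -- fall back to comparing the original positions
  if le a.1 b.1 then if le b.1 a.1 then decide (a.2 ≤ b.2) else true else false

-- equal payloads keep their original relative order
#guard zipIdxLE' (fun x y : Nat => decide (x ≤ y)) (5, 0) (5, 1)
#guard !zipIdxLE' (fun x y : Nat => decide (x ≤ y)) (5, 1) (5, 0)
```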
end List

View File

@@ -38,35 +38,35 @@ namespace MergeSort.Internal
theorem splitInTwo_fst_append_splitInTwo_snd (l : { l : List α // l.length = n }) : (splitInTwo l).1.1 ++ (splitInTwo l).2.1 = l.1 := by
simp
theorem splitInTwo_cons_cons_enumFrom_fst (i : Nat) (l : List α) :
(splitInTwo ⟨(i, a) :: (i+1, b) :: l.enumFrom (i+2), rfl⟩).1.1 =
(splitInTwo ⟨a :: b :: l, rfl⟩).1.1.enumFrom i := by
simp only [length_cons, splitInTwo_fst, enumFrom_length]
theorem splitInTwo_cons_cons_zipIdx_fst (i : Nat) (l : List α) :
(splitInTwo ⟨(a, i) :: (b, i+1) :: l.zipIdx (i+2), rfl⟩).1.1 =
(splitInTwo ⟨a :: b :: l, rfl⟩).1.1.zipIdx i := by
simp only [length_cons, splitInTwo_fst, length_zipIdx]
ext1 j
rw [getElem?_take, getElem?_enumFrom, getElem?_take]
rw [getElem?_take, getElem?_zipIdx, getElem?_take]
split
· rw [getElem?_cons, getElem?_cons, getElem?_cons, getElem?_cons]
split
· simp; omega
· split
· simp; omega
· simp only [getElem?_enumFrom]
· simp only [getElem?_zipIdx]
congr
ext <;> simp; omega
· simp
theorem splitInTwo_cons_cons_enumFrom_snd (i : Nat) (l : List α) :
(splitInTwo ⟨(i, a) :: (i+1, b) :: l.enumFrom (i+2), rfl⟩).2.1 =
(splitInTwo ⟨a :: b :: l, rfl⟩).2.1.enumFrom (i+(l.length+3)/2) := by
simp only [length_cons, splitInTwo_snd, enumFrom_length]
theorem splitInTwo_cons_cons_zipIdx_snd (i : Nat) (l : List α) :
(splitInTwo ⟨(a, i) :: (b, i+1) :: l.zipIdx (i+2), rfl⟩).2.1 =
(splitInTwo ⟨a :: b :: l, rfl⟩).2.1.zipIdx (i+(l.length+3)/2) := by
simp only [length_cons, splitInTwo_snd, length_zipIdx]
ext1 j
rw [getElem?_drop, getElem?_enumFrom, getElem?_drop]
rw [getElem?_drop, getElem?_zipIdx, getElem?_drop]
rw [getElem?_cons, getElem?_cons, getElem?_cons, getElem?_cons]
split
· simp; omega
· split
· simp; omega
· simp only [getElem?_enumFrom]
· simp only [getElem?_zipIdx]
congr
ext <;> simp; omega
@@ -88,13 +88,13 @@ end MergeSort.Internal
open MergeSort.Internal
/-! ### enumLE -/
/-! ### zipIdxLE -/
variable {le : α → α → Bool}
theorem enumLE_trans (trans : ∀ a b c, le a b → le b c → le a c)
(a b c : Nat × α) : enumLE le a b → enumLE le b c → enumLE le a c := by
simp only [enumLE]
theorem zipIdxLE_trans (trans : ∀ a b c, le a b → le b c → le a c)
(a b c : α × Nat) : zipIdxLE le a b → zipIdxLE le b c → zipIdxLE le a c := by
simp only [zipIdxLE]
split <;> split <;> split <;> rename_i ab₂ ba₂ bc₂
· simp_all
intro ab₁
@@ -120,14 +120,14 @@ theorem enumLE_trans (trans : ∀ a b c, le a b → le b c → le a c)
· simp_all
· simp_all
theorem enumLE_total (total : ∀ a b, le a b || le b a)
(a b : Nat × α) : enumLE le a b || enumLE le b a := by
simp only [enumLE]
theorem zipIdxLE_total (total : ∀ a b, le a b || le b a)
(a b : α × Nat) : zipIdxLE le a b || zipIdxLE le b a := by
simp only [zipIdxLE]
split <;> split
· simpa using Nat.le_total a.fst b.fst
· simpa using Nat.le_total a.2 b.2
· simp
· simp
· have := total a.2 b.2
· have := total a.1 b.1
simp_all
/-! ### merge -/
@@ -179,12 +179,12 @@ theorem mem_merge_left (s : αα → Bool) (h : x ∈ l) : x ∈ merge l r
theorem mem_merge_right (s : α → α → Bool) (h : x ∈ r) : x ∈ merge l r s :=
mem_merge.2 <| .inr h
theorem merge_stable : ∀ (xs ys) (_ : ∀ x y, x ∈ xs → y ∈ ys → x.1 ≤ y.1),
(merge xs ys (enumLE le)).map (·.2) = merge (xs.map (·.2)) (ys.map (·.2)) le
theorem merge_stable : ∀ (xs ys) (_ : ∀ x y, x ∈ xs → y ∈ ys → x.2 ≤ y.2),
(merge xs ys (zipIdxLE le)).map (·.1) = merge (xs.map (·.1)) (ys.map (·.1)) le
| [], ys, _ => by simp [merge]
| xs, [], _ => by simp [merge]
| (i, x) :: xs, (j, y) :: ys, h => by
simp only [merge, enumLE, map_cons]
simp only [merge, zipIdxLE, map_cons]
split <;> rename_i w
· rw [if_pos (by simp [h _ _ (mem_cons_self ..) (mem_cons_self ..)])]
simp only [map_cons, cons.injEq, true_and]
@@ -331,57 +331,59 @@ See also:
* `sublist_mergeSort`: if `c <+ l` and `c.Pairwise le`, then `c <+ mergeSort le l`.
* `pair_sublist_mergeSort`: if `[a, b] <+ l` and `le a b`, then `[a, b] <+ mergeSort le l`)
-/
theorem mergeSort_enum {l : List α} :
(mergeSort (l.enum) (enumLE le)).map (·.2) = mergeSort l le :=
theorem mergeSort_zipIdx {l : List α} :
(mergeSort (l.zipIdx) (zipIdxLE le)).map (·.1) = mergeSort l le :=
go 0 l
where go : ∀ (i : Nat) (l : List α),
(mergeSort (l.enumFrom i) (enumLE le)).map (·.2) = mergeSort l le
(mergeSort (l.zipIdx i) (zipIdxLE le)).map (·.1) = mergeSort l le
| _, []
| _, [a] => by simp [mergeSort]
| _, a :: b :: xs => by
have : (splitInTwo ⟨a :: b :: xs, rfl⟩).1.1.length < xs.length + 1 + 1 := by simp [splitInTwo_fst]; omega
have : (splitInTwo ⟨a :: b :: xs, rfl⟩).2.1.length < xs.length + 1 + 1 := by simp [splitInTwo_snd]; omega
simp only [mergeSort, enumFrom]
rw [splitInTwo_cons_cons_enumFrom_fst]
rw [splitInTwo_cons_cons_enumFrom_snd]
simp only [mergeSort, zipIdx]
rw [splitInTwo_cons_cons_zipIdx_fst]
rw [splitInTwo_cons_cons_zipIdx_snd]
rw [merge_stable]
· rw [go, go]
· simp only [mem_mergeSort, Prod.forall]
intros j x k y mx my
have := mem_enumFrom mx
have := mem_enumFrom my
have := mem_zipIdx mx
have := mem_zipIdx my
simp_all
omega
termination_by _ l => l.length
@[deprecated mergeSort_zipIdx (since := "2025-01-21")] abbrev mergeSort_enum := @mergeSort_zipIdx
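`mergeSort_zipIdx` is the formal statement behind stability, and the behaviour it certifies is easy to observe (a hedged sketch, assuming `List.mergeSort l le` as used above):

```lean
-- sorting by the first component only: the two pairs with key 1
-- keep their original relative order
#guard List.mergeSort [(1, "b"), (0, "a"), (1, "a")] (fun x y => decide (x.1 ≤ y.1))
    == [(0, "a"), (1, "b"), (1, "a")]
```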
theorem mergeSort_cons {le : α → α → Bool}
(trans : ∀ (a b c : α), le a b → le b c → le a c)
(total : ∀ (a b : α), le a b || le b a)
(a : α) (l : List α) :
∃ l₁ l₂, mergeSort (a :: l) le = l₁ ++ a :: l₂ ∧ mergeSort l le = l₁ ++ l₂ ∧
∀ b, b ∈ l₁ → !le a b := by
rw [← mergeSort_enum]
rw [enum_cons]
have nd : Nodup ((a :: l).enum.map (·.1)) := by rw [enum_map_fst]; exact nodup_range _
have m₁ : (0, a) ∈ mergeSort ((a :: l).enum) (enumLE le) :=
rw [← mergeSort_zipIdx]
rw [zipIdx_cons]
have nd : Nodup ((a :: l).zipIdx.map (·.2)) := by rw [zipIdx_map_snd]; exact nodup_range' _ _
have m₁ : (a, 0) ∈ mergeSort ((a :: l).zipIdx) (zipIdxLE le) :=
mem_mergeSort.mpr (mem_cons_self _ _)
obtain ⟨l₁, l₂, h⟩ := append_of_mem m₁
have s := sorted_mergeSort (enumLE_trans trans) (enumLE_total total) ((a :: l).enum)
have s := sorted_mergeSort (zipIdxLE_trans trans) (zipIdxLE_total total) ((a :: l).zipIdx)
rw [h] at s
have p := mergeSort_perm ((a :: l).enum) (enumLE le)
have p := mergeSort_perm ((a :: l).zipIdx) (zipIdxLE le)
rw [h] at p
refine ⟨l₁.map (·.2), l₂.map (·.2), ?_, ?_, ?_⟩
· simpa using congrArg (·.map (·.2)) h
· rw [← mergeSort_enum.go 1, map_append]
refine ⟨l₁.map (·.1), l₂.map (·.1), ?_, ?_, ?_⟩
· simpa using congrArg (·.map (·.1)) h
· rw [← mergeSort_zipIdx.go 1, map_append]
congr 1
have q : mergeSort (enumFrom 1 l) (enumLE le) ~ l₁ ++ l₂ :=
(mergeSort_perm (enumFrom 1 l) (enumLE le)).trans
have q : mergeSort (l.zipIdx 1) (zipIdxLE le) ~ l₁ ++ l₂ :=
(mergeSort_perm (l.zipIdx 1) (zipIdxLE le)).trans
(p.symm.trans perm_middle).cons_inv
apply Perm.eq_of_sorted (le := enumLE le)
· rintro i, a j, b ha hb
apply Perm.eq_of_sorted (le := zipIdxLE le)
· rintro ⟨a, i⟩ ⟨b, j⟩ ha hb
simp only [mem_mergeSort] at ha
simp only [← q.mem_iff, mem_mergeSort] at hb
simp only [enumLE]
simp only [zipIdxLE]
simp only [Bool.if_false_right, Bool.and_eq_true, Prod.mk.injEq, and_imp]
intro ab h ba h'
simp only [Bool.decide_eq_true] at ba
@@ -389,24 +391,24 @@ theorem mergeSort_cons {le : α → α → Bool}
replace h' : j ≤ i := by simpa [ab, ba] using h'
cases Nat.le_antisymm h h'
constructor
· rfl
· have := mem_enumFrom ha
have := mem_enumFrom hb
· have := mem_zipIdx ha
have := mem_zipIdx hb
simp_all
· exact sorted_mergeSort (enumLE_trans trans) (enumLE_total total) ..
· exact s.sublist ((sublist_cons_self (0, a) l₂).append_left l₁)
· rfl
· exact sorted_mergeSort (zipIdxLE_trans trans) (zipIdxLE_total total) ..
· exact s.sublist ((sublist_cons_self (a, 0) l₂).append_left l₁)
· exact q
· intro b m
simp only [mem_map, Prod.exists, exists_eq_right] at m
obtain ⟨j, m⟩ := m
replace p := p.map (·.1)
simp only [mem_map, Prod.exists] at m
obtain ⟨j, _, m, rfl⟩ := m
replace p := p.map (·.2)
have nd' := nd.perm p.symm
rw [map_append] at nd'
have j0 := nd'.rel_of_mem_append
(mem_map_of_mem (·.1) m) (mem_map_of_mem _ (mem_cons_self _ _))
(mem_map_of_mem (·.2) m) (mem_map_of_mem _ (mem_cons_self _ _))
simp only [ne_eq] at j0
have r := s.rel_of_mem_append m (mem_cons_self _ _)
simp_all [enumLE]
simp_all [zipIdxLE]
/--
Another statement of stability of merge sort.

View File

@@ -238,6 +238,14 @@ theorem map_uncurry_zip_eq_zipWith (f : α → β → γ) (l : List α) (l' : Li
| cons hl tl ih =>
cases l' <;> simp [ih]
theorem map_zip_eq_zipWith (f : α × β → γ) (l : List α) (l' : List β) :
map f (l.zip l') = zipWith (Function.curry f) l l' := by
rw [zip]
induction l generalizing l' with
| nil => simp
| cons hl tl ih =>
cases l' <;> simp [ih]
/-! ### zip -/
theorem zip_eq_zipWith : ∀ (l₁ : List α) (l₂ : List β), zip l₁ l₂ = zipWith Prod.mk l₁ l₂

View File

@@ -718,8 +718,7 @@ theorem Expr.eq_of_toNormPoly_eq (ctx : Context) (e e' : Expr) (h : e.toNormPoly
end Linear
def elimOffset {α : Sort u} (a b k : Nat) (h₁ : a + k = b + k) (h₂ : a = b → α) : α := by
simp_arith at h₁
exact h₂ h₁
def elimOffset {α : Sort u} (a b k : Nat) (h₁ : a + k = b + k) (h₂ : a = b → α) : α :=
h₂ (Nat.add_right_cancel h₁)
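As a quick illustration (a sketch, not part of the diff): the refactored `elimOffset` can be applied with `id` as the continuation to cancel a common offset.

```lean
-- Sketch: cancelling the common offset `k = 5` on both sides.
-- `h₂` expects the cancelled equation, so passing `id` just returns it.
example (a b : Nat) (h : a + 5 = b + 5) : a = b :=
  Nat.elimOffset a b 5 h id
```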
end Nat

View File

@@ -57,11 +57,11 @@ theorem mod_mul_right_div_self (m n k : Nat) : m % (n * k) / n = m / n % k := by
theorem mod_mul_left_div_self (m n k : Nat) : m % (k * n) / n = m / n % k := by
rw [Nat.mul_comm k n, mod_mul_right_div_self]
@[simp 1100]
@[simp]
theorem mod_mul_right_mod (a b c : Nat) : a % (b * c) % b = a % b :=
Nat.mod_mod_of_dvd a (Nat.dvd_mul_right b c)
@[simp 1100]
@[simp]
theorem mod_mul_left_mod (a b c : Nat) : a % (b * c) % c = a % c :=
Nat.mod_mod_of_dvd a (Nat.mul_comm _ _ ▸ Nat.dvd_mul_left c b)

View File

@@ -34,7 +34,7 @@ theorem get_mem : ∀ {o : Option α} (h : isSome o), o.get h ∈ o
theorem get_of_mem : ∀ {o : Option α} (h : isSome o), a ∈ o → o.get h = a
| _, _, rfl => rfl
theorem not_mem_none (a : α) : a ∉ (none : Option α) := nofun
@[simp] theorem not_mem_none (a : α) : a ∉ (none : Option α) := nofun
theorem getD_of_ne_none {x : Option α} (hx : x ≠ none) (y : α) : some (x.getD y) = x := by
cases x; {contradiction}; rw [getD_some]
@@ -655,4 +655,10 @@ theorem map_pmap {p : α → Prop} (g : β → γ) (f : ∀ a, p a → β) (o H)
@[simp] theorem pelim_eq_elim : pelim o b (fun a _ => f a) = o.elim b f := by
cases o <;> simp
@[simp] theorem elim_pmap {p : α → Prop} (f : (a : α) → p a → β) (o : Option α)
(H : ∀ (a : α), a ∈ o → p a) (g : γ) (g' : β → γ) :
(o.pmap f H).elim g g' =
o.pelim g (fun a h => g' (f a (H a h))) := by
cases o <;> simp
end Option

View File

@@ -5,6 +5,7 @@ Authors: Johannes Hölzl
-/
prelude
import Init.Ext
import Init.Core
namespace Subtype

View File

@@ -5,3 +5,7 @@ Authors: Kim Morrison
-/
prelude
import Init.Data.Vector.Basic
import Init.Data.Vector.Lemmas
import Init.Data.Vector.Lex
import Init.Data.Vector.MapIdx
import Init.Data.Vector.Count

View File

@@ -0,0 +1,551 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Vector.Lemmas
import Init.Data.Array.Attach
namespace Vector
/--
`O(n)`. Partial map. If `f : Π a, P a → β` is a partial function defined on
`a : α` satisfying `P`, then `pmap f l h` is essentially the same as `map f l`
but is defined only when all members of `l` satisfy `P`, using the proof
to apply `f`.
We replace this at runtime with a more efficient version via the `csimp` lemma `pmap_eq_pmapImpl`.
-/
def pmap {P : α → Prop} (f : ∀ a, P a → β) (l : Vector α n) (H : ∀ a ∈ l, P a) : Vector β n :=
Vector.mk (l.toArray.pmap f (fun a m => H a (by simpa using m))) (by simp)
/--
Unsafe implementation of `attachWith`, taking advantage of the fact that the representation of
`Vector {x // P x} n` is the same as the input `Vector α n`.
-/
@[inline] private unsafe def attachWithImpl
(xs : Vector α n) (P : α → Prop) (_ : ∀ x ∈ xs, P x) : Vector {x // P x} n := unsafeCast xs
/-- `O(1)`. "Attach" a proof `P x` that holds for all the elements of `xs` to produce a new vector
with the same elements but in the type `{x // P x}`. -/
@[implemented_by attachWithImpl] def attachWith
(xs : Vector α n) (P : α → Prop) (H : ∀ x ∈ xs, P x) : Vector {x // P x} n :=
Vector.mk (xs.toArray.attachWith P fun x h => H x (by simpa using h)) (by simp)
/-- `O(1)`. "Attach" the proof that the elements of `xs` are in `xs` to produce a new vector
with the same elements but in the type `{x // x ∈ xs}`. -/
@[inline] def attach (xs : Vector α n) : Vector {x // x ∈ xs} n := xs.attachWith _ fun _ => id
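As a quick illustration (a hypothetical sketch, not part of the diff): `attach` pairs each element with a proof of membership, which a later step can consume without re-deriving it.

```lean
-- Sketch: map over the attached vector; `_h : x ∈ xs` is available if needed
-- (e.g. to justify termination or an index bound), and is simply ignored here.
example (xs : Vector Nat 3) : Vector Nat 3 :=
  xs.attach.map fun ⟨x, _h⟩ => x + 1
```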
@[simp] theorem attachWith_mk {xs : Array α} {h : xs.size = n} {P : α → Prop} {H : ∀ x ∈ mk xs h, P x} :
(mk xs h).attachWith P H = mk (xs.attachWith P (by simpa using H)) (by simpa using h) := by
simp [attachWith]
@[simp] theorem attach_mk {xs : Array α} {h : xs.size = n} :
(mk xs h).attach = mk (xs.attachWith (· ∈ mk xs h) (by simp)) (by simpa using h) := by
simp [attach]
@[simp] theorem pmap_mk {xs : Array α} {h : xs.size = n} {P : α → Prop} {f : ∀ a, P a → β}
{H : ∀ a ∈ mk xs h, P a} :
(mk xs h).pmap f H = mk (xs.pmap f (by simpa using H)) (by simpa using h) := by
simp [pmap]
@[simp] theorem toArray_attachWith {l : Vector α n} {P : α → Prop} {H : ∀ x ∈ l, P x} :
(l.attachWith P H).toArray = l.toArray.attachWith P (by simpa using H) := by
simp [attachWith]
@[simp] theorem toArray_attach {α : Type _} {l : Vector α n} :
l.attach.toArray = l.toArray.attachWith (· ∈ l) (by simp) := by
simp [attach]
@[simp] theorem toArray_pmap {l : Vector α n} {P : α → Prop} {f : ∀ a, P a → β} {H : ∀ a ∈ l, P a} :
(l.pmap f H).toArray = l.toArray.pmap f (fun a m => H a (by simpa using m)) := by
simp [pmap]
@[simp] theorem toList_attachWith {l : Vector α n} {P : α → Prop} {H : ∀ x ∈ l, P x} :
(l.attachWith P H).toList = l.toList.attachWith P (by simpa using H) := by
simp [attachWith]
@[simp] theorem toList_attach {α : Type _} {l : Vector α n} :
l.attach.toList = l.toList.attachWith (· ∈ l) (by simp) := by
simp [attach]
@[simp] theorem toList_pmap {l : Vector α n} {P : α → Prop} {f : ∀ a, P a → β} {H : ∀ a ∈ l, P a} :
(l.pmap f H).toList = l.toList.pmap f (fun a m => H a (by simpa using m)) := by
simp [pmap]
/-- Implementation of `pmap` using the zero-copy version of `attach`. -/
@[inline] private def pmapImpl {P : α → Prop} (f : ∀ a, P a → β) (l : Vector α n) (H : ∀ a ∈ l, P a) :
Vector β n := (l.attachWith _ H).map fun ⟨x, h'⟩ => f x h'
@[csimp] private theorem pmap_eq_pmapImpl : @pmap = @pmapImpl := by
funext α β n p f L h'
rcases L with ⟨L, rfl⟩
simp only [pmap, pmapImpl, attachWith_mk, map_mk, Array.map_attachWith, eq_mk]
apply Array.pmap_congr_left
intro a m h₁ h₂
congr
@[simp] theorem pmap_empty {P : α → Prop} (f : ∀ a, P a → β) : pmap f #v[] (by simp) = #v[] := rfl
@[simp] theorem pmap_push {P : α → Prop} (f : ∀ a, P a → β) (a : α) (l : Vector α n) (h : ∀ b ∈ l.push a, P b) :
pmap f (l.push a) h =
(pmap f l (fun a m => by simp at h; exact h a (.inl m))).push (f a (h a (by simp))) := by
simp [pmap]
@[simp] theorem attach_empty : (#v[] : Vector α 0).attach = #v[] := rfl
@[simp] theorem attachWith_empty {P : α → Prop} (H : ∀ x ∈ #v[], P x) : (#v[] : Vector α 0).attachWith P H = #v[] := rfl
@[simp]
theorem pmap_eq_map (p : α → Prop) (f : α → β) (l : Vector α n) (H) :
@pmap _ _ _ p (fun a _ => f a) l H = map f l := by
cases l; simp
theorem pmap_congr_left {p q : α → Prop} {f : ∀ a, p a → β} {g : ∀ a, q a → β} (l : Vector α n) {H₁ H₂}
(h : ∀ a ∈ l, ∀ (h₁ h₂), f a h₁ = g a h₂) : pmap f l H₁ = pmap g l H₂ := by
rcases l with ⟨l, rfl⟩
simp only [pmap_mk, eq_mk]
apply Array.pmap_congr_left
simpa using h
theorem map_pmap {p : α → Prop} (g : β → γ) (f : ∀ a, p a → β) (l : Vector α n) (H) :
map g (pmap f l H) = pmap (fun a h => g (f a h)) l H := by
rcases l with ⟨l, rfl⟩
simp [Array.map_pmap]
theorem pmap_map {p : β → Prop} (g : ∀ b, p b → γ) (f : α → β) (l : Vector α n) (H) :
pmap g (map f l) H = pmap (fun a h => g (f a) h) l fun _ h => H _ (mem_map_of_mem _ h) := by
rcases l with ⟨l, rfl⟩
simp [Array.pmap_map]
theorem attach_congr {l₁ l₂ : Vector α n} (h : l₁ = l₂) :
l₁.attach = l₂.attach.map (fun x => ⟨x.1, h ▸ x.2⟩) := by
subst h
simp
theorem attachWith_congr {l₁ l₂ : Vector α n} (w : l₁ = l₂) {P : α → Prop} {H : ∀ x ∈ l₁, P x} :
l₁.attachWith P H = l₂.attachWith P fun _ h => H _ (w ▸ h) := by
subst w
simp
@[simp] theorem attach_push {a : α} {l : Vector α n} :
(l.push a).attach =
(l.attach.map (fun ⟨x, h⟩ => ⟨x, mem_push_of_mem a h⟩)).push ⟨a, by simp⟩ := by
rcases l with ⟨l, rfl⟩
simp [Array.map_attachWith]
@[simp] theorem attachWith_push {a : α} {l : Vector α n} {P : α → Prop} {H : ∀ x ∈ l.push a, P x} :
(l.push a).attachWith P H =
(l.attachWith P (fun x h => by simp at H; exact H x (.inl h))).push ⟨a, H a (by simp)⟩ := by
rcases l with ⟨l, rfl⟩
simp
theorem pmap_eq_map_attach {p : α → Prop} (f : ∀ a, p a → β) (l : Vector α n) (H) :
pmap f l H = l.attach.map fun x => f x.1 (H _ x.2) := by
rcases l with ⟨l, rfl⟩
simp only [pmap_mk, Array.pmap_eq_map_attach, attach_mk, map_mk, eq_mk]
rw [Array.map_attach, Array.map_attachWith]
ext i hi₁ hi₂ <;> simp
@[simp]
theorem pmap_eq_attachWith {p q : α → Prop} (f : ∀ a, p a → q a) (l : Vector α n) (H) :
pmap (fun a h => ⟨a, f a h⟩) l H = l.attachWith q (fun x h => f x (H x h)) := by
cases l
simp
theorem attach_map_coe (l : Vector α n) (f : α → β) :
(l.attach.map fun (i : {i // i ∈ l}) => f i) = l.map f := by
cases l
simp
theorem attach_map_val (l : Vector α n) (f : α → β) : (l.attach.map fun i => f i.val) = l.map f :=
attach_map_coe _ _
theorem attach_map_subtype_val (l : Vector α n) : l.attach.map Subtype.val = l := by
cases l; simp
theorem attachWith_map_coe {p : α → Prop} (f : α → β) (l : Vector α n) (H : ∀ a ∈ l, p a) :
((l.attachWith p H).map fun (i : { i // p i}) => f i) = l.map f := by
cases l; simp
theorem attachWith_map_val {p : α → Prop} (f : α → β) (l : Vector α n) (H : ∀ a ∈ l, p a) :
((l.attachWith p H).map fun i => f i.val) = l.map f :=
attachWith_map_coe _ _ _
theorem attachWith_map_subtype_val {p : α → Prop} (l : Vector α n) (H : ∀ a ∈ l, p a) :
(l.attachWith p H).map Subtype.val = l := by
cases l; simp
@[simp]
theorem mem_attach (l : Vector α n) : ∀ x, x ∈ l.attach
| ⟨a, h⟩ => by
have := mem_map.1 (by rw [attach_map_subtype_val] <;> exact h)
rcases this with ⟨⟨_, _⟩, m, rfl⟩
exact m
@[simp]
theorem mem_attachWith (l : Vector α n) {q : α → Prop} (H) (x : {x // q x}) :
x ∈ l.attachWith q H ↔ x.1 ∈ l := by
rcases l with ⟨l, rfl⟩
simp
@[simp]
theorem mem_pmap {p : α → Prop} {f : ∀ a, p a → β} {l : Vector α n} {H b} :
b ∈ pmap f l H ↔ ∃ (a : _) (h : a ∈ l), f a (H a h) = b := by
simp only [pmap_eq_map_attach, mem_map, mem_attach, true_and, Subtype.exists, eq_comm]
theorem mem_pmap_of_mem {p : α → Prop} {f : ∀ a, p a → β} {l : Vector α n} {H} {a} (h : a ∈ l) :
f a (H a h) ∈ pmap f l H := by
rw [mem_pmap]
exact ⟨a, h, rfl⟩
theorem pmap_eq_self {l : Vector α n} {p : α → Prop} {hp : ∀ (a : α), a ∈ l → p a}
{f : (a : α) → p a → α} : l.pmap f hp = l ↔ ∀ a (h : a ∈ l), f a (hp a h) = a := by
cases l; simp [Array.pmap_eq_self]
@[simp]
theorem getElem?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Vector α n} (h : ∀ a ∈ l, p a) (i : Nat) :
(pmap f l h)[i]? = Option.pmap f l[i]? fun x H => h x (mem_of_getElem? H) := by
cases l; simp
@[simp]
theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : Vector α n} (h : ∀ a ∈ l, p a) {i : Nat}
(hn : i < n) :
(pmap f l h)[i] = f (l[i]) (h _ (by simp)) := by
cases l; simp
@[simp]
theorem getElem?_attachWith {xs : Vector α n} {i : Nat} {P : α → Prop} {H : ∀ a ∈ xs, P a} :
(xs.attachWith P H)[i]? = xs[i]?.pmap Subtype.mk (fun _ a => H _ (mem_of_getElem? a)) :=
getElem?_pmap ..
@[simp]
theorem getElem?_attach {xs : Vector α n} {i : Nat} :
xs.attach[i]? = xs[i]?.pmap Subtype.mk (fun _ a => mem_of_getElem? a) :=
getElem?_attachWith
@[simp]
theorem getElem_attachWith {xs : Vector α n} {P : α → Prop} {H : ∀ a ∈ xs, P a}
{i : Nat} (h : i < n) :
(xs.attachWith P H)[i] = ⟨xs[i]'(by simpa using h), H _ (getElem_mem (by simpa using h))⟩ :=
getElem_pmap _ _ h
@[simp]
theorem getElem_attach {xs : Vector α n} {i : Nat} (h : i < n) :
xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
getElem_attachWith h
@[simp] theorem pmap_attach (l : Vector α n) {p : {x // x ∈ l} → Prop} (f : ∀ a, p a → β) (H) :
pmap f l.attach H =
l.pmap (P := fun a => ∃ h : a ∈ l, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
ext <;> simp
@[simp] theorem pmap_attachWith (l : Vector α n) {p : {x // q x} → Prop} (f : ∀ a, p a → β) (H₁ H₂) :
pmap f (l.attachWith q H₁) H₂ =
l.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
ext <;> simp
theorem foldl_pmap (l : Vector α n) {P : α → Prop} (f : (a : α) → P a → β)
(H : ∀ (a : α), a ∈ l → P a) (g : γ → β → γ) (x : γ) :
(l.pmap f H).foldl g x = l.attach.foldl (fun acc a => g acc (f a.1 (H _ a.2))) x := by
rw [pmap_eq_map_attach, foldl_map]
theorem foldr_pmap (l : Vector α n) {P : α → Prop} (f : (a : α) → P a → β)
(H : ∀ (a : α), a ∈ l → P a) (g : β → γ → γ) (x : γ) :
(l.pmap f H).foldr g x = l.attach.foldr (fun a acc => g (f a.1 (H _ a.2)) acc) x := by
rw [pmap_eq_map_attach, foldr_map]
/--
If we fold over `l.attach` with a function that ignores the membership predicate,
we get the same results as folding over `l` directly.
This is useful when we need to use `attach` to show termination.
Unfortunately this can't be applied by `simp` because of the higher order unification problem,
and even when rewriting we need to specify the function explicitly.
See however `foldl_subtype` below.
-/
theorem foldl_attach (l : Vector α n) (f : β → α → β) (b : β) :
l.attach.foldl (fun acc t => f acc t.1) b = l.foldl f b := by
rcases l with ⟨l, rfl⟩
simp [Array.foldl_attach]
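As a quick illustration (a sketch, not part of the diff): a fold over `l.attach` that ignores the membership proof can be rewritten to a plain fold with `foldl_attach`.

```lean
-- Sketch: the attached fold on the left only uses `t.1`, so `foldl_attach`
-- collapses it to an ordinary `foldl` over `xs`.
example (xs : Vector Nat 5) :
    xs.attach.foldl (fun acc t => acc + t.1) 0 = xs.foldl (· + ·) 0 :=
  Vector.foldl_attach xs (· + ·) 0
```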
/--
If we fold over `l.attach` with a function that ignores the membership predicate,
we get the same results as folding over `l` directly.
This is useful when we need to use `attach` to show termination.
Unfortunately this can't be applied by `simp` because of the higher order unification problem,
and even when rewriting we need to specify the function explicitly.
See however `foldr_subtype` below.
-/
theorem foldr_attach (l : Vector α n) (f : α → β → β) (b : β) :
l.attach.foldr (fun t acc => f t.1 acc) b = l.foldr f b := by
rcases l with ⟨l, rfl⟩
simp [Array.foldr_attach]
theorem attach_map {l : Vector α n} (f : α → β) :
(l.map f).attach = l.attach.map (fun ⟨x, h⟩ => ⟨f x, mem_map_of_mem f h⟩) := by
cases l
ext <;> simp
theorem attachWith_map {l : Vector α n} (f : α → β) {P : β → Prop} {H : ∀ (b : β), b ∈ l.map f → P b} :
(l.map f).attachWith P H = (l.attachWith (P ∘ f) (fun _ h => H _ (mem_map_of_mem f h))).map
fun ⟨x, h⟩ => ⟨f x, h⟩ := by
rcases l with ⟨l, rfl⟩
simp [Array.attachWith_map]
theorem map_attachWith {l : Vector α n} {P : α → Prop} {H : ∀ (a : α), a ∈ l → P a}
(f : { x // P x } → β) :
(l.attachWith P H).map f =
l.pmap (fun a (h : a ∈ l ∧ P a) => f ⟨a, H _ h.1⟩) (fun a h => ⟨h, H a h⟩) := by
cases l
ext <;> simp
/-- See also `pmap_eq_map_attach` for writing `pmap` in terms of `map` and `attach`. -/
theorem map_attach {l : Vector α n} (f : { x // x ∈ l } → β) :
l.attach.map f = l.pmap (fun a h => f ⟨a, h⟩) (fun _ => id) := by
cases l
ext <;> simp
theorem pmap_pmap {p : α → Prop} {q : β → Prop} (g : ∀ a, p a → β) (f : ∀ b, q b → γ) (l : Vector α n) (H₁ H₂) :
pmap f (pmap g l H₁) H₂ =
pmap (α := { x // x ∈ l }) (fun a h => f (g a h) (H₂ (g a h) (mem_pmap_of_mem a.2))) l.attach
(fun a _ => H₁ a a.2) := by
rcases l with ⟨l, rfl⟩
ext <;> simp
@[simp] theorem pmap_append {p : ι → Prop} (f : ∀ a : ι, p a → α) (l₁ : Vector ι n) (l₂ : Vector ι m)
(h : ∀ a ∈ l₁ ++ l₂, p a) :
(l₁ ++ l₂).pmap f h =
(l₁.pmap f fun a ha => h a (mem_append_left l₂ ha)) ++
l₂.pmap f fun a ha => h a (mem_append_right l₁ ha) := by
cases l₁
cases l₂
simp
theorem pmap_append' {p : α → Prop} (f : ∀ a : α, p a → β) (l₁ : Vector α n) (l₂ : Vector α m)
(h₁ : ∀ a ∈ l₁, p a) (h₂ : ∀ a ∈ l₂, p a) :
((l₁ ++ l₂).pmap f fun a ha => (mem_append.1 ha).elim (h₁ a) (h₂ a)) =
l₁.pmap f h₁ ++ l₂.pmap f h₂ :=
pmap_append f l₁ l₂ _
@[simp] theorem attach_append (xs : Vector α n) (ys : Vector α m) :
(xs ++ ys).attach = xs.attach.map (fun ⟨x, h⟩ => (⟨x, mem_append_left ys h⟩ : { x // x ∈ xs ++ ys })) ++
ys.attach.map (fun ⟨y, h⟩ => (⟨y, mem_append_right xs h⟩ : { y // y ∈ xs ++ ys })) := by
rcases xs with ⟨xs, rfl⟩
rcases ys with ⟨ys, rfl⟩
simp [Array.map_attachWith]
@[simp] theorem attachWith_append {P : α → Prop} {xs : Vector α n} {ys : Vector α m}
{H : ∀ (a : α), a ∈ xs ++ ys → P a} :
(xs ++ ys).attachWith P H = xs.attachWith P (fun a h => H a (mem_append_left ys h)) ++
ys.attachWith P (fun a h => H a (mem_append_right xs h)) := by
simp [attachWith, attach_append, map_pmap, pmap_append]
@[simp] theorem pmap_reverse {P : α → Prop} (f : (a : α) → P a → β) (xs : Vector α n)
(H : ∀ (a : α), a ∈ xs.reverse → P a) :
xs.reverse.pmap f H = (xs.pmap f (fun a h => H a (by simpa using h))).reverse := by
induction xs <;> simp_all
theorem reverse_pmap {P : α → Prop} (f : (a : α) → P a → β) (xs : Vector α n)
(H : ∀ (a : α), a ∈ xs → P a) :
(xs.pmap f H).reverse = xs.reverse.pmap f (fun a h => H a (by simpa using h)) := by
rw [pmap_reverse]
@[simp] theorem attachWith_reverse {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs.reverse → P a} :
xs.reverse.attachWith P H =
(xs.attachWith P (fun a h => H a (by simpa using h))).reverse := by
cases xs
simp
theorem reverse_attachWith {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).reverse = (xs.reverse.attachWith P (fun a h => H a (by simpa using h))) := by
cases xs
simp
@[simp] theorem attach_reverse (xs : Vector α n) :
xs.reverse.attach = xs.attach.reverse.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
cases xs
rw [attach_congr (reverse_mk ..)]
simp [Array.map_attachWith]
theorem reverse_attach (xs : Vector α n) :
xs.attach.reverse = xs.reverse.attach.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
cases xs
simp [Array.map_attachWith]
@[simp] theorem back?_pmap {P : α → Prop} (f : (a : α) → P a → β) (xs : Vector α n)
(H : ∀ (a : α), a ∈ xs → P a) :
(xs.pmap f H).back? = xs.attach.back?.map fun ⟨a, m⟩ => f a (H a m) := by
cases xs
simp
@[simp] theorem back?_attachWith {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).back? = xs.back?.pbind (fun a h => some ⟨a, H _ (mem_of_back? h)⟩) := by
cases xs
simp
@[simp]
theorem back?_attach {xs : Vector α n} :
xs.attach.back? = xs.back?.pbind fun a h => some ⟨a, mem_of_back? h⟩ := by
cases xs
simp
@[simp]
theorem countP_attach (l : Vector α n) (p : α → Bool) :
l.attach.countP (fun a : {x // x ∈ l} => p a) = l.countP p := by
cases l
simp [Function.comp_def]
@[simp]
theorem countP_attachWith {p : α → Prop} (l : Vector α n) (H : ∀ a ∈ l, p a) (q : α → Bool) :
(l.attachWith p H).countP (fun a : {x // p x} => q a) = l.countP q := by
cases l
simp
@[simp]
theorem count_attach [DecidableEq α] (l : Vector α n) (a : {x // x ∈ l}) :
l.attach.count a = l.count a := by
rcases l with ⟨l, rfl⟩
simp
@[simp]
theorem count_attachWith [DecidableEq α] {p : α → Prop} (l : Vector α n) (H : ∀ a ∈ l, p a) (a : {x // p x}) :
(l.attachWith p H).count a = l.count a := by
cases l
simp
@[simp] theorem countP_pmap {p : α → Prop} (g : ∀ a, p a → β) (f : β → Bool) (l : Vector α n) (H₁) :
(l.pmap g H₁).countP f =
l.attach.countP (fun ⟨a, m⟩ => f (g a (H₁ a m))) := by
rcases l with ⟨l, rfl⟩
simp only [pmap_mk, countP_mk, Array.countP_pmap]
simp [Array.countP_eq_size_filter]
/-! ## unattach
`Vector.unattach` is the (one-sided) inverse of `Vector.attach`. It is a synonym for `Vector.map Subtype.val`.
We use it by providing a simp lemma `l.attach.unattach = l`, and simp lemmas which recognize higher order
functions applied to `l : Vector { x // p x }` which only depend on the value, not the predicate, and rewrite these
in terms of a simpler function applied to `l.unattach`.
Further, we provide simp lemmas that push `unattach` inwards.
-/
/--
A synonym for `l.map (·.val)`. Mostly this should not be needed by users.
It is introduced as an intermediate step by lemmas such as `map_subtype`,
and is ideally subsequently simplified away by `unattach_attach`.
If not, usually the right approach is `simp [Vector.unattach, -Vector.map_subtype]` to unfold.
-/
def unattach {α : Type _} {p : α → Prop} (l : Vector { x // p x } n) : Vector α n := l.map (·.val)
@[simp] theorem unattach_nil {p : α → Prop} : (#v[] : Vector { x // p x } 0).unattach = #v[] := rfl
@[simp] theorem unattach_push {p : α → Prop} {a : { x // p x }} {l : Vector { x // p x } n} :
(l.push a).unattach = l.unattach.push a.1 := by
simp only [unattach, Vector.map_push]
@[simp] theorem unattach_mk {p : α → Prop} {l : Array { x // p x }} {h : l.size = n} :
(mk l h).unattach = mk l.unattach (by simpa using h) := by
simp [unattach]
@[simp] theorem toArray_unattach {p : α → Prop} {l : Vector { x // p x } n} :
l.unattach.toArray = l.toArray.unattach := by
simp [unattach]
@[simp] theorem toList_unattach {p : α → Prop} {l : Array { x // p x }} :
l.unattach.toList = l.toList.unattach := by
simp [unattach]
@[simp] theorem unattach_attach {l : Vector α n} : l.attach.unattach = l := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem unattach_attachWith {p : α → Prop} {l : Vector α n}
{H : ∀ a ∈ l, p a} :
(l.attachWith p H).unattach = l := by
cases l
simp
@[simp] theorem getElem?_unattach {p : α → Prop} {l : Vector { x // p x } n} (i : Nat) :
l.unattach[i]? = l[i]?.map Subtype.val := by
simp [unattach]
@[simp] theorem getElem_unattach
{p : α → Prop} {l : Vector { x // p x } n} (i : Nat) (h : i < n) :
l.unattach[i] = (l[i]'(by simpa using h)).1 := by
simp [unattach]
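As a quick illustration (a sketch, not part of the diff): `unattach` drops the attached proofs again, and `unattach_attach` says the round trip through `attach` is the identity.

```lean
-- Sketch: attaching membership proofs and immediately forgetting them
-- recovers the original vector.
example (xs : Vector Nat 4) : xs.attach.unattach = xs :=
  Vector.unattach_attach
```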
/-! ### Recognizing higher order functions using a function that only depends on the value. -/
/--
This lemma identifies folds over vectors of subtypes, where the function only depends on the value, not the proposition,
and simplifies these to the function directly taking the value.
-/
@[simp] theorem foldl_subtype {p : α → Prop} {l : Vector { x // p x } n}
{f : β → { x // p x } → β} {g : β → α → β} {x : β}
{hf : ∀ b x h, f b ⟨x, h⟩ = g b x} :
l.foldl f x = l.unattach.foldl g x := by
rcases l with ⟨l, rfl⟩
simp [Array.foldl_subtype (hf := hf)]
/--
This lemma identifies folds over vectors of subtypes, where the function only depends on the value, not the proposition,
and simplifies these to the function directly taking the value.
-/
@[simp] theorem foldr_subtype {p : α → Prop} {l : Vector { x // p x } n}
{f : { x // p x } → β → β} {g : α → β → β} {x : β}
{hf : ∀ x h b, f ⟨x, h⟩ b = g x b} :
l.foldr f x = l.unattach.foldr g x := by
rcases l with ⟨l, rfl⟩
simp [Array.foldr_subtype (hf := hf)]
/--
This lemma identifies maps over vectors of subtypes, where the function only depends on the value, not the proposition,
and simplifies these to the function directly taking the value.
-/
@[simp] theorem map_subtype {p : α → Prop} {l : Vector { x // p x } n}
{f : { x // p x } → β} {g : α → β} {hf : ∀ x h, f ⟨x, h⟩ = g x} :
l.map f = l.unattach.map g := by
rcases l with ⟨l, rfl⟩
simp [Array.map_subtype (hf := hf)]
/-! ### Simp lemmas pushing `unattach` inwards. -/
@[simp] theorem unattach_reverse {p : α → Prop} {l : Vector { x // p x } n} :
l.reverse.unattach = l.unattach.reverse := by
rcases l with ⟨l, rfl⟩
simp [Array.unattach_reverse]
@[simp] theorem unattach_append {p : α → Prop} {l₁ l₂ : Vector { x // p x } n} :
(l₁ ++ l₂).unattach = l₁.unattach ++ l₂.unattach := by
rcases l₁
rcases l₂
simp
@[simp] theorem unattach_flatten {p : α → Prop} {l : Vector (Vector { x // p x } n) n} :
l.flatten.unattach = (l.map unattach).flatten := by
unfold unattach
cases l using vector₂_induction
simp only [flatten_mk, Array.map_map, Function.comp_apply, Array.map_subtype,
Array.unattach_attach, Array.map_id_fun', id_eq, map_mk, Array.map_flatten, map_subtype,
map_id_fun', unattach_mk, eq_mk]
unfold Array.unattach
rfl
@[simp] theorem unattach_mkVector {p : α → Prop} {n : Nat} {x : { x // p x }} :
(mkVector n x).unattach = mkVector n x.1 := by
simp [unattach]
end Vector

View File

@@ -6,6 +6,7 @@ Authors: Shreyas Srinivas, François G. Dorais, Kim Morrison
prelude
import Init.Data.Array.Lemmas
import Init.Data.Array.MapIdx
import Init.Data.Range
/-!
@@ -90,14 +91,12 @@ of bounds.
/-- The last element of a vector. Panics if the vector is empty. -/
@[inline] def back! [Inhabited α] (v : Vector α n) : α := v.toArray.back!
/-- The last element of a vector, or `none` if the array is empty. -/
/-- The last element of a vector, or `none` if the vector is empty. -/
@[inline] def back? (v : Vector α n) : Option α := v.toArray.back?
/-- The last element of a non-empty vector. -/
@[inline] def back [NeZero n] (v : Vector α n) : α :=
-- TODO: change to just `v[n]`
have : Inhabited α := ⟨v[0]'(Nat.pos_of_neZero n)⟩
v.back!
v[n - 1]'(Nat.sub_one_lt (NeZero.ne n))
/-- The first element of a non-empty vector. -/
@[inline] def head [NeZero n] (v : Vector α n) := v[0]'(Nat.pos_of_neZero n)
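As a quick illustration (a sketch, not part of the diff): with a concrete nonzero length, the `NeZero` instance is found automatically, so `back` and `head` are total.

```lean
-- Sketch: `NeZero 3` is inferred, so no emptiness check is needed.
example : Nat := #v[1, 2, 3].back
example : Nat := #v[1, 2, 3].head
```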
@@ -170,6 +169,15 @@ result is empty. If `stop` is greater than the size of the vector, the size is u
@[inline] def map (f : α → β) (v : Vector α n) : Vector β n :=
⟨v.toArray.map f, by simp⟩
/-- Maps elements of a vector using the function `f`, which also receives the index of the element. -/
@[inline] def mapIdx (f : Nat → α → β) (v : Vector α n) : Vector β n :=
⟨v.toArray.mapIdx f, by simp⟩
/-- Maps elements of a vector using the function `f`,
which also receives the index of the element, and the fact that the index is less than the size of the vector. -/
@[inline] def mapFinIdx (v : Vector α n) (f : (i : Nat) → α → (h : i < n) → β) : Vector β n :=
⟨v.toArray.mapFinIdx (fun i a h => f i a (by simpa [v.size_toArray] using h)), by simp⟩
@[inline] def flatten (v : Vector (Vector α n) m) : Vector α (m * n) :=
⟨(v.toArray.map Vector.toArray).flatten,
by rcases v; simp_all [Function.comp_def, Array.map_const']⟩
@@ -177,6 +185,12 @@ result is empty. If `stop` is greater than the size of the vector, the size is u
@[inline] def flatMap (v : Vector α n) (f : α → Vector β m) : Vector β (n * m) :=
⟨v.toArray.flatMap fun a => (f a).toArray, by simp [Array.map_const']⟩
@[inline] def zipIdx (v : Vector α n) (k : Nat := 0) : Vector (α × Nat) n :=
⟨v.toArray.zipIdx k, by simp⟩
@[deprecated zipIdx (since := "2025-01-21")]
abbrev zipWithIndex := @zipIdx
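As a quick illustration (a sketch, not part of the diff): `zipIdx` pairs each element with its index, and the optional argument shifts the starting index.

```lean
-- Sketch: indices default to starting at 0; passing `k` starts them at `k`.
example : Vector (Nat × Nat) 2 := #v[10, 20].zipIdx
example : Vector (Nat × Nat) 2 := #v[10, 20].zipIdx 5
```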
/-- Maps corresponding elements of two vectors of equal size using the function `f`. -/
@[inline] def zipWith (a : Vector α n) (b : Vector β n) (f : α → β → φ) : Vector φ n :=
⟨Array.zipWith a.toArray b.toArray f, by simp⟩
@@ -301,6 +315,14 @@ no element of the index matches the given value.
@[inline] def all (v : Vector α n) (p : α → Bool) : Bool :=
v.toArray.all p
/-- Count the number of elements of a vector that satisfy the predicate `p`. -/
@[inline] def countP (p : α → Bool) (v : Vector α n) : Nat :=
v.toArray.countP p
/-- Count the number of elements of a vector that are equal to `a`. -/
@[inline] def count [BEq α] (a : α) (v : Vector α n) : Nat :=
v.toArray.count a
/-! ### Lexicographic ordering -/
instance instLT [LT α] : LT (Vector α n) := ⟨fun v w => v.toArray < w.toArray⟩

View File

@@ -0,0 +1,233 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Array.Count
import Init.Data.Vector.Lemmas
/-!
# Lemmas about `Vector.countP` and `Vector.count`.
-/
namespace Vector
open Nat
/-! ### countP -/
section countP
variable (p q : α → Bool)
@[simp] theorem countP_empty : countP p #v[] = 0 := rfl
@[simp] theorem countP_push_of_pos (l : Vector α n) (pa : p a) : countP p (l.push a) = countP p l + 1 := by
rcases l with ⟨l⟩
simp_all
@[simp] theorem countP_push_of_neg (l : Vector α n) (pa : ¬p a) : countP p (l.push a) = countP p l := by
rcases l with ⟨l, rfl⟩
simp_all
theorem countP_push (a : α) (l : Vector α n) : countP p (l.push a) = countP p l + if p a then 1 else 0 := by
rcases l with ⟨l, rfl⟩
simp [Array.countP_push]
@[simp] theorem countP_singleton (a : α) : countP p #v[a] = if p a then 1 else 0 := by
simp [countP_push]
theorem size_eq_countP_add_countP (l : Vector α n) : n = countP p l + countP (fun a => ¬p a) l := by
rcases l with ⟨l, rfl⟩
simp [List.length_eq_countP_add_countP (p := p)]
theorem countP_le_size {l : Vector α n} : countP p l ≤ n := by
rcases l with ⟨l, rfl⟩
simp [Array.countP_le_size (p := p)]
@[simp] theorem countP_append (l₁ : Vector α n) (l₂ : Vector α m) : countP p (l₁ ++ l₂) = countP p l₁ + countP p l₂ := by
cases l₁
cases l₂
simp
@[simp] theorem countP_pos_iff {p} : 0 < countP p l ↔ ∃ a ∈ l, p a := by
cases l
simp
@[simp] theorem one_le_countP_iff {p} : 1 ≤ countP p l ↔ ∃ a ∈ l, p a :=
countP_pos_iff
@[simp] theorem countP_eq_zero {p} : countP p l = 0 ↔ ∀ a ∈ l, ¬p a := by
cases l
simp
@[simp] theorem countP_eq_size {p} : countP p l = l.size ↔ ∀ a ∈ l, p a := by
cases l
simp
@[simp] theorem countP_cast (p : α → Bool) (l : Vector α n) : countP p (l.cast h) = countP p l := by
rcases l with ⟨l, rfl⟩
simp
theorem countP_mkVector (p : α → Bool) (a : α) (n : Nat) :
countP p (mkVector n a) = if p a then n else 0 := by
simp only [mkVector_eq_toVector_mkArray, countP_cast, countP_mk]
simp [Array.countP_mkArray]
theorem boole_getElem_le_countP (p : α → Bool) (l : Vector α n) (i : Nat) (h : i < n) :
(if p l[i] then 1 else 0) ≤ l.countP p := by
rcases l with ⟨l, rfl⟩
simp [Array.boole_getElem_le_countP]
theorem countP_set (p : α → Bool) (l : Vector α n) (i : Nat) (a : α) (h : i < n) :
(l.set i a).countP p = l.countP p - (if p l[i] then 1 else 0) + (if p a then 1 else 0) := by
cases l
simp [Array.countP_set, h]
@[simp] theorem countP_true : (countP fun (_ : α) => true) = (fun (_ : Vector α n) => n) := by
funext l
rw [countP]
simp only [Array.countP_true, l.2]
@[simp] theorem countP_false : (countP fun (_ : α) => false) = (fun (_ : Vector α n) => 0) := by
funext l
simp
@[simp] theorem countP_map (p : β → Bool) (f : α → β) (l : Vector α n) :
countP p (map f l) = countP (p ∘ f) l := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem countP_flatten (l : Vector (Vector α m) n) :
countP p l.flatten = (l.map (countP p)).sum := by
rcases l with ⟨l, rfl⟩
simp [Function.comp_def]
theorem countP_flatMap (p : β → Bool) (l : Vector α n) (f : α → Vector β m) :
countP p (l.flatMap f) = (map (countP p ∘ f) l).sum := by
rcases l with ⟨l, rfl⟩
simp [Array.countP_flatMap, Function.comp_def]
@[simp] theorem countP_reverse (l : Vector α n) : countP p l.reverse = countP p l := by
cases l
simp
variable {p q}
theorem countP_mono_left (h : ∀ x ∈ l, p x → q x) : countP p l ≤ countP q l := by
cases l
simpa using Array.countP_mono_left (by simpa using h)
theorem countP_congr (h : ∀ x ∈ l, p x ↔ q x) : countP p l = countP q l :=
Nat.le_antisymm
(countP_mono_left fun x hx => (h x hx).1)
(countP_mono_left fun x hx => (h x hx).2)
end countP
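As a quick illustration (a sketch, not part of the diff): `countP` counts the entries satisfying a Boolean predicate, which `count` below specializes to equality tests.

```lean
-- Sketch: two of the four entries are even.
#eval #v[1, 2, 3, 4].countP (· % 2 == 0)  -- 2
```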
/-! ### count -/
section count
variable [BEq α]
@[simp] theorem count_empty (a : α) : count a #v[] = 0 := rfl
theorem count_push (a b : α) (l : Vector α n) :
count a (l.push b) = count a l + if b == a then 1 else 0 := by
rcases l with ⟨l, rfl⟩
simp [Array.count_push]
theorem count_eq_countP (a : α) (l : Vector α n) : count a l = countP (· == a) l := rfl
theorem count_eq_countP' {a : α} : count (n := n) a = countP (· == a) := by
funext l
apply count_eq_countP
theorem count_le_size (a : α) (l : Vector α n) : count a l ≤ n := countP_le_size _
theorem count_le_count_push (a b : α) (l : Vector α n) : count a l ≤ count a (l.push b) := by
rcases l with ⟨l, rfl⟩
simp [Array.count_push]
@[simp] theorem count_singleton (a b : α) : count a #v[b] = if b == a then 1 else 0 := by
simp [count_eq_countP]
@[simp] theorem count_append (a : α) (l₁ : Vector α n) (l₂ : Vector α m) :
count a (l₁ ++ l₂) = count a l₁ + count a l₂ :=
countP_append ..
@[simp] theorem count_flatten (a : α) (l : Vector (Vector α m) n) :
count a l.flatten = (l.map (count a)).sum := by
rcases l with ⟨l, rfl⟩
simp [Array.count_flatten, Function.comp_def]
@[simp] theorem count_reverse (a : α) (l : Vector α n) : count a l.reverse = count a l := by
rcases l with ⟨l, rfl⟩
simp
theorem boole_getElem_le_count (a : α) (l : Vector α n) (i : Nat) (h : i < n) :
(if l[i] == a then 1 else 0) ≤ l.count a := by
rcases l with ⟨l, rfl⟩
simp [Array.boole_getElem_le_count, h]
theorem count_set (a b : α) (l : Vector α n) (i : Nat) (h : i < n) :
(l.set i a).count b = l.count b - (if l[i] == b then 1 else 0) + (if a == b then 1 else 0) := by
rcases l with ⟨l, rfl⟩
simp [Array.count_set, h]
@[simp] theorem count_cast (l : Vector α n) : (l.cast h).count a = l.count a := by
rcases l with ⟨l, rfl⟩
simp
variable [LawfulBEq α]
@[simp] theorem count_push_self (a : α) (l : Vector α n) : count a (l.push a) = count a l + 1 := by
rcases l with ⟨l, rfl⟩
simp [Array.count_push_self]
@[simp] theorem count_push_of_ne (h : b ≠ a) (l : Vector α n) : count a (l.push b) = count a l := by
rcases l with ⟨l, rfl⟩
simp [Array.count_push_of_ne, h]
theorem count_singleton_self (a : α) : count a #v[a] = 1 := by simp
@[simp]
theorem count_pos_iff {a : α} {l : Vector α n} : 0 < count a l ↔ a ∈ l := by
rcases l with ⟨l, rfl⟩
simp [Array.count_pos_iff, beq_iff_eq, exists_eq_right]
@[simp] theorem one_le_count_iff {a : α} {l : Vector α n} : 1 ≤ count a l ↔ a ∈ l :=
count_pos_iff
theorem count_eq_zero_of_not_mem {a : α} {l : Vector α n} (h : a ∉ l) : count a l = 0 :=
Decidable.byContradiction fun h' => h <| count_pos_iff.1 (Nat.pos_of_ne_zero h')
theorem not_mem_of_count_eq_zero {a : α} {l : Vector α n} (h : count a l = 0) : a ∉ l :=
fun h' => Nat.ne_of_lt (count_pos_iff.2 h') h.symm
theorem count_eq_zero {l : Vector α n} : count a l = 0 ↔ a ∉ l :=
⟨not_mem_of_count_eq_zero, count_eq_zero_of_not_mem⟩
theorem count_eq_size {l : Vector α n} : count a l = l.size ↔ ∀ b ∈ l, a = b := by
rcases l with ⟨l, rfl⟩
simp [Array.count_eq_size]
@[simp] theorem count_mkVector_self (a : α) (n : Nat) : count a (mkVector n a) = n := by
simp only [mkVector_eq_toVector_mkArray, count_cast, count_mk]
simp
theorem count_mkVector (a b : α) (n : Nat) : count a (mkVector n b) = if b == a then n else 0 := by
simp only [mkVector_eq_toVector_mkArray, count_cast, count_mk]
simp [Array.count_mkArray]
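-- Illustrative sketch (not part of the change set): `count_mkVector_self` at concrete values.
example : count 7 (mkVector 5 7) = 5 := count_mkVector_self 7 5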
theorem count_le_count_map [DecidableEq β] (l : Vector α n) (f : α → β) (x : α) :
count x l ≤ count (f x) (map f l) := by
rcases l with ⟨l, rfl⟩
simp [Array.count_le_count_map]
theorem count_flatMap {α} [BEq β] (l : Vector α n) (f : α → Vector β m) (x : β) :
count x (l.flatMap f) = (map (count x ∘ f) l).sum := by
rcases l with ⟨l, rfl⟩
simp [Array.count_flatMap, Function.comp_def]
end count


@@ -23,7 +23,6 @@ end Array
namespace Vector
/-! ### mk lemmas -/
theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a := rfl
@@ -70,6 +69,10 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem back?_mk (a : Array α) (h : a.size = n) :
(Vector.mk a h).back? = a.back? := rfl
@[simp] theorem back_mk [NeZero n] (a : Array α) (h : a.size = n) :
(Vector.mk a h).back =
a[n - 1]'(Nat.lt_of_lt_of_eq (Nat.sub_one_lt (NeZero.ne n)) h.symm) := rfl
@[simp] theorem foldlM_mk [Monad m] (f : β → α → m β) (b : β) (a : Array α) (h : a.size = n) :
(Vector.mk a h).foldlM f b = a.foldlM f b := rfl
@@ -111,6 +114,13 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem map_mk (a : Array α) (h : a.size = n) (f : α → β) :
(Vector.mk a h).map f = Vector.mk (a.map f) (by simp [h]) := rfl
@[simp] theorem mapIdx_mk (a : Array α) (h : a.size = n) (f : Nat → α → β) :
(Vector.mk a h).mapIdx f = Vector.mk (a.mapIdx f) (by simp [h]) := rfl
@[simp] theorem mapFinIdx_mk (a : Array α) (h : a.size = n) (f : (i : Nat) → α → (h : i < n) → β) :
(Vector.mk a h).mapFinIdx f =
Vector.mk (a.mapFinIdx fun i a h' => f i a (by simpa [h] using h')) (by simp [h]) := rfl
@[simp] theorem reverse_mk (a : Array α) (h : a.size = n) :
(Vector.mk a h).reverse = Vector.mk a.reverse (by simp [h]) := rfl
@@ -141,6 +151,12 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem take_mk (a : Array α) (h : a.size = n) (m) :
(Vector.mk a h).take m = Vector.mk (a.take m) (by simp [h]) := rfl
@[simp] theorem zipIdx_mk (a : Array α) (h : a.size = n) (k : Nat := 0) :
(Vector.mk a h).zipIdx k = Vector.mk (a.zipIdx k) (by simp [h]) := rfl
@[deprecated zipIdx_mk (since := "2025-01-21")]
abbrev zipWithIndex_mk := @zipIdx_mk
@[simp] theorem mk_zipWith_mk (f : α → β → γ) (a : Array α) (b : Array β)
(ha : a.size = n) (hb : b.size = n) : zipWith (Vector.mk a ha) (Vector.mk b hb) f =
Vector.mk (Array.zipWith a b f) (by simp [ha, hb]) := rfl
@@ -157,6 +173,12 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem all_mk (p : α → Bool) (a : Array α) (h : a.size = n) :
(Vector.mk a h).all p = a.all p := rfl
@[simp] theorem countP_mk (p : α → Bool) (a : Array α) (h : a.size = n) :
(Vector.mk a h).countP p = a.countP p := rfl
@[simp] theorem count_mk [BEq α] (a : Array α) (h : a.size = n) (b : α) :
(Vector.mk a h).count b = a.count b := rfl
@[simp] theorem eq_mk : v = Vector.mk a h ↔ v.toArray = a := by
cases v
simp
@@ -204,6 +226,14 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem toArray_map (f : α → β) (a : Vector α n) :
(a.map f).toArray = a.toArray.map f := rfl
@[simp] theorem toArray_mapIdx (f : Nat → α → β) (a : Vector α n) :
(a.mapIdx f).toArray = a.toArray.mapIdx f := rfl
@[simp] theorem toArray_mapFinIdx (f : (i : Nat) → α → (h : i < n) → β) (v : Vector α n) :
(v.mapFinIdx f).toArray =
v.toArray.mapFinIdx (fun i a h => f i a (by simpa [v.size_toArray] using h)) :=
rfl
@[simp] theorem toArray_ofFn (f : Fin n → α) : (Vector.ofFn f).toArray = Array.ofFn f := rfl
@[simp] theorem toArray_pop (a : Vector α n) : a.pop.toArray = a.toArray.pop := rfl
@@ -246,6 +276,9 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
@[simp] theorem toArray_take (a : Vector α n) (m) : (a.take m).toArray = a.toArray.take m := rfl
@[simp] theorem toArray_zipIdx (a : Vector α n) (k : Nat := 0) :
(a.zipIdx k).toArray = a.toArray.zipIdx k := rfl
@[simp] theorem toArray_zipWith (f : α → β → γ) (a : Vector α n) (b : Vector β n) :
(Vector.zipWith a b f).toArray = Array.zipWith a.toArray b.toArray f := rfl
@@ -269,6 +302,16 @@ theorem toArray_mk (a : Array α) (h : a.size = n) : (Vector.mk a h).toArray = a
cases v
simp
@[simp] theorem countP_toArray (p : α → Bool) (v : Vector α n) :
v.toArray.countP p = v.countP p := by
cases v
simp
@[simp] theorem count_toArray [BEq α] (a : α) (v : Vector α n) :
v.toArray.count a = v.count a := by
cases v
simp
@[simp] theorem toArray_mkVector : (mkVector n a).toArray = mkArray n a := rfl
@[simp] theorem toArray_inj {v w : Vector α n} : v.toArray = w.toArray ↔ v = w := by
@@ -298,6 +341,8 @@ protected theorem ext {a b : Vector α n} (h : (i : Nat) → (_ : i < n) → a[i
/-! ### toList -/
theorem toArray_toList (a : Vector α n) : a.toArray.toList = a.toList := rfl
@[simp] theorem getElem_toList {α n} (xs : Vector α n) (i : Nat) (h : i < xs.toList.length) :
xs.toList[i] = xs[i]'(by simpa using h) := by
cases xs
@@ -337,6 +382,14 @@ theorem toList_extract (a : Vector α n) (start stop) :
theorem toList_map (f : α → β) (a : Vector α n) :
(a.map f).toList = a.toList.map f := by simp
theorem toList_mapIdx (f : Nat → α → β) (a : Vector α n) :
(a.mapIdx f).toList = a.toList.mapIdx f := by simp
theorem toList_mapFinIdx (f : (i : Nat) → α → (h : i < n) → β) (v : Vector α n) :
(v.mapFinIdx f).toList =
v.toList.mapFinIdx (fun i a h => f i a (by simpa [v.size_toArray] using h)) := by
simp
theorem toList_ofFn (f : Fin n → α) : (Vector.ofFn f).toList = List.ofFn f := by simp
theorem toList_pop (a : Vector α n) : a.pop.toList = a.toList.dropLast := rfl
@@ -389,6 +442,16 @@ theorem toList_swap (a : Vector α n) (i j) (hi hj) :
cases v
simp
@[simp] theorem countP_toList (p : α → Bool) (v : Vector α n) :
v.toList.countP p = v.countP p := by
cases v
simp
@[simp] theorem count_toList [BEq α] (a : α) (v : Vector α n) :
v.toList.count a = v.count a := by
cases v
simp
@[simp] theorem toList_mkVector : (mkVector n a).toList = List.replicate n a := rfl
theorem toList_inj {v w : Vector α n} : v.toList = w.toList ↔ v = w := by
@@ -468,6 +531,32 @@ theorem exists_push {xs : Vector α (n + 1)} :
theorem singleton_inj : #v[a] = #v[b] ↔ a = b := by
simp
/-! ### cast -/
@[simp] theorem getElem_cast (a : Vector α n) (h : n = m) (i : Nat) (hi : i < m) :
(a.cast h)[i] = a[i] := by
cases a
simp
@[simp] theorem getElem?_cast {l : Vector α n} {m : Nat} {w : n = m} {i : Nat} :
(l.cast w)[i]? = l[i]? := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem mem_cast {a : α} {l : Vector α n} {m : Nat} {w : n = m} :
a ∈ l.cast w ↔ a ∈ l := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem cast_cast {l : Vector α n} {w : n = m} {w' : m = k} :
(l.cast w).cast w' = l.cast (w.trans w') := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem cast_rfl {l : Vector α n} : l.cast rfl = l := by
rcases l with ⟨l, rfl⟩
simp
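-- Illustrative sketch (not part of the change set): membership transported across a
-- `cast` via `mem_cast`.
example (a : α) (l : Vector α n) (w : n = m) (h : a ∈ l) : a ∈ l.cast w :=
mem_cast.mpr h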
/-! ### mkVector -/
@[simp] theorem mkVector_zero : mkVector 0 a = #v[] := rfl
@@ -478,6 +567,13 @@ theorem mkVector_succ : mkVector (n + 1) a = (mkVector n a).push a := by
@[simp] theorem mkVector_inj : mkVector n a = mkVector n b ↔ n = 0 ∨ a = b := by
simp [← toArray_inj, toArray_mkVector, Array.mkArray_inj]
@[simp] theorem _root_.Array.toVector_mkArray (a : α) (n : Nat) :
(Array.mkArray n a).toVector = (mkVector n a).cast (by simp) := rfl
theorem mkVector_eq_toVector_mkArray (a : α) (n : Nat) :
mkVector n a = (Array.mkArray n a).toVector.cast (by simp) := by
simp
/-! ## L[i] and L[i]? -/
@[simp] theorem getElem?_eq_none_iff {a : Vector α n} : a[i]? = none ↔ n ≤ i := by
@@ -686,6 +782,10 @@ theorem getElem?_of_mem {a} {l : Vector α n} (h : a ∈ l) : ∃ i : Nat, l[i]?
theorem mem_of_getElem? {l : Vector α n} {i : Nat} {a : α} (e : l[i]? = some a) : a ∈ l :=
let ⟨_, e⟩ := getElem?_eq_some_iff.1 e; e ▸ getElem_mem ..
theorem mem_of_back? {xs : Vector α n} {a : α} (h : xs.back? = some a) : a ∈ xs := by
cases xs
simpa using Array.mem_of_back? (by simpa using h)
theorem mem_iff_getElem {a} {l : Vector α n} : a ∈ l ↔ ∃ (i : Nat) (h : i < n), l[i]'h = a :=
⟨getElem_of_mem, fun ⟨_, _, e⟩ => e ▸ getElem_mem ..⟩
@@ -697,24 +797,6 @@ theorem forall_getElem {l : Vector α n} {p : α → Prop} :
rcases l with ⟨l, rfl⟩
simp [Array.forall_getElem]
/-! ### cast -/
@[simp] theorem getElem_cast (a : Vector α n) (h : n = m) (i : Nat) (hi : i < m) :
(a.cast h)[i] = a[i] := by
cases a
simp
@[simp] theorem getElem?_cast {l : Vector α n} {m : Nat} {w : n = m} {i : Nat} :
(l.cast w)[i]? = l[i]? := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem mem_cast {a : α} {l : Vector α n} {m : Nat} {w : n = m} :
a ∈ l.cast w ↔ a ∈ l := by
rcases l with ⟨l, rfl⟩
simp
/-! ### Decidability of bounded quantifiers -/
instance {xs : Vector α n} {p : α → Prop} [DecidablePred p] :
@@ -1072,6 +1154,11 @@ theorem mem_setIfInBounds (v : Vector α n) (i : Nat) (hi : i < n) (a : α) :
cases a
simp
@[simp] theorem getElem?_map (f : α → β) (a : Vector α n) (i : Nat) :
(a.map f)[i]? = a[i]?.map f := by
cases a
simp
/-- The empty vector maps to the empty vector. -/
@[simp]
theorem map_empty (f : α → β) : map f #v[] = #v[] := by
@@ -1104,7 +1191,9 @@ theorem map_id'' {f : α → α} (h : ∀ x, f x = x) (l : Vector α n) : map f
theorem map_singleton (f : α → β) (a : α) : map f #v[a] = #v[f a] := rfl
@[simp] theorem mem_map {f : α → β} {l : Vector α n} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by
-- We use a lower priority here as there are more specific lemmas in downstream libraries
-- which should be able to fire first.
@[simp 500] theorem mem_map {f : α → β} {l : Vector α n} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by
cases l
simp
@@ -1248,10 +1337,10 @@ theorem singleton_eq_toVector_singleton (a : α) : #v[a] = #[a].toVector := rfl
cases t
simp
theorem mem_append_left {a : α} {s : Vector α n} {t : Vector α m} (h : a ∈ s) : a ∈ s ++ t :=
theorem mem_append_left {a : α} {s : Vector α n} (t : Vector α m) (h : a ∈ s) : a ∈ s ++ t :=
mem_append.2 (Or.inl h)
theorem mem_append_right {a : α} {s : Vector α n} {t : Vector α m} (h : a ∈ t) : a ∈ s ++ t :=
theorem mem_append_right {a : α} (s : Vector α n) {t : Vector α m} (h : a ∈ t) : a ∈ s ++ t :=
mem_append.2 (Or.inr h)
theorem not_mem_append {a : α} {s : Vector α n} {t : Vector α m} (h₁ : a ∉ s) (h₂ : a ∉ t) :
@@ -1331,7 +1420,7 @@ theorem getElem_of_append {l : Vector α n} {l₁ : Vector α m} {l₂ : Vector
rw [← getElem?_eq_getElem, eq, getElem?_cast, getElem?_append_left (by simp)]
simp
@[simp 1100] theorem append_singleton {a : α} {as : Vector α n} : as ++ #v[a] = as.push a := by
@[simp] theorem append_singleton {a : α} {as : Vector α n} : as ++ #v[a] = as.push a := by
cases as
simp
@@ -1832,6 +1921,222 @@ theorem flatMap_reverse {β} (l : Vector α n) (f : α → Vector β m) :
rw [← toArray_inj]
simp
/-! ### extract -/
@[simp] theorem getElem_extract {as : Vector α n} {start stop : Nat}
(h : i < min stop n - start) :
(as.extract start stop)[i] = as[start + i] := by
rcases as with ⟨as, rfl⟩
simp
theorem getElem?_extract {as : Vector α n} {start stop : Nat} :
(as.extract start stop)[i]? = if i < min stop as.size - start then as[start + i]? else none := by
rcases as with ⟨as, rfl⟩
simp [Array.getElem?_extract]
@[simp] theorem extract_size (as : Vector α n) : as.extract 0 n = as.cast (by simp) := by
rcases as with ⟨as, rfl⟩
simp
theorem extract_empty (start stop : Nat) :
(#v[] : Vector α 0).extract start stop = #v[].cast (by simp) := by
simp
/-! ### foldlM and foldrM -/
@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l : Vector α n) (l' : Vector α k) :
(l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
rcases l with ⟨l, rfl⟩
rcases l' with ⟨l', rfl⟩
simp
@[simp] theorem foldlM_empty [Monad m] (f : β → α → m β) (init : β) :
foldlM f init #v[] = return init := by
simp [foldlM]
@[simp] theorem foldrM_empty [Monad m] (f : α → β → m β) (init : β) :
foldrM f init #v[] = return init := by
simp [foldrM]
@[simp] theorem foldlM_push [Monad m] [LawfulMonad m] (l : Vector α n) (a : α) (f : β → α → m β) (b) :
(l.push a).foldlM f b = l.foldlM f b >>= fun b => f b a := by
rcases l with ⟨l, rfl⟩
simp
theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Vector α n) :
l.foldl f b = l.foldlM (m := Id) f b := by
rcases l with ⟨l, rfl⟩
simp [Array.foldl_eq_foldlM]
theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Vector α n) :
l.foldr f b = l.foldrM (m := Id) f b := by
rcases l with ⟨l, rfl⟩
simp [Array.foldr_eq_foldrM]
@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Vector α n) :
Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm
@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Vector α n) :
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
@[simp] theorem foldlM_reverse [Monad m] (l : Vector α n) (f : β → α → m β) (b) :
l.reverse.foldlM f b = l.foldrM (fun x y => f y x) b := by
rcases l with ⟨l, rfl⟩
simp [Array.foldlM_reverse]
@[simp] theorem foldrM_reverse [Monad m] (l : Vector α n) (f : α → β → m β) (b) :
l.reverse.foldrM f b = l.foldlM (fun x y => f y x) b := by
rcases l with ⟨l, rfl⟩
simp
@[simp] theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (arr : Vector α n) (a : α) :
(arr.push a).foldrM f init = f a init >>= arr.foldrM f := by
rcases arr with ⟨arr, rfl⟩
simp [Array.foldrM_push]
/-! ### foldl / foldr -/
@[congr]
theorem foldl_congr {as bs : Vector α n} (h₀ : as = bs) {f g : β → α → β} (h₁ : f = g)
{a b : β} (h₂ : a = b) :
as.foldl f a = bs.foldl g b := by
congr
@[congr]
theorem foldr_congr {as bs : Vector α n} (h₀ : as = bs) {f g : α → β → β} (h₁ : f = g)
{a b : β} (h₂ : a = b) :
as.foldr f a = bs.foldr g b := by
congr
@[simp] theorem foldr_push (f : α → β → β) (init : β) (arr : Vector α n) (a : α) :
(arr.push a).foldr f init = arr.foldr f (f a init) := by
rcases arr with ⟨arr, rfl⟩
simp [Array.foldr_push]
theorem foldl_map (f : β₁ → β₂) (g : α → β₂ → α) (l : Vector β₁ n) (init : α) :
(l.map f).foldl g init = l.foldl (fun x y => g x (f y)) init := by
cases l; simp [Array.foldl_map']
theorem foldr_map (f : α₁ → α₂) (g : α₂ → β → β) (l : Vector α₁ n) (init : β) :
(l.map f).foldr g init = l.foldr (fun x y => g (f x) y) init := by
cases l; simp [Array.foldr_map']
theorem foldl_filterMap (f : α → Option β) (g : γ → β → γ) (l : Vector α n) (init : γ) :
(l.filterMap f).foldl g init = l.foldl (fun x y => match f y with | some b => g x b | none => x) init := by
rcases l with ⟨l, rfl⟩
simp [Array.foldl_filterMap']
rfl
theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : Vector α n) (init : γ) :
(l.filterMap f).foldr g init = l.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
rcases l with ⟨l, rfl⟩
simp [Array.foldr_filterMap']
rfl
theorem foldl_map_hom (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : Vector α n)
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
(l.map g).foldl f' (g a) = g (l.foldl f a) := by
rcases l with ⟨l, rfl⟩
simp
rw [Array.foldl_map_hom' _ _ _ _ _ h rfl]
theorem foldr_map_hom (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : Vector α n)
(h : ∀ x y, f' (g x) (g y) = g (f x y)) :
(l.map g).foldr f' (g a) = g (l.foldr f a) := by
rcases l with ⟨l, rfl⟩
simp
rw [Array.foldr_map_hom' _ _ _ _ _ h rfl]
@[simp] theorem foldrM_append [Monad m] [LawfulMonad m] (f : α → β → m β) (b) (l : Vector α n) (l' : Vector α k) :
(l ++ l').foldrM f b = l'.foldrM f b >>= l.foldrM f := by
rcases l with ⟨l, rfl⟩
rcases l' with ⟨l', rfl⟩
simp
@[simp] theorem foldl_append {β : Type _} (f : β → α → β) (b) (l : Vector α n) (l' : Vector α k) :
(l ++ l').foldl f b = l'.foldl f (l.foldl f b) := by simp [foldl_eq_foldlM]
@[simp] theorem foldr_append (f : α → β → β) (b) (l : Vector α n) (l' : Vector α k) :
(l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM]
@[simp] theorem foldl_flatten (f : β → α → β) (b : β) (L : Vector (Vector α m) n) :
(flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
cases L using vector₂_induction
simp [Array.foldl_flatten', Array.foldl_map']
@[simp] theorem foldr_flatten (f : α → β → β) (b : β) (L : Vector (Vector α m) n) :
(flatten L).foldr f b = L.foldr (fun l b => l.foldr f b) b := by
cases L using vector₂_induction
simp [Array.foldr_flatten', Array.foldr_map']
@[simp] theorem foldl_reverse (l : Vector α n) (f : β → α → β) (b) :
l.reverse.foldl f b = l.foldr (fun x y => f y x) b := by simp [foldl_eq_foldlM, foldr_eq_foldrM]
@[simp] theorem foldr_reverse (l : Vector α n) (f : α → β → β) (b) :
l.reverse.foldr f b = l.foldl (fun x y => f y x) b :=
(foldl_reverse ..).symm.trans <| by simp
theorem foldl_eq_foldr_reverse (l : Vector α n) (f : β → α → β) (b) :
l.foldl f b = l.reverse.foldr (fun x y => f y x) b := by simp
theorem foldr_eq_foldl_reverse (l : Vector α n) (f : α → β → β) (b) :
l.foldr f b = l.reverse.foldl (fun x y => f y x) b := by simp
theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] {l : Vector α n} {a₁ a₂} :
l.foldl op (op a₁ a₂) = op a₁ (l.foldl op a₂) := by
rcases l with ⟨l, rfl⟩
simp [Array.foldl_assoc]
@[simp] theorem foldr_assoc {op : α → α → α} [ha : Std.Associative op] {l : Vector α n} {a₁ a₂} :
l.foldr op (op a₁ a₂) = op (l.foldr op a₁) a₂ := by
rcases l with ⟨l, rfl⟩
simp [Array.foldr_assoc]
theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : Vector β n) (init : α₁)
(H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
rcases l with ⟨l, rfl⟩
simp
rw [Array.foldl_hom _ _ _ _ _ H]
theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : Vector α n) (init : β₁)
(H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
cases l
simp
rw [Array.foldr_hom _ _ _ _ _ H]
/--
We can prove that two folds over the same array are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
-/
theorem foldl_rel {l : Array α} {f g : β → α → β} {a b : β} (r : β → β → Prop)
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
rcases l with ⟨l⟩
simpa using List.foldl_rel r h (by simpa using h')
/--
We can prove that two folds over the same array are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
-/
theorem foldr_rel {l : Array α} {f g : α → β → β} {a b : β} (r : β → β → Prop)
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
rcases l with ⟨l⟩
simpa using List.foldr_rel r h (by simpa using h')
@[simp] theorem foldl_add_const (l : Array α) (a b : Nat) :
l.foldl (fun x _ => x + a) b = b + a * l.size := by
rcases l with ⟨l⟩
simp
@[simp] theorem foldr_add_const (l : Array α) (a b : Nat) :
l.foldr (fun _ x => x + a) b = b + a * l.size := by
rcases l with ⟨l⟩
simp
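-- Illustrative sketch (not part of the change set): `foldl_add_const` at a concrete step size.
example (l : Array Nat) : l.foldl (fun x _ => x + 2) 3 = 3 + 2 * l.size :=
foldl_add_const l 2 3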
/-! Content below this point has not yet been aligned with `List` and `Array`. -/
@[simp] theorem getElem_ofFn {α n} (f : Fin n → α) (i : Nat) (h : i < n) :
@@ -1860,14 +2165,7 @@ defeq issues in the implicit size argument.
· simp [h]
· replace h : i = v.size - 1 := by rw [size_toArray]; omega
subst h
simp [pop, back, back!, Array.eq_push_pop_back!_of_size_ne_zero]
/-! ### extract -/
@[simp] theorem getElem_extract (a : Vector α n) (start stop) (i : Nat) (hi : i < min stop n - start) :
(a.extract start stop)[i] = a[start + i] := by
cases a
simp
simp [back]
/-! ### zipWith -/
@@ -1877,37 +2175,6 @@ defeq issues in the implicit size argument.
cases b
simp
/-! ### foldlM and foldrM -/
@[simp] theorem foldlM_append [Monad m] [LawfulMonad m] (f : β → α → m β) (b) (l : Vector α n) (l' : Vector α n') :
(l ++ l').foldlM f b = l.foldlM f b >>= l'.foldlM f := by
cases l
cases l'
simp
@[simp] theorem foldrM_push [Monad m] (f : α → β → m β) (init : β) (l : Vector α n) (a : α) :
(l.push a).foldrM f init = f a init >>= l.foldrM f := by
cases l
simp
theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Vector α n) :
l.foldl f b = l.foldlM (m := Id) f b := by
cases l
simp [Array.foldl_eq_foldlM]
theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Vector α n) :
l.foldr f b = l.foldrM (m := Id) f b := by
cases l
simp [Array.foldr_eq_foldrM]
@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Vector α n) :
Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm
@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Vector α n) :
Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm
/-! ### foldl and foldr -/
/-! ### take -/
@[simp] theorem take_size (a : Vector α n) : a.take n = a.cast (by simp) := by


@@ -0,0 +1,366 @@
/-
Copyright (c) 2025 Lean FRO. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
prelude
import Init.Data.Array.MapIdx
import Init.Data.Vector.Lemmas
namespace Vector
/-! ### mapFinIdx -/
@[simp] theorem getElem_mapFinIdx (a : Vector α n) (f : (i : Nat) → α → (h : i < n) → β) (i : Nat)
(h : i < n) :
(a.mapFinIdx f)[i] = f i a[i] h := by
rcases a with ⟨a, rfl⟩
simp
@[simp] theorem getElem?_mapFinIdx (a : Vector α n) (f : (i : Nat) → α → (h : i < n) → β) (i : Nat) :
(a.mapFinIdx f)[i]? =
a[i]?.pbind fun b h => f i b (getElem?_eq_some_iff.1 h).1 := by
simp only [getElem?_def, getElem_mapFinIdx]
split <;> simp_all
/-! ### mapIdx -/
@[simp] theorem getElem_mapIdx (f : Nat → α → β) (a : Vector α n) (i : Nat) (h : i < n) :
(a.mapIdx f)[i] = f i (a[i]'(by simp_all)) := by
rcases a with ⟨a, rfl⟩
simp
@[simp] theorem getElem?_mapIdx (f : Nat → α → β) (a : Vector α n) (i : Nat) :
(a.mapIdx f)[i]? = a[i]?.map (f i) := by
rcases a with ⟨a, rfl⟩
simp
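-- Illustrative sketch (not part of the change set): `getElem_mapIdx` resolving an
-- indexed map at a concrete position.
example (a : Vector Nat 3) (f : Nat → Nat → Nat) : (a.mapIdx f)[1] = f 1 a[1] := by
simp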
end Vector
namespace Array
@[simp] theorem mapFinIdx_toVector (l : Array α) (f : (i : Nat) → α → (h : i < l.size) → β) :
l.toVector.mapFinIdx f = (l.mapFinIdx f).toVector.cast (by simp) := by
ext <;> simp
@[simp] theorem mapIdx_toVector (f : Nat → α → β) (l : Array α) :
l.toVector.mapIdx f = (l.mapIdx f).toVector.cast (by simp) := by
ext <;> simp
end Array
namespace Vector
/-! ### zipIdx -/
@[simp] theorem toList_zipIdx (a : Vector α n) (k : Nat := 0) :
(a.zipIdx k).toList = a.toList.zipIdx k := by
rcases a with ⟨a, rfl⟩
simp
@[simp] theorem getElem_zipIdx (a : Vector α n) (i : Nat) (h : i < n) :
(a.zipIdx k)[i] = (a[i]'(by simp_all), i + k) := by
rcases a with ⟨a, rfl⟩
simp
@[simp] theorem zipIdx_toVector {l : Array α} {k : Nat} :
l.toVector.zipIdx k = (l.zipIdx k).toVector.cast (by simp) := by
ext <;> simp
theorem mk_mem_zipIdx_iff_le_and_getElem?_sub {x : α} {i : Nat} {l : Vector α n} {k : Nat} :
(x, i) ∈ l.zipIdx k ↔ k ≤ i ∧ l[i - k]? = some x := by
rcases l with ⟨l, rfl⟩
simp [Array.mk_mem_zipIdx_iff_le_and_getElem?_sub]
/-- Variant of `mk_mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mk_mem_zipIdx_iff_getElem? {x : α} {i : Nat} {l : Vector α n} :
(x, i) ∈ l.zipIdx ↔ l[i]? = some x := by
rcases l with ⟨l, rfl⟩
simp [Array.mk_mem_zipIdx_iff_le_and_getElem?_sub]
theorem mem_zipIdx_iff_le_and_getElem?_sub {x : α × Nat} {l : Vector α n} {k : Nat} :
x ∈ zipIdx l k ↔ k ≤ x.2 ∧ l[x.2 - k]? = some x.1 := by
cases x
simp [mk_mem_zipIdx_iff_le_and_getElem?_sub]
/-- Variant of `mem_zipIdx_iff_le_and_getElem?_sub` specialized at `k = 0`,
to avoid the inequality and the subtraction. -/
theorem mem_zipIdx_iff_getElem? {x : α × Nat} {l : Vector α n} :
x ∈ l.zipIdx ↔ l[x.2]? = some x.1 := by
rcases l with ⟨l, rfl⟩
simp [Array.mem_zipIdx_iff_getElem?]
@[deprecated toList_zipIdx (since := "2025-01-27")]
abbrev toList_zipWithIndex := @toList_zipIdx
@[deprecated getElem_zipIdx (since := "2025-01-27")]
abbrev getElem_zipWithIndex := @getElem_zipIdx
@[deprecated zipIdx_toVector (since := "2025-01-27")]
abbrev zipWithIndex_toVector := @zipIdx_toVector
@[deprecated mk_mem_zipIdx_iff_le_and_getElem?_sub (since := "2025-01-27")]
abbrev mk_mem_zipWithIndex_iff_le_and_getElem?_sub := @mk_mem_zipIdx_iff_le_and_getElem?_sub
@[deprecated mk_mem_zipIdx_iff_getElem? (since := "2025-01-27")]
abbrev mk_mem_zipWithIndex_iff_getElem? := @mk_mem_zipIdx_iff_getElem?
@[deprecated mem_zipIdx_iff_le_and_getElem?_sub (since := "2025-01-27")]
abbrev mem_zipWithIndex_iff_le_and_getElem?_sub := @mem_zipIdx_iff_le_and_getElem?_sub
@[deprecated mem_zipIdx_iff_getElem? (since := "2025-01-27")]
abbrev mem_zipWithIndex_iff_getElem? := @mem_zipIdx_iff_getElem?
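-- Illustrative sketch (not part of the change set): `getElem_zipIdx` pairing an element
-- with its (offset) index.
example (a : Vector Nat 4) : (a.zipIdx 1)[2] = (a[2], 3) := by
simp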
/-! ### mapFinIdx -/
@[congr] theorem mapFinIdx_congr {xs ys : Vector α n} (w : xs = ys)
(f : (i : Nat) → α → (h : i < n) → β) :
mapFinIdx xs f = mapFinIdx ys f := by
subst w
rfl
@[simp]
theorem mapFinIdx_empty {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx #v[] f = #v[] :=
rfl
theorem mapFinIdx_eq_ofFn {as : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
as.mapFinIdx f = Vector.ofFn fun i : Fin n => f i as[i] i.2 := by
rcases as with ⟨as, rfl⟩
simp [Array.mapFinIdx_eq_ofFn]
theorem mapFinIdx_append {K : Vector α n} {L : Vector α m} {f : (i : Nat) → α → (h : i < n + m) → β} :
(K ++ L).mapFinIdx f =
K.mapFinIdx (fun i a h => f i a (by omega)) ++
L.mapFinIdx (fun i a h => f (i + n) a (by omega)) := by
rcases K with ⟨K, rfl⟩
rcases L with ⟨L, rfl⟩
simp [Array.mapFinIdx_append]
@[simp]
theorem mapFinIdx_push {l : Vector α n} {a : α} {f : (i : Nat) → α → (h : i < n + 1) → β} :
mapFinIdx (l.push a) f =
(mapFinIdx l (fun i a h => f i a (by omega))).push (f l.size a (by simp)) := by
simp [← append_singleton, mapFinIdx_append]
theorem mapFinIdx_singleton {a : α} {f : (i : Nat) → α → (h : i < 1) → β} :
#v[a].mapFinIdx f = #v[f 0 a (by simp)] := by
simp
-- FIXME this lemma can't be stated until we've aligned `List/Array/Vector.attach`:
-- theorem mapFinIdx_eq_zipWithIndex_map {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
-- l.mapFinIdx f = l.zipWithIndex.attach.map
-- fun ⟨⟨x, i⟩, m⟩ =>
-- f i x (by simp [mk_mem_zipWithIndex_iff_getElem?, getElem?_eq_some_iff] at m; exact m.1) := by
-- ext <;> simp
theorem exists_of_mem_mapFinIdx {b : β} {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β}
(h : b ∈ l.mapFinIdx f) : ∃ (i : Nat) (h : i < n), f i l[i] h = b := by
rcases l with ⟨l, rfl⟩
exact List.exists_of_mem_mapFinIdx (by simpa using h)
@[simp] theorem mem_mapFinIdx {b : β} {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
b ∈ l.mapFinIdx f ↔ ∃ (i : Nat) (h : i < n), f i l[i] h = b := by
rcases l with ⟨l, rfl⟩
simp
theorem mapFinIdx_eq_iff {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
l.mapFinIdx f = l' ↔ ∀ (i : Nat) (h : i < n), l'[i] = f i l[i] h := by
rcases l with ⟨l, rfl⟩
rcases l' with ⟨l', h⟩
simp [mapFinIdx_mk, eq_mk, getElem_mk, Array.mapFinIdx_eq_iff, h]
@[simp] theorem mapFinIdx_eq_singleton_iff {l : Vector α 1} {f : (i : Nat) → α → (h : i < 1) → β} {b : β} :
l.mapFinIdx f = #v[b] ↔ ∃ (a : α), l = #v[a] ∧ f 0 a (by omega) = b := by
rcases l with ⟨l, h⟩
simp only [mapFinIdx_mk, eq_mk, Array.mapFinIdx_eq_singleton_iff]
constructor
· rintro ⟨a, rfl, rfl⟩
exact ⟨a, by simp⟩
· rintro ⟨a, rfl, rfl⟩
exact ⟨a, by simp⟩
theorem mapFinIdx_eq_append_iff {l : Vector α (n + m)} {f : (i : Nat) → α → (h : i < n + m) → β}
{l₁ : Vector β n} {l₂ : Vector β m} :
l.mapFinIdx f = l₁ ++ l₂ ↔
∃ (l₁' : Vector α n) (l₂' : Vector α m), l = l₁' ++ l₂' ∧
l₁'.mapFinIdx (fun i a h => f i a (by omega)) = l₁ ∧
l₂'.mapFinIdx (fun i a h => f (i + n) a (by omega)) = l₂ := by
rcases l with ⟨l, h⟩
rcases l₁ with ⟨l₁, rfl⟩
rcases l₂ with ⟨l₂, rfl⟩
simp only [mapFinIdx_mk, mk_append_mk, eq_mk, Array.mapFinIdx_eq_append_iff, toArray_mapFinIdx,
mk_eq, toArray_append, exists_and_left, exists_prop]
constructor
· rintro ⟨l₁', l₂', rfl, h₁, h₂⟩
have h₁' := congrArg Array.size h₁
have h₂' := congrArg Array.size h₂
simp only [Array.size_mapFinIdx] at h₁' h₂'
exact ⟨l₁', h₁', l₂', h₂', by simp_all⟩
· rintro ⟨l₁, s₁, l₂, s₂, rfl, h₁, h₂⟩
refine ⟨l₁, l₂, by simp_all⟩
theorem mapFinIdx_eq_push_iff {l : Vector α (n + 1)} {b : β} {f : (i : Nat) → α → (h : i < n + 1) → β} {l₂ : Vector β n} :
l.mapFinIdx f = l₂.push b ↔
∃ (l₁ : Vector α n) (a : α), l = l₁.push a ∧
l₁.mapFinIdx (fun i a h => f i a (by omega)) = l₂ ∧ b = f n a (by omega) := by
rcases l with ⟨l, h⟩
rcases l₂ with ⟨l₂, rfl⟩
simp only [mapFinIdx_mk, push_mk, eq_mk, Array.mapFinIdx_eq_push_iff, mk_eq, toArray_push,
toArray_mapFinIdx]
constructor
· rintro ⟨l₁, a, rfl, h₁, rfl⟩
simp only [Array.size_push, Nat.add_right_cancel_iff] at h
exact ⟨l₁, h, a, by simp_all⟩
· rintro ⟨l₁, h, a, rfl, h₁, rfl⟩
exact ⟨l₁, a, by simp_all⟩
theorem mapFinIdx_eq_mapFinIdx_iff {l : Vector α n} {f g : (i : Nat) → α → (h : i < n) → β} :
l.mapFinIdx f = l.mapFinIdx g ↔ ∀ (i : Nat) (h : i < n), f i l[i] h = g i l[i] h := by
rw [eq_comm, mapFinIdx_eq_iff]
simp
@[simp] theorem mapFinIdx_mapFinIdx {l : Vector α n}
{f : (i : Nat) → α → (h : i < n) → β}
{g : (i : Nat) → β → (h : i < n) → γ} :
(l.mapFinIdx f).mapFinIdx g = l.mapFinIdx (fun i a h => g i (f i a h) h) := by
simp [mapFinIdx_eq_iff]
theorem mapFinIdx_eq_mkVector_iff {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {b : β} :
l.mapFinIdx f = mkVector n b ↔ ∀ (i : Nat) (h : i < n), f i l[i] h = b := by
rcases l with ⟨l, rfl⟩
simp [Array.mapFinIdx_eq_mkArray_iff]
@[simp] theorem mapFinIdx_reverse {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
l.reverse.mapFinIdx f = (l.mapFinIdx (fun i a h => f (n - 1 - i) a (by omega))).reverse := by
rcases l with ⟨l, rfl⟩
simp
/-! ### mapIdx -/
@[simp]
theorem mapIdx_empty {f : Nat → α → β} : mapIdx f #v[] = #v[] :=
rfl
@[simp] theorem mapFinIdx_eq_mapIdx {l : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {g : Nat → α → β}
(h : ∀ (i : Nat) (h : i < n), f i l[i] h = g i l[i]) :
l.mapFinIdx f = l.mapIdx g := by
simp_all [mapFinIdx_eq_iff]
theorem mapIdx_eq_mapFinIdx {l : Vector α n} {f : Nat → α → β} :
l.mapIdx f = l.mapFinIdx (fun i a _ => f i a) := by
simp [mapFinIdx_eq_mapIdx]
theorem mapIdx_eq_zipIdx_map {l : Vector α n} {f : Nat → α → β} :
l.mapIdx f = l.zipIdx.map fun ⟨a, i⟩ => f i a := by
ext <;> simp
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-27")]
abbrev mapIdx_eq_zipWithIndex_map := @mapIdx_eq_zipIdx_map
theorem mapIdx_append {K : Vector α n} {L : Vector α m} :
(K ++ L).mapIdx f = K.mapIdx f ++ L.mapIdx fun i => f (i + K.size) := by
rcases K with ⟨K, rfl⟩
rcases L with ⟨L, rfl⟩
simp [Array.mapIdx_append]
@[simp]
theorem mapIdx_push {l : Vector α n} {a : α} :
mapIdx f (l.push a) = (mapIdx f l).push (f l.size a) := by
simp [ append_singleton, mapIdx_append]
theorem mapIdx_singleton {a : α} : mapIdx f #v[a] = #v[f 0 a] := by
simp
theorem exists_of_mem_mapIdx {b : β} {l : Vector α n}
    (h : b ∈ l.mapIdx f) : ∃ (i : Nat) (h : i < n), f i l[i] = b := by
  rw [mapIdx_eq_mapFinIdx] at h
  simpa [Fin.exists_iff] using exists_of_mem_mapFinIdx h
@[simp] theorem mem_mapIdx {b : β} {l : Vector α n} :
    b ∈ l.mapIdx f ↔ ∃ (i : Nat) (h : i < n), f i l[i] = b := by
  constructor
  · intro h
    exact exists_of_mem_mapIdx h
  · rintro ⟨i, h, rfl⟩
    rw [mem_iff_getElem]
    exact ⟨i, by simpa using h, by simp⟩
theorem mapIdx_eq_push_iff {l : Vector α (n + 1)} {b : β} :
    mapIdx f l = l₂.push b ↔
      ∃ (a : α) (l₁ : Vector α n), l = l₁.push a ∧ mapIdx f l₁ = l₂ ∧ f l₁.size a = b := by
  rw [mapIdx_eq_mapFinIdx, mapFinIdx_eq_push_iff]
  simp only [mapFinIdx_eq_mapIdx, exists_and_left, exists_prop]
  constructor
  · rintro ⟨l₁, a, rfl, rfl, rfl⟩
    exact ⟨a, l₁, by simp⟩
  · rintro ⟨a, l₁, rfl, rfl, rfl⟩
    exact ⟨l₁, a, rfl, by simp⟩
@[simp] theorem mapIdx_eq_singleton_iff {l : Vector α 1} {f : Nat → α → β} {b : β} :
    mapIdx f l = #v[b] ↔ ∃ (a : α), l = #v[a] ∧ f 0 a = b := by
  rcases l with ⟨l⟩
  simp
theorem mapIdx_eq_append_iff {l : Vector α (n + m)} {f : Nat → α → β} {l₁ : Vector β n} {l₂ : Vector β m} :
    mapIdx f l = l₁ ++ l₂ ↔
      ∃ (l₁' : Vector α n) (l₂' : Vector α m), l = l₁' ++ l₂' ∧
        l₁'.mapIdx f = l₁ ∧
        l₂'.mapIdx (fun i => f (i + l₁'.size)) = l₂ := by
  rcases l with ⟨l, h⟩
  rcases l₁ with ⟨l₁, rfl⟩
  rcases l₂ with ⟨l₂, rfl⟩
  rw [mapIdx_eq_mapFinIdx, mapFinIdx_eq_append_iff]
  simp
theorem mapIdx_eq_iff {l : Vector α n} :
    mapIdx f l = l' ↔ ∀ (i : Nat) (h : i < n), f i l[i] = l'[i] := by
  rcases l with ⟨l, rfl⟩
  rcases l' with ⟨l', h⟩
  simp only [mapIdx_mk, eq_mk, Array.mapIdx_eq_iff, getElem_mk]
  constructor
  · rintro h' i h
    specialize h' i
    simp_all
  · intro h' i
    specialize h' i
    by_cases w : i < l.size
    · specialize h' w
      simp_all
    · simp only [Nat.not_lt] at w
      simp_all [Array.getElem?_eq_none_iff.mpr w]
theorem mapIdx_eq_mapIdx_iff {l : Vector α n} :
    mapIdx f l = mapIdx g l ↔ ∀ (i : Nat) (h : i < n), f i l[i] = g i l[i] := by
  rcases l with ⟨l, rfl⟩
  simp [Array.mapIdx_eq_mapIdx_iff]
@[simp] theorem mapIdx_set {l : Vector α n} {i : Nat} {h : i < n} {a : α} :
    (l.set i a).mapIdx f = (l.mapIdx f).set i (f i a) (by simpa) := by
  rcases l with ⟨l, rfl⟩
  simp
@[simp] theorem mapIdx_setIfInBounds {l : Vector α n} {i : Nat} {a : α} :
    (l.setIfInBounds i a).mapIdx f = (l.mapIdx f).setIfInBounds i (f i a) := by
  rcases l with ⟨l, rfl⟩
  simp
@[simp] theorem back?_mapIdx {l : Vector α n} {f : Nat → α → β} :
    (mapIdx f l).back? = (l.back?).map (f (l.size - 1)) := by
  rcases l with ⟨l, rfl⟩
  simp
@[simp] theorem back_mapIdx [NeZero n] {l : Vector α n} {f : Nat → α → β} :
    (mapIdx f l).back = f (l.size - 1) (l.back) := by
  rcases l with ⟨l, rfl⟩
  simp
@[simp] theorem mapIdx_mapIdx {l : Vector α n} {f : Nat → α → β} {g : Nat → β → γ} :
    (l.mapIdx f).mapIdx g = l.mapIdx (fun i => g i ∘ f i) := by
  simp [mapIdx_eq_iff]
theorem mapIdx_eq_mkVector_iff {l : Vector α n} {f : Nat → α → β} {b : β} :
    mapIdx f l = mkVector n b ↔ ∀ (i : Nat) (h : i < n), f i l[i] = b := by
  rcases l with ⟨l, rfl⟩
  simp [Array.mapIdx_eq_mkArray_iff]
@[simp] theorem mapIdx_reverse {l : Vector α n} {f : Nat → α → β} :
    l.reverse.mapIdx f = (mapIdx (fun i => f (l.size - 1 - i)) l).reverse := by
  rcases l with ⟨l, rfl⟩
  simp [Array.mapIdx_reverse]
end Vector
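To illustrate the `mapIdx`/`mapFinIdx` pair that the lemmas above relate, here is a small sketch (these `example`s are illustrative additions, not part of the original file):

```lean
-- Illustrative sketch: `mapIdx` receives only the index, while `mapFinIdx`
-- additionally receives a proof that the index is in bounds.
example : #v[10, 20, 30].mapIdx (fun i a => i + a) = #v[10, 21, 32] := by decide
example : #v[10, 20, 30].mapFinIdx (fun i a _h => i + a) = #v[10, 21, 32] := by decide
```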

View File

@@ -5,11 +5,7 @@ Authors: Leonardo de Moura
-/
prelude
import Init.Core
import Init.Grind.Tactics
attribute [grind_cases] And Prod False Empty True Unit Exists
namespace Lean.Grind.Eager
attribute [scoped grind_cases] Or
end Lean.Grind.Eager
attribute [grind cases eager] And Prod False Empty True Unit Exists
attribute [grind cases] Or

View File

@@ -6,19 +6,26 @@ Authors: Leonardo de Moura
prelude
import Init.Tactics
namespace Lean.Parser.Attr
namespace Lean.Parser
/--
Reset all `grind` attributes. This command is intended for testing purposes only and should not be used in applications.
-/
syntax (name := resetGrindAttrs) "%reset_grind_attrs" : command
syntax grindEq := "="
syntax grindEqBoth := atomic("_" "=" "_")
syntax grindEqRhs := atomic("=" "_")
syntax grindBwd := "←"
syntax grindFwd := "→"
syntax grindThmMod := grindEqBoth <|> grindEqRhs <|> grindEq <|> grindBwd <|> grindFwd
syntax (name := grind) "grind" (grindThmMod)? : attr
end Lean.Parser.Attr
namespace Attr
syntax grindEq := "= "
syntax grindEqBoth := atomic("_" "=" "_ ")
syntax grindEqRhs := atomic("=" "_ ")
syntax grindEqBwd := atomic("←" "= ")
syntax grindBwd := "← "
syntax grindFwd := "→ "
syntax grindUsr := &"usr "
syntax grindCases := &"cases "
syntax grindCasesEager := atomic(&"cases" &"eager ")
syntax grindMod := grindEqBoth <|> grindEqRhs <|> grindEq <|> grindEqBwd <|> grindBwd <|> grindFwd <|> grindUsr <|> grindCasesEager <|> grindCases
syntax (name := grind) "grind" (grindMod)? : attr
end Attr
end Lean.Parser
namespace Lean.Grind
/--
@@ -26,6 +33,8 @@ The configuration for `grind`.
Passed to `grind` using, for example, the `grind (config := { matchEqs := true })` syntax.
-/
structure Config where
/-- If `trace` is `true`, `grind` records used E-matching theorems and case-splits. -/
trace : Bool := false
/-- Maximum number of case-splits in a proof search branch. It does not include splits performed during normalization. -/
splits : Nat := 8
/-- Maximum number of E-matching (aka heuristic theorem instantiation) rounds before each case split. -/
@@ -45,8 +54,8 @@ structure Config where
splitIte : Bool := true
/--
If `splitIndPred` is `true`, `grind` performs case-splitting on inductive predicates.
Otherwise, it performs case-splitting only on types marked with `[grind_split]` attribute. -/
splitIndPred : Bool := true
Otherwise, it performs case-splitting only on types marked with `[grind cases]` attribute. -/
splitIndPred : Bool := false
/-- By default, `grind` halts as soon as it encounters a sub-goal where no further progress can be made. -/
failures : Nat := 1
/-- Maximum number of heartbeats (in thousands) the canonicalizer can spend per definitional equality test. -/
@@ -64,7 +73,7 @@ namespace Lean.Parser.Tactic
-/
syntax grindErase := "-" ident
syntax grindLemma := (Attr.grindThmMod)? ident
syntax grindLemma := (Attr.grindMod)? ident
syntax grindParam := grindErase <|> grindLemma
syntax (name := grind)
@@ -72,4 +81,10 @@ syntax (name := grind)
(" [" withoutPosition(grindParam,*) "]")?
("on_failure " term)? : tactic
syntax (name := grindTrace)
"grind?" optConfig (&" only")?
(" [" withoutPosition(grindParam,*) "]")?
("on_failure " term)? : tactic
end Lean.Parser.Tactic
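The modifiers and parameters declared above can be exercised roughly as follows. This is a hedged usage sketch based on the syntax in this diff, not an excerpt from the test suite; the chosen lemma names are placeholders and the proof may need adjustment:

```lean
-- Hypothetical usage of the attribute and tactic syntax declared above.
attribute [grind =] Nat.add_zero   -- grindEq: use the equation as a rewrite
attribute [grind cases] Or         -- grindCases: case-split on this inductive

example (a b : Nat) (h : a = b ∨ b = a) : a + 0 = b := by
  grind [Nat.add_zero]
```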

View File

@@ -12,15 +12,17 @@ namespace Lean.Grind
def nestedProof (p : Prop) {h : p} : p := h
/--
Gadget for marking terms that should not be normalized by `grind`'s simplifier.
`grind` uses a simproc to implement this feature.
Gadget for marking `match`-expressions that should not be reduced by the `grind` simplifier, but the discriminants should be normalized.
We use it when adding instances of `match`-equations to prevent them from being simplified to true.
-/
def doNotSimp {α : Sort u} (a : α) : α := a
def simpMatchDiscrsOnly {α : Sort u} (a : α) : α := a
/-- Gadget for representing offsets `t+k` in patterns. -/
def offset (a b : Nat) : Nat := a + b
/-- Gadget for representing `a = b` in patterns for backward propagation. -/
def eqBwdPattern (a b : α) : Prop := a = b
/--
Gadget for annotating the equalities in the conclusions of `match`-equations.
`_origin` is the term used to instantiate the `match`-equation using E-matching.
@@ -28,6 +30,14 @@ When `EqMatch a b origin` is `True`, we mark `origin` as a resolved case-split.
-/
def EqMatch (a b : α) {_origin : α} : Prop := a = b
/--
Gadget for annotating conditions of `match` equational lemmas.
We use this annotation for two different reasons:
- We don't want to normalize them.
- We have a propagator for them.
-/
def MatchCond (p : Prop) : Prop := p
theorem nestedProof_congr (p q : Prop) (h : p = q) (hp : p) (hq : q) : HEq (@nestedProof p hp) (@nestedProof q hq) := by
subst h; apply HEq.refl

View File

@@ -5,4 +5,5 @@ Authors: Joachim Breitner
-/
prelude
import Init.Internal.Order.Basic
import Init.Internal.Order.Lemmas
import Init.Internal.Order.Tactic

View File

@@ -59,7 +59,7 @@ end PartialOrder
section CCPO
/--
A chain-complete partial order (CCPO) is a partial order where every chain a least upper bound.
A chain-complete partial order (CCPO) is a partial order where every chain has a least upper bound.
This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
@@ -104,7 +104,7 @@ variable {α : Sort u} [PartialOrder α]
variable {β : Sort v} [PartialOrder β]
/--
A function is monotone if if it maps related elements to releated elements.
A function is monotone if it maps related elements to related elements.
This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
@@ -401,6 +401,7 @@ theorem monotone_letFun
(hmono : ∀ y, monotone (fun x => k x y)) :
monotone fun (x : α) => letFun v (k x) := hmono v
@[partial_fixpoint_monotone]
theorem monotone_ite
{α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
(c : Prop) [Decidable c]
@@ -411,6 +412,7 @@ theorem monotone_ite
· apply hmono₁
· apply hmono₂
@[partial_fixpoint_monotone]
theorem monotone_dite
{α : Sort u} {β : Sort v} [PartialOrder α] [PartialOrder β]
(c : Prop) [Decidable c]
@@ -440,38 +442,41 @@ instance [PartialOrder α] [PartialOrder β] : PartialOrder (α ×' β) where
dsimp at *
rw [rel_antisymm ha.1 hb.1, rel_antisymm ha.2 hb.2]
theorem monotone_pprod [PartialOrder α] [PartialOrder β] [PartialOrder γ]
@[partial_fixpoint_monotone]
theorem PProd.monotone_mk [PartialOrder α] [PartialOrder β] [PartialOrder γ]
{f : γ → α} {g : γ → β} (hf : monotone f) (hg : monotone g) :
monotone (fun x => PProd.mk (f x) (g x)) :=
fun _ _ h12 => ⟨hf _ _ h12, hg _ _ h12⟩
theorem monotone_pprod_fst [PartialOrder α] [PartialOrder β] [PartialOrder γ]
@[partial_fixpoint_monotone]
theorem PProd.monotone_fst [PartialOrder α] [PartialOrder β] [PartialOrder γ]
{f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).1) :=
fun _ _ h12 => (hf _ _ h12).1
theorem monotone_pprod_snd [PartialOrder α] [PartialOrder β] [PartialOrder γ]
@[partial_fixpoint_monotone]
theorem PProd.monotone_snd [PartialOrder α] [PartialOrder β] [PartialOrder γ]
{f : γ → α ×' β} (hf : monotone f) : monotone (fun x => (f x).2) :=
fun _ _ h12 => (hf _ _ h12).2
def chain_pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) : α → Prop := fun a => ∃ b, c ⟨a, b⟩
def chain_pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) : β → Prop := fun b => ∃ a, c ⟨a, b⟩
def PProd.chain.fst [CCPO α] [CCPO β] (c : α ×' β → Prop) : α → Prop := fun a => ∃ b, c ⟨a, b⟩
def PProd.chain.snd [CCPO α] [CCPO β] (c : α ×' β → Prop) : β → Prop := fun b => ∃ a, c ⟨a, b⟩
theorem chain.pprod_fst [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
chain (chain_pprod_fst c) := by
theorem PProd.chain.chain_fst [CCPO α] [CCPO β] {c : α ×' β → Prop} (hchain : chain c) :
chain (chain.fst c) := by
intro a₁ a₂ ⟨b₁, h₁⟩ ⟨b₂, h₂⟩
cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
case inl h => left; exact h.1
case inr h => right; exact h.1
theorem chain.pprod_snd [CCPO α] [CCPO β] (c : α ×' β → Prop) (hchain : chain c) :
chain (chain_pprod_snd c) := by
theorem PProd.chain.chain_snd [CCPO α] [CCPO β] {c : α ×' β → Prop} (hchain : chain c) :
chain (chain.snd c) := by
intro b₁ b₂ ⟨a₁, h₁⟩ ⟨a₂, h₂⟩
cases hchain ⟨a₁, b₁⟩ ⟨a₂, b₂⟩ h₁ h₂
case inl h => left; exact h.2
case inr h => right; exact h.2
instance [CCPO α] [CCPO β] : CCPO (α ×' β) where
csup c := ⟨CCPO.csup (chain_pprod_fst c), CCPO.csup (chain_pprod_snd c)⟩
instance instCCPOPProd [CCPO α] [CCPO β] : CCPO (α ×' β) where
csup c := ⟨CCPO.csup (PProd.chain.fst c), CCPO.csup (PProd.chain.snd c)⟩
csup_spec := by
intro ⟨a, b⟩ c hchain
dsimp
@@ -480,32 +485,32 @@ instance [CCPO α] [CCPO β] : CCPO (α ×' β) where
intro ⟨h₁, h₂⟩ ⟨a', b'⟩ cab
constructor <;> dsimp at *
· apply rel_trans ?_ h₁
apply le_csup hchain.pprod_fst
apply le_csup (PProd.chain.chain_fst hchain)
exact ⟨b', cab⟩
· apply rel_trans ?_ h₂
apply le_csup hchain.pprod_snd
apply le_csup (PProd.chain.chain_snd hchain)
exact ⟨a', cab⟩
next =>
intro h
constructor <;> dsimp
· apply csup_le hchain.pprod_fst
· apply csup_le (PProd.chain.chain_fst hchain)
intro a' ⟨b', hcab⟩
apply (h _ hcab).1
· apply csup_le hchain.pprod_snd
· apply csup_le (PProd.chain.chain_snd hchain)
intro b' ⟨a', hcab⟩
apply (h _ hcab).2
theorem admissible_pprod_fst {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : α → Prop)
(hadm : admissible P) : admissible (fun (x : α ×' β) => P x.1) := by
intro c hchain h
apply hadm _ hchain.pprod_fst
apply hadm _ (PProd.chain.chain_fst hchain)
intro x ⟨y, hxy⟩
apply h ⟨x, y⟩ hxy
theorem admissible_pprod_snd {α : Sort u} {β : Sort v} [CCPO α] [CCPO β] (P : β → Prop)
(hadm : admissible P) : admissible (fun (x : α ×' β) => P x.2) := by
intro c hchain h
apply hadm _ hchain.pprod_snd
apply hadm _ (PProd.chain.chain_snd hchain)
intro y ⟨x, hxy⟩
apply h ⟨x, y⟩ hxy
@@ -609,6 +614,7 @@ class MonoBind (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] wh
bind_mono_left {a₁ a₂ : m α} {f : α → m b} (h : a₁ ⊑ a₂) : a₁ >>= f ⊑ a₂ >>= f
bind_mono_right {a : m α} {f₁ f₂ : α → m b} (h : ∀ x, f₁ x ⊑ f₂ x) : a >>= f₁ ⊑ a >>= f₂
@[partial_fixpoint_monotone]
theorem monotone_bind
(m : Type u Type v) [Bind m] [ α, PartialOrder (m α)] [MonoBind m]
{α β : Type u}
@@ -634,7 +640,7 @@ noncomputable instance : MonoBind Option where
· exact FlatOrder.rel.refl
· exact h _
theorem admissible_eq_some (P : Prop) (y : α) :
theorem Option.admissible_eq_some (P : Prop) (y : α) :
admissible (fun (x : Option α) => x = some y → P) := by
apply admissible_flatOrder; simp
@@ -677,7 +683,7 @@ theorem find_spec : ∀ n m, find P n = some m → n ≤ m ∧ P m := by
refine fix_induct (motive := fun (f : Nat → Option Nat) => ∀ n m, f n = some m → n ≤ m ∧ P m) _ ?hadm ?hstep
case hadm =>
-- apply admissible_pi_apply does not work well, hard to infer everything
exact admissible_pi_apply _ (fun n => admissible_pi _ (fun m => admissible_eq_some _ m))
exact admissible_pi_apply _ (fun n => admissible_pi _ (fun m => Option.admissible_eq_some _ m))
case hstep =>
intro f ih n m heq
simp only [findF] at heq

View File

@@ -0,0 +1,685 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Init.Data.List.Control
import Init.Data.Array.Basic
import Init.Internal.Order.Basic
/-!
This file contains monotonicity lemmas for higher-order monadic operations (e.g. `mapM`) in the
standard library. This allows recursive definitions using `partial_fixpoint` to use nested
recursion.
Ideally, every higher-order monadic function in the standard library has a lemma here. At the time
of writing, this file covers functions from
* Init/Data/Option/Basic.lean
* Init/Data/List/Control.lean
* Init/Data/Array/Basic.lean
in the order of their appearance there. No automation to check the exhaustiveness exists yet.
The lemma statements are written manually, but follow a predictable scheme, and could be automated.
Likewise, the proofs are written very naively. Most of them could be handled by a tactic like
`monotonicity` (extended to make use of local hypotheses).
-/
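As a hedged illustration of the nested recursion this file enables (the `Tree` type and `incr` function below are hypothetical, not from the library): a function whose recursive call sits under `List.mapM` can be accepted by `partial_fixpoint` precisely because `mapM` is registered as monotone in this file.

```lean
-- Hypothetical example: recursion nested inside `List.mapM`.
inductive Tree where
  | node : Nat → List Tree → Tree

def Tree.incr (t : Tree) : Option Tree :=
  match t with
  | .node n ts => do
    let ts' ← ts.mapM Tree.incr   -- nested recursive call under `mapM`
    return .node (n + 1) ts'
partial_fixpoint
```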
namespace Lean.Order
open Lean.Order
variable {m : Type u → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m]
variable {α β : Type u}
variable {γ : Type w} [PartialOrder γ]
@[partial_fixpoint_monotone]
theorem Functor.monotone_map [LawfulMonad m] (f : γ → m α) (g : α → β) (hmono : monotone f) :
monotone (fun x => g <$> f x) := by
simp only [← LawfulMonad.bind_pure_comp]
apply monotone_bind _ _ _ hmono
apply monotone_const
@[partial_fixpoint_monotone]
theorem Seq.monotone_seq [LawfulMonad m] (f : γ → m α) (g : γ → m (α → β))
(hmono₁ : monotone g) (hmono₂ : monotone f) :
monotone (fun x => g x <*> f x) := by
simp only [← LawfulMonad.bind_map]
apply monotone_bind
· assumption
· apply monotone_of_monotone_apply
intro y
apply Functor.monotone_map
assumption
@[partial_fixpoint_monotone]
theorem SeqLeft.monotone_seqLeft [LawfulMonad m] (f : γ → m α) (g : γ → m β)
(hmono₁ : monotone g) (hmono₂ : monotone f) :
monotone (fun x => g x <* f x) := by
simp only [seqLeft_eq]
apply Seq.monotone_seq
· apply Functor.monotone_map
assumption
· assumption
@[partial_fixpoint_monotone]
theorem SeqRight.monotone_seqRight [LawfulMonad m] (f : γ → m α) (g : γ → m β)
(hmono₁ : monotone g) (hmono₂ : monotone f) :
monotone (fun x => g x *> f x) := by
simp only [seqRight_eq]
apply Seq.monotone_seq
· apply Functor.monotone_map
assumption
· assumption
namespace Option
@[partial_fixpoint_monotone]
theorem monotone_bindM (f : γ → α → m (Option β)) (xs : Option α) (hmono : monotone f) :
monotone (fun x => xs.bindM (f x)) := by
cases xs with
| none => apply monotone_const
| some x =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_mapM (f : γ → α → m β) (xs : Option α) (hmono : monotone f) :
monotone (fun x => xs.mapM (f x)) := by
cases xs with
| none => apply monotone_const
| some x =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_elimM (a : γ → m (Option α)) (n : γ → m β) (s : γ → α → m β)
(hmono₁ : monotone a) (hmono₂ : monotone n) (hmono₃ : monotone s) :
monotone (fun x => Option.elimM (a x) (n x) (s x)) := by
apply monotone_bind
· apply hmono₁
· apply monotone_of_monotone_apply
intro o
cases o
case none => apply hmono₂
case some y =>
dsimp only [Option.elim]
apply monotone_apply
apply hmono₃
omit [MonoBind m] in
@[partial_fixpoint_monotone]
theorem monotone_getDM (o : Option α) (y : γ → m α) (hmono : monotone y) :
monotone (fun x => o.getDM (y x)) := by
cases o
· apply hmono
· apply monotone_const
end Option
namespace List
@[partial_fixpoint_monotone]
theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.mapM (f x)) := by
cases xs with
| nil => apply monotone_const
| cons _ xs =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
dsimp
generalize [y] = ys
induction xs generalizing ys with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
apply ih
@[partial_fixpoint_monotone]
theorem monotone_forM (f : γ → α → m PUnit) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.forM (f x)) := by
induction xs with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
apply ih
@[partial_fixpoint_monotone]
theorem monotone_filterAuxM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs acc : List α) (hmono : monotone f) :
monotone (fun x => xs.filterAuxM (f x) acc) := by
induction xs generalizing acc with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
apply ih
@[partial_fixpoint_monotone]
theorem monotone_filterM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.filterM (f x)) := by
apply monotone_bind
· exact monotone_filterAuxM f xs [] hmono
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_filterRevM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.filterRevM (f x)) := by
exact monotone_filterAuxM f xs.reverse [] hmono
@[partial_fixpoint_monotone]
theorem monotone_foldlM
(f : γ → β → α → m β) (init : β) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.foldlM (f x) (init := init)) := by
induction xs generalizing init with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
apply ih
@[partial_fixpoint_monotone]
theorem monotone_foldrM
(f : γ → α → β → m β) (init : β) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.foldrM (f x) (init := init)) := by
apply monotone_foldlM
apply monotone_of_monotone_apply
intro s
apply monotone_of_monotone_apply
intro a
apply monotone_apply (a := s)
apply monotone_apply (a := a)
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_anyM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.anyM (f x)) := by
induction xs with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y
· apply ih
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_allM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.allM (f x)) := by
induction xs with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y
· apply monotone_const
· apply ih
@[partial_fixpoint_monotone]
theorem monotone_findM?
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.findM? (f x)) := by
induction xs with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y
· apply ih
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_findSomeM?
(f : γ → α → m (Option β)) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.findSomeM? (f x)) := by
induction xs with
| nil => apply monotone_const
| cons _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y
· apply ih
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_forIn'_loop {α : Type uu}
(as : List α) (f : γ → (a : α) → a ∈ as → β → m (ForInStep β)) (as' : List α) (b : β)
(p : Exists (fun bs => bs ++ as' = as)) (hmono : monotone f) :
monotone (fun x => List.forIn'.loop as (f x) as' b p) := by
induction as' generalizing b with
| nil => apply monotone_const
| cons a as' ih =>
apply monotone_bind
· apply monotone_apply
apply monotone_apply
apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y with
| done => apply monotone_const
| yield => apply ih
@[partial_fixpoint_monotone]
theorem monotone_forIn' {α : Type uu}
(as : List α) (init : β) (f : γ → (a : α) → a ∈ as → β → m (ForInStep β)) (hmono : monotone f) :
monotone (fun x => forIn' as init (f x)) := by
apply monotone_forIn'_loop
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_forIn {α : Type uu}
(as : List α) (init : β) (f : γ → (a : α) → β → m (ForInStep β)) (hmono : monotone f) :
monotone (fun x => forIn as init (f x)) := by
apply monotone_forIn' as init _
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro p
apply monotone_apply (a := y)
apply hmono
end List
namespace Array
@[partial_fixpoint_monotone]
theorem monotone_modifyM (a : Array α) (i : Nat) (f : γ → α → m α) (hmono : monotone f) :
monotone (fun x => a.modifyM i (f x)) := by
unfold Array.modifyM
split
· apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_forIn'_loop {α : Type uu}
(as : Array α) (f : γ → (a : α) → a ∈ as → β → m (ForInStep β)) (i : Nat) (h : i ≤ as.size)
(b : β) (hmono : monotone f) :
monotone (fun x => Array.forIn'.loop as (f x) i h b) := by
induction i, h, b using Array.forIn'.loop.induct with
| case1 => apply monotone_const
| case2 _ _ _ _ _ _ _ ih =>
apply monotone_bind
· apply monotone_apply
apply monotone_apply
apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y with
| done => apply monotone_const
| yield => apply ih
@[partial_fixpoint_monotone]
theorem monotone_forIn' {α : Type uu}
(as : Array α) (init : β) (f : γ → (a : α) → a ∈ as → β → m (ForInStep β)) (hmono : monotone f) :
monotone (fun x => forIn' as init (f x)) := by
apply monotone_forIn'_loop
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_forIn {α : Type uu}
(as : Array α) (init : β) (f : γ → (a : α) → β → m (ForInStep β)) (hmono : monotone f) :
monotone (fun x => forIn as init (f x)) := by
apply monotone_forIn' as init _
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro p
apply monotone_apply (a := y)
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_foldlM_loop
(f : γ → β → α → m β) (xs : Array α) (stop : Nat) (h : stop ≤ xs.size) (i j : Nat) (b : β)
(hmono : monotone f) : monotone (fun x => Array.foldlM.loop (f x) xs stop h i j b) := by
induction i, j, b using Array.foldlM.loop.induct (h := h) with
| case1 =>
simp only [Array.foldlM.loop, reduceDIte, *]
apply monotone_const
| case2 _ _ _ _ _ ih =>
unfold Array.foldlM.loop
simp only [reduceDIte, *]
apply monotone_bind
· apply monotone_apply
apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
apply ih
| case3 =>
simp only [Array.foldlM.loop, reduceDIte, *]
apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_foldlM
(f : γ → β → α → m β) (init : β) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.foldlM (f x) init start stop) := by
unfold Array.foldlM
split <;> apply monotone_foldlM_loop (hmono := hmono)
@[partial_fixpoint_monotone]
theorem monotone_foldrM_fold
(f : γ → α → β → m β) (xs : Array α) (stop i : Nat) (h : i ≤ xs.size) (b : β)
(hmono : monotone f) : monotone (fun x => Array.foldrM.fold (f x) xs stop i h b) := by
induction i, h, b using Array.foldrM.fold.induct (stop := stop) with
| case1 =>
unfold Array.foldrM.fold
simp only [reduceIte, *]
apply monotone_const
| case2 =>
unfold Array.foldrM.fold
simp only [reduceIte, *]
apply monotone_const
| case3 _ _ _ _ _ _ ih =>
unfold Array.foldrM.fold
simp only [reduceCtorEq, reduceIte, *]
apply monotone_bind
· apply monotone_apply
apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
apply ih
@[partial_fixpoint_monotone]
theorem monotone_foldrM
(f : γ → α → β → m β) (init : β) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.foldrM (f x) init start stop) := by
unfold Array.foldrM
split
· split
· apply monotone_foldrM_fold (hmono := hmono)
· apply monotone_const
· split
· apply monotone_foldrM_fold (hmono := hmono)
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_mapM (xs : Array α) (f : γ → α → m β) (hmono : monotone f) :
monotone (fun x => xs.mapM (f x)) := by
suffices ∀ i r, monotone (fun x => Array.mapM.map (f x) xs i r) by apply this
intros i r
induction i, r using Array.mapM.map.induct xs
case case1 ih =>
unfold Array.mapM.map
simp only [reduceDIte, *]
apply monotone_bind
· apply monotone_apply
apply hmono
· intro y
apply monotone_of_monotone_apply
apply ih
case case2 =>
unfold Array.mapM.map
simp only [reduceDIte, *]
apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_mapFinIdxM (xs : Array α) (f : γ → (i : Nat) → α → i < xs.size → m β)
(hmono : monotone f) : monotone (fun x => xs.mapFinIdxM (f x)) := by
suffices ∀ i j (h : i + j = xs.size) r, monotone (fun x => Array.mapFinIdxM.map xs (f x) i j h r) by apply this
intros i j h r
induction i, j, h, r using Array.mapFinIdxM.map.induct xs
case case1 =>
apply monotone_const
case case2 ih =>
apply monotone_bind
· dsimp
apply monotone_apply
apply monotone_apply
apply monotone_apply
apply hmono
· intro y
apply monotone_of_monotone_apply
apply ih
@[partial_fixpoint_monotone]
theorem monotone_findSomeM?
(f : γ → α → m (Option β)) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.findSomeM? (f x)) := by
unfold Array.findSomeM?
apply monotone_bind
· apply monotone_forIn
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro r
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_findM?
{m : Type → Type} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.findM? (f x)) := by
unfold Array.findM?
apply monotone_bind
· apply monotone_forIn
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro r
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_findIdxM?
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type u}
(f : γ → α → m Bool) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.findIdxM? (f x)) := by
unfold Array.findIdxM?
apply monotone_bind
· apply monotone_forIn
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro r
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_anyM_loop
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type u}
(f : γ → α → m Bool) (xs : Array α) (stop : Nat) (h : stop ≤ xs.size) (j : Nat)
(hmono : monotone f) : monotone (fun x => Array.anyM.loop (f x) xs stop h j) := by
induction j using Array.anyM.loop.induct (h := h) with
| case2 =>
unfold Array.anyM.loop
simp only [reduceDIte, *]
apply monotone_const
| case1 _ _ _ ih =>
unfold Array.anyM.loop
simp only [reduceDIte, *]
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
split
· apply monotone_const
· apply ih
@[partial_fixpoint_monotone]
theorem monotone_anyM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type u}
(f : γ → α → m Bool) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.anyM (f x) start stop) := by
unfold Array.anyM
split
· apply monotone_anyM_loop
apply hmono
· apply monotone_anyM_loop
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_allM
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type u}
(f : γ → α → m Bool) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.allM (f x) start stop) := by
unfold Array.allM
apply monotone_bind
· apply monotone_anyM
apply monotone_of_monotone_apply
intro y
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_findSomeRevM?
(f : γ → α → m (Option β)) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.findSomeRevM? (f x)) := by
unfold Array.findSomeRevM?
suffices ∀ i (h : i ≤ xs.size), monotone (fun x => Array.findSomeRevM?.find (f x) xs i h) by apply this
intros i h
induction i, h using Array.findSomeRevM?.find.induct with
| case1 =>
unfold Array.findSomeRevM?.find
apply monotone_const
| case2 _ _ _ _ ih =>
unfold Array.findSomeRevM?.find
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_of_monotone_apply
intro y
cases y with
| none => apply ih
| some y => apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_findRevM?
{m : Type → Type v} [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] {α : Type}
(f : γ → α → m Bool) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.findRevM? (f x)) := by
unfold Array.findRevM?
apply monotone_findSomeRevM?
apply monotone_of_monotone_apply
intro y
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_array_forM
(f : γ → α → m PUnit) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.forM (f x) start stop) := by
unfold Array.forM
apply monotone_foldlM
apply monotone_of_monotone_apply
intro y
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_array_forRevM
(f : γ → α → m PUnit) (xs : Array α) (start stop : Nat) (hmono : monotone f) :
monotone (fun x => xs.forRevM (f x) start stop) := by
unfold Array.forRevM
apply monotone_foldrM
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro z
apply monotone_apply
apply hmono
@[partial_fixpoint_monotone]
theorem monotone_flatMapM
(f : γ → α → m (Array β)) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.flatMapM (f x)) := by
unfold Array.flatMapM
apply monotone_foldlM
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro z
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
@[partial_fixpoint_monotone]
theorem monotone_array_filterMapM
(f : γ → α → m (Option β)) (xs : Array α) (hmono : monotone f) :
monotone (fun x => xs.filterMapM (f x)) := by
unfold Array.filterMapM
apply monotone_foldlM
apply monotone_of_monotone_apply
intro y
apply monotone_of_monotone_apply
intro z
apply monotone_bind
· apply monotone_apply
apply hmono
· apply monotone_const
end Array
end Lean.Order
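Not part of the diff, but for orientation: the `@[partial_fixpoint_monotone]` lemmas above are what allow a `partial_fixpoint` definition to recurse through these `Array` combinators. A hypothetical sketch (names invented):

```lean
-- Hypothetical sketch: recursion through `Array.allM` in the `Option` monad.
-- `monotone_allM` above is what makes `partial_fixpoint` accept this.
def allBelow (xs : Array Nat) (fuel : Nat) : Option Bool :=
  xs.allM fun x =>
    if x < fuel then some true
    else allBelow xs (fuel + 1)
partial_fixpoint
```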

View File

@@ -93,7 +93,8 @@ def isLetterLike (c : Char) : Bool :=
def isSubScriptAlnum (c : Char) : Bool :=
isNumericSubscript c ||
(0x2090 ≤ c.val && c.val ≤ 0x209c) ||
(0x1d62 ≤ c.val && c.val ≤ 0x1d6a)
(0x1d62 ≤ c.val && c.val ≤ 0x1d6a) ||
c.val == 0x2c7c
@[inline] def isIdFirst (c : Char) : Bool :=
c.isAlpha || c = '_' || isLetterLike c
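For illustration (a hypothetical sketch, assuming `isSubScriptAlnum` is what the parser consults for subscript continuation characters): the added code point `0x2c7c` is U+2C7C, the Latin subscript small letter j, so identifiers subscripted with it should now lex:

```lean
-- After this change, U+2C7C (ⱼ) is a valid identifier subscript (sketch).
def aⱼ : Nat := 1
```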

View File

@@ -109,6 +109,11 @@ structure Config where
to find candidate `simp` theorems. It approximates Lean 3 `simp` behavior.
-/
index : Bool := true
/--
When `true` (default: `true`), `simp` will remove unused let-declarations:
`let x := v; e` simplifies to `e` when `x` does not occur in `e`.
-/
zetaUnused : Bool := true
deriving Inhabited, BEq
end DSimp
@@ -228,6 +233,11 @@ structure Config where
input and output terms are definitionally equal.
-/
implicitDefEqProofs : Bool := true
/--
When `true` (default: `true`), `simp` will remove unused let-declarations:
`let x := v; e` simplifies to `e` when `x` does not occur in `e`.
-/
zetaUnused : Bool := true
deriving Inhabited, BEq
-- Configuration object for `simp_all`
@@ -248,6 +258,7 @@ def neutralConfig : Simp.Config := {
autoUnfold := false
ground := false
zetaDelta := false
zetaUnused := false
}
structure NormCastConfig extends Simp.Config where
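A minimal sketch of the behavior the new `zetaUnused` option describes (hypothetical example):

```lean
-- With `zetaUnused := true` (the default), `simp` drops the unused binding
-- and the goal reduces to `3 = 3` (sketch):
example : (let x := 42; 3) = 3 := by simp
-- Setting `zetaUnused := false` in the config would keep the `let`.
```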

View File

@@ -67,9 +67,7 @@ abbrev leading (xs : Coeffs) : Int := IntList.leading xs
abbrev map (f : Int → Int) (xs : Coeffs) : Coeffs := List.map f xs
/-- Shim for `.enum.find?`. -/
abbrev findIdx? (f : Int → Bool) (xs : Coeffs) : Option Nat :=
-- List.findIdx? f xs
-- We could avoid `Batteries.Data.List.Basic` by using the less efficient:
xs.enum.find? (f ·.2) |>.map (·.1)
List.findIdx? f xs
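A quick check of the behavior the shim now delegates to: `List.findIdx?` returns the index of the first element satisfying the predicate.

```lean
#eval [3, 5, 8].findIdx? (· % 2 == 0)   -- some 2
#eval ([] : List Int).findIdx? (· == 0) -- none
```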
/-- Shim for `IntList.bmod`. -/
abbrev bmod (x : Coeffs) (m : Nat) : Coeffs := IntList.bmod x m
/-- Shim for `IntList.bmod_dot_sub_dot_bmod`. -/

View File

@@ -28,7 +28,7 @@ namespace LinearCombo
instance : ToString LinearCombo where
toString lc :=
s!"{lc.const}{String.join <| lc.coeffs.toList.enum.map fun ⟨i, c⟩ => s!" + {c} * x{i+1}"}"
s!"{lc.const}{String.join <| lc.coeffs.toList.zipIdx.map fun ⟨c, i⟩ => s!" + {c} * x{i+1}"}"
instance : Inhabited LinearCombo := {const := 1}
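For reference, the renamed `zipIdx` pairs each element with its index, element first, with an optional starting offset; that is the tuple-order change from `⟨i, c⟩` to `⟨c, i⟩` visible in this hunk:

```lean
#eval [10, 20, 30].zipIdx     -- [(10, 0), (20, 1), (30, 2)]
#eval [10, 20, 30].zipIdx 5   -- [(10, 5), (20, 6), (30, 7)]
```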

View File

@@ -3705,8 +3705,7 @@ inductive Syntax where
/-- Node in the syntax tree.
The `info` field is used by the delaborator to store the position of the
subexpression corresponding to this node. The parser sets the `info` field
to `none`.
subexpression corresponding to this node.
The parser sets the `info` field to `none`, with position retrieval continuing recursively.
Nodes created by quotations use the result from `SourceInfo.fromRef` so that they are marked
as synthetic even when the leading/trailing token is not.

View File

@@ -50,18 +50,49 @@ where go env
| _ => env
def addDecl (decl : Declaration) : CoreM Unit := do
profileitM Exception "type checking" (← getOptions) do
let mut env ← withTraceNode `Kernel (fun _ => return m!"typechecking declaration") do
if !(← MonadLog.hasErrors) && decl.hasSorry then
logWarning "declaration uses 'sorry'"
(← getEnv).addDeclAux (← getOptions) decl (← read).cancelTk? |> ofExceptKernelException
let mut env ← getEnv
-- register namespaces for newly added constants; this used to be done by the kernel itself
-- but that is incompatible with moving it to a separate task
env := decl.getNames.foldl registerNamePrefixes env
if let .inductDecl _ _ types _ := decl then
env := types.foldl (registerNamePrefixes · <| ·.name ++ `rec) env
-- register namespaces for newly added constants; this used to be done by the kernel itself
-- but that is incompatible with moving it to a separate task
env := decl.getNames.foldl registerNamePrefixes env
if let .inductDecl _ _ types _ := decl then
env := types.foldl (registerNamePrefixes · <| ·.name ++ `rec) env
if !Elab.async.get (← getOptions) then
setEnv env
return (← doAdd)
-- convert `Declaration` to `ConstantInfo` to use as a preliminary value in the environment until
-- kernel checking has finished; not all cases are supported yet
let (name, info, kind) ← match decl with
| .thmDecl thm => pure (thm.name, .thmInfo thm, .thm)
| .defnDecl defn => pure (defn.name, .defnInfo defn, .defn)
| .mutualDefnDecl [defn] => pure (defn.name, .defnInfo defn, .defn)
| _ => return (← doAdd)
-- no environment extension changes to report after kernel checking; ensures we do not
-- accidentally wait for this snapshot when querying extension states
let async ← env.addConstAsync (reportExts := false) name kind
-- report preliminary constant info immediately
async.commitConst async.asyncEnv (some info)
setEnv async.mainEnv
let checkAct ← Core.wrapAsyncAsSnapshot fun _ => do
try
setEnv async.asyncEnv
doAdd
async.commitCheckEnv (← getEnv)
finally
async.commitFailure
let t ← BaseIO.mapTask (fun _ => checkAct) env.checked
let endRange? := (← getRef).getTailPos?.map fun pos => ⟨pos, pos⟩
Core.logSnapshotTask { range? := endRange?, task := t }
where doAdd := do
profileitM Exception "type checking" (← getOptions) do
withTraceNode `Kernel (fun _ => return m!"typechecking declarations {decl.getNames}") do
if !(← MonadLog.hasErrors) && decl.hasSorry then
logWarning m!"declaration uses 'sorry'"
let env ← (← getEnv).addDeclAux (← getOptions) decl (← read).cancelTk?
|> ofExceptKernelException
setEnv env
def addAndCompile (decl : Declaration) : CoreM Unit := do
addDecl decl

View File

@@ -33,6 +33,7 @@ def shouldGenerateCode (declName : Name) : CoreM Bool := do
let some info ← getDeclInfo? declName | return false
unless info.hasValue (allowOpaque := true) do return false
if hasMacroInlineAttribute env declName then return false
if (getImplementedBy? env declName).isSome then return false
if (← Meta.isMatcher declName) then return false
if isCasesOnRecursor env declName then return false
-- TODO: check if type class instance

View File

@@ -72,21 +72,23 @@ The type contains only `→` and constants.
-/
partial def toMonoType (type : Expr) : CoreM Expr := do
let type := type.headBeta
if type.isErased then
return erasedExpr
else if isTypeFormerType type then
return erasedExpr
else match type with
| .const .. => visitApp type #[]
| .app .. => type.withApp visitApp
| .forallE _ d b _ => mkArrow (← toMonoType d) (← toMonoType (b.instantiate1 erasedExpr))
| _ => return erasedExpr
match type with
| .const .. => visitApp type #[]
| .app .. => type.withApp visitApp
| .forallE _ d b _ =>
let monoB ← toMonoType (b.instantiate1 anyExpr)
match monoB with
| .const ``lcErased _ => return erasedExpr
| _ => mkArrow (← toMonoType d) monoB
| .sort _ => return erasedExpr
| _ => return anyExpr
where
visitApp (f : Expr) (args : Array Expr) : CoreM Expr := do
match f with
| .const ``lcErased _ => return erasedExpr
| .const ``lcAny _ => return anyExpr
| .const ``Decidable _ => return mkConst ``Bool
| .const declName us =>
if declName == ``Decidable then
return mkConst ``Bool
if let some info ← hasTrivialStructure? declName then
let ctorType ← getOtherDeclBaseType info.ctorName []
toMonoType (getParamTypes (← instantiateForall ctorType args[:info.numParams]))[info.fieldIdx]!
@@ -96,15 +98,13 @@ where
for arg in args do
let .forallE _ d b _ := type.headBeta | unreachable!
let arg := arg.headBeta
if arg.isErased then
result := mkApp result arg
else if d.isErased || d matches .sort _ then
if d matches .const ``lcErased _ | .sort _ then
result := mkApp result (← toMonoType arg)
else
result := mkApp result erasedExpr
type := b.instantiate1 arg
return result
| _ => return erasedExpr
| _ => return anyExpr
/--
State for the environment extension used to save the LCNF mono phase type for declarations

View File

@@ -81,7 +81,10 @@ def ppLetDecl (letDecl : LetDecl) : M Format := do
return f!"let {letDecl.binderName} := {← ppLetValue letDecl.value}"
def getFunType (ps : Array Param) (type : Expr) : CoreM Expr :=
instantiateForall type (ps.map (mkFVar ·.fvarId))
if type.isErased then
pure type
else
instantiateForall type (ps.map (mkFVar ·.fvarId))
mutual
partial def ppFunDecl (funDecl : FunDecl) : M Format := do

View File

@@ -7,6 +7,7 @@ prelude
import Lean.ProjFns
import Lean.Meta.CtorRecognizer
import Lean.Compiler.BorrowedAnnotation
import Lean.Compiler.CSimpAttr
import Lean.Compiler.LCNF.Types
import Lean.Compiler.LCNF.Bind
import Lean.Compiler.LCNF.InferType
@@ -472,7 +473,7 @@ where
/-- Given `f`, a constant `.const declName us`, convert `args` into `args'`, and return `.const declName us args'` -/
visitAppDefaultConst (f : Expr) (args : Array Expr) : M Arg := do
let .const declName us := f | unreachable!
let .const declName us := CSimp.replaceConstants (← getEnv) f | unreachable!
let args ← args.mapM visitAppArg
letValueToArg <| .const declName us args
@@ -670,7 +671,7 @@ where
visitApp (e : Expr) : M Arg := do
if let some (args, n, t, v, b) := e.letFunAppArgs? then
visitCore <| mkAppN (.letE n t v b (nonDep := true)) args
else if let .const declName _ := e.getAppFn then
else if let .const declName _ := CSimp.replaceConstants (← getEnv) e.getAppFn then
if declName == ``Quot.lift then
visitQuotLift e
else if declName == ``Quot.mk then

View File

@@ -13,6 +13,7 @@ scoped notation:max "◾" => lcErased
namespace LCNF
def erasedExpr := mkConst ``lcErased
def anyExpr := mkConst ``lcAny
def _root_.Lean.Expr.isErased (e : Expr) :=
e.isAppOf ``lcErased

View File

@@ -36,7 +36,7 @@ register_builtin_option Elab.async : Bool := {
descr := "perform elaboration using multiple threads where possible\
\n\
\nThis option defaults to `false` but (when not explicitly set) is overridden to `true` in \
`Lean.Language.Lean.process` as used by the cmdline driver and language server. \
the language server. \
Metaprogramming users driving elaboration directly via e.g. \
`Lean.Elab.Command.elabCommandTopLevel` can opt into asynchronous elaboration by setting \
this option but then are responsible for processing messages and other data not only in the \
@@ -423,7 +423,11 @@ def wrapAsyncAsSnapshot (act : Unit → CoreM Unit) (desc : String := by exact d
IO.FS.withIsolatedStreams (isolateStderr := stderrAsMessages.get (← getOptions)) do
let tid ← IO.getTID
-- reset trace state and message log so as not to report them twice
modify fun st => { st with messages := st.messages.markAllReported, traceState := { tid } }
modify fun st => { st with
messages := st.messages.markAllReported
traceState := { tid }
snapshotTasks := #[]
}
try
withTraceNode `Elab.async (fun _ => return desc) do
act ()
@@ -518,6 +522,10 @@ opaque compileDeclsNew (declNames : List Name) : CoreM Unit
opaque compileDeclsOld (env : Environment) (opt : @& Options) (decls : @& List Name) : Except Kernel.Exception Environment
def compileDecl (decl : Declaration) : CoreM Unit := do
-- don't compile if kernel errored; should be converted into a task dependency when compilation
-- is made async as well
if !decl.getNames.all (← getEnv).constants.contains then
return
let opts ← getOptions
let decls := Compiler.getDeclNamesForCodeGen decl
if compiler.enableNew.get opts then
@@ -533,6 +541,10 @@ def compileDecl (decl : Declaration) : CoreM Unit := do
throwKernelException ex
def compileDecls (decls : List Name) : CoreM Unit := do
-- don't compile if kernel errored; should be converted into a task dependency when compilation
-- is made async as well
if !decls.all (← getEnv).constants.contains then
return
let opts ← getOptions
if compiler.enableNew.get opts then
compileDeclsNew decls

View File

@@ -313,7 +313,11 @@ def wrapAsyncAsSnapshot (act : Unit → CommandElabM Unit)
IO.FS.withIsolatedStreams (isolateStderr := Core.stderrAsMessages.get (← getOptions)) do
let tid ← IO.getTID
-- reset trace state and message log so as not to report them twice
modify fun st => { st with messages := st.messages.markAllReported, traceState := { tid } }
modify fun st => { st with
messages := st.messages.markAllReported
traceState := { tid }
snapshotTasks := #[]
}
try
withTraceNode `Elab.async (fun _ => return desc) do
act ()

View File

@@ -308,6 +308,115 @@ def whnfReducibleLHS? (mvarId : MVarId) : MetaM (Option MVarId) := mvarId.withCo
def tryContradiction (mvarId : MVarId) : MetaM Bool := do
mvarId.contradictionCore { genDiseq := true }
/--
Returns the type of the unfold theorem, as the starting point for calculating the equational
types.
-/
private def unfoldThmType (declName : Name) : MetaM Expr := do
if let some unfoldThm ← getUnfoldEqnFor? declName (nonRec := false) then
let info ← getConstInfo unfoldThm
pure info.type
else
let info ← getConstInfoDefn declName
let us := info.levelParams.map mkLevelParam
lambdaTelescope (cleanupAnnotations := true) info.value fun xs body => do
let type ← mkEq (mkAppN (Lean.mkConst declName us) xs) body
mkForallFVars xs type
private def unfoldLHS (declName : Name) (mvarId : MVarId) : MetaM MVarId := mvarId.withContext do
if let some unfoldThm ← getUnfoldEqnFor? declName (nonRec := false) then
-- Recursive definition: Use unfolding lemma
let mut mvarId := mvarId
let target ← mvarId.getType'
let some (_, lhs, rhs) := target.eq? | throwError "unfoldLHS: Unexpected target {target}"
unless lhs.isAppOf declName do throwError "unfoldLHS: Unexpected LHS {lhs}"
let h := mkAppN (.const unfoldThm lhs.getAppFn.constLevels!) lhs.getAppArgs
let some (_, _, lhsNew) := (← inferType h).eq? | unreachable!
let targetNew ← mkEq lhsNew rhs
let mvarNew ← mkFreshExprSyntheticOpaqueMVar targetNew
mvarId.assign (← mkEqTrans h mvarNew)
return mvarNew.mvarId!
else
-- Else use delta reduction
deltaLHS mvarId
private partial def mkEqnProof (declName : Name) (type : Expr) : MetaM Expr := do
trace[Elab.definition.eqns] "proving: {type}"
withNewMCtxDepth do
let main ← mkFreshExprSyntheticOpaqueMVar type
let (_, mvarId) ← main.mvarId!.intros
-- Try rfl before deltaLHS to avoid `id` checkpoints in the proof, which would make
-- the lemma ineligible for dsimp
unless ← withAtLeastTransparency .all (tryURefl mvarId) do
go (← unfoldLHS declName mvarId)
instantiateMVars main
where
/--
The core loop of proving an equation. Assumes that the function call on the left-hand side has
already been unfolded, using whatever method applies to the current function definition strategy.
Currently used for non-recursive functions and partial fixpoints; maybe later well-founded
recursion and structural recursion can and should use this too.
-/
go (mvarId : MVarId) : MetaM Unit := do
trace[Elab.definition.eqns] "step\n{MessageData.ofGoal mvarId}"
if ← withAtLeastTransparency .all (tryURefl mvarId) then
return ()
else if (← tryContradiction mvarId) then
return ()
else if let some mvarId ← simpMatch? mvarId then
go mvarId
else if let some mvarId ← simpIf? mvarId then
go mvarId
else if let some mvarId ← whnfReducibleLHS? mvarId then
go mvarId
else
let ctx ← Simp.mkContext (config := { dsimp := false })
match (← simpTargetStar mvarId ctx (simprocs := {})).1 with
| TacticResultCNM.closed => return ()
| TacticResultCNM.modified mvarId => go mvarId
| TacticResultCNM.noChange =>
if let some mvarIds ← casesOnStuckLHS? mvarId then
mvarIds.forM go
else if let some mvarIds ← splitTarget? mvarId then
mvarIds.forM go
else
throwError "failed to generate equational theorem for '{declName}'\n{MessageData.ofGoal mvarId}"
/--
Generate equations for `declName`.
This unfolds the function application on the LHS (using an unfold theorem, if present, or else by
delta-reduction), calculates the types for the equational theorems using `mkEqnTypes`, and then
proves them using `mkEqnProof`.
This is currently used for non-recursive functions and for functions defined by partial_fixpoint.
-/
def mkEqns (declName : Name) : MetaM (Array Name) := do
let info ← getConstInfoDefn declName
let us := info.levelParams.map mkLevelParam
withOptions (tactic.hygienic.set · false) do
let target ← unfoldThmType declName
let eqnTypes ← withNewMCtxDepth <|
forallTelescope (cleanupAnnotations := true) target fun xs target => do
let goal ← mkFreshExprSyntheticOpaqueMVar target
withReducible do
mkEqnTypes #[] goal.mvarId!
let mut thmNames := #[]
for h : i in [: eqnTypes.size] do
let type := eqnTypes[i]
trace[Elab.definition.eqns] "eqnType[{i}]: {eqnTypes[i]}"
let name := (Name.str declName eqnThmSuffixBase).appendIndexAfter (i+1)
thmNames := thmNames.push name
let value ← mkEqnProof declName type
let (type, value) ← removeUnusedEqnHypotheses type value
addDecl <| Declaration.thmDecl {
name, type, value
levelParams := info.levelParams
}
return thmNames
/--
Auxiliary method for `mkUnfoldEq`. The structure is based on `mkEqnTypes`.
`mvarId` is the goal to be proved. It is a goal of the form

View File

@@ -9,6 +9,7 @@ import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.Structural
import Lean.Elab.PreDefinition.WF.Main
import Lean.Elab.PreDefinition.MkInhabitant
import Lean.Elab.PreDefinition.PartialFixpoint
namespace Lean.Elab
open Meta
@@ -162,7 +163,8 @@ def ensureFunIndReservedNamesAvailable (preDefs : Array PreDefinition) : MetaM U
Checks consistency of a clique of TerminationHints:
* If not all have a hint, the hints are ignored (log error)
* If one has `structural`, check that all have it, (else throw error)
* None have both `termination_by` and `nontermination` (throw error)
* If one has `structural` or `partialFixpoint`, check that all have it (else throw error)
* A `structural` should not have a `decreasing_by` (else log error)
-/
@@ -171,21 +173,26 @@ def checkTerminationByHints (preDefs : Array PreDefinition) : CoreM Unit := do
let preDefsWithout := preDefs.filter (·.termination.terminationBy?.isNone)
let structural :=
preDefWith.termination.terminationBy? matches some {structural := true, ..}
let partialFixpoint := preDefWith.termination.partialFixpoint?.isSome
for preDef in preDefs do
if let .some termBy := preDef.termination.terminationBy? then
if !structural && !preDefsWithout.isEmpty then
if let .some partialFixpointStx := preDef.termination.partialFixpoint? then
throwErrorAt partialFixpointStx.ref m!"conflicting annotations: this function cannot \
be both terminating and a partial fixpoint"
if !structural && !partialFixpoint && !preDefsWithout.isEmpty then
let m := MessageData.andList (preDefsWithout.toList.map (m!"{·.declName}"))
let doOrDoes := if preDefsWithout.size = 1 then "does" else "do"
logErrorAt termBy.ref (m!"incomplete set of `termination_by` annotations:\n"++
m!"This function is mutually with {m}, which {doOrDoes} not have " ++
m!"This function is mutually recursive with {m}, which {doOrDoes} not have " ++
m!"a `termination_by` clause.\n" ++
m!"The present clause is ignored.")
if structural && ! termBy.structural then
if structural && !termBy.structural then
throwErrorAt termBy.ref (m!"Invalid `termination_by`; this function is mutually " ++
m!"recursive with {preDefWith.declName}, which is marked as `termination_by " ++
m!"structural` so this one also needs to be marked `structural`.")
if ! structural && termBy.structural then
if !structural && termBy.structural then
throwErrorAt termBy.ref (m!"Invalid `termination_by`; this function is mutually " ++
m!"recursive with {preDefWith.declName}, which is not marked as `structural` " ++
m!"so this one cannot be `structural` either.")
@@ -194,20 +201,41 @@ def checkTerminationByHints (preDefs : Array PreDefinition) : CoreM Unit := do
logErrorAt decr.ref (m!"Invalid `decreasing_by`; this function is marked as " ++
m!"structurally recursive, so no explicit termination proof is needed.")
if partialFixpoint && preDef.termination.partialFixpoint?.isNone then
throwErrorAt preDef.ref (m!"Invalid `termination_by`; this function is mutually " ++
m!"recursive with {preDefWith.declName}, which is marked as " ++
m!"`partial_fixpoint` so this one also needs to be marked " ++
m!"`partial_fixpoint`.")
if preDef.termination.partialFixpoint?.isSome then
if let .some decr := preDef.termination.decreasingBy? then
logErrorAt decr.ref (m!"Invalid `decreasing_by`; this function is marked as " ++
m!"nonterminating, so no explicit termination proof is needed.")
if !partialFixpoint then
if let some stx := preDef.termination.partialFixpoint? then
throwErrorAt stx.ref (m!"Invalid `termination_by`; this function is mutually " ++
m!"recursive with {preDefWith.declName}, which is not also marked as " ++
m!"`partial_fixpoint`, so this one cannot be either.")
/--
Elaborates the `TerminationHint` in the clique to `TerminationArguments`
Elaborates the `TerminationHint` in the clique to `TerminationMeasures`
-/
def elabTerminationByHints (preDefs : Array PreDefinition) : TermElabM (Array (Option TerminationArgument)) := do
def elabTerminationByHints (preDefs : Array PreDefinition) : TermElabM (Array (Option TerminationMeasure)) := do
preDefs.mapM fun preDef => do
let arity ← lambdaTelescope preDef.value fun xs _ => pure xs.size
let hints := preDef.termination
hints.terminationBy?.mapM
(TerminationArgument.elab preDef.declName preDef.type arity hints.extraParams ·)
(TerminationMeasure.elab preDef.declName preDef.type arity hints.extraParams ·)
def shouldUseStructural (preDefs : Array PreDefinition) : Bool :=
preDefs.any fun preDef =>
preDef.termination.terminationBy? matches some {structural := true, ..}
def shouldUsepartialFixpoint (preDefs : Array PreDefinition) : Bool :=
preDefs.any fun preDef =>
preDef.termination.partialFixpoint?.isSome
def shouldUseWF (preDefs : Array PreDefinition) : Bool :=
preDefs.any fun preDef =>
preDef.termination.terminationBy? matches some {structural := false, ..} ||
@@ -251,16 +279,18 @@ def addPreDefinitions (preDefs : Array PreDefinition) : TermElabM Unit := withLC
try
checkCodomainsLevel preDefs
checkTerminationByHints preDefs
let termArg?s elabTerminationByHints preDefs
let termMeasures?s elabTerminationByHints preDefs
if shouldUseStructural preDefs then
structuralRecursion preDefs termArg?s
structuralRecursion preDefs termMeasures?s
else if shouldUsepartialFixpoint preDefs then
partialFixpoint preDefs
else if shouldUseWF preDefs then
wfRecursion preDefs termArg?s
wfRecursion preDefs termMeasures?s
else
withRef (preDefs[0]!.ref) <| mapError
(orelseMergeErrors
(structuralRecursion preDefs termArg?s)
(wfRecursion preDefs termArg?s))
(structuralRecursion preDefs termMeasures?s)
(wfRecursion preDefs termMeasures?s))
(fun msg =>
let preDefMsgs := preDefs.toList.map (MessageData.ofExpr $ mkConst ·.declName)
m!"fail to show termination for{indentD (MessageData.joinSep preDefMsgs Format.line)}\nwith errors\n{msg}")
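A hypothetical sketch of the consistency check implemented above, from the user's perspective:

```lean
-- Rejected by `checkTerminationByHints`: a definition cannot carry both
-- a termination proof and a `partial_fixpoint` annotation
-- (sketch; exact error wording aside).
def f (n : Nat) : Option Nat :=
  f (n + 1)
termination_by n
partial_fixpoint
```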

View File

@@ -0,0 +1,92 @@
/-
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura, Joachim Breitner
-/
prelude
import Lean.Elab.PreDefinition.Basic
/-!
This module contains code common to mutual-via-fixedpoint constructions, i.e.
well-founded recursion and partial fixed-points, but independent of the details of the mutual packing.
-/
namespace Lean.Elab.Mutual
open Meta
partial def withCommonTelescope (preDefs : Array PreDefinition) (k : Array Expr → Array Expr → MetaM α) : MetaM α :=
go #[] (preDefs.map (·.value))
where
go (fvars : Array Expr) (vals : Array Expr) : MetaM α := do
if !(vals.all fun val => val.isLambda) then
k fvars vals
else if !(← vals.allM fun val => isDefEq val.bindingDomain! vals[0]!.bindingDomain!) then
k fvars vals
else
withLocalDecl vals[0]!.bindingName! vals[0]!.binderInfo vals[0]!.bindingDomain! fun x =>
go (fvars.push x) (vals.map fun val => val.bindingBody!.instantiate1 x)
def getFixedPrefix (preDefs : Array PreDefinition) : MetaM Nat :=
withCommonTelescope preDefs fun xs vals => do
let resultRef ← IO.mkRef xs.size
for val in vals do
if (← resultRef.get) == 0 then return 0
forEachExpr' val fun e => do
if preDefs.any fun preDef => e.isAppOf preDef.declName then
let args := e.getAppArgs
resultRef.modify (min args.size ·)
for arg in args, x in xs do
if !(← withoutProofIrrelevance <| withReducible <| isDefEq arg x) then
-- We continue searching if e's arguments are not a prefix of `xs`
return true
return false
else
return true
resultRef.get
def addPreDefsFromUnary (preDefs : Array PreDefinition) (preDefsNonrec : Array PreDefinition)
(unaryPreDefNonRec : PreDefinition) : TermElabM Unit := do
/-
We must remove `implemented_by` attributes from the auxiliary application because
this attribute is only relevant for code that is compiled. Moreover, the `[implemented_by <decl>]`
attribute would check whether the `unaryPreDef` type matches with `<decl>`'s type, and produce
an error. See issue #2899
-/
let preDefNonRec := unaryPreDefNonRec.filterAttrs fun attr => attr.name != `implemented_by
let declNames := preDefs.toList.map (·.declName)
-- Do not complain if the user sets @[semireducible], which usually is a noop,
-- we recognize that below and then do not set @[irreducible]
withOptions (allowUnsafeReducibility.set · true) do
if unaryPreDefNonRec.declName = preDefs[0]!.declName then
addNonRec preDefNonRec (applyAttrAfterCompilation := false)
else
withEnableInfoTree false do
addNonRec preDefNonRec (applyAttrAfterCompilation := false)
preDefsNonrec.forM (addNonRec · (applyAttrAfterCompilation := false) (all := declNames))
/--
Cleans the right-hand-sides of the predefinitions, to prepare for inclusion in the EqnInfos:
* Remove RecAppSyntax markers
* Abstracts nested proofs (and for that, add the `_unsafe_rec` definitions)
-/
def cleanPreDefs (preDefs : Array PreDefinition) : TermElabM (Array PreDefinition) := do
addAndCompilePartialRec preDefs
let preDefs ← preDefs.mapM (eraseRecAppSyntax ·)
let preDefs ← preDefs.mapM (abstractNestedProofs ·)
return preDefs
/--
Assign final attributes to the definitions. Assumes the EqnInfos to be already present.
-/
def addPreDefAttributes (preDefs : Array PreDefinition) : TermElabM Unit := do
for preDef in preDefs do
markAsRecursive preDef.declName
generateEagerEqns preDef.declName
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation
-- Unless the user asks for something else, mark the definition as irreducible
unless preDef.modifiers.attrs.any fun a =>
a.name = `reducible || a.name = `semireducible do
setIrreducibleAttribute preDef.declName
end Lean.Elab.Mutual

View File

@@ -33,71 +33,12 @@ private def mkSimpleEqThm (declName : Name) (suffix := Name.mkSimple unfoldThmSu
else
return none
private partial def mkProof (declName : Name) (type : Expr) : MetaM Expr := do
trace[Elab.definition.eqns] "proving: {type}"
withNewMCtxDepth do
let main ← mkFreshExprSyntheticOpaqueMVar type
let (_, mvarId) ← main.mvarId!.intros
let rec go (mvarId : MVarId) : MetaM Unit := do
trace[Elab.definition.eqns] "step\n{MessageData.ofGoal mvarId}"
if ← withAtLeastTransparency .all (tryURefl mvarId) then
return ()
else if (← tryContradiction mvarId) then
return ()
else if let some mvarId ← simpMatch? mvarId then
go mvarId
else if let some mvarId ← simpIf? mvarId then
go mvarId
else if let some mvarId ← whnfReducibleLHS? mvarId then
go mvarId
else
let ctx ← Simp.mkContext (config := { dsimp := false })
match (← simpTargetStar mvarId ctx (simprocs := {})).1 with
| TacticResultCNM.closed => return ()
| TacticResultCNM.modified mvarId => go mvarId
| TacticResultCNM.noChange =>
if let some mvarIds ← casesOnStuckLHS? mvarId then
mvarIds.forM go
else if let some mvarIds ← splitTarget? mvarId then
mvarIds.forM go
else
throwError "failed to generate equational theorem for '{declName}'\n{MessageData.ofGoal mvarId}"
-- Try rfl before deltaLHS to avoid `id` checkpoints in the proof, which would make
-- the lemma ineligible for dsimp
unless ← withAtLeastTransparency .all (tryURefl mvarId) do
go (← deltaLHS mvarId)
instantiateMVars main
def mkEqns (declName : Name) (info : DefinitionVal) : MetaM (Array Name) :=
withOptions (tactic.hygienic.set · false) do
let baseName := declName
let eqnTypes ← withNewMCtxDepth <| lambdaTelescope (cleanupAnnotations := true) info.value fun xs body => do
let us := info.levelParams.map mkLevelParam
let target ← mkEq (mkAppN (Lean.mkConst declName us) xs) body
let goal ← mkFreshExprSyntheticOpaqueMVar target
withReducible do
mkEqnTypes #[] goal.mvarId!
let mut thmNames := #[]
for h : i in [: eqnTypes.size] do
let type := eqnTypes[i]
trace[Elab.definition.eqns] "eqnType[{i}]: {eqnTypes[i]}"
let name := (Name.str baseName eqnThmSuffixBase).appendIndexAfter (i+1)
thmNames := thmNames.push name
let value ← mkProof declName type
let (type, value) ← removeUnusedEqnHypotheses type value
addDecl <| Declaration.thmDecl {
name, type, value
levelParams := info.levelParams
}
return thmNames
def getEqnsFor? (declName : Name) : MetaM (Option (Array Name)) := do
if (← isRecursiveDefinition declName) then
return none
if let some (.defnInfo info) := (← getEnv).find? declName then
if (← getEnv).contains declName then
if backward.eqns.nonrecursive.get (← getOptions) then
mkEqns declName info
mkEqns declName
else
let o ← mkSimpleEqThm declName
return o.map (#[·])

View File

@@ -0,0 +1,9 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Elab.PreDefinition.PartialFixpoint.Eqns
import Lean.Elab.PreDefinition.PartialFixpoint.Main
import Lean.Elab.PreDefinition.PartialFixpoint.Induction

View File

@@ -0,0 +1,117 @@
/-
Copyright (c) 2022 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Elab.Tactic.Conv
import Lean.Meta.Tactic.Rewrite
import Lean.Meta.Tactic.Split
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.Eqns
import Lean.Meta.ArgsPacker.Basic
import Init.Data.Array.Basic
import Init.Internal.Order.Basic
namespace Lean.Elab.PartialFixpoint
open Meta
open Eqns
structure EqnInfo extends EqnInfoCore where
declNames : Array Name
declNameNonRec : Name
fixedPrefixSize : Nat
deriving Inhabited
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ← mkMapDeclarationExtension
def registerEqnsInfo (preDefs : Array PreDefinition) (declNameNonRec : Name) (fixedPrefixSize : Nat) : MetaM Unit := do
preDefs.forM fun preDef => ensureEqnReservedNamesAvailable preDef.declName
unless preDefs.all fun p => p.kind.isTheorem do
unless (← preDefs.allM fun p => isProp p.type) do
let declNames := preDefs.map (·.declName)
modifyEnv fun env =>
preDefs.foldl (init := env) fun env preDef =>
eqnInfoExt.insert env preDef.declName { preDef with
declNames, declNameNonRec, fixedPrefixSize }
private def deltaLHSUntilFix (declName declNameNonRec : Name) (mvarId : MVarId) : MetaM MVarId := mvarId.withContext do
let target ← mvarId.getType'
let some (_, lhs, rhs) := target.eq? | throwTacticEx `deltaLHSUntilFix mvarId "equality expected"
let lhs' ← deltaExpand lhs fun n => n == declName || n == declNameNonRec
mvarId.replaceTargetDefEq (← mkEq lhs' rhs)
partial def rwFixUnder (lhs : Expr) : MetaM Expr := do
if lhs.isAppOfArity ``Order.fix 4 then
return mkAppN (mkConst ``Order.fix_eq lhs.getAppFn.constLevels!) lhs.getAppArgs
else if lhs.isApp then
let h ← rwFixUnder lhs.appFn!
mkAppM ``congrFun #[h, lhs.appArg!]
else if lhs.isProj then
let f := mkLambda `p .default (← inferType lhs.projExpr!) (lhs.updateProj! (.bvar 0))
let h ← rwFixUnder lhs.projExpr!
mkAppM ``congrArg #[f, h]
else
throwError "rwFixUnder: unexpected expression {lhs}"
private def rwFixEq (mvarId : MVarId) : MetaM MVarId := mvarId.withContext do
let mut mvarId := mvarId
let target mvarId.getType'
let some (_, lhs, rhs) := target.eq? | unreachable!
let h ← rwFixUnder lhs
let some (_, _, lhsNew) := (← inferType h).eq? | unreachable!
let targetNew ← mkEq lhsNew rhs
let mvarNew ← mkFreshExprSyntheticOpaqueMVar targetNew
mvarId.assign (← mkEqTrans h mvarNew)
return mvarNew.mvarId!
/-- Generate the "unfold" lemma for `declName`. -/
def mkUnfoldEq (declName : Name) (info : EqnInfo) : MetaM Name := withLCtx {} {} do
withOptions (tactic.hygienic.set · false) do
let baseName := declName
lambdaTelescope info.value fun xs body => do
let us := info.levelParams.map mkLevelParam
let type ← mkEq (mkAppN (Lean.mkConst declName us) xs) body
let goal ← withNewMCtxDepth do
try
let goal ← mkFreshExprSyntheticOpaqueMVar type
let mvarId := goal.mvarId!
trace[Elab.definition.partialFixpoint] "mkUnfoldEq start:{mvarId}"
let mvarId ← deltaLHSUntilFix declName info.declNameNonRec mvarId
trace[Elab.definition.partialFixpoint] "mkUnfoldEq after deltaLHS:{mvarId}"
let mvarId ← rwFixEq mvarId
trace[Elab.definition.partialFixpoint] "mkUnfoldEq after rwFixEq:{mvarId}"
withAtLeastTransparency .all <|
withOptions (smartUnfolding.set · false) <|
mvarId.refl
trace[Elab.definition.partialFixpoint] "mkUnfoldEq rfl succeeded"
instantiateMVars goal
catch e =>
throwError "failed to generate unfold theorem for '{declName}':\n{e.toMessageData}"
let type ← mkForallFVars xs type
let value ← mkLambdaFVars xs goal
let name := Name.str baseName unfoldThmSuffix
addDecl <| Declaration.thmDecl {
name, type, value
levelParams := info.levelParams
}
return name
def getUnfoldFor? (declName : Name) : MetaM (Option Name) := do
let name := Name.str declName unfoldThmSuffix
let env ← getEnv
if env.contains name then return name
let some info := eqnInfoExt.find? env declName | return none
return some (← mkUnfoldEq declName info)
def getEqnsFor? (declName : Name) : MetaM (Option (Array Name)) := do
if let some _ := eqnInfoExt.find? (← getEnv) declName then
mkEqns declName
else
return none
builtin_initialize
registerGetEqnsFn getEqnsFor?
registerGetUnfoldEqnFn getUnfoldFor?
end Lean.Elab.PartialFixpoint
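
To make the role of `registerEqnsInfo`/`mkUnfoldEq` concrete, here is a minimal sketch (hypothetical function name; the shape of the generated theorem follows `mkUnfoldEq` above):

```lean
-- Hypothetical example: defining a function with `partial_fixpoint`
-- registers `EqnInfo`; the reserved-name machinery later realizes its
-- unfold lemma on demand via `mkUnfoldEq`.
def find (p : Nat → Bool) (n : Nat) : Option Nat :=
  if p n then some n else find p (n + 1)
partial_fixpoint

-- On first access, `getUnfoldFor?` calls `mkUnfoldEq`, which proves roughly
--   find.eq_def : find p n = if p n then some n else find p (n + 1)
-- by delta-expanding the LHS down to `Order.fix` (`deltaLHSUntilFix`) and
-- rewriting with `fix_eq` (`rwFixEq`).
```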


@@ -0,0 +1,292 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.Basic
import Lean.Meta.Match.MatcherApp.Transform
import Lean.Meta.Check
import Lean.Meta.Tactic.Subst
import Lean.Meta.Injective -- for elimOptParam
import Lean.Meta.ArgsPacker
import Lean.Meta.PProdN
import Lean.Meta.Tactic.Apply
import Lean.Elab.PreDefinition.PartialFixpoint.Eqns
import Lean.Elab.Command
import Lean.Meta.Tactic.ElimInfo
namespace Lean.Elab.PartialFixpoint
open Lean Elab Meta
open Lean.Order
def mkAdmAnd (α instα adm₁ adm₂ : Expr) : MetaM Expr :=
mkAppOptM ``admissible_and #[α, instα, none, none, adm₁, adm₂]
partial def mkAdmProj (packedInst : Expr) (i : Nat) (e : Expr) : MetaM Expr := do
if let some inst ← whnfUntil packedInst ``instCCPOPProd then
let_expr instCCPOPProd α β instα instβ := inst | throwError "mkAdmProj: unexpected instance {inst}"
if i == 0 then
mkAppOptM ``admissible_pprod_fst #[α, β, instα, instβ, none, e]
else
let e ← mkAdmProj instβ (i - 1) e
mkAppOptM ``admissible_pprod_snd #[α, β, instα, instβ, none, e]
else
assert! i == 0
return e
def CCPOProdProjs (n : Nat) (inst : Expr) : Array Expr := Id.run do
let mut insts := #[inst]
while insts.size < n do
let inst := insts.back!
let_expr Lean.Order.instCCPOPProd _ _ inst₁ inst₂ := inst
| panic! s!"isOptionFixpoint: unexpected CCPO instance {inst}"
insts := insts.pop
insts := insts.push inst₁
insts := insts.push inst₂
return insts
/-- `maskArray mask xs` keeps those `x` where the corresponding entry in `mask` is `true` -/
-- Worth having in the standard library?
private def maskArray {α} (mask : Array Bool) (xs : Array α) : Array α := Id.run do
let mut ys := #[]
for b in mask, x in xs do
if b then ys := ys.push x
return ys
/-- Appends `_1` etc to `base` unless `n == 1` -/
private def numberNames (n : Nat) (base : String) : Array Name :=
.ofFn (n := n) fun ⟨i, _⟩ =>
if n == 1 then .mkSimple base else .mkSimple s!"{base}_{i+1}"
def deriveInduction (name : Name) : MetaM Unit := do
mapError (f := (m!"Cannot derive fixpoint induction principle (please report this issue)\n{indentD ·}")) do
let some eqnInfo := eqnInfoExt.find? (← getEnv) name |
throwError "{name} is not defined by partial_fixpoint"
let infos ← eqnInfo.declNames.mapM getConstInfoDefn
-- First open up the fixed parameters everywhere
let e' ← lambdaBoundedTelescope infos[0]!.value eqnInfo.fixedPrefixSize fun xs _ => do
-- Now look at the body of an arbitrary of the functions (they are essentially the same
-- up to the final projections)
let body ← instantiateLambda infos[0]!.value xs
-- The body should now be of the form `(fix …).2.2.1`
-- We strip the projections (if present)
let body' := PProdN.stripProjs body
let some fixApp ← whnfUntil body' ``fix
| throwError "Unexpected function body {body}"
let_expr fix α instCCPOα F hmono := fixApp
| throwError "Unexpected function body {body'}"
let instCCPOs := CCPOProdProjs infos.size instCCPOα
let types ← infos.mapM (instantiateForall ·.type xs)
let packedType ← PProdN.pack 0 types
let motiveTypes ← types.mapM (mkArrow · (.sort 0))
let motiveNames := numberNames motiveTypes.size "motive"
withLocalDeclsDND (motiveNames.zip motiveTypes) fun motives => do
let packedMotive ←
withLocalDeclD (← mkFreshUserName `x) packedType fun x => do
mkLambdaFVars #[x] <| PProdN.pack 0 <|
motives.mapIdx fun idx motive =>
mkApp motive (PProdN.proj motives.size idx packedType x)
let admTypes ← motives.mapIdxM fun i motive => do
mkAppOptM ``admissible #[types[i]!, instCCPOs[i]!, some motive]
let admNames := numberNames admTypes.size "adm"
withLocalDeclsDND (admNames.zip admTypes) fun adms => do
let adms' ← adms.mapIdxM fun i adm => mkAdmProj instCCPOα i adm
let packedAdm ← PProdN.genMk (mkAdmAnd α instCCPOα) adms'
let hNames := numberNames infos.size "h"
let hTypes_hmask : Array (Expr × Array Bool) ← infos.mapIdxM fun i _info => do
let approxNames := infos.map fun info =>
match info.name with
| .str _ n => .mkSimple n
| _ => `f
withLocalDeclsDND (approxNames.zip types) fun approxs => do
let ihTypes := approxs.mapIdx fun j approx => mkApp motives[j]! approx
withLocalDeclsDND (ihTypes.map (`ih, ·)) fun ihs => do
let f ← PProdN.mk 0 approxs
let Ff := F.beta #[f]
let Ffi := PProdN.proj motives.size i packedType Ff
let t := mkApp motives[i]! Ffi
let t ← PProdN.reduceProjs t
let mask := approxs.map fun approx => t.containsFVar approx.fvarId!
let t ← mkForallFVars (maskArray mask approxs ++ maskArray mask ihs) t
pure (t, mask)
let (hTypes, masks) := hTypes_hmask.unzip
withLocalDeclsDND (hNames.zip hTypes) fun hs => do
let packedH ←
withLocalDeclD `approx packedType fun approx =>
let packedIHType := packedMotive.beta #[approx]
withLocalDeclD `ih packedIHType fun ih => do
let approxs := PProdN.projs motives.size packedType approx
let ihs := PProdN.projs motives.size packedIHType ih
let e ← PProdN.mk 0 <| hs.mapIdx fun i h =>
let mask := masks[i]!
mkAppN h (maskArray mask approxs ++ maskArray mask ihs)
mkLambdaFVars #[approx, ih] e
let e' ← mkAppOptM ``fix_induct #[α, instCCPOα, F, hmono, packedMotive, packedAdm, packedH]
-- Should be the type of e', but with the function definitions folded
let packedConclusion ← PProdN.pack 0 <| ← motives.mapIdxM fun i motive => do
let f ← mkConstWithLevelParams infos[i]!.name
return mkApp motive (mkAppN f xs)
let e' ← mkExpectedTypeHint e' packedConclusion
let e' ← mkLambdaFVars hs e'
let e' ← mkLambdaFVars adms e'
let e' ← mkLambdaFVars motives e'
let e' ← mkLambdaFVars (binderInfoForMVars := .default) (usedOnly := true) xs e'
let e' ← instantiateMVars e'
trace[Elab.definition.partialFixpoint.induction] "complete body of fixpoint induction principle:{indentExpr e'}"
pure e'
let eTyp ← inferType e'
let eTyp ← elimOptParam eTyp
-- logInfo m!"eTyp: {eTyp}"
let params := (collectLevelParams {} eTyp).params
-- Prune unused level parameters, preserving the original order
let us := infos[0]!.levelParams.filter (params.contains ·)
let inductName := name ++ `fixpoint_induct
addDecl <| Declaration.thmDecl
{ name := inductName, levelParams := us, type := eTyp, value := e' }
def isInductName (env : Environment) (name : Name) : Bool := Id.run do
let .str p s := name | return false
match s with
| "fixpoint_induct" =>
if let some eqnInfo := eqnInfoExt.find? env p then
return p == eqnInfo.declNames[0]!
return false
| _ => return false
builtin_initialize
registerReservedNamePredicate isInductName
registerReservedNameAction fun name => do
if isInductName (← getEnv) name then
let .str p _ := name | return false
MetaM.run' <| deriveInduction p
return true
return false
/--
Returns true if `name` is defined by `partial_fixpoint`, is the first in its mutual group,
and all functions are defined using the `CCPO` instance for `Option`.
-/
def isOptionFixpoint (env : Environment) (name : Name) : Bool := Option.isSome do
let eqnInfo ← eqnInfoExt.find? env name
guard <| name == eqnInfo.declNames[0]!
let defnInfo ← env.find? eqnInfo.declNameNonRec
assert! defnInfo.hasValue
let mut value := defnInfo.value!
while value.isLambda do value := value.bindingBody!
let_expr Lean.Order.fix _ inst _ _ := value | panic! s!"isOptionFixpoint: unexpected value {value}"
let insts := CCPOProdProjs eqnInfo.declNames.size inst
insts.forM fun inst => do
let mut inst := inst
while inst.isAppOfArity ``instCCPOPi 3 do
guard inst.appArg!.isLambda
inst := inst.appArg!.bindingBody!
guard <| inst.isAppOfArity ``instCCPOOption 1
def isPartialCorrectnessName (env : Environment) (name : Name) : Bool := Id.run do
let .str p s := name | return false
unless s == "partial_correctness" do return false
return isOptionFixpoint env p
/--
Given `motive : α → β → γ → Prop`, construct a proof of
`admissible (fun f => ∀ x y r, f x y = r → motive x y r)`
-/
def mkOptionAdm (motive : Expr) : MetaM Expr := do
let type ← inferType motive
forallTelescope type fun ysr _ => do
let P := mkAppN motive ysr
let ys := ysr.pop
let r := ysr.back!
let mut inst ← mkAppM ``Option.admissible_eq_some #[P, r]
inst ← mkLambdaFVars #[r] inst
inst ← mkAppOptM ``admissible_pi #[none, none, none, none, inst]
for y in ys.reverse do
inst ← mkLambdaFVars #[y] inst
inst ← mkAppOptM ``admissible_pi_apply #[none, none, none, none, inst]
pure inst
def derivePartialCorrectness (name : Name) : MetaM Unit := do
let fixpointInductThm := name ++ `fixpoint_induct
unless (← getEnv).contains fixpointInductThm do
deriveInduction name
mapError (f := (m!"Cannot derive partial correctness theorem (please report this issue)\n{indentD ·}")) do
let some eqnInfo := eqnInfoExt.find? (← getEnv) name |
throwError "{name} is not defined by partial_fixpoint"
let infos ← eqnInfo.declNames.mapM getConstInfoDefn
-- First open up the fixed parameters everywhere
let e' ← lambdaBoundedTelescope infos[0]!.value eqnInfo.fixedPrefixSize fun xs _ => do
let types ← infos.mapM (instantiateForall ·.type xs)
-- for `f : α → β → Option γ`, we expect a `motive : α → β → γ → Prop`
let motiveTypes ← types.mapM fun type =>
forallTelescopeReducing type fun ys type => do
let type ← whnf type
let_expr Option γ := type | throwError "Expected `Option`, got:{indentExpr type}"
withLocalDeclD (← mkFreshUserName `r) γ fun r =>
mkForallFVars (ys.push r) (.sort 0)
let motiveDecls ← motiveTypes.mapIdxM fun i motiveType => do
let n := if infos.size = 1 then .mkSimple "motive"
else .mkSimple s!"motive_{i+1}"
pure (n, fun _ => pure motiveType)
withLocalDeclsD motiveDecls fun motives => do
-- the motives, as expected by `f.fixpoint_induct`:
-- fun f => ∀ x y r, f x y = some r → motive x y r
let motives' ← motives.mapIdxM fun i motive => do
withLocalDeclD (← mkFreshUserName `f) types[i]! fun f => do
forallTelescope (← inferType motive) fun ysr _ => do
let ys := ysr.pop
let r := ysr.back!
let heq ← mkEq (mkAppN f ys) (← mkAppM ``some #[r])
let motive' ← mkArrow heq (mkAppN motive ysr)
let motive' ← mkForallFVars ysr motive'
mkLambdaFVars #[f] motive'
let e' ← mkAppOptM fixpointInductThm <| (xs ++ motives').map some
let adms ← motives.mapM mkOptionAdm
let e' := mkAppN e' adms
let e' ← mkLambdaFVars motives e'
let e' ← mkLambdaFVars (binderInfoForMVars := .default) (usedOnly := true) xs e'
let e' ← instantiateMVars e'
trace[Elab.definition.partialFixpoint.induction] "complete body of partial correctness principle:{indentExpr e'}"
pure e'
let eTyp ← inferType e'
let eTyp ← elimOptParam eTyp
let eTyp ← Core.betaReduce eTyp
-- logInfo m!"eTyp: {eTyp}"
let params := (collectLevelParams {} eTyp).params
-- Prune unused level parameters, preserving the original order
let us := infos[0]!.levelParams.filter (params.contains ·)
let inductName := name ++ `partial_correctness
addDecl <| Declaration.thmDecl
{ name := inductName, levelParams := us, type := eTyp, value := e' }
builtin_initialize
registerReservedNamePredicate isPartialCorrectnessName
registerReservedNameAction fun name => do
let .str p s := name | return false
unless s == "partial_correctness" do return false
unless isOptionFixpoint (← getEnv) p do return false
MetaM.run' <| derivePartialCorrectness p
return false
end Lean.Elab.PartialFixpoint
builtin_initialize Lean.registerTraceClass `Elab.definition.partialFixpoint.induction
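
As a rough illustration of the two derived principles (hypothetical function; the exact statement shapes are determined by `deriveInduction` and `derivePartialCorrectness` above):

```lean
-- Continuing a hypothetical example:
--   def find (p : Nat → Bool) (n : Nat) : Option Nat :=
--     if p n then some n else find p (n + 1)
--   partial_fixpoint
--
-- `deriveInduction` reserves `find.fixpoint_induct`, an induction principle
-- over the CCPO fixpoint; since the result type is `Option`,
-- `derivePartialCorrectness` additionally reserves roughly:
--
--   theorem find.partial_correctness (p : Nat → Bool)
--       (motive : Nat → Nat → Prop)
--       (h : ∀ f, (∀ n r, f n = some r → motive n r) →
--            ∀ n r, (if p n then some n else f (n + 1)) = some r → motive n r) :
--       ∀ n r, find p n = some r → motive n r
```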


@@ -0,0 +1,188 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Elab.PreDefinition.MkInhabitant
import Lean.Elab.PreDefinition.Mutual
import Lean.Elab.PreDefinition.PartialFixpoint.Eqns
import Lean.Elab.Tactic.Monotonicity
import Init.Internal.Order.Basic
import Lean.Meta.PProdN
namespace Lean.Elab
open Meta
open Monotonicity
open Lean.Order
private def replaceRecApps (recFnNames : Array Name) (fixedPrefixSize : Nat) (f : Expr) (e : Expr) : MetaM Expr := do
let t ← inferType f
return e.replace fun e =>
if let some idx := recFnNames.findIdx? (e.isAppOfArity · fixedPrefixSize) then
some <| PProdN.proj recFnNames.size idx t f
else
none
/--
For pretty error messages:
Takes `F : (fun f => e)`, where `f` is the packed function, and replaces `f` in `e` with the user-visible
constants, which are added to the environment temporarily.
-/
private def unReplaceRecApps {α} (preDefs : Array PreDefinition) (fixedArgs : Array Expr)
(F : Expr) (k : Expr → MetaM α) : MetaM α := do
unless F.isLambda do throwError "Expected lambda:{indentExpr F}"
withoutModifyingEnv do
preDefs.forM addAsAxiom
let fns := preDefs.map fun d =>
mkAppN (.const d.declName (d.levelParams.map mkLevelParam)) fixedArgs
let packedFn ← PProdN.mk 0 fns
let e ← lambdaBoundedTelescope F 1 fun f e => do
let f := f[0]!
-- Replace f with calls to the constants
let e := e.replace fun e => do if e == f then return packedFn else none
-- And reduce projection redexes
let e ← PProdN.reduceProjs e
pure e
k e
def mkInstCCPOPProd (inst₁ inst₂ : Expr) : MetaM Expr := do
mkAppOptM ``instCCPOPProd #[none, none, inst₁, inst₂]
def mkMonoPProd (hmono₁ hmono₂ : Expr) : MetaM Expr := do
-- mkAppM does not support the equivalent of (cfg := { synthAssignedInstances := false}),
-- so this is a bit more pedestrian
let_expr monotone _ inst _ inst₁ _ := (← inferType hmono₁)
| throwError "mkMonoPProd: unexpected type of{indentExpr hmono₁}"
let_expr monotone _ _ _ inst₂ _ := (← inferType hmono₂)
| throwError "mkMonoPProd: unexpected type of{indentExpr hmono₂}"
mkAppOptM ``PProd.monotone_mk #[none, none, none, inst₁, inst₂, inst, none, none, hmono₁, hmono₂]
def partialFixpoint (preDefs : Array PreDefinition) : TermElabM Unit := do
-- We expect all functions in the clique to have `partial_fixpoint` syntax
let hints := preDefs.filterMap (·.termination.partialFixpoint?)
assert! preDefs.size = hints.size
-- For every function of type `∀ x y, r x y`, a CCPO instance
-- ∀ x y, CCPO (r x y), but crucially constructed using `instCCPOPi`
let ccpoInsts ← preDefs.mapIdxM fun i preDef => withRef hints[i]!.ref do
lambdaTelescope preDef.value fun xs _body => do
let type ← instantiateForall preDef.type xs
let inst ←
try
synthInstance (← mkAppM ``CCPO #[type])
catch _ =>
trace[Elab.definition.partialFixpoint] "No CCPO instance found for {preDef.declName}, trying inhabitation"
let msg := m!"failed to compile definition '{preDef.declName}' using `partial_fixpoint`"
let w ← mkInhabitantFor msg #[] preDef.type
let instNonempty ← mkAppM ``Nonempty.intro #[mkAppN w xs]
let classicalWitness ← mkAppOptM ``Classical.ofNonempty #[none, instNonempty]
mkAppOptM ``FlatOrder.instCCPO #[none, classicalWitness]
mkLambdaFVars xs inst
let fixedPrefixSize ← Mutual.getFixedPrefix preDefs
trace[Elab.definition.partialFixpoint] "fixed prefix size: {fixedPrefixSize}"
let declNames := preDefs.map (·.declName)
forallBoundedTelescope preDefs[0]!.type fixedPrefixSize fun fixedArgs _ => do
-- ∀ x y, CCPO (rᵢ x y)
let ccpoInsts := ccpoInsts.map (·.beta fixedArgs)
let types ← preDefs.mapM (instantiateForall ·.type fixedArgs)
-- (∀ x y, r₁ x y) ×' (∀ x y, r₂ x y)
let packedType ← PProdN.pack 0 types
-- CCPO (∀ x y, rᵢ x y)
let ccpoInsts' ← ccpoInsts.mapM fun inst =>
lambdaTelescope inst fun xs inst => do
let mut inst := inst
for x in xs.reverse do
inst ← mkAppOptM ``instCCPOPi #[(← inferType x), none, (← mkLambdaFVars #[x] inst)]
pure inst
-- CCPO ((∀ x y, r₁ x y) ×' (∀ x y, r₂ x y))
let packedCCPOInst ← PProdN.genMk mkInstCCPOPProd ccpoInsts'
-- Order ((∀ x y, r₁ x y) ×' (∀ x y, r₂ x y))
let packedPartialOrderInst ← mkAppOptM ``CCPO.toPartialOrder #[none, packedCCPOInst]
-- Error reporting hook, presenting monotonicity errors in terms of recursive functions
let failK {α} f (monoThms : Array Name) : MetaM α := do
unReplaceRecApps preDefs fixedArgs f fun t => do
let extraMsg := if monoThms.isEmpty then m!"" else
m!"Tried to apply {.andList (monoThms.toList.map (m!"'{.ofConstName ·}'"))}, but failed.\n\
Possible cause: A missing `{.ofConstName ``MonoBind}` instance.\n\
Use `set_option trace.Elab.Tactic.monotonicity true` to debug."
if let some recApp := t.find? hasRecAppSyntax then
let some syn := getRecAppSyntax? recApp | panic! "getRecAppSyntax? failed"
withRef syn <|
throwError "Cannot eliminate recursive call `{syn}` enclosed in{indentExpr t}\n{extraMsg}"
else
throwError "Cannot eliminate recursive call in{indentExpr t}\n{extraMsg}"
-- Adjust the body of each function to take the other functions as a
-- (packed) parameter
let Fs ← preDefs.mapM fun preDef => do
let body ← instantiateLambda preDef.value fixedArgs
withLocalDeclD (← mkFreshUserName `f) packedType fun f => do
let body' ← withoutModifyingEnv do
-- replaceRecApps needs the constants in the env to typecheck things
preDefs.forM (addAsAxiom ·)
replaceRecApps declNames fixedPrefixSize f body
mkLambdaFVars #[f] body'
-- Construct and solve monotonicity goals for each function separately
-- This way we preserve the user's parameter names as much as possible
-- and can (later) use the user-specified per-function tactic
let hmonos ← preDefs.mapIdxM fun i preDef => do
let type := types[i]!
let F := Fs[i]!
let inst ← mkAppOptM ``CCPO.toPartialOrder #[type, ccpoInsts'[i]!]
let goal ← mkAppOptM ``monotone #[packedType, packedPartialOrderInst, type, inst, F]
if let some term := hints[i]!.term? then
let hmono ← Term.withSynthesize <| Term.elabTermEnsuringType term goal
let hmono ← instantiateMVars hmono
let mvars ← getMVars hmono
if mvars.isEmpty then
pure hmono
else
discard <| Term.logUnassignedUsingErrorInfos mvars
mkSorry goal (synthetic := true)
else
let hmono ← mkFreshExprSyntheticOpaqueMVar goal
mapError (f := (m!"Could not prove '{preDef.declName}' to be monotone in its recursive calls:{indentD ·}")) do
solveMono failK hmono.mvarId!
trace[Elab.definition.partialFixpoint] "monotonicity proof for {preDef.declName}: {hmono}"
instantiateMVars hmono
let hmono ← PProdN.genMk mkMonoPProd hmonos
let packedValue ← mkAppOptM ``fix #[packedType, packedCCPOInst, none, hmono]
trace[Elab.definition.partialFixpoint] "packedValue: {packedValue}"
let declName :=
if preDefs.size = 1 then
preDefs[0]!.declName
else
preDefs[0]!.declName ++ `mutual
let packedType' ← mkForallFVars fixedArgs packedType
let packedValue' ← mkLambdaFVars fixedArgs packedValue
let preDefNonRec := { preDefs[0]! with
declName := declName
type := packedType'
value := packedValue'}
let preDefsNonrec ← preDefs.mapIdxM fun fidx preDef => do
let us := preDefNonRec.levelParams.map mkLevelParam
let value := mkConst preDefNonRec.declName us
let value := mkAppN value fixedArgs
let value := PProdN.proj preDefs.size fidx packedType value
let value ← mkLambdaFVars fixedArgs value
pure { preDef with value }
Mutual.addPreDefsFromUnary preDefs preDefsNonrec preDefNonRec
let preDefs ← Mutual.cleanPreDefs preDefs
PartialFixpoint.registerEqnsInfo preDefs preDefNonRec.declName fixedPrefixSize
Mutual.addPreDefAttributes preDefs
end Lean.Elab
builtin_initialize Lean.registerTraceClass `Elab.definition.partialFixpoint
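
For orientation, a sketch of the user-facing behavior this elaborator implements (hypothetical function; recursive calls must occur in monotone positions, e.g. under `Option` bind):

```lean
-- Hypothetical: the recursive call sits under `bind`, so `solveMono` can
-- discharge the `monotone` side goal via the `MonoBind Option` instance.
def collatzSteps (n : Nat) : Option Nat :=
  if n ≤ 1 then
    some 0
  else do
    let k ← collatzSteps (if n % 2 == 0 then n / 2 else 3 * n + 1)
    return k + 1
partial_fixpoint

-- If the automation fails, the optional hint supplies the proof term:
--   partial_fixpoint monotonicity my_mono_proof
```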


@@ -294,7 +294,7 @@ def mkBrecOnApp (positions : Positions) (fnIdx : Nat) (brecOnConst : Nat → Exp
let brecOn := mkAppN brecOn packedFArgs
let some (size, idx) := positions.findSome? fun pos => (pos.size, ·) <$> pos.indexOf? fnIdx
| throwError "mkBrecOnApp: Could not find {fnIdx} in {positions}"
let brecOn ← PProdN.proj size idx brecOn
let brecOn ← PProdN.projM size idx brecOn
mkLambdaFVars ys (mkAppN brecOn otherArgs)
end Lean.Elab.Structural


@@ -4,7 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura, Joachim Breitner
-/
prelude
import Lean.Elab.PreDefinition.TerminationArgument
import Lean.Elab.PreDefinition.TerminationMeasure
import Lean.Elab.PreDefinition.Structural.Basic
import Lean.Elab.PreDefinition.Structural.RecArgInfo
@@ -56,7 +56,7 @@ private def hasBadParamDep? (ys : Array Expr) (indParams : Array Expr) : MetaM (
/--
Assemble the `RecArgInfo` for the `i`th parameter in the parameter list `xs`. This performs
various sanity checks on the argument (is it even an inductive type etc).
various sanity checks on the parameter (is it even of inductive type etc).
-/
def getRecArgInfo (fnName : Name) (numFixed : Nat) (xs : Array Expr) (i : Nat) : MetaM RecArgInfo := do
if h : i < xs.size then
@@ -112,17 +112,17 @@ considered.
The `xs` are the fixed parameters, `value` the body with the fixed prefix instantiated.
Takes the optional user annotations into account (`termArg?`). If this is given and the argument
Takes the optional user annotation into account (`termMeasure?`). If this is given and the measure
is unsuitable, throw an error.
-/
def getRecArgInfos (fnName : Name) (xs : Array Expr) (value : Expr)
(termArg? : Option TerminationArgument) : MetaM (Array RecArgInfo × MessageData) := do
(termMeasure? : Option TerminationMeasure) : MetaM (Array RecArgInfo × MessageData) := do
lambdaTelescope value fun ys _ => do
if let .some termArg := termArg? then
-- User explicitly asked to use a certain argument, so throw errors eagerly
let recArgInfo ← withRef termArg.ref do
mapError (f := (m!"cannot use specified parameter for structural recursion:{indentD ·}")) do
getRecArgInfo fnName xs.size (xs ++ ys) (← termArg.structuralArg)
if let .some termMeasure := termMeasure? then
-- User explicitly asked to use a certain measure, so throw errors eagerly
let recArgInfo ← withRef termMeasure.ref do
mapError (f := (m!"cannot use specified measure for structural recursion:{indentD ·}")) do
getRecArgInfo fnName xs.size (xs ++ ys) (← termMeasure.structuralArg)
return (#[recArgInfo], m!"")
else
let mut recArgInfos := #[]
@@ -233,12 +233,12 @@ def allCombinations (xss : Array (Array α)) : Option (Array (Array α)) :=
def tryAllArgs (fnNames : Array Name) (xs : Array Expr) (values : Array Expr)
(termArg?s : Array (Option TerminationArgument)) (k : Array RecArgInfo → M α) : M α := do
(termMeasure?s : Array (Option TerminationMeasure)) (k : Array RecArgInfo → M α) : M α := do
let mut report := m!""
-- Gather information on all possible recursive arguments
let mut recArgInfoss := #[]
for fnName in fnNames, value in values, termArg? in termArg?s do
let (recArgInfos, thisReport) ← getRecArgInfos fnName xs value termArg?
for fnName in fnNames, value in values, termMeasure? in termMeasure?s do
let (recArgInfos, thisReport) ← getRecArgInfos fnName xs value termMeasure?
report := report ++ thisReport
recArgInfoss := recArgInfoss.push recArgInfos
-- Put non-indices first


@@ -4,7 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura, Joachim Breitner
-/
prelude
import Lean.Elab.PreDefinition.TerminationArgument
import Lean.Elab.PreDefinition.TerminationMeasure
import Lean.Elab.PreDefinition.Structural.Basic
import Lean.Elab.PreDefinition.Structural.FindRecArg
import Lean.Elab.PreDefinition.Structural.Preprocess
@@ -127,7 +127,7 @@ private def elimMutualRecursion (preDefs : Array PreDefinition) (xs : Array Expr
let valuesNew ← valuesNew.mapM (mkLambdaFVars xs ·)
return (Array.zip preDefs valuesNew).map fun ⟨preDef, valueNew⟩ => { preDef with value := valueNew }
private def inferRecArgPos (preDefs : Array PreDefinition) (termArg?s : Array (Option TerminationArgument)) :
private def inferRecArgPos (preDefs : Array PreDefinition) (termMeasure?s : Array (Option TerminationMeasure)) :
M (Array Nat × (Array PreDefinition) × Nat) := do
withoutModifyingEnv do
preDefs.forM (addAsAxiom ·)
@@ -142,7 +142,7 @@ private def inferRecArgPos (preDefs : Array PreDefinition) (termArg?s : Array (O
assert! xs.size = maxNumFixed
let values ← preDefs.mapM (instantiateLambda ·.value xs)
tryAllArgs fnNames xs values termArg?s fun recArgInfos => do
tryAllArgs fnNames xs values termMeasure?s fun recArgInfos => do
let recArgPoss := recArgInfos.map (·.recArgPos)
trace[Elab.definition.structural] "Trying argument set {recArgPoss}"
let numFixed := recArgInfos.foldl (·.min ·.numFixed) maxNumFixed
@@ -156,20 +156,20 @@ private def inferRecArgPos (preDefs : Array PreDefinition) (termArg?s : Array (O
let preDefs' ← elimMutualRecursion preDefs xs recArgInfos
return (recArgPoss, preDefs', numFixed)
def reportTermArg (preDef : PreDefinition) (recArgPos : Nat) : MetaM Unit := do
def reportTermMeasure (preDef : PreDefinition) (recArgPos : Nat) : MetaM Unit := do
if let some ref := preDef.termination.terminationBy?? then
let fn ← lambdaTelescope preDef.value fun xs _ => mkLambdaFVars xs xs[recArgPos]!
let termArg : TerminationArgument := {ref := .missing, structural := true, fn}
let termMeasure : TerminationMeasure := {ref := .missing, structural := true, fn}
let arity ← lambdaTelescope preDef.value fun xs _ => pure xs.size
let stx ← termArg.delab arity (extraParams := preDef.termination.extraParams)
let stx ← termMeasure.delab arity (extraParams := preDef.termination.extraParams)
Tactic.TryThis.addSuggestion ref stx
def structuralRecursion (preDefs : Array PreDefinition) (termArg?s : Array (Option TerminationArgument)) : TermElabM Unit := do
def structuralRecursion (preDefs : Array PreDefinition) (termMeasure?s : Array (Option TerminationMeasure)) : TermElabM Unit := do
let names := preDefs.map (·.declName)
let ((recArgPoss, preDefsNonRec, numFixed), state) ← run <| inferRecArgPos preDefs termArg?s
let ((recArgPoss, preDefsNonRec, numFixed), state) ← run <| inferRecArgPos preDefs termMeasure?s
for recArgPos in recArgPoss, preDef in preDefs do
reportTermArg preDef recArgPos
reportTermMeasure preDef recArgPos
state.addMatchers.forM liftM
preDefsNonRec.forM fun preDefNonRec => do
let preDefNonRec ← eraseRecAppSyntax preDefNonRec


@@ -15,7 +15,7 @@ namespace Lean.Elab
/-- A single `termination_by` clause -/
structure TerminationBy where
ref : Syntax
structural : Bool
structural : Bool
vars : TSyntaxArray [`ident, ``Lean.Parser.Term.hole]
body : Term
/--
@@ -33,6 +33,12 @@ structure DecreasingBy where
tactic : TSyntax ``Lean.Parser.Tactic.tacticSeq
deriving Inhabited
/-- A single `partial_fixpoint` clause -/
structure PartialFixpoint where
ref : Syntax
term? : Option Term
deriving Inhabited
/--
The termination annotations for a single function.
For `decreasing_by`, we store the whole `decreasing_by tacticSeq` expression, as this
@@ -42,12 +48,13 @@ structure TerminationHints where
ref : Syntax
terminationBy?? : Option Syntax
terminationBy? : Option TerminationBy
partialFixpoint? : Option PartialFixpoint
decreasingBy? : Option DecreasingBy
/--
Here we record the number of parameters past the `:`. It is set by
`TerminationHints.rememberExtraParams` and used as follows:
* When we guess the termination argument in `GuessLex` and want to print it in surface-syntax
* When we guess the termination measure in `GuessLex` and want to print it in surface-syntax
compatible form.
* If there are fewer variables in the `termination_by` annotation than there are extra
parameters, we know which parameters they should apply to (`TerminationBy.checkVars`).
@@ -55,26 +62,29 @@ structure TerminationHints where
extraParams : Nat
deriving Inhabited
def TerminationHints.none : TerminationHints := ⟨.missing, .none, .none, .none, 0⟩
def TerminationHints.none : TerminationHints := ⟨.missing, .none, .none, .none, .none, 0⟩
/-- Logs warnings when the `TerminationHints` are unexpectedly present. -/
def TerminationHints.ensureNone (hints : TerminationHints) (reason : String) : CoreM Unit := do
match hints.terminationBy??, hints.terminationBy?, hints.decreasingBy? with
| .none, .none, .none => pure ()
| .none, .none, .some dec_by =>
match hints.terminationBy??, hints.terminationBy?, hints.decreasingBy?, hints.partialFixpoint? with
| .none, .none, .none, .none => pure ()
| .none, .none, .some dec_by, .none =>
logWarningAt dec_by.ref m!"unused `decreasing_by`, function is {reason}"
| .some term_by?, .none, .none =>
| .some term_by?, .none, .none, .none =>
logWarningAt term_by? m!"unused `termination_by?`, function is {reason}"
| .none, .some term_by, .none =>
| .none, .some term_by, .none, .none =>
logWarningAt term_by.ref m!"unused `termination_by`, function is {reason}"
| _, _, _ =>
| .none, .none, .none, .some partialFixpoint =>
logWarningAt partialFixpoint.ref m!"unused `partial_fixpoint`, function is {reason}"
| _, _, _, _ =>
logWarningAt hints.ref m!"unused termination hints, function is {reason}"
/-- True if any form of termination hint is present. -/
def TerminationHints.isNotNone (hints : TerminationHints) : Bool :=
hints.terminationBy??.isSome ||
hints.terminationBy?.isSome ||
hints.decreasingBy?.isSome
hints.decreasingBy?.isSome ||
hints.partialFixpoint?.isSome
/--
Remembers `extraParams` for later use. Needs to happen early enough where we still know
@@ -117,6 +127,8 @@ def elabTerminationHints {m} [Monad m] [MonadError m] (stx : TSyntax ``suffix) :
| _ => pure none
else pure none
let terminationBy? : Option TerminationBy ← if let some t := t? then match t with
| `(terminationBy|termination_by partialFixpointursion) =>
pure (some {ref := t, structural := false, vars := #[], body := .missing : TerminationBy})
| `(terminationBy|termination_by $[structural%$s]? => $_body) =>
throwErrorAt t "no extra parameters bounds, please omit the `=>`"
| `(terminationBy|termination_by $[structural%$s]? $vars* => $body) =>
@@ -124,12 +136,17 @@ def elabTerminationHints {m} [Monad m] [MonadError m] (stx : TSyntax ``suffix) :
| `(terminationBy|termination_by $[structural%$s]? $body:term) =>
pure (some {ref := t, structural := s.isSome, vars := #[], body})
| `(terminationBy?|termination_by?) => pure none
| `(partialFixpoint|partial_fixpoint $[monotonicity $_]?) => pure none
| _ => throwErrorAt t "unexpected `termination_by` syntax"
else pure none
let partialFixpoint? : Option PartialFixpoint ← if let some t := t? then match t with
| `(partialFixpoint|partial_fixpoint $[monotonicity $term?]?) => pure (some {ref := t, term?})
| _ => pure none
else pure none
let decreasingBy? ← d?.mapM fun d => match d with
| `(decreasingBy|decreasing_by $tactic) => pure {ref := d, tactic}
| _ => throwErrorAt d "unexpected `decreasing_by` syntax"
return { ref := stx, terminationBy??, terminationBy?, decreasingBy?, extraParams := 0 }
return { ref := stx, terminationBy??, terminationBy?, partialFixpoint?, decreasingBy?, extraParams := 0 }
| _ => throwErrorAt stx s!"Unexpected Termination.suffix syntax: {stx} of kind {stx.raw.getKind}"
end Lean.Elab
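The `partialFixpoint?` hint elaborated above corresponds to the `partial_fixpoint` suffix. A sketch of its intended use on a possibly non-terminating function (illustrative example, assuming the surface syntax introduced by this change):

```lean
-- May diverge when `p` is never satisfied; `partial_fixpoint` defines the function
-- as a least fixpoint over a partial order instead of demanding a termination measure.
def findFrom (p : Nat → Bool) (n : Nat) : Option Nat :=
  if p n then some n else findFrom p (n + 1)
partial_fixpoint
```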

View File

@@ -14,8 +14,8 @@ import Lean.PrettyPrinter.Delaborator.Basic
/-!
This module contains
* the data type `TerminationArgument`, the elaborated form of a `TerminationBy` clause,
* the `TerminationArguments` type for a clique, and
* the data type `TerminationMeasure`, the elaborated form of a `TerminationBy` clause,
* the `TerminationMeasures` type for a clique, and
* elaboration and deelaboration functions.
-/
@@ -29,28 +29,28 @@ open Lean Meta Elab Term
Elaborated form for a `termination_by` clause.
The `fn` has the same (value) arity as the recursive functions (stored in
`arity`), and maps its arguments (including fixed prefix, in unpacked form) to
the termination argument.
`arity`), and maps its measures (including fixed prefix, in unpacked form) to
the termination measure.
If `structural := true`, then the `fn` is a lambda picking out exactly one argument.
If `structural := true`, then the `fn` is a lambda picking out exactly one measure.
-/
structure TerminationArgument where
structure TerminationMeasure where
ref : Syntax
structural : Bool
fn : Expr
deriving Inhabited
/-- A complete set of `TerminationArgument`s, as applicable to a single clique. -/
abbrev TerminationArguments := Array TerminationArgument
/-- A complete set of `TerminationMeasure`s, as applicable to a single clique. -/
abbrev TerminationMeasures := Array TerminationMeasure
/--
Elaborates a `TerminationBy` to an `TerminationArgument`.
Elaborates a `TerminationBy` to an `TerminationMeasure`.
* `type` is the full type of the original recursive function, including fixed prefix.
* `hint : TerminationBy` is the syntactic `TerminationBy`.
-/
def TerminationArgument.elab (funName : Name) (type : Expr) (arity extraParams : Nat)
(hint : TerminationBy) : TermElabM TerminationArgument := withDeclName funName do
def TerminationMeasure.elab (funName : Name) (type : Expr) (arity extraParams : Nat)
(hint : TerminationBy) : TermElabM TerminationMeasure := withDeclName funName do
assert! extraParams ≤ arity
if h : hint.vars.size > extraParams then
@@ -73,7 +73,7 @@ def TerminationArgument.elab (funName : Name) (type : Expr) (arity extraParams :
-- Structural recursion: The body has to be a single parameter, whose index we return
if hint.structural then unless (ys ++ xs).contains body do
let params := MessageData.andList ((ys ++ xs).toList.map (m!"'{·}'"))
throwErrorAt hint.ref m!"The termination argument of a structurally recursive " ++
throwErrorAt hint.ref m!"The termination measure of a structurally recursive " ++
m!"function must be one of the parameters {params}, but{indentExpr body}\nisn't " ++
m!"one of these."
@@ -87,24 +87,24 @@ def TerminationArgument.elab (funName : Name) (type : Expr) (arity extraParams :
| 1 => "one parameter"
| n => m!"{n} parameters"
def TerminationArgument.structuralArg (termArg : TerminationArgument) : MetaM Nat := do
assert! termArg.structural
lambdaTelescope termArg.fn fun ys e => do
def TerminationMeasure.structuralArg (measure : TerminationMeasure) : MetaM Nat := do
assert! measure.structural
lambdaTelescope measure.fn fun ys e => do
let .some idx := ys.indexOf? e
| panic! "TerminationArgument.structuralArg: body not one of the parameters"
| panic! "TerminationMeasure.structuralArg: body not one of the parameters"
return idx
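`structuralArg` recovers, from the elaborated lambda, the index of the parameter named in a `termination_by structural` clause. For illustration (an assumed example, not from this diff):

```lean
-- `structural` restricts the measure to exactly one parameter; the elaborated
-- measure is a lambda projecting out that parameter, whose index `structuralArg` returns.
def len : List Nat → Nat
  | [] => 0
  | _ :: xs => len xs + 1
termination_by structural xs => xs
```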
open PrettyPrinter Delaborator SubExpr Parser.Termination Parser.Term in
/--
Delaborates a `TerminationArgument` back to a `TerminationHint`, e.g. for `termination_by?`.
Delaborates a `TerminationMeasure` back to a `TerminationHint`, e.g. for `termination_by?`.
This needs extra information:
* `arity` is the value arity of the recursive function
* `extraParams` indicates how many of the functions arguments are bound after the colon.
* `extraParams` indicates how many of the function's parameters are bound after the colon.
-/
def TerminationArgument.delab (arity : Nat) (extraParams : Nat) (termArg : TerminationArgument) : MetaM (TSyntax ``terminationBy) := do
lambdaBoundedTelescope termArg.fn (arity - extraParams) fun _ys e => do
def TerminationMeasure.delab (arity : Nat) (extraParams : Nat) (measure : TerminationMeasure) : MetaM (TSyntax ``terminationBy) := do
lambdaBoundedTelescope measure.fn (arity - extraParams) fun _ys e => do
pure (← delabCore e (delab := go extraParams #[])).1
where
go : Nat → TSyntaxArray `ident → DelabM (TSyntax ``terminationBy)
@@ -119,7 +119,7 @@ def TerminationArgument.delab (arity : Nat) (extraParams : Nat) (termArg : Termi
-- drop trailing underscores
let mut vars := vars
while ! vars.isEmpty && vars.back!.raw.isOfKind ``hole do vars := vars.pop
if termArg.structural then
if measure.structural then
if vars.isEmpty then
`(terminationBy|termination_by structural $stxBody)
else

View File

@@ -10,6 +10,7 @@ import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.Eqns
import Lean.Meta.ArgsPacker.Basic
import Init.Data.Array.Basic
import Init.Internal.Order.Basic
namespace Lean.Elab.WF
open Meta
@@ -20,7 +21,6 @@ structure EqnInfo extends EqnInfoCore where
declNameNonRec : Name
fixedPrefixSize : Nat
argsPacker : ArgsPacker
hasInduct : Bool
deriving Inhabited
private partial def deltaLHSUntilFix (mvarId : MVarId) : MetaM MVarId := mvarId.withContext do
@@ -28,13 +28,23 @@ private partial def deltaLHSUntilFix (mvarId : MVarId) : MetaM MVarId := mvarId.
let some (_, lhs, _) := target.eq? | throwTacticEx `deltaLHSUntilFix mvarId "equality expected"
if lhs.isAppOf ``WellFounded.fix then
return mvarId
else if lhs.isAppOf ``Order.fix then
return mvarId
else
deltaLHSUntilFix (← deltaLHS mvarId)
private def rwFixEq (mvarId : MVarId) : MetaM MVarId := mvarId.withContext do
let target ← mvarId.getType'
let some (_, lhs, rhs) := target.eq? | unreachable!
let h := mkAppN (mkConst ``WellFounded.fix_eq lhs.getAppFn.constLevels!) lhs.getAppArgs
let h ←
if lhs.isAppOf ``WellFounded.fix then
pure <| mkAppN (mkConst ``WellFounded.fix_eq lhs.getAppFn.constLevels!) lhs.getAppArgs
else if lhs.isAppOf ``Order.fix then
let x := lhs.getAppArgs.back!
let args := lhs.getAppArgs.pop
mkAppM ``congrFun #[mkAppN (mkConst ``Order.fix_eq lhs.getAppFn.constLevels!) args, x]
else
throwTacticEx `rwFixEq mvarId "expected fixed-point application"
let some (_, _, lhsNew) := (← inferType h).eq? | unreachable!
let targetNew ← mkEq lhsNew rhs
let mvarNew ← mkFreshExprSyntheticOpaqueMVar targetNew
@@ -102,7 +112,7 @@ def mkEqns (declName : Name) (info : EqnInfo) : MetaM (Array Name) :=
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ← mkMapDeclarationExtension
def registerEqnsInfo (preDefs : Array PreDefinition) (declNameNonRec : Name) (fixedPrefixSize : Nat)
(argsPacker : ArgsPacker) (hasInduct : Bool) : MetaM Unit := do
(argsPacker : ArgsPacker) : MetaM Unit := do
preDefs.forM fun preDef => ensureEqnReservedNamesAvailable preDef.declName
/-
See issue #2327.
@@ -115,7 +125,7 @@ def registerEqnsInfo (preDefs : Array PreDefinition) (declNameNonRec : Name) (fi
modifyEnv fun env =>
preDefs.foldl (init := env) fun env preDef =>
eqnInfoExt.insert env preDef.declName { preDef with
declNames, declNameNonRec, fixedPrefixSize, argsPacker, hasInduct }
declNames, declNameNonRec, fixedPrefixSize, argsPacker }
def getEqnsFor? (declName : Name) : MetaM (Option (Array Name)) := do
if let some info := eqnInfoExt.find? (← getEnv) declName then

View File

@@ -14,13 +14,13 @@ import Lean.Elab.Quotation
import Lean.Elab.RecAppSyntax
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.Structural.Basic
import Lean.Elab.PreDefinition.TerminationArgument
import Lean.Elab.PreDefinition.TerminationMeasure
import Lean.Elab.PreDefinition.WF.Basic
import Lean.Data.Array
/-!
This module finds lexicographic termination arguments for well-founded recursion.
This module finds lexicographic termination measures for well-founded recursion.
Starting with basic measures (`sizeOf xᵢ` for all parameters `xᵢ`), and complex measures
(e.g. `e₂ - e₁` if `e₁ < e₂` is found in the context of a recursive call) it tries all combinations
@@ -42,7 +42,7 @@ guessed lexicographic order.
The following optimizations are applied to make this feasible:
1. The crucial optimization is to look at each argument of each recursive call
1. The crucial optimization is to look at each measure of each recursive call
_once_, try to prove `<` and (if that fails `≤`), and then look at that table to
pick a suitable measure.
@@ -50,7 +50,7 @@ The following optimizations are applied to make this feasible:
expensive) tactics as few times as possible, while still being able to consider a possibly
large number of combinations.
3. Before we even try to prove `<`, we check if the arguments are equal (`=`). No well-founded
3. Before we even try to prove `<`, we check if the measures are equal (`=`). No well-founded
measure will relate equal terms, likely this check is faster than firing up the tactic engine,
and it adds more signal to the output.
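As a concrete illustration of what this module infers (an assumed example, not part of the diff): for the Ackermann function, the guessed measure is the lexicographic pair of the two arguments, which the `termination_by?` query form surfaces as a suggestion:

```lean
def ack : Nat → Nat → Nat
  | 0, m => m + 1
  | n + 1, 0 => ack n 1
  | n + 1, m + 1 => ack n (ack (n + 1) m)
termination_by?  -- surfaces the guessed measure, roughly `termination_by n m => (n, m)`
```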
@@ -91,7 +91,7 @@ def originalVarNames (preDef : PreDefinition) : MetaM (Array Name) := do
/--
Given the original parameter names from `originalVarNames`, find
good variable names to be used when talking about termination arguments:
good variable names to be used when talking about termination measures:
Use user-given parameter names if present; use x1...xn otherwise.
The names ought to be accessible (no macro scopes) and fresh wrt the current environment,
@@ -121,7 +121,7 @@ def naryVarNames (xs : Array Name) : MetaM (Array Name) := do
freshen ns (n.appendAfter "'")
/-- A termination measure with extra fields for use within GuessLex -/
structure Measure extends TerminationArgument where
structure BasicMeasure extends TerminationMeasure where
/--
Like `.fn`, but unconditionally with `sizeOf` at the right type.
We use this one when in `evalRecCall`
@@ -130,7 +130,7 @@ structure Measure extends TerminationArgument where
deriving Inhabited
/-- String description of this measure -/
def Measure.toString (measure : Measure) : MetaM String := do
def BasicMeasure.toString (measure : BasicMeasure) : MetaM String := do
lambdaTelescope measure.fn fun _xs e => do
-- This is a bit sloppy if `measure.fn` takes more parameters than the `PreDefinition`
return (← ppExpr e).pretty
@@ -138,10 +138,10 @@ def Measure.toString (measure : Measure) : MetaM String := do
/--
Determine if the measure for parameter `x` should be `sizeOf x` or just `x`.
For non-mutual definitions, we omit `sizeOf` when the argument does not depend on
For non-mutual definitions, we omit `sizeOf` when the measure does not depend on
the other varying parameters, and its `WellFoundedRelation` instance goes via `SizeOf`.
For mutual definitions, we omit `sizeOf` only when the argument is (at reducible transparency!) of
For mutual definitions, we omit `sizeOf` only when the measure is (at reducible transparency!) of
type `Nat` (else we'd have to worry about differently-typed measures from different functions to
line up).
-/
@@ -170,12 +170,12 @@ def withUserNames {α} (xs : Array Expr) (ns : Array Name) (k : MetaM α) : Meta
/-- Create one measure for each (eligible) parameter of the given predefinition. -/
def simpleMeasures (preDefs : Array PreDefinition) (fixedPrefixSize : Nat)
(userVarNamess : Array (Array Name)) : MetaM (Array (Array Measure)) := do
(userVarNamess : Array (Array Name)) : MetaM (Array (Array BasicMeasure)) := do
let is_mutual : Bool := preDefs.size > 1
preDefs.mapIdxM fun funIdx preDef => do
lambdaTelescope preDef.value fun xs _ => do
withUserNames xs[fixedPrefixSize:] userVarNamess[funIdx]! do
let mut ret : Array Measure := #[]
let mut ret : Array BasicMeasure := #[]
for x in xs[fixedPrefixSize:] do
-- If the `SizeOf` instance produces a constant (e.g. because its type is a `Prop` or
-- `Type`), then ignore this parameter
@@ -369,7 +369,7 @@ def isNatCmp (e : Expr) : Option (Expr × Expr) :=
def complexMeasures (preDefs : Array PreDefinition) (fixedPrefixSize : Nat)
(userVarNamess : Array (Array Name)) (recCalls : Array RecCallWithContext) :
MetaM (Array (Array Measure)) := do
MetaM (Array (Array BasicMeasure)) := do
preDefs.mapIdxM fun funIdx _preDef => do
let mut measures := #[]
for rc in recCalls do
@@ -426,7 +426,7 @@ def GuessLexRel.toNatRel : GuessLexRel → Expr
For a given recursive call, and a choice of parameter and argument index,
try to prove equality, < or ≤.
-/
def evalRecCall (decrTactic? : Option DecreasingBy) (callerMeasures calleeMeasures : Array Measure)
def evalRecCall (decrTactic? : Option DecreasingBy) (callerMeasures calleeMeasures : Array BasicMeasure)
(rcc : RecCallWithContext) (callerMeasureIdx calleeMeasureIdx : Nat) : MetaM GuessLexRel := do
rcc.ctxt.run do
let callerMeasure := callerMeasures[callerMeasureIdx]!
@@ -467,13 +467,13 @@ def evalRecCall (decrTactic? : Option DecreasingBy) (callerMeasures calleeMeasur
/- A cache for `evalRecCall` -/
structure RecCallCache where mk'' ::
decrTactic? : Option DecreasingBy
callerMeasures : Array Measure
calleeMeasures : Array Measure
callerMeasures : Array BasicMeasure
calleeMeasures : Array BasicMeasure
rcc : RecCallWithContext
cache : IO.Ref (Array (Array (Option GuessLexRel)))
/-- Create a cache to memoize calls to `evalRecCall descTactic? rcc` -/
def RecCallCache.mk (decrTactics : Array (Option DecreasingBy)) (measuress : Array (Array Measure))
def RecCallCache.mk (decrTactics : Array (Option DecreasingBy)) (measuress : Array (Array BasicMeasure))
(rcc : RecCallWithContext) :
BaseIO RecCallCache := do
let decrTactic? := decrTactics[rcc.caller]!
@@ -499,7 +499,7 @@ def RecCallCache.prettyEntry (rcc : RecCallCache) (callerMeasureIdx calleeMeasur
| .some rel => toString rel
| .none => "_"
/-- The measures that we order lexicographically can be comparing arguments,
/-- The measures that we order lexicographically can be comparing basic measures,
or numbering the functions -/
inductive MutualMeasure where
/-- For every function, the given argument index -/
@@ -509,9 +509,9 @@ inductive MutualMeasure where
/-- Evaluate a recursive call at a given `MutualMeasure` -/
def inspectCall (rc : RecCallCache) : MutualMeasure → MetaM GuessLexRel
| .args taIdxs => do
let callerMeasureIdx := taIdxs[rc.rcc.caller]!
let calleeMeasureIdx := taIdxs[rc.rcc.callee]!
| .args tmIdxs => do
let callerMeasureIdx := tmIdxs[rc.rcc.caller]!
let calleeMeasureIdx := tmIdxs[rc.rcc.callee]!
rc.eval callerMeasureIdx calleeMeasureIdx
| .func funIdx => do
if rc.rcc.caller == funIdx && rc.rcc.callee != funIdx then
@@ -554,16 +554,16 @@ where
/--
Enumerate all measures we want to try.
All arguments (resp. combinations thereof) and
All measures (resp. combinations thereof) and
possible orderings of functions (if more than one)
-/
def generateMeasures (numTermArgs : Array Nat) : MetaM (Array MutualMeasure) := do
let some arg_measures := generateCombinations? numTermArgs
def generateMeasures (numMeasures : Array Nat) : MetaM (Array MutualMeasure) := do
let some arg_measures := generateCombinations? numMeasures
| throwError "Too many combinations"
let func_measures :=
if numTermArgs.size > 1 then
(List.range numTermArgs.size).toArray
if numMeasures.size > 1 then
(List.range numMeasures.size).toArray
else
#[]
@@ -652,8 +652,8 @@ def RecCallWithContext.posString (rcc : RecCallWithContext) : MetaM String := do
return s!"{position.line}:{position.column}{endPosStr}"
/-- How to present the measure in the table header, possibly abbreviated. -/
def measureHeader (measure : Measure) : StateT (Nat × String) MetaM String := do
/-- How to present the basic measure in the table header, possibly abbreviated. -/
def measureHeader (measure : BasicMeasure) : StateT (Nat × String) MetaM String := do
let s ← measure.toString
if s.length > 5 then
let (i, footer) ← get
@@ -670,7 +670,7 @@ def collectHeaders {α} (a : StateT (Nat × String) MetaM α) : MetaM (α × Str
/-- Explain what we found out about the recursive calls (non-mutual case) -/
def explainNonMutualFailure (measures : Array Measure) (rcs : Array RecCallCache) : MetaM Format := do
def explainNonMutualFailure (measures : Array BasicMeasure) (rcs : Array RecCallCache) : MetaM Format := do
let (header, footer) ← collectHeaders (measures.mapM measureHeader)
let mut table : Array (Array String) := #[#[""] ++ header]
for i in [:rcs.size], rc in rcs do
@@ -685,7 +685,7 @@ def explainNonMutualFailure (measures : Array Measure) (rcs : Array RecCallCache
return out ++ "\n\n" ++ footer
/-- Explain what we found out about the recursive calls (mutual case) -/
def explainMutualFailure (declNames : Array Name) (measuress : Array (Array Measure))
def explainMutualFailure (declNames : Array Name) (measuress : Array (Array BasicMeasure))
(rcs : Array RecCallCache) : MetaM Format := do
let (headerss, footer) ← collectHeaders (measuress.mapM (·.mapM measureHeader))
@@ -718,9 +718,9 @@ def explainMutualFailure (declNames : Array Name) (measuress : Array (Array Meas
return r
def explainFailure (declNames : Array Name) (measuress : Array (Array Measure))
def explainFailure (declNames : Array Name) (measuress : Array (Array BasicMeasure))
(rcs : Array RecCallCache) : MetaM Format := do
let mut r : Format := "The arguments relate at each recursive call as follows:\n" ++
let mut r : Format := "The basic measures relate at each recursive call as follows:\n" ++
"(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)\n"
if declNames.size = 1 then
r := r ++ (← explainNonMutualFailure measuress[0]! rcs)
@@ -739,29 +739,29 @@ def mkProdElem (xs : Array Expr) : MetaM Expr := do
let n := xs.size
xs[0:n-1].foldrM (init:=xs[n-1]!) fun x p => mkAppM ``Prod.mk #[x,p]
def toTerminationArguments (preDefs : Array PreDefinition) (fixedPrefixSize : Nat)
(userVarNamess : Array (Array Name)) (measuress : Array (Array Measure))
(solution : Array MutualMeasure) : MetaM TerminationArguments := do
def toTerminationMeasures (preDefs : Array PreDefinition) (fixedPrefixSize : Nat)
(userVarNamess : Array (Array Name)) (measuress : Array (Array BasicMeasure))
(solution : Array MutualMeasure) : MetaM TerminationMeasures := do
preDefs.mapIdxM fun funIdx preDef => do
let measures := measuress[funIdx]!
lambdaTelescope preDef.value fun xs _ => do
withUserNames xs[fixedPrefixSize:] userVarNamess[funIdx]! do
let args := solution.map fun
| .args taIdxs => measures[taIdxs[funIdx]!]!.fn.beta xs
| .args tmIdxs => measures[tmIdxs[funIdx]!]!.fn.beta xs
| .func funIdx' => mkNatLit <| if funIdx' == funIdx then 1 else 0
let fn ← mkLambdaFVars xs (← mkProdElem args)
return { ref := .missing, structural := false, fn}
/--
Shows the inferred termination argument to the user, and implements `termination_by?`
Shows the inferred termination measure to the user, and implements `termination_by?`
-/
def reportTermArgs (preDefs : Array PreDefinition) (termArgs : TerminationArguments) : MetaM Unit := do
for preDef in preDefs, termArg in termArgs do
def reportTerminationMeasures (preDefs : Array PreDefinition) (termMeasures : TerminationMeasures) : MetaM Unit := do
for preDef in preDefs, termMeasure in termMeasures do
let stx := do
let arity ← lambdaTelescope preDef.value fun xs _ => pure xs.size
termArg.delab arity (extraParams := preDef.termination.extraParams)
termMeasure.delab arity (extraParams := preDef.termination.extraParams)
if showInferredTerminationBy.get (← getOptions) then
logInfoAt preDef.ref m!"Inferred termination argument:\n{← stx}"
logInfoAt preDef.ref m!"Inferred termination measure:\n{← stx}"
if let some ref := preDef.termination.terminationBy?? then
Tactic.TryThis.addSuggestion ref (← stx)
@@ -771,14 +771,14 @@ open GuessLex
/--
Main entry point of this module:
Try to find a lexicographic ordering of the arguments for which the recursive definition
Try to find a lexicographic ordering of the basic measures for which the recursive definition
terminates. See the module doc string for a high-level overview.
The `preDefs` are used to determine arity and types of arguments; the bodies are ignored.
The `preDefs` are used to determine arity and types of parameters; the bodies are ignored.
-/
def guessLex (preDefs : Array PreDefinition) (unaryPreDef : PreDefinition)
(fixedPrefixSize : Nat) (argsPacker : ArgsPacker) :
MetaM TerminationArguments := do
MetaM TerminationMeasures := do
let userVarNamess ← argsPacker.varNamess.mapM (naryVarNames ·)
trace[Elab.definition.wf] "varNames is: {userVarNamess}"
@@ -788,30 +788,30 @@ def guessLex (preDefs : Array PreDefinition) (unaryPreDef : PreDefinition)
-- For every function, the measures we want to use
-- (One for each non-forbidden arg)
let meassures₁ ← simpleMeasures preDefs fixedPrefixSize userVarNamess
let meassures₂ ← complexMeasures preDefs fixedPrefixSize userVarNamess recCalls
let measuress := Array.zipWith meassures₁ meassures₂ (· ++ ·)
let basicMeassures₁ ← simpleMeasures preDefs fixedPrefixSize userVarNamess
let basicMeassures₂ ← complexMeasures preDefs fixedPrefixSize userVarNamess recCalls
let basicMeasures := Array.zipWith basicMeassures₁ basicMeassures₂ (· ++ ·)
-- The list of measures, including the measures that order functions.
-- The function ordering measures come last
let measures ← generateMeasures (measuress.map (·.size))
let mutualMeasures ← generateMeasures (basicMeasures.map (·.size))
-- If there is only one plausible measure, use that
if let #[solution] := measures then
let termArgs ← toTerminationArguments preDefs fixedPrefixSize userVarNamess measuress #[solution]
reportTermArgs preDefs termArgs
return termArgs
if let #[solution] := mutualMeasures then
let termMeasures ← toTerminationMeasures preDefs fixedPrefixSize userVarNamess basicMeasures #[solution]
reportTerminationMeasures preDefs termMeasures
return termMeasures
let rcs ← recCalls.mapM (RecCallCache.mk (preDefs.map (·.termination.decreasingBy?)) measuress ·)
let rcs ← recCalls.mapM (RecCallCache.mk (preDefs.map (·.termination.decreasingBy?)) basicMeasures ·)
let callMatrix := rcs.map (inspectCall ·)
match ← liftMetaM <| solve measures callMatrix with
match ← liftMetaM <| solve mutualMeasures callMatrix with
| .some solution => do
let termArgs ← toTerminationArguments preDefs fixedPrefixSize userVarNamess measuress solution
reportTermArgs preDefs termArgs
return termArgs
let termMeasures ← toTerminationMeasures preDefs fixedPrefixSize userVarNamess basicMeasures solution
reportTerminationMeasures preDefs termMeasures
return termMeasures
| .none =>
let explanation ← explainFailure (preDefs.map (·.declName)) measuress rcs
let explanation ← explainFailure (preDefs.map (·.declName)) basicMeasures rcs
Lean.throwError <| "Could not find a decreasing measure.\n" ++
explanation ++ "\n" ++
"Please use `termination_by` to specify a decreasing measure."

View File

@@ -5,7 +5,8 @@ Authors: Leonardo de Moura
-/
prelude
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.TerminationArgument
import Lean.Elab.PreDefinition.TerminationMeasure
import Lean.Elab.PreDefinition.Mutual
import Lean.Elab.PreDefinition.WF.PackMutual
import Lean.Elab.PreDefinition.WF.Preprocess
import Lean.Elab.PreDefinition.WF.Rel
@@ -18,90 +19,25 @@ namespace Lean.Elab
open WF
open Meta
private partial def addNonRecPreDefs (fixedPrefixSize : Nat) (argsPacker : ArgsPacker) (preDefs : Array PreDefinition) (preDefNonRec : PreDefinition) : TermElabM Unit := do
let us := preDefNonRec.levelParams.map mkLevelParam
let all := preDefs.toList.map (·.declName)
for h : fidx in [:preDefs.size] do
let preDef := preDefs[fidx]
let value ← forallBoundedTelescope preDef.type (some fixedPrefixSize) fun xs _ => do
let value := mkAppN (mkConst preDefNonRec.declName us) xs
let value ← argsPacker.curryProj value fidx
mkLambdaFVars xs value
trace[Elab.definition.wf] "{preDef.declName} := {value}"
addNonRec { preDef with value } (applyAttrAfterCompilation := false) (all := all)
partial def withCommonTelescope (preDefs : Array PreDefinition) (k : Array Expr → Array Expr → TermElabM α) : TermElabM α :=
go #[] (preDefs.map (·.value))
where
go (fvars : Array Expr) (vals : Array Expr) : TermElabM α := do
if !(vals.all fun val => val.isLambda) then
k fvars vals
else if !(← vals.allM fun val => isDefEq val.bindingDomain! vals[0]!.bindingDomain!) then
k fvars vals
else
withLocalDecl vals[0]!.bindingName! vals[0]!.binderInfo vals[0]!.bindingDomain! fun x =>
go (fvars.push x) (vals.map fun val => val.bindingBody!.instantiate1 x)
def getFixedPrefix (preDefs : Array PreDefinition) : TermElabM Nat :=
withCommonTelescope preDefs fun xs vals => do
let resultRef ← IO.mkRef xs.size
for val in vals do
if (← resultRef.get) == 0 then return 0
forEachExpr' val fun e => do
if preDefs.any fun preDef => e.isAppOf preDef.declName then
let args := e.getAppArgs
resultRef.modify (min args.size ·)
for arg in args, x in xs do
if !(← withoutProofIrrelevance <| withReducible <| isDefEq arg x) then
-- We continue searching if e's arguments are not a prefix of `xs`
return true
return false
else
return true
resultRef.get
private def isOnlyOneUnaryDef (preDefs : Array PreDefinition) (fixedPrefixSize : Nat) : MetaM Bool := do
if preDefs.size == 1 then
lambdaTelescope preDefs[0]!.value fun xs _ => return xs.size == fixedPrefixSize + 1
else
return false
/--
Collect the names of the varying variables (after the fixed prefix); this also determines the
arity for the well-founded translations, and is turned into an `ArgsPacker`.
We use the term to determine the arity, but take the name from the type, for better names in the
```
fun : (n : Nat) → Nat | 0 => 0 | n+1 => fun n
```
idiom.
-/
def varyingVarNames (fixedPrefixSize : Nat) (preDef : PreDefinition) : MetaM (Array Name) := do
-- We take the arity from the term, but the names from the types
let arity ← lambdaTelescope preDef.value fun xs _ => return xs.size
assert! fixedPrefixSize ≤ arity
if arity = fixedPrefixSize then
throwError "well-founded recursion cannot be used, '{preDef.declName}' does not take any (non-fixed) arguments"
forallBoundedTelescope preDef.type arity fun xs _ => do
assert! xs.size = arity
let xs : Array Expr := xs[fixedPrefixSize:]
xs.mapM (·.fvarId!.getUserName)
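The fixed prefix mentioned here is the run of leading parameters that every recursive call passes through unchanged; those are excluded from the varying variables. An assumed example for illustration:

```lean
-- `p` is passed unchanged to every recursive call, so it belongs to the fixed
-- prefix; only the list argument is a varying variable (and gets packed).
def keep (p : Nat → Bool) : List Nat → List Nat
  | [] => []
  | x :: xs => if p x then x :: keep p xs else keep p xs
```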
def wfRecursion (preDefs : Array PreDefinition) (termArg?s : Array (Option TerminationArgument)) : TermElabM Unit := do
let termArgs? := termArg?s.mapM id -- Either all or none, checked by `elabTerminationByHints`
def wfRecursion (preDefs : Array PreDefinition) (termMeasure?s : Array (Option TerminationMeasure)) : TermElabM Unit := do
let termMeasures? := termMeasure?s.mapM id -- Either all or none, checked by `elabTerminationByHints`
let preDefs ← preDefs.mapM fun preDef =>
return { preDef with value := (← preprocess preDef.value) }
let (fixedPrefixSize, argsPacker, unaryPreDef) ← withoutModifyingEnv do
for preDef in preDefs do
addAsAxiom preDef
let fixedPrefixSize ← getFixedPrefix preDefs
let fixedPrefixSize ← Mutual.getFixedPrefix preDefs
trace[Elab.definition.wf] "fixed prefix: {fixedPrefixSize}"
let varNamess ← preDefs.mapM (varyingVarNames fixedPrefixSize ·)
for varNames in varNamess, preDef in preDefs do
if varNames.isEmpty then
throwError "well-founded recursion cannot be used, '{preDef.declName}' does not take any (non-fixed) arguments"
let argsPacker := { varNamess }
let preDefsDIte ← preDefs.mapM fun preDef => return { preDef with value := (← iteToDIte preDef.value) }
return (fixedPrefixSize, argsPacker, ← packMutual fixedPrefixSize argsPacker preDefsDIte)
let wf : TerminationArguments ← do
if let some tas := termArgs? then pure tas else
let wf : TerminationMeasures ← do
if let some tms := termMeasures? then pure tms else
-- No termination_by here, so use GuessLex to infer one
guessLex preDefs unaryPreDef fixedPrefixSize argsPacker
@@ -118,39 +54,14 @@ def wfRecursion (preDefs : Array PreDefinition) (termArg?s : Array (Option Termi
eraseRecAppSyntaxExpr value
/- `mkFix` invokes `decreasing_tactic` which may add auxiliary theorems to the environment. -/
let value ← unfoldDeclsFrom envNew value
let unaryPreDef := { unaryPreDef with value }
/-
We must remove `implemented_by` attributes from the auxiliary application because
this attribute is only relevant for code that is compiled. Moreover, the `[implemented_by <decl>]`
attribute would check whether the `unaryPreDef` type matches with `<decl>`'s type, and produce
and error. See issue #2899
-/
let unaryPreDef := unaryPreDef.filterAttrs fun attr => attr.name != `implemented_by
return unaryPreDef
return { unaryPreDef with value }
trace[Elab.definition.wf] ">> {preDefNonRec.declName} :=\n{preDefNonRec.value}"
let preDefs ← preDefs.mapM fun d => eraseRecAppSyntax d
-- Do not complain if the user sets @[semireducible], which usually is a noop,
-- we recognize that below and then do not set @[irreducible]
withOptions (allowUnsafeReducibility.set · true) do
if (← isOnlyOneUnaryDef preDefs fixedPrefixSize) then
addNonRec preDefNonRec (applyAttrAfterCompilation := false)
else
withEnableInfoTree false do
addNonRec preDefNonRec (applyAttrAfterCompilation := false)
addNonRecPreDefs fixedPrefixSize argsPacker preDefs preDefNonRec
-- We create the `_unsafe_rec` before we abstract nested proofs.
-- Reason: the nested proofs may be referring to the _unsafe_rec.
addAndCompilePartialRec preDefs
let preDefs ← preDefs.mapM (abstractNestedProofs ·)
registerEqnsInfo preDefs preDefNonRec.declName fixedPrefixSize argsPacker (hasInduct := true)
for preDef in preDefs do
markAsRecursive preDef.declName
generateEagerEqns preDef.declName
applyAttributesOf #[preDef] AttributeApplicationTime.afterCompilation
-- Unless the user asks for something else, mark the definition as irreducible
unless preDef.modifiers.attrs.any fun a =>
a.name = `reducible || a.name = `semireducible do
setIrreducibleAttribute preDef.declName
let preDefsNonrec ← preDefsFromUnaryNonRec fixedPrefixSize argsPacker preDefs preDefNonRec
Mutual.addPreDefsFromUnary preDefs preDefsNonrec preDefNonRec
let preDefs ← Mutual.cleanPreDefs preDefs
registerEqnsInfo preDefs preDefNonRec.declName fixedPrefixSize argsPacker
Mutual.addPreDefAttributes preDefs
builtin_initialize registerTraceClass `Elab.definition.wf

View File

@@ -1,11 +1,17 @@
/-
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
Authors: Leonardo de Moura, Joachim Breitner
-/
prelude
import Lean.Meta.ArgsPacker
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.WF.Eqns
/-!
This module contains roughly everything needed to turn mutual n-ary functions into a single unary
function, as used by well-founded recursion.
-/
namespace Lean.Elab.WF
open Meta
@@ -30,41 +36,49 @@ def withAppN (n : Nat) (e : Expr) (k : Array Expr → MetaM Expr) : MetaM Expr :
mkLambdaFVars xs e'
/--
A `post` for `Meta.transform` to replace recursive calls to the original `preDefs` with calls
to the new unary function `newfn`.
Processes the expression and replaces calls to the `preDefs` with calls to `f`.
-/
private partial def post (fixedPrefix : Nat) (argsPacker : ArgsPacker) (funNames : Array Name)
(domain : Expr) (newFn : Name) (e : Expr) : MetaM TransformStep := do
let f := e.getAppFn
if !f.isConst then
def packCalls (fixedPrefix : Nat) (argsPacker : ArgsPacker) (funNames : Array Name) (newF : Expr)
(e : Expr) : MetaM Expr := do
let fType ← inferType newF
unless fType.isForall do
throwError "Not a forall: {newF} : {fType}"
let domain := fType.bindingDomain!
transform e (skipConstInApp := true) (post := fun e => do
let f := e.getAppFn
if !f.isConst then
return TransformStep.done e
if let some fidx := funNames.indexOf? f.constName! then
let arity := fixedPrefix + argsPacker.varNamess[fidx]!.size
let e' ← withAppN arity e fun args => do
let packedArg ← argsPacker.pack domain fidx args[fixedPrefix:]
return mkApp newF packedArg
return TransformStep.done e'
return TransformStep.done e
let declName := f.constName!
let us := f.constLevels!
if let some fidx := funNames.indexOf? declName then
let arity := fixedPrefix + argsPacker.varNamess[fidx]!.size
let e' ← withAppN arity e fun args => do
let fixedArgs := args[:fixedPrefix]
let packedArg ← argsPacker.pack domain fidx args[fixedPrefix:]
return mkApp (mkAppN (mkConst newFn us) fixedArgs) packedArg
return TransformStep.done e'
return TransformStep.done e
)
def mutualName (argsPacker : ArgsPacker) (preDefs : Array PreDefinition) : Name :=
if argsPacker.onlyOneUnary then
preDefs[0]!.declName
else
if argsPacker.numFuncs > 1 then
preDefs[0]!.declName ++ `_mutual
else
preDefs[0]!.declName ++ `_unary
/--
Creates a single unary function from the given `preDefs`, using the machinery in the `ArgPacker`
module.
-/
def packMutual (fixedPrefix : Nat) (argsPacker : ArgsPacker) (preDefs : Array PreDefinition) : MetaM PreDefinition := do
let arities := argsPacker.arities
if let #[1] := arities then return preDefs[0]!
let newFn := if argsPacker.numFuncs > 1 then preDefs[0]!.declName ++ `_mutual
else preDefs[0]!.declName ++ `_unary
-- Bring the fixed Prefix into scope
if argsPacker.onlyOneUnary then return preDefs[0]!
let newFn := mutualName argsPacker preDefs
-- Bring the fixed prefix into scope
forallBoundedTelescope preDefs[0]!.type (some fixedPrefix) fun ys _ => do
let types ← preDefs.mapM (instantiateForall ·.type ys)
let vals ← preDefs.mapM (instantiateLambda ·.value ys)
let type ← argsPacker.uncurryType types
let packedDomain := type.bindingDomain!
-- Temporarily add the unary function as an axiom, so that all expressions
-- are still type correct
@@ -72,10 +86,44 @@ def packMutual (fixedPrefix : Nat) (argsPacker : ArgsPacker) (preDefs : Array Pr
let preDefNew := { preDefs[0]! with declName := newFn, type }
addAsAxiom preDefNew
let us := preDefs[0]!.levelParams.map mkLevelParam
let f := mkAppN (mkConst newFn us) ys
let value ← argsPacker.uncurry vals
let value ← transform value (skipConstInApp := true)
(post := post fixedPrefix argsPacker (preDefs.map (·.declName)) packedDomain newFn)
let value ← packCalls fixedPrefix argsPacker (preDefs.map (·.declName)) f value
let value ← mkLambdaFVars ys value
return { preDefNew with value }
/--
Collect the names of the varying variables (after the fixed prefix); this also determines the
arity for the well-founded translations, and is turned into an `ArgsPacker`.
We use the term to determine the arity, but take the name from the type, for better names in the
```
fun : (n : Nat) → Nat | 0 => 0 | n+1 => fun n
```
idiom.
-/
def varyingVarNames (fixedPrefixSize : Nat) (preDef : PreDefinition) : MetaM (Array Name) := do
-- We take the arity from the term, but the names from the types
let arity ← lambdaTelescope preDef.value fun xs _ => return xs.size
assert! fixedPrefixSize ≤ arity
forallBoundedTelescope preDef.type arity fun xs _ => do
assert! xs.size = arity
let xs : Array Expr := xs[fixedPrefixSize:]
xs.mapM (·.fvarId!.getUserName)
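The idiom from the doc-string can be made concrete with a small illustrative definition: the binder name `n` only occurs in the type ascription, while the term is an anonymous match-lambda, so the user-facing name has to be read off the type:

```lean
-- Illustrative only: the value is a pattern-matching lambda with anonymous
-- binders, so `varyingVarNames` takes the name `n` from the type.
def half : (n : Nat) → Nat
  | 0 => 0
  | 1 => 0
  | n + 2 => half n + 1
```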
def preDefsFromUnaryNonRec (fixedPrefixSize : Nat) (argsPacker : ArgsPacker)
(preDefs : Array PreDefinition) (unaryPreDefNonRec : PreDefinition) : MetaM (Array PreDefinition) := do
withoutModifyingEnv do
let us := unaryPreDefNonRec.levelParams.map mkLevelParam
addAsAxiom unaryPreDefNonRec
preDefs.mapIdxM fun fidx preDef => do
let value ← forallBoundedTelescope preDef.type (some fixedPrefixSize) fun xs _ => do
let value := mkAppN (mkConst unaryPreDefNonRec.declName us) xs
let value ← argsPacker.curryProj value fidx
mkLambdaFVars xs value
trace[Elab.definition.wf] "{preDef.declName} := {value}"
pure { preDef with value }
end Lean.Elab.WF


@@ -9,7 +9,7 @@ import Lean.Meta.Tactic.Cases
import Lean.Meta.Tactic.Rename
import Lean.Elab.SyntheticMVars
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.TerminationArgument
import Lean.Elab.PreDefinition.TerminationMeasure
import Lean.Meta.ArgsPacker
namespace Lean.Elab.WF
@@ -17,21 +17,21 @@ open Meta
open Term
/--
The termination arguments must not depend on the varying parameters of the function, and in
The termination measures must not depend on the varying parameters of the function, and in
a mutual clique, they must be the same for all functions.
This ensures the preconditions for `ArgsPacker.uncurryND`.
-/
def checkCodomains (names : Array Name) (prefixArgs : Array Expr) (arities : Array Nat)
(termArgs : TerminationArguments) : TermElabM Expr := do
(termMeasures : TerminationMeasures) : TermElabM Expr := do
let mut codomains := #[]
for name in names, arity in arities, termArg in termArgs do
let type ← inferType (termArg.fn.beta prefixArgs)
for name in names, arity in arities, termMeasure in termMeasures do
let type ← inferType (termMeasure.fn.beta prefixArgs)
let codomain ← forallBoundedTelescope type arity fun xs codomain => do
let fvars := xs.map (·.fvarId!)
if codomain.hasAnyFVar (fvars.contains ·) then
throwErrorAt termArg.ref m!"The termination argument's type must not depend on the " ++
m!"function's varying parameters, but {name}'s termination argument does:{indentExpr type}\n" ++
throwErrorAt termMeasure.ref m!"The termination measure's type must not depend on the " ++
m!"function's varying parameters, but {name}'s termination measure does:{indentExpr type}\n" ++
"Try using `sizeOf` explicitly"
pure codomain
codomains := codomains.push codomain
@@ -39,26 +39,26 @@ def checkCodomains (names : Array Name) (prefixArgs : Array Expr) (arities : Arr
let codomain0 := codomains[0]!
for h : i in [1 : codomains.size] do
unless ← isDefEqGuarded codomain0 codomains[i] do
throwErrorAt termArgs[i]!.ref m!"The termination arguments of mutually recursive functions " ++
m!"must have the same return type, but the termination argument of {names[0]!} has type" ++
throwErrorAt termMeasures[i]!.ref m!"The termination measures of mutually recursive functions " ++
m!"must have the same return type, but the termination measure of {names[0]!} has type" ++
m!"{indentExpr codomain0}\n" ++
m!"while the termination argument of {names[i]!} has type{indentExpr codomains[i]}\n" ++
m!"while the termination measure of {names[i]!} has type{indentExpr codomains[i]}\n" ++
"Try using `sizeOf` explicitly"
return codomain0
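A minimal sketch of the restriction checked above (the function and measure below are made up): the measure's codomain must be a type that does not mention the varying parameters, and it must agree across the clique.

```lean
-- Accepted: the measure has type `Nat`, which does not depend on `xs`/`ys`.
def interleave (xs ys : List α) : List α :=
  match xs, ys with
  | [], ys => ys
  | x :: xs', ys => x :: interleave ys xs'
termination_by xs.length + ys.length
-- A measure of type `Fin (xs.length + 1)` would be rejected here, since its
-- type mentions the varying parameter `xs`; the error suggests `sizeOf`.
```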
/--
If the `termArgs` map the packed argument `argType` to `β`, then this function passes to the
If the `termMeasures` map the packed argument `argType` to `β`, then this function passes to the
continuation a value of type `WellFoundedRelation argType` that is derived from the instance
for `WellFoundedRelation β` using `invImage`.
-/
def elabWFRel (declNames : Array Name) (unaryPreDefName : Name) (prefixArgs : Array Expr)
(argsPacker : ArgsPacker) (argType : Expr) (termArgs : TerminationArguments)
(argsPacker : ArgsPacker) (argType : Expr) (termMeasures : TerminationMeasures)
(k : Expr → TermElabM α) : TermElabM α := withDeclName unaryPreDefName do
let α := argType
let u ← getLevel α
let β ← checkCodomains declNames prefixArgs argsPacker.arities termArgs
let β ← checkCodomains declNames prefixArgs argsPacker.arities termMeasures
let v ← getLevel β
let packedF ← argsPacker.uncurryND (termArgs.map (·.fn.beta prefixArgs))
let packedF ← argsPacker.uncurryND (termMeasures.map (·.fn.beta prefixArgs))
let inst ← synthInstance (.app (.const ``WellFoundedRelation [v]) β)
let rel ← instantiateMVars (mkApp4 (.const ``invImage [u,v]) α β packedF inst)
k rel
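The `invImage` construction used above can be demonstrated in isolation; this is a direct use of the core definitions, not the elaborator's generated term:

```lean
-- `invImage f inst` pulls the well-founded `<` on `Nat` back along the
-- measure `f`, yielding a `WellFoundedRelation` on the packed domain `α`.
example (f : α → Nat) : WellFoundedRelation α :=
  invImage f Nat.lt_wfRel
```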


@@ -308,7 +308,7 @@ def bvDecide (g : MVarId) (ctx : TacticContext) : MetaM Result := do
throwError (← addMessageContextFull errorMessage)
@[builtin_tactic Lean.Parser.Tactic.bvDecide]
def evalBvTrace : Tactic := fun
def evalBvDecide : Tactic := fun
| `(tactic| bv_decide $cfg:optConfig) => do
let cfg ← elabBVDecideConfig cfg
IO.FS.withTempFile fun _ lratFile => do
@@ -319,4 +319,3 @@ def evalBvTrace : Tactic := fun
end Frontend
end Lean.Elab.Tactic.BVDecide


@@ -197,8 +197,10 @@ def LratCert.toReflectionProof [ToExpr α] (cert : LratCert) (cfg : TacticContex
(← mkEqRefl (toExpr true))
try
let auxLemma ←
withTraceNode `sat (fun _ => return "Verifying LRAT certificate") do
mkAuxLemma [] auxType auxProof
-- disable async TC so we can catch its exceptions
withOptions (Elab.async.set · false) do
withTraceNode `sat (fun _ => return "Verifying LRAT certificate") do
mkAuxLemma [] auxType auxProof
return mkApp3 (mkConst unsat_of_verifier_eq_true) reflectedExpr certExpr (mkConst auxLemma)
catch e =>
throwError m!"Failed to check the LRAT certificate in the kernel:\n{e.toMessageData}"


@@ -11,6 +11,7 @@ import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.Rewrite
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.AndFlatten
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.EmbeddedConstraint
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.AC
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.Structures
/-!
This module contains the implementation of `bv_normalize`, the preprocessing tactic for `bv_decide`.
@@ -43,9 +44,17 @@ def bvNormalize (g : MVarId) (cfg : BVDecideConfig) : MetaM (Option MVarId) := d
(go g).run cfg g
where
go (g : MVarId) : PreProcessM (Option MVarId) := do
let some g ← g.falseOrByContra | return none
let some g' ← g.falseOrByContra | return none
let mut g := g'
trace[Meta.Tactic.bv] m!"Running preprocessing pipeline on:\n{g}"
let cfg ← PreProcessM.getConfig
if cfg.structures then
let some g' ← structuresPass.run g | return none
g := g'
trace[Meta.Tactic.bv] m!"Running fixpoint pipeline on:\n{g}"
let pipeline ← passPipeline
Pass.fixpointPipeline pipeline g


@@ -43,7 +43,7 @@ partial def andFlatteningPass : Pass where
where
processGoal (goal : MVarId) : StateRefT AndFlattenState MetaM Unit := do
goal.withContext do
let hyps ← goal.getNondepPropHyps
let hyps ← getPropHyps
hyps.forM processFVar
processFVar (fvar : FVarId) : StateRefT AndFlattenState MetaM Unit := do


@@ -32,16 +32,14 @@ def getConfig : PreProcessM BVDecideConfig := read
@[inline]
def checkRewritten (fvar : FVarId) : PreProcessM Bool := do
let val := (← get).rewriteCache.contains fvar
trace[Meta.Tactic.bv] m!"{mkFVar fvar} was already rewritten? {val}"
return val
@[inline]
def rewriteFinished (fvar : FVarId) : PreProcessM Unit := do
trace[Meta.Tactic.bv] m!"Adding {mkFVar fvar} to the rewritten set"
modify (fun s => { s with rewriteCache := s.rewriteCache.insert fvar })
def run (cfg : BVDecideConfig) (goal : MVarId) (x : PreProcessM α) : MetaM α := do
let hyps ← goal.getNondepPropHyps
let hyps ← goal.withContext do getPropHyps
ReaderT.run x cfg |>.run' { rewriteCache := Std.HashSet.empty hyps.size }
end PreProcessM


@@ -27,7 +27,7 @@ def embeddedConstraintPass : Pass where
name := `embeddedConstraintSubsitution
run' goal := do
goal.withContext do
let hyps ← goal.getNondepPropHyps
let hyps ← getPropHyps
let mut relevantHyps : SimpTheoremsArray := #[]
let mut seen : Std.HashSet Expr := {}
let mut duplicates : Array FVarId := #[]
@@ -49,11 +49,12 @@ def embeddedConstraintPass : Pass where
return goal
let cfg ← PreProcessM.getConfig
let targets ← goal.withContext getPropHyps
let simpCtx ← Simp.mkContext
(config := { failIfUnchanged := false, maxSteps := cfg.maxSteps })
(simpTheorems := relevantHyps)
(congrTheorems := (← getSimpCongrTheorems))
let ⟨result?, _⟩ ← simpGoal goal (ctx := simpCtx) (fvarIdsToSimp := ← goal.getNondepPropHyps)
let ⟨result?, _⟩ ← simpGoal goal (ctx := simpCtx) (fvarIdsToSimp := targets)
let some (_, newGoal) := result? | return none
return newGoal


@@ -46,12 +46,12 @@ def rewriteRulesPass : Pass where
let some (_, newGoal) := result? | return none
newGoal.withContext do
(← newGoal.getNondepPropHyps).forM PreProcessM.rewriteFinished
(← getPropHyps).forM PreProcessM.rewriteFinished
return newGoal
where
getHyps (goal : MVarId) : PreProcessM (Array FVarId) := do
goal.withContext do
let mut hyps ← goal.getNondepPropHyps
let hyps ← getPropHyps
let filter hyp := do
return !(← PreProcessM.checkRewritten hyp)
hyps.filterM filter


@@ -0,0 +1,143 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Lean.Elab.Tactic.BVDecide.Frontend.Normalize.Basic
import Lean.Meta.Tactic.Cases
import Lean.Meta.Tactic.Simp
import Lean.Meta.Injective
/-!
This module contains the implementation of the preprocessing pass for automatically splitting up
structures containing information about supported types into individual parts recursively.
The implementation runs cases recursively on all "interesting" types, where a type is interesting if
it is a non-recursive structure and at least one of the following conditions holds:
- it contains something of type `BitVec`/`UIntX`/`Bool`
- it is parametrized by an interesting type
- it contains another interesting type
Afterwards we also apply relevant `injEq` theorems to support at least equality for these types out
of the box.
-/
namespace Lean.Elab.Tactic.BVDecide
namespace Frontend.Normalize
open Lean.Meta
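To illustrate the `injEq` step described in the module doc-string (the structure `RGB` below is a made-up example), the generated injectivity equation turns an equality of constructor applications into a conjunction of field equalities, which the bit-level passes can then handle:

```lean
structure RGB where
  r : BitVec 8
  g : BitVec 8
  b : BitVec 8

-- `RGB.mk.injEq` is the kind of lemma the postprocessing step feeds to `simp`:
example (a b c d e f : BitVec 8) :
    (RGB.mk a b c = RGB.mk d e f) = (a = d ∧ b = e ∧ c = f) := by
  simp
```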
/--
Contains a cache for interesting and uninteresting types such that we don't duplicate work in the
structures pass.
-/
structure InterestingStructures where
interesting : Std.HashSet Name := {}
uninteresting : Std.HashSet Name := {}
private abbrev M := StateRefT InterestingStructures MetaM
namespace M
@[inline]
def lookup (n : Name) : M (Option Bool) := do
let s ← get
if s.uninteresting.contains n then
return some false
else if s.interesting.contains n then
return some true
else
return none
@[inline]
def markInteresting (n : Name) : M Unit := do
modify (fun s => {s with interesting := s.interesting.insert n })
@[inline]
def markUninteresting (n : Name) : M Unit := do
modify (fun s => {s with uninteresting := s.uninteresting.insert n })
end M
partial def structuresPass : Pass where
name := `structures
run' goal := do
let (_, { interesting, .. }) ← checkContext goal |>.run {}
let goals ← goal.casesRec fun decl => do
if decl.isLet || decl.isImplementationDetail then
return false
else
let some const := decl.type.getAppFn.constName? | return false
return interesting.contains const
match goals with
| [goal] => postprocess goal interesting
| _ => throwError "structures preprocessor generated more than 1 goal"
where
postprocess (goal : MVarId) (interesting : Std.HashSet Name) : PreProcessM (Option MVarId) := do
goal.withContext do
let mut relevantLemmas : SimpTheoremsArray := #[]
for const in interesting do
let constInfo ← getConstInfoInduct const
let ctorName := (← getConstInfoCtor constInfo.ctors.head!).name
let lemmaName := mkInjectiveEqTheoremNameFor ctorName
if (← getEnv).find? lemmaName |>.isSome then
trace[Meta.Tactic.bv] m!"Using injEq lemma: {lemmaName}"
let statement ← mkConstWithLevelParams lemmaName
relevantLemmas ← relevantLemmas.addTheorem (.decl lemmaName) statement
let cfg ← PreProcessM.getConfig
let simpCtx ← Simp.mkContext
(config := { failIfUnchanged := false, maxSteps := cfg.maxSteps })
(simpTheorems := relevantLemmas)
(congrTheorems := ← getSimpCongrTheorems)
let ⟨result?, _⟩ ← simpGoal goal (ctx := simpCtx) (fvarIdsToSimp := ← getPropHyps)
let some (_, newGoal) := result? | return none
return newGoal
checkContext (goal : MVarId) : M Unit := do
goal.withContext do
for decl in ← getLCtx do
if !decl.isLet && !decl.isImplementationDetail then
discard <| typeInteresting decl.type
constInterestingCached (n : Name) : M Bool := do
if let some cached ← M.lookup n then
return cached
let interesting ← constInteresting n
if interesting then
M.markInteresting n
return true
else
M.markUninteresting n
return false
constInteresting (n : Name) : M Bool := do
let env ← getEnv
if !isStructure env n then
return false
let constInfo ← getConstInfoInduct n
if constInfo.isRec then
return false
let ctorTyp := (← getConstInfoCtor constInfo.ctors.head!).type
let analyzer state arg := do
return state || (← typeInteresting (← arg.fvarId!.getType))
forallTelescope ctorTyp fun args _ => args.foldlM (init := false) analyzer
typeInteresting (expr : Expr) : M Bool := do
match_expr expr with
| BitVec n => return (← getNatValue? n).isSome
| UInt8 => return true
| UInt16 => return true
| UInt32 => return true
| UInt64 => return true
| USize => return true
| Bool => return true
| _ =>
let some const := expr.getAppFn.constName? | return false
constInterestingCached const
end Frontend.Normalize
end Lean.Elab.Tactic.BVDecide


@@ -407,8 +407,10 @@ private unsafe def elabNativeDecideCoreUnsafe (tacticName : Name) (expectedType
let pf := mkApp3 (mkConst ``of_decide_eq_true) expectedType s <|
mkApp3 (mkConst ``Lean.ofReduceBool) (mkConst auxDeclName levelParams) (toExpr true) rflPrf
try
let lemmaName ← mkAuxLemma levels expectedType pf
return .const lemmaName levelParams
-- disable async TC so we can catch its exceptions
withOptions (Elab.async.set · false) do
let lemmaName ← mkAuxLemma levels expectedType pf
return .const lemmaName levelParams
catch ex =>
-- Diagnose error
throwError MessageData.ofLazyM (es := #[expectedType]) do
@@ -473,7 +475,8 @@ where
-- Level variables occurring in `expectedType`, in ambient order
let lemmaLevels := (← Term.getLevelNames).reverse.filter levelsInType.contains
try
let lemmaName ← mkAuxLemma lemmaLevels expectedType pf
let lemmaName ← withOptions (Elab.async.set · false) do
mkAuxLemma lemmaLevels expectedType pf
return mkConst lemmaName (lemmaLevels.map .param)
catch _ =>
diagnose expectedType s none


@@ -6,6 +6,7 @@ Authors: Leonardo de Moura
prelude
import Init.Grind.Tactics
import Lean.Meta.Tactic.Grind
import Lean.Meta.Tactic.TryThis
import Lean.Elab.Command
import Lean.Elab.Tactic.Basic
import Lean.Elab.Tactic.Config
@@ -31,9 +32,15 @@ def elabGrindPattern : CommandElab := fun stx => do
let pattern ← instantiateMVars pattern
let pattern ← Grind.preprocessPattern pattern
return pattern.abstract xs
Grind.addEMatchTheorem declName xs.size patterns.toList
Grind.addEMatchTheorem declName xs.size patterns.toList .user
| _ => throwUnsupportedSyntax
open Command in
@[builtin_command_elab Lean.Parser.resetGrindAttrs]
def elabResetGrindAttrs : CommandElab := fun _ => liftTermElabM do
Grind.resetCasesExt
Grind.resetEMatchTheoremsExt
open Command Term in
@[builtin_command_elab Lean.Parser.Command.initGrindNorm]
def elabInitGrindNorm : CommandElab := fun stx =>
@@ -45,58 +52,81 @@ def elabInitGrindNorm : CommandElab := fun stx =>
Grind.registerNormTheorems pre post
| _ => throwUnsupportedSyntax
def elabGrindParams (params : Grind.Params) (ps : TSyntaxArray ``Parser.Tactic.grindParam) : MetaM Grind.Params := do
def elabGrindParams (params : Grind.Params) (ps : TSyntaxArray ``Parser.Tactic.grindParam) (only : Bool) : MetaM Grind.Params := do
let mut params := params
for p in ps do
match p with
| `(Parser.Tactic.grindParam| - $id:ident) =>
let declName ← realizeGlobalConstNoOverloadWithInfo id
if (← isInductivePredicate declName) then
throwErrorAt p "NIY"
if (← Grind.isCasesAttrCandidate declName false) then
Grind.ensureNotBuiltinCases declName
params := { params with casesTypes := (← params.casesTypes.eraseDecl declName) }
else
params := { params with ematch := (← params.ematch.eraseDecl declName) }
| `(Parser.Tactic.grindParam| $[$mod?:grindThmMod]? $id:ident) =>
| `(Parser.Tactic.grindParam| $[$mod?:grindMod]? $id:ident) =>
let declName ← realizeGlobalConstNoOverloadWithInfo id
let kind ← if let some mod := mod? then Grind.getTheoremKindCore mod else pure .default
if (← isInductivePredicate declName) then
throwErrorAt p "NIY"
else
let info ← getConstInfo declName
match info with
| .thmInfo _ =>
if kind == .eqBoth then
params := { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName .eqLhs) }
params := { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName .eqRhs) }
else
params := { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName kind) }
| .defnInfo _ =>
if (← isReducible declName) then
throwErrorAt p "`{declName}` is a reducible definition, `grind` automatically unfolds them"
if kind != .eqLhs && kind != .default then
throwErrorAt p "invalid `grind` parameter, `{declName}` is a definition, the only acceptable (and redundant) modifier is '='"
let some thms ← Grind.mkEMatchEqTheoremsForDef? declName
| throwErrorAt p "failed to generate equation theorems for `{declName}`"
params := { params with extra := params.extra ++ thms.toPArray' }
| _ =>
throwErrorAt p "invalid `grind` parameter, `{declName}` is not a theorem, definition, or inductive type"
let kind ← if let some mod := mod? then Grind.getAttrKindCore mod else pure .infer
match kind with
| .ematch .user =>
unless only do
withRef p <| Grind.throwInvalidUsrModifier
let s ← Grind.getEMatchTheorems
let thms := s.find (.decl declName)
let thms := thms.filter fun thm => thm.kind == .user
if thms.isEmpty then
throwErrorAt p "invalid use of `usr` modifier, `{declName}` does not have patterns specified with the command `grind_pattern`"
for thm in thms do
params := { params with extra := params.extra.push thm }
| .ematch kind =>
params ← withRef p <| addEMatchTheorem params declName kind
| .cases eager =>
withRef p <| Grind.validateCasesAttr declName eager
params := { params with casesTypes := params.casesTypes.insert declName eager }
| .infer =>
if (← Grind.isCasesAttrCandidate declName false) then
params := { params with casesTypes := params.casesTypes.insert declName false }
else
params ← withRef p <| addEMatchTheorem params declName .default
| _ => throwError "unexpected `grind` parameter{indentD p}"
return params
where
addEMatchTheorem (params : Grind.Params) (declName : Name) (kind : Grind.EMatchTheoremKind) : MetaM Grind.Params := do
let info ← getConstInfo declName
match info with
| .thmInfo _ =>
if kind == .eqBoth then
let params := { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName .eqLhs) }
return { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName .eqRhs) }
else
return { params with extra := params.extra.push (← Grind.mkEMatchTheoremForDecl declName kind) }
| .defnInfo _ =>
if (← isReducible declName) then
throwError "`{declName}` is a reducible definition, `grind` automatically unfolds them"
if kind != .eqLhs && kind != .default then
throwError "invalid `grind` parameter, `{declName}` is a definition, the only acceptable (and redundant) modifier is '='"
let some thms ← Grind.mkEMatchEqTheoremsForDef? declName
| throwError "failed to generate equation theorems for `{declName}`"
return { params with extra := params.extra ++ thms.toPArray' }
| _ =>
throwError "invalid `grind` parameter, `{declName}` is not a theorem, definition, or inductive type"
def mkGrindParams (config : Grind.Config) (only : Bool) (ps : TSyntaxArray ``Parser.Tactic.grindParam) : MetaM Grind.Params := do
let params ← Grind.mkParams config
let ematch ← if only then pure {} else Grind.getEMatchTheorems
let params := { params with ematch }
elabGrindParams params ps
let casesTypes ← if only then pure {} else Grind.getCasesTypes
let params := { params with ematch, casesTypes }
elabGrindParams params ps only
def grind
(mvarId : MVarId) (config : Grind.Config)
(only : Bool)
(ps : TSyntaxArray ``Parser.Tactic.grindParam)
(mainDeclName : Name) (fallback : Grind.Fallback) : MetaM Unit := do
(mainDeclName : Name) (fallback : Grind.Fallback) : MetaM Grind.Trace := do
let params ← mkGrindParams config only ps
let goals ← Grind.main mvarId params mainDeclName fallback
unless goals.isEmpty do
throwError "`grind` failed\n{← Grind.goalsToMessageData goals config}"
let result ← Grind.main mvarId params mainDeclName fallback
if result.hasFailures then
throwError "`grind` failed\n{← result.toMessageData}"
return result.trace
private def elabFallback (fallback? : Option Term) : TermElabM (Grind.GoalM Unit) := do
let some fallback := fallback? | return (pure ())
@@ -115,16 +145,87 @@ private def elabFallback (fallback? : Option Term) : TermElabM (Grind.GoalM Unit
pure auxDeclName
unsafe evalConst (Grind.GoalM Unit) auxDeclName
private def evalGrindCore
(ref : Syntax)
(config : TSyntax `Lean.Parser.Tactic.optConfig)
(only : Option Syntax)
(params : Option (Syntax.TSepArray `Lean.Parser.Tactic.grindParam ","))
(fallback? : Option Term)
(trace : Bool)
: TacticM Grind.Trace := do
let fallback ← elabFallback fallback?
let only := only.isSome
let params := if let some params := params then params.getElems else #[]
logWarningAt ref "The `grind` tactic is experimental and still under development. Avoid using it in production projects"
let declName := (← Term.getDeclName?).getD `_grind
let mut config ← elabGrindConfig config
if trace then
config := { config with trace }
withMainContext do
let result ← grind (← getMainGoal) config only params declName fallback
replaceMainGoal []
return result
private def mkGrindOnly
(config : TSyntax `Lean.Parser.Tactic.optConfig)
(fallback? : Option Term)
(trace : Grind.Trace)
: MetaM (TSyntax `tactic) := do
let mut params := #[]
let mut foundFns : NameSet := {}
for { origin, kind } in trace.thms.toList do
if let .decl declName := origin then
unless Match.isMatchEqnTheorem (← getEnv) declName do
if let some declName ← isEqnThm? declName then
unless foundFns.contains declName do
foundFns := foundFns.insert declName
let decl : Ident := mkIdent (← unresolveNameGlobalAvoidingLocals declName)
let param ← `(Parser.Tactic.grindParam| $decl:ident)
params := params.push param
else
let decl : Ident := mkIdent (← unresolveNameGlobalAvoidingLocals declName)
let param ← match kind with
| .eqLhs => `(Parser.Tactic.grindParam| = $decl)
| .eqRhs => `(Parser.Tactic.grindParam| =_ $decl)
| .eqBoth => `(Parser.Tactic.grindParam| _=_ $decl)
| .eqBwd => `(Parser.Tactic.grindParam| ←= $decl)
| .bwd => `(Parser.Tactic.grindParam| ← $decl)
| .fwd => `(Parser.Tactic.grindParam| → $decl)
| .user => `(Parser.Tactic.grindParam| usr $decl)
| .default => `(Parser.Tactic.grindParam| $decl:ident)
params := params.push param
for declName in trace.eagerCases.toList do
unless Grind.isBuiltinEagerCases declName do
let decl : Ident := mkIdent (← unresolveNameGlobalAvoidingLocals declName)
let param ← `(Parser.Tactic.grindParam| cases eager $decl)
params := params.push param
for declName in trace.cases.toList do
unless trace.eagerCases.contains declName || Grind.isBuiltinEagerCases declName do
let decl : Ident := mkIdent (← unresolveNameGlobalAvoidingLocals declName)
let param ← `(Parser.Tactic.grindParam| cases $decl)
params := params.push param
let result ← if let some fallback := fallback? then
`(tactic| grind $config:optConfig only on_failure $fallback)
else
`(tactic| grind $config:optConfig only)
if params.isEmpty then
return result
else
let paramsStx := #[mkAtom "[", (mkAtom ",").mkSep params, mkAtom "]"]
return result.raw.setArg 3 (mkNullNode paramsStx)
@[builtin_tactic Lean.Parser.Tactic.grind] def evalGrind : Tactic := fun stx => do
match stx with
| `(tactic| grind $config:optConfig $[only%$only]? $[ [$params:grindParam,*] ]? $[on_failure $fallback?]?) =>
let fallback ← elabFallback fallback?
let only := only.isSome
let params := if let some params := params then params.getElems else #[]
logWarningAt stx "The `grind` tactic is experimental and still under development. Avoid using it in production projects"
let declName := (← Term.getDeclName?).getD `_grind
let config ← elabGrindConfig config
withMainContext do liftMetaFinishingTactic (grind · config only params declName fallback)
discard <| evalGrindCore stx config only params fallback? false
| _ => throwUnsupportedSyntax
@[builtin_tactic Lean.Parser.Tactic.grindTrace] def evalGrindTrace : Tactic := fun stx => do
match stx with
| `(tactic| grind?%$tk $config:optConfig $[only%$only]? $[ [$params:grindParam,*] ]? $[on_failure $fallback?]?) =>
let trace ← evalGrindCore stx config only params fallback? true
let stx ← mkGrindOnly config fallback? trace
Tactic.TryThis.addSuggestion tk stx (origSpan? := ← getRef)
| _ => throwUnsupportedSyntax
end Lean.Elab.Tactic


@@ -21,7 +21,6 @@ partial def headBetaUnderLambda (f : Expr) : Expr := Id.run do
f := f.updateLambda! f.bindingInfo! f.bindingDomain! f.bindingBody!.headBeta
return f
/-- Environment extensions for monotonicity lemmas -/
builtin_initialize monotoneExt :
SimpleScopedEnvExtension (Name × Array DiscrTree.Key) (DiscrTree Name)
@@ -85,7 +84,7 @@ partial def solveMonoCall (α inst_α : Expr) (e : Expr) : MetaM (Option Expr) :
let_expr monotone _ _ _ inst _ := hmonoType | throwError "solveMonoCall {e}: unexpected type {hmonoType}"
let some inst ← whnfUntil inst ``instPartialOrderPProd | throwError "solveMonoCall {e}: unexpected instance {inst}"
let_expr instPartialOrderPProd β γ inst_β inst_γ ← inst | throwError "solveMonoCall {e}: whnfUntil failed?{indentExpr inst}"
let n := if e.projIdx! == 0 then ``monotone_pprod_fst else ``monotone_pprod_snd
let n := if e.projIdx! == 0 then ``PProd.monotone_fst else ``PProd.monotone_snd
return mkAppOptM n #[β, γ, α, inst_β, inst_γ, inst_α, none, hmono]
if e == .bvar 0 then
@@ -126,19 +125,25 @@ def solveMonoStep (failK : ∀ {α}, Expr → Array Name → MetaM α := @defaul
goal.assign goal'
return [goal'.mvarId!]
-- Float letE to the environment
-- Handle let
if let .letE n t v b _nonDep := e then
if t.hasLooseBVars || v.hasLooseBVars then
failK f #[]
let goal' ← withLetDecl n t v fun x => do
let b' := f.updateLambdaE! f.bindingDomain! (b.instantiate1 x)
-- We cannot float the let to the context, so just zeta-reduce.
let b' := f.updateLambdaE! f.bindingDomain! (b.instantiate1 v)
let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! b')
goal.assign (← mkLetFVars #[x] goal')
pure goal'
return [goal'.mvarId!]
goal.assign goal'
return [goal'.mvarId!]
else
-- No recursive call in t or v, so float out
let goal' ← withLetDecl n t v fun x => do
let b' := f.updateLambdaE! f.bindingDomain! (b.instantiate1 x)
let goal' ← mkFreshExprSyntheticOpaqueMVar (mkApp type.appFn! b')
goal.assign (← mkLetFVars #[x] goal')
pure goal'
return [goal'.mvarId!]
-- Float `letFun` to the environment.
-- `applyConst` tends to reduce the redex
-- (cannot use `applyConst`, it tends to reduce the let redex)
match_expr e with
| letFun γ _ v b =>
if γ.hasLooseBVars || v.hasLooseBVars then


@@ -585,10 +585,9 @@ where
s!"{x} ≤ {e} ≤ {y}"
prettyCoeffs (names : Array String) (coeffs : Coeffs) : String :=
coeffs.toList.enum
|>.filter (fun (_,c) => c ≠ 0)
|>.enum
|>.map (fun (j, (i,c)) =>
coeffs.toList.zipIdx
|>.filter (fun (c,_) => c ≠ 0)
|>.mapIdx (fun j (c,i) =>
(if j > 0 then if c > 0 then " + " else " - " else if c > 0 then "" else "- ") ++
(if Int.natAbs c = 1 then names[i]! else s!"{c.natAbs}*{names[i]!}"))
|> String.join
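For reference, the `zipIdx` behavior this rewrite relies on (consistent with the `(c, i)` destructuring above and with the `enum` → `zipIdx` rename in this changeset): the element comes first, and an optional starting index can be supplied.

```lean
#guard [10, 20, 30].zipIdx = [(10, 0), (20, 1), (30, 2)]
#guard [10, 20].zipIdx 5 = [(10, 5), (20, 6)]
```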
@@ -596,13 +595,13 @@ where
mentioned (atoms : Array Expr) (constraints : Std.HashMap Coeffs Fact) : MetaM (Array Bool) := do
let initMask := Array.mkArray atoms.size false
return constraints.fold (init := initMask) fun mask coeffs _ =>
coeffs.enum.foldl (init := mask) fun mask (i, c) =>
coeffs.zipIdx.foldl (init := mask) fun mask (c, i) =>
if c = 0 then mask else mask.set! i true
prettyAtoms (names : Array String) (atoms : Array Expr) (mask : Array Bool) : MessageData :=
(Array.zip names atoms).toList.enum
|>.filter (fun (i, _) => mask.getD i false)
|>.map (fun (_, (n, a)) => m!" {n} := {a}")
(Array.zip names atoms).toList.zipIdx
|>.filter (fun (_, i) => mask.getD i false)
|>.map (fun ((n, a),_) => m!" {n} := {a}")
|> m!"\n".joinSep
mutual


@@ -141,7 +141,10 @@ structure EnvironmentHeader where
imports : Array Import := #[]
/-- Compacted regions for all imported modules. Objects in compacted memory regions do not require any memory management. -/
regions : Array CompactedRegion := #[]
/-- Name of all imported modules (directly and indirectly). -/
/--
Name of all imported modules (directly and indirectly).
The index of a module name in the array equals the `ModuleIdx` for the same module.
-/
moduleNames : Array Name := #[]
/-- Module data for all imported modules. -/
moduleData : Array ModuleData := #[]
@@ -448,7 +451,9 @@ def ofKernelEnv (env : Kernel.Environment) : Environment :=
@[export lean_elab_environment_to_kernel_env]
def toKernelEnv (env : Environment) : Kernel.Environment :=
env.checked.get
-- TODO: should just be the following when we store extension data in `checked`
--env.checked.get
{ env.checked.get with extensions := env.checkedWithoutAsync.extensions }
/-- Consistently updates synchronous and asynchronous parts of the environment without blocking. -/
private def modifyCheckedAsync (env : Environment) (f : Kernel.Environment → Kernel.Environment) : Environment :=
@@ -495,7 +500,7 @@ def const2ModIdx (env : Environment) : Std.HashMap Name ModuleIdx :=
-- only needed for the lakefile.lean cache
@[export lake_environment_add]
private def lakeAdd (env : Environment) (cinfo : ConstantInfo) : Environment :=
{ env with checked := .pure <| env.checked.get.add cinfo }
env.setCheckedSync <| env.checked.get.add cinfo
/--
Save an extra constant name that is used to populate `const2ModIdx` when we import
@@ -864,22 +869,22 @@ opaque EnvExtensionInterfaceImp : EnvExtensionInterface
def EnvExtension (σ : Type) : Type := EnvExtensionInterfaceImp.ext σ
private def ensureExtensionsArraySize (env : Environment) : IO Environment := do
let exts ← EnvExtensionInterfaceImp.ensureExtensionsSize env.checked.get.extensions
let exts ← EnvExtensionInterfaceImp.ensureExtensionsSize env.checkedWithoutAsync.extensions
return env.modifyCheckedAsync ({ · with extensions := exts })
namespace EnvExtension
instance {σ} [s : Inhabited σ] : Inhabited (EnvExtension σ) := EnvExtensionInterfaceImp.inhabitedExt s
-- TODO: store extension state in `checked`
def setState {σ : Type} (ext : EnvExtension σ) (env : Environment) (s : σ) : Environment :=
let checked := env.checked.get
env.setCheckedSync { checked with extensions := EnvExtensionInterfaceImp.setState ext checked.extensions s }
{ env with checkedWithoutAsync.extensions := EnvExtensionInterfaceImp.setState ext env.checkedWithoutAsync.extensions s }
def modifyState {σ : Type} (ext : EnvExtension σ) (env : Environment) (f : σ σ) : Environment :=
let checked := env.checked.get
env.setCheckedSync { checked with extensions := EnvExtensionInterfaceImp.modifyState ext checked.extensions f }
{ env with checkedWithoutAsync.extensions := EnvExtensionInterfaceImp.modifyState ext env.checkedWithoutAsync.extensions f }
def getState {σ : Type} [Inhabited σ] (ext : EnvExtension σ) (env : Environment) : σ :=
EnvExtensionInterfaceImp.getState ext env.checked.get.extensions
EnvExtensionInterfaceImp.getState ext env.checkedWithoutAsync.extensions
end EnvExtension
@@ -1466,7 +1471,7 @@ def getNamespaceSet (env : Environment) : NameSSet :=
@[export lean_elab_environment_update_base_after_kernel_add]
private def updateBaseAfterKernelAdd (env : Environment) (kernel : Kernel.Environment) : Environment :=
env.setCheckedSync kernel
env.setCheckedSync { kernel with extensions := env.checkedWithoutAsync.extensions }
@[export lean_display_stats]
def displayStats (env : Environment) : IO Unit := do


@@ -185,7 +185,7 @@ language server.
-/
def withAlwaysResolvedPromises [Monad m] [MonadLiftT BaseIO m] [MonadFinally m] [Inhabited α]
(count : Nat) (act : Array (IO.Promise α) → m Unit) : m Unit := do
let ps ← List.iota count |>.toArray.mapM fun _ => IO.Promise.new
let ps ← Array.range count |>.mapM fun _ => IO.Promise.new
try
act ps
finally
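
The replacement avoids materializing a `List Nat` just to run a monadic action `count` times: `Array.range count` is `#[0, 1, …, count-1]`, and each (ignored) element drives one `IO.Promise.new`. A sketch of the same pattern in isolation:

```lean
-- Sketch: allocate `count` fresh promises; the range values themselves are unused.
def mkPromises (count : Nat) : IO (Array (IO.Promise Nat)) :=
  Array.range count |>.mapM fun _ => IO.Promise.new
```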


@@ -433,8 +433,6 @@ where
}
-- now that imports have been loaded, check options again
let opts ← reparseOptions setup.opts
-- default to async elaboration; see also `Elab.async` docs
let opts := Elab.async.setIfNotSet opts true
let cmdState := Elab.Command.mkState headerEnv msgLog opts
let cmdState := { cmdState with
infoState := {


@@ -220,6 +220,15 @@ where
| trace _ msg msgs => visit mctx? msg || msgs.any (visit mctx?)
| _ => false
/--
Maximum number of trace node children to display by default to prevent slowdowns from rendering. In
the info view, more children can be expanded interactively.
-/
register_option maxTraceChildren : Nat := {
defValue := 50
descr := "Maximum number of trace node children to display"
}
partial def formatAux : NamingContext → Option MessageDataContext → MessageData → BaseIO Format
| _, _, ofFormatWithInfos fmt => return fmt.1
| _, none, ofGoal mvarId => return formatRawGoal mvarId
@@ -236,8 +245,13 @@ partial def formatAux : NamingContext → Option MessageDataContext → MessageD
if data.startTime != 0 then
msg := f!"{msg} [{data.stopTime - data.startTime}]"
msg := f!"{msg} {(← formatAux nCtx ctx header).nest 2}"
let children ← children.mapM (formatAux nCtx ctx)
return .nest 2 (.joinSep (msg::children.toList) "\n")
let mut children := children
if let some maxNum := ctx.map (maxTraceChildren.get ·.opts) then
if maxNum > 0 && children.size > maxNum then
children := children.take maxNum |>.push <|
ofFormat f!"{children.size - maxNum} more entries... (increase `maxTraceChildren` to see more)"
let childFmts ← children.mapM (formatAux nCtx ctx)
return .nest 2 (.joinSep (msg::childFmts.toList) "\n")
| nCtx, ctx?, ofLazy pp _ => do
let dyn ← pp (ctx?.map (mkPPContext nCtx))
let some msg := dyn.get? MessageData
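
The truncation added above can be read in isolation as: keep the first `maxNum` children and append one summary node for the rest. A standalone sketch of that step (simplified from the diff):

```lean
-- Sketch: truncate a children array to `maxNum` entries plus a summary node.
def truncateChildren (children : Array MessageData) (maxNum : Nat) : Array MessageData :=
  if maxNum > 0 && children.size > maxNum then
    children.take maxNum |>.push
      (MessageData.ofFormat f!"{children.size - maxNum} more entries...")
  else
    children
```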


@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Init.Grind.Util
import Lean.Meta.Closure
namespace Lean.Meta
@@ -16,7 +17,12 @@ def getLambdaBody (e : Expr) : Expr :=
def isNonTrivialProof (e : Expr) : MetaM Bool := do
if !(← isProof e) then
pure false
return false
else if e.isAppOf ``Grind.nestedProof then
-- Grind.nestedProof is a gadget created by the `grind` tactic.
-- We want to avoid the situation where `grind` keeps creating them,
-- and this module, which is used by `grind`, keeps abstracting them.
return false
else
-- We consider proofs such as `fun x => f x a` as trivial.
-- For example, we don't want to abstract the body of `def rfl`


@@ -56,6 +56,8 @@ Given a telescope of FVars of type `tᵢ`, iterates `PSigma` to produce the type
`t₁ ⊗' t₂ …`.
-/
def packType (xs : Array Expr) : MetaM Expr := do
if xs.isEmpty then
return mkConst ``Unit
let mut d ← inferType xs.back!
for x in xs.pop.reverse do
d ← mkAppOptM ``PSigma #[some (← inferType x), some (← mkLambdaFVars #[x] d)]
@@ -66,7 +68,11 @@ def packType (xs : Array Expr) : MetaM Expr := do
Create a unary application by packing the given arguments using `PSigma.mk`.
The `type` should be the expected type of the packed argument, as created with `packType`.
-/
partial def pack (type : Expr) (args : Array Expr) : Expr := go 0 type
partial def pack (type : Expr) (args : Array Expr) : Expr :=
if args.isEmpty then
mkConst ``Unit.unit
else
go 0 type
where
go (i : Nat) (type : Expr) : Expr :=
if h : i < args.size - 1 then
@@ -88,6 +94,7 @@ Unpacks a unary packed argument created with `Unary.pack`.
Throws an error if the expression is not of that form.
-/
def unpack (arity : Nat) (e : Expr) : Option (Array Expr) := do
if arity = 0 then return #[]
let mut e := e
let mut args := #[]
while args.size + 1 < arity do
@@ -105,6 +112,7 @@ def unpack (arity : Nat) (e : Expr) : Option (Array Expr) := do
Example: `mkTupleElems a 4` returns `#[a.1, a.2.1, a.2.2.1, a.2.2.2]`.
-/
private def mkTupleElems (t : Expr) (arity : Nat) : Array Expr := Id.run do
if arity = 0 then return #[]
let mut result := #[]
let mut t := t
for _ in [:arity - 1] do
@@ -117,14 +125,17 @@ Given a type `t` of the form `(x : A) → (y : B[x]) → … → (z : D[x,y])
returns the curried type `(x : A ⊗' B ⊗' … ⊗' D) → R[x.1, x.2.1, x.2.2]`.
-/
def uncurryType (varNames : Array Name) (type : Expr) : MetaM Expr := do
forallBoundedTelescope type varNames.size fun xs _ => do
assert! xs.size = varNames.size
let d ← packType xs
let name := if xs.size == 1 then varNames[0]! else `_x
withLocalDeclD name d fun tuple => do
let elems := mkTupleElems tuple xs.size
let codomain ← instantiateForall type elems
mkForallFVars #[tuple] codomain
if varNames.isEmpty then
mkArrow (mkConst ``Unit) type
else
forallBoundedTelescope type varNames.size fun xs _ => do
assert! xs.size = varNames.size
let d ← packType xs
let name := if xs.size == 1 then varNames[0]! else `_x
withLocalDeclD name d fun tuple => do
let elems := mkTupleElems tuple xs.size
let codomain ← instantiateForall type elems
mkForallFVars #[tuple] codomain
/--
Iterated `PSigma.casesOn`:
@@ -154,21 +165,23 @@ Given expression `e` of type `(x : A) → (y : B[x]) → … → (z : D[x,y])
returns an expression of type `(x : A ⊗' B ⊗' … ⊗' D) → R[x.1, x.2.1, x.2.2]`.
-/
def uncurry (varNames : Array Name) (e : Expr) : MetaM Expr := do
let type ← inferType e
let resultType ← uncurryType varNames type
forallBoundedTelescope resultType (some 1) fun xs codomain => do
let #[x] := xs | unreachable!
let u ← getLevel codomain
let value ← casesOn varNames.toList x u codomain e
mkLambdaFVars #[x] value
if varNames.isEmpty then
return mkLambda `x .default (mkConst ``Unit) e
else
let type ← inferType e
let resultType ← uncurryType varNames type
forallBoundedTelescope resultType (some 1) fun xs codomain => do
let #[x] := xs | unreachable!
let u ← getLevel codomain
let value ← casesOn varNames.toList x u codomain e
mkLambdaFVars #[x] value
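
These hunks all handle the nullary case uniformly: a function with no arguments cannot be packed into an iterated `PSigma`, so it is modelled as a function out of `Unit`. Schematically (comments only, following the doc-strings above):

```lean
-- With arguments, packing uses iterated `PSigma`, e.g.
--   (x : A) → (y : B x) → R x y  becomes  (p : (x : A) ⊗' B x) → R p.1 p.2
-- With zero arguments, `packType` returns `Unit`, `pack` returns `Unit.unit`,
-- and `uncurry` wraps the value as a constant function out of `Unit`:
example (r : Nat) : Unit → Nat := fun _ => r
```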
/-- Given `(A ⊗' B ⊗' … ⊗' D) → R` (non-dependent `R`), return `A → B → … → D → R` -/
private def curryType (varNames : Array Name) (type : Expr) :
MetaM Expr := do
let some (domain, codomain) := type.arrow? |
throwError "curryType: Expected arrow type, got {type}"
go codomain varNames.toList domain
where
private def curryType (varNames : Array Name) (type : Expr) : MetaM Expr := do
let some (domain, codomain) := type.arrow? |
throwError "curryType: Expected arrow type, got {type}"
go codomain varNames.toList domain
where
go (codomain : Expr) : List Name → Expr → MetaM Expr
| [], _ => pure codomain
| [_], domain => mkArrow domain codomain
@@ -184,6 +197,8 @@ Given expression `e` of type `(x : A ⊗' B ⊗' … ⊗' D) → R[x]`
return expression of type `(x : A) → (y : B) → … → (z : D) → R[(x,y,z)]`
-/
private partial def curry (varNames : Array Name) (e : Expr) : MetaM Expr := do
if varNames.isEmpty then
return e.beta #[mkConst ``Unit.unit]
let type whnfForall ( inferType e)
unless type.isForall do
throwError "curryPSigma: expected forall type, got {type}"
@@ -494,7 +509,9 @@ projects to the `i`th function of type,
-/
def curryProj (argsPacker : ArgsPacker) (e : Expr) (i : Nat) : MetaM Expr := do
let n := argsPacker.numFuncs
let t ← inferType e
let t ← whnf (← inferType e)
unless t.isForall do
panic! "curryProj: expected forall type, got {}"
let packedDomain := t.bindingDomain!
let unaryTypes Mutual.unpackType n packedDomain
unless i < unaryTypes.length do

Some files were not shown because too many files have changed in this diff