Compare commits


145 Commits

Author SHA1 Message Date
Kim Morrison
7a5c0c0018 chore: convert DHashMap to a structure 2025-06-13 13:44:38 +10:00
Leonardo de Moura
140a633589 feat: model-based theory combination for grind mbtc (#8759)
This PR implements model-based theory combination for grind linarith.
Example:
```lean
example [CommRing α] [LinearOrder α] [Ring.IsOrdered α] (f : α → α → α) (x y z : α)
    : z ≤ x → x ≤ 1 → z = 1 → f x y = 2 → f 1 y = 2 := by
  grind
```
2025-06-13 01:20:45 +00:00
Cameron Zwarich
3aa479fd8c fix: cache TrivialStructureInfo in LCNF toMono (#8758)
This PR adds caching for the `hasTrivialStructure?` function for LCNF
types. This is one of the hottest small functions in the new compiler,
so adding a cache makes a lot of sense.
2025-06-13 01:07:38 +00:00
Kim Morrison
b280b83c98 chore: add test case with bad grind pattern (#8757) 2025-06-13 01:06:02 +00:00
Kyle Miller
84f15ac93a fix: refine how simp tracks unfolded local definitions (#8753)
This PR fixes a bug in `simp` where it was not resetting the set of
zeta-delta reduced let definitions between `simp` calls. It also fixes a
bug where `simp` would report zeta-delta reduced let definitions that
weren't given as simp arguments (these extraneous let definitions appear
due to certain processes temporarily setting `zetaDelta := true`). This
PR also modifies the metaprogramming interface for the zeta-delta
tracking functions to be re-entrant and to prevent this kind of no-reset
bug from occurring again. Closes #6655.

Re-entrance of this metaprogramming interface is not needed to fix
#6655, but it is needed for some future PRs.

The `tests/lean/run/6655.lean` file has an example of a deficiency of
`simp?`, where `simp?` still over-reports unfolded let declarations.
This is likely due to `withInferTypeConfig` setting `zetaDelta := true`
from within `isDefEq`, but I did not verify this.

This PR supersedes #7539. The difference is that this PR has
`withResetZetaDeltaFVarIds` save and restore `zetaDeltaFVarIds`, but
that PR saves and then extends `zetaDeltaFVarIds` to persist unfolded
fvars. The behavior in this PR lets metaprograms control whether they
want to persist any of the unfolded fvars in this context themselves. In
practice, metaprograms that use `withResetZetaDeltaFVarIds` are creating
many temporary fvars and are doing dependence computations. These
temporary fvars shouldn't be persisted, and also dependence shouldn't be
inferred from the fact that a dependence calculation was done. (Concrete
example: the let-to-have transformation in an upcoming PR can be run
from within simp. Just because let-to-have unfolds an fvar while
calculating dependencies of lets doesn't mean that this fvar should be
included by `simp?`.)
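A minimal sketch of the tracking being fixed (my own illustration, not from the PR; the names `x` and `h` are hypothetical):
```lean
example : True := by
  let x := 1
  -- Passing the local `let` explicitly: `simp [x]` zeta-delta unfolds `x`.
  -- After this PR, the set of unfolded lets is reset between `simp` calls,
  -- so internal unfoldings with `zetaDelta := true` no longer leak into reports.
  have h : x + 1 = 2 := by simp [x]
  trivial
```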
2025-06-13 00:57:57 +00:00
Leonardo de Moura
d4b17b9fd2 feat: counterexamples for grind linarith module (#8756)
This PR implements counterexamples for grind linarith. Example:
```lean
example [CommRing α] [LinearOrder α] [Ring.IsOrdered α] (a b c d : α)
    : b ≥ 0 → c > b → d > b → a ≠ b + c → a > b + c → a < b + d →  False := by
  grind
```
produces the counterexample
```
a := 7/2
b := 1
c := 2
d := 3
```

```lean
example [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (a b c d : α)
    : a ≤ b → a - c ≥ 0 + d → d ≤ 0 → b = c → a ≠ b → False := by
  grind
```
generates the counterexample
```
a := 0
b := 1
c := 1
d := -1
```
2025-06-13 00:21:35 +00:00
Cameron Zwarich
4694aaad02 chore: rewrite mkFieldParamsForCtorType in a more readable style (#8755) 2025-06-12 23:54:30 +00:00
Rob23oba
e450a02621 fix: change show tactic to work as documented (#7395)
This PR changes the `show t` tactic to match its documentation.
Previously it was a synonym for `change t`, but now it finds the first
goal that unifies with the term `t` and moves it to the front of the
goal list.
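A hedged sketch of the documented behavior (my own example, assuming the semantics described above):
```lean
-- After `constructor` there are two goals: `True` and `1 = 1`.
-- `show 1 = 1` now selects the first goal unifying with the term and
-- moves it to the front, instead of acting as `change` on the current goal.
example : True ∧ 1 = 1 := by
  constructor
  show 1 = 1
  · rfl
  · trivial
```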
2025-06-12 23:54:09 +00:00
Cameron Zwarich
deda28e6e3 fix: enable more optimizations on inductives with computed fields in the new compiler (#8754)
This PR changes the implementation of computed fields in the new
compiler, which should enable more optimizations (and remove a
questionable hack in `toLCNF` that was only suitable for bringup). We
convert `casesOn` to `cases` like we do for other inductive types, all
constructors get replaced by their real implementations late in the base
phase, and then the `cases` expression is rewritten to use the real
constructors in `toMono`.

In the future, it might be better to move to a model where the `cases`
expression gets rewritten earlier or the constructors get replaced
later, so that both are done at the same time.
2025-06-12 23:28:09 +00:00
Cameron Zwarich
8aa003bdfc fix: move structProjCases pass before extendJoinPointContext (#8752)
This PR fixes an issue where the `extendJoinPointContext` pass can lift
join points containing projections to the top level, as siblings of
`cases` constructs matching on other projections of the same base value.
This prevents the `structProjCases` pass from projecting both at once,
extending the lifetime of the parent value and breaking linearity at
runtime.

This would theoretically be possible to fix in `structProjCases`, but it
would require some better infrastructure for handling join points. It's
also likely that the IR passes dealing with reference counting would
have similar bugs that pessimize the code. For this reason, the simplest
thing is to just perform the `structProjCases` pass earlier, which
prevents `extendJoinPointContext` from lifting these join points.
2025-06-12 21:52:02 +00:00
Kim Morrison
6a698c1c22 feat: grind annotations for List/Array/Vector.zip functions (#8750)
This PR adds grind annotations for the
`List/Array/Vector.zipWith/zipWithAll/unzip` functions.
2025-06-12 18:41:24 +00:00
Kim Morrison
b4660c96a9 feat: grind annotations for List/Array/Vector.ofFn theorems and List.Impl (#8749)
This PR adds grind annotations for `List/Array/Vector.ofFn` theorems and
additional `List.Impl` find operations.

The annotations are added to theorems that correspond to those already
annotated in the List implementation, ensuring consistency across all
three container types (List, Array, Vector) for ofFn operations and
related functionality.

Key theorems annotated include:
- Element access theorems (`getElem_ofFn`, `getElem?_ofFn`)
- Construction and conversion theorems (`ofFn_zero`, `toList_ofFn`,
`toArray_ofFn`)
- Membership theorems (`mem_ofFn`)
- Head/tail operations (`back_ofFn`)
- Monadic operations (`ofFnM_zero`, `toList_ofFnM`, `toArray_ofFnM`,
`idRun_ofFnM`)
- List.Impl find operations (`find?_singleton`, `find?_append`,
`findSome?_singleton`, `findSome?_append`)
2025-06-12 18:09:08 +00:00
Kim Morrison
2cddf2394b feat: grind annotations for List/Array/Vector.mapIdx theorems (#8748)
This PR adds grind annotations for `Array/Vector.mapIdx` and `mapFinIdx`
theorems.

The annotations are added to theorems that correspond to those already
annotated in the List implementation, ensuring consistency across all
three container types (List, Array, Vector) for indexed mapping
operations.

Key theorems annotated include:
- Size and element access theorems (`size_mapIdx`, `getElem_mapIdx`,
`getElem?_mapIdx`)
- Construction theorems (`mapIdx_empty`, `mapIdx_push`, `mapIdx_append`)
- Membership and equality theorems (`mem_mapIdx`, `mapIdx_mapIdx`)
- Conversion theorems (`toList_mapIdx`, `mapIdx_toArray`, etc.)
- Reverse and composition operations
- Similar annotations for `mapFinIdx` variants
2025-06-12 18:06:01 +00:00
Kim Morrison
75fe50a33e feat: grind annotations for List/Array/Vector.finRange theorems (#8747)
This PR adds grind annotations for `List/Array/Vector.finRange`
theorems.
2025-06-12 17:49:58 +00:00
Sebastian Ullrich
c2876a1a6a chore: update stage0 2025-06-12 16:36:08 +02:00
Sebastian Ullrich
9f6846a343 chore: work around old compiler bug 2025-06-12 16:36:08 +02:00
Sebastian Ullrich
64e105c121 feat: meta phase restrictions 2025-06-12 16:36:08 +02:00
Kim Morrison
d10a85539a feat: grind annotations for List/Array/Vector.find?/findSome?/idxOf?/findIdx? (#8741)
This PR adds annotations for
`List/Array/Vector.find?/findSome?/idxOf?/findIdx?`.
2025-06-12 11:06:18 +00:00
Sebastian Ullrich
f0347ee719 chore: lean --stats gives number of imported bytes (#8725)
Thanks to `mmap`, startup time is not necessarily related to this
figure, but it can be used as a rough measure of startup cost and of how
much data the module depends on, i.e. the chance that a rebuild is needed.

Also adds new cumulative benchmarks for this metric as well as the
number of imported constants and env ext entries.
2025-06-12 08:29:42 +00:00
Kim Morrison
faffe86334 chore: add failing grind tests from Mathlib (#8737) 2025-06-12 05:57:32 +00:00
Mac Malone
c168d06edf chore: partially revert "feat: lake: use lean --setup" (#8736)
This PR partially reverts #8024 which introduced a significant Lake
performance regression during builds. Once the cause is discovered and
fixed, a follow-up PR will be made to restore the change.
2025-06-12 05:53:59 +00:00
Kim Morrison
abfc49d0f7 chore: cleanup of grind tests (#8735) 2025-06-12 04:42:25 +00:00
Kim Morrison
34e98c2efc feat: add Decidable (∃ i, P i) (#8734)
This PR adds the missing instance
```
instance decidableExistsFin (P : Fin n → Prop) [DecidablePred P] : Decidable (∃ i, P i)
```
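With this instance, bounded existentials over `Fin` become decidable by `decide`; a small usage sketch (my own example):
```lean
-- `∃ i : Fin 3, i.val = 2` can be checked by exhausting the finite domain
example : ∃ i : Fin 3, i.val = 2 := by decide
```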
2025-06-12 02:58:37 +00:00
Leonardo de Moura
e7549b5651 feat: diseq splitting and non-chronological backtracking for linarith (#8733)
This PR implements disequality splitting and non-chronological
backtracking for the `grind` linarith procedure.
```lean
example [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (a b c d : α)
    : a ≤ b → a - c ≥ 0 + d → d ≤ 0 → d ≥ 0 → b = c → a ≠ b → False := by
  grind
```
2025-06-12 02:49:35 +00:00
Cameron Zwarich
9f65d0251a chore: remove comments about missing functionality now implemented elsewhere (#8732) 2025-06-12 00:38:42 +00:00
Cameron Zwarich
a7af9f7d5f chore: fix a typo in a doc comment (#8731) 2025-06-11 20:41:32 +00:00
Cameron Zwarich
39cbe04946 fix: use Arg in LCNF FVarSubst rather than Expr (#8729)
This PR changes LCNF's `FVarSubst` to use `Arg` rather than `Expr`. This
enforces the requirements on substitutions, which match the requirements
on `Arg`.
2025-06-11 18:08:30 +00:00
Lean stage0 autoupdater
77fd1ba6b9 chore: update stage0 2025-06-11 16:51:07 +00:00
jrr6
0002ea8a37 feat: pre-stage0 groundwork for named error messages (#8649)
This PR adds the pre-stage0-update infrastructure for named error
messages. It adds macro syntax for registering and throwing named errors
(without elaborators), mechanisms for displaying error names in the
Infoview and at the command line, and the ability to link to error
explanations in the manual (once they are added).
2025-06-11 14:52:08 +00:00
jrr6
7bd82b103a feat: pre-stage0 groundwork for error explanations (#8651)
This PR adds the pre-stage0-update infrastructure for error
explanations. It adds the environment-extension machinery for
registering and accessing explanations, and it provides a cursory parser
that validates that the high-level structure of error explanations
matches the prescribed format.

---------

Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
2025-06-11 14:51:44 +00:00
Sebastian Ullrich
2c9c58b1f7 fix: allow mixing modules and non-modules when root is not a module (#8724) 2025-06-11 14:39:49 +00:00
Sebastian Ullrich
54c12df950 refactor: environment extension state splitting (#8653)
Replaces the previous `export/saveEntriesFn` split with a strictly more
general function such that `exportEntriesFn` could be deprecated at a
later point. Also gives the new function access to the `Environment`
while we're at it. Also gives `getModuleEntries` access to more olean
levels in preparation for `meta import`.
2025-06-11 12:52:04 +00:00
Sebastian Ullrich
01a0524749 chore: move benchmarking script to this repo (#8718)
Corresponding to
d3f39f8343
2025-06-11 12:27:06 +00:00
Lean stage0 autoupdater
551e755d23 chore: update stage0 2025-06-11 11:06:17 +00:00
Kim Morrison
082ca94d3b feat: add grind annotations for List/Array/Vector.eraseP/erase/eraseIdx (#8719)
This PR adds grind annotations for
List/Array/Vector.eraseP/erase/eraseIdx. It also adds some missing
lemmas.
2025-06-11 09:44:47 +00:00
Rob23oba
ee5b652136 doc: add documentation for builtin attributes (#8173)
This PR adds documentation to builtin attributes like `@[refl]` or
`@[implemented_by]`.
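As a hedged illustration of the kind of attribute being documented (the definitions are my own, hypothetical examples):
```lean
-- An efficient runtime implementation
def fastId (n : Nat) : Nat := n

-- `@[implemented_by]` swaps in `fastId` at runtime while keeping
-- the original definition available for reasoning.
@[implemented_by fastId]
def slowId (n : Nat) : Nat := n
```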

Closes #8432

---------

Co-authored-by: David Thrane Christiansen <david@davidchristiansen.dk>
Co-authored-by: David Thrane Christiansen <david@lean-fro.org>
2025-06-11 09:04:37 +00:00
Marc Huisinga
91b5e19833 feat: server-side for module hierarchy (#8654)
This PR adds server-side support for a new module hierarchy component in
VS Code that can be used to navigate both the import tree of a module
and the imported-by tree of a module. Specifically, it implements new
requests `$/lean/prepareModuleHierarchy`,
`$/lean/moduleHierarchy/imports` and
`$/lean/moduleHierarchy/importedBy`. These requests are not supported by
standard LSP. Companion PR at
[leanprover/vscode-lean4#620](https://github.com/leanprover/vscode-lean4/pull/620).


![Imports](https://github.com/user-attachments/assets/5ef650e7-3b0e-4a33-9ecb-f442bff88006)
![Imported by](https://github.com/user-attachments/assets/d98e7a2c-3c4f-4509-afdf-08134a97aa78)

### Breaking changes
This PR augments the .ilean format with the direct imports of a file in
order to implement the `$/lean/moduleHierarchy/importedBy` request and
bumps the .ilean format version.
2025-06-11 08:02:18 +00:00
Paul Reichert
cf8315ed96 fix: restrict the IteratorLoop instance on DropWhile, which was accidentally more general (#8703)
This PR corrects the `IteratorLoop` instance in `DropWhile`, which
previously triggered for arbitrary iterator types.
2025-06-11 07:35:46 +00:00
Eric Wieser
44e36dec6f feat: strengthen finIdxOf? lemmas (#8678)
This PR makes the LHS of `isSome_finIdxOf?` and `isNone_finIdxOf?` more
general.
2025-06-11 07:32:01 +00:00
Cameron Zwarich
a92890ec84 fix: use the fvar subst for erased code in LCNF simp (#8717)
This PR uses the fvar substitution mechanism to replace erased code.
This isn't entirely satisfactory, since LCNF's `.return` doesn't support
a general `Arg` (which has a `.erased` constructor); it only supports an
`FVarId`. This is in contrast to the IR `.ret`, which does support a
general `Arg`.
2025-06-11 05:46:39 +00:00
Kim Morrison
eccc472e8d chore: remove set_option grind.warning false (#8714)
This PR removes the now unnecessary `set_option grind.warning false`
statements, now that the warning is disabled by default.
2025-06-11 05:09:19 +00:00
Cameron Zwarich
d8c54fb93d fix: consider any type application of an erased term to be erased (#8716)
This PR considers any type application of an erased term to be erased. This
comes up a bit more than one would expect in the implementation of Lean
itself.
2025-06-11 04:58:21 +00:00
Leonardo de Moura
aab65f595d feat: infrastructure for disequality constraints in grind linarith (#8715)
This PR implements the basic infrastructure for processing disequalities
in the `grind linarith` module. We still have to implement backtracking.
2025-06-11 04:04:41 +00:00
Lean stage0 autoupdater
0a9c246497 chore: update stage0 2025-06-11 02:42:58 +00:00
Leonardo de Moura
2a63b392dd fix: ring module in grind (#8713)
This PR fixes a bug in the commutative ring module used in `grind`. It
was missing simplification opportunities.
2025-06-11 01:20:50 +00:00
Cameron Zwarich
0b2884bfa3 fix: erase code of an erased type in LCNF simp (#8712)
This PR optimizes let decls of an erased type to an erased value.
Specialization can create local functions that produce a Prop, and
there's no point in keeping them around.
2025-06-11 00:58:55 +00:00
Johan Commelin
c53ab2835c fix: pin version of softprops/action-gh-release (#8710)
This PR pins the precise hash of softprops/action-gh-release to

    softprops/action-gh-release@da05d55257

because the latest version is broken.
See https://github.com/softprops/action-gh-release/issues/628 for more
details.
2025-06-11 00:08:18 +00:00
Anne Baanen
54dd7aae8c chore: improvements to release checklist and scripts (#8586)
This PR improves the release checklist and scripts:

* Check that the release's commit hash is not all-numeric starting with
0 (this can break SemVer, which [required us to release
v4.21.0-rc2](https://github.com/leanprover/lean4/releases/tag/v4.21.0-rc2)).
* Check that projects being bumped to a release tag do not reference
`nightly-testing` anymore.
* Clarify how to create subsequent release candidates if an `-rc1`
already exists.
* Fix typos in the release checklist documentation.
2025-06-10 22:56:06 +00:00
euprunin
52e0742108 chore: fix spelling mistakes (#8711)
Co-authored-by: euprunin <euprunin@users.noreply.github.com>
2025-06-10 20:24:28 +00:00
Sebastian Ullrich
614e6122f7 chore: fix LEAN_PATH for building stage2+ Leanc.lean (#8705)
It would accidentally fall back to stage 1 otherwise
2025-06-10 17:11:23 +00:00
Cameron Zwarich
1a9de502f2 fix: handle constants with erased types in toMonoType (#8709)
This PR handles constants with erased types in `toMonoType`. It is much
harder to write a test case for this than you would think, because most
references to such types get replaced with `lcErased` earlier.
2025-06-10 16:27:33 +00:00
Leonardo de Moura
085c4ed3f9 fix: internalization issue in the interface between linarith and ring (#8708)
This PR fixes an internalization bug in the interface between linarith
and ring modules in `grind`. The `CommRing` module may create new terms
during normalization.
2025-06-10 16:06:47 +00:00
Rob23oba
be4ebb8ac3 feat: equivalence of tree maps (#8210)
This PR adds an equivalence relation to tree maps akin to the existing
one for hash maps. In order to get many congruence lemmas to eventually
use for defining functions on extensional tree maps, almost all of the
remaining tree map functions have also been given lemmas to relate them
to list functions, although these aren't currently used to prove lemmas
other than congruence lemmas.
2025-06-10 14:49:52 +00:00
Kim Morrison
2344e3f254 chore: minor fixes to grind_indexmap test case (#8706) 2025-06-10 11:35:48 +00:00
Anne Baanen
48f394b1d4 chore: begin development cycle for v4.22.0 (#8642)
This PR bumps the version number of the Lean project to 4.22.0, since
v4.21.0 is now in the release candidate stage.
2025-06-10 11:29:41 +00:00
Sebastian Ullrich
2629921c01 fix: import completion after meta import (#8704)
The details of `identWithPartialTrailingDot` prevent a robust approach
using quotations.
2025-06-10 09:06:58 +00:00
Marc Huisinga
e123b327a5 feat: enable auto-implicits in lake math template (#8656)
This PR enables auto-implicits in the Lake math template. This resolves
an issue where new users sometimes set up a new project for math
formalization and then quickly realize that none of the code samples in
our official books and docs that use auto-implicits work in their
projects. With the introduction of [inlay hints for
auto-implicits](https://github.com/leanprover/lean4/pull/6768), we
consider the auto-implicit UX to be sufficiently usable that they can be
enabled by default in the math template.
Notably, this change does not affect Mathlib itself, which will proceed
to disable auto-implicits.

This change was previously discussed with and agreed to by the Mathlib
maintainer team.
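A minimal sketch of what enabling auto-implicits gives new users (my own example):
```lean
set_option autoImplicit true

-- `α` is not declared anywhere: it is auto-bound as an implicit argument,
-- matching the style used in the official books and docs.
def myId (a : α) : α := a
```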
2025-06-10 08:08:21 +00:00
Kim Morrison
e904314742 feat: add SHA-suffixed PR release tags (#8702)
This PR enhances the PR release workflow to create both short-format and
SHA-suffixed release tags. It creates both `pr-release-{PR_NUMBER}` and
`pr-release-{PR_NUMBER}-{SHORT_SHA}` tags, generates separate releases for
both formats, adds separate GitHub status checks, and updates
Batteries/Mathlib testing branches to use SHA-suffixed tags for exact
commit traceability.

This removes the need for downstream repositories to deal with the
toolchain changing without the toolchain name changing.
2025-06-10 07:09:08 +00:00
Mac Malone
0ebd320940 fix: lake: export LeanOption in Lean from Lake (#8701)
This PR exports `LeanOption` in the `Lean` namespace from the `Lake`
namespace. `LeanOption` was moved from `Lean` to `Lake` in #8447, which
can cause unnecessary breakage without this.
2025-06-10 04:09:40 +00:00
Kim Morrison
b1980ef871 chore: cleanup notes about grind in LRAT (#8623)
This PR cleans up some notes about `grind` failures in the LRAT checker,
now that some `grind` bugs have been fixed.
2025-06-10 03:47:28 +00:00
Kim Morrison
8fce30e7cb chore: change grind.warning default to false (#8698)
This PR turns off the default warning when using `grind`, in preparation
for v4.22. I'll remove all the `set_option grind.warning false` in our
codebase in a second PR, after an update-stage0.
2025-06-10 03:40:45 +00:00
Kim Morrison
308a383079 chore: fix grind annotation on DHashMap.contains_iff_mem (#8700)
The original annotations produced patterns that matched too often.
2025-06-10 03:26:54 +00:00
Leonardo de Moura
2d67524e42 feat: equality in grind linarith (#8697)
This PR implements support for inequalities in the `grind` linear
arithmetic procedure and simplifies its design. Some examples that can
already be solved:
```lean
open Lean.Grind
example [IntModule α] [Preorder α] [IntModule.IsOrdered α] (a b c d : α)
    : a + d < c → b = a + (2:Int)*d → b - d > c → False := by
  grind

example [CommRing α] [LinearOrder α] [Ring.IsOrdered α] (a b : α)
    : a = 0 → b = 1 → a + b ≤ 2 := by
  grind

example [CommRing α] [Preorder α] [Ring.IsOrdered α] (a b c d e : α) :
    2*a + b ≥ 1 → b ≥ 0 → c ≥ 0 → d ≥ 0 → e ≥ 0
    → a ≥ 3*c → c ≥ 6*e → d - e*5 ≥ 0
    → a + b + 3*c + d + 2*e < 0 → False := by
  grind
```
2025-06-09 23:39:24 +00:00
Leonardo de Moura
41c41e455a feat: One.one support in linarith (#8694)
This PR implements special support for `One.one` in linarith when the
structure is an ordered ring. It also fixes bugs during initialization.
2025-06-09 20:17:48 +00:00
Cameron Zwarich
f61a412801 fix: make unsafeBaseIO noinline (#8669)
This PR makes `unsafeBaseIO` `noinline`. The new compiler is better at
optimizing `Result`-like types, which can cause the final operation in
an `unsafeBaseIO` block to be dropped, since `unsafeBaseIO` is
discarding the state.
2025-06-09 14:48:37 +00:00
Leonardo de Moura
00f6b1e70a fix: denotation functions for interfacing CommRing and linarith (#8693)
This PR fixes the denotation functions used to interface the ring and
linarith modules in grind.
2025-06-09 14:43:13 +00:00
Sebastian Ullrich
8422d936cf chore: revert "fix LEAN_PATH for building stage2+ Leanc.lean" (#8692)
Reverts leanprover/lean4#8685 pending Windows fix
2025-06-09 08:50:34 +00:00
Leonardo de Moura
dd1d3e6a3a feat: model search procedure for grind linarith (#8690)
This PR implements the main framework of the model search procedure for
the linarith component in grind. It currently handles only inequalities.
It can already solve simple goals such as
```lean
example [IntModule α] [Preorder α] [IntModule.IsOrdered α] (a b c : α)
    : a < b → b < c → c < a → False := by
  grind

example [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (a b c d : α)
    : a < b → b < c + d → a - d < c := by
  grind
```
2025-06-09 04:31:28 +00:00
Leonardo de Moura
e38b8a0a7a feat: proof terms generation for CommRing and linarith interface (#8689)
This PR implements proof term generation for the `CommRing` and
`linarith` interface. It also fixes the `CommRing` helper theorems.
2025-06-08 23:38:03 +00:00
Leonardo de Moura
3e0168df58 feat: proof term construction infrastructure for linarith in grind (#8687)
This PR implements the infrastructure for constructing proof terms in
the linarith procedure in `grind`. It also adds the `ToExpr` instances
for the reified objects.
2025-06-08 19:58:48 +00:00
Mac Malone
fcaae1dc58 feat: lake: use lean --setup (#8447)
This PR makes use of `lean --setup` in Lake builds of Lean modules and
adds Lake support for the new `.olean` artifacts produced by the module
system.

Lake now computes the entire transitive import graph of a module for
Lean, allowing it to eagerly provide the artifacts managed by Lake to Lean
via the `modules` field of `lean --setup`.

`lake setup-file` no longer respects the imports passed to it and
instead just parses the file's header for imports. This is necessary
because import statements are now more complex than a simple module
name.
2025-06-08 17:42:45 +00:00
Sebastian Ullrich
8cc6a4a028 chore: fix LEAN_PATH for building stage2+ Leanc.lean (#8685)
It would accidentally fall back to stage 1 otherwise
2025-06-08 16:17:05 +00:00
Cameron Zwarich
4ec5dad05f fix: only mark single-alt cases discriminant as used if any param is used (#8683)
This PR adds an optimization to the LCNF simp pass where the
discriminant of a single-alt cases is only marked as used if any param
is used.
2025-06-08 06:20:38 +00:00
Leonardo de Moura
7e1d0cc125 feat: use CommRing to normalize linarith expressions (#8682)
This PR uses the `CommRing` module to normalize linarith inequalities.
2025-06-08 05:41:00 +00:00
Cameron Zwarich
2ae066fdc0 fix: only mark a cases discriminant used if it has non-default alt (#8681)
This PR adds an optimization to the LCNF simp pass where the
discriminant of a `cases` construct will only be marked as used if it has a
non-default alternative.
2025-06-08 05:07:02 +00:00
Leonardo de Moura
c9c794ee8a feat: reification and denotation for linarith module in grind (#8680)
This PR adds the `reify?` and `denoteExpr` for the new linarith module
in `grind`.
2025-06-08 02:53:28 +00:00
Leonardo de Moura
106708ee78 feat: grind linarith module infrastructure (#8677)
This PR adds the basic infrastructure for the linarith module in
`grind`.
2025-06-08 00:19:52 +00:00
Cameron Zwarich
666fb5c571 fix: update maxHeartbeats in tests/lean/run/match_expr_perf.lean (#8676)
This PR updates `maxHeartbeats` in the match_expr_perf.lean test, since
with the new compiler this also includes the allocations made by the
compiler.
2025-06-07 23:27:16 +00:00
Cameron Zwarich
8d8fd0715f fix: increase precision of new compiler's noncomputable check (#8675)
This PR increases the precision of the new compiler's `noncomputable`
check, particularly around irrelevant uses of `noncomputable` defs in
applications.

There are no tests included because they don't pass with the old
compiler. They are on the new compiler's branch and they will be merged
when it is enabled.
2025-06-07 22:20:55 +00:00
Leonardo de Moura
4abc4430dc refactor: ENodeKey => ExprPtr (#8674) 2025-06-07 19:30:02 +00:00
Lean stage0 autoupdater
d46188de54 chore: update stage0 2025-06-07 14:27:00 +00:00
Sebastian Ullrich
de57b77feb chore: support meta in ParseImportsFast (#8672) 2025-06-07 13:08:20 +00:00
Lean stage0 autoupdater
f0eae3b879 chore: update stage0 2025-06-07 11:04:28 +00:00
Sebastian Ullrich
1abf6fe1f5 chore: do not interpret meta as noncomputable (#8668)
To be replaced by actual handling of `meta`
2025-06-07 09:45:04 +00:00
Mac Malone
f917951745 fix: lake: ensure Lake versions are SemVer compatible (#8613)
This PR changes the Lake version syntax (to `5.0.0-src+<commit>`) to
ensure it is a well-formed SemVer.
2025-06-07 07:17:06 +00:00
Mac Malone
8904e5c070 feat: lake: builtin facet memoize toggle (#7738)
This PR makes memoization of built-in facets toggleable through a
`memoize` option on the facet configuration. Built-in facets which are
essentially aliases (e.g., `default`, `o`) have had memoization
disabled.
2025-06-07 06:00:05 +00:00
Leonardo de Moura
ef9094d7f8 feat: CommRing interface for grind linarith (#8670)
This PR adds helper theorems that will be used to interface the
`CommRing` module with the linarith procedure in `grind`.
2025-06-07 00:35:14 +00:00
Lean stage0 autoupdater
d50292d31b chore: update stage0 2025-06-06 20:02:08 +00:00
Joachim Breitner
24cb133eb2 feat: explicit defeq attribute (#8419)
This PR introduces an explicit `defeq` attribute to mark theorems that
can be used by `dsimp`. The benefit of an explicit attribute over the
prior logic of looking at the proof body is that we can reliably omit
theorem bodies across module boundaries. It also helps with intra-file
parallelism.

If a theorem is syntactically defined by `:= rfl`, then the attribute is
assumed and need not be given explicitly. This is a purely syntactic check
and can be fooled, e.g. if in the current namespace, `rfl` is not
actually “the” `rfl` of `Eq`. In that case, some other syntax has to be
used, such as `:= (rfl)`. This is also the way to go if a theorem can be
proved by `rfl`, but one does not actually want `dsimp` to use this
fact.
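The opt-in/opt-out convention can be sketched as follows (my own illustration; the theorem names are hypothetical):
```lean
-- Proved syntactically by `rfl`: the `defeq` attribute is assumed.
theorem two_eq : 1 + 1 = 2 := rfl

-- Wrapping in parentheses defeats the syntactic check, so no `defeq`.
theorem two_eq' : 1 + 1 = 2 := (rfl)
```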

The `defeq` attribute will look at the *type* of the declaration, not
the body, to check if it really holds definitionally. Because of
different reduction settings, this can sometimes go wrong. Then one
should also write `:= (rfl)`, if one does not want this to be a defeq
theorem. (If one does want it to be one, that is currently not possible,
but it’s probably a bad idea anyway.)

With `set_option debug.tactic.simp.checkDefEqAttr true`, `dsimp` will
warn if it could not apply a lemma due to a missing `defeq` attribute.

With `set_option backward.dsimp.useDefEqAttr.get false` one can revert
to the old behavior of inferring rfl-ness based on the theorem body.

Both options will go away eventually (too bad we can’t mark them as
deprecated right away, see #7969)

Meta programs that generate theorems (e.g. equational theorems) can use
`inferDefEqAttr` to set the attribute based on the theorem body of the
just created declaration.

This builds on #8501 to update Init to `@[expose]` a fair amount of
definitions that, if not exposed, would prevent some existing `:= rfl`
theorems from being `defeq` theorems. In the interest of starting out
backwards compatible, I exposed these functions. Hopefully many can be
un-exposed later again.

A mathlib adaption branch exists that includes both the meta programming
fixes and changes to the theorems (e.g. changing `:= by rfl` to `:=
rfl`).

With the module system there is now no special handling for `defeq`
theorem bodies, because we don’t look at the body anymore. The previous
hack is removed. The `defeq`-ness of the theorem needs to be checked in
the context of the theorem’s *type*; the error message contains a hint
if the defeq check fails because of the exported context.
2025-06-06 18:40:06 +00:00
Henrik Böving
eddbe08118 refactor: AIG doesn't need to be modified for constants (#8663) 2025-06-06 15:32:38 +00:00
Paul Reichert
d16c4052c2 feat: introduce empty iterator (#8615)
This PR provides a special empty iterator type. Although its behavior
can be emulated with a list iterator (for example), having a special
type has the advantage of being easier to optimize for the compiler.
2025-06-06 14:26:52 +00:00
tonneaus
febad6a380 doc: typo in IO.lean (#8657) 2025-06-06 13:12:12 +00:00
Marc Huisinga
257cd15a00 fix: wrong signature help after map/filter/etc (#8655)
This PR fixes a bug in the signature help where it would be displayed
for higher-order-functions that are the last argument of another
function.
2025-06-06 13:07:01 +00:00
Paul Reichert
5963bc8b8a fix: remove IteratorLoop instances without associated LawfulIteratorLoop instances (#8629)
This PR replaces special, more optimized `IteratorLoop` instances, for
which no lawfulness proof has been made, with the verified default
implementation. The specialization of the loop/collect implementations
is low priority, but having lawfulness instances for all iterators is
important for verification.
2025-06-06 08:06:59 +00:00
Paul Reichert
ec9b00996f feat: equivalence of iterators (#8545)
This PR provides the means to reason about "equivalent" iterators.
Simply speaking, two iterators are equivalent if they behave the same as
long as consumers do not introspect their states.
2025-06-06 08:06:39 +00:00
Kim Morrison
50474fef78 chore: cleanup after renaming get_elem_tactic_trivial 2025-06-06 13:10:18 +10:00
Kim Morrison
a5567618ac chore: update stage0 2025-06-06 13:10:18 +10:00
Kim Morrison
a3caf60f6a feat: rename get_elem_tactic_trivial to get_elem_tactic_extensible 2025-06-06 13:10:17 +10:00
Leonardo de Moura
c3d31cf24b feat: helper theorems for equality detection and coefficient normalization (#8650)
This PR adds helper theorems for coefficient normalization and equality
detection. These theorems are for the linear arithmetic procedure in
`grind`.
2025-06-06 02:42:57 +00:00
Leonardo de Moura
f7ecf06234 feat: normalization and ordered IntModule helper theorems (#8645)
This PR adds many helper theorems for the future `IntModule` linear
arithmetic procedure in `grind`.
It also adds helper theorems for normalizing input atoms and support for
disequality in the new linear arithmetic procedure in `grind`.
2025-06-05 23:39:10 +00:00
Cameron Zwarich
b97d35d879 fix: improve precision of the new compiler's noncomputable check for proj (#8647)
This PR improves the precision of the new compiler's `noncomputable`
check for projections. No test is included because, although this was
reduced from Mathlib, the old compiler does not correctly handle the
reduced test case. It's not entirely clear to me whether the check passes
with the old compiler for the right reasons. A test will be added to the
new compiler's branch.
2025-06-05 22:44:02 +00:00
Kim Morrison
ebf5fbd294 feat: complete grind's ToInt framework (#8639)
This PR completes the `ToInt` family of typeclasses which `grind` will
use to embed types into the integers for `cutsat`. It contains instances
for the usual concrete data types (`Fin`, `UIntX`, `IntX`, `BitVec`),
and is extensible (e.g. for Mathlib's `PNat`).
2025-06-05 11:25:04 +00:00
Luisa Cicolini
74d8746356 feat: add BitVec.setWidth'_eq to bv_normalize (#8640)
This PR adds `BitVec.setWidth'_eq` to `bv_normalize` so that
`bv_decide` can reduce it and solve goals involving `setWidth'`.
2025-06-05 09:42:47 +00:00
Joachim Breitner
1d9dd33bec feat: #print sig (#8641)
This PR adds the `#print sig $ident` variant of the `#print` command,
which omits the body. This is useful for testing meta-code, in the
```
#guard_msgs (drop trace, all) in #print sig foo
```
idiom. The benefit over `#check` is that it shows the declaration kind,
reducibility attributes (and in the future more built-in attributes,
like `@[defeq]` in #8419). (One downside is that `#check` shows unused
function parameter names, e.g. in induction principles; this could
probably be refined.)
2025-06-05 09:02:19 +00:00
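A minimal use of the new command might look like this (a sketch; the exact output format is not shown in the description above):

```lean
def foo : Nat := 37

#print sig foo  -- shows the declaration kind and signature of `foo`, omitting the body
```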
Siddharth
9b9dd8546a feat: simplify T-division into E-division when numerator is positive (#8205)
This PR adds a simp lemma that simplifies T-division where the numerator
is a `Nat` into an E-division:


```lean
@[simp] theorem ofNat_tdiv_eq_ediv {a : Nat} {b : Int} : (a : Int).tdiv b = a / b :=
   tdiv_eq_ediv_of_nonneg (by simp)
```

---------

Co-authored-by: Tobias Grosser <tobias@grosser.es>
2025-06-05 06:20:49 +00:00
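Given the `@[simp]` lemma quoted above, a goal of exactly this shape should close by `simp`; a sketch:

```lean
example (a : Nat) (b : Int) : (a : Int).tdiv b = a / b := by
  simp
```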
Siddharth
de7d43865e feat: bitvector trichotomy lemmas (#8203)
This PR adds trichotomy lemmas for unsigned and signed comparisons,
stating that only one of three cases may happen: either `x < y`, `x =
y`, or `x > y` (for both signed and unsigned comparisons). We use
explicit arguments so that users can write `rcases slt_trichotomy x y
with hlt | heq | hgt`.
2025-06-05 05:28:44 +00:00
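A sketch of the signed case, assuming the lemma is stated as a three-way disjunction (the exact statement is not quoted above):

```lean
example (x y : BitVec 8) : x.slt y ∨ x = y ∨ y.slt x :=
  BitVec.slt_trichotomy x y
```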
Leonardo de Moura
3ce7dd318d feat: sort equivalence classes in grind diagnostics (#8638)
This PR improves the diagnostic information produced by `grind`. It now
sorts the equivalence classes by generation and then by `Expr.lt`.
2025-06-05 04:35:59 +00:00
Leonardo de Moura
b1709d1fc1 feat: background theorems for IntModule (#8637)
This PR adds background theorems for normalizing `IntModule` expressions
using reflection.
2025-06-05 02:32:53 +00:00
Cameron Zwarich
6ebf39d0fc chore: fix formatting (#8635) 2025-06-04 22:43:45 +00:00
Cameron Zwarich
a6e2df6250 fix: don't treat types with erased constructor types as having trivial structure (#8634)
This PR makes `hasTrivialStructure?` return false for types whose
constructors have types that are erased, e.g. if they construct a
`Prop`.
2025-06-04 22:33:44 +00:00
Leonardo de Moura
e08b2a1f62 feat: track case-split source in grind (#8633)
This PR implements case-split tracking in `grind`. The information is
displayed when `grind` fails or diagnostic information is requested.
Examples:

- Failure

![image](https://github.com/user-attachments/assets/b10516c3-d205-4e08-80a4-daca195c1d8a)

- Success with `set_option diagnostics true`

![image](https://github.com/user-attachments/assets/15ee31e0-27d8-473f-a469-12b424ce6d24)
2025-06-04 16:59:36 +00:00
Sebastian Ullrich
2f4e56b5d2 chore: fixes after rebootstrap 2025-06-04 18:26:05 +02:00
Sebastian Ullrich
a487bb8d63 chore: update stage0 2025-06-04 18:26:05 +02:00
Sebastian Ullrich
8457342d33 feat: meta syntax 2025-06-04 18:26:05 +02:00
Siddharth
596e65d7df feat: AIG.relabel(Nat)_unsat_iff for AIGs with empty variable types (#8631)
This PR generalizes `Std.Sat.AIG.relabel(Nat)_unsat_iff` to allow the
AIG variable type to be empty. We generalize the proof by showing that
in the case when `α` is empty, the environment doesn't matter, since all
environments `α → Bool` are isomorphic.

This showed up when reusing the AIG primitives for building a
k-induction based model checker to prove arbitrary width bitvector
identities.
2025-06-04 15:10:48 +00:00
Kim Morrison
7c76dbf6be feat: typeclasses for grind to extensibly embed types into Int (#8543)
This PR adds typeclasses for `grind` to embed types into `Int`, for
cutsat. This allows, for example, treating `Fin n`, or Mathlib's `ℕ+` in
a uniform and extensible way.

There is a primary typeclass that carries the `toInt` function, and a
description of the interval the type embeds in. There are then
individual typeclasses describing how arithmetic/order operations
interact with the embedding.
2025-06-04 13:04:19 +00:00
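As a purely illustrative sketch of the described design (all names and fields here are assumptions, not the actual API):

```lean
class ToIntSketch (α : Type u) where
  /-- The embedding into the integers. -/
  toInt : α → Int
  /-- Optional lower/upper bounds of the interval the image lies in. -/
  lo? : Option Int
  hi? : Option Int

instance {n : Nat} : ToIntSketch (Fin (n + 1)) where
  toInt x := (x.val : Int)
  lo? := some 0
  hi? := some ((n : Int) + 1)
```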
Lean stage0 autoupdater
6b102c91e3 chore: update stage0 2025-06-04 13:21:17 +00:00
Joachim Breitner
b9243e19be feat: make equational theorems of non-exposed defs private (#8519)
This PR makes the equational theorems of non-exposed defs private. If
the author of a module chose not to expose the body of their function,
then they likely don't want that implementation to leak through
equational theorems. Helps with #8419.

There is some amount of incidental complexity due to how `private`
works in Lean, by mangling the name: lots of code paths now need to do
the right thing™ about private and non-private names, including the
whole reserved name machinery.

So this includes a number of refactorings:

* The logic for calculating an equational theorem name (or similar) is
now done by a single function, `mkEqLikeNameFor`, rather than all over
the place.

* Since the name of the equational theorem now depends on the current
context (in particular whether it’s a proper module, or a non-module
file), the forward map from declaration to equational theorem doesn’t
quite work anymore. This map is deleted; the list of equational theorems
is now always found by looking for declarations of the expected names
(`alreadyGenerated`). If users define such theorems themselves (and make
it past the “do not allow reserved names to be declared” check), they
get to keep both pieces.

* Because this map was deleted, Mathlib’s `eqns` command can no longer
easily warn if equational lemmas have already been generated too early
(an adaptation branch exists). But in general I think Lean could provide
a more principled way of supporting custom unfold lemmas, and ideally
the whole equational theorem machinery would just use that.

* The `ReservedNamePredicate` is used by `resolveExact`, so we need to
make sure that it returns the right name, including privateness. It is
not OK to just reserve both the private and non-private names but then
later in the `ReservedNameAction` produce only one of the two.
 
* We create `foo.def_eq` eagerly for well-founded recursion. This is
needed because we need to feed in the proof of the rewriting done by
`wf_preprocess`. But if `foo.def_eq` is private in a module, then a
non-module importing it will still expect a non-private `foo.def_eq` to
exist. To patch that, we install a `copyPrivateUnfoldTheorem :
GetUnfoldEqnFn` that declares a theorem aliasing the private one. Seems
to work.
2025-06-04 11:52:08 +00:00
Kim Morrison
d6478e15c7 chore: remove slow and unnecessary @[grind] annotations (#8630) 2025-06-04 10:57:25 +00:00
Leonardo de Moura
1629440cb8 feat: improve grind diagnostics for successful case (#8625)
This PR improves the diagnostic information produced by `grind` when it
succeeds. We now include the list of case-splits performed, and the
number of applications per function symbol. Example:


![image](https://github.com/user-attachments/assets/109f3f80-85a1-4368-8958-fdf56707ea7d)
2025-06-04 09:34:48 +00:00
Kim Morrison
4500a7f02b fix: remove global NatCast (Fin n) instance (#8620)
This PR removes the `NatCast (Fin n)` global instance (both the direct
instance, and the indirect one via `Lean.Grind.Semiring`), as that
instance causes `x < n` (for `x : Fin k`, `n : Nat`) to be
elaborated as `x < ↑n` rather than `↑x < n`, which is undesirable. Note
however that in Mathlib this happens anyway!
2025-06-04 06:58:39 +00:00
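A sketch of the elaboration difference described above (the comment restates the PR description):

```lean
variable (k n : Nat) (x : Fin k)

-- With the global `NatCast (Fin k)` instance, this elaborated as `x < (↑n : Fin k)`;
-- without it, it elaborates as `(↑x : Nat) < n`.
#check x < n
```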
Leonardo de Moura
c12159b519 refactor: move read-only data to Grind.Context (#8624) 2025-06-04 02:50:43 +00:00
Kim Morrison
1260059a59 feat: add grind use case example IndexMap (#8622)
This PR adds a test case / use case example for `grind`, setting up the
very basics of `IndexMap`, modelled on Rust's
[`indexmap`](https://docs.rs/indexmap/latest/indexmap/). It is not
intended as a complete implementation: just enough to exercise `grind`.

(Thanks to @arthurpaulino for suggesting this as a test case.)
2025-06-04 01:33:56 +00:00
Leonardo de Moura
8165ecc1db fix: bug in the equality resolution procedure in grind (#8621)
This PR fixes a bug in the equality-resolution procedure used by
`grind`.
The procedure now performs a topological sort so that every simplified
theorem declaration is emitted **before** any place where it is
referenced.
Previously, applying equality resolution to
```lean
h : ∀ x, p x a → ∀ y, p y b → x ≠ y
```
in the example
```lean
example
  (p : Nat → Nat → Prop)
  (a b c : Nat)
  (h  : ∀ x, p x a → ∀ y, p y b → x ≠ y)
  (h₁ : p c a)
  (h₂ : p c b) :
  False := by
  grind
```
caused `grind` to produce the incorrect term
```lean
p ?y a → ∀ y, p y b → False
```
The patch eliminates this error, and the following correct simplified
theorem is generated instead:
```lean
∀ y, p y a → p y b → False
```
2025-06-04 00:34:47 +00:00
Leonardo de Moura
344b52f999 fix: term internalization issue in grind (#8619)
This PR fixes an internalization (aka preprocessing) issue in `grind`
when applying injectivity theorems.
2025-06-04 00:13:51 +00:00
Kyle Miller
5e952598dc fix: let private names be unresolved in the pretty printer, fix shadowing bug when pp.universes is true (#8617)
This PR fixes (1) an issue where private names are not unresolved when
they are pretty printed, (2) an issue where in `pp.universes` mode names
were allowed to shadow local names, (3) an issue where in `match`
patterns constants shadowing locals wouldn't use `_root_`, and (4) an
issue where tactics might have an incorrect "try this" when
`pp.fullNames` is set. Adds more delaboration tests for name
unresolution.

It also cleans up the `delabConst` delaborator so that it uses
`unresolveNameGlobalAvoidingLocals`, rather than doing any local context
analysis itself. The `inPattern` logic has been removed; it was a
heuristic added back in #575, but it now leads to incorrect results (and
in `match` patterns, local names shadow constants in name resolution).
2025-06-03 23:37:35 +00:00
Cameron Zwarich
b9aefb4a50 feat: LCNF constant folding for Nat.nextPowerOfTwo (#8618)
This PR implements LCNF constant folding for `Nat.nextPowerOfTwo`.
2025-06-03 21:13:58 +00:00
Cameron Zwarich
9afe5ccae3 feat: LCNF constant folding for Nat.pow (#8616)
This PR adds constant folding for `Nat.pow` to the new compiler,
following the same limits as the old compiler.
2025-06-03 19:10:38 +00:00
Marc Huisinga
cb0284f98e feat: signature help (#8511)
This PR implements signature help support. When typing a function
application, editors with support for signature help will now display a
popup that designates the current (remaining) function type. This
removes the need to remember the function signature while typing the
function application, or to constantly cycle between hovering over
the function identifier and typing the application. In VS Code, the
signature help can be triggered manually using `Ctrl+Shift+Space`.


![Demo](https://github.com/user-attachments/assets/d1f6ed79-bb16-4593-8d28-68b1cce5d5dc)

### Other changes

- In order to support signature help for the partial syntax `f a <|` or
`f a $`, these notations now elaborate as `f a`, not `f a .missing`.
- The logic in `delabConstWithSignature` that delaborates parameters is
factored out into a function `delabForallParamsWithSignature` so that it
can be used for arbitrary `forall`s, not just constants.
- The `InfoTree` formatter is adjusted to produce output where it is
easier to identify the kind of `Info` in the `InfoTree`.
- A bug in `InfoTree.smallestInfo?` is fixed so that it doesn't panic
anymore when its predicate `p` does not ensure that both `pos?` and
`tailPos?` of the `Info` are present.
2025-06-03 17:26:33 +00:00
Cameron Zwarich
35e83066e6 feat: implement LCNF constant folding for toNat (#8614)
This PR implements constant folding for `toNat` in the new compiler,
which improves parity with the old compiler.
2025-06-03 17:12:15 +00:00
Sebastian Ullrich
ba847d41f1 chore: revise environment constant addition details (#8610)
* Move constant registration with elab env from `Lean.addDecl` to
`Lean.Environment.addDeclCore` for compatibility
* Make module system behavior independent of `Elab.async` value
2025-06-03 15:16:45 +00:00
Cameron Zwarich
f5e72d0962 feat: make guard_msgs.diff=true the default (#8596)
This PR makes `guard_msgs.diff=true` the default. The main usage of
`#guard_msgs` is for writing tests, and this makes staring at altered
test outputs considerably less tiring.
2025-06-03 15:13:15 +00:00
Sebastian Ullrich
536c87d73c chore: make test more robust 2025-06-03 16:11:09 +02:00
Sebastian Ullrich
c95e058e3c chore: fix tests after rebootstrap 2025-06-03 16:11:09 +02:00
Sebastian Ullrich
4746e38414 chore: update stage0 2025-06-03 16:11:09 +02:00
Sebastian Ullrich
f718f26200 feat: create private aux decls in private contexts 2025-06-03 15:53:05 +02:00
Marc Huisinga
184dbae130 feat: reusable rpc refs (#8105)
This PR adds support for server-side `RpcRef` reuse and fixes a bug
where trace nodes in the InfoView would close while the file was still
being processed.

The core of the trace node issue is that the server always serves new
RPC references in every single response to the client, which means that
the client is forced to reset its UI state.

In a previous attempt at fixing this (#8056), the server would memorize
the RPC-encoded JSON of interactive diagnostics (which includes RPC
references) and serve it for as long as it could reuse the snapshot
containing the diagnostics, so that RPC references are reused. The
problem with this was that the client then had multiple finalizers
registered for the same RPC reference (one for every reused RPC
reference that was served), and once the first reference was
garbage-collected, all other reused references would point into the
void.

This PR takes a different approach to resolve the issue: The meaning of
`$/lean/rpc/release` is relaxed from "Free the object pointed to by this
RPC reference" to "Decrement the RPC reference count of the object
pointed to by this RPC reference", and the server now maintains a
reference count to track how often a given `RpcRef` was served. Only
when every served instance of the `RpcRef` has been released is the
object freed. Additionally, the reuse mechanism is generalized from
being only supported for interactive diagnostics, to being supported for
any object using `WithRpcRef`. In order to make use of reusable RPC
references, downstream users still need to memorize the `WithRpcRef`
instances accordingly.

Closes #8053.

### Breaking changes

Since `WithRpcRef` is now capable of tracking its identity to decide
which `WithRpcRef` usage constitutes a reuse, the constructor of
`WithRpcRef` has been made `private` to discourage downstream users from
creating `WithRpcRef` instances with manually-set `id`s. Instead,
`WithRpcRef.mk` (which lives in `BaseIO`) is now the preferred way to
create `WithRpcRef` instances.
2025-06-03 12:35:12 +00:00
Kim Morrison
bc47aa180b feat: use grind to shorten some proofs in the LRAT checker (#8609)
This PR uses `grind` to shorten some proofs in the LRAT checker. The
intention is not particularly to improve the quality or maintainability
of these proofs (although hopefully this is a side effect), but just to
give `grind` a work out.

There are a number of remaining notes, either about places where `grind`
fails with an internal error (for which #8608 is hopefully
representative, and we can fix after that), or `omega` works but `grind`
doesn't (to be investigated later).

I have only used `grind` thoroughly in some of the files. In many files
I've just replaced leaves or branches of proofs with `grind` where it
worked easily, without setting up the internal annotations in the LRAT
library required to optimize the use of `grind`. It's diminishing
returns to do this in a proof library that is not high priority, so I've
simply drawn a line.
2025-06-03 08:38:57 +00:00
Kim Morrison
f7b6e155d4 chore: add failing grind test (#8608) 2025-06-03 07:45:38 +00:00
Kim Morrison
f4e86e310c chore: add failing grind test (unknown metavariable) (#8607) 2025-06-03 07:00:56 +00:00
Kim Morrison
5f0bdfcada chore: initial @[grind] annotations for Array/Vector.range (#8606) 2025-06-03 06:44:01 +00:00
Kim Morrison
0f4459b42c chore: add @[grind] annotations to Fin.getElem_fin (#8605) 2025-06-03 06:37:35 +00:00
Paul Reichert
55b89aaf38 feat: introduce drop iterator combinator (#8420)
This PR provides the iterator combinator `drop` that transforms any
iterator into one that drops the first `n` elements.

Additionally, the PR removes the specialized `IteratorLoop` instance on
`Take`. It currently does not have a `LawfulIteratorLoop` instance,
which needs to exist for the loop consumer lemmas to work. Having the
specialized instance is low priority.
2025-06-03 06:37:09 +00:00
Kim Morrison
9fc8713946 chore: grind annotations for getElem?_pos and variants (#8590)
This PR adds `@[grind]` to `getElem?_pos` and variants.

I'd initially thought these would result in too much case splitting, but
it seems to be only minor, and in use cases the payoff is good.
2025-06-03 06:17:05 +00:00
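With these annotations, goals relating `xs[i]?` and `xs[i]` under a bounds hypothesis should be within reach of `grind`; a sketch (not a test quoted from the PR):

```lean
example (xs : List Nat) (i : Nat) (h : i < xs.length) :
    xs[i]? = some xs[i] := by
  grind
```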
Cameron Zwarich
106411420b fix: support compiler.extract_closed option in the new compiler (#8604)
This PR adds support for the `compiler.extract_closed` option to the new
compiler, since this is used by the definition of `unsafeBaseIO`. We'll
revisit this once we switch to the new compiler and rethink its
relationship with IO.
2025-06-03 05:58:32 +00:00
1340 changed files with 18281 additions and 4435 deletions

View File

@@ -364,7 +364,7 @@ jobs:
with:
path: artifacts
- name: Release
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@da05d552573ad5aba039eaac05058a918a7bf631
with:
files: artifacts/*/*
fail_on_unmatched_files: true
@@ -408,7 +408,7 @@ jobs:
echo -e "\n*Full commit log*\n" >> diff.md
git log --oneline "$last_tag"..HEAD | sed 's/^/* /' >> diff.md
- name: Release Nightly
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@da05d552573ad5aba039eaac05058a918a7bf631
with:
body_path: diff.md
prerelease: true

View File

@@ -48,19 +48,30 @@ jobs:
git -C lean4.git remote add origin https://github.com/${{ github.repository_owner }}/lean4.git
git -C lean4.git fetch -n origin master
git -C lean4.git fetch -n origin "${{ steps.workflow-info.outputs.sourceHeadSha }}"
# Create both the original tag and the SHA-suffixed tag
SHORT_SHA="${{ steps.workflow-info.outputs.sourceHeadSha }}"
SHORT_SHA="${SHORT_SHA:0:7}"
# Export the short SHA for use in subsequent steps
echo "SHORT_SHA=${SHORT_SHA}" >> "$GITHUB_ENV"
git -C lean4.git tag -f pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }} "${{ steps.workflow-info.outputs.sourceHeadSha }}"
git -C lean4.git tag -f pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-"${SHORT_SHA}" "${{ steps.workflow-info.outputs.sourceHeadSha }}"
git -C lean4.git remote add pr-releases https://foo:'${{ secrets.PR_RELEASES_TOKEN }}'@github.com/${{ github.repository_owner }}/lean4-pr-releases.git
git -C lean4.git push -f pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}
git -C lean4.git push -f pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-"${SHORT_SHA}"
- name: Delete existing release if present
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
run: |
# Try to delete any existing release for the current PR.
# Try to delete any existing release for the current PR (just the version without the SHA suffix).
gh release delete --repo ${{ github.repository_owner }}/lean4-pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }} -y || true
env:
GH_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Release
- name: Release (short format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@da05d552573ad5aba039eaac05058a918a7bf631
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }}
# There are coredumps files here as well, but all in deeper subdirectories.
@@ -73,7 +84,22 @@ jobs:
# The token used here must have `workflow` privileges.
GITHUB_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Report release status
- name: Release (SHA-suffixed format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@da05d552573ad5aba039eaac05058a918a7bf631
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }} (${{ steps.workflow-info.outputs.sourceHeadSha }})
# There are coredumps files here as well, but all in deeper subdirectories.
files: artifacts/*/*
fail_on_unmatched_files: true
draft: false
tag_name: pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}
repository: ${{ github.repository_owner }}/lean4-pr-releases
env:
# The token used here must have `workflow` privileges.
GITHUB_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Report release status (short format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v7
with:
@@ -87,6 +113,20 @@ jobs:
description: "${{ github.repository_owner }}/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}",
});
- name: Report release status (SHA-suffixed format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v7
with:
script: |
await github.rest.repos.createCommitStatus({
owner: context.repo.owner,
repo: context.repo.repo,
sha: "${{ steps.workflow-info.outputs.sourceHeadSha }}",
state: "success",
context: "PR toolchain (SHA-suffixed)",
description: "${{ github.repository_owner }}/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}",
});
- name: Add label
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v7
@@ -282,16 +322,18 @@ jobs:
if [ "$EXISTS" = "0" ]; then
echo "Branch does not exist, creating it."
git switch -c lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} "$BASE"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}" > lean-toolchain
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
git commit -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
else
echo "Branch already exists, pushing an empty commit."
echo "Branch already exists, updating lean-toolchain."
git switch lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
# The Batteries `nightly-testing` or `nightly-testing-YYYY-MM-DD` branch may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
git merge "$BASE" --strategy-option ours --no-commit --allow-unrelated-histories
git commit --allow-empty -m "Trigger CI for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
git commit -m "Update lean-toolchain for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
fi
- name: Push changes
@@ -346,21 +388,23 @@ jobs:
if [ "$EXISTS" = "0" ]; then
echo "Branch does not exist, creating it."
git switch -c lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} "$BASE"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}" > lean-toolchain
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
sed -i 's,require "leanprover-community" / "batteries" @ git ".\+",require "leanprover-community" / "batteries" @ git "lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}",' lakefile.lean
lake update batteries
git add lakefile.lean lake-manifest.json
git commit -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
else
echo "Branch already exists, merging $BASE and bumping Batteries."
echo "Branch already exists, updating lean-toolchain and bumping Batteries."
git switch lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
# The Mathlib `nightly-testing` branch or `nightly-testing-YYYY-MM-DD` tag may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
git merge "$BASE" --strategy-option ours --no-commit --allow-unrelated-histories
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
lake update batteries
git add lake-manifest.json
git commit --allow-empty -m "Trigger CI for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
git commit -m "Update lean-toolchain for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
fi
- name: Push changes

View File

@@ -50,7 +50,7 @@ We'll use `v4.6.0` as the intended release version as a running example.
- Re-running `script/release_checklist.py` will then create the tag `v4.6.0` from `master`/`main` and push it (unless `toolchain-tag: false` in the `release_repos.yml` file)
- `script/release_checklist.py` will then merge the tag `v4.6.0` into the `stable` branch and push it (unless `stable-branch: false` in the `release_repos.yml` file).
- Special notes on repositories with exceptional requirements:
- `doc-gen4` has addition dependencies which we do not update at each toolchain release, although occasionally these break and need to be updated manually.
- `doc-gen4` has additional dependencies which we do not update at each toolchain release, although occasionally these break and need to be updated manually.
- `verso`:
- The `subverso` dependency is unusual in that it needs to be compatible with _every_ Lean release simultaneously.
Usually you don't need to do anything.
@@ -94,6 +94,8 @@ We'll use `v4.6.0` as the intended release version as a running example.
This checklist walks you through creating the first release candidate for a version of Lean.
For subsequent release candidates, the process is essentially the same, but we start out with the `releases/v4.7.0` branch already created.
We'll use `v4.7.0-rc1` as the intended release version in this example.
- Decide which nightly release you want to turn into a release candidate.
@@ -112,7 +114,7 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.
git fetch nightly tag nightly-2024-02-29
git checkout nightly-2024-02-29
git checkout -b releases/v4.7.0
git push --set-upstream origin releases/v4.18.0
git push --set-upstream origin releases/v4.7.0
```
- In `src/CMakeLists.txt`,
- verify that you see `set(LEAN_VERSION_MINOR 7)` (for whichever `7` is appropriate); this should already have been updated when the development cycle began.

9
script/bench.sh Executable file
View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
# We benchmark against stage 2 to test new optimizations.
timeout -s KILL 1h time bash -c 'mkdir -p build/release; cd build/release; cmake ../.. && make -j$(nproc) stage2' 1>&2
export PATH=$PWD/build/release/stage2/bin:$PATH
cd tests/bench
timeout -s KILL 1h time temci exec --config speedcenter.yaml --in speedcenter.exec.velcom.yaml 1>&2
temci report run_output.yaml --reporter codespeed2

View File

@@ -53,6 +53,23 @@ def tag_exists(repo_url, tag_name, github_token):
matching_tags = response.json()
return any(tag["ref"] == f"refs/tags/{tag_name}" for tag in matching_tags)
def commit_hash_for_tag(repo_url, tag_name, github_token):
# Use /git/matching-refs/tags/ to get all matching tags
api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/git/matching-refs/tags/{tag_name}"
headers = {'Authorization': f'token {github_token}'} if github_token else {}
response = requests.get(api_url, headers=headers)
if response.status_code != 200:
return None
# Check if any of the returned refs exactly match our tag
matching_tags = response.json()
matching_commits = [tag["object"]["sha"] for tag in matching_tags if tag["ref"] == f"refs/tags/{tag_name}"]
if len(matching_commits) != 1:
return None
else:
return matching_commits[0]
def release_page_exists(repo_url, tag_name, github_token):
api_url = repo_url.replace("https://github.com/", "https://api.github.com/repos/") + f"/releases/tags/{tag_name}"
headers = {'Authorization': f'token {github_token}'} if github_token else {}
@@ -286,6 +303,14 @@ def main():
lean4_success = False
else:
print(f" ✅ Tag {toolchain} exists")
commit_hash = commit_hash_for_tag(lean_repo_url, toolchain, github_token)
SHORT_HASH_LENGTH = 7 # Lake abbreviates the Lean commit to 7 characters.
if commit_hash is None:
print(f" ❌ Could not resolve tag {toolchain} to a commit.")
lean4_success = False
elif commit_hash[0] == '0' and commit_hash[:SHORT_HASH_LENGTH].isnumeric():
print(f" ❌ Short commit hash {commit_hash[:SHORT_HASH_LENGTH]} is numeric and starts with 0, causing issues for version parsing. Try regenerating the last commit to get a new hash.")
lean4_success = False
if not release_page_exists(lean_repo_url, toolchain, github_token):
print(f" ❌ Release page for {toolchain} does not exist")

View File

@@ -94,6 +94,7 @@ def generate_script(repo, version, config):
"echo 'This repo has nightly-testing infrastructure'",
f"git merge origin/bump/{version.split('-rc')[0]}",
"echo 'Please resolve any conflicts.'",
"grep nightly-testing lakefile.* && echo 'Please ensure the lakefile does not include nightly-testing versions.'",
""
])
if re.search(r'rc\d+$', version) and repo_name in ["verso", "reference-manual"]:

View File

@@ -10,7 +10,7 @@ endif()
include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
-set(LEAN_VERSION_MINOR 21)
+set(LEAN_VERSION_MINOR 22)
set(LEAN_VERSION_PATCH 0)
set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")


@@ -7,6 +7,7 @@ module
prelude
-import Init.Prelude
+meta import Init.Prelude
set_option linter.missingDocs true -- keep it documented
/-!


@@ -148,7 +148,7 @@ attribute [simp] pure_bind bind_assoc bind_pure_comp
attribute [grind] pure_bind
@[simp] theorem bind_pure [Monad m] [LawfulMonad m] (x : m α) : x >>= pure = x := by
-show x >>= (fun a => pure (id a)) = x
+change x >>= (fun a => pure (id a)) = x
rw [bind_pure_comp, id_map]
/--


@@ -58,7 +58,7 @@ protected theorem bind_pure_comp [Monad m] (f : α → β) (x : ExceptT ε m α)
intros; rfl
protected theorem seqLeft_eq {α β ε : Type u} {m : Type u → Type v} [Monad m] [LawfulMonad m] (x : ExceptT ε m α) (y : ExceptT ε m β) : x <* y = const β <$> x <*> y := by
-show (x >>= fun a => y >>= fun _ => pure a) = (const (α := α) β <$> x) >>= fun f => f <$> y
+change (x >>= fun a => y >>= fun _ => pure a) = (const (α := α) β <$> x) >>= fun f => f <$> y
rw [← ExceptT.bind_pure_comp]
apply ext
simp [run_bind]
@@ -70,7 +70,7 @@ protected theorem seqLeft_eq {α β ε : Type u} {m : Type u → Type v} [Monad
cases b <;> simp [comp, Except.map, const]
protected theorem seqRight_eq [Monad m] [LawfulMonad m] (x : ExceptT ε m α) (y : ExceptT ε m β) : x *> y = const α id <$> x <*> y := by
-show (x >>= fun _ => y) = (const α id <$> x) >>= fun f => f <$> y
+change (x >>= fun _ => y) = (const α id <$> x) >>= fun f => f <$> y
rw [← ExceptT.bind_pure_comp]
apply ext
simp [run_bind]
@@ -206,15 +206,15 @@ theorem run_bind_lift {α σ : Type u} [Monad m] [LawfulMonad m] (x : m α) (f :
(monadMap @f x : StateT σ m α).run s = monadMap @f (x.run s) := rfl
@[simp] theorem run_seq {α β σ : Type u} [Monad m] [LawfulMonad m] (f : StateT σ m (α → β)) (x : StateT σ m α) (s : σ) : (f <*> x).run s = (f.run s >>= fun fs => (fun (p : α × σ) => (fs.1 p.1, p.2)) <$> x.run fs.2) := by
-show (f >>= fun g => g <$> x).run s = _
+change (f >>= fun g => g <$> x).run s = _
simp
@[simp] theorem run_seqRight [Monad m] (x : StateT σ m α) (y : StateT σ m β) (s : σ) : (x *> y).run s = (x.run s >>= fun p => y.run p.2) := by
-show (x >>= fun _ => y).run s = _
+change (x >>= fun _ => y).run s = _
simp
@[simp] theorem run_seqLeft {α β σ : Type u} [Monad m] (x : StateT σ m α) (y : StateT σ m β) (s : σ) : (x <* y).run s = (x.run s >>= fun p => y.run p.2 >>= fun p' => pure (p.1, p'.2)) := by
-show (x >>= fun a => y >>= fun _ => pure a).run s = _
+change (x >>= fun a => y >>= fun _ => pure a).run s = _
simp
theorem seqRight_eq [Monad m] [LawfulMonad m] (x : StateT σ m α) (y : StateT σ m β) : x *> y = const α id <$> x <*> y := by


@@ -9,7 +9,7 @@ module
prelude
import Init.Tactics
-import Init.Meta
+meta import Init.Meta
namespace Lean.Parser.Tactic.Conv


@@ -8,7 +8,7 @@ notation, basic datatypes and type classes
module
prelude
-import Init.Prelude
+meta import Init.Prelude
import Init.SizeOf
set_option linter.missingDocs true -- keep it documented
@@ -2252,7 +2252,7 @@ theorem funext {α : Sort u} {β : α → Sort v} {f g : (x : α) → β x}
Quot.liftOn f
(fun (f : (x : α) → β x) => f x)
(fun _ _ h => h x)
-show extfunApp (Quot.mk eqv f) = extfunApp (Quot.mk eqv g)
+change extfunApp (Quot.mk eqv f) = extfunApp (Quot.mk eqv g)
exact congrArg extfunApp (Quot.sound h)
/--


@@ -246,7 +246,7 @@ def swap (xs : Array α) (i j : @& Nat) (hi : i < xs.size := by get_elem_tactic)
xs'.set j v₁ (Nat.lt_of_lt_of_eq hj (size_set _).symm)
@[simp] theorem size_swap {xs : Array α} {i j : Nat} {hi hj} : (xs.swap i j hi hj).size = xs.size := by
-show ((xs.set i xs[j]).set j xs[i]
+change ((xs.set i xs[j]).set j xs[i]
(Nat.lt_of_lt_of_eq hj (size_set _).symm)).size = xs.size
rw [size_set, size_set]


@@ -24,6 +24,7 @@ open Nat
/-! ### eraseP -/
@[grind =]
theorem eraseP_empty : #[].eraseP p = #[] := by simp
theorem eraseP_of_forall_mem_not {xs : Array α} (h : ∀ a, a ∈ xs → ¬p a) : xs.eraseP p = xs := by
@@ -64,6 +65,7 @@ theorem exists_or_eq_self_of_eraseP (p) (xs : Array α) :
let ⟨_, ys, zs, _, _, e₁, e₂⟩ := exists_of_eraseP al pa
rw [e₂]; simp [size_append, e₁]
@[grind =]
theorem size_eraseP {xs : Array α} : (xs.eraseP p).size = if xs.any p then xs.size - 1 else xs.size := by
split <;> rename_i h
· simp only [any_eq_true] at h
@@ -81,11 +83,12 @@ theorem le_size_eraseP {xs : Array α} : xs.size - 1 ≤ (xs.eraseP p).size := b
rcases xs with xs
simpa using List.le_length_eraseP
@[grind →]
theorem mem_of_mem_eraseP {xs : Array α} : a ∈ xs.eraseP p → a ∈ xs := by
rcases xs with xs
simpa using List.mem_of_mem_eraseP
-@[simp] theorem mem_eraseP_of_neg {xs : Array α} (pa : ¬p a) : a ∈ xs.eraseP p ↔ a ∈ xs := by
+@[simp, grind] theorem mem_eraseP_of_neg {xs : Array α} (pa : ¬p a) : a ∈ xs.eraseP p ↔ a ∈ xs := by
rcases xs with xs
simpa using List.mem_eraseP_of_neg pa
@@ -93,15 +96,18 @@ theorem mem_of_mem_eraseP {xs : Array α} : a ∈ xs.eraseP p → a ∈ xs := by
rcases xs with xs
simp
@[grind _=_]
theorem eraseP_map {f : β → α} {xs : Array β} : (xs.map f).eraseP p = (xs.eraseP (p ∘ f)).map f := by
rcases xs with xs
simpa using List.eraseP_map
@[grind =]
theorem eraseP_filterMap {f : α → Option β} {xs : Array α} :
(filterMap f xs).eraseP p = filterMap f (xs.eraseP (fun x => match f x with | some y => p y | none => false)) := by
rcases xs with xs
simpa using List.eraseP_filterMap
@[grind =]
theorem eraseP_filter {f : α → Bool} {xs : Array α} :
(filter f xs).eraseP p = filter f (xs.eraseP (fun x => p x && f x)) := by
rcases xs with xs
@@ -119,6 +125,7 @@ theorem eraseP_append_right {xs : Array α} ys (h : ∀ b ∈ xs, ¬p b) :
rcases ys with ys
simpa using List.eraseP_append_right ys (by simpa using h)
@[grind =]
theorem eraseP_append {xs : Array α} {ys : Array α} :
(xs ++ ys).eraseP p = if xs.any p then xs.eraseP p ++ ys else xs ++ ys.eraseP p := by
rcases xs with xs
@@ -126,6 +133,7 @@ theorem eraseP_append {xs : Array α} {ys : Array α} :
simp only [List.append_toArray, List.eraseP_toArray, List.eraseP_append, List.any_toArray]
split <;> simp
@[grind =]
theorem eraseP_replicate {n : Nat} {a : α} {p : α → Bool} :
(replicate n a).eraseP p = if p a then replicate (n - 1) a else replicate n a := by
simp only [← List.toArray_replicate, List.eraseP_toArray, List.eraseP_replicate]
@@ -165,6 +173,7 @@ theorem eraseP_eq_iff {p} {xs : Array α} :
· exact Or.inl h
· exact Or.inr a, l₁, by simpa using h₁, h₂, l, by simp
@[grind =]
theorem eraseP_comm {xs : Array α} (h : ∀ a ∈ xs, ¬ p a ∨ ¬ q a) :
(xs.eraseP p).eraseP q = (xs.eraseP q).eraseP p := by
rcases xs with xs
@@ -208,6 +217,7 @@ theorem exists_erase_eq [LawfulBEq α] {a : α} {xs : Array α} (h : a ∈ xs) :
(xs.erase a).size = xs.size - 1 := by
rw [erase_eq_eraseP]; exact size_eraseP_of_mem h (beq_self_eq_true a)
@[grind =]
theorem size_erase [LawfulBEq α] {a : α} {xs : Array α} :
(xs.erase a).size = if a ∈ xs then xs.size - 1 else xs.size := by
rw [erase_eq_eraseP, size_eraseP]
@@ -222,11 +232,12 @@ theorem le_size_erase [LawfulBEq α] {a : α} {xs : Array α} : xs.size - 1 ≤
rcases xs with xs
simpa using List.le_length_erase
@[grind →]
theorem mem_of_mem_erase {a b : α} {xs : Array α} (h : a ∈ xs.erase b) : a ∈ xs := by
rcases xs with xs
simpa using List.mem_of_mem_erase (by simpa using h)
-@[simp] theorem mem_erase_of_ne [LawfulBEq α] {a b : α} {xs : Array α} (ab : a ≠ b) :
+@[simp, grind] theorem mem_erase_of_ne [LawfulBEq α] {a b : α} {xs : Array α} (ab : a ≠ b) :
a ∈ xs.erase b ↔ a ∈ xs :=
erase_eq_eraseP b xs mem_eraseP_of_neg (mt eq_of_beq ab.symm)
@@ -234,6 +245,7 @@ theorem mem_of_mem_erase {a b : α} {xs : Array α} (h : a ∈ xs.erase b) : a
rw [erase_eq_eraseP', eraseP_eq_self_iff]
simp [forall_mem_ne']
@[grind _=_]
theorem erase_filter [LawfulBEq α] {f : α → Bool} {xs : Array α} :
(filter f xs).erase a = filter f (xs.erase a) := by
rcases xs with xs
@@ -251,6 +263,7 @@ theorem erase_append_right [LawfulBEq α] {a : α} {xs : Array α} (ys : Array
rcases ys with ys
simpa using List.erase_append_right ys (by simpa using h)
@[grind =]
theorem erase_append [LawfulBEq α] {a : α} {xs ys : Array α} :
(xs ++ ys).erase a = if a ∈ xs then xs.erase a ++ ys else xs ++ ys.erase a := by
rcases xs with xs
@@ -258,6 +271,7 @@ theorem erase_append [LawfulBEq α] {a : α} {xs ys : Array α} :
simp only [List.append_toArray, List.erase_toArray, List.erase_append, mem_toArray]
split <;> simp
@[grind =]
theorem erase_replicate [LawfulBEq α] {n : Nat} {a b : α} :
(replicate n a).erase b = if b == a then replicate (n - 1) a else replicate n a := by
simp only [← List.toArray_replicate, List.erase_toArray]
@@ -269,6 +283,7 @@ abbrev erase_mkArray := @erase_replicate
-- The arguments `a b` are explicit,
-- so they can be specified to prevent `simp` repeatedly applying the lemma.
@[grind =]
theorem erase_comm [LawfulBEq α] (a b : α) {xs : Array α} :
(xs.erase a).erase b = (xs.erase b).erase a := by
rcases xs with xs
@@ -312,6 +327,7 @@ theorem eraseIdx_eq_eraseIdxIfInBounds {xs : Array α} {i : Nat} (h : i < xs.siz
xs.eraseIdx i h = xs.eraseIdxIfInBounds i := by
simp [eraseIdxIfInBounds, h]
@[grind =]
theorem eraseIdx_eq_take_drop_succ {xs : Array α} {i : Nat} (h) :
xs.eraseIdx i h = xs.take i ++ xs.drop (i + 1) := by
rcases xs with xs
@@ -322,6 +338,7 @@ theorem eraseIdx_eq_take_drop_succ {xs : Array α} {i : Nat} (h) :
rw [List.take_of_length_le]
simp
@[grind =]
theorem getElem?_eraseIdx {xs : Array α} {i : Nat} (h : i < xs.size) {j : Nat} :
(xs.eraseIdx i)[j]? = if j < i then xs[j]? else xs[j + 1]? := by
rcases xs with xs
@@ -339,6 +356,7 @@ theorem getElem?_eraseIdx_of_ge {xs : Array α} {i : Nat} (h : i < xs.size) {j :
intro h'
omega
@[grind =]
theorem getElem_eraseIdx {xs : Array α} {i : Nat} (h : i < xs.size) {j : Nat} (h' : j < (xs.eraseIdx i).size) :
(xs.eraseIdx i)[j] = if h'' : j < i then
xs[j]
@@ -362,6 +380,7 @@ theorem eraseIdx_ne_empty_iff {xs : Array α} {i : Nat} {h} : xs.eraseIdx i ≠
simp [h]
· simp
@[grind →]
theorem mem_of_mem_eraseIdx {xs : Array α} {i : Nat} {h} {a : α} (h : a ∈ xs.eraseIdx i) : a ∈ xs := by
rcases xs with xs
simpa using List.mem_of_mem_eraseIdx (by simpa using h)
@@ -373,13 +392,29 @@ theorem eraseIdx_append_of_lt_size {xs : Array α} {k : Nat} (hk : k < xs.size)
simp at hk
simp [List.eraseIdx_append_of_lt_length, *]
-theorem eraseIdx_append_of_length_le {xs : Array α} {k : Nat} (hk : xs.size ≤ k) (ys : Array α) (h) :
+theorem eraseIdx_append_of_size_le {xs : Array α} {k : Nat} (hk : xs.size ≤ k) (ys : Array α) (h) :
eraseIdx (xs ++ ys) k = xs ++ eraseIdx ys (k - xs.size) (by simp at h; omega) := by
rcases xs with l
rcases ys with l'
simp at hk
simp [List.eraseIdx_append_of_length_le, *]
@[deprecated eraseIdx_append_of_size_le (since := "2025-06-11")]
abbrev eraseIdx_append_of_length_le := @eraseIdx_append_of_size_le
@[grind =]
theorem eraseIdx_append {xs ys : Array α} (h : k < (xs ++ ys).size) :
eraseIdx (xs ++ ys) k =
if h' : k < xs.size then
eraseIdx xs k ++ ys
else
xs ++ eraseIdx ys (k - xs.size) (by simp at h; omega) := by
split <;> rename_i h
· simp [eraseIdx_append_of_lt_size h]
· rw [eraseIdx_append_of_size_le]
omega
@[grind =]
theorem eraseIdx_replicate {n : Nat} {a : α} {k : Nat} {h} :
(replicate n a).eraseIdx k = replicate (n - 1) a := by
simp at h
@@ -428,6 +463,48 @@ theorem eraseIdx_set_gt {xs : Array α} {i : Nat} {j : Nat} {a : α} (h : i < j)
rcases xs with xs
simp [List.eraseIdx_set_gt, *]
@[grind =]
theorem eraseIdx_set {xs : Array α} {i : Nat} {a : α} {hi : i < xs.size} {j : Nat} {hj : j < (xs.set i a).size} :
(xs.set i a).eraseIdx j =
if h' : j < i then
(xs.eraseIdx j).set (i - 1) a (by simp; omega)
else if h'' : j = i then
xs.eraseIdx i
else
(xs.eraseIdx j (by simp at hj; omega)).set i a (by simp at hj ; omega) := by
split <;> rename_i h'
· rw [eraseIdx_set_lt]
omega
· split <;> rename_i h''
· subst h''
rw [eraseIdx_set_eq]
· rw [eraseIdx_set_gt]
omega
theorem set_eraseIdx_le {xs : Array α} {i : Nat} {w : i < xs.size} {j : Nat} {a : α} (h : i ≤ j) (hj : j < (xs.eraseIdx i).size) :
(xs.eraseIdx i).set j a = (xs.set (j + 1) a (by simp at hj; omega)).eraseIdx i (by simp at ; omega) := by
rw [eraseIdx_set_lt]
· simp
· omega
theorem set_eraseIdx_gt {xs : Array α} {i : Nat} {w : i < xs.size} {j : Nat} {a : α} (h : j < i) (hj : j < (xs.eraseIdx i).size) :
(xs.eraseIdx i).set j a = (xs.set j a).eraseIdx i (by simp at ; omega) := by
rw [eraseIdx_set_gt]
omega
@[grind =]
theorem set_eraseIdx {xs : Array α} {i : Nat} {w : i < xs.size} {j : Nat} {a : α} (hj : j < (xs.eraseIdx i).size) :
(xs.eraseIdx i).set j a =
if h' : i ≤ j then
(xs.set (j + 1) a (by simp at hj; omega)).eraseIdx i (by simp at ; omega)
else
(xs.set j a).eraseIdx i (by simp at ; omega) := by
split <;> rename_i h'
· rw [set_eraseIdx_le]
omega
· rw [set_eraseIdx_gt]
omega
@[simp] theorem set_getElem_succ_eraseIdx_succ
{xs : Array α} {i : Nat} (h : i + 1 < xs.size) :
(xs.eraseIdx (i + 1)).set i xs[i + 1] (by simp; omega) = xs.eraseIdx i := by


@@ -23,10 +23,10 @@ Examples:
-/
protected def finRange (n : Nat) : Array (Fin n) := ofFn fun i => i
-@[simp] theorem size_finRange {n} : (Array.finRange n).size = n := by
+@[simp, grind =] theorem size_finRange {n} : (Array.finRange n).size = n := by
simp [Array.finRange]
-@[simp] theorem getElem_finRange {i : Nat} (h : i < (Array.finRange n).size) :
+@[simp, grind =] theorem getElem_finRange {i : Nat} (h : i < (Array.finRange n).size) :
(Array.finRange n)[i] = Fin.cast size_finRange ⟨i, h⟩ := by
simp [Array.finRange]
@@ -49,6 +49,7 @@ theorem finRange_succ_last {n} :
· simp_all
omega
@[grind _=_]
theorem finRange_reverse {n} : (Array.finRange n).reverse = (Array.finRange n).map Fin.rev := by
ext i h
· simp


@@ -38,11 +38,22 @@ theorem findSome?_singleton {a : α} {f : α → Option β} : #[a].findSome? f =
@[simp] theorem findSomeRev?_push_of_isNone {xs : Array α} (h : (f a).isNone) : (xs.push a).findSomeRev? f = xs.findSomeRev? f := by
cases xs; simp_all
@[grind =]
theorem findSomeRev?_push {xs : Array α} {a : α} {f : α → Option β} :
(xs.push a).findSomeRev? f = (f a).or (xs.findSomeRev? f) := by
match h : f a with
| some b =>
rw [findSomeRev?_push_of_isSome]
all_goals simp_all
| none =>
rw [findSomeRev?_push_of_isNone]
all_goals simp_all
theorem exists_of_findSome?_eq_some {f : α → Option β} {xs : Array α} (w : xs.findSome? f = some b) :
∃ a, a ∈ xs ∧ f a = some b := by
cases xs; simp_all [List.exists_of_findSome?_eq_some]
-@[simp] theorem findSome?_eq_none_iff : findSome? p xs = none ↔ ∀ x ∈ xs, p x = none := by
+@[simp, grind =] theorem findSome?_eq_none_iff : findSome? p xs = none ↔ ∀ x ∈ xs, p x = none := by
cases xs; simp
@[simp] theorem findSome?_isSome_iff {f : α → Option β} {xs : Array α} :
@@ -59,36 +70,39 @@ theorem findSome?_eq_some_iff {f : α → Option β} {xs : Array α} {b : β} :
· rintro xs, a, ys, h₀, h₁, h₂
exact xs.toList, a, ys.toList, by simpa using congrArg toList h₀, h₁, by simpa
-@[simp] theorem findSome?_guard {xs : Array α} : findSome? (Option.guard fun x => p x) xs = find? p xs := by
+@[simp, grind =] theorem findSome?_guard {xs : Array α} : findSome? (Option.guard p) xs = find? p xs := by
cases xs; simp
-theorem find?_eq_findSome?_guard {xs : Array α} : find? p xs = findSome? (Option.guard fun x => p x) xs :=
+theorem find?_eq_findSome?_guard {xs : Array α} : find? p xs = findSome? (Option.guard p) xs :=
findSome?_guard.symm
-@[simp] theorem getElem?_zero_filterMap {f : α → Option β} {xs : Array α} : (xs.filterMap f)[0]? = xs.findSome? f := by
+@[simp, grind =] theorem getElem?_zero_filterMap {f : α → Option β} {xs : Array α} : (xs.filterMap f)[0]? = xs.findSome? f := by
cases xs; simp [← List.head?_eq_getElem?]
-@[simp] theorem getElem_zero_filterMap {f : α → Option β} {xs : Array α} (h) :
+@[simp, grind =] theorem getElem_zero_filterMap {f : α → Option β} {xs : Array α} (h) :
(xs.filterMap f)[0] = (xs.findSome? f).get (by cases xs; simpa [List.length_filterMap_eq_countP] using h) := by
cases xs; simp [← List.head_eq_getElem, getElem?_zero_filterMap]
-@[simp] theorem back?_filterMap {f : α → Option β} {xs : Array α} : (xs.filterMap f).back? = xs.findSomeRev? f := by
+@[simp, grind =] theorem back?_filterMap {f : α → Option β} {xs : Array α} : (xs.filterMap f).back? = xs.findSomeRev? f := by
cases xs; simp
-@[simp] theorem back!_filterMap [Inhabited β] {f : α → Option β} {xs : Array α} :
+@[simp, grind =] theorem back!_filterMap [Inhabited β] {f : α → Option β} {xs : Array α} :
(xs.filterMap f).back! = (xs.findSomeRev? f).getD default := by
cases xs; simp
-@[simp] theorem map_findSome? {f : α → Option β} {g : β → γ} {xs : Array α} :
+@[simp, grind _=_] theorem map_findSome? {f : α → Option β} {g : β → γ} {xs : Array α} :
(xs.findSome? f).map g = xs.findSome? (Option.map g ∘ f) := by
cases xs; simp
@[grind _=_]
theorem findSome?_map {f : β → γ} {xs : Array β} : findSome? p (xs.map f) = xs.findSome? (p ∘ f) := by
cases xs; simp [List.findSome?_map]
@[grind =]
theorem findSome?_append {xs ys : Array α} : (xs ++ ys).findSome? f = (xs.findSome? f).or (ys.findSome? f) := by
cases xs; cases ys; simp [List.findSome?_append]
@[grind =]
theorem getElem?_zero_flatten (xss : Array (Array α)) :
(flatten xss)[0]? = xss.findSome? fun xs => xs[0]? := by
cases xss using array₂_induction
@@ -104,12 +118,14 @@ theorem getElem_zero_flatten.proof {xss : Array (Array α)} (h : 0 < xss.flatten
obtain ⟨_, xs, m, rfl, h⟩ := h
exact ⟨xs, m, by simpa using h⟩
@[grind =]
theorem getElem_zero_flatten {xss : Array (Array α)} (h) :
(flatten xss)[0] = (xss.findSome? fun xs => xs[0]?).get (getElem_zero_flatten.proof h) := by
have t := getElem?_zero_flatten xss
simp [getElem?_eq_getElem, h] at t
simp [← t]
@[grind =]
theorem findSome?_replicate : findSome? f (replicate n a) = if n = 0 then none else f a := by
simp [← List.toArray_replicate, List.findSome?_replicate]
@@ -140,8 +156,9 @@ abbrev findSome?_mkArray_of_isNone := @findSome?_replicate_of_isNone
/-! ### find? -/
-@[simp] theorem find?_empty : find? p #[] = none := rfl
+@[simp, grind =] theorem find?_empty : find? p #[] = none := rfl
@[grind =]
theorem find?_singleton {a : α} {p : α → Bool} :
#[a].find? p = if p a then some a else none := by
simp
@@ -150,11 +167,26 @@ theorem find?_singleton {a : α} {p : α → Bool} :
findRev? p (xs.push a) = some a := by
cases xs; simp [h]
-@[simp] theorem findRev?_cons_of_neg {xs : Array α} (h : ¬p a) :
+@[simp] theorem findRev?_push_of_neg {xs : Array α} (h : ¬p a) :
findRev? p (xs.push a) = findRev? p xs := by
cases xs; simp [h]
-@[simp] theorem find?_eq_none : find? p xs = none ↔ ∀ x ∈ xs, ¬ p x := by
@[deprecated findRev?_push_of_neg (since := "2025-06-12")]
abbrev findRev?_cons_of_neg := @findRev?_push_of_neg
@[grind =]
theorem finRev?_push {xs : Array α} :
findRev? p (xs.push a) = (Option.guard p a).or (xs.findRev? p) := by
cases h : p a
· rw [findRev?_push_of_neg, Option.guard_eq_none_iff.mpr h]
all_goals simp [h]
· rw [findRev?_push_of_pos, Option.guard_eq_some_iff.mpr rfl, h]
all_goals simp [h]
@[deprecated finRev?_push (since := "2025-06-12")]
abbrev findRev?_cons := @finRev?_push
+@[simp, grind =] theorem find?_eq_none : find? p xs = none ↔ ∀ x ∈ xs, ¬ p x := by
cases xs; simp
theorem find?_eq_some_iff_append {xs : Array α} :
@@ -178,60 +210,63 @@ theorem find?_push_eq_some {xs : Array α} :
(xs.push a).find? p = some b ↔ xs.find? p = some b ∨ (xs.find? p = none ∧ (p a ∧ a = b)) := by
cases xs; simp
-@[simp] theorem find?_isSome {xs : Array α} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
+@[simp, grind =] theorem find?_isSome {xs : Array α} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
cases xs; simp
@[grind →]
theorem find?_some {xs : Array α} (h : find? p xs = some a) : p a := by
cases xs
simp at h
exact List.find?_some h
@[grind →]
theorem mem_of_find?_eq_some {xs : Array α} (h : find? p xs = some a) : a ∈ xs := by
cases xs
simp at h
simpa using List.mem_of_find?_eq_some h
@[grind]
theorem get_find?_mem {xs : Array α} (h) : (xs.find? p).get h ∈ xs := by
cases xs
simp [List.get_find?_mem]
-@[simp] theorem find?_filter {xs : Array α} (p q : α → Bool) :
+@[simp, grind =] theorem find?_filter {xs : Array α} (p q : α → Bool) :
(xs.filter p).find? q = xs.find? (fun a => p a && q a) := by
cases xs; simp
-@[simp] theorem getElem?_zero_filter {p : α → Bool} {xs : Array α} :
+@[simp, grind =] theorem getElem?_zero_filter {p : α → Bool} {xs : Array α} :
(xs.filter p)[0]? = xs.find? p := by
cases xs; simp [← List.head?_eq_getElem?]
-@[simp] theorem getElem_zero_filter {p : α → Bool} {xs : Array α} (h) :
+@[simp, grind =] theorem getElem_zero_filter {p : α → Bool} {xs : Array α} (h) :
(xs.filter p)[0] =
(xs.find? p).get (by cases xs; simpa [← List.countP_eq_length_filter] using h) := by
cases xs
simp [List.getElem_zero_eq_head]
-@[simp] theorem back?_filter {p : α → Bool} {xs : Array α} : (xs.filter p).back? = xs.findRev? p := by
+@[simp, grind =] theorem back?_filter {p : α → Bool} {xs : Array α} : (xs.filter p).back? = xs.findRev? p := by
cases xs; simp
-@[simp] theorem back!_filter [Inhabited α] {p : α → Bool} {xs : Array α} :
+@[simp, grind =] theorem back!_filter [Inhabited α] {p : α → Bool} {xs : Array α} :
(xs.filter p).back! = (xs.findRev? p).get! := by
cases xs; simp [Option.get!_eq_getD]
-@[simp] theorem find?_filterMap {xs : Array α} {f : α → Option β} {p : β → Bool} :
+@[simp, grind =] theorem find?_filterMap {xs : Array α} {f : α → Option β} {p : β → Bool} :
(xs.filterMap f).find? p = (xs.find? (fun a => (f a).any p)).bind f := by
cases xs; simp
-@[simp] theorem find?_map {f : β → α} {xs : Array β} :
+@[simp, grind =] theorem find?_map {f : β → α} {xs : Array β} :
find? p (xs.map f) = (xs.find? (p ∘ f)).map f := by
cases xs; simp
-@[simp] theorem find?_append {xs ys : Array α} :
+@[simp, grind =] theorem find?_append {xs ys : Array α} :
(xs ++ ys).find? p = (xs.find? p).or (ys.find? p) := by
cases xs
cases ys
simp
-@[simp] theorem find?_flatten {xss : Array (Array α)} {p : α → Bool} :
-xss.flatten.find? p = xss.findSome? (·.find? p) := by
+@[simp, grind _=_] theorem find?_flatten {xss : Array (Array α)} {p : α → Bool} :
+xss.flatten.find? p = xss.findSome? (find? p) := by
cases xss using array₂_induction
simp [List.findSome?_map, Function.comp_def]
@@ -270,7 +305,7 @@ theorem find?_flatten_eq_some_iff {xss : Array (Array α)} {p : α → Bool} {a
@[deprecated find?_flatten_eq_some_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_some := @find?_flatten_eq_some_iff
-@[simp] theorem find?_flatMap {xs : Array α} {f : α → Array β} {p : β → Bool} :
+@[simp, grind =] theorem find?_flatMap {xs : Array α} {f : α → Array β} {p : β → Bool} :
(xs.flatMap f).find? p = xs.findSome? (fun x => (f x).find? p) := by
cases xs
simp [List.find?_flatMap, Array.flatMap_toArray]
@@ -282,6 +317,7 @@ theorem find?_flatMap_eq_none_iff {xs : Array α} {f : α → Array β} {p : β
@[deprecated find?_flatMap_eq_none_iff (since := "2025-02-03")]
abbrev find?_flatMap_eq_none := @find?_flatMap_eq_none_iff
@[grind =]
theorem find?_replicate :
find? p (replicate n a) = if n = 0 then none else if p a then some a else none := by
simp [← List.toArray_replicate, List.find?_replicate]
@@ -334,6 +370,7 @@ abbrev find?_mkArray_eq_some := @find?_replicate_eq_some_iff
@[deprecated get_find?_replicate (since := "2025-03-18")]
abbrev get_find?_mkArray := @get_find?_replicate
@[grind =]
theorem find?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
(H : ∀ (a : α), a ∈ xs → P a) {p : β → Bool} :
(xs.pmap f H).find? p = (xs.attach.find? (fun ⟨a, m⟩ => p (f a (H a m)))).map fun ⟨a, m⟩ => f a (H a m) := by
@@ -347,12 +384,15 @@ theorem find?_eq_some_iff_getElem {xs : Array α} {p : α → Bool} {b : α} :
/-! ### findIdx -/
@[grind =]
theorem findIdx_empty : findIdx p #[] = 0 := rfl
@[grind =]
theorem findIdx_singleton {a : α} {p : α → Bool} :
#[a].findIdx p = if p a then 0 else 1 := by
simp
@[grind →]
theorem findIdx_of_getElem?_eq_some {xs : Array α} (w : xs[xs.findIdx p]? = some y) : p y := by
rcases xs with xs
exact List.findIdx_of_getElem?_eq_some (by simpa using w)
@@ -361,6 +401,8 @@ theorem findIdx_getElem {xs : Array α} {w : xs.findIdx p < xs.size} :
p xs[xs.findIdx p] :=
xs.findIdx_of_getElem?_eq_some (getElem?_eq_getElem w)
grind_pattern findIdx_getElem => xs[xs.findIdx p]
theorem findIdx_lt_size_of_exists {xs : Array α} (h : ∃ x ∈ xs, p x) :
xs.findIdx p < xs.size := by
rcases xs with xs
@@ -387,18 +429,24 @@ theorem findIdx_le_size {p : α → Bool} {xs : Array α} : xs.findIdx p ≤ xs.
· simp at e
exact Nat.le_of_eq (findIdx_eq_size.mpr e)
grind_pattern findIdx_le_size => xs.findIdx p, xs.size
@[simp]
theorem findIdx_lt_size {p : α Bool} {xs : Array α} :
xs.findIdx p < xs.size ↔ ∃ x ∈ xs, p x := by
rcases xs with xs
simp
grind_pattern findIdx_lt_size => xs.findIdx p, xs.size
/-- `p` does not hold for elements with indices less than `xs.findIdx p`. -/
theorem not_of_lt_findIdx {p : α Bool} {xs : Array α} {i : Nat} (h : i < xs.findIdx p) :
p (xs[i]'(Nat.le_trans h findIdx_le_size)) = false := by
rcases xs with xs
simpa using List.not_of_lt_findIdx (by simpa using h)
grind_pattern not_of_lt_findIdx => xs.findIdx p, xs[i]
/-- If `¬ p xs[j]` for all `j < i`, then `i ≤ xs.findIdx p`. -/
theorem le_findIdx_of_not {p : α Bool} {xs : Array α} {i : Nat} (h : i < xs.size)
(h2 : ∀ j (hji : j < i), p (xs[j]'(Nat.lt_trans hji h)) = false) : i ≤ xs.findIdx p := by
@@ -426,6 +474,7 @@ theorem findIdx_eq {p : α → Bool} {xs : Array α} {i : Nat} (h : i < xs.size)
simp at h3
simp_all [not_of_lt_findIdx h3]
@[grind =]
theorem findIdx_append {p : α Bool} {xs ys : Array α} :
(xs ++ ys).findIdx p =
if xs.findIdx p < xs.size then xs.findIdx p else ys.findIdx p + xs.size := by
@@ -433,6 +482,7 @@ theorem findIdx_append {p : α → Bool} {xs ys : Array α} :
rcases ys with ys
simp [List.findIdx_append]
@[grind =]
theorem findIdx_push {xs : Array α} {a : α} {p : α Bool} :
(xs.push a).findIdx p = if xs.findIdx p < xs.size then xs.findIdx p else xs.size + if p a then 0 else 1 := by
simp only [push_eq_append, findIdx_append]
@@ -455,7 +505,7 @@ theorem false_of_mem_extract_findIdx {xs : Array α} {p : α → Bool} (h : x
rcases xs with xs
exact List.false_of_mem_take_findIdx (by simpa using h)
-@[simp] theorem findIdx_extract {xs : Array α} {i : Nat} {p : α → Bool} :
+@[simp, grind =] theorem findIdx_extract {xs : Array α} {i : Nat} {p : α → Bool} :
(xs.extract 0 i).findIdx p = min i (xs.findIdx p) := by
cases xs
simp
@@ -467,24 +517,24 @@ theorem false_of_mem_extract_findIdx {xs : Array α} {p : α → Bool} (h : x
/-! ### findIdx? -/
-@[simp] theorem findIdx?_empty : (#[] : Array α).findIdx? p = none := by simp
-theorem findIdx?_singleton {a : α} {p : α → Bool} :
+@[simp, grind =] theorem findIdx?_empty : (#[] : Array α).findIdx? p = none := by simp
+@[grind =] theorem findIdx?_singleton {a : α} {p : α → Bool} :
#[a].findIdx? p = if p a then some 0 else none := by
simp
-@[simp]
+@[simp, grind =]
theorem findIdx?_eq_none_iff {xs : Array α} {p : α → Bool} :
xs.findIdx? p = none ↔ ∀ x, x ∈ xs → p x = false := by
rcases xs with xs
simp
-@[simp]
+@[simp, grind =]
theorem findIdx?_isSome {xs : Array α} {p : α → Bool} :
(xs.findIdx? p).isSome = xs.any p := by
rcases xs with xs
simp [List.findIdx?_isSome]
-@[simp]
+@[simp, grind =]
theorem findIdx?_isNone {xs : Array α} {p : α → Bool} :
(xs.findIdx? p).isNone = xs.all (¬p ·) := by
rcases xs with xs
@@ -526,18 +576,19 @@ theorem of_findIdx?_eq_none {xs : Array α} {p : α → Bool} (w : xs.findIdx? p
rcases xs with xs
simpa using List.of_findIdx?_eq_none (by simpa using w)
-@[simp] theorem findIdx?_map {f : β → α} {xs : Array β} {p : α → Bool} :
+@[simp, grind =] theorem findIdx?_map {f : β → α} {xs : Array β} {p : α → Bool} :
findIdx? p (xs.map f) = xs.findIdx? (p ∘ f) := by
rcases xs with xs
simp [List.findIdx?_map]
-@[simp] theorem findIdx?_append :
+@[simp, grind =] theorem findIdx?_append :
(xs ++ ys : Array α).findIdx? p =
(xs.findIdx? p).or ((ys.findIdx? p).map fun i => i + xs.size) := by
rcases xs with xs
rcases ys with ys
simp [List.findIdx?_append]
@[grind =]
theorem findIdx?_push {xs : Array α} {a : α} {p : α → Bool} :
(xs.push a).findIdx? p = (xs.findIdx? p).or (if p a then some xs.size else none) := by
simp only [push_eq_append, findIdx?_append]
@@ -553,7 +604,7 @@ theorem findIdx?_flatten {xss : Array (Array α)} {p : α → Bool} :
cases xss using array₂_induction
simp [List.findIdx?_flatten, Function.comp_def]
-@[simp] theorem findIdx?_replicate :
+@[simp, grind =] theorem findIdx?_replicate :
(replicate n a).findIdx? p = if 0 < n ∧ p a then some 0 else none := by
rw [← List.toArray_replicate]
simp only [List.findIdx?_toArray]
@@ -578,6 +629,7 @@ theorem findIdx?_eq_none_of_findIdx?_eq_none {xs : Array α} {p q : α → Bool}
rcases xs with xs
simpa using List.findIdx?_eq_none_of_findIdx?_eq_none (by simpa using w)
@[grind =]
theorem findIdx_eq_getD_findIdx? {xs : Array α} {p : α → Bool} :
xs.findIdx p = (xs.findIdx? p).getD xs.size := by
rcases xs with xs
@@ -594,15 +646,17 @@ theorem findIdx?_eq_some_le_of_findIdx?_eq_some {xs : Array α} {p q : α → Bo
cases xs
simp [hf]
-@[simp] theorem findIdx?_take {xs : Array α} {i : Nat} {p : α → Bool} :
+@[simp, grind =] theorem findIdx?_take {xs : Array α} {i : Nat} {p : α → Bool} :
(xs.take i).findIdx? p = (xs.findIdx? p).bind (Option.guard (fun j => j < i)) := by
cases xs
simp
/-! ### findFinIdx? -/
@[grind =]
theorem findFinIdx?_empty {p : α → Bool} : findFinIdx? p #[] = none := by simp
@[grind =]
theorem findFinIdx?_singleton {a : α} {p : α → Bool} :
#[a].findFinIdx? p = if p a then some ⟨0, by simp⟩ else none := by
simp
@@ -620,7 +674,7 @@ theorem findFinIdx?_eq_pmap_findIdx? {xs : Array α} {p : α → Bool} :
(fun i h => h) := by
simp [findIdx?_eq_map_findFinIdx?_val, Option.pmap_map]
-@[simp] theorem findFinIdx?_eq_none_iff {xs : Array α} {p : α → Bool} :
+@[simp, grind =] theorem findFinIdx?_eq_none_iff {xs : Array α} {p : α → Bool} :
xs.findFinIdx? p = none ↔ ∀ x, x ∈ xs → ¬ p x := by
simp [findFinIdx?_eq_pmap_findIdx?]
@@ -636,12 +690,14 @@ theorem findFinIdx?_eq_some_iff {xs : Array α} {p : α → Bool} {i : Fin xs.si
· rintro h, w
exact i, i.2, h, fun j hji => w j, by omega hji, rfl
@[grind =]
theorem findFinIdx?_push {xs : Array α} {a : α} {p : α → Bool} :
(xs.push a).findFinIdx? p =
((xs.findFinIdx? p).map (Fin.castLE (by simp))).or (if p a then some ⟨xs.size, by simp⟩ else none) := by
simp only [findFinIdx?_eq_pmap_findIdx?, findIdx?_push, Option.pmap_or]
split <;> rename_i h _ <;> split <;> simp [h]
@[grind =]
theorem findFinIdx?_append {xs ys : Array α} {p : α → Bool} :
(xs ++ ys).findFinIdx? p =
((xs.findFinIdx? p).map (Fin.castLE (by simp))).or
@@ -651,13 +707,13 @@ theorem findFinIdx?_append {xs ys : Array α} {p : α → Bool} :
· simp [h, Option.pmap_map, Option.map_pmap, Nat.add_comm]
· simp [h]
-@[simp]
+@[simp, grind =]
theorem isSome_findFinIdx? {xs : Array α} {p : α → Bool} :
(xs.findFinIdx? p).isSome = xs.any p := by
rcases xs with xs
simp [Array.size]
-@[simp]
+@[simp, grind =]
theorem isNone_findFinIdx? {xs : Array α} {p : α → Bool} :
(xs.findFinIdx? p).isNone = xs.all (fun x => ¬ p x) := by
rcases xs with xs
@@ -678,6 +734,7 @@ The verification API for `idxOf` is still incomplete.
The lemmas below should be made consistent with those for `findIdx` (and proved using them).
-/
@[grind =]
theorem idxOf_append [BEq α] [LawfulBEq α] {xs ys : Array α} {a : α} :
(xs ++ ys).idxOf a = if a ∈ xs then xs.idxOf a else ys.idxOf a + xs.size := by
rw [idxOf, findIdx_append]
@@ -691,10 +748,23 @@ theorem idxOf_eq_size [BEq α] [LawfulBEq α] {xs : Array α} (h : a ∉ xs) : x
rcases xs with xs
simp [List.idxOf_eq_length (by simpa using h)]
-theorem idxOf_lt_length [BEq α] [LawfulBEq α] {xs : Array α} (h : a ∈ xs) : xs.idxOf a < xs.size := by
+theorem idxOf_lt_length_of_mem [BEq α] [LawfulBEq α] {xs : Array α} (h : a ∈ xs) : xs.idxOf a < xs.size := by
rcases xs with xs
-simp [List.idxOf_lt_length (by simpa using h)]
+simp [List.idxOf_lt_length_of_mem (by simpa using h)]
theorem idxOf_le_size [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
xs.idxOf a ≤ xs.size := by
rcases xs with xs
simp [List.idxOf_le_length]
grind_pattern idxOf_le_size => xs.idxOf a, xs.size
theorem idxOf_lt_size_iff [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
xs.idxOf a < xs.size ↔ a ∈ xs := by
rcases xs with xs
simp [List.idxOf_lt_length_iff]
grind_pattern idxOf_lt_size_iff => xs.idxOf a, xs.size
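With the `grind_pattern` declarations above, a goal mentioning both `xs.idxOf a` and `xs.size` should trigger these lemmas automatically. A hypothetical sanity check (assuming the surrounding lemmas are in scope; not part of the diff):

```lean
example [BEq α] [LawfulBEq α] {xs : Array α} {a : α} (h : a ∈ xs) :
    xs.idxOf a < xs.size := by
  grind
```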
/-! ### idxOf?
@@ -702,19 +772,20 @@ The verification API for `idxOf?` is still incomplete.
The lemmas below should be made consistent with those for `findIdx?` (and proved using them).
-/
theorem idxOf?_empty [BEq α] : (#[] : Array α).idxOf? a = none := by simp
@[grind =] theorem idxOf?_empty [BEq α] : (#[] : Array α).idxOf? a = none := by simp
@[simp] theorem idxOf?_eq_none_iff [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
@[simp, grind =] theorem idxOf?_eq_none_iff [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
xs.idxOf? a = none ↔ a ∉ xs := by
rcases xs with ⟨xs⟩
simp [List.idxOf?_eq_none_iff]
@[simp]
@[simp, grind =]
theorem isSome_idxOf? [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
(xs.idxOf? a).isSome ↔ a ∈ xs := by
rcases xs with ⟨xs⟩
simp
@[grind =]
theorem isNone_idxOf? [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
(xs.idxOf? a).isNone = ¬ a ∈ xs := by
simp
@@ -729,9 +800,9 @@ theorem idxOf?_eq_map_finIdxOf?_val [BEq α] {xs : Array α} {a : α} :
xs.idxOf? a = (xs.finIdxOf? a).map (·.val) := by
simp [idxOf?, finIdxOf?, findIdx?_eq_map_findFinIdx?_val]
theorem finIdxOf?_empty [BEq α] : (#[] : Array α).finIdxOf? a = none := by simp
@[grind =] theorem finIdxOf?_empty [BEq α] : (#[] : Array α).finIdxOf? a = none := by simp
@[simp] theorem finIdxOf?_eq_none_iff [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
@[simp, grind =] theorem finIdxOf?_eq_none_iff [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
xs.finIdxOf? a = none ↔ a ∉ xs := by
rcases xs with ⟨xs⟩
simp [List.finIdxOf?_eq_none_iff, Array.size]
@@ -742,14 +813,16 @@ theorem finIdxOf?_empty [BEq α] : (#[] : Array α).finIdxOf? a = none := by sim
unfold Array.size at i
simp [List.finIdxOf?_eq_some_iff]
@[simp]
theorem isSome_finIdxOf? [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
(xs.finIdxOf? a).isSome ↔ a ∈ xs := by
@[simp, grind =]
theorem isSome_finIdxOf? [BEq α] [PartialEquivBEq α] {xs : Array α} {a : α} :
(xs.finIdxOf? a).isSome = xs.contains a := by
rcases xs with ⟨xs⟩
simp [Array.size]
theorem isNone_finIdxOf? [BEq α] [LawfulBEq α] {xs : Array α} {a : α} :
(xs.finIdxOf? a).isNone = ¬ a ∈ xs := by
simp
@[simp, grind =]
theorem isNone_finIdxOf? [BEq α] [PartialEquivBEq α] {xs : Array α} {a : α} :
(xs.finIdxOf? a).isNone = !xs.contains a := by
rcases xs with ⟨xs⟩
simp [Array.size]
end Array

View File

@@ -133,7 +133,6 @@ grind_pattern Array.getElem?_eq_none => xs.size ≤ i, xs[i]?
theorem getElem?_eq_some_iff {xs : Array α} : xs[i]? = some b ↔ ∃ h : i < xs.size, xs[i] = b :=
_root_.getElem?_eq_some_iff
@[grind →]
theorem getElem_of_getElem? {xs : Array α} : xs[i]? = some a → ∃ h : i < xs.size, xs[i] = a :=
getElem?_eq_some_iff.mp
@@ -176,7 +175,7 @@ theorem getElem_push_lt {xs : Array α} {x : α} {i : Nat} (h : i < xs.size) :
simp only [push, getElem_toList, List.concat_eq_append]
rw [List.getElem_append_right] <;> simp [← getElem_toList, Nat.zero_lt_one]
theorem getElem_push {xs : Array α} {x : α} {i : Nat} (h : i < (xs.push x).size) :
@[grind =] theorem getElem_push {xs : Array α} {x : α} {i : Nat} (h : i < (xs.push x).size) :
(xs.push x)[i] = if h : i < xs.size then xs[i] else x := by
by_cases h' : i < xs.size
· simp [getElem_push_lt, h']
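As a quick sanity check of `getElem_push` (a hypothetical example, not from the diff): indexing the pushed array at the old size returns the new element, since the `else` branch of the `dite` fires.

```lean
example (xs : Array Nat) (x : Nat) :
    (xs.push x)[xs.size]'(by simp) = x := by
  simp [Array.getElem_push]
```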
@@ -954,6 +953,13 @@ theorem set_push {xs : Array α} {x y : α} {h} :
· simp at h
omega
@[grind _=_]
theorem set_pop {xs : Array α} {x : α} {i : Nat} (h : i < xs.pop.size) :
xs.pop.set i x h = (xs.set i x (by simp at h; omega)).pop := by
ext i h₁ h₂
· simp
· simp [getElem_set]
@[simp] theorem set_eq_empty_iff {xs : Array α} {i : Nat} {a : α} {h : i < xs.size} :
xs.set i a = #[] ↔ xs = #[] := by
cases xs <;> cases i <;> simp [set]
@@ -986,7 +992,11 @@ theorem mem_or_eq_of_mem_set
@[simp, grind] theorem setIfInBounds_empty {i : Nat} {a : α} :
#[].setIfInBounds i a = #[] := rfl
@[simp] theorem set!_eq_setIfInBounds : @set! = @setIfInBounds := rfl
@[simp, grind =] theorem set!_eq_setIfInBounds : set! xs i v = setIfInBounds xs i v := rfl
@[grind]
theorem setIfInBounds_def (xs : Array α) (i : Nat) (a : α) :
xs.setIfInBounds i a = if h : i < xs.size then xs.set i a else xs := rfl
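`setIfInBounds_def` exposes the underlying `dite`, so in-bounds writes reduce to `set`. A hypothetical usage sketch:

```lean
example (xs : Array Nat) (i : Nat) (a : Nat) (h : i < xs.size) :
    xs.setIfInBounds i a = xs.set i a := by
  simp [Array.setIfInBounds_def, h]
```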
@[deprecated set!_eq_setIfInBounds (since := "2024-12-12")]
abbrev set!_is_setIfInBounds := @set!_eq_setIfInBounds
@@ -1078,7 +1088,7 @@ theorem mem_or_eq_of_mem_setIfInBounds
by_cases h : i < xs.size <;>
simp [setIfInBounds, Nat.not_lt_of_le, h, getD_getElem?]
@[simp] theorem toList_setIfInBounds {xs : Array α} {i : Nat} {x : α} :
@[simp, grind =] theorem toList_setIfInBounds {xs : Array α} {i : Nat} {x : α} :
(xs.setIfInBounds i x).toList = xs.toList.set i x := by
simp only [setIfInBounds]
split <;> rename_i h
@@ -1488,6 +1498,19 @@ theorem forall_mem_filter {p : α → Bool} {xs : Array α} {P : α → Prop} :
(∀ (i) (_ : i ∈ xs.filter p), P i) ↔ ∀ (j) (_ : j ∈ xs), p j → P j := by
simp
@[grind] theorem getElem_filter {xs : Array α} {p : α → Bool} {i : Nat} (h : i < (xs.filter p).size) :
p (xs.filter p)[i] :=
(mem_filter.mp (getElem_mem h)).2
theorem getElem?_filter {xs : Array α} {p : α → Bool} {i : Nat} (h : i < (xs.filter p).size)
(w : (xs.filter p)[i]? = some a) : p a := by
rw [getElem?_eq_getElem] at w
simp only [Option.some.injEq] at w
rw [← w]
apply getElem_filter h
grind_pattern getElem?_filter => (xs.filter p)[i]?, some a
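The filter lemmas above let one read the predicate off any element of a filtered array. A hypothetical usage (the closing tactic is a plausible choice, not taken from the diff):

```lean
example (xs : Array Nat) (i : Nat) (h : i < (xs.filter (· > 0)).size) :
    (xs.filter (· > 0))[i] > 0 := by
  simpa using Array.getElem_filter h
```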
@[simp] theorem filter_filter {p q : α → Bool} {xs : Array α} :
filter p (filter q xs) = filter (fun a => p a && q a) xs := by
apply ext'
@@ -2997,6 +3020,10 @@ theorem extract_empty_of_size_le_start {xs : Array α} {start stop : Nat} (h : x
apply ext'
simp
theorem _root_.List.toArray_drop {l : List α} {k : Nat} :
(l.drop k).toArray = l.toArray.extract k := by
rw [List.drop_eq_extract, List.extract_toArray, List.size_toArray]
@[deprecated extract_size (since := "2025-02-27")]
theorem take_size {xs : Array α} : xs.take xs.size = xs := by
cases xs
@@ -3607,8 +3634,8 @@ We can prove that two folds over the same array are related (by some arbitrary r
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
-/
theorem foldl_rel {xs : Array α} {f g : β → α → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
theorem foldl_rel {xs : Array α} {f : β → α → β} {g : γ → α → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c : β) (c' : γ), r c c' → r (f c a) (g c' a)) :
r (xs.foldl (fun acc a => f acc a) a) (xs.foldl (fun acc a => g acc a) b) := by
rcases xs with ⟨xs⟩
simpa using List.foldl_rel h (by simpa using h')
@@ -3618,8 +3645,8 @@ We can prove that two folds over the same array are related (by some arbitrary r
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
-/
theorem foldr_rel {xs : Array α} {f g : α → β → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
theorem foldr_rel {xs : Array α} {f : α → β → β} {g : α → γ → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c : β) (c' : γ), r c c' → r (f a c) (g a c')) :
r (xs.foldr (fun a acc => f a acc) a) (xs.foldr (fun a acc => g a acc) b) := by
rcases xs with ⟨xs⟩
simpa using List.foldr_rel h (by simpa using h')
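The generalization above lets the two folds take values in different types. A sketch relating a `Nat`-valued fold to an `Int`-valued one (the closing tactics are plausible choices, not from the diff):

```lean
example (xs : Array Nat) :
    ((xs.foldl (· + ·) 0 : Nat) : Int) = xs.foldl (fun acc a => acc + (a : Int)) 0 := by
  apply Array.foldl_rel (r := fun (b : Nat) (c : Int) => (b : Int) = c)
  · simp
  · intro a _ c c' h
    omega
```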
@@ -4512,7 +4539,7 @@ abbrev contains_def [DecidableEq α] {a : α} {xs : Array α} : xs.contains a
(zip xs ys).size = min xs.size ys.size :=
size_zipWith
@[simp] theorem getElem_zipWith {xs : Array α} {ys : Array β} {f : α → β → γ} {i : Nat}
@[simp, grind =] theorem getElem_zipWith {xs : Array α} {ys : Array β} {f : α → β → γ} {i : Nat}
(hi : i < (zipWith f xs ys).size) :
(zipWith f xs ys)[i] = f (xs[i]'(by simp at hi; omega)) (ys[i]'(by simp at hi; omega)) := by
cases xs

View File

@@ -51,27 +51,27 @@ theorem mapFinIdx_spec {xs : Array α} {f : (i : Nat) → α → (h : i < xs.siz
∀ i h, p i ((Array.mapFinIdx xs f)[i]) h :=
(mapFinIdx_induction _ _ (fun _ => True) trivial p fun _ _ _ => ⟨hs .., trivial⟩).2
@[simp] theorem size_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
@[simp, grind =] theorem size_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
(xs.mapFinIdx f).size = xs.size :=
(mapFinIdx_spec (p := fun _ _ _ => True) (hs := fun _ _ => trivial)).1
@[simp] theorem size_zipIdx {xs : Array α} {k : Nat} : (xs.zipIdx k).size = xs.size :=
@[simp, grind =] theorem size_zipIdx {xs : Array α} {k : Nat} : (xs.zipIdx k).size = xs.size :=
Array.size_mapFinIdx
@[deprecated size_zipIdx (since := "2025-01-21")] abbrev size_zipWithIndex := @size_zipIdx
@[simp] theorem getElem_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} {i : Nat}
@[simp, grind =] theorem getElem_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} {i : Nat}
(h : i < (xs.mapFinIdx f).size) :
(xs.mapFinIdx f)[i] = f i (xs[i]'(by simp_all)) (by simp_all) :=
(mapFinIdx_spec (p := fun i b h => b = f i xs[i] h) fun _ _ => rfl).2 i _
@[simp] theorem getElem?_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} {i : Nat} :
@[simp, grind =] theorem getElem?_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} {i : Nat} :
(xs.mapFinIdx f)[i]? =
xs[i]?.pbind fun b h => some <| f i b (getElem?_eq_some_iff.1 h).1 := by
simp only [getElem?_def, size_mapFinIdx, getElem_mapFinIdx]
split <;> simp_all
@[simp] theorem toList_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
@[simp, grind =] theorem toList_mapFinIdx {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
(xs.mapFinIdx f).toList = xs.toList.mapFinIdx (fun i a h => f i a (by simpa)) := by
apply List.ext_getElem <;> simp
@@ -91,20 +91,20 @@ theorem mapIdx_spec {f : Nat → α → β} {xs : Array α}
∀ i h, p i ((xs.mapIdx f)[i]) h :=
(mapIdx_induction (motive := fun _ => True) trivial fun _ _ _ => ⟨hs .., trivial⟩).2
@[simp] theorem size_mapIdx {f : Nat → α → β} {xs : Array α} : (xs.mapIdx f).size = xs.size :=
@[simp, grind =] theorem size_mapIdx {f : Nat → α → β} {xs : Array α} : (xs.mapIdx f).size = xs.size :=
(mapIdx_spec (p := fun _ _ _ => True) (hs := fun _ _ => trivial)).1
@[simp] theorem getElem_mapIdx {f : Nat → α → β} {xs : Array α} {i : Nat}
@[simp, grind =] theorem getElem_mapIdx {f : Nat → α → β} {xs : Array α} {i : Nat}
(h : i < (xs.mapIdx f).size) :
(xs.mapIdx f)[i] = f i (xs[i]'(by simp_all)) :=
(mapIdx_spec (p := fun i b h => b = f i xs[i]) fun _ _ => rfl).2 i (by simp_all)
@[simp] theorem getElem?_mapIdx {f : Nat → α → β} {xs : Array α} {i : Nat} :
@[simp, grind =] theorem getElem?_mapIdx {f : Nat → α → β} {xs : Array α} {i : Nat} :
(xs.mapIdx f)[i]? =
xs[i]?.map (f i) := by
simp [getElem?_def, size_mapIdx, getElem_mapIdx]
@[simp] theorem toList_mapIdx {f : Nat → α → β} {xs : Array α} :
@[simp, grind =] theorem toList_mapIdx {f : Nat → α → β} {xs : Array α} :
(xs.mapIdx f).toList = xs.toList.mapIdx (fun i a => f i a) := by
apply List.ext_getElem <;> simp
@@ -126,7 +126,7 @@ namespace Array
/-! ### zipIdx -/
@[simp] theorem getElem_zipIdx {xs : Array α} {k : Nat} {i : Nat} (h : i < (xs.zipIdx k).size) :
@[simp, grind =] theorem getElem_zipIdx {xs : Array α} {k : Nat} {i : Nat} (h : i < (xs.zipIdx k).size) :
(xs.zipIdx k)[i] = (xs[i]'(by simp_all), k + i) := by
simp [zipIdx]
@@ -140,7 +140,7 @@ abbrev getElem_zipWithIndex := @getElem_zipIdx
@[deprecated zipIdx_toArray (since := "2025-01-21")]
abbrev zipWithIndex_toArray := @zipIdx_toArray
@[simp] theorem toList_zipIdx {xs : Array α} {k : Nat} :
@[simp, grind =] theorem toList_zipIdx {xs : Array α} {k : Nat} :
(xs.zipIdx k).toList = xs.toList.zipIdx k := by
rcases xs with ⟨xs⟩
simp
@@ -185,7 +185,7 @@ abbrev mem_zipWithIndex_iff_getElem? := @mem_zipIdx_iff_getElem?
subst w
rfl
@[simp]
@[simp, grind =]
theorem mapFinIdx_empty {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx #[] f = #[] :=
rfl
@@ -195,6 +195,7 @@ theorem mapFinIdx_eq_ofFn {xs : Array α} {f : (i : Nat) → α → (h : i < xs.
simp only [List.mapFinIdx_toArray, List.mapFinIdx_eq_ofFn, Fin.getElem_fin, List.getElem_toArray]
simp [Array.size]
@[grind =]
theorem mapFinIdx_append {xs ys : Array α} {f : (i : Nat) → α → (h : i < (xs ++ ys).size) → β} :
(xs ++ ys).mapFinIdx f =
xs.mapFinIdx (fun i a h => f i a (by simp; omega)) ++
@@ -203,7 +204,7 @@ theorem mapFinIdx_append {xs ys : Array α} {f : (i : Nat) → α → (h : i < (
cases ys
simp [List.mapFinIdx_append, Array.size]
@[simp]
@[simp, grind =]
theorem mapFinIdx_push {xs : Array α} {a : α} {f : (i : Nat) → α → (h : i < (xs.push a).size) → β} :
mapFinIdx (xs.push a) f =
(mapFinIdx xs (fun i a h => f i a (by simp; omega))).push (f xs.size a (by simp)) := by
@@ -237,7 +238,7 @@ theorem exists_of_mem_mapFinIdx {b : β} {xs : Array α} {f : (i : Nat) → α
rcases xs with ⟨xs⟩
exact List.exists_of_mem_mapFinIdx (by simpa using h)
@[simp] theorem mem_mapFinIdx {b : β} {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
@[simp, grind =] theorem mem_mapFinIdx {b : β} {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
b ∈ xs.mapFinIdx f ↔ ∃ (i : Nat) (h : i < xs.size), f i xs[i] h = b := by
rcases xs with ⟨xs⟩
simp
@@ -290,7 +291,7 @@ theorem mapFinIdx_eq_mapFinIdx_iff {xs : Array α} {f g : (i : Nat) → α → (
rw [eq_comm, mapFinIdx_eq_iff]
simp
@[simp] theorem mapFinIdx_mapFinIdx {xs : Array α}
@[simp, grind =] theorem mapFinIdx_mapFinIdx {xs : Array α}
{f : (i : Nat) → α → (h : i < xs.size) → β}
{g : (i : Nat) → β → (h : i < (xs.mapFinIdx f).size) → γ} :
(xs.mapFinIdx f).mapFinIdx g = xs.mapFinIdx (fun i a h => g i (f i a h) (by simpa using h)) := by
@@ -305,14 +306,14 @@ theorem mapFinIdx_eq_replicate_iff {xs : Array α} {f : (i : Nat) → α → (h
@[deprecated mapFinIdx_eq_replicate_iff (since := "2025-03-18")]
abbrev mapFinIdx_eq_mkArray_iff := @mapFinIdx_eq_replicate_iff
@[simp] theorem mapFinIdx_reverse {xs : Array α} {f : (i : Nat) → α → (h : i < xs.reverse.size) → β} :
@[simp, grind =] theorem mapFinIdx_reverse {xs : Array α} {f : (i : Nat) → α → (h : i < xs.reverse.size) → β} :
xs.reverse.mapFinIdx f = (xs.mapFinIdx (fun i a h => f (xs.size - 1 - i) a (by simp; omega))).reverse := by
rcases xs with ⟨l⟩
simp [List.mapFinIdx_reverse, Array.size]
/-! ### mapIdx -/
@[simp]
@[simp, grind =]
theorem mapIdx_empty {f : Nat → α → β} : mapIdx f #[] = #[] :=
rfl
@@ -332,13 +333,14 @@ theorem mapIdx_eq_zipIdx_map {xs : Array α} {f : Nat → α → β} :
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapIdx_eq_zipWithIndex_map := @mapIdx_eq_zipIdx_map
@[grind =]
theorem mapIdx_append {xs ys : Array α} :
(xs ++ ys).mapIdx f = xs.mapIdx f ++ ys.mapIdx (fun i => f (i + xs.size)) := by
rcases xs with ⟨xs⟩
rcases ys with ⟨ys⟩
simp [List.mapIdx_append]
@[simp]
@[simp, grind =]
theorem mapIdx_push {xs : Array α} {a : α} :
mapIdx f (xs.push a) = (mapIdx f xs).push (f xs.size a) := by
simp [← append_singleton, mapIdx_append]
@@ -360,7 +362,7 @@ theorem exists_of_mem_mapIdx {b : β} {xs : Array α}
rw [mapIdx_eq_mapFinIdx] at h
simpa [Fin.exists_iff] using exists_of_mem_mapFinIdx h
@[simp] theorem mem_mapIdx {b : β} {xs : Array α} :
@[simp, grind =] theorem mem_mapIdx {b : β} {xs : Array α} :
b ∈ mapIdx f xs ↔ ∃ (i : Nat) (h : i < xs.size), f i xs[i] = b := by
constructor
· intro h
@@ -414,7 +416,7 @@ theorem mapIdx_eq_mapIdx_iff {xs : Array α} :
rcases xs with ⟨xs⟩
simp [List.mapIdx_eq_mapIdx_iff]
@[simp] theorem mapIdx_set {f : Nat → α → β} {xs : Array α} {i : Nat} {h : i < xs.size} {a : α} :
@[simp, grind =] theorem mapIdx_set {f : Nat → α → β} {xs : Array α} {i : Nat} {h : i < xs.size} {a : α} :
(xs.set i a).mapIdx f = (xs.mapIdx f).set i (f i a) (by simpa) := by
rcases xs with ⟨xs⟩
simp [List.mapIdx_set]
@@ -424,17 +426,17 @@ theorem mapIdx_eq_mapIdx_iff {xs : Array α} :
rcases xs with ⟨xs⟩
simp [List.mapIdx_set]
@[simp] theorem back?_mapIdx {xs : Array α} {f : Nat → α → β} :
@[simp, grind =] theorem back?_mapIdx {xs : Array α} {f : Nat → α → β} :
(mapIdx f xs).back? = (xs.back?).map (f (xs.size - 1)) := by
rcases xs with ⟨xs⟩
simp [List.getLast?_mapIdx]
@[simp] theorem back_mapIdx {xs : Array α} {f : Nat → α → β} (h) :
@[simp, grind =] theorem back_mapIdx {xs : Array α} {f : Nat → α → β} (h) :
(xs.mapIdx f).back h = f (xs.size - 1) (xs.back (by simpa using h)) := by
rcases xs with ⟨xs⟩
simp [List.getLast_mapIdx]
@[simp] theorem mapIdx_mapIdx {xs : Array α} {f : Nat → α → β} {g : Nat → β → γ} :
@[simp, grind =] theorem mapIdx_mapIdx {xs : Array α} {f : Nat → α → β} {g : Nat → β → γ} :
(xs.mapIdx f).mapIdx g = xs.mapIdx (fun i => g i ∘ f i) := by
simp [mapIdx_eq_iff]
@@ -447,7 +449,7 @@ theorem mapIdx_eq_replicate_iff {xs : Array α} {f : Nat → α → β} {b : β}
@[deprecated mapIdx_eq_replicate_iff (since := "2025-03-18")]
abbrev mapIdx_eq_mkArray_iff := @mapIdx_eq_replicate_iff
@[simp] theorem mapIdx_reverse {xs : Array α} {f : Nat → α → β} :
@[simp, grind =] theorem mapIdx_reverse {xs : Array α} {f : Nat → α → β} :
xs.reverse.mapIdx f = (mapIdx (fun i => f (xs.size - 1 - i)) xs).reverse := by
rcases xs with ⟨xs⟩
simp [List.mapIdx_reverse]
@@ -456,7 +458,7 @@ end Array
namespace List
@[grind] theorem mapFinIdxM_toArray [Monad m] [LawfulMonad m] {l : List α}
@[grind =] theorem mapFinIdxM_toArray [Monad m] [LawfulMonad m] {l : List α}
{f : (i : Nat) → α → (h : i < l.length) → m β} :
l.toArray.mapFinIdxM f = toArray <$> l.mapFinIdxM f := by
let rec go (i : Nat) (acc : Array β) (inv : i + acc.size = l.length) :
@@ -477,7 +479,7 @@ namespace List
simp only [Array.mapFinIdxM, mapFinIdxM]
exact go _ #[] _
@[grind] theorem mapIdxM_toArray [Monad m] [LawfulMonad m] {l : List α}
@[grind =] theorem mapIdxM_toArray [Monad m] [LawfulMonad m] {l : List α}
{f : Nat → α → m β} :
l.toArray.mapIdxM f = toArray <$> l.mapIdxM f := by
let rec go (bs : List α) (acc : Array β) (inv : bs.length + acc.size = l.length) :

View File

@@ -23,7 +23,7 @@ namespace Array
/-! ### ofFn -/
@[simp] theorem ofFn_zero {f : Fin 0 → α} : ofFn f = #[] := by
@[simp, grind =] theorem ofFn_zero {f : Fin 0 → α} : ofFn f = #[] := by
simp [ofFn, ofFn.go]
theorem ofFn_succ {f : Fin (n+1) → α} :
@@ -42,10 +42,10 @@ theorem ofFn_add {n m} {f : Fin (n + m) → α} :
| zero => simp
| succ m ih => simp [ofFn_succ, ih]
@[simp] theorem _root_.List.toArray_ofFn {f : Fin n → α} : (List.ofFn f).toArray = Array.ofFn f := by
@[simp, grind =] theorem _root_.List.toArray_ofFn {f : Fin n → α} : (List.ofFn f).toArray = Array.ofFn f := by
ext <;> simp
@[simp] theorem toList_ofFn {f : Fin n → α} : (Array.ofFn f).toList = List.ofFn f := by
@[simp, grind =] theorem toList_ofFn {f : Fin n → α} : (Array.ofFn f).toList = List.ofFn f := by
apply List.ext_getElem <;> simp
theorem ofFn_succ' {f : Fin (n+1) → α} :
@@ -58,7 +58,7 @@ theorem ofFn_eq_empty_iff {f : Fin n → α} : ofFn f = #[] ↔ n = 0 := by
rw [← Array.toList_inj]
simp
@[simp 500]
@[simp 500, grind =]
theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i = a := by
constructor
· intro w
@@ -73,7 +73,7 @@ theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i =
def ofFnM {n} [Monad m] (f : Fin n → m α) : m (Array α) :=
Fin.foldlM n (fun xs i => xs.push <$> f i) (Array.emptyWithCapacity n)
@[simp]
@[simp, grind =]
theorem ofFnM_zero [Monad m] {f : Fin 0 → m α} : ofFnM f = pure #[] := by
simp [ofFnM]
@@ -109,7 +109,7 @@ theorem ofFnM_add {n m} [Monad m] [LawfulMonad m] {f : Fin (n + k) → m α} :
funext x
simp
@[simp] theorem toList_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
@[simp, grind =] theorem toList_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
toList <$> ofFnM f = List.ofFnM f := by
induction n with
| zero => simp

View File

@@ -45,6 +45,7 @@ theorem zipWith_self {f : α → α → δ} {xs : Array α} : zipWith f xs xs =
See also `getElem?_zipWith'` for a variant
using `Option.map` and `Option.bind` rather than a `match`.
-/
@[grind =]
theorem getElem?_zipWith {f : α → β → γ} {i : Nat} :
(zipWith f as bs)[i]? = match as[i]?, bs[i]? with
| some a, some b => some (f a b) | _, _ => none := by
@@ -76,31 +77,35 @@ theorem getElem?_zip_eq_some {as : Array α} {bs : Array β} {z : α × β} {i :
· rintro ⟨h₀, h₁⟩
exact ⟨_, _, h₀, h₁, rfl⟩
@[simp]
@[simp, grind =]
theorem zipWith_map {μ} {f : γ → δ → μ} {g : α → γ} {h : β → δ} {as : Array α} {bs : Array β} :
zipWith f (as.map g) (bs.map h) = zipWith (fun a b => f (g a) (h b)) as bs := by
cases as
cases bs
simp [List.zipWith_map]
@[grind =]
theorem zipWith_map_left {as : Array α} {bs : Array β} {f : α → α'} {g : α' → β → γ} :
zipWith g (as.map f) bs = zipWith (fun a b => g (f a) b) as bs := by
cases as
cases bs
simp [List.zipWith_map_left]
@[grind =]
theorem zipWith_map_right {as : Array α} {bs : Array β} {f : β → β'} {g : α → β' → γ} :
zipWith g as (bs.map f) = zipWith (fun a b => g a (f b)) as bs := by
cases as
cases bs
simp [List.zipWith_map_right]
@[grind =]
theorem zipWith_foldr_eq_zip_foldr {f : α → β → γ} {i : δ} :
(zipWith f as bs).foldr g i = (zip as bs).foldr (fun p r => g (f p.1 p.2) r) i := by
cases as
cases bs
simp [List.zipWith_foldr_eq_zip_foldr]
@[grind =]
theorem zipWith_foldl_eq_zip_foldl {f : α → β → γ} {i : δ} :
(zipWith f as bs).foldl g i = (zip as bs).foldl (fun r p => g r (f p.1 p.2)) i := by
cases as
@@ -111,22 +116,26 @@ theorem zipWith_foldl_eq_zip_foldl {f : α → β → γ} {i : δ} :
theorem zipWith_eq_empty_iff {f : α → β → γ} {as : Array α} {bs : Array β} : zipWith f as bs = #[] ↔ as = #[] ∨ bs = #[] := by
cases as <;> cases bs <;> simp
@[grind =]
theorem map_zipWith {δ : Type _} {f : α → β} {g : γ → δ → α} {cs : Array γ} {ds : Array δ} :
map f (zipWith g cs ds) = zipWith (fun x y => f (g x y)) cs ds := by
cases cs
cases ds
simp [List.map_zipWith]
@[grind =]
theorem take_zipWith : (zipWith f as bs).take i = zipWith f (as.take i) (bs.take i) := by
cases as
cases bs
simp [List.take_zipWith]
@[grind =]
theorem extract_zipWith : (zipWith f as bs).extract i j = zipWith f (as.extract i j) (bs.extract i j) := by
cases as
cases bs
simp [List.drop_zipWith, List.take_zipWith]
@[grind =]
theorem zipWith_append {f : α → β → γ} {as as' : Array α} {bs bs' : Array β}
(h : as.size = bs.size) :
zipWith f (as ++ as') (bs ++ bs') = zipWith f as bs ++ zipWith f as' bs' := by
@@ -152,7 +161,7 @@ theorem zipWith_eq_append_iff {f : α → β → γ} {as : Array α} {bs : Array
· rintro ⟨ws, xs, ys, zs, h, rfl, rfl, h₁, h₂⟩
exact ⟨ws, xs, ys, zs, by simp_all⟩
@[simp] theorem zipWith_replicate {a : α} {b : β} {m n : Nat} :
@[simp, grind =] theorem zipWith_replicate {a : α} {b : β} {m n : Nat} :
zipWith f (replicate m a) (replicate n b) = replicate (min m n) (f a b) := by
simp [← List.toArray_replicate]
@@ -184,6 +193,7 @@ theorem zipWith_eq_zipWith_take_min (as : Array α) (bs : Array β) :
simp
rw [List.zipWith_eq_zipWith_take_min]
@[grind =]
theorem reverse_zipWith (h : as.size = bs.size) :
(zipWith f as bs).reverse = zipWith f as.reverse bs.reverse := by
cases as
@@ -200,7 +210,7 @@ theorem lt_size_right_of_zip {i : Nat} {as : Array α} {bs : Array β} (h : i <
i < bs.size :=
lt_size_right_of_zipWith h
@[simp]
@[simp, grind =]
theorem getElem_zip {as : Array α} {bs : Array β} {i : Nat} {h : i < (zip as bs).size} :
(zip as bs)[i] =
(as[i]'(lt_size_left_of_zip h), bs[i]'(lt_size_right_of_zip h)) :=
@@ -211,18 +221,22 @@ theorem zip_eq_zipWith {as : Array α} {bs : Array β} : zip as bs = zipWith Pro
cases bs
simp [List.zip_eq_zipWith]
@[grind _=_]
theorem zip_map {f : α → γ} {g : β → δ} {as : Array α} {bs : Array β} :
zip (as.map f) (bs.map g) = (zip as bs).map (Prod.map f g) := by
cases as
cases bs
simp [List.zip_map]
@[grind _=_]
theorem zip_map_left {f : α → γ} {as : Array α} {bs : Array β} :
zip (as.map f) bs = (zip as bs).map (Prod.map f id) := by rw [← zip_map, map_id]
@[grind _=_]
theorem zip_map_right {f : β → γ} {as : Array α} {bs : Array β} :
zip as (bs.map f) = (zip as bs).map (Prod.map id f) := by rw [← zip_map, map_id]
@[grind =]
theorem zip_append {as bs : Array α} {cs ds : Array β} (_h : as.size = cs.size) :
zip (as ++ bs) (cs ++ ds) = zip as cs ++ zip bs ds := by
cases as
@@ -231,6 +245,7 @@ theorem zip_append {as bs : Array α} {cs ds : Array β} (_h : as.size = cs.size
cases ds
simp_all [List.zip_append]
@[grind =]
theorem zip_map' {f : α → β} {g : α → γ} {xs : Array α} :
zip (xs.map f) (xs.map g) = xs.map fun a => (f a, g a) := by
cases xs
@@ -276,7 +291,7 @@ theorem zip_eq_append_iff {as : Array α} {bs : Array β} :
∃ as₁ as₂ bs₁ bs₂, as₁.size = bs₁.size ∧ as = as₁ ++ as₂ ∧ bs = bs₁ ++ bs₂ ∧ xs = zip as₁ bs₁ ∧ ys = zip as₂ bs₂ := by
simp [zip_eq_zipWith, zipWith_eq_append_iff]
@[simp] theorem zip_replicate {a : α} {b : β} {m n : Nat} :
@[simp, grind =] theorem zip_replicate {a : α} {b : β} {m n : Nat} :
zip (replicate m a) (replicate n b) = replicate (min m n) (a, b) := by
simp [← List.toArray_replicate]
@@ -293,6 +308,7 @@ theorem zip_eq_zip_take_min {as : Array α} {bs : Array β} :
/-! ### zipWithAll -/
@[grind =]
theorem getElem?_zipWithAll {f : Option α → Option β → γ} {i : Nat} :
(zipWithAll f as bs)[i]? = match as[i]?, bs[i]? with
| none, none => .none | a?, b? => some (f a? b?) := by
@@ -301,31 +317,35 @@ theorem getElem?_zipWithAll {f : Option α → Option β → γ} {i : Nat} :
simp [List.getElem?_zipWithAll]
rfl
@[grind =]
theorem zipWithAll_map {μ} {f : Option γ → Option δ → μ} {g : α → γ} {h : β → δ} {as : Array α} {bs : Array β} :
zipWithAll f (as.map g) (bs.map h) = zipWithAll (fun a b => f (g <$> a) (h <$> b)) as bs := by
cases as
cases bs
simp [List.zipWithAll_map]
@[grind =]
theorem zipWithAll_map_left {as : Array α} {bs : Array β} {f : α → α'} {g : Option α' → Option β → γ} :
zipWithAll g (as.map f) bs = zipWithAll (fun a b => g (f <$> a) b) as bs := by
cases as
cases bs
simp [List.zipWithAll_map_left]
@[grind =]
theorem zipWithAll_map_right {as : Array α} {bs : Array β} {f : β → β'} {g : Option α → Option β' → γ} :
zipWithAll g as (bs.map f) = zipWithAll (fun a b => g a (f <$> b)) as bs := by
cases as
cases bs
simp [List.zipWithAll_map_right]
@[grind =]
theorem map_zipWithAll {δ : Type _} {f : α → β} {g : Option γ → Option δ → α} {cs : Array γ} {ds : Array δ} :
map f (zipWithAll g cs ds) = zipWithAll (fun x y => f (g x y)) cs ds := by
cases cs
cases ds
simp [List.map_zipWithAll]
@[simp] theorem zipWithAll_replicate {a : α} {b : β} {n : Nat} :
@[simp, grind =] theorem zipWithAll_replicate {a : α} {b : β} {n : Nat} :
zipWithAll f (replicate n a) (replicate n b) = replicate n (f (some a) (some b)) := by
simp [ List.toArray_replicate]
@@ -342,6 +362,7 @@ theorem unzip_fst : (unzip l).fst = l.map Prod.fst := by
theorem unzip_snd : (unzip l).snd = l.map Prod.snd := by
simp
@[grind =]
theorem unzip_eq_map {xs : Array (α × β)} : unzip xs = (xs.map Prod.fst, xs.map Prod.snd) := by
cases xs
simp [List.unzip_eq_map]
@@ -375,9 +396,11 @@ theorem zip_of_prod {as : Array α} {bs : Array β} {xs : Array (α × β)} (hl
(hr : xs.map Prod.snd = bs) : xs = as.zip bs := by
rw [← hl, ← hr, ← zip_unzip xs, fst_unzip, snd_unzip, zip_unzip, zip_unzip]
@[simp] theorem unzip_replicate {n : Nat} {a : α} {b : β} :
@[simp, grind =] theorem unzip_replicate {n : Nat} {a : α} {b : β} :
unzip (replicate n (a, b)) = (replicate n a, replicate n b) := by
ext1 <;> simp
@[deprecated unzip_replicate (since := "2025-03-18")]
abbrev unzip_mkArray := @unzip_replicate
end Array

View File

@@ -174,7 +174,7 @@ recommended_spelling "zero" for "0#n" in [BitVec.ofNat, «term__#__»]
recommended_spelling "one" for "1#n" in [BitVec.ofNat, «term__#__»]
/-- Unexpander for bitvector literals. -/
@[app_unexpander BitVec.ofNat] def unexpandBitVecOfNat : Lean.PrettyPrinter.Unexpander
@[app_unexpander BitVec.ofNat] meta def unexpandBitVecOfNat : Lean.PrettyPrinter.Unexpander
| `($(_) $n $i:num) => `($i:num#$n)
| _ => throw ()
@@ -183,7 +183,7 @@ scoped syntax:max term:max noWs "#'" noWs term:max : term
macro_rules | `($i#'$p) => `(BitVec.ofNatLT $i $p)
/-- Unexpander for bitvector literals without truncation. -/
@[app_unexpander BitVec.ofNatLT] def unexpandBitVecOfNatLt : Lean.PrettyPrinter.Unexpander
@[app_unexpander BitVec.ofNatLT] meta def unexpandBitVecOfNatLt : Lean.PrettyPrinter.Unexpander
| `($(_) $i $p) => `($i#'$p)
| _ => throw ()

View File

@@ -1023,7 +1023,7 @@ theorem DivModState.toNat_shiftRight_sub_one_eq
{args : DivModArgs w} {qr : DivModState w} (h : qr.Poised args) :
args.n.toNat >>> (qr.wn - 1)
= (args.n.toNat >>> qr.wn) * 2 + (args.n.getLsbD (qr.wn - 1)).toNat := by
show BitVec.toNat (args.n >>> (qr.wn - 1)) = _
change BitVec.toNat (args.n >>> (qr.wn - 1)) = _
have {..} := h -- break the structure down for `omega`
rw [shiftRight_sub_one_eq_shiftConcat args.n h.hwn_lt]
rw [toNat_shiftConcat_eq_of_lt (k := w - qr.wn)]

View File

@@ -319,6 +319,7 @@ theorem ofFin_ofNat (n : Nat) :
@[simp] theorem ofFin_neg {x : Fin (2 ^ w)} : ofFin (-x) = -(ofFin x) := by
rfl
open Fin.NatCast in
@[simp, norm_cast] theorem ofFin_natCast (n : Nat) : ofFin (n : Fin (2^w)) = (n : BitVec w) := by
rfl
@@ -337,6 +338,7 @@ theorem toFin_zero : toFin (0 : BitVec w) = 0 := rfl
theorem toFin_one : toFin (1 : BitVec w) = 1 := by
rw [toFin_inj]; simp only [ofNat_eq_ofNat, ofFin_ofNat]
open Fin.NatCast in
@[simp, norm_cast] theorem toFin_natCast (n : Nat) : toFin (n : BitVec w) = (n : Fin (2^w)) := by
rfl
@@ -880,6 +882,19 @@ theorem slt_eq_sle_and_ne {x y : BitVec w} : x.slt y = (x.sle y && x != y) := by
apply Bool.eq_iff_iff.2
simp [BitVec.slt, BitVec.sle, Int.lt_iff_le_and_ne, BitVec.toInt_inj]
/-- For all bitvectors `x, y`, either `x` is signed less than `y`,
or is equal to `y`, or is signed greater than `y`. -/
theorem slt_trichotomy (x y : BitVec w) : x.slt y ∨ x = y ∨ y.slt x := by
simpa [slt_iff_toInt_lt, toInt_inj]
using Int.lt_trichotomy x.toInt y.toInt
/-- For all bitvectors `x, y`, either `x` is unsigned less than `y`,
or is equal to `y`, or is unsigned greater than `y`. -/
theorem lt_trichotomy (x y : BitVec w) :
x < y ∨ x = y ∨ y < x := by
simpa [← ult_iff_lt, ult_eq_decide, decide_eq_true_eq, toNat_inj]
using Nat.lt_trichotomy x.toNat y.toNat
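A hypothetical consumer of the new trichotomy lemma (not part of the diff): two bitvectors, neither of which is less than the other, must be equal.

```lean
example {w : Nat} (x y : BitVec w) (h₁ : ¬ x < y) (h₂ : ¬ y < x) : x = y := by
  rcases BitVec.lt_trichotomy x y with h | h | h
  · exact absurd h h₁
  · exact h
  · exact absurd h h₂
```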
/-! ### setWidth, zeroExtend and truncate -/
@[simp]

View File

@@ -183,10 +183,7 @@ theorem foldrM_loop [Monad m] [LawfulMonad m] (f : Fin (n+1) → α → m α) (x
| zero =>
rw [foldrM_loop_zero, foldrM_loop_succ, pure_bind]
conv => rhs; rw [bind_pure (f 0 x)]
congr
try -- TODO: block can be deleted after bootstrapping
funext
simp [foldrM_loop_zero]
rfl
| succ i ih =>
rw [foldrM_loop_succ, foldrM_loop_succ, bind_assoc]
congr; funext; exact ih ..

View File

@@ -102,9 +102,30 @@ theorem dite_val {n : Nat} {c : Prop} [Decidable c] {x y : Fin n} :
(if c then x else y).val = if c then x.val else y.val := by
by_cases c <;> simp [*]
instance (n : Nat) [NeZero n] : NatCast (Fin n) where
namespace NatCast
/--
This is not a global instance, but may be activated locally via `open Fin.NatCast in ...`.
This is not an instance because the `binop%` elaborator assumes that
there are no non-trivial coercion loops,
but this introduces a coercion from `Nat` to `Fin n` and back.
Non-trivial loops lead to undesirable and counterintuitive elaboration behavior.
For example, for `x : Fin k` and `n : Nat`,
it causes `x < n` to be elaborated as `x < ↑n` rather than `↑x < n`,
silently introducing wraparound arithmetic.
Note: as of 2025-06-03, Mathlib has such a coercion for `Fin n` anyway!
-/
@[expose]
def instNatCast (n : Nat) [NeZero n] : NatCast (Fin n) where
natCast a := Fin.ofNat n a
attribute [scoped instance] instNatCast
end NatCast
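A sketch of the usage pattern described in the doc-string: the coercion only becomes available after opening the scoped instance.

```lean
-- Without `open Fin.NatCast`, `(k : Fin (n + 1))` would not elaborate.
open Fin.NatCast in
example (k n : Nat) : Fin (n + 1) := (k : Fin (n + 1))
```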
@[expose]
def intCast [NeZero n] (a : Int) : Fin n :=
if 0 ≤ a then
@@ -112,9 +133,22 @@ def intCast [NeZero n] (a : Int) : Fin n :=
else
- Fin.ofNat n a.natAbs
instance (n : Nat) [NeZero n] : IntCast (Fin n) where
namespace IntCast
/--
This is not a global instance, but may be activated locally via `open Fin.IntCast in ...`.
See the doc-string for `Fin.NatCast.instNatCast` for more details.
-/
@[expose]
def instIntCast (n : Nat) [NeZero n] : IntCast (Fin n) where
intCast := Fin.intCast
attribute [scoped instance] instIntCast
end IntCast
open IntCast in
theorem intCast_def {n : Nat} [NeZero n] (x : Int) :
(x : Fin n) = if 0 ≤ x then Fin.ofNat n x.natAbs else -Fin.ofNat n x.natAbs := rfl
@@ -1045,6 +1079,17 @@ theorem val_neg {n : Nat} [NeZero n] (x : Fin n) :
have := Fin.val_ne_zero_iff.mpr h
omega
protected theorem sub_eq_add_neg {n : Nat} (x y : Fin n) : x - y = x + -y := by
by_cases h : n = 0
· subst h
apply elim0 x
· replace h : NeZero n := ⟨h⟩
ext
rw [Fin.coe_sub, Fin.val_add, val_neg]
split
· simp_all
· simp [Nat.add_comm]
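A concrete (hypothetical) instance of the new lemma:

```lean
example (x y : Fin 7) : x - y = x + -y := Fin.sub_eq_add_neg x y
```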
/-! ### mul -/
theorem ofNat_mul [NeZero n] (x : Nat) (y : Fin n) :

View File

@@ -3,7 +3,6 @@ Copyright (c) 2016 Jeremy Avigad. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jeremy Avigad, Mario Carneiro
-/
module
prelude
@@ -99,7 +98,7 @@ theorem ofNat_emod (m n : Nat) : (↑(m % n) : Int) = m % n := natCast_emod m n
theorem emod_add_ediv : ∀ a b : Int, a % b + b * (a / b) = a
| ofNat _, ofNat _ => congrArg ofNat <| Nat.mod_add_div ..
| ofNat m, -[n+1] => by
show (m % succ n + -(succ n) * -(m / succ n) : Int) = m
change (m % succ n + -(succ n) * -(m / succ n) : Int) = m
rw [Int.neg_mul_neg]; exact congrArg ofNat <| Nat.mod_add_div ..
| -[_+1], 0 => by rw [emod_zero]; rfl
| -[m+1], succ n => aux m n.succ
@@ -149,7 +148,7 @@ theorem add_mul_ediv_right (a b : Int) {c : Int} (H : c ≠ 0) : (a + b * c) / c
fun {k n} => @fun
| ofNat _ => congrArg ofNat <| Nat.add_mul_div_right _ _ k.succ_pos
| -[m+1] => by
show ((n * k.succ : Nat) - m.succ : Int).ediv k.succ = n - (m / k.succ + 1 : Nat)
change ((n * k.succ : Nat) - m.succ : Int).ediv k.succ = n - (m / k.succ + 1 : Nat)
by_cases h : m < n * k.succ
rw [← Int.ofNat_sub h, ← Int.ofNat_sub ((Nat.div_lt_iff_lt_mul k.succ_pos).2 h)]
apply congrArg ofNat
@@ -158,7 +157,7 @@ theorem add_mul_ediv_right (a b : Int) {c : Int} (H : c ≠ 0) : (a + b * c) / c
have H {a b : Nat} (h : a ≤ b) : (a : Int) + -((b : Int) + 1) = -[b - a +1] := by
rw [negSucc_eq, Int.ofNat_sub h]
simp only [Int.sub_eq_add_neg, Int.neg_add, Int.neg_neg, Int.add_left_comm, Int.add_assoc]
show ediv ((n * succ k) + -((m : Int) + 1)) (succ k) = n + -((m / succ k) + 1 : Int)
change ediv ((n * succ k) + -((m : Int) + 1)) (succ k) = n + -((m / succ k) + 1 : Int)
rw [H h, H ((Nat.le_div_iff_mul_le k.succ_pos).2 h)]
apply congrArg negSucc
rw [Nat.mul_comm, Nat.sub_mul_div_of_le]; rwa [Nat.mul_comm]

View File

@@ -3,7 +3,6 @@ Copyright (c) 2016 Jeremy Avigad. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jeremy Avigad, Mario Carneiro, Kim Morrison, Markus Himmel
-/
module
prelude
@@ -203,6 +202,9 @@ theorem tdiv_eq_ediv_of_nonneg : ∀ {a b : Int}, 0 ≤ a → a.tdiv b = a / b
| succ _, succ _, _ => rfl
| succ _, -[_+1], _ => rfl
@[simp] theorem natCast_tdiv_eq_ediv {a : Nat} {b : Int} : (a : Int).tdiv b = a / b :=
tdiv_eq_ediv_of_nonneg (by simp)
theorem tdiv_eq_ediv {a b : Int} :
a.tdiv b = a / b + if 0 ≤ a ∨ b ∣ a then 0 else sign b := by
simp only [dvd_iff_emod_eq_zero]
@@ -329,17 +331,17 @@ theorem fdiv_eq_ediv_of_dvd {a b : Int} (h : b a) : a.fdiv b = a / b := by
theorem tmod_add_tdiv : ∀ a b : Int, tmod a b + b * (a.tdiv b) = a
| ofNat _, ofNat _ => congrArg ofNat (Nat.mod_add_div ..)
| ofNat m, -[n+1] => by
show (m % succ n + -(succ n) * -(m / succ n) : Int) = m
change (m % succ n + -(succ n) * -(m / succ n) : Int) = m
rw [Int.neg_mul_neg]; exact congrArg ofNat (Nat.mod_add_div ..)
| -[m+1], 0 => by
show -(((succ m) % 0) : Int) + 0 * -(succ m / 0) = -(succ m)
change -(((succ m) % 0) : Int) + 0 * -(succ m / 0) = -(succ m)
rw [Nat.mod_zero, Int.zero_mul, Int.add_zero]
| -[m+1], ofNat n => by
show -(((succ m) % n) : Int) + n * -(succ m / n) = -(succ m)
change -(((succ m) % n) : Int) + n * -(succ m / n) = -(succ m)
rw [Int.mul_neg, Int.neg_add]
exact congrArg (-ofNat ·) (Nat.mod_add_div ..)
| -[m+1], -[n+1] => by
show -((succ m % succ n) : Int) + -(succ n) * (succ m / succ n) = -(succ m)
change -((succ m % succ n) : Int) + -(succ n) * (succ m / succ n) = -(succ m)
rw [Int.neg_mul, Int.neg_add]
exact congrArg (-ofNat ·) (Nat.mod_add_div ..)
@@ -361,17 +363,17 @@ theorem fmod_add_fdiv : ∀ a b : Int, a.fmod b + b * a.fdiv b = a
| 0, ofNat _ | 0, -[_+1] => congrArg ofNat <| by simp
| succ _, ofNat _ => congrArg ofNat <| Nat.mod_add_div ..
| succ m, -[n+1] => by
show subNatNat (m % succ n) n + ((succ n * (m / succ n)) + n + 1) = (m + 1)
change subNatNat (m % succ n) n + ((succ n * (m / succ n)) + n + 1) = (m + 1)
rw [Int.add_comm _ n, Int.add_assoc, Int.add_assoc,
Int.subNatNat_eq_coe, Int.sub_add_cancel]
exact congrArg (ofNat · + 1) <| Nat.mod_add_div ..
| -[_+1], 0 => by rw [fmod_zero]; rfl
| -[m+1], succ n => by
show subNatNat .. - ((succ n * (m / succ n)) + (succ n)) = -(succ m)
change subNatNat .. - ((succ n * (m / succ n)) + (succ n)) = -(succ m)
rw [Int.subNatNat_eq_coe, Int.sub_sub, Int.neg_sub, Int.sub_sub, Int.sub_sub_self]
exact congrArg (-ofNat ·) <| Nat.succ_add .. ▸ Nat.mod_add_div .. ▸ rfl
| -[m+1], -[n+1] => by
show -((succ m % succ n) : Int) + -(succ n * (succ m / succ n)) = -(succ m)
change -((succ m % succ n) : Int) + -(succ n * (succ m / succ n)) = -(succ m)
rw [← Int.neg_add]; exact congrArg (-ofNat ·) <| Nat.mod_add_div ..
/-- Variant of `fmod_add_fdiv` with the multiplication written the other way around. -/
@@ -572,7 +574,7 @@ theorem neg_one_ediv (b : Int) : -1 / b = -b.sign :=
· refine Nat.le_trans ?_ (Nat.le_add_right _ _)
rw [← Nat.mul_div_mul_left _ _ m.succ_pos]
apply Nat.div_mul_le_self
· show m.succ * n.succ ≤ _
· change m.succ * n.succ ≤ _
rw [Nat.mul_left_comm]
apply Nat.mul_le_mul_left
apply (Nat.div_lt_iff_lt_mul k.succ_pos).1
@@ -2745,7 +2747,7 @@ theorem bmod_lt {x : Int} {m : Nat} (h : 0 < m) : bmod x m < (m + 1) / 2 := by
split
· assumption
· apply Int.lt_of_lt_of_le
· show _ < 0
· change _ < 0
have : x % m < m := emod_lt_of_pos x (natCast_pos.mpr h)
exact Int.sub_neg_of_lt this
· exact Int.le.intro_sub _ rfl

View File

@@ -339,7 +339,7 @@ protected theorem add_sub_assoc (a b c : Int) : a + b - c = a + (b - c) := by
match m with
| 0 => rfl
| succ m =>
show ofNat (n - succ m) = subNatNat n (succ m)
change ofNat (n - succ m) = subNatNat n (succ m)
rw [subNatNat, Nat.sub_eq_zero_of_le h]
@[deprecated negSucc_eq (since := "2025-03-11")]

View File

@@ -1665,7 +1665,7 @@ theorem natCast_sub (x y : Nat)
(NatCast.natCast x : Int) + -1*NatCast.natCast y
else
(0 : Int) := by
show ((x - y) : Int) = if (y : Int) + (-1)*x ≤ 0 then x + (-1)*y else 0
change ((x - y) : Int) = if (y : Int) + (-1)*x ≤ 0 then (x : Int) + (-1)*y else 0
rw [Int.neg_mul, Int.sub_eq_add_neg, Int.one_mul]
rw [Int.neg_mul, Int.sub_eq_add_neg, Int.one_mul]
split

View File

@@ -9,6 +9,7 @@ prelude
import Init.SimpLemmas
import Init.Data.Nat.Basic
import Init.Data.List.Notation
import Init.Data.Nat.Div.Basic
@[expose] section
@@ -1624,8 +1625,8 @@ def find? (p : α → Bool) : List α → Option α
| true => some a
| false => find? p as
@[simp] theorem find?_nil : ([] : List α).find? p = none := rfl
theorem find?_cons : (a::as).find? p = match p a with | true => some a | false => as.find? p :=
@[simp, grind =] theorem find?_nil : ([] : List α).find? p = none := rfl
@[grind =] theorem find?_cons : (a::as).find? p = match p a with | true => some a | false => as.find? p :=
rfl
/-! ### findSome? -/
@@ -1845,8 +1846,8 @@ def lookup [BEq α] : α → List (α × β) → Option β
| true => some b
| false => lookup a as
@[simp] theorem lookup_nil [BEq α] : ([] : List (α × β)).lookup a = none := rfl
theorem lookup_cons [BEq α] {k : α} :
@[simp, grind =] theorem lookup_nil [BEq α] : ([] : List (α × β)).lookup a = none := rfl
@[grind =] theorem lookup_cons [BEq α] {k : α} :
((k, b)::as).lookup a = match a == k with | true => some b | false => as.lookup a :=
rfl

View File

@@ -23,9 +23,9 @@ open Nat
/-! ### eraseP -/
@[simp] theorem eraseP_nil : [].eraseP p = [] := rfl
@[simp, grind =] theorem eraseP_nil : [].eraseP p = [] := rfl
theorem eraseP_cons {a : α} {l : List α} :
@[grind =] theorem eraseP_cons {a : α} {l : List α} :
(a :: l).eraseP p = bif p a then l else a :: l.eraseP p := rfl
@[simp] theorem eraseP_cons_of_pos {l : List α} {p} (h : p a) : (a :: l).eraseP p = l := by
@@ -92,7 +92,7 @@ theorem exists_or_eq_self_of_eraseP (p) (l : List α) :
let ⟨_, l₁, l₂, _, _, e₁, e₂⟩ := exists_of_eraseP al pa
rw [e₂]; simp [length_append, e₁]
theorem length_eraseP {l : List α} : (l.eraseP p).length = if l.any p then l.length - 1 else l.length := by
@[grind =] theorem length_eraseP {l : List α} : (l.eraseP p).length = if l.any p then l.length - 1 else l.length := by
split <;> rename_i h
· simp only [any_eq_true] at h
obtain ⟨x, m, h⟩ := h
@@ -106,8 +106,13 @@ theorem eraseP_sublist {l : List α} : l.eraseP p <+ l := by
| .inl h => rw [h]; apply Sublist.refl
| .inr ⟨c, l₁, l₂, _, _, h₃, h₄⟩ => rw [h₄, h₃]; simp
grind_pattern eraseP_sublist => l.eraseP p, List.Sublist
theorem eraseP_subset {l : List α} : l.eraseP p ⊆ l := eraseP_sublist.subset
grind_pattern eraseP_subset => l.eraseP p, List.Subset
@[grind →]
protected theorem Sublist.eraseP : l₁ <+ l₂ → l₁.eraseP p <+ l₂.eraseP p
| .slnil => Sublist.refl _
| .cons a s => by
@@ -147,10 +152,12 @@ theorem mem_of_mem_eraseP {l : List α} : a ∈ l.eraseP p → a ∈ l := (erase
· intro; obtain ⟨x, m, h⟩ := h; simp_all
· simp_all
@[grind _=_]
theorem eraseP_map {f : β → α} : ∀ {l : List β}, (map f l).eraseP p = map f (l.eraseP (p ∘ f))
| [] => rfl
| b::l => by by_cases h : p (f b) <;> simp [h, eraseP_map, eraseP_cons_of_pos]
@[grind =]
theorem eraseP_filterMap {f : α → Option β} : ∀ {l : List α},
(filterMap f l).eraseP p = filterMap f (l.eraseP (fun x => match f x with | some y => p y | none => false))
| [] => rfl
@@ -165,6 +172,7 @@ theorem eraseP_filterMap {f : α → Option β} : ∀ {l : List α},
· simp only [w, cond_false]
rw [filterMap_cons_some h, eraseP_filterMap]
@[grind =]
theorem eraseP_filter {f : α → Bool} {l : List α} :
(filter f l).eraseP p = filter f (l.eraseP (fun x => p x && f x)) := by
rw [← filterMap_eq_filter, eraseP_filterMap]
@@ -174,18 +182,19 @@ theorem eraseP_filter {f : α → Bool} {l : List α} :
split <;> split at * <;> simp_all
theorem eraseP_append_left {a : α} (pa : p a) :
∀ {l₁ : List α} l₂, a ∈ l₁ → (l₁++l₂).eraseP p = l₁.eraseP p ++ l₂
∀ {l₁ : List α} l₂, a ∈ l₁ → (l₁ ++ l₂).eraseP p = l₁.eraseP p ++ l₂
| x :: xs, l₂, h => by
by_cases h' : p x <;> simp [h']
rw [eraseP_append_left pa l₂ ((mem_cons.1 h).resolve_left (mt _ h'))]
intro | rfl => exact pa
theorem eraseP_append_right :
∀ {l₁ : List α} l₂, (∀ b ∈ l₁, ¬p b) → eraseP p (l₁++l₂) = l₁ ++ l₂.eraseP p
∀ {l₁ : List α} l₂, (∀ b ∈ l₁, ¬p b) → eraseP p (l₁ ++ l₂) = l₁ ++ l₂.eraseP p
| [], _, _ => rfl
| _ :: _, _, h => by
simp [(forall_mem_cons.1 h).1, eraseP_append_right _ (forall_mem_cons.1 h).2]
@[grind =]
theorem eraseP_append {l₁ l₂ : List α} :
(l₁ ++ l₂).eraseP p = if l₁.any p then l₁.eraseP p ++ l₂ else l₁ ++ l₂.eraseP p := by
split <;> rename_i h
@@ -196,6 +205,7 @@ theorem eraseP_append {l₁ l₂ : List α} :
rw [eraseP_append_right _]
simp_all
@[grind =]
theorem eraseP_replicate {n : Nat} {a : α} {p : α → Bool} :
(replicate n a).eraseP p = if p a then replicate (n - 1) a else replicate n a := by
induction n with
@@ -212,6 +222,7 @@ theorem eraseP_replicate {n : Nat} {a : α} {p : α → Bool} :
(replicate n a).eraseP p = replicate n a := by
rw [eraseP_of_forall_not (by simp_all)]
@[grind →]
protected theorem IsPrefix.eraseP (h : l₁ <+: l₂) : l₁.eraseP p <+: l₂.eraseP p := by
rw [IsPrefix] at h
obtain ⟨t, rfl⟩ := h
@@ -258,13 +269,15 @@ theorem eraseP_eq_iff {p} {l : List α} :
subst p
simp_all
@[grind →]
theorem Pairwise.eraseP (q) : Pairwise p l → Pairwise p (l.eraseP q) :=
Pairwise.sublist <| eraseP_sublist
@[grind]
@[grind →]
theorem Nodup.eraseP (p) : Nodup l → Nodup (l.eraseP p) :=
Pairwise.eraseP p
@[grind =]
theorem eraseP_comm {l : List α} (h : ∀ a ∈ l, ¬ p a ∨ ¬ q a) :
(l.eraseP p).eraseP q = (l.eraseP q).eraseP p := by
induction l with
@@ -357,6 +370,7 @@ theorem exists_erase_eq [LawfulBEq α] {a : α} {l : List α} (h : a ∈ l) :
length (l.erase a) = length l - 1 := by
rw [erase_eq_eraseP]; exact length_eraseP_of_mem h (beq_self_eq_true a)
@[grind =]
theorem length_erase [LawfulBEq α] {a : α} {l : List α} :
length (l.erase a) = if a ∈ l then length l - 1 else length l := by
rw [erase_eq_eraseP, length_eraseP]
@@ -365,11 +379,17 @@ theorem length_erase [LawfulBEq α] {a : α} {l : List α} :
theorem erase_sublist {a : α} {l : List α} : l.erase a <+ l :=
erase_eq_eraseP' a l ▸ eraseP_sublist ..
grind_pattern erase_sublist => l.erase a, List.Sublist
theorem erase_subset {a : α} {l : List α} : l.erase a ⊆ l := erase_sublist.subset
grind_pattern erase_subset => l.erase a, List.Subset
@[grind →]
theorem Sublist.erase (a : α) {l₁ l₂ : List α} (h : l₁ <+ l₂) : l₁.erase a <+ l₂.erase a := by
simp only [erase_eq_eraseP']; exact h.eraseP
@[grind →]
theorem IsPrefix.erase (a : α) {l₁ l₂ : List α} (h : l₁ <+: l₂) : l₁.erase a <+: l₂.erase a := by
simp only [erase_eq_eraseP']; exact h.eraseP
@@ -391,6 +411,7 @@ theorem mem_of_mem_erase {a b : α} {l : List α} (h : a ∈ l.erase b) : a ∈
rw [erase_eq_eraseP', eraseP_eq_self_iff]
simp [forall_mem_ne']
@[grind _=_]
theorem erase_filter [LawfulBEq α] {f : α → Bool} {l : List α} :
(filter f l).erase a = filter f (l.erase a) := by
induction l with
@@ -418,10 +439,12 @@ theorem erase_append_right [LawfulBEq α] {a : α} {l₁ : List α} (l₂ : List
rw [erase_eq_eraseP, erase_eq_eraseP, eraseP_append_right]
intros b h' h''; rw [eq_of_beq h''] at h; exact h h'
@[grind =]
theorem erase_append [LawfulBEq α] {a : α} {l₁ l₂ : List α} :
(l₁ ++ l₂).erase a = if a ∈ l₁ then l₁.erase a ++ l₂ else l₁ ++ l₂.erase a := by
simp [erase_eq_eraseP, eraseP_append]
@[grind =]
theorem erase_replicate [LawfulBEq α] {n : Nat} {a b : α} :
(replicate n a).erase b = if b == a then replicate (n - 1) a else replicate n a := by
rw [erase_eq_eraseP]
@@ -429,6 +452,7 @@ theorem erase_replicate [LawfulBEq α] {n : Nat} {a b : α} :
-- The arguments `a b` are explicit,
-- so they can be specified to prevent `simp` repeatedly applying the lemma.
@[grind =]
theorem erase_comm [LawfulBEq α] (a b : α) {l : List α} :
(l.erase a).erase b = (l.erase b).erase a := by
if ab : a == b then rw [eq_of_beq ab] else ?_
@@ -468,6 +492,7 @@ theorem erase_eq_iff [LawfulBEq α] {a : α} {l : List α} :
rw [erase_of_not_mem]
simp_all
@[grind →]
theorem Pairwise.erase [LawfulBEq α] {l : List α} (a) : Pairwise p l → Pairwise p (l.erase a) :=
Pairwise.sublist <| erase_sublist
@@ -520,6 +545,7 @@ end erase
/-! ### eraseIdx -/
@[grind =]
theorem length_eraseIdx {l : List α} {i : Nat} :
(l.eraseIdx i).length = if i < l.length then l.length - 1 else l.length := by
induction l generalizing i with
@@ -537,8 +563,9 @@ theorem length_eraseIdx_of_lt {l : List α} {i} (h : i < length l) :
(l.eraseIdx i).length = length l - 1 := by
simp [length_eraseIdx, h]
@[simp] theorem eraseIdx_zero {l : List α} : eraseIdx l 0 = l.tail := by cases l <;> rfl
@[simp, grind =] theorem eraseIdx_zero {l : List α} : eraseIdx l 0 = l.tail := by cases l <;> rfl
@[grind =]
theorem eraseIdx_eq_take_drop_succ :
∀ (l : List α) (i : Nat), l.eraseIdx i = l.take i ++ l.drop (i + 1)
| nil, _ => by simp
@@ -565,6 +592,7 @@ theorem eraseIdx_ne_nil_iff {l : List α} {i : Nat} : eraseIdx l i ≠ [] ↔ 2
@[deprecated eraseIdx_ne_nil_iff (since := "2025-01-30")]
abbrev eraseIdx_ne_nil := @eraseIdx_ne_nil_iff
@[grind]
theorem eraseIdx_sublist : ∀ (l : List α) (k : Nat), eraseIdx l k <+ l
| [], _ => by simp
| a::l, 0 => by simp
@@ -573,6 +601,7 @@ theorem eraseIdx_sublist : ∀ (l : List α) (k : Nat), eraseIdx l k <+ l
theorem mem_of_mem_eraseIdx {l : List α} {i : Nat} {a : α} (h : a ∈ l.eraseIdx i) : a ∈ l :=
(eraseIdx_sublist _ _).mem h
@[grind]
theorem eraseIdx_subset {l : List α} {k : Nat} : eraseIdx l k ⊆ l :=
(eraseIdx_sublist _ _).subset
@@ -612,6 +641,15 @@ theorem eraseIdx_append_of_length_le {l : List α} {k : Nat} (hk : length l ≤
| zero => simp_all
| succ k => simp_all [eraseIdx_cons_succ, Nat.succ_sub_succ]
@[grind =]
theorem eraseIdx_append :
eraseIdx (l ++ l') k = if k < length l then eraseIdx l k ++ l' else l ++ eraseIdx l' (k - length l) := by
split <;> rename_i h
· simp [eraseIdx_append_of_lt_length h]
· rw [eraseIdx_append_of_length_le]
omega
@[grind =]
theorem eraseIdx_replicate {n : Nat} {a : α} {k : Nat} :
(replicate n a).eraseIdx k = if k < n then replicate (n - 1) a else replicate n a := by
split <;> rename_i h
@@ -623,12 +661,15 @@ theorem eraseIdx_replicate {n : Nat} {a : α} {k : Nat} :
exact m.2
· rw [eraseIdx_of_length_le (by simpa using h)]
@[grind →]
theorem Pairwise.eraseIdx {l : List α} (k) : Pairwise p l → Pairwise p (l.eraseIdx k) :=
Pairwise.sublist <| eraseIdx_sublist _ _
@[grind →]
theorem Nodup.eraseIdx {l : List α} (k) : Nodup l → Nodup (l.eraseIdx k) :=
Pairwise.eraseIdx k
@[grind →]
protected theorem IsPrefix.eraseIdx {l l' : List α} (h : l <+: l') (k : Nat) :
eraseIdx l k <+: eraseIdx l' k := by
rcases h with ⟨t, rfl⟩

View File

@@ -23,14 +23,14 @@ Examples:
-/
def finRange (n : Nat) : List (Fin n) := ofFn fun i => i
@[simp] theorem length_finRange {n : Nat} : (List.finRange n).length = n := by
@[simp, grind =] theorem length_finRange {n : Nat} : (List.finRange n).length = n := by
simp [List.finRange]
@[simp] theorem getElem_finRange {i : Nat} (h : i < (List.finRange n).length) :
@[simp, grind =] theorem getElem_finRange {i : Nat} (h : i < (List.finRange n).length) :
(finRange n)[i] = Fin.cast length_finRange ⟨i, h⟩ := by
simp [List.finRange]
@[simp] theorem finRange_zero : finRange 0 = [] := by simp [finRange]
@[simp, grind =] theorem finRange_zero : finRange 0 = [] := by simp [finRange]
theorem finRange_succ {n} : finRange (n+1) = 0 :: (finRange n).map Fin.succ := by
apply List.ext_getElem; simp; intro i; cases i <;> simp
@@ -46,6 +46,7 @@ theorem finRange_succ_last {n} :
· rfl
· next h => exact Fin.eq_last_of_not_lt h
@[grind _=_]
theorem finRange_reverse {n} : (finRange n).reverse = (finRange n).map Fin.rev := by
induction n with
| zero => simp

View File

@@ -45,7 +45,7 @@ theorem exists_of_findSome?_eq_some {l : List α} {f : α → Option β} (w : l.
simp_all only [findSome?_cons, mem_cons, exists_eq_or_imp]
split at w <;> simp_all
@[simp] theorem findSome?_eq_none_iff : findSome? p l = none ↔ ∀ x ∈ l, p x = none := by
@[simp, grind =] theorem findSome?_eq_none_iff : findSome? p l = none ↔ ∀ x ∈ l, p x = none := by
induction l <;> simp [findSome?_cons]; split <;> simp [*]
@[simp] theorem findSome?_isSome_iff {f : α → Option β} {l : List α} :
@@ -91,7 +91,7 @@ theorem findSome?_eq_some_iff {f : α → Option β} {l : List α} {b : β} :
obtain ⟨rfl, rfl, rfl⟩ := h₁
exact ⟨l₁, a, l₂, rfl, h₂, fun a' w => h₃ a' (mem_cons_of_mem p w)⟩
@[simp] theorem findSome?_guard {l : List α} : findSome? (Option.guard fun x => p x) l = find? p l := by
@[simp, grind =] theorem findSome?_guard {l : List α} : findSome? (Option.guard p) l = find? p l := by
induction l with
| nil => simp
| cons x xs ih =>
@@ -103,32 +103,33 @@ theorem findSome?_eq_some_iff {f : α → Option β} {l : List α} {b : β} :
· simp only [Option.guard_eq_none_iff] at h
simp [ih, h]
theorem find?_eq_findSome?_guard {l : List α} : find? p l = findSome? (Option.guard fun x => p x) l :=
theorem find?_eq_findSome?_guard {l : List α} : find? p l = findSome? (Option.guard p) l :=
findSome?_guard.symm
@[simp] theorem head?_filterMap {f : α → Option β} {l : List α} : (l.filterMap f).head? = l.findSome? f := by
@[simp, grind =] theorem head?_filterMap {f : α → Option β} {l : List α} : (l.filterMap f).head? = l.findSome? f := by
induction l with
| nil => simp
| cons x xs ih =>
simp only [filterMap_cons, findSome?_cons]
split <;> simp [*]
@[simp] theorem head_filterMap {f : α → Option β} {l : List α} (h) :
@[simp, grind =] theorem head_filterMap {f : α → Option β} {l : List α} (h) :
(l.filterMap f).head h = (l.findSome? f).get (by simp_all [Option.isSome_iff_ne_none]) := by
simp [head_eq_iff_head?_eq_some]
@[simp] theorem getLast?_filterMap {f : α → Option β} {l : List α} : (l.filterMap f).getLast? = l.reverse.findSome? f := by
@[simp, grind =] theorem getLast?_filterMap {f : α → Option β} {l : List α} : (l.filterMap f).getLast? = l.reverse.findSome? f := by
rw [getLast?_eq_head?_reverse]
simp [ filterMap_reverse]
@[simp] theorem getLast_filterMap {f : α → Option β} {l : List α} (h) :
@[simp, grind =] theorem getLast_filterMap {f : α → Option β} {l : List α} (h) :
(l.filterMap f).getLast h = (l.reverse.findSome? f).get (by simp_all [Option.isSome_iff_ne_none]) := by
simp [getLast_eq_iff_getLast?_eq_some]
@[simp] theorem map_findSome? {f : α → Option β} {g : β → γ} {l : List α} :
@[simp, grind _=_] theorem map_findSome? {f : α → Option β} {g : β → γ} {l : List α} :
(l.findSome? f).map g = l.findSome? (Option.map g ∘ f) := by
induction l <;> simp [findSome?_cons]; split <;> simp [*]
@[grind _=_]
theorem findSome?_map {f : β → γ} {l : List β} : findSome? p (l.map f) = l.findSome? (p ∘ f) := by
induction l with
| nil => simp
@@ -136,15 +137,18 @@ theorem findSome?_map {f : β → γ} {l : List β} : findSome? p (l.map f) = l.
simp only [map_cons, findSome?]
split <;> simp_all
@[grind =]
theorem head_flatten {L : List (List α)} (h : ∃ l, l ∈ L ∧ l ≠ []) :
(flatten L).head (by simpa using h) = (L.findSome? fun l => l.head?).get (by simpa using h) := by
(flatten L).head (by simpa using h) = (L.findSome? head?).get (by simpa using h) := by
simp [head_eq_iff_head?_eq_some, head?_flatten]
@[grind =]
theorem getLast_flatten {L : List (List α)} (h : ∃ l, l ∈ L ∧ l ≠ []) :
(flatten L).getLast (by simpa using h) =
(L.reverse.findSome? fun l => l.getLast?).get (by simpa using h) := by
(L.reverse.findSome? getLast?).get (by simpa using h) := by
simp [getLast_eq_iff_getLast?_eq_some, getLast?_flatten]
@[grind =]
theorem findSome?_replicate : findSome? f (replicate n a) = if n = 0 then none else f a := by
cases n with
| zero => simp
@@ -174,6 +178,9 @@ theorem Sublist.findSome?_isSome {l₁ l₂ : List α} (h : l₁ <+ l₂) :
· simp_all
· exact ih
grind_pattern Sublist.findSome?_isSome => l₁ <+ l₂, l₁.findSome? f
grind_pattern Sublist.findSome?_isSome => l₁ <+ l₂, l₂.findSome? f
theorem Sublist.findSome?_eq_none {l₁ l₂ : List α} (h : l₁ <+ l₂) :
l₂.findSome? f = none → l₁.findSome? f = none := by
simp only [List.findSome?_eq_none_iff, Bool.not_eq_true]
@@ -185,16 +192,30 @@ theorem IsPrefix.findSome?_eq_some {l₁ l₂ : List α} {f : α → Option β}
obtain ⟨t, rfl⟩ := h
simp +contextual [findSome?_append]
grind_pattern IsPrefix.findSome?_eq_some => l₁ <+: l₂, l₁.findSome? f, some b
grind_pattern IsPrefix.findSome?_eq_some => l₁ <+: l₂, l₂.findSome? f, some b
theorem IsPrefix.findSome?_eq_none {l₁ l₂ : List α} {f : α → Option β} (h : l₁ <+: l₂) :
List.findSome? f l₂ = none → List.findSome? f l₁ = none :=
h.sublist.findSome?_eq_none
grind_pattern IsPrefix.findSome?_eq_none => l₁ <+: l₂, l₂.findSome? f
grind_pattern IsPrefix.findSome?_eq_none => l₁ <+: l₂, l₁.findSome? f
theorem IsSuffix.findSome?_eq_none {l₁ l₂ : List α} {f : α → Option β} (h : l₁ <:+ l₂) :
List.findSome? f l₂ = none → List.findSome? f l₁ = none :=
h.sublist.findSome?_eq_none
grind_pattern IsSuffix.findSome?_eq_none => l₁ <:+ l₂, l₂.findSome? f
grind_pattern IsSuffix.findSome?_eq_none => l₁ <:+ l₂, l₁.findSome? f
theorem IsInfix.findSome?_eq_none {l₁ l₂ : List α} {f : α → Option β} (h : l₁ <:+: l₂) :
List.findSome? f l₂ = none → List.findSome? f l₁ = none :=
h.sublist.findSome?_eq_none
grind_pattern IsInfix.findSome?_eq_none => l₁ <:+: l₂, l₂.findSome? f
grind_pattern IsInfix.findSome?_eq_none => l₁ <:+: l₂, l₁.findSome? f
/-! ### find? -/
@[simp] theorem find?_cons_of_pos {l} (h : p a) : find? p (a :: l) = some a := by
@@ -203,7 +224,7 @@ theorem IsInfix.findSome?_eq_none {l₁ l₂ : List α} {f : α → Option β} (
@[simp] theorem find?_cons_of_neg {l} (h : ¬p a) : find? p (a :: l) = find? p l := by
simp [find?, h]
@[simp] theorem find?_eq_none : find? p l = none ↔ ∀ x ∈ l, ¬ p x := by
@[simp, grind =] theorem find?_eq_none : find? p l = none ↔ ∀ x ∈ l, ¬ p x := by
induction l <;> simp [find?_cons]; split <;> simp [*]
theorem find?_eq_some_iff_append :
@@ -248,25 +269,28 @@ theorem find?_cons_eq_some : (a :: xs).find? p = some b ↔ (p a ∧ a = b)
rw [find?_cons]
split <;> simp_all
@[simp] theorem find?_isSome {xs : List α} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
@[simp, grind =] theorem find?_isSome {xs : List α} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
induction xs with
| nil => simp
| cons x xs ih =>
simp only [find?_cons, mem_cons, exists_eq_or_imp]
split <;> simp_all
@[grind →]
theorem find?_some : ∀ {l}, find? p l = some a → p a
| b :: l, H => by
by_cases h : p b <;> simp [find?, h] at H
· exact H h
· exact find?_some H
@[grind →]
theorem mem_of_find?_eq_some : ∀ {l}, find? p l = some a → a ∈ l
| b :: l, H => by
by_cases h : p b <;> simp [find?, h] at H
· exact H ▸ .head _
· exact .tail _ (mem_of_find?_eq_some H)
@[grind]
theorem get_find?_mem {xs : List α} {p : α → Bool} (h) : (xs.find? p).get h ∈ xs := by
induction xs with
| nil => simp at h
@@ -278,7 +302,7 @@ theorem get_find?_mem {xs : List α} {p : α → Bool} (h) : (xs.find? p).get h
right
apply ih
@[simp] theorem find?_filter {xs : List α} {p : α → Bool} {q : α → Bool} :
@[simp, grind =] theorem find?_filter {xs : List α} {p : α → Bool} {q : α → Bool} :
(xs.filter p).find? q = xs.find? (fun a => p a && q a) := by
induction xs with
| nil => simp
@@ -288,22 +312,22 @@ theorem get_find?_mem {xs : List α} {p : α → Bool} (h) : (xs.find? p).get h
· simp only [find?_cons]
split <;> simp_all
@[simp] theorem head?_filter {p : α → Bool} {l : List α} : (l.filter p).head? = l.find? p := by
@[simp, grind =] theorem head?_filter {p : α → Bool} {l : List α} : (l.filter p).head? = l.find? p := by
rw [← filterMap_eq_filter, head?_filterMap, findSome?_guard]
@[simp] theorem head_filter {p : α → Bool} {l : List α} (h) :
@[simp, grind =] theorem head_filter {p : α → Bool} {l : List α} (h) :
(l.filter p).head h = (l.find? p).get (by simp_all [Option.isSome_iff_ne_none]) := by
simp [head_eq_iff_head?_eq_some]
@[simp] theorem getLast?_filter {p : α → Bool} {l : List α} : (l.filter p).getLast? = l.reverse.find? p := by
@[simp, grind =] theorem getLast?_filter {p : α → Bool} {l : List α} : (l.filter p).getLast? = l.reverse.find? p := by
rw [getLast?_eq_head?_reverse]
simp [← filter_reverse]
@[simp] theorem getLast_filter {p : α → Bool} {l : List α} (h) :
@[simp, grind =] theorem getLast_filter {p : α → Bool} {l : List α} (h) :
(l.filter p).getLast h = (l.reverse.find? p).get (by simp_all [Option.isSome_iff_ne_none]) := by
simp [getLast_eq_iff_getLast?_eq_some]
@[simp] theorem find?_filterMap {xs : List α} {f : α → Option β} {p : β → Bool} :
@[simp, grind =] theorem find?_filterMap {xs : List α} {f : α → Option β} {p : β → Bool} :
(xs.filterMap f).find? p = (xs.find? (fun a => (f a).any p)).bind f := by
induction xs with
| nil => simp
@@ -313,15 +337,15 @@ theorem get_find?_mem {xs : List α} {p : α → Bool} (h) : (xs.find? p).get h
· simp only [find?_cons]
split <;> simp_all
@[simp] theorem find?_map {f : β → α} {l : List β} : find? p (l.map f) = (l.find? (p ∘ f)).map f := by
@[simp, grind =] theorem find?_map {f : β → α} {l : List β} : find? p (l.map f) = (l.find? (p ∘ f)).map f := by
induction l with
| nil => simp
| cons x xs ih =>
simp only [map_cons, find?]
by_cases h : p (f x) <;> simp [h, ih]
@[simp] theorem find?_flatten {xss : List (List α)} {p : α → Bool} :
xss.flatten.find? p = xss.findSome? (·.find? p) := by
@[simp, grind _=_] theorem find?_flatten {xss : List (List α)} {p : α → Bool} :
xss.flatten.find? p = xss.findSome? (find? p) := by
induction xss with
| nil => simp
| cons _ _ ih =>
@@ -378,7 +402,7 @@ theorem find?_flatten_eq_some_iff {xs : List (List α)} {p : α → Bool} {a :
@[deprecated find?_flatten_eq_some_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_some := @find?_flatten_eq_some_iff
@[simp] theorem find?_flatMap {xs : List α} {f : α → List β} {p : β → Bool} :
@[simp, grind =] theorem find?_flatMap {xs : List α} {f : α → List β} {p : β → Bool} :
(xs.flatMap f).find? p = xs.findSome? (fun x => (f x).find? p) := by
simp [flatMap_def, findSome?_map]; rfl
@@ -386,6 +410,7 @@ theorem find?_flatMap_eq_none_iff {xs : List α} {f : α → List β} {p : β
(xs.flatMap f).find? p = none ↔ ∀ x ∈ xs, ∀ y ∈ f x, !p y := by
simp
@[grind =]
theorem find?_replicate : find? p (replicate n a) = if n = 0 then none else if p a then some a else none := by
cases n
· simp
@@ -430,6 +455,9 @@ theorem Sublist.find?_isSome {l₁ l₂ : List α} (h : l₁ <+ l₂) : (l₁.fi
· simp
· simpa using ih
grind_pattern Sublist.find?_isSome => l₁ <+ l₂, l₁.find? p
grind_pattern Sublist.find?_isSome => l₁ <+ l₂, l₂.find? p
theorem Sublist.find?_eq_none {l₁ l₂ : List α} (h : l₁ <+ l₂) : l₂.find? p = none → l₁.find? p = none := by
simp only [List.find?_eq_none, Bool.not_eq_true]
exact fun w x m => w x (Sublist.mem m h)
@@ -440,16 +468,31 @@ theorem IsPrefix.find?_eq_some {l₁ l₂ : List α} {p : α → Bool} (h : l₁
obtain ⟨t, rfl⟩ := h
simp +contextual [find?_append]
grind_pattern IsPrefix.find?_eq_some => l₁ <+: l₂, l₁.find? p, some b
grind_pattern IsPrefix.find?_eq_some => l₁ <+: l₂, l₂.find? p, some b
theorem IsPrefix.find?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <+: l₂) :
List.find? p l₂ = none → List.find? p l₁ = none :=
h.sublist.find?_eq_none
grind_pattern Sublist.find?_eq_none => l₁ <+ l₂, l₂.find? p
grind_pattern Sublist.find?_eq_none => l₁ <+ l₂, l₁.find? p
theorem IsSuffix.find?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <:+ l₂) :
List.find? p l₂ = none → List.find? p l₁ = none :=
h.sublist.find?_eq_none
grind_pattern IsPrefix.find?_eq_none => l₁ <+: l₂, l₂.find? p
grind_pattern IsPrefix.find?_eq_none => l₁ <+: l₂, l₁.find? p
theorem IsInfix.find?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <:+: l₂) :
List.find? p l₂ = none → List.find? p l₁ = none :=
h.sublist.find?_eq_none
grind_pattern IsSuffix.find?_eq_none => l₁ <:+ l₂, l₂.find? p
grind_pattern IsSuffix.find?_eq_none => l₁ <:+ l₂, l₁.find? p
@[grind =]
theorem find?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : List α}
(H : ∀ (a : α), a ∈ xs → P a) {p : β → Bool} :
(xs.pmap f H).find? p = (xs.attach.find? (fun ⟨a, m⟩ => p (f a (H a m)))).map fun ⟨a, m⟩ => f a (H a m) := by
@@ -482,9 +525,9 @@ private theorem findIdx?_go_eq {p : α → Bool} {xs : List α} {i : Nat} :
ext
simp only [Nat.add_comm i, Function.comp_apply, Nat.add_assoc]
@[simp] theorem findIdx?_nil : ([] : List α).findIdx? p = none := rfl
@[simp, grind =] theorem findIdx?_nil : ([] : List α).findIdx? p = none := rfl
theorem findIdx?_cons :
@[grind =] theorem findIdx?_cons :
(x :: xs).findIdx? p = if p x then some 0 else (xs.findIdx? p).map fun i => i + 1 := by
simp [findIdx?, findIdx?_go_eq]
@@ -493,6 +536,7 @@ theorem findIdx?_cons :
/-! ### findIdx -/
@[grind =]
theorem findIdx_cons {p : α → Bool} {b : α} {l : List α} :
(b :: l).findIdx p = bif p b then 0 else (l.findIdx p) + 1 := by
cases H : p b with
@@ -511,6 +555,7 @@ where
@[simp] theorem findIdx_singleton {a : α} {p : α → Bool} : [a].findIdx p = if p a then 0 else 1 := by
simp [findIdx_cons, findIdx_nil]
@[grind →]
theorem findIdx_of_getElem?_eq_some {xs : List α} (w : xs[xs.findIdx p]? = some y) : p y := by
induction xs with
| nil => simp_all
@@ -520,6 +565,8 @@ theorem findIdx_getElem {xs : List α} {w : xs.findIdx p < xs.length} :
p xs[xs.findIdx p] :=
xs.findIdx_of_getElem?_eq_some (getElem?_eq_getElem w)
grind_pattern findIdx_getElem => xs[xs.findIdx p]
theorem findIdx_lt_length_of_exists {xs : List α} (h : ∃ x ∈ xs, p x) :
xs.findIdx p < xs.length := by
induction xs with
@@ -558,6 +605,8 @@ theorem findIdx_le_length {p : α → Bool} {xs : List α} : xs.findIdx p ≤ xs
· simp at e
exact Nat.le_of_eq (findIdx_eq_length.mpr e)
grind_pattern findIdx_le_length => xs.findIdx p, xs.length
@[simp]
theorem findIdx_lt_length {p : α → Bool} {xs : List α} :
xs.findIdx p < xs.length ↔ ∃ x ∈ xs, p x := by
@@ -567,6 +616,8 @@ theorem findIdx_lt_length {p : α → Bool} {xs : List α} :
rw [← this, findIdx_eq_length, not_exists]
simp only [Bool.not_eq_true, not_and]
grind_pattern findIdx_lt_length => xs.findIdx p, xs.length
/-- `p` does not hold for elements with indices less than `xs.findIdx p`. -/
theorem not_of_lt_findIdx {p : α → Bool} {xs : List α} {i : Nat} (h : i < xs.findIdx p) :
p (xs[i]'(Nat.le_trans h findIdx_le_length)) = false := by
@@ -591,6 +642,8 @@ theorem not_of_lt_findIdx {p : α → Bool} {xs : List α} {i : Nat} (h : i < xs
rw [← ipm, Nat.succ_lt_succ_iff] at h
simpa using ih h
grind_pattern not_of_lt_findIdx => xs.findIdx p, xs[i]
/-- If `¬ p xs[j]` for all `j < i`, then `i ≤ xs.findIdx p`. -/
theorem le_findIdx_of_not {p : α → Bool} {xs : List α} {i : Nat} (h : i < xs.length)
(h2 : ∀ j (hji : j < i), p (xs[j]'(Nat.lt_trans hji h)) = false) : i ≤ xs.findIdx p := by
@@ -618,6 +671,7 @@ theorem findIdx_eq {p : α → Bool} {xs : List α} {i : Nat} (h : i < xs.length
simp at h3
simp_all [not_of_lt_findIdx h3]
@[grind =]
theorem findIdx_append {p : α Bool} {l₁ l₂ : List α} :
(l₁ ++ l₂).findIdx p =
if l₁.findIdx p < l₁.length then l₁.findIdx p else l₂.findIdx p + l₁.length := by
@@ -639,6 +693,9 @@ theorem IsPrefix.findIdx_le {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <+
· exact Nat.le_refl ..
· simp_all [findIdx_eq_length_of_false]
grind_pattern IsPrefix.findIdx_le => l₁ <+: l₂, l₁.findIdx p
grind_pattern IsPrefix.findIdx_le => l₁ <+: l₂, l₂.findIdx p
theorem IsPrefix.findIdx_eq_of_findIdx_lt_length {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <+: l₂)
(lt : l₁.findIdx p < l₁.length) : l₂.findIdx p = l₁.findIdx p := by
rw [IsPrefix] at h
@@ -648,6 +705,8 @@ theorem IsPrefix.findIdx_eq_of_findIdx_lt_length {l₁ l₂ : List α} {p : α
· rfl
· simp_all
grind_pattern IsPrefix.findIdx_eq_of_findIdx_lt_length => l₁ <+: l₂, l₁.findIdx p, l₂.findIdx p
theorem findIdx_le_findIdx {l : List α} {p q : α → Bool} (h : ∀ x ∈ l, p x → q x) : l.findIdx q ≤ l.findIdx p := by
induction l with
| nil => simp
@@ -671,7 +730,7 @@ theorem findIdx_le_findIdx {l : List α} {p q : α → Bool} (h : ∀ x ∈ l, p
/-! ### findIdx? -/
@[simp]
@[simp, grind =]
theorem findIdx?_eq_none_iff {xs : List α} {p : α → Bool} :
xs.findIdx? p = none ↔ ∀ x, x ∈ xs → p x = false := by
induction xs with
@@ -680,7 +739,7 @@ theorem findIdx?_eq_none_iff {xs : List α} {p : α → Bool} :
simp only [findIdx?_cons]
split <;> simp_all [cond_eq_if]
@[simp]
@[simp, grind =]
theorem findIdx?_isSome {xs : List α} {p : α → Bool} :
(xs.findIdx? p).isSome = xs.any p := by
induction xs with
@@ -689,7 +748,7 @@ theorem findIdx?_isSome {xs : List α} {p : α → Bool} :
simp only [findIdx?_cons]
split <;> simp_all
@[simp]
@[simp, grind =]
theorem findIdx?_isNone {xs : List α} {p : α → Bool} :
(xs.findIdx? p).isNone = xs.all (¬p ·) := by
induction xs with
@@ -795,14 +854,14 @@ theorem of_findIdx?_eq_none {xs : List α} {p : α → Bool} (w : xs.findIdx? p
@[deprecated of_findIdx?_eq_none (since := "2025-02-02")]
abbrev findIdx?_of_eq_none := @of_findIdx?_eq_none
@[simp] theorem findIdx?_map {f : β → α} {l : List β} : findIdx? p (l.map f) = l.findIdx? (p ∘ f) := by
@[simp, grind _=_] theorem findIdx?_map {f : β → α} {l : List β} : findIdx? p (l.map f) = l.findIdx? (p ∘ f) := by
induction l with
| nil => simp
| cons x xs ih =>
simp only [map_cons, findIdx?_cons]
split <;> simp_all
@[simp] theorem findIdx?_append :
@[simp, grind =] theorem findIdx?_append :
(xs ++ ys : List α).findIdx? p =
(xs.findIdx? p).or ((ys.findIdx? p).map fun i => i + xs.length) := by
induction xs with simp [findIdx?_cons]
@@ -824,7 +883,7 @@ theorem findIdx?_flatten {l : List (List α)} {p : α → Bool} :
· rw [Option.or_of_isNone (by simp_all [findIdx?_isNone])]
simp [Function.comp_def, Nat.add_comm, Nat.add_assoc]
@[simp] theorem findIdx?_replicate :
@[simp, grind =] theorem findIdx?_replicate :
(replicate n a).findIdx? p = if 0 < n ∧ p a then some 0 else none := by
cases n with
| zero => simp
@@ -878,22 +937,38 @@ theorem Sublist.findIdx?_eq_none {l₁ l₂ : List α} (h : l₁ <+ l₂) :
simp only [findIdx?_eq_none_iff]
exact fun w x m => w x (h.mem m)
grind_pattern Sublist.findIdx?_eq_none => l₁ <+ l₂, l₁.findIdx? p
grind_pattern Sublist.findIdx?_eq_none => l₁ <+ l₂, l₂.findIdx? p
theorem IsPrefix.findIdx?_eq_some {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <+: l₂) :
List.findIdx? p l₁ = some i → List.findIdx? p l₂ = some i := by
rw [IsPrefix] at h
obtain ⟨t, rfl⟩ := h
intro h
simp [findIdx?_append, h]
theorem IsPrefix.findIdx?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <+: l₂) :
List.findIdx? p l₂ = none → List.findIdx? p l₁ = none :=
h.sublist.findIdx?_eq_none
grind_pattern IsPrefix.findIdx?_eq_none => l₁ <+: l₂, l₁.findIdx? p
grind_pattern IsPrefix.findIdx?_eq_none => l₁ <+: l₂, l₂.findIdx? p
theorem IsSuffix.findIdx?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <:+ l₂) :
List.findIdx? p l₂ = none → List.findIdx? p l₁ = none :=
h.sublist.findIdx?_eq_none
grind_pattern IsSuffix.findIdx?_eq_none => l₁ <:+ l₂, l₁.findIdx? p
grind_pattern IsSuffix.findIdx?_eq_none => l₁ <:+ l₂, l₂.findIdx? p
theorem IsInfix.findIdx?_eq_none {l₁ l₂ : List α} {p : α → Bool} (h : l₁ <:+: l₂) :
List.findIdx? p l₂ = none → List.findIdx? p l₁ = none :=
h.sublist.findIdx?_eq_none
grind_pattern IsInfix.findIdx?_eq_none => l₁ <:+: l₂, l₁.findIdx? p
grind_pattern IsInfix.findIdx?_eq_none => l₁ <:+: l₂, l₂.findIdx? p
@[grind =]
theorem findIdx_eq_getD_findIdx? {xs : List α} {p : α → Bool} :
xs.findIdx p = (xs.findIdx? p).getD xs.length := by
induction xs with
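An illustrative sketch (not part of the diff): `findIdx_eq_getD_findIdx?` means that when nothing matches, `findIdx` degenerates to `length`.
```lean
-- Sketch, assuming the lemma statement above:
example {xs : List Nat} (h : xs.findIdx? (· > 5) = none) :
    xs.findIdx (· > 5) = xs.length := by
  simp [List.findIdx_eq_getD_findIdx?, h]
```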
@@ -914,7 +989,7 @@ theorem findIdx_eq_getD_findIdx? {xs : List α} {p : α → Bool} :
/-! ### findFinIdx? -/
@[simp] theorem findFinIdx?_nil {p : α → Bool} : findFinIdx? p [] = none := rfl
@[simp, grind =] theorem findFinIdx?_nil {p : α → Bool} : findFinIdx? p [] = none := rfl
theorem findIdx?_go_eq_map_findFinIdx?_go_val {xs : List α} {p : α → Bool} {i : Nat} {h} :
List.findIdx?.go p xs i =
@@ -940,6 +1015,7 @@ theorem findFinIdx?_eq_pmap_findIdx? {xs : List α} {p : α → Bool} :
(fun i h => h) := by
simp [findIdx?_eq_map_findFinIdx?_val, Option.pmap_map]
@[grind =]
theorem findFinIdx?_cons {p : α → Bool} {x : α} {xs : List α} :
findFinIdx? p (x :: xs) = if p x then some 0 else (findFinIdx? p xs).map Fin.succ := by
rw [← Option.map_inj_right (f := Fin.val) (fun a b => Fin.eq_of_val_eq)]
@@ -950,6 +1026,7 @@ theorem findFinIdx?_cons {p : α → Bool} {x : α} {xs : List α} :
· rw [findIdx?_eq_map_findFinIdx?_val]
simp [Function.comp_def]
@[grind =]
theorem findFinIdx?_append {xs ys : List α} {p : α → Bool} :
(xs ++ ys).findFinIdx? p =
((xs.findFinIdx? p).map (Fin.castLE (by simp))).or
@@ -959,11 +1036,11 @@ theorem findFinIdx?_append {xs ys : List α} {p : α → Bool} :
· simp [h, Option.pmap_map, Option.map_pmap, Nat.add_comm]
· simp [h]
@[simp] theorem findFinIdx?_singleton {a : α} {p : α → Bool} :
@[simp, grind =] theorem findFinIdx?_singleton {a : α} {p : α → Bool} :
[a].findFinIdx? p = if p a then some ⟨0, by simp⟩ else none := by
simp [findFinIdx?_cons, findFinIdx?_nil]
@[simp] theorem findFinIdx?_eq_none_iff {l : List α} {p : α → Bool} :
@[simp, grind =] theorem findFinIdx?_eq_none_iff {l : List α} {p : α → Bool} :
l.findFinIdx? p = none ↔ ∀ x ∈ l, ¬ p x := by
simp [findFinIdx?_eq_pmap_findIdx?]
@@ -979,7 +1056,7 @@ theorem findFinIdx?_eq_some_iff {xs : List α} {p : α → Bool} {i : Fin xs.len
· rintro ⟨h, w⟩
exact ⟨⟨i, i.2⟩, h, fun j hji => w ⟨j, by omega⟩ hji, rfl⟩
@[simp]
@[simp, grind =]
theorem isSome_findFinIdx? {l : List α} {p : α → Bool} :
(l.findFinIdx? p).isSome = l.any p := by
induction l with
@@ -988,7 +1065,7 @@ theorem isSome_findFinIdx? {l : List α} {p : α → Bool} :
simp only [findFinIdx?_cons]
split <;> simp_all
@[simp]
@[simp, grind =]
theorem isNone_findFinIdx? {l : List α} {p : α → Bool} :
(l.findFinIdx? p).isNone = l.all (fun x => ¬ p x) := by
induction l with
@@ -1013,6 +1090,7 @@ The verification API for `idxOf` is still incomplete.
The lemmas below should be made consistent with those for `findIdx` (and proved using them).
-/
@[grind =]
theorem idxOf_cons [BEq α] :
(x :: xs : List α).idxOf y = bif x == y then 0 else xs.idxOf y + 1 := by
dsimp [idxOf]
@@ -1027,6 +1105,7 @@ abbrev indexOf_cons := @idxOf_cons
@[deprecated idxOf_cons_self (since := "2025-01-29")]
abbrev indexOf_cons_self := @idxOf_cons_self
@[grind =]
theorem idxOf_append [BEq α] [LawfulBEq α] {l₁ l₂ : List α} {a : α} :
(l₁ ++ l₂).idxOf a = if a ∈ l₁ then l₁.idxOf a else l₂.idxOf a + l₁.length := by
rw [idxOf, findIdx_append]
@@ -1050,7 +1129,7 @@ theorem idxOf_eq_length [BEq α] [LawfulBEq α] {l : List α} (h : a ∉ l) : l.
@[deprecated idxOf_eq_length (since := "2025-01-29")]
abbrev indexOf_eq_length := @idxOf_eq_length
theorem idxOf_lt_length [BEq α] [EquivBEq α] {l : List α} (h : a ∈ l) : l.idxOf a < l.length := by
theorem idxOf_lt_length_of_mem [BEq α] [EquivBEq α] {l : List α} (h : a ∈ l) : l.idxOf a < l.length := by
induction l with
| nil => simp at h
| cons x xs ih =>
@@ -1063,8 +1142,23 @@ theorem idxOf_lt_length [BEq α] [EquivBEq α] {l : List α} (h : a ∈ l) : l.i
· exact zero_lt_succ xs.length
· exact Nat.add_lt_add_right ih 1
@[deprecated idxOf_lt_length (since := "2025-01-29")]
abbrev indexOf_lt_length := @idxOf_lt_length
theorem idxOf_le_length [BEq α] [LawfulBEq α] {l : List α} {a : α} :
l.idxOf a ≤ l.length := by
simpa [idxOf] using findIdx_le_length
grind_pattern idxOf_le_length => l.idxOf a, l.length
theorem idxOf_lt_length_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
l.idxOf a < l.length ↔ a ∈ l := by
constructor
· intro h
simpa [idxOf] using h
· exact idxOf_lt_length_of_mem
grind_pattern idxOf_lt_length_iff => l.idxOf a, l.length
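A hedged sketch (not part of the diff): once a goal mentions both `l.idxOf a` and `l.length`, the patterns above should let `grind` use the bound and the iff together.
```lean
-- Sketch, assuming the pattern registrations above:
example {l : List Nat} {a : Nat} (h : a ∈ l) : l.idxOf a < l.length := by
  grind
```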
@[deprecated idxOf_lt_length_of_mem (since := "2025-01-29")]
abbrev indexOf_lt_length := @idxOf_lt_length_of_mem
/-! ### finIdxOf?
@@ -1076,14 +1170,14 @@ theorem idxOf?_eq_map_finIdxOf?_val [BEq α] {xs : List α} {a : α} :
xs.idxOf? a = (xs.finIdxOf? a).map (·.val) := by
simp [idxOf?, finIdxOf?, findIdx?_eq_map_findFinIdx?_val]
@[simp] theorem finIdxOf?_nil [BEq α] : ([] : List α).finIdxOf? a = none := rfl
@[simp, grind =] theorem finIdxOf?_nil [BEq α] : ([] : List α).finIdxOf? a = none := rfl
theorem finIdxOf?_cons [BEq α] {a : α} {xs : List α} :
@[grind =] theorem finIdxOf?_cons [BEq α] {a : α} {xs : List α} :
(a :: xs).finIdxOf? b =
if a == b then some ⟨0, by simp⟩ else (xs.finIdxOf? b).map (·.succ) := by
simp [finIdxOf?, findFinIdx?_cons]
@[simp] theorem finIdxOf?_eq_none_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
@[simp, grind =] theorem finIdxOf?_eq_none_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
l.finIdxOf? a = none ↔ a ∉ l := by
simp only [finIdxOf?, findFinIdx?_eq_none_iff, beq_iff_eq]
constructor
@@ -1096,18 +1190,19 @@ theorem finIdxOf?_cons [BEq α] {a : α} {xs : List α} :
l.finIdxOf? a = some i ↔ l[i] = a ∧ ∀ j (_ : j < i), ¬l[j] = a := by
simp only [finIdxOf?, findFinIdx?_eq_some_iff, beq_iff_eq]
@[simp]
theorem isSome_finIdxOf? [BEq α] [LawfulBEq α] {l : List α} {a : α} :
(l.finIdxOf? a).isSome ↔ a ∈ l := by
@[simp, grind =]
theorem isSome_finIdxOf? [BEq α] [PartialEquivBEq α] {l : List α} {a : α} :
(l.finIdxOf? a).isSome = l.contains a := by
induction l with
| nil => simp
| cons x xs ih =>
simp only [finIdxOf?_cons]
split <;> simp_all [@eq_comm _ x a]
split <;> simp_all [BEq.comm]
theorem isNone_finIdxOf? [BEq α] [LawfulBEq α] {l : List α} {a : α} :
(l.finIdxOf? a).isNone = ¬ a ∈ l := by
simp
@[simp]
theorem isNone_finIdxOf? [BEq α] [PartialEquivBEq α] {l : List α} {a : α} :
(l.finIdxOf? a).isNone = !l.contains a := by
rw [← isSome_finIdxOf?, Option.not_isSome]
/-! ### idxOf?
@@ -1115,16 +1210,16 @@ The verification API for `idxOf?` is still incomplete.
The lemmas below should be made consistent with those for `findIdx?` (and proved using them).
-/
@[simp] theorem idxOf?_nil [BEq α] : ([] : List α).idxOf? a = none := rfl
@[simp, grind =] theorem idxOf?_nil [BEq α] : ([] : List α).idxOf? a = none := rfl
theorem idxOf?_cons [BEq α] {a : α} {xs : List α} {b : α} :
@[grind =] theorem idxOf?_cons [BEq α] {a : α} {xs : List α} {b : α} :
(a :: xs).idxOf? b = if a == b then some 0 else (xs.idxOf? b).map (· + 1) := by
simp [idxOf?, findIdx?_cons]
@[simp] theorem idxOf?_singleton [BEq α] {a b : α} : [a].idxOf? b = if a == b then some 0 else none := by
simp [idxOf?_cons, idxOf?_nil]
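A small concrete check (not part of the diff): `idxOf?` computes by reduction, so closed cases follow by `rfl`.
```lean
-- Sketch: the first occurrence of 1 is at index 1.
example : [3, 1, 4, 1].idxOf? 1 = some 1 := rfl
```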
@[simp] theorem idxOf?_eq_none_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
@[simp, grind =] theorem idxOf?_eq_none_iff [BEq α] [LawfulBEq α] {l : List α} {a : α} :
l.idxOf? a = none ↔ a ∉ l := by
simp only [idxOf?, findIdx?_eq_none_iff, beq_eq_false_iff_ne, ne_eq]
constructor
@@ -1137,7 +1232,7 @@ theorem idxOf?_cons [BEq α] {a : α} {xs : List α} {b : α} :
@[deprecated idxOf?_eq_none_iff (since := "2025-01-29")]
abbrev indexOf?_eq_none_iff := @idxOf?_eq_none_iff
@[simp]
@[simp, grind =]
theorem isSome_idxOf? [BEq α] [LawfulBEq α] {l : List α} {a : α} :
(l.idxOf? a).isSome ↔ a ∈ l := by
induction l with
@@ -1146,6 +1241,7 @@ theorem isSome_idxOf? [BEq α] [LawfulBEq α] {l : List α} {a : α} :
simp only [idxOf?_cons]
split <;> simp_all [@eq_comm _ x a]
@[grind =]
theorem isNone_idxOf? [BEq α] [LawfulBEq α] {l : List α} {a : α} :
(l.idxOf? a).isNone = ¬ a ∈ l := by
simp
@@ -1172,7 +1268,7 @@ theorem lookup_eq_findSome? {l : List (α × β)} {k : α} :
simp only [lookup_cons, findSome?_cons]
split <;> simp_all
@[simp] theorem lookup_eq_none_iff {l : List (α × β)} {k : α} :
@[simp, grind =] theorem lookup_eq_none_iff {l : List (α × β)} {k : α} :
l.lookup k = none ↔ ∀ p ∈ l, k != p.1 := by
simp [lookup_eq_findSome?]
@@ -1192,10 +1288,12 @@ theorem lookup_eq_some_iff {l : List (α × β)} {k : α} {b : β} :
· rintro ⟨l₁, l₂, rfl, h⟩
exact ⟨l₁, (k, b), l₂, rfl, by simp, by simpa using h⟩
@[grind =]
theorem lookup_append {l₁ l₂ : List (α × β)} {k : α} :
(l₁ ++ l₂).lookup k = (l₁.lookup k).or (l₂.lookup k) := by
simp [lookup_eq_findSome?, findSome?_append]
@[grind =]
theorem lookup_replicate {k : α} :
(replicate n (a,b)).lookup k = if n = 0 then none else if k == a then some b else none := by
induction n with
@@ -1230,6 +1328,9 @@ theorem Sublist.lookup_eq_none {l₁ l₂ : List (α × β)} (h : l₁ <+ l₂)
simp only [lookup_eq_findSome?]
exact h.findSome?_eq_none
grind_pattern Sublist.lookup_isSome => l₁ <+ l₂, l₁.lookup k
grind_pattern Sublist.lookup_isSome => l₁ <+ l₂, l₂.lookup k
theorem IsPrefix.lookup_eq_some {l₁ l₂ : List (α × β)} (h : l₁ <+: l₂) :
List.lookup k l₁ = some b → List.lookup k l₂ = some b := by
simp only [lookup_eq_findSome?]
@@ -1238,13 +1339,24 @@ theorem IsPrefix.lookup_eq_some {l₁ l₂ : List (α × β)} (h : l₁ <+: l₂
theorem IsPrefix.lookup_eq_none {l₁ l₂ : List (α × β)} (h : l₁ <+: l₂) :
List.lookup k l₂ = none → List.lookup k l₁ = none :=
h.sublist.lookup_eq_none
grind_pattern IsPrefix.lookup_eq_none => l₁ <+: l₂, l₁.lookup k
grind_pattern IsPrefix.lookup_eq_none => l₁ <+: l₂, l₂.lookup k
theorem IsSuffix.lookup_eq_none {l₁ l₂ : List (α × β)} (h : l₁ <:+ l₂) :
List.lookup k l₂ = none → List.lookup k l₁ = none :=
h.sublist.lookup_eq_none
grind_pattern IsSuffix.lookup_eq_none => l₁ <:+ l₂, l₁.lookup k
grind_pattern IsSuffix.lookup_eq_none => l₁ <:+ l₂, l₂.lookup k
theorem IsInfix.lookup_eq_none {l₁ l₂ : List (α × β)} (h : l₁ <:+: l₂) :
List.lookup k l₂ = none → List.lookup k l₁ = none :=
h.sublist.lookup_eq_none
grind_pattern IsInfix.lookup_eq_none => l₁ <:+: l₂, l₁.lookup k
grind_pattern IsInfix.lookup_eq_none => l₁ <:+: l₂, l₂.lookup k
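A hedged sketch (not part of the diff): with the infix patterns above, `grind` should propagate `lookup … = none` from a list to any infix of it.
```lean
-- Sketch, assuming the pattern registrations above:
example {l₁ l₂ : List (Nat × Nat)} {k : Nat} (h : l₁ <:+: l₂)
    (w : l₂.lookup k = none) : l₁.lookup k = none := by
  grind
```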
end lookup
end List


@@ -261,11 +261,11 @@ Examples:
/-- Tail recursive implementation of `findRev?`. This is only used at runtime. -/
def findRev?TR (p : α → Bool) (l : List α) : Option α := l.reverse.find? p
@[simp] theorem find?_singleton {a : α} : [a].find? p = if p a then some a else none := by
@[simp, grind =] theorem find?_singleton {a : α} : [a].find? p = if p a then some a else none := by
simp only [find?]
split <;> simp_all
@[simp] theorem find?_append {xs ys : List α} : (xs ++ ys).find? p = (xs.find? p).or (ys.find? p) := by
@[simp, grind =] theorem find?_append {xs ys : List α} : (xs ++ ys).find? p = (xs.find? p).or (ys.find? p) := by
induction xs with
| nil => simp [find?]
| cons x xs ih =>
@@ -287,12 +287,12 @@ def findRev?TR (p : α → Bool) (l : List α) : Option α := l.reverse.find? p
/-- Tail recursive implementation of `findSomeRev?`. This is only used at runtime. -/
def findSomeRev?TR (f : α → Option β) (l : List α) : Option β := l.reverse.findSome? f
@[simp] theorem findSome?_singleton {a : α} :
@[simp, grind =] theorem findSome?_singleton {a : α} :
[a].findSome? f = f a := by
simp only [findSome?_cons, findSome?_nil]
split <;> simp_all
@[simp] theorem findSome?_append {xs ys : List α} : (xs ++ ys).findSome? f = (xs.findSome? f).or (ys.findSome? f) := by
@[simp, grind =] theorem findSome?_append {xs ys : List α} : (xs ++ ys).findSome? f = (xs.findSome? f).or (ys.findSome? f) := by
induction xs with
| nil => simp [findSome?]
| cons x xs ih =>


@@ -1318,6 +1318,19 @@ theorem forall_mem_filter {l : List α} {p : α → Bool} {P : α → Prop} :
(∀ (i) (_ : i ∈ l.filter p), P i) ↔ ∀ (j) (_ : j ∈ l), p j → P j := by
simp
@[grind] theorem getElem_filter {xs : List α} {p : α → Bool} {i : Nat} (h : i < (xs.filter p).length) :
p (xs.filter p)[i] :=
(mem_filter.mp (getElem_mem h)).2
theorem getElem?_filter {xs : List α} {p : α → Bool} {i : Nat} (h : i < (xs.filter p).length)
(w : (xs.filter p)[i]? = some a) : p a := by
rw [getElem?_eq_getElem] at w
simp only [Option.some.injEq] at w
rw [← w]
apply getElem_filter h
grind_pattern getElem?_filter => (xs.filter p)[i]?, some a
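A hedged sketch (not part of the diff): the pattern `(xs.filter p)[i]?, some a` should let `grind` conclude `p a` from a matching hypothesis.
```lean
-- Sketch, assuming the pattern registration above:
example {xs : List Nat} {p : Nat → Bool} {a : Nat}
    (h : 0 < (xs.filter p).length) (w : (xs.filter p)[0]? = some a) :
    p a = true := by
  grind
```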
@[simp] theorem filter_filter : ∀ {l}, filter p (filter q l) = filter (fun a => p a && q a) l
| [] => rfl
| a :: l => by by_cases hp : p a <;> by_cases hq : q a <;> simp [hp, hq, filter_filter]
@@ -2778,8 +2791,8 @@ We can prove that two folds over the same list are related (by some arbitrary re
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldl_rel {l : List α} {f g : β → α → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
theorem foldl_rel {l : List α} {f : β → α → β} {g : γ → α → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c : β) (c' : γ), r c c' → r (f c a) (g c' a)) :
r (l.foldl (fun acc a => f acc a) a) (l.foldl (fun acc a => g acc a) b) := by
induction l generalizing a b with
| nil => simp_all
@@ -2794,8 +2807,8 @@ We can prove that two folds over the same list are related (by some arbitrary re
if we know that the initial elements are related and the folding function, for each element of the list,
preserves the relation.
-/
theorem foldr_rel {l : List α} {f g : α → β → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
theorem foldr_rel {l : List α} {f : α → β → β} {g : α → γ → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ l → ∀ (c : β) (c' : γ), r c c' → r (f a c) (g a c')) :
r (l.foldr (fun a acc => f a acc) a) (l.foldr (fun a acc => g a acc) b) := by
induction l generalizing a b with
| nil => simp_all
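An illustrative sketch (not part of the diff): the generalized `foldl_rel` now allows distinct accumulator types, e.g. relating a `Nat`-valued fold to its `Int`-valued counterpart via the coercion relation (the `simp`/`omega` discharges are assumptions about the available automation).
```lean
-- Sketch: heterogeneous accumulators (β := Nat, γ := Int) related by coercion.
example (l : List Nat) :
    ((l.foldl (fun acc a => acc + a) 0 : Nat) : Int)
      = l.foldl (fun acc a => acc + (a : Int)) 0 :=
  List.foldl_rel (r := fun b c => (b : Int) = c) (by simp)
    (fun a _ c c' h => by omega)
```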


@@ -91,7 +91,7 @@ is valid.
subst w
rfl
@[simp]
@[simp, grind =]
theorem mapFinIdx_nil {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx [] f = [] :=
rfl
@@ -101,7 +101,7 @@ theorem mapFinIdx_nil {f : (i : Nat) → α → (h : i < 0) → β} : mapFinIdx
| nil => simpa using h
| cons _ _ ih => simp [mapFinIdx.go, ih]
@[simp] theorem length_mapFinIdx {as : List α} {f : (i : Nat) → α → (h : i < as.length) → β} :
@[simp, grind =] theorem length_mapFinIdx {as : List α} {f : (i : Nat) → α → (h : i < as.length) → β} :
(as.mapFinIdx f).length = as.length := by
simp [mapFinIdx, length_mapFinIdx_go]
@@ -129,7 +129,7 @@ theorem getElem_mapFinIdx_go {as : List α} {f : (i : Nat) → α → (h : i < a
· have h₃ : i - acc.size = (i - (acc.size + 1)) + 1 := by omega
simp [h₃]
@[simp] theorem getElem_mapFinIdx {as : List α} {f : (i : Nat) → α → (h : i < as.length) → β} {i : Nat} {h} :
@[simp, grind =] theorem getElem_mapFinIdx {as : List α} {f : (i : Nat) → α → (h : i < as.length) → β} {i : Nat} {h} :
(as.mapFinIdx f)[i] = f i (as[i]'(by simp at h; omega)) (by simp at h; omega) := by
simp [mapFinIdx, getElem_mapFinIdx_go]
@@ -137,18 +137,19 @@ theorem mapFinIdx_eq_ofFn {as : List α} {f : (i : Nat) → α → (h : i < as.l
as.mapFinIdx f = List.ofFn fun i : Fin as.length => f i as[i] i.2 := by
apply ext_getElem <;> simp
@[simp] theorem getElem?_mapFinIdx {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} {i : Nat} :
@[simp, grind =] theorem getElem?_mapFinIdx {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} {i : Nat} :
(l.mapFinIdx f)[i]? = l[i]?.pbind fun x m => some <| f i x (by simp [getElem?_eq_some_iff] at m; exact m.1) := by
simp only [getElem?_def, length_mapFinIdx, getElem_mapFinIdx]
split <;> simp
@[simp]
@[simp, grind =]
theorem mapFinIdx_cons {l : List α} {a : α} {f : (i : Nat) → α → (h : i < l.length + 1) → β} :
mapFinIdx (a :: l) f = f 0 a (by omega) :: mapFinIdx l (fun i a h => f (i + 1) a (by omega)) := by
apply ext_getElem
· simp
· rintro (_|i) h₁ h₂ <;> simp
@[grind =]
theorem mapFinIdx_append {xs ys : List α} {f : (i : Nat) → α → (h : i < (xs ++ ys).length) → β} :
(xs ++ ys).mapFinIdx f =
xs.mapFinIdx (fun i a h => f i a (by simp; omega)) ++
@@ -165,7 +166,7 @@ theorem mapFinIdx_append {xs ys : List α} {f : (i : Nat) → α → (h : i < (x
congr
omega
@[simp] theorem mapFinIdx_concat {l : List α} {e : α} {f : (i : Nat) → α → (h : i < (l ++ [e]).length) → β}:
@[simp, grind =] theorem mapFinIdx_concat {l : List α} {e : α} {f : (i : Nat) → α → (h : i < (l ++ [e]).length) → β}:
(l ++ [e]).mapFinIdx f = l.mapFinIdx (fun i a h => f i a (by simp; omega)) ++ [f l.length e (by simp)] := by
simp [mapFinIdx_append]
@@ -201,7 +202,7 @@ theorem exists_of_mem_mapFinIdx {b : β} {l : List α} {f : (i : Nat) → α
obtain ⟨h', rfl⟩ := h
exact ⟨i, h', rfl⟩
@[simp] theorem mem_mapFinIdx {b : β} {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
@[simp, grind =] theorem mem_mapFinIdx {b : β} {l : List α} {f : (i : Nat) → α → (h : i < l.length) → β} :
b ∈ l.mapFinIdx f ↔ ∃ (i : Nat) (h : i < l.length), f i l[i] h = b := by
constructor
· intro h
@@ -287,7 +288,7 @@ theorem mapFinIdx_eq_mapFinIdx_iff {l : List α} {f g : (i : Nat) → α → (h
rw [eq_comm, mapFinIdx_eq_iff]
simp [Fin.forall_iff]
@[simp] theorem mapFinIdx_mapFinIdx {l : List α}
@[simp, grind =] theorem mapFinIdx_mapFinIdx {l : List α}
{f : (i : Nat) → α → (h : i < l.length) → β}
{g : (i : Nat) → β → (h : i < (l.mapFinIdx f).length) → γ} :
(l.mapFinIdx f).mapFinIdx g = l.mapFinIdx (fun i a h => g i (f i a h) (by simpa)) := by
@@ -303,7 +304,7 @@ theorem mapFinIdx_eq_replicate_iff {l : List α} {f : (i : Nat) → α → (h :
· rintro w b i h rfl
exact w i h
@[simp] theorem mapFinIdx_reverse {l : List α} {f : (i : Nat) → α → (h : i < l.reverse.length) → β} :
@[simp, grind =] theorem mapFinIdx_reverse {l : List α} {f : (i : Nat) → α → (h : i < l.reverse.length) → β} :
l.reverse.mapFinIdx f =
(l.mapFinIdx (fun i a h => f (l.length - 1 - i) a (by simp; omega))).reverse := by
simp [mapFinIdx_eq_iff]
@@ -313,7 +314,7 @@ theorem mapFinIdx_eq_replicate_iff {l : List α} {f : (i : Nat) → α → (h :
/-! ### mapIdx -/
@[simp]
@[simp, grind =]
theorem mapIdx_nil {f : Nat → α → β} : mapIdx f [] = [] :=
rfl
@@ -333,7 +334,7 @@ theorem length_mapIdx_go : ∀ {l : List α} {acc : Array β},
simp
omega
@[simp] theorem length_mapIdx {l : List α} : (l.mapIdx f).length = l.length := by
@[simp, grind =] theorem length_mapIdx {l : List α} : (l.mapIdx f).length = l.length := by
simp [mapIdx, length_mapIdx_go]
theorem getElem?_mapIdx_go : ∀ {l : List α} {acc : Array β} {i : Nat},
@@ -356,11 +357,11 @@ theorem getElem?_mapIdx_go : ∀ {l : List α} {acc : Array β} {i : Nat},
· have : i - acc.size = i - (acc.size + 1) + 1 := by omega
simp_all
@[simp] theorem getElem?_mapIdx {l : List α} {i : Nat} :
@[simp, grind =] theorem getElem?_mapIdx {l : List α} {i : Nat} :
(l.mapIdx f)[i]? = Option.map (f i) l[i]? := by
simp [mapIdx, getElem?_mapIdx_go]
@[simp] theorem getElem_mapIdx {l : List α} {f : Nat → α → β} {i : Nat} {h : i < (l.mapIdx f).length} :
@[simp, grind =] theorem getElem_mapIdx {l : List α} {f : Nat → α → β} {i : Nat} {h : i < (l.mapIdx f).length} :
(l.mapIdx f)[i] = f i (l[i]'(by simpa using h)) := by
apply Option.some_inj.mp
rw [← getElem?_eq_getElem, getElem?_mapIdx, getElem?_eq_getElem (by simpa using h)]
@@ -384,18 +385,19 @@ theorem mapIdx_eq_zipIdx_map {l : List α} {f : Nat → α → β} :
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-21")]
abbrev mapIdx_eq_enum_map := @mapIdx_eq_zipIdx_map
@[simp]
@[simp, grind =]
theorem mapIdx_cons {l : List α} {a : α} :
mapIdx f (a :: l) = f 0 a :: mapIdx (fun i => f (i + 1)) l := by
simp [mapIdx_eq_zipIdx_map, List.zipIdx_succ]
@[grind =]
theorem mapIdx_append {xs ys : List α} :
(xs ++ ys).mapIdx f = xs.mapIdx f ++ ys.mapIdx fun i => f (i + xs.length) := by
induction xs generalizing f with
| nil => rfl
| cons _ _ ih => simp [ih (f := fun i => f (i + 1)), Nat.add_assoc]
@[simp] theorem mapIdx_concat {l : List α} {e : α} :
@[simp, grind =] theorem mapIdx_concat {l : List α} {e : α} :
mapIdx f (l ++ [e]) = mapIdx f l ++ [f l.length e] := by
simp [mapIdx_append]
@@ -415,7 +417,7 @@ theorem exists_of_mem_mapIdx {b : β} {l : List α}
rw [mapIdx_eq_mapFinIdx] at h
simpa [Fin.exists_iff] using exists_of_mem_mapFinIdx h
@[simp] theorem mem_mapIdx {b : β} {l : List α} :
@[simp, grind =] theorem mem_mapIdx {b : β} {l : List α} :
b ∈ mapIdx f l ↔ ∃ (i : Nat) (h : i < l.length), f i l[i] = b := by
constructor
· intro h
@@ -470,7 +472,7 @@ theorem mapIdx_eq_mapIdx_iff {l : List α} :
· intro i h₁ h₂
simp [w]
@[simp] theorem mapIdx_set {l : List α} {i : Nat} {a : α} :
@[simp, grind =] theorem mapIdx_set {l : List α} {i : Nat} {a : α} :
(l.set i a).mapIdx f = (l.mapIdx f).set i (f i a) := by
simp only [mapIdx_eq_iff, getElem?_set, length_mapIdx, getElem?_mapIdx]
intro i
@@ -478,16 +480,16 @@ theorem mapIdx_eq_mapIdx_iff {l : List α} :
· split <;> simp_all
· rfl
@[simp] theorem head_mapIdx {l : List α} {f : Nat → α → β} {w : mapIdx f l ≠ []} :
@[simp, grind =] theorem head_mapIdx {l : List α} {f : Nat → α → β} {w : mapIdx f l ≠ []} :
(mapIdx f l).head w = f 0 (l.head (by simpa using w)) := by
cases l with
| nil => simp at w
| cons _ _ => simp
@[simp] theorem head?_mapIdx {l : List α} {f : Nat → α → β} : (mapIdx f l).head? = l.head?.map (f 0) := by
@[simp, grind =] theorem head?_mapIdx {l : List α} {f : Nat → α → β} : (mapIdx f l).head? = l.head?.map (f 0) := by
cases l <;> simp
@[simp] theorem getLast_mapIdx {l : List α} {f : Nat → α → β} {h} :
@[simp, grind =] theorem getLast_mapIdx {l : List α} {f : Nat → α → β} {h} :
(mapIdx f l).getLast h = f (l.length - 1) (l.getLast (by simpa using h)) := by
cases l with
| nil => simp at h
@@ -498,13 +500,13 @@ theorem mapIdx_eq_mapIdx_iff {l : List α} :
simp only [← mapIdx_cons, getElem_mapIdx]
simp
@[simp] theorem getLast?_mapIdx {l : List α} {f : Nat → α → β} :
@[simp, grind =] theorem getLast?_mapIdx {l : List α} {f : Nat → α → β} :
(mapIdx f l).getLast? = (getLast? l).map (f (l.length - 1)) := by
cases l
· simp
· rw [getLast?_eq_getLast, getLast?_eq_getLast, getLast_mapIdx] <;> simp
@[simp] theorem mapIdx_mapIdx {l : List α} {f : Nat → α → β} {g : Nat → β → γ} :
@[simp, grind =] theorem mapIdx_mapIdx {l : List α} {f : Nat → α → β} {g : Nat → β → γ} :
(l.mapIdx f).mapIdx g = l.mapIdx (fun i => g i ∘ f i) := by
simp [mapIdx_eq_iff]
@@ -517,7 +519,7 @@ theorem mapIdx_eq_replicate_iff {l : List α} {f : Nat → α → β} {b : β} :
· rintro w _ i h rfl
exact w i h
@[simp] theorem mapIdx_reverse {l : List α} {f : Nat → α → β} :
@[simp, grind =] theorem mapIdx_reverse {l : List α} {f : Nat → α → β} :
l.reverse.mapIdx f = (mapIdx (fun i => f (l.length - 1 - i)) l).reverse := by
simp [mapIdx_eq_iff]
intro i

View File

@@ -14,6 +14,7 @@ set_option linter.indexVariables true -- Enforce naming conventions for index va
namespace List
@[grind =]
theorem getElem?_eraseIdx {l : List α} {i : Nat} {j : Nat} :
(l.eraseIdx i)[j]? = if j < i then l[j]? else l[j + 1]? := by
rw [eraseIdx_eq_take_drop_succ, getElem?_append]
@@ -49,6 +50,7 @@ theorem getElem?_eraseIdx_of_ge {l : List α} {i : Nat} {j : Nat} (h : i ≤ j)
intro h'
omega
@[grind =]
theorem getElem_eraseIdx {l : List α} {i : Nat} {j : Nat} (h : j < (l.eraseIdx i).length) :
(l.eraseIdx i)[j] = if h' : j < i then
l[j]'(by have := length_eraseIdx_le l i; omega)
@@ -123,6 +125,48 @@ theorem eraseIdx_set_gt {l : List α} {i : Nat} {j : Nat} {a : α} (h : i < j) :
· have t : i ≠ n := by omega
simp [t]
@[grind =]
theorem eraseIdx_set {xs : List α} {i : Nat} {a : α} {j : Nat} :
(xs.set i a).eraseIdx j =
if j < i then
(xs.eraseIdx j).set (i - 1) a
else if j = i then
xs.eraseIdx i
else
(xs.eraseIdx j).set i a := by
split <;> rename_i h'
· rw [eraseIdx_set_lt]
omega
· split <;> rename_i h''
· subst h''
rw [eraseIdx_set_eq]
· rw [eraseIdx_set_gt]
omega
theorem set_eraseIdx_le {xs : List α} {i : Nat} {j : Nat} {a : α} (h : i ≤ j) :
(xs.eraseIdx i).set j a = (xs.set (j + 1) a).eraseIdx i := by
rw [eraseIdx_set_lt]
· simp
· omega
theorem set_eraseIdx_gt {xs : List α} {i : Nat} {j : Nat} {a : α} (h : j < i) :
(xs.eraseIdx i).set j a = (xs.set j a).eraseIdx i := by
rw [eraseIdx_set_gt]
omega
@[grind =]
theorem set_eraseIdx {xs : List α} {i : Nat} {j : Nat} {a : α} :
(xs.eraseIdx i).set j a =
if i ≤ j then
(xs.set (j + 1) a).eraseIdx i
else
(xs.set j a).eraseIdx i := by
split <;> rename_i h'
· rw [set_eraseIdx_le]
omega
· rw [set_eraseIdx_gt]
omega
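A small concrete check (not part of the diff): `eraseIdx_set` with `j = 0 < i = 2` picks the first branch.
```lean
-- Sketch, assuming the simp set reduces the resulting if-then-else:
example (xs : List Nat) :
    (xs.set 2 7).eraseIdx 0 = (xs.eraseIdx 0).set 1 7 := by
  simp [List.eraseIdx_set]
```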
@[simp] theorem set_getElem_succ_eraseIdx_succ
{l : List α} {i : Nat} (h : i + 1 < l.length) :
(l.eraseIdx (i + 1)).set i l[i + 1] = l.eraseIdx i := by


@@ -538,7 +538,7 @@ theorem dropWhile_eq_drop_findIdx_not {xs : List α} {p : α → Bool} :
/-! ### zipWith -/
@[simp] theorem length_zipWith {f : α → β → γ} {l₁ : List α} {l₂ : List β} :
@[simp, grind =] theorem length_zipWith {f : α → β → γ} {l₁ : List α} {l₂ : List β} :
length (zipWith f l₁ l₂) = min (length l₁) (length l₂) := by
induction l₁ generalizing l₂ <;> cases l₂ <;>
simp_all [succ_min_succ, Nat.zero_min, Nat.min_zero]
@@ -549,7 +549,7 @@ theorem lt_length_left_of_zipWith {f : α → β → γ} {i : Nat} {l : List α}
theorem lt_length_right_of_zipWith {f : α → β → γ} {i : Nat} {l : List α} {l' : List β}
(h : i < (zipWith f l l').length) : i < l'.length := by rw [length_zipWith] at h; omega
@[simp]
@[simp, grind =]
theorem getElem_zipWith {f : α → β → γ} {l : List α} {l' : List β}
{i : Nat} {h : i < (zipWith f l l').length} :
(zipWith f l l')[i] =
@@ -566,6 +566,7 @@ theorem zipWith_eq_zipWith_take_min : ∀ {l₁ : List α} {l₂ : List β},
| _, [] => by simp
| a :: l₁, b :: l₂ => by simp [succ_min_succ, zipWith_eq_zipWith_take_min (l₁ := l₁) (l₂ := l₂)]
@[grind =]
theorem reverse_zipWith (h : l.length = l'.length) :
(zipWith f l l').reverse = zipWith f l.reverse l'.reverse := by
induction l generalizing l' with
@@ -578,14 +579,14 @@ theorem reverse_zipWith (h : l.length = l'.length) :
have : tl.reverse.length = tl'.reverse.length := by simp [h]
simp [hl h, zipWith_append this]
@[simp] theorem zipWith_replicate {a : α} {b : β} {m n : Nat} :
@[simp, grind =] theorem zipWith_replicate {a : α} {b : β} {m n : Nat} :
zipWith f (replicate m a) (replicate n b) = replicate (min m n) (f a b) := by
rw [zipWith_eq_zipWith_take_min]
simp
/-! ### zip -/
@[simp] theorem length_zip {l₁ : List α} {l₂ : List β} :
@[simp, grind =] theorem length_zip {l₁ : List α} {l₂ : List β} :
length (zip l₁ l₂) = min (length l₁) (length l₂) := by
simp [zip]
@@ -597,7 +598,7 @@ theorem lt_length_right_of_zip {i : Nat} {l : List α} {l' : List β} (h : i < (
i < l'.length :=
lt_length_right_of_zipWith h
@[simp]
@[simp, grind =]
theorem getElem_zip {l : List α} {l' : List β} {i : Nat} {h : i < (zip l l').length} :
(zip l l')[i] =
(l[i]'(lt_length_left_of_zip h), l'[i]'(lt_length_right_of_zip h)) :=
@@ -609,7 +610,7 @@ theorem zip_eq_zip_take_min : ∀ {l₁ : List α} {l₂ : List β},
| _, [] => by simp
| a :: l₁, b :: l₂ => by simp [succ_min_succ, zip_eq_zip_take_min (l₁ := l₁) (l₂ := l₂)]
@[simp] theorem zip_replicate {a : α} {b : β} {m n : Nat} :
@[simp, grind =] theorem zip_replicate {a : α} {b : β} {m n : Nat} :
zip (replicate m a) (replicate n b) = replicate (min m n) (a, b) := by
rw [zip_eq_zip_take_min]
simp
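A concrete check (not part of the diff): `zip_replicate` in a closed case, where both sides compute.
```lean
-- Sketch: min 3 5 = 3, so the zip truncates to three pairs.
example : List.zip (List.replicate 3 'a') (List.replicate 5 true)
    = List.replicate 3 ('a', true) := rfl
```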


@@ -6,7 +6,7 @@ Author: Leonardo de Moura
module
prelude
import Init.Data.Nat.Div.Basic
meta import Init.Data.Nat.Div.Basic
/-!
# Notation for `List` literals.


@@ -34,14 +34,14 @@ to each potential index in order, starting at `0`.
def ofFnM {n} [Monad m] (f : Fin n → m α) : m (List α) :=
List.reverse <$> Fin.foldlM n (fun xs i => (· :: xs) <$> f i) []
@[simp]
@[simp, grind =]
theorem length_ofFn {f : Fin n → α} : (ofFn f).length = n := by
simp only [ofFn]
induction n with
| zero => simp
| succ n ih => simp [Fin.foldr_succ, ih]
@[simp]
@[simp, grind =]
protected theorem getElem_ofFn {f : Fin n → α} (h : i < (ofFn f).length) :
(ofFn f)[i] = f ⟨i, by simp_all⟩ := by
simp only [ofFn]
@@ -55,7 +55,7 @@ protected theorem getElem_ofFn {f : Fin n → α} (h : i < (ofFn f).length) :
apply ih
simp_all
@[simp]
@[simp, grind =]
protected theorem getElem?_ofFn {f : Fin n → α} :
(ofFn f)[i]? = if h : i < n then some (f ⟨i, h⟩) else none :=
if h : i < (ofFn f).length
@@ -67,7 +67,7 @@ protected theorem getElem?_ofFn {f : Fin n → α} :
simpa using h
/-- `ofFn` on an empty domain is the empty list. -/
@[simp]
@[simp, grind =]
theorem ofFn_zero {f : Fin 0 → α} : ofFn f = [] := by
rw [ofFn, Fin.foldr_zero]
@@ -98,7 +98,7 @@ theorem ofFn_add {n m} {f : Fin (n + m) → α} :
theorem ofFn_eq_nil_iff {f : Fin n → α} : ofFn f = [] ↔ n = 0 := by
cases n <;> simp only [ofFn_zero, ofFn_succ, eq_self_iff_true, Nat.succ_ne_zero, reduceCtorEq]
@[simp 500]
@[simp 500, grind =]
theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i = a := by
constructor
· intro w
@@ -107,17 +107,17 @@ theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i =
· rintro ⟨i, rfl⟩
apply mem_of_getElem (i := i) <;> simp
theorem head_ofFn {n} {f : Fin n → α} (h : ofFn f ≠ []) :
@[grind =] theorem head_ofFn {n} {f : Fin n → α} (h : ofFn f ≠ []) :
(ofFn f).head h = f ⟨0, Nat.pos_of_ne_zero (mt ofFn_eq_nil_iff.2 h)⟩ := by
rw [← getElem_zero (length_ofFn ▸ Nat.pos_of_ne_zero (mt ofFn_eq_nil_iff.2 h)),
List.getElem_ofFn]
theorem getLast_ofFn {n} {f : Fin n → α} (h : ofFn f ≠ []) :
@[grind =] theorem getLast_ofFn {n} {f : Fin n → α} (h : ofFn f ≠ []) :
(ofFn f).getLast h = f ⟨n - 1, Nat.sub_one_lt (mt ofFn_eq_nil_iff.2 h)⟩ := by
simp [getLast_eq_getElem, length_ofFn, List.getElem_ofFn]
/-- `ofFnM` on an empty domain is the empty list. -/
@[simp]
@[simp, grind =]
theorem ofFnM_zero [Monad m] [LawfulMonad m] {f : Fin 0 → m α} : ofFnM f = pure [] := by
simp [ofFnM]

View File

@@ -159,7 +159,7 @@ theorem pairwise_append_comm {R : α → α → Prop} (s : ∀ {x y}, R x y →
@[grind =] theorem pairwise_middle {R : α → α → Prop} (s : ∀ {x y}, R x y → R y x) {a : α} {l₁ l₂ : List α} :
Pairwise R (l₁ ++ a :: l₂) ↔ Pairwise R (a :: (l₁ ++ l₂)) := by
show Pairwise R (l₁ ++ ([a] ++ l₂)) ↔ Pairwise R ([a] ++ l₁ ++ l₂)
change Pairwise R (l₁ ++ ([a] ++ l₂)) ↔ Pairwise R ([a] ++ l₁ ++ l₂)
rw [← append_assoc, pairwise_append, @pairwise_append _ _ ([a] ++ l₁), pairwise_append_comm s]
simp only [mem_append, or_comm]
@@ -279,7 +279,7 @@ theorem nodup_nil : @Nodup α [] :=
theorem nodup_cons {a : α} {l : List α} : Nodup (a :: l) ↔ a ∉ l ∧ Nodup l := by
simp only [Nodup, pairwise_cons, forall_mem_ne]
theorem Nodup.sublist : l₁ <+ l₂ → Nodup l₂ → Nodup l₁ :=
@[grind →] theorem Nodup.sublist : l₁ <+ l₂ → Nodup l₂ → Nodup l₁ :=
Pairwise.sublist
grind_pattern Nodup.sublist => l₁ <+ l₂, Nodup l₁
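The registered pattern lets `grind` fire `Nodup.sublist` whenever a sublist fact and a `Nodup` term are both in scope. A hedged sketch of the intended use (an illustrative example, not part of the diff):

```lean
-- With `Nodup.sublist` registered for `grind` as above, the tactic can
-- chain the sublist hypothesis with `Nodup` on the larger list.
example {l₁ l₂ : List Nat} (h : l₁ <+ l₂) (h' : l₂.Nodup) : l₁.Nodup := by
  grind
```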

View File

@@ -225,7 +225,7 @@ theorem zipIdx_eq_nil_iff {l : List α} {i : Nat} : List.zipIdx l i = [] ↔ l =
| [], _ => rfl
| _ :: _, _ => congrArg Nat.succ length_zipIdx
@[simp]
@[simp, grind =]
theorem getElem?_zipIdx :
∀ {l : List α} {i j}, (zipIdx l i)[j]? = l[j]?.map fun a => (a, i + j)
| [], _, _ => rfl
@@ -234,7 +234,7 @@ theorem getElem?_zipIdx :
simp only [zipIdx_cons, getElem?_cons_succ]
exact getElem?_zipIdx.trans <| by rw [Nat.add_right_comm]; rfl
@[simp]
@[simp, grind =]
theorem getElem_zipIdx {l : List α} (h : i < (l.zipIdx j).length) :
(l.zipIdx j)[i] = (l[i]'(by simpa [length_zipIdx] using h), j + i) := by
simp only [length_zipIdx] at h
@@ -242,7 +242,7 @@ theorem getElem_zipIdx {l : List α} (h : i < (l.zipIdx j).length) :
simp only [getElem?_zipIdx, getElem?_eq_getElem h]
simp
@[simp]
@[simp, grind =]
theorem tail_zipIdx {l : List α} {i : Nat} : (zipIdx l i).tail = zipIdx l.tail (i + 1) := by
induction l generalizing i with
| nil => simp

View File

@@ -257,6 +257,17 @@ theorem dropLast_eq_take {l : List α} : l.dropLast = l.take (l.length - 1) := b
dsimp
rw [map_drop]
theorem drop_eq_extract {l : List α} {k : Nat} :
l.drop k = l.extract k := by
induction l generalizing k
case nil => simp
case cons _ _ ih =>
match k with
| 0 => simp
| _ + 1 =>
simp only [List.drop_succ_cons, List.length_cons, ih]
simp only [List.extract_eq_drop_take, List.drop_succ_cons, Nat.succ_sub_succ]
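The equivalence above can be sanity-checked on a concrete list (a hand-evaluated sketch; `extract` with its default `stop` argument takes the rest of the list):

```lean
-- `drop k` and `extract k` (with `stop` defaulted to the length) agree.
#eval [1, 2, 3, 4, 5].drop 2    -- expected: [3, 4, 5]
#eval [1, 2, 3, 4, 5].extract 2 -- expected: [3, 4, 5]
```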
/-! ### takeWhile and dropWhile -/
theorem takeWhile_cons {p : α → Bool} {a : α} {l : List α} :

View File

@@ -685,7 +685,7 @@ theorem replace_toArray [BEq α] [LawfulBEq α] (l : List α) (a b : α) :
· rw [if_pos (by omega), if_pos, if_neg]
· simp only [mem_take_iff_getElem, not_exists]
intro k hk
simpa using h.2 ⟨k, by omega⟩ (by show k < i.1; omega)
simpa using h.2 ⟨k, by omega⟩ (by change k < i.1; omega)
· subst h₃
simpa using h.1
· rw [if_neg (by omega)]

View File

@@ -46,6 +46,7 @@ theorem zipWith_self {f : α → α → δ} : ∀ {l : List α}, zipWith f l l =
See also `getElem?_zipWith'` for a variant
using `Option.map` and `Option.bind` rather than a `match`.
-/
@[grind =]
theorem getElem?_zipWith {f : α → β → γ} {i : Nat} :
(zipWith f as bs)[i]? = match as[i]?, bs[i]? with
| some a, some b => some (f a b) | _, _ => none := by
@@ -83,33 +84,39 @@ theorem getElem?_zip_eq_some {l₁ : List α} {l₂ : List β} {z : α × β} {i
· rintro ⟨h₀, h₁⟩
exact ⟨_, _, h₀, h₁, rfl⟩
@[grind =]
theorem head?_zipWith {f : α → β → γ} :
(List.zipWith f as bs).head? = match as.head?, bs.head? with
| some a, some b => some (f a b) | _, _ => none := by
simp [head?_eq_getElem?, getElem?_zipWith]
@[grind =]
theorem head_zipWith {f : α → β → γ} (h) :
(List.zipWith f as bs).head h = f (as.head (by rintro rfl; simp_all)) (bs.head (by rintro rfl; simp_all)) := by
apply Option.some.inj
rw [← head?_eq_head, head?_zipWith, head?_eq_head, head?_eq_head]
@[simp]
@[simp, grind =]
theorem zipWith_map {μ} {f : γ → δ → μ} {g : α → γ} {h : β → δ} {l₁ : List α} {l₂ : List β} :
zipWith f (l₁.map g) (l₂.map h) = zipWith (fun a b => f (g a) (h b)) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWith_map_left {l₁ : List α} {l₂ : List β} {f : α → α'} {g : α' → β → γ} :
zipWith g (l₁.map f) l₂ = zipWith (fun a b => g (f a) b) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWith_map_right {l₁ : List α} {l₂ : List β} {f : β → β'} {g : α → β' → γ} :
zipWith g l₁ (l₂.map f) = zipWith (fun a b => g a (f b)) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWith_foldr_eq_zip_foldr {f : α → β → γ} {i : δ} {g : γ → δ → δ} :
(zipWith f l₁ l₂).foldr g i = (zip l₁ l₂).foldr (fun p r => g (f p.1 p.2) r) i := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWith_foldl_eq_zip_foldl {f : α → β → γ} {i : δ} {g : δ → γ → δ} :
(zipWith f l₁ l₂).foldl g i = (zip l₁ l₂).foldl (fun r p => g r (f p.1 p.2)) i := by
induction l₁ generalizing i l₂ <;> cases l₂ <;> simp_all
@@ -118,6 +125,7 @@ theorem zipWith_foldl_eq_zip_foldl {f : α → β → γ} {i : δ} {g : δ →
theorem zipWith_eq_nil_iff {f : α → β → γ} {l l'} : zipWith f l l' = [] ↔ l = [] ∨ l' = [] := by
cases l <;> cases l' <;> simp
@[grind =]
theorem map_zipWith {δ : Type _} {f : α → β} {g : γ → δ → α} {l : List γ} {l' : List δ} :
map f (zipWith g l l') = zipWith (fun x y => f (g x y)) l l' := by
induction l generalizing l' with
@@ -127,6 +135,7 @@ theorem map_zipWith {δ : Type _} {f : α → β} {g : γ → δ → α} {l : Li
· simp
· simp [hl]
@[grind =]
theorem take_zipWith : (zipWith f l l').take i = zipWith f (l.take i) (l'.take i) := by
induction l generalizing l' i with
| nil => simp
@@ -137,6 +146,7 @@ theorem take_zipWith : (zipWith f l l').take i = zipWith f (l.take i) (l'.take i
· simp
· simp [hl]
@[grind =]
theorem drop_zipWith : (zipWith f l l').drop i = zipWith f (l.drop i) (l'.drop i) := by
induction l generalizing l' i with
| nil => simp
@@ -147,10 +157,11 @@ theorem drop_zipWith : (zipWith f l l').drop i = zipWith f (l.drop i) (l'.drop i
· simp
· simp [hl]
@[simp]
@[simp, grind =]
theorem tail_zipWith : (zipWith f l l').tail = zipWith f l.tail l'.tail := by
rw [← drop_one]; simp [drop_zipWith]
@[grind =]
theorem zipWith_append {f : α → β → γ} {l₁ l₁' : List α} {l₂ l₂' : List β}
(h : l₁.length = l₂.length) :
zipWith f (l₁ ++ l₁') (l₂ ++ l₂') = zipWith f l₁ l₂ ++ zipWith f l₁' l₂' := by
@@ -254,22 +265,26 @@ theorem zip_eq_zipWith : ∀ {l₁ : List α} {l₂ : List β}, zip l₁ l₂ =
| _, [] => rfl
| a :: l₁, b :: l₂ => by simp [zip_cons_cons, zip_eq_zipWith (l₁ := l₁)]
@[grind _=_]
theorem zip_map {f : α → γ} {g : β → δ} :
∀ {l₁ : List α} {l₂ : List β}, zip (l₁.map f) (l₂.map g) = (zip l₁ l₂).map (Prod.map f g)
| [], _ => rfl
| _, [] => by simp only [map, zip_nil_right]
| _ :: _, _ :: _ => by simp only [map, zip_cons_cons, zip_map, Prod.map]
@[grind _=_]
theorem zip_map_left {f : α → γ} {l₁ : List α} {l₂ : List β} :
zip (l₁.map f) l₂ = (zip l₁ l₂).map (Prod.map f id) := by rw [← zip_map, map_id]
@[grind _=_]
theorem zip_map_right {f : β → γ} {l₁ : List α} {l₂ : List β} :
zip l₁ (l₂.map f) = (zip l₁ l₂).map (Prod.map id f) := by rw [← zip_map, map_id]
@[simp] theorem tail_zip {l₁ : List α} {l₂ : List β} :
@[simp, grind =] theorem tail_zip {l₁ : List α} {l₂ : List β} :
(zip l₁ l₂).tail = zip l₁.tail l₂.tail := by
cases l₁ <;> cases l₂ <;> simp
@[grind =]
theorem zip_append :
∀ {l₁ r₁ : List α} {l₂ r₂ : List β} (_h : length l₁ = length l₂),
zip (l₁ ++ r₁) (l₂ ++ r₂) = zip l₁ l₂ ++ zip r₁ r₂
@@ -278,6 +293,7 @@ theorem zip_append :
| _ :: _, _, _ :: _, _, h => by
simp only [cons_append, zip_cons_cons, zip_append (Nat.succ.inj h)]
@[grind =]
theorem zip_map' {f : α → β} {g : α → γ} :
∀ {l : List α}, zip (l.map f) (l.map g) = l.map fun a => (f a, g a)
| [] => rfl
@@ -296,7 +312,7 @@ theorem map_fst_zip :
| [], _, _ => rfl
| _ :: as, _ :: bs, h => by
simp [Nat.succ_le_succ_iff] at h
show _ :: map Prod.fst (zip as bs) = _ :: as
change _ :: map Prod.fst (zip as bs) = _ :: as
rw [map_fst_zip (l₁ := as) h]
| _ :: _, [], h => by simp at h
@@ -308,7 +324,7 @@ theorem map_snd_zip :
| [], b :: bs, h => by simp at h
| a :: as, b :: bs, h => by
simp [Nat.succ_le_succ_iff] at h
show _ :: map Prod.snd (zip as bs) = _ :: bs
change _ :: map Prod.snd (zip as bs) = _ :: bs
rw [map_snd_zip (l₂ := bs) h]
theorem map_prod_left_eq_zip {l : List α} {f : α → β} :
@@ -353,6 +369,7 @@ theorem zip_eq_append_iff {l₁ : List α} {l₂ : List β} :
/-! ### zipWithAll -/
@[grind =]
theorem getElem?_zipWithAll {f : Option α → Option β → γ} {i : Nat} :
(zipWithAll f as bs)[i]? = match as[i]?, bs[i]? with
| none, none => .none | a?, b? => some (f a? b?) := by
@@ -366,33 +383,38 @@ theorem getElem?_zipWithAll {f : Option α → Option β → γ} {i : Nat} :
cases i <;> simp_all
| cons b bs => cases i <;> simp_all
@[grind =]
theorem head?_zipWithAll {f : Option α → Option β → γ} :
(zipWithAll f as bs).head? = match as.head?, bs.head? with
| none, none => .none | a?, b? => some (f a? b?) := by
simp [head?_eq_getElem?, getElem?_zipWithAll]
@[simp] theorem head_zipWithAll {f : Option α → Option β → γ} (h) :
@[simp, grind =] theorem head_zipWithAll {f : Option α → Option β → γ} (h) :
(zipWithAll f as bs).head h = f as.head? bs.head? := by
apply Option.some.inj
rw [← head?_eq_head, head?_zipWithAll]
split <;> simp_all
@[simp] theorem tail_zipWithAll {f : Option α → Option β → γ} :
@[simp, grind =] theorem tail_zipWithAll {f : Option α → Option β → γ} :
(zipWithAll f as bs).tail = zipWithAll f as.tail bs.tail := by
cases as <;> cases bs <;> simp
@[grind =]
theorem zipWithAll_map {μ} {f : Option γ → Option δ → μ} {g : α → γ} {h : β → δ} {l₁ : List α} {l₂ : List β} :
zipWithAll f (l₁.map g) (l₂.map h) = zipWithAll (fun a b => f (g <$> a) (h <$> b)) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWithAll_map_left {l₁ : List α} {l₂ : List β} {f : α → α'} {g : Option α' → Option β → γ} :
zipWithAll g (l₁.map f) l₂ = zipWithAll (fun a b => g (f <$> a) b) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem zipWithAll_map_right {l₁ : List α} {l₂ : List β} {f : β → β'} {g : Option α → Option β' → γ} :
zipWithAll g l₁ (l₂.map f) = zipWithAll (fun a b => g a (f <$> b)) l₁ l₂ := by
induction l₁ generalizing l₂ <;> cases l₂ <;> simp_all
@[grind =]
theorem map_zipWithAll {δ : Type _} {f : α → β} {g : Option γ → Option δ → α} {l : List γ} {l' : List δ} :
map f (zipWithAll g l l') = zipWithAll (fun x y => f (g x y)) l l' := by
induction l generalizing l' with
@@ -400,7 +422,7 @@ theorem map_zipWithAll {δ : Type _} {f : α → β} {g : Option γ → Option
| cons hd tl hl =>
cases l' <;> simp_all
@[simp] theorem zipWithAll_replicate {a : α} {b : β} {n : Nat} :
@[simp, grind =] theorem zipWithAll_replicate {a : α} {b : β} {n : Nat} :
zipWithAll f (replicate n a) (replicate n b) = replicate n (f (some a) (some b)) := by
induction n with
| zero => rfl
@@ -408,12 +430,13 @@ theorem map_zipWithAll {δ : Type _} {f : α → β} {g : Option γ → Option
/-! ### unzip -/
@[simp] theorem unzip_fst : (unzip l).fst = l.map Prod.fst := by
@[simp, grind =] theorem unzip_fst : (unzip l).fst = l.map Prod.fst := by
induction l <;> simp_all
@[simp] theorem unzip_snd : (unzip l).snd = l.map Prod.snd := by
@[simp, grind =] theorem unzip_snd : (unzip l).snd = l.map Prod.snd := by
induction l <;> simp_all
@[grind =]
theorem unzip_eq_map : ∀ {l : List (α × β)}, unzip l = (l.map Prod.fst, l.map Prod.snd)
| [] => rfl
| (a, b) :: l => by simp only [unzip_cons, map_cons, unzip_eq_map (l := l)]
@@ -453,6 +476,6 @@ theorem tail_zip_fst {l : List (α × β)} : l.unzip.1.tail = l.tail.unzip.1 :=
theorem tail_zip_snd {l : List (α × β)} : l.unzip.2.tail = l.tail.unzip.2 := by
simp
@[simp] theorem unzip_replicate {n : Nat} {a : α} {b : β} :
@[simp, grind =] theorem unzip_replicate {n : Nat} {a : α} {b : β} :
unzip (replicate n (a, b)) = (replicate n a, replicate n b) := by
ext1 <;> simp

View File

@@ -1069,7 +1069,7 @@ protected theorem sub_lt_sub_right : ∀ {a b c : Nat}, c ≤ a → a < b → a
exact Nat.sub_lt_sub_right (le_of_succ_le_succ hle) (lt_of_succ_lt_succ h)
protected theorem sub_self_add (n m : Nat) : n - (n + m) = 0 := by
show (n + 0) - (n + m) = 0
change (n + 0) - (n + m) = 0
rw [Nat.add_sub_add_left, Nat.zero_sub]
@[simp] protected theorem sub_eq_zero_of_le {n m : Nat} (h : n ≤ m) : n - m = 0 := by

View File

@@ -9,6 +9,7 @@ prelude
import Init.WF
import Init.WFTactics
import Init.Data.Nat.Basic
meta import Init.MetaTypes
@[expose] section
@@ -75,7 +76,7 @@ private theorem div.go.fuel_congr (x y fuel1 fuel2 : Nat) (hy : 0 < y) (h1 : x <
termination_by structural fuel1
theorem div_eq (x y : Nat) : x / y = if 0 < y ∧ y ≤ x then (x - y) / y + 1 else 0 := by
show Nat.div _ _ = ite _ (Nat.div _ _ + 1) _
change Nat.div _ _ = ite _ (Nat.div _ _ + 1) _
unfold Nat.div
split
next =>
@@ -257,7 +258,7 @@ protected def mod : @& Nat → @& Nat → Nat
instance instMod : Mod Nat := Nat.mod
protected theorem modCore_eq_mod (n m : Nat) : Nat.modCore n m = n % m := by
show Nat.modCore n m = Nat.mod n m
change Nat.modCore n m = Nat.mod n m
match n, m with
| 0, _ =>
rw [Nat.modCore_eq]
@@ -521,7 +522,7 @@ theorem mul_sub_div (x n p : Nat) (h₁ : x < n*p) : (n * p - (x + 1)) / n = p -
rw [Nat.mul_sub_right_distrib, Nat.mul_comm]
exact Nat.sub_le_sub_left ((div_lt_iff_lt_mul npos).1 (lt_succ_self _)) _
focus
show succ (pred (n * p - x)) ≤ (succ (pred (p - x / n))) * n
change succ (pred (n * p - x)) ≤ (succ (pred (p - x / n))) * n
rw [succ_pred_eq_of_pos (Nat.sub_pos_of_lt h₁),
fun h => succ_pred_eq_of_pos (Nat.sub_pos_of_lt h)] -- TODO: why is the function needed?
focus

View File

@@ -1772,6 +1772,12 @@ instance decidableExistsLE' {p : (m : Nat) → m ≤ k → Prop} [I : ∀ m h, D
intro
exact ⟨fun ⟨h, w⟩ => ⟨le_of_lt_succ h, w⟩, fun ⟨h, w⟩ => ⟨lt_add_one_of_le h, w⟩⟩
instance decidableExistsFin (P : Fin n → Prop) [DecidablePred P] : Decidable (∃ i, P i) :=
decidable_of_iff (∃ k, k < n ∧ ((h : k < n) → P ⟨k, h⟩))
⟨fun ⟨k, a⟩ => Exists.intro ⟨k, a.left⟩ (a.right a.left),
fun ⟨i, e⟩ => Exists.intro i.val ⟨i.isLt, fun _ => e⟩⟩
/-! ### Results about `List.sum` specialized to `Nat` -/
protected theorem sum_pos_iff_exists_pos {l : List Nat} : 0 < l.sum ↔ ∃ x ∈ l, 0 < x := by

View File

@@ -149,7 +149,7 @@ instance : LawfulBEq PolyCnstr where
rw [h₁, h₂, h₃]
rfl {a} := by
cases a; rename_i eq lhs rhs
show (eq == eq && (lhs == lhs && rhs == rhs)) = true
change (eq == eq && (lhs == lhs && rhs == rhs)) = true
simp
structure ExprCnstr where

View File

@@ -85,4 +85,4 @@ theorem Membership.get_elem_helper {i n : Nat} {r : Std.Range} (h₁ : i ∈ r)
i < n := h₂ ▸ h₁.2.1
macro_rules
| `(tactic| get_elem_tactic_trivial) => `(tactic| apply Membership.get_elem_helper; assumption; rfl)
| `(tactic| get_elem_tactic_extensible) => `(tactic| apply Membership.get_elem_helper; assumption; rfl)

View File

@@ -586,7 +586,7 @@ decreasing_by
focus
rename_i i₀ j₀ _ eq h'
rw [show (s.next i₀ - sep.next j₀).1 = (i₀ - j₀).1 by
show (_ + Char.utf8Size _) - (_ + Char.utf8Size _) = _
change (_ + Char.utf8Size _) - (_ + Char.utf8Size _) = _
rw [(beq_iff_eq ..).1 eq, Nat.add_sub_add_right]; rfl]
right; exact Nat.sub_lt_sub_left
(Nat.lt_of_le_of_lt (Nat.le_add_right ..) (Nat.gt_of_not_le (mt decide_eq_true h')))

View File

@@ -295,11 +295,11 @@ where
termination_by text.utf8ByteSize - pos.byteIdx
decreasing_by
decreasing_with
show text.utf8ByteSize - (text.next (text.next pos)).byteIdx < text.utf8ByteSize - pos.byteIdx
change text.utf8ByteSize - (text.next (text.next pos)).byteIdx < text.utf8ByteSize - pos.byteIdx
have k := Nat.gt_of_not_le <| mt decide_eq_true h
exact Nat.sub_lt_sub_left k (Nat.lt_trans (String.lt_next text pos) (String.lt_next _ _))
decreasing_with
show text.utf8ByteSize - (text.next pos).byteIdx < text.utf8ByteSize - pos.byteIdx
change text.utf8ByteSize - (text.next pos).byteIdx < text.utf8ByteSize - pos.byteIdx
have k := Nat.gt_of_not_le <| mt decide_eq_true h
exact Nat.sub_lt_sub_left k (String.lt_next _ _)

View File

@@ -6,8 +6,7 @@ Author: Leonardo de Moura
module
prelude
import Init.Meta
import Init.Data.ToString.Basic
meta import Init.Meta
syntax:max "s!" interpolatedStr(term) : term

View File

@@ -7,6 +7,7 @@ Authors: Shreyas Srinivas, François G. Dorais, Kim Morrison
module
prelude
meta import Init.Coe
import Init.Data.Array.Lemmas
import Init.Data.Array.MapIdx
import Init.Data.Array.InsertIdx

View File

@@ -22,11 +22,13 @@ open Nat
/-! ### eraseIdx -/
@[grind =]
theorem eraseIdx_eq_take_drop_succ {xs : Vector α n} {i : Nat} (h) :
xs.eraseIdx i = (xs.take i ++ xs.drop (i + 1)).cast (by omega) := by
rcases xs with ⟨xs, rfl⟩
simp [Array.eraseIdx_eq_take_drop_succ, *]
@[grind =]
theorem getElem?_eraseIdx {xs : Vector α n} {i : Nat} (h : i < n) {j : Nat} :
(xs.eraseIdx i)[j]? = if j < i then xs[j]? else xs[j + 1]? := by
rcases xs with ⟨xs, rfl⟩
@@ -44,12 +46,14 @@ theorem getElem?_eraseIdx_of_ge {xs : Vector α n} {i : Nat} (h : i < n) {j : Na
intro h'
omega
@[grind =]
theorem getElem_eraseIdx {xs : Vector α n} {i : Nat} (h : i < n) {j : Nat} (h' : j < n - 1) :
(xs.eraseIdx i)[j] = if h'' : j < i then xs[j] else xs[j + 1] := by
apply Option.some.inj
rw [← getElem?_eq_getElem, getElem?_eraseIdx]
split <;> simp
@[grind →]
theorem mem_of_mem_eraseIdx {xs : Vector α n} {i : Nat} {h} {a : α} (h : a ∈ xs.eraseIdx i) : a ∈ xs := by
rcases xs with ⟨xs, rfl⟩
simpa using Array.mem_of_mem_eraseIdx (by simpa using h)
@@ -64,13 +68,23 @@ theorem eraseIdx_append_of_length_le {xs : Vector α n} {k : Nat} (hk : n ≤ k)
eraseIdx (xs ++ xs') k = (xs ++ eraseIdx xs' (k - n)).cast (by omega) := by
rcases xs with ⟨xs⟩
rcases xs' with ⟨xs'⟩
simp [Array.eraseIdx_append_of_length_le, *]
simp [Array.eraseIdx_append_of_size_le, *]
@[grind =]
theorem eraseIdx_append {xs : Vector α n} {ys : Vector α m} {k : Nat} (h : k < n + m) :
eraseIdx (xs ++ ys) k = if h' : k < n then (eraseIdx xs k ++ ys).cast (by omega) else (xs ++ eraseIdx ys (k - n) (by omega)).cast (by omega) := by
rcases xs with ⟨xs, rfl⟩
rcases ys with ⟨ys, rfl⟩
simp only [mk_append_mk, eraseIdx_mk, Array.eraseIdx_append]
split <;> simp
@[grind =]
theorem eraseIdx_cast {xs : Vector α n} {k : Nat} (h : k < m) :
eraseIdx (xs.cast w) k h = (eraseIdx xs k).cast (by omega) := by
rcases xs with ⟨xs⟩
simp
@[grind =]
theorem eraseIdx_replicate {n : Nat} {a : α} {k : Nat} {h} :
(replicate n a).eraseIdx k = replicate (n - 1) a := by
rw [replicate_eq_mk_replicate, eraseIdx_mk]
@@ -112,6 +126,45 @@ theorem eraseIdx_set_gt {xs : Vector α n} {i : Nat} {j : Nat} {a : α} (h : i <
rcases xs with ⟨xs⟩
simp [Array.eraseIdx_set_gt, *]
@[grind =]
theorem eraseIdx_set {xs : Vector α n} {i : Nat} {a : α} {hi : i < n} {j : Nat} {hj : j < n} :
(xs.set i a).eraseIdx j =
if h' : j < i then
(xs.eraseIdx j).set (i - 1) a
else if h'' : j = i then
xs.eraseIdx i
else
(xs.eraseIdx j).set i a := by
rcases xs with ⟨xs⟩
simp only [set_mk, eraseIdx_mk, Array.eraseIdx_set]
split
· simp
· split <;> simp
theorem set_eraseIdx_le {xs : Vector α n} {i : Nat} {w : i < n} {j : Nat} {a : α} (h : i ≤ j) (hj : j < n - 1) :
(xs.eraseIdx i).set j a = (xs.set (j + 1) a).eraseIdx i := by
rw [eraseIdx_set_lt]
· simp
· omega
theorem set_eraseIdx_gt {xs : Vector α n} {i : Nat} {w : i < n} {j : Nat} {a : α} (h : j < i) (hj : j < n - 1) :
(xs.eraseIdx i).set j a = (xs.set j a).eraseIdx i := by
rw [eraseIdx_set_gt]
omega
@[grind =]
theorem set_eraseIdx {xs : Vector α n} {i : Nat} {w : i < n} {j : Nat} {a : α} (hj : j < n - 1) :
(xs.eraseIdx i).set j a =
if h' : i ≤ j then
(xs.set (j + 1) a).eraseIdx i
else
(xs.set j a).eraseIdx i := by
split <;> rename_i h'
· rw [set_eraseIdx_le]
omega
· rw [set_eraseIdx_gt]
omega
@[simp] theorem set_getElem_succ_eraseIdx_succ
{xs : Vector α n} {i : Nat} (h : i + 1 < n) :
(xs.eraseIdx (i + 1)).set i xs[i + 1] = xs.eraseIdx i := by

View File

@@ -17,7 +17,7 @@ namespace Vector
/-- `finRange n` is the vector of all elements of `Fin n` in order. -/
protected def finRange (n : Nat) : Vector (Fin n) n := ofFn fun i => i
@[simp] theorem getElem_finRange {i : Nat} (h : i < n) :
@[simp, grind =] theorem getElem_finRange {i : Nat} (h : i < n) :
(Vector.finRange n)[i] = ⟨i, h⟩ := by
simp [Vector.finRange]
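A quick hand-computed illustration of `finRange` (an illustrative sketch, not part of the diff):

```lean
-- `finRange n` enumerates `Fin n` in order; converting to a list makes
-- the contents easy to inspect.
#eval (Vector.finRange 3).toList -- expected: [0, 1, 2]
```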
@@ -39,6 +39,7 @@ theorem finRange_succ_last {n} :
· simp_all
omega
@[grind _=_]
theorem finRange_reverse {n} : (Vector.finRange n).reverse = (Vector.finRange n).map Fin.rev := by
ext i h
simp

View File

@@ -44,12 +44,23 @@ theorem findSome?_singleton {a : α} {f : α → Option β} : #v[a].findSome? f
rcases xs with ⟨xs, rfl⟩
simp only [push_mk, findSomeRev?_mk, Array.findSomeRev?_push_of_isNone, h]
@[grind =]
theorem findSomeRev?_push {xs : Vector α n} {a : α} {f : α → Option β} :
(xs.push a).findSomeRev? f = (f a).or (xs.findSomeRev? f) := by
match h : f a with
| some b =>
rw [findSomeRev?_push_of_isSome]
all_goals simp_all
| none =>
rw [findSomeRev?_push_of_isNone]
all_goals simp_all
theorem exists_of_findSome?_eq_some {f : α → Option β} {xs : Vector α n} (w : xs.findSome? f = some b) :
∃ a, a ∈ xs ∧ f a = some b := by
rcases xs with ⟨xs, rfl⟩
simpa using Array.exists_of_findSome?_eq_some (by simpa using w)
@[simp] theorem findSome?_eq_none_iff {f : α → Option β} {xs : Vector α n} :
@[simp, grind =] theorem findSome?_eq_none_iff {f : α → Option β} {xs : Vector α n} :
xs.findSome? f = none ↔ ∀ x ∈ xs, f x = none := by
rcases xs with ⟨xs, rfl⟩
simp
@@ -72,24 +83,27 @@ theorem findSome?_eq_some_iff {f : α → Option β} {xs : Vector α n} {b : β}
· rintro ⟨k₁, k₂, h, ys, a, zs, w, h₁, h₂⟩
exact ⟨ys.toArray, a, zs.toArray, by simp [w], h₁, by simpa using h₂⟩
@[simp] theorem findSome?_guard {xs : Vector α n} : findSome? (Option.guard fun x => p x) xs = find? p xs := by
@[simp, grind =] theorem findSome?_guard {xs : Vector α n} : findSome? (Option.guard p) xs = find? p xs := by
rcases xs with ⟨xs, rfl⟩
simp
theorem find?_eq_findSome?_guard {xs : Vector α n} : find? p xs = findSome? (Option.guard fun x => p x) xs :=
theorem find?_eq_findSome?_guard {xs : Vector α n} : find? p xs = findSome? (Option.guard p) xs :=
findSome?_guard.symm
@[simp] theorem map_findSome? {f : α → Option β} {g : β → γ} {xs : Vector α n} :
@[simp, grind =] theorem map_findSome? {f : α → Option β} {g : β → γ} {xs : Vector α n} :
(xs.findSome? f).map g = xs.findSome? (Option.map g ∘ f) := by
cases xs; simp
@[grind _=_]
theorem findSome?_map {f : β → γ} {xs : Vector β n} : findSome? p (xs.map f) = xs.findSome? (p ∘ f) := by
rcases xs with ⟨xs, rfl⟩
simp [Array.findSome?_map]
@[grind =]
theorem findSome?_append {xs : Vector α n₁} {ys : Vector α n₂} : (xs ++ ys).findSome? f = (xs.findSome? f).or (ys.findSome? f) := by
cases xs; cases ys; simp [Array.findSome?_append]
@[grind =]
theorem getElem?_zero_flatten {xss : Vector (Vector α m) n} :
(flatten xss)[0]? = xss.findSome? fun xs => xs[0]? := by
cases xss using vector₂_induction
@@ -106,12 +120,14 @@ theorem getElem_zero_flatten.proof {xss : Vector (Vector α m) n} (h : 0 < n * m
Option.isSome_some, and_true]
exact ⟨xss[0], h₂ _ (by simp), by simp⟩
@[grind =]
theorem getElem_zero_flatten {xss : Vector (Vector α m) n} (h : 0 < n * m) :
(flatten xss)[0] = (xss.findSome? fun xs => xs[0]?).get (getElem_zero_flatten.proof h) := by
have t := getElem?_zero_flatten (xss := xss)
simp [getElem?_eq_getElem, h] at t
simp [← t]
@[grind =]
theorem findSome?_replicate : findSome? f (replicate n a) = if n = 0 then none else f a := by
rw [replicate_eq_mk_replicate, findSome?_mk, Array.findSome?_replicate]
@@ -142,9 +158,9 @@ abbrev findSome?_mkVector_of_isNone := @findSome?_replicate_of_isNone
/-! ### find? -/
@[simp] theorem find?_empty : find? p #v[] = none := rfl
@[simp, grind =] theorem find?_empty : find? p #v[] = none := rfl
theorem find?_singleton {a : α} {p : α → Bool} :
@[grind =] theorem find?_singleton {a : α} {p : α → Bool} :
#v[a].find? p = if p a then some a else none := by
simp
@@ -152,11 +168,23 @@ theorem find?_singleton {a : α} {p : α → Bool} :
findRev? p (xs.push a) = some a := by
cases xs; simp [h]
@[simp] theorem findRev?_cons_of_neg {xs : Vector α n} (h : ¬p a) :
@[simp] theorem findRev?_push_of_neg {xs : Vector α n} (h : ¬p a) :
findRev? p (xs.push a) = findRev? p xs := by
cases xs; simp [h]
@[simp] theorem find?_eq_none : find? p l = none ↔ ∀ x ∈ l, ¬ p x := by
@[deprecated findRev?_push_of_neg (since := "2025-06-12")]
abbrev findRev?_cons_of_neg := @findRev?_push_of_neg
@[grind =]
theorem findRev?_push {xs : Vector α n} :
findRev? p (xs.push a) = (Option.guard p a).or (xs.findRev? p) := by
cases h : p a
· rw [findRev?_push_of_neg, Option.guard_eq_none_iff.mpr h]
all_goals simp [h]
· rw [findRev?_push_of_pos, Option.guard_eq_some_iff.mpr rfl, h]
all_goals simp [h]
@[simp, grind =] theorem find?_eq_none : find? p l = none ↔ ∀ x ∈ l, ¬ p x := by
cases l; simp
theorem find?_eq_some_iff_append {xs : Vector α n} :
@@ -181,35 +209,38 @@ theorem find?_push_eq_some {xs : Vector α n} :
(xs.push a).find? p = some b ↔ xs.find? p = some b ∨ (xs.find? p = none ∧ p a ∧ a = b) := by
cases xs; simp
@[simp] theorem find?_isSome {xs : Vector α n} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
@[simp, grind =] theorem find?_isSome {xs : Vector α n} {p : α → Bool} : (xs.find? p).isSome ↔ ∃ x, x ∈ xs ∧ p x := by
cases xs; simp
@[grind →]
theorem find?_some {xs : Vector α n} (h : find? p xs = some a) : p a := by
rcases xs with ⟨xs, rfl⟩
simp at h
exact Array.find?_some h
@[grind →]
theorem mem_of_find?_eq_some {xs : Vector α n} (h : find? p xs = some a) : a ∈ xs := by
cases xs
simp at h
simpa using Array.mem_of_find?_eq_some h
@[grind]
theorem get_find?_mem {xs : Vector α n} (h) : (xs.find? p).get h ∈ xs := by
cases xs
simp [Array.get_find?_mem]
@[simp] theorem find?_map {f : β → α} {xs : Vector β n} :
@[simp, grind =] theorem find?_map {f : β → α} {xs : Vector β n} :
find? p (xs.map f) = (xs.find? (p ∘ f)).map f := by
cases xs; simp
@[simp] theorem find?_append {xs : Vector α n₁} {ys : Vector α n₂} :
@[simp, grind =] theorem find?_append {xs : Vector α n₁} {ys : Vector α n₂} :
(xs ++ ys).find? p = (xs.find? p).or (ys.find? p) := by
cases xs
cases ys
simp
@[simp] theorem find?_flatten {xs : Vector (Vector α m) n} {p : α → Bool} :
xs.flatten.find? p = xs.findSome? (·.find? p) := by
@[simp, grind =] theorem find?_flatten {xs : Vector (Vector α m) n} {p : α → Bool} :
xs.flatten.find? p = xs.findSome? (find? p) := by
cases xs using vector₂_induction
simp [Array.findSome?_map, Function.comp_def]
@@ -217,7 +248,7 @@ theorem find?_flatten_eq_none_iff {xs : Vector (Vector α m) n} {p : α → Bool
xs.flatten.find? p = none ↔ ∀ ys ∈ xs, ∀ x ∈ ys, !p x := by
simp
@[simp] theorem find?_flatMap {xs : Vector α n} {f : α → Vector β m} {p : β → Bool} :
@[simp, grind =] theorem find?_flatMap {xs : Vector α n} {f : α → Vector β m} {p : β → Bool} :
(xs.flatMap f).find? p = xs.findSome? (fun x => (f x).find? p) := by
cases xs
simp [Array.find?_flatMap, Array.flatMap_toArray]
@@ -227,6 +258,7 @@ theorem find?_flatMap_eq_none_iff {xs : Vector α n} {f : α → Vector β m} {p
(xs.flatMap f).find? p = none ↔ ∀ x ∈ xs, ∀ y ∈ f x, !p y := by
simp
@[grind =]
theorem find?_replicate :
find? p (replicate n a) = if n = 0 then none else if p a then some a else none := by
rw [replicate_eq_mk_replicate, find?_mk, Array.find?_replicate]
@@ -278,6 +310,7 @@ abbrev find?_mkVector_eq_some_iff := @find?_replicate_eq_some_iff
@[deprecated get_find?_replicate (since := "2025-03-18")]
abbrev get_find?_mkVector := @get_find?_replicate
@[grind =]
theorem find?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
(H : ∀ (a : α), a ∈ xs → P a) {p : β → Bool} :
(xs.pmap f H).find? p = (xs.attach.find? (fun ⟨a, m⟩ => p (f a (H a m)))).map fun ⟨a, m⟩ => f a (H a m) := by
@@ -291,8 +324,10 @@ theorem find?_eq_some_iff_getElem {xs : Vector α n} {p : α → Bool} {b : α}
/-! ### findFinIdx? -/
@[grind =]
theorem findFinIdx?_empty {p : α → Bool} : findFinIdx? p (#v[] : Vector α 0) = none := by simp
@[grind =]
theorem findFinIdx?_singleton {a : α} {p : α → Bool} :
#[a].findFinIdx? p = if p a then some ⟨0, by simp⟩ else none := by
simp
@@ -302,6 +337,12 @@ theorem findFinIdx?_singleton {a : α} {p : α → Bool} :
subst w
simp
@[simp, grind =] theorem findFinIdx?_eq_none_iff {xs : Vector α n} {p : α → Bool} :
xs.findFinIdx? p = none ↔ ∀ x, x ∈ xs → ¬ p x := by
rcases xs with ⟨xs, rfl⟩
simp [Array.findFinIdx?_eq_none_iff]
@[grind =]
theorem findFinIdx?_push {xs : Vector α n} {a : α} {p : α → Bool} :
(xs.push a).findFinIdx? p =
((xs.findFinIdx? p).map Fin.castSucc).or (if p a then some ⟨n, by simp⟩ else none) := by
@@ -309,6 +350,7 @@ theorem findFinIdx?_push {xs : Vector α n} {a : α} {p : α → Bool} :
simp [Array.findFinIdx?_push, Option.map_or, Function.comp_def]
congr
@[grind =]
theorem findFinIdx?_append {xs : Vector α n₁} {ys : Vector α n₂} {p : α → Bool} :
(xs ++ ys).findFinIdx? p =
((xs.findFinIdx? p).map (Fin.castLE (by simp))).or
@@ -317,13 +359,13 @@ theorem findFinIdx?_append {xs : Vector α n₁} {ys : Vector α n₂} {p : α
rcases ys with ys, rfl
simp [Array.findFinIdx?_append, Option.map_or, Function.comp_def]
@[simp]
@[simp, grind =]
theorem isSome_findFinIdx? {xs : Vector α n} {p : α → Bool} :
(xs.findFinIdx? p).isSome = xs.any p := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp]
@[simp, grind =]
theorem isNone_findFinIdx? {xs : Vector α n} {p : α → Bool} :
(xs.findFinIdx? p).isNone = xs.all (fun x => ¬ p x) := by
rcases xs with ⟨xs, rfl⟩

View File

@@ -2538,23 +2538,23 @@ theorem foldr_hom (f : β₁ → β₂) {g₁ : α → β₁ → β₁} {g₂ :
rw [Array.foldr_hom _ H]
/--
We can prove that two folds over the same array are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
We can prove that two folds over the same vector are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the
vector, preserves the relation.
-/
theorem foldl_rel {xs : Vector α n} {f g : β → α → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c c' : β), r c c' → r (f c a) (g c' a)) :
theorem foldl_rel {xs : Vector α n} {f : β → α → β} {g : γ → α → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c : β) (c' : γ), r c c' → r (f c a) (g c' a)) :
r (xs.foldl (fun acc a => f acc a) a) (xs.foldl (fun acc a => g acc a) b) := by
rcases xs with ⟨xs, rfl⟩
simpa using Array.foldl_rel h (by simpa using h')
/--
We can prove that two folds over the same array are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the array,
preserves the relation.
We can prove that two folds over the same vector are related (by some arbitrary relation)
if we know that the initial elements are related and the folding function, for each element of the
vector, preserves the relation.
-/
theorem foldr_rel {xs : Vector α n} {f g : α → β → β} {a b : β} {r : β → β → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c c' : β), r c c' → r (f a c) (g a c')) :
theorem foldr_rel {xs : Vector α n} {f : α → β → β} {g : α → γ → γ} {a : β} {b : γ} {r : β → γ → Prop}
(h : r a b) (h' : ∀ (a : α), a ∈ xs → ∀ (c : β) (c' : γ), r c c' → r (f a c) (g a c')) :
r (xs.foldr (fun a acc => f a acc) a) (xs.foldr (fun a acc => g a acc) b) := by
rcases xs with ⟨xs, rfl⟩
simpa using Array.foldr_rel h (by simpa using h')
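A hedged usage sketch of the generalized `foldl_rel` (an illustrative example under the new heterogeneous statement, not part of the diff): relate two folds over the same vector with `r := (· ≤ ·)`.

```lean
-- Relate a `max`-fold to a `+`-fold: each step preserves `≤` because
-- `max c a ≤ c' + a` whenever `c ≤ c'`.
example {n : Nat} {xs : Vector Nat n} :
    xs.foldl (fun acc a => max acc a) 0 ≤ xs.foldl (fun acc a => acc + a) 0 :=
  Vector.foldl_rel (r := (· ≤ ·)) (Nat.le_refl 0) fun a _ c c' h =>
    Nat.max_le.mpr ⟨Nat.le_trans h (Nat.le_add_right c' a), Nat.le_add_left a c'⟩
```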
@@ -3076,7 +3076,7 @@ theorem getElem_push_last {xs : Vector α n} {x : α} : (xs.push x)[n] = x := by
/-! ### zipWith -/
@[simp] theorem getElem_zipWith {f : α → β → γ} {as : Vector α n} {bs : Vector β n} {i : Nat}
@[simp, grind =] theorem getElem_zipWith {f : α → β → γ} {as : Vector α n} {bs : Vector β n} {i : Nat}
(hi : i < n) : (zipWith f as bs)[i] = f as[i] bs[i] := by
cases as
cases bs

View File

@@ -19,13 +19,13 @@ namespace Vector
/-! ### mapFinIdx -/
@[simp] theorem getElem_mapFinIdx {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {i : Nat}
@[simp, grind =] theorem getElem_mapFinIdx {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {i : Nat}
(h : i < n) :
(xs.mapFinIdx f)[i] = f i xs[i] h := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem getElem?_mapFinIdx {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {i : Nat} :
@[simp, grind =] theorem getElem?_mapFinIdx {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} {i : Nat} :
(xs.mapFinIdx f)[i]? =
xs[i]?.pbind fun b h => some <| f i b (getElem?_eq_some_iff.1 h).1 := by
simp only [getElem?_def, getElem_mapFinIdx]
@@ -33,12 +33,12 @@ namespace Vector
/-! ### mapIdx -/
@[simp] theorem getElem_mapIdx {f : Nat → α → β} {xs : Vector α n} {i : Nat} (h : i < n) :
@[simp, grind =] theorem getElem_mapIdx {f : Nat → α → β} {xs : Vector α n} {i : Nat} (h : i < n) :
(xs.mapIdx f)[i] = f i (xs[i]'(by simp_all)) := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem getElem?_mapIdx {f : Nat → α → β} {xs : Vector α n} {i : Nat} :
@[simp, grind =] theorem getElem?_mapIdx {f : Nat → α → β} {xs : Vector α n} {i : Nat} :
(xs.mapIdx f)[i]? = xs[i]?.map (f i) := by
rcases xs with ⟨xs, rfl⟩
simp
@@ -47,11 +47,11 @@ end Vector
namespace Array
@[simp] theorem mapFinIdx_toVector {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
@[simp, grind =] theorem mapFinIdx_toVector {xs : Array α} {f : (i : Nat) → α → (h : i < xs.size) → β} :
xs.toVector.mapFinIdx f = (xs.mapFinIdx f).toVector.cast (by simp) := by
ext <;> simp
@[simp] theorem mapIdx_toVector {f : Nat → α → β} {xs : Array α} :
@[simp, grind =] theorem mapIdx_toVector {f : Nat → α → β} {xs : Array α} :
xs.toVector.mapIdx f = (xs.mapIdx f).toVector.cast (by simp) := by
ext <;> simp
@@ -61,12 +61,12 @@ namespace Vector
/-! ### zipIdx -/
@[simp] theorem toList_zipIdx {xs : Vector α n} (k : Nat := 0) :
@[simp, grind =] theorem toList_zipIdx {xs : Vector α n} (k : Nat := 0) :
(xs.zipIdx k).toList = xs.toList.zipIdx k := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem getElem_zipIdx {xs : Vector α n} {i : Nat} {h : i < n} :
@[simp, grind =] theorem getElem_zipIdx {xs : Vector α n} {i : Nat} {h : i < n} :
(xs.zipIdx k)[i] = (xs[i]'(by simp_all), k + i) := by
rcases xs with ⟨xs, rfl⟩
simp
@@ -116,7 +116,7 @@ abbrev mem_zipWithIndex_iff_getElem? := @mem_zipIdx_iff_getElem?
subst w
rfl
@[simp]
@[simp, grind =]
theorem mapFinIdx_empty {f : (i : Nat) α (h : i < 0) β} : mapFinIdx #v[] f = #v[] :=
rfl
@@ -125,6 +125,7 @@ theorem mapFinIdx_eq_ofFn {as : Vector α n} {f : (i : Nat) → α → (h : i <
rcases as with ⟨as, rfl⟩
simp [Array.mapFinIdx_eq_ofFn]
@[grind =]
theorem mapFinIdx_append {xs : Vector α n} {ys : Vector α m} {f : (i : Nat) → α → (h : i < n + m) → β} :
(xs ++ ys).mapFinIdx f =
xs.mapFinIdx (fun i a h => f i a (by omega)) ++
@@ -133,7 +134,7 @@ theorem mapFinIdx_append {xs : Vector α n} {ys : Vector α m} {f : (i : Nat)
rcases ys with ⟨ys, rfl⟩
simp [Array.mapFinIdx_append]
@[simp]
@[simp, grind =]
theorem mapFinIdx_push {xs : Vector α n} {a : α} {f : (i : Nat) → α → (h : i < n + 1) → β} :
mapFinIdx (xs.push a) f =
(mapFinIdx xs (fun i a h => f i a (by omega))).push (f n a (by simp)) := by
@@ -154,7 +155,7 @@ theorem exists_of_mem_mapFinIdx {b : β} {xs : Vector α n} {f : (i : Nat) →
rcases xs with ⟨xs, rfl⟩
exact List.exists_of_mem_mapFinIdx (by simpa using h)
@[simp] theorem mem_mapFinIdx {b : β} {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
@[simp, grind =] theorem mem_mapFinIdx {b : β} {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
b ∈ xs.mapFinIdx f ↔ ∃ (i : Nat) (h : i < n), f i xs[i] h = b := by
rcases xs with ⟨xs, rfl⟩
simp
@@ -215,7 +216,7 @@ theorem mapFinIdx_eq_mapFinIdx_iff {xs : Vector α n} {f g : (i : Nat) → α
rw [eq_comm, mapFinIdx_eq_iff]
simp
@[simp] theorem mapFinIdx_mapFinIdx {xs : Vector α n}
@[simp, grind =] theorem mapFinIdx_mapFinIdx {xs : Vector α n}
{f : (i : Nat) → α → (h : i < n) → β}
{g : (i : Nat) → β → (h : i < n) → γ} :
(xs.mapFinIdx f).mapFinIdx g = xs.mapFinIdx (fun i a h => g i (f i a h) h) := by
@@ -229,14 +230,14 @@ theorem mapFinIdx_eq_replicate_iff {xs : Vector α n} {f : (i : Nat) → α
@[deprecated mapFinIdx_eq_replicate_iff (since := "2025-03-18")]
abbrev mapFinIdx_eq_mkVector_iff := @mapFinIdx_eq_replicate_iff
@[simp] theorem mapFinIdx_reverse {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
@[simp, grind =] theorem mapFinIdx_reverse {xs : Vector α n} {f : (i : Nat) → α → (h : i < n) → β} :
xs.reverse.mapFinIdx f = (xs.mapFinIdx (fun i a h => f (n - 1 - i) a (by omega))).reverse := by
rcases xs with ⟨xs, rfl⟩
simp
/-! ### mapIdx -/
@[simp]
@[simp, grind =]
theorem mapIdx_empty {f : Nat α β} : mapIdx f #v[] = #v[] :=
rfl
@@ -256,13 +257,14 @@ theorem mapIdx_eq_zipIdx_map {xs : Vector α n} {f : Nat → α → β} :
@[deprecated mapIdx_eq_zipIdx_map (since := "2025-01-27")]
abbrev mapIdx_eq_zipWithIndex_map := @mapIdx_eq_zipIdx_map
@[grind =]
theorem mapIdx_append {xs : Vector α n} {ys : Vector α m} :
(xs ++ ys).mapIdx f = xs.mapIdx f ++ ys.mapIdx fun i => f (i + n) := by
rcases xs with ⟨xs, rfl⟩
rcases ys with ⟨ys, rfl⟩
simp [Array.mapIdx_append]
@[simp]
@[simp, grind =]
theorem mapIdx_push {xs : Vector α n} {a : α} :
mapIdx f (xs.push a) = (mapIdx f xs).push (f n a) := by
simp [← append_singleton, mapIdx_append]
@@ -275,7 +277,7 @@ theorem exists_of_mem_mapIdx {b : β} {xs : Vector α n}
rw [mapIdx_eq_mapFinIdx] at h
simpa [Fin.exists_iff] using exists_of_mem_mapFinIdx h
@[simp] theorem mem_mapIdx {b : β} {xs : Vector α n} :
@[simp, grind =] theorem mem_mapIdx {b : β} {xs : Vector α n} :
b ∈ xs.mapIdx f ↔ ∃ (i : Nat) (h : i < n), f i xs[i] = b := by
constructor
· intro h
@@ -332,7 +334,7 @@ theorem mapIdx_eq_mapIdx_iff {xs : Vector α n} :
rcases xs with ⟨xs, rfl⟩
simp [Array.mapIdx_eq_mapIdx_iff]
@[simp] theorem mapIdx_set {xs : Vector α n} {i : Nat} {h : i < n} {a : α} :
@[simp, grind =] theorem mapIdx_set {xs : Vector α n} {i : Nat} {h : i < n} {a : α} :
(xs.set i a).mapIdx f = (xs.mapIdx f).set i (f i a) (by simpa) := by
rcases xs with ⟨xs, rfl⟩
simp
@@ -342,17 +344,17 @@ theorem mapIdx_eq_mapIdx_iff {xs : Vector α n} :
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem back?_mapIdx {xs : Vector α n} {f : Nat → α → β} :
@[simp, grind =] theorem back?_mapIdx {xs : Vector α n} {f : Nat → α → β} :
(mapIdx f xs).back? = (xs.back?).map (f (n - 1)) := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem back_mapIdx [NeZero n] {xs : Vector α n} {f : Nat → α → β} :
@[simp, grind =] theorem back_mapIdx [NeZero n] {xs : Vector α n} {f : Nat → α → β} :
(mapIdx f xs).back = f (n - 1) (xs.back) := by
rcases xs with ⟨xs, rfl⟩
simp
@[simp] theorem mapIdx_mapIdx {xs : Vector α n} {f : Nat → α → β} {g : Nat → β → γ} :
@[simp, grind =] theorem mapIdx_mapIdx {xs : Vector α n} {f : Nat → α → β} {g : Nat → β → γ} :
(xs.mapIdx f).mapIdx g = xs.mapIdx (fun i => g i ∘ f i) := by
simp [mapIdx_eq_iff]
@@ -364,7 +366,7 @@ theorem mapIdx_eq_replicate_iff {xs : Vector α n} {f : Nat → α → β} {b :
@[deprecated mapIdx_eq_replicate_iff (since := "2025-03-18")]
abbrev mapIdx_eq_mkVector_iff := @mapIdx_eq_replicate_iff
@[simp] theorem mapIdx_reverse {xs : Vector α n} {f : Nat → α → β} :
@[simp, grind =] theorem mapIdx_reverse {xs : Vector α n} {f : Nat → α → β} :
xs.reverse.mapIdx f = (mapIdx (fun i => f (n - 1 - i)) xs).reverse := by
rcases xs with ⟨xs, rfl⟩
simp [Array.mapIdx_reverse]

View File

@@ -20,15 +20,15 @@ set_option linter.indexVariables true -- Enforce naming conventions for index va
namespace Vector
@[simp] theorem getElem_ofFn {α n} {f : Fin n → α} (h : i < n) :
@[simp, grind =] theorem getElem_ofFn {α n} {f : Fin n → α} (h : i < n) :
(Vector.ofFn f)[i] = f ⟨i, by simpa using h⟩ := by
simp [ofFn]
theorem getElem?_ofFn {α n} {f : Fin n → α} :
@[simp, grind =] theorem getElem?_ofFn {α n} {f : Fin n → α} :
(ofFn f)[i]? = if h : i < n then some (f ⟨i, h⟩) else none := by
simp [getElem?_def]
@[simp 500]
@[simp 500, grind =]
theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i = a := by
constructor
· intro w
@@ -37,7 +37,7 @@ theorem mem_ofFn {n} {f : Fin n → α} {a : α} : a ∈ ofFn f ↔ ∃ i, f i =
· rintro ⟨i, rfl⟩
apply mem_of_getElem (i := i) <;> simp
theorem back_ofFn {n} [NeZero n] {f : Fin n → α} :
@[grind =] theorem back_ofFn {n} [NeZero n] {f : Fin n → α} :
(ofFn f).back = f ⟨n - 1, by have := NeZero.ne n; omega⟩ := by
simp [back]
@@ -71,7 +71,7 @@ def ofFnM {n} [Monad m] (f : Fin n → m α) : m (Vector α n) :=
else
pure ⟨acc, by omega⟩
@[simp]
@[simp, grind =]
theorem ofFnM_zero [Monad m] {f : Fin 0 → m α} : Vector.ofFnM f = pure #v[] := by
simp [ofFnM, ofFnM.go]
@@ -114,13 +114,13 @@ theorem ofFnM_add {n m} [Monad m] [LawfulMonad m] {f : Fin (n + k) → m α} :
| zero => simp
| succ k ih => simp [ofFnM_succ, ih, push_append]
@[simp, grind] theorem toArray_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
@[simp, grind =] theorem toArray_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
toArray <$> ofFnM f = Array.ofFnM f := by
induction n with
| zero => simp
| succ n ih => simp [ofFnM_succ, Array.ofFnM_succ, ih]
@[simp, grind] theorem toList_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
@[simp, grind =] theorem toList_ofFnM [Monad m] [LawfulMonad m] {f : Fin n → m α} :
toList <$> Vector.ofFnM f = List.ofFnM f := by
unfold toList
suffices Array.toList <$> (toArray <$> ofFnM f) = List.ofFnM f by

View File

@@ -46,6 +46,7 @@ theorem zipWith_self {f : α → α → δ} {xs : Vector α n} : zipWith f xs xs
See also `getElem?_zipWith'` for a variant
using `Option.map` and `Option.bind` rather than a `match`.
-/
@[grind =]
theorem getElem?_zipWith {f : α → β → γ} {i : Nat} :
(zipWith f as bs)[i]? = match as[i]?, bs[i]? with
| some a, some b => some (f a b) | _, _ => none := by
@@ -74,53 +75,61 @@ theorem getElem?_zip_eq_some {as : Vector α n} {bs : Vector β n} {z : α × β
rcases bs with ⟨bs, h⟩
simp [Array.getElem?_zip_eq_some]
@[simp]
@[simp, grind =]
theorem zipWith_map {μ} {f : γ → δ → μ} {g : α → γ} {h : β → δ} {as : Vector α n} {bs : Vector β n} :
zipWith f (as.map g) (bs.map h) = zipWith (fun a b => f (g a) (h b)) as bs := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.zipWith_map]
@[grind =]
theorem zipWith_map_left {as : Vector α n} {bs : Vector β n} {f : α → α'} {g : α' → β → γ} :
zipWith g (as.map f) bs = zipWith (fun a b => g (f a) b) as bs := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.zipWith_map_left]
@[grind =]
theorem zipWith_map_right {as : Vector α n} {bs : Vector β n} {f : β → β'} {g : α → β' → γ} :
zipWith g as (bs.map f) = zipWith (fun a b => g a (f b)) as bs := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.zipWith_map_right]
@[grind =]
theorem zipWith_foldr_eq_zip_foldr {f : α → β → γ} {i : δ} :
(zipWith f as bs).foldr g i = (zip as bs).foldr (fun p r => g (f p.1 p.2) r) i := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simpa using Array.zipWith_foldr_eq_zip_foldr
@[grind =]
theorem zipWith_foldl_eq_zip_foldl {f : α → β → γ} {i : δ} :
(zipWith f as bs).foldl g i = (zip as bs).foldl (fun r p => g r (f p.1 p.2)) i := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simpa using Array.zipWith_foldl_eq_zip_foldl
@[grind =]
theorem map_zipWith {δ : Type _} {f : α → β} {g : γ → δ → α} {as : Vector γ n} {bs : Vector δ n} :
map f (zipWith g as bs) = zipWith (fun x y => f (g x y)) as bs := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.map_zipWith]
@[grind =]
theorem take_zipWith : (zipWith f as bs).take i = zipWith f (as.take i) (bs.take i) := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.take_zipWith]
@[grind =]
theorem extract_zipWith : (zipWith f as bs).extract i j = zipWith f (as.extract i j) (bs.extract i j) := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.extract_zipWith]
@[grind =]
theorem zipWith_append {f : α → β → γ}
{as : Vector α n} {as' : Vector α m} {bs : Vector β n} {bs' : Vector β m} :
zipWith f (as ++ as') (bs ++ bs') = zipWith f as bs ++ zipWith f as' bs' := by
@@ -147,7 +156,8 @@ theorem zipWith_eq_append_iff {f : α → β → γ} {as : Vector α (n + m)} {b
simp only at w₁ w₂
exact ⟨as₁, as₂, bs₁, bs₂, by simpa [hw, hy] using w₁, w₂⟩
@[simp] theorem zipWith_replicate {a : α} {b : β} {n : Nat} :
@[simp, grind =]
theorem zipWith_replicate {a : α} {b : β} {n : Nat} :
zipWith f (replicate n a) (replicate n b) = replicate n (f a b) := by
ext
simp
@@ -167,6 +177,7 @@ theorem map_zip_eq_zipWith {f : α × β → γ} {as : Vector α n} {bs : Vector
rcases bs with bs, h
simp [Array.map_zip_eq_zipWith]
@[grind =]
theorem reverse_zipWith {f : α → β → γ} {as : Vector α n} {bs : Vector β n} :
(zipWith f as bs).reverse = zipWith f as.reverse bs.reverse := by
rcases as with ⟨as, rfl⟩
@@ -175,7 +186,7 @@ theorem reverse_zipWith {f : α → β → γ} {as : Vector α n} {bs : Vector
/-! ### zip -/
@[simp]
@[simp, grind =]
theorem getElem_zip {as : Vector α n} {bs : Vector β n} {i : Nat} {h : i < n} :
(zip as bs)[i] = (as[i], bs[i]) :=
getElem_zipWith ..
@@ -185,18 +196,22 @@ theorem zip_eq_zipWith {as : Vector α n} {bs : Vector β n} : zip as bs = zipWi
rcases bs with ⟨bs, h⟩
simp [Array.zip_eq_zipWith, h]
@[grind _=_]
theorem zip_map {f : α → γ} {g : β → δ} {as : Vector α n} {bs : Vector β n} :
zip (as.map f) (bs.map g) = (zip as bs).map (Prod.map f g) := by
rcases as with ⟨as, rfl⟩
rcases bs with ⟨bs, h⟩
simp [Array.zip_map, h]
@[grind _=_]
theorem zip_map_left {f : α → γ} {as : Vector α n} {bs : Vector β n} :
zip (as.map f) bs = (zip as bs).map (Prod.map f id) := by rw [← zip_map, map_id]
@[grind _=_]
theorem zip_map_right {f : β → γ} {as : Vector α n} {bs : Vector β n} :
zip as (bs.map f) = (zip as bs).map (Prod.map id f) := by rw [← zip_map, map_id]
@[grind =]
theorem zip_append {as : Vector α n} {bs : Vector β n} {as' : Vector α m} {bs' : Vector β m} :
zip (as ++ as') (bs ++ bs') = zip as bs ++ zip as' bs' := by
rcases as with ⟨as, rfl⟩
@@ -205,6 +220,7 @@ theorem zip_append {as : Vector α n} {bs : Vector β n} {as' : Vector α m} {bs
rcases bs' with ⟨bs', h'⟩
simp [Array.zip_append, h, h']
@[grind =]
theorem zip_map' {f : α → β} {g : α → γ} {xs : Vector α n} :
zip (xs.map f) (xs.map g) = xs.map fun a => (f a, g a) := by
rcases xs with ⟨xs, rfl⟩
@@ -248,7 +264,8 @@ theorem zip_eq_append_iff {as : Vector α (n + m)} {bs : Vector β (n + m)} {xs
∃ as₁ as₂ bs₁ bs₂, as = as₁ ++ as₂ ∧ bs = bs₁ ++ bs₂ ∧ xs = zip as₁ bs₁ ∧ ys = zip as₂ bs₂ := by
simp [zip_eq_zipWith, zipWith_eq_append_iff]
@[simp] theorem zip_replicate {a : α} {b : β} {n : Nat} :
@[simp, grind =]
theorem zip_replicate {a : α} {b : β} {n : Nat} :
zip (replicate n a) (replicate n b) = replicate n (a, b) := by
ext <;> simp
@@ -257,14 +274,17 @@ abbrev zip_mkVector := @zip_replicate
/-! ### unzip -/
@[simp] theorem unzip_fst : (unzip xs).fst = xs.map Prod.fst := by
@[simp, grind =]
theorem unzip_fst : (unzip xs).fst = xs.map Prod.fst := by
cases xs
simp_all
@[simp] theorem unzip_snd : (unzip xs).snd = xs.map Prod.snd := by
@[simp, grind =]
theorem unzip_snd : (unzip xs).snd = xs.map Prod.snd := by
cases xs
simp_all
@[grind =]
theorem unzip_eq_map {xs : Vector (α × β) n} : unzip xs = (xs.map Prod.fst, xs.map Prod.snd) := by
cases xs
simp [List.unzip_eq_map]
@@ -296,7 +316,8 @@ theorem zip_of_prod {as : Vector α n} {bs : Vector β n} {xs : Vector (α × β
(hr : xs.map Prod.snd = bs) : xs = as.zip bs := by
rw [ hl, hr, zip_unzip xs, unzip_fst, unzip_snd, zip_unzip, zip_unzip]
@[simp] theorem unzip_replicate {a : α} {b : β} {n : Nat} :
@[simp, grind =]
theorem unzip_replicate {a : α} {b : β} {n : Nat} :
unzip (replicate n (a, b)) = (replicate n a, replicate n b) := by
ext1 <;> simp

View File

@@ -47,7 +47,7 @@ proof in the context using `have`, because `get_elem_tactic` tries
The proof side-condition `valid xs i` is automatically dispatched by the
`get_elem_tactic` tactic; this tactic can be extended by adding more clauses to
`get_elem_tactic_trivial` using `macro_rules`.
`get_elem_tactic_extensible` using `macro_rules`.
`xs[i]?` and `xs[i]!` do not impose a proof obligation; the former returns
an `Option elem`, with `none` signalling that the value isn't present, and
@@ -164,25 +164,25 @@ export LawfulGetElem (getElem?_def getElem!_def)
instance (priority := low) [GetElem coll idx elem valid] [∀ xs i, Decidable (valid xs i)] :
LawfulGetElem coll idx elem valid where
@[simp] theorem getElem?_pos [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
@[simp, grind] theorem getElem?_pos [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
(c : cont) (i : idx) (h : dom c i) : c[i]? = some (c[i]'h) := by
have : Decidable (dom c i) := .isTrue h
rw [getElem?_def]
exact dif_pos h
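A concrete illustration of the `getElem?_pos`/`getElem?_neg` dichotomy on `List` (a sketch, not part of the patch — list literals and indices are arbitrary):

```lean
-- In-bounds access: `xs[i]?` is `some xs[i]`; out-of-bounds: it is `none`.
example : [10, 20, 30][1] = 20 := rfl
example : [10, 20, 30][1]? = some 20 := rfl
example : [10, 20, 30][5]? = none := rfl
```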
@[simp] theorem getElem?_neg [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
@[simp, grind] theorem getElem?_neg [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
(c : cont) (i : idx) (h : ¬dom c i) : c[i]? = none := by
have : Decidable (dom c i) := .isFalse h
rw [getElem?_def]
exact dif_neg h
@[simp] theorem getElem!_pos [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
@[simp, grind] theorem getElem!_pos [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
[Inhabited elem] (c : cont) (i : idx) (h : dom c i) :
c[i]! = c[i]'h := by
have : Decidable (dom c i) := .isTrue h
simp [getElem!_def, getElem?_def, h]
@[simp] theorem getElem!_neg [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
@[simp, grind] theorem getElem!_neg [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
[Inhabited elem] (c : cont) (i : idx) (h : ¬dom c i) : c[i]! = default := by
have : Decidable (dom c i) := .isFalse h
simp [getElem!_def, getElem?_def, h]
@@ -193,7 +193,7 @@ instance (priority := low) [GetElem coll idx elem valid] [∀ xs i, Decidable (v
simp only [getElem?_def] at h
split <;> simp_all
@[simp, grind =] theorem getElem?_eq_none_iff [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
@[simp] theorem getElem?_eq_none_iff [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
(c : cont) (i : idx) [Decidable (dom c i)] : c[i]? = none ↔ ¬dom c i := by
simp only [getElem?_def]
split <;> simp_all
@@ -238,8 +238,6 @@ theorem getElem_of_getElem? [GetElem? cont idx elem dom] [LawfulGetElem cont idx
{c : cont} {i : idx} [Decidable (dom c i)] (h : c[i]? = some e) : Exists fun h : dom c i => c[i] = e :=
getElem?_eq_some_iff.mp h
grind_pattern getElem_of_getElem? => c[i]?, some e
@[simp] theorem some_getElem_eq_getElem?_iff [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
{c : cont} {i : idx} [Decidable (dom c i)] (h : dom c i) :
(some c[i] = c[i]?) ↔ True := by
@@ -275,15 +273,15 @@ instance [GetElem? cont Nat elem dom] [h : LawfulGetElem cont Nat elem dom] :
getElem?_def _c _i _d := h.getElem?_def ..
getElem!_def _c _i := h.getElem!_def ..
@[simp] theorem getElem_fin [GetElem Cont Nat Elem Dom] (a : Cont) (i : Fin n) (h : Dom a i) :
@[simp, grind =] theorem getElem_fin [GetElem Cont Nat Elem Dom] (a : Cont) (i : Fin n) (h : Dom a i) :
a[i] = a[i.1] := rfl
@[simp] theorem getElem?_fin [h : GetElem? Cont Nat Elem Dom] (a : Cont) (i : Fin n) : a[i]? = a[i.1]? := rfl
@[simp, grind =] theorem getElem?_fin [h : GetElem? Cont Nat Elem Dom] (a : Cont) (i : Fin n) : a[i]? = a[i.1]? := rfl
@[simp] theorem getElem!_fin [GetElem? Cont Nat Elem Dom] (a : Cont) (i : Fin n) [Inhabited Elem] : a[i]! = a[i.1]! := rfl
@[simp, grind =] theorem getElem!_fin [GetElem? Cont Nat Elem Dom] (a : Cont) (i : Fin n) [Inhabited Elem] : a[i]! = a[i.1]! := rfl
macro_rules
| `(tactic| get_elem_tactic_trivial) => `(tactic| (with_reducible apply Fin.val_lt_of_le); get_elem_tactic_trivial; done)
| `(tactic| get_elem_tactic_extensible) => `(tactic| (with_reducible apply Fin.val_lt_of_le); get_elem_tactic_extensible; done)
end Fin

View File

@@ -18,3 +18,5 @@ import Init.Grind.CommRing
import Init.Grind.Module
import Init.Grind.Ordered
import Init.Grind.Ext
import Init.Grind.ToInt
import Init.Data.Int.OfNat -- This may not have otherwise been imported, breaking `grind` proofs.

View File

@@ -71,7 +71,9 @@ class CommRing (α : Type u) extends Ring α, CommSemiring α
attribute [instance 100] Semiring.toAdd Semiring.toMul Semiring.toHPow Ring.toNeg Ring.toSub
-- This is a low-priority instance, to avoid conflicts with existing `OfNat`, `NatCast`, and `IntCast` instances.
attribute [instance 100] Semiring.ofNat Semiring.natCast Ring.intCast
attribute [instance 100] Semiring.ofNat
attribute [local instance] Semiring.natCast Ring.intCast
namespace Semiring
@@ -118,7 +120,6 @@ theorem pow_add (a : α) (k₁ k₂ : Nat) : a ^ (k₁ + k₂) = a^k₁ * a^k₂
instance : NatModule α where
hMul a x := a * x
add_zero := by simp [add_zero]
zero_add := by simp [zero_add]
add_assoc := by simp [add_assoc]
add_comm := by simp [add_comm]
zero_hmul := by simp [natCast_zero, zero_mul]
@@ -271,7 +272,6 @@ theorem intCast_pow (x : Int) (k : Nat) : ((x ^ k : Int) : α) = (x : α) ^ k :=
instance : IntModule α where
hMul a x := a * x
add_zero := by simp [add_zero]
zero_add := by simp [zero_add]
add_assoc := by simp [add_assoc]
add_comm := by simp [add_comm]
zero_hmul := by simp [intCast_zero, zero_mul]

View File

@@ -34,4 +34,9 @@ instance : CommRing (BitVec w) where
instance : IsCharP (BitVec w) (2 ^ w) where
ofNat_eq_zero_iff {x} := by simp [BitVec.ofInt, BitVec.toNat_eq]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add (BitVec w) (some 0) (some (2^w)) := inferInstance
example : ToInt.Neg (BitVec w) (some 0) (some (2^w)) := inferInstance
example : ToInt.Sub (BitVec w) (some 0) (some (2^w)) := inferInstance
end Lean.Grind

View File

@@ -14,22 +14,6 @@ namespace Lean.Grind
namespace Fin
instance (n : Nat) [NeZero n] : NatCast (Fin n) where
natCast a := Fin.ofNat n a
@[expose]
def intCast [NeZero n] (a : Int) : Fin n :=
if 0 ≤ a then
Fin.ofNat n a.natAbs
else
-Fin.ofNat n a.natAbs
instance (n : Nat) [NeZero n] : IntCast (Fin n) where
intCast := Fin.intCast
theorem intCast_def {n : Nat} [NeZero n] (x : Int) :
(x : Fin n) = if 0 ≤ x then Fin.ofNat n x.natAbs else -Fin.ofNat n x.natAbs := rfl
-- TODO: we should replace this at runtime with either repeated squaring,
-- or a GMP accelerated function.
@[expose]
@@ -78,18 +62,22 @@ theorem sub_eq_add_neg [NeZero n] (a b : Fin n) : a - b = a + -b := by
cases a; cases b; simp [Fin.neg_def, Fin.sub_def, Fin.add_def, Nat.add_comm]
private theorem neg_neg [NeZero n] (a : Fin n) : - - a = a := by
cases a; simp [Fin.neg_def, Fin.sub_def];
cases a; simp [Fin.neg_def, Fin.sub_def]
next a h => cases a; simp; next a =>
rw [Nat.self_sub_mod n (a+1)]
have : NeZero (n - (a + 1)) := by omega
rw [Nat.self_sub_mod, Nat.sub_sub_eq_min, Nat.min_eq_right (Nat.le_of_lt h)]
open Fin.NatCast Fin.IntCast in
theorem intCast_neg [NeZero n] (i : Int) : Int.cast (R := Fin n) (-i) = - Int.cast (R := Fin n) i := by
simp [Int.cast, IntCast.intCast, Fin.intCast]; split <;> split <;> try omega
simp [Int.cast, IntCast.intCast, Fin.intCast]
split <;> split <;> try omega
next h₁ h₂ => simp [Int.le_antisymm h₁ h₂, Fin.neg_def]
next => simp [Fin.neg_neg]
instance (n : Nat) [NeZero n] : CommRing (Fin n) where
natCast := Fin.NatCast.instNatCast n
intCast := Fin.IntCast.instIntCast n
add_assoc := Fin.add_assoc
add_comm := Fin.add_comm
add_zero := Fin.add_zero
@@ -112,6 +100,9 @@ instance (n : Nat) [NeZero n] : IsCharP (Fin n) n where
simp only [Nat.zero_mod]
simp only [Fin.mk.injEq]
example [NeZero n] : ToInt.Neg (Fin n) (some 0) (some n) := inferInstance
example [NeZero n] : ToInt.Sub (Fin n) (some 0) (some n) := inferInstance
end Fin
end Lean.Grind

View File

@@ -11,10 +11,14 @@ import Init.Data.Hashable
import all Init.Data.Ord
import Init.Data.RArray
import Init.Grind.CommRing.Basic
import Init.Grind.Ordered.Ring
namespace Lean.Grind
namespace CommRing
-- These are no longer global instances, so we need to turn them on here.
attribute [local instance] Semiring.natCast Ring.intCast
abbrev Var := Nat
inductive Expr where
@@ -32,13 +36,13 @@ abbrev Context (α : Type u) := RArray α
def Var.denote {α} (ctx : Context α) (v : Var) : α :=
ctx.get v
def denoteInt {α} [CommRing α] (k : Int) : α :=
def denoteInt {α} [Ring α] (k : Int) : α :=
bif k < 0 then
-OfNat.ofNat (α := α) k.natAbs
else
OfNat.ofNat (α := α) k.natAbs
def Expr.denote {α} [CommRing α] (ctx : Context α) : Expr → α
def Expr.denote {α} [Ring α] (ctx : Context α) : Expr → α
| .add a b => denote ctx a + denote ctx b
| .sub a b => denote ctx a - denote ctx b
| .mul a b => denote ctx a * denote ctx b
@@ -202,7 +206,7 @@ instance : LawfulBEq Poly where
intro a
induction a <;> simp! [BEq.beq]
next k m p ih =>
show m == m ∧ p == p
change m == m ∧ p == p
simp [ih]
def Poly.denote [CommRing α] (ctx : Context α) (p : Poly) : α :=
@@ -529,7 +533,7 @@ theorem Mon.denote_concat {α} [CommRing α] (ctx : Context α) (m₁ m₂ : Mon
next p₁ m₁ ih => rw [mul_assoc]
private theorem le_of_blt_false {a b : Nat} : a.blt b = false → b ≤ a := by
intro h; apply Nat.le_of_not_gt; show ¬a < b
intro h; apply Nat.le_of_not_gt; change ¬a < b
rw [← Nat.blt_eq, h]; simp
private theorem eq_of_blt_false {a b : Nat} : a.blt b = false → b.blt a = false → a = b := by
@@ -1136,5 +1140,90 @@ theorem imp_keqC {α c} [CommRing α] [IsCharP α c] (ctx : Context α) [NoNatZe
end Stepwise
/-! IntModule interface -/
def Mon.denoteAsIntModule [CommRing α] (ctx : Context α) (m : Mon) : α :=
match m with
| .unit => One.one
| .mult pw m => go m (pw.denote ctx)
where
go (m : Mon) (acc : α) : α :=
match m with
| .unit => acc
| .mult pw m => go m (acc * pw.denote ctx)
def Poly.denoteAsIntModule [CommRing α] (ctx : Context α) (p : Poly) : α :=
match p with
| .num k => Int.cast k * One.one
| .add k m p => Int.cast k * m.denoteAsIntModule ctx + denoteAsIntModule ctx p
theorem Mon.denoteAsIntModule_go_eq_denote {α} [CommRing α] (ctx : Context α) (m : Mon) (acc : α)
: denoteAsIntModule.go ctx m acc = acc * m.denote ctx := by
induction m generalizing acc <;> simp [*, denoteAsIntModule.go, denote, mul_one, One.one, *, mul_assoc]
theorem Mon.denoteAsIntModule_eq_denote {α} [CommRing α] (ctx : Context α) (m : Mon)
: m.denoteAsIntModule ctx = m.denote ctx := by
cases m <;> simp [denoteAsIntModule, denote, denoteAsIntModule_go_eq_denote]; rfl
theorem Poly.denoteAsIntModule_eq_denote {α} [CommRing α] (ctx : Context α) (p : Poly) : p.denoteAsIntModule ctx = p.denote ctx := by
induction p <;> simp [*, denoteAsIntModule, denote, mul_one, One.one, Mon.denoteAsIntModule_eq_denote]
open Stepwise
theorem eq_norm {α} [CommRing α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → lhs.denote ctx = rhs.denote ctx → p.denoteAsIntModule ctx = 0 := by
rw [Poly.denoteAsIntModule_eq_denote]; apply core
theorem diseq_norm {α} [CommRing α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → lhs.denote ctx ≠ rhs.denote ctx → p.denoteAsIntModule ctx ≠ 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h; subst p; simp [Expr.denote_toPoly, Expr.denote]
intro h; rw [sub_eq_zero_iff] at h; contradiction
open IntModule.IsOrdered
theorem le_norm {α} [CommRing α] [Preorder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → lhs.denote ctx ≤ rhs.denote ctx → p.denoteAsIntModule ctx ≤ 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h; subst p; simp [Expr.denote_toPoly, Expr.denote]
replace h := add_le_left h ((-1) * rhs.denote ctx)
rw [neg_mul, sub_eq_add_neg, one_mul, sub_eq_add_neg, sub_self] at h
assumption
theorem lt_norm {α} [CommRing α] [Preorder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → lhs.denote ctx < rhs.denote ctx → p.denoteAsIntModule ctx < 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h; subst p; simp [Expr.denote_toPoly, Expr.denote]
replace h := add_lt_left h ((-1) * rhs.denote ctx)
rw [neg_mul, sub_eq_add_neg, one_mul, sub_eq_add_neg, sub_self] at h
assumption
theorem not_le_norm {α} [CommRing α] [LinearOrder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert rhs lhs p → ¬ lhs.denote ctx ≤ rhs.denote ctx → p.denoteAsIntModule ctx < 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h₁; subst p; simp [Expr.denote_toPoly, Expr.denote]
replace h₁ := LinearOrder.lt_of_not_le h₁
replace h₁ := add_lt_left h₁ (-lhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
theorem not_lt_norm {α} [CommRing α] [LinearOrder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert rhs lhs p → ¬ lhs.denote ctx < rhs.denote ctx → p.denoteAsIntModule ctx ≤ 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h₁; subst p; simp [Expr.denote_toPoly, Expr.denote]
replace h₁ := LinearOrder.le_of_not_lt h₁
replace h₁ := add_le_left h₁ (-lhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
theorem not_le_norm' {α} [CommRing α] [Preorder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → ¬ lhs.denote ctx ≤ rhs.denote ctx → ¬ p.denoteAsIntModule ctx ≤ 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h₁; subst p; simp [Expr.denote_toPoly, Expr.denote]; intro h
replace h := add_le_right (rhs.denote ctx) h
rw [sub_eq_add_neg, add_left_comm, sub_eq_add_neg, sub_self] at h; simp [add_zero] at h
contradiction
theorem not_lt_norm' {α} [CommRing α] [Preorder α] [Ring.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: core_cert lhs rhs p → ¬ lhs.denote ctx < rhs.denote ctx → ¬ p.denoteAsIntModule ctx < 0 := by
simp [core_cert, Poly.denoteAsIntModule_eq_denote]; intro _ h₁; subst p; simp [Expr.denote_toPoly, Expr.denote]; intro h
replace h := add_lt_right (rhs.denote ctx) h
rw [sub_eq_add_neg, add_left_comm, sub_eq_add_neg, sub_self] at h; simp [add_zero] at h
contradiction
end CommRing
end Lean.Grind

View File

@@ -45,6 +45,11 @@ instance : IsCharP Int8 (2 ^ 8) where
simp [Int8.ofInt_eq_iff_bmod_eq_toInt,
Int.dvd_iff_bmod_eq_zero, Nat.dvd_iff_mod_eq_zero, Int.ofNat_dvd_right]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add Int8 (some (-(2^7))) (some (2^7)) := inferInstance
example : ToInt.Neg Int8 (some (-(2^7))) (some (2^7)) := inferInstance
example : ToInt.Sub Int8 (some (-(2^7))) (some (2^7)) := inferInstance
instance : NatCast Int16 where
natCast x := Int16.ofNat x
@@ -76,6 +81,11 @@ instance : IsCharP Int16 (2 ^ 16) where
simp [Int16.ofInt_eq_iff_bmod_eq_toInt,
Int.dvd_iff_bmod_eq_zero, Nat.dvd_iff_mod_eq_zero, Int.ofNat_dvd_right]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add Int16 (some (-(2^15))) (some (2^15)) := inferInstance
example : ToInt.Neg Int16 (some (-(2^15))) (some (2^15)) := inferInstance
example : ToInt.Sub Int16 (some (-(2^15))) (some (2^15)) := inferInstance
instance : NatCast Int32 where
natCast x := Int32.ofNat x
@@ -107,6 +117,11 @@ instance : IsCharP Int32 (2 ^ 32) where
simp [Int32.ofInt_eq_iff_bmod_eq_toInt,
Int.dvd_iff_bmod_eq_zero, Nat.dvd_iff_mod_eq_zero, Int.ofNat_dvd_right]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add Int32 (some (-(2^31))) (some (2^31)) := inferInstance
example : ToInt.Neg Int32 (some (-(2^31))) (some (2^31)) := inferInstance
example : ToInt.Sub Int32 (some (-(2^31))) (some (2^31)) := inferInstance
instance : NatCast Int64 where
natCast x := Int64.ofNat x
@@ -138,6 +153,11 @@ instance : IsCharP Int64 (2 ^ 64) where
simp [Int64.ofInt_eq_iff_bmod_eq_toInt,
Int.dvd_iff_bmod_eq_zero, Nat.dvd_iff_mod_eq_zero, Int.ofNat_dvd_right]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add Int64 (some (-(2^63))) (some (2^63)) := inferInstance
example : ToInt.Neg Int64 (some (-(2^63))) (some (2^63)) := inferInstance
example : ToInt.Sub Int64 (some (-(2^63))) (some (2^63)) := inferInstance
instance : NatCast ISize where
natCast x := ISize.ofNat x
@@ -171,4 +191,9 @@ instance : IsCharP ISize (2 ^ numBits) where
simp [ISize.ofInt_eq_iff_bmod_eq_toInt,
Int.dvd_iff_bmod_eq_zero, Nat.dvd_iff_mod_eq_zero, Int.ofNat_dvd_right]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add ISize (some (-(2^(numBits-1)))) (some (2^(numBits-1))) := inferInstance
example : ToInt.Neg ISize (some (-(2^(numBits-1)))) (some (2^(numBits-1))) := inferInstance
example : ToInt.Sub ISize (some (-(2^(numBits-1)))) (some (2^(numBits-1))) := inferInstance
end Lean.Grind


@@ -149,6 +149,11 @@ instance : IsCharP UInt8 256 where
have : OfNat.ofNat x = UInt8.ofNat x := rfl
simp [this, UInt8.ofNat_eq_iff_mod_eq_toNat]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add UInt8 (some 0) (some (2^8)) := inferInstance
example : ToInt.Neg UInt8 (some 0) (some (2^8)) := inferInstance
example : ToInt.Sub UInt8 (some 0) (some (2^8)) := inferInstance
instance : CommRing UInt16 where
add_assoc := UInt16.add_assoc
add_comm := UInt16.add_comm
@@ -174,6 +179,11 @@ instance : IsCharP UInt16 65536 where
have : OfNat.ofNat x = UInt16.ofNat x := rfl
simp [this, UInt16.ofNat_eq_iff_mod_eq_toNat]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add UInt16 (some 0) (some (2^16)) := inferInstance
example : ToInt.Neg UInt16 (some 0) (some (2^16)) := inferInstance
example : ToInt.Sub UInt16 (some 0) (some (2^16)) := inferInstance
instance : CommRing UInt32 where
add_assoc := UInt32.add_assoc
add_comm := UInt32.add_comm
@@ -199,6 +209,11 @@ instance : IsCharP UInt32 4294967296 where
have : OfNat.ofNat x = UInt32.ofNat x := rfl
simp [this, UInt32.ofNat_eq_iff_mod_eq_toNat]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add UInt32 (some 0) (some (2^32)) := inferInstance
example : ToInt.Neg UInt32 (some 0) (some (2^32)) := inferInstance
example : ToInt.Sub UInt32 (some 0) (some (2^32)) := inferInstance
instance : CommRing UInt64 where
add_assoc := UInt64.add_assoc
add_comm := UInt64.add_comm
@@ -224,6 +239,11 @@ instance : IsCharP UInt64 18446744073709551616 where
have : OfNat.ofNat x = UInt64.ofNat x := rfl
simp [this, UInt64.ofNat_eq_iff_mod_eq_toNat]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add UInt64 (some 0) (some (2^64)) := inferInstance
example : ToInt.Neg UInt64 (some 0) (some (2^64)) := inferInstance
example : ToInt.Sub UInt64 (some 0) (some (2^64)) := inferInstance
instance : CommRing USize where
add_assoc := USize.add_assoc
add_comm := USize.add_comm
@@ -251,4 +271,9 @@ instance : IsCharP USize (2 ^ numBits) where
have : OfNat.ofNat x = USize.ofNat x := rfl
simp [this, USize.ofNat_eq_iff_mod_eq_toNat]
-- Verify we can derive the instances showing how `toInt` interacts with operations:
example : ToInt.Add USize (some 0) (some (2^numBits)) := inferInstance
example : ToInt.Neg USize (some 0) (some (2^numBits)) := inferInstance
example : ToInt.Sub USize (some 0) (some (2^numBits)) := inferInstance
end Lean.Grind


@@ -7,12 +7,12 @@ module
prelude
import Init.Data.Int.Order
import Init.Grind.ToInt
namespace Lean.Grind
class NatModule (M : Type u) extends Zero M, Add M, HMul Nat M M where
add_zero : ∀ a : M, a + 0 = a
zero_add : ∀ a : M, 0 + a = a
add_comm : ∀ a b : M, a + b = b + a
add_assoc : ∀ a b c : M, a + b + c = a + (b + c)
zero_hmul : ∀ a : M, 0 * a = 0
@@ -26,7 +26,6 @@ attribute [instance 100] NatModule.toZero NatModule.toAdd NatModule.toHMul
class IntModule (M : Type u) extends Zero M, Add M, Neg M, Sub M, HMul Int M M where
add_zero : ∀ a : M, a + 0 = a
zero_add : ∀ a : M, 0 + a = a
add_comm : ∀ a b : M, a + b = b + a
add_assoc : ∀ a b c : M, a + b + c = a + (b + c)
zero_hmul : ∀ a : M, (0 : Int) * a = 0
@@ -52,9 +51,15 @@ instance toNatModule (M : Type u) [i : IntModule M] : NatModule M :=
variable {M : Type u} [IntModule M]
theorem zero_add (a : M) : 0 + a = a := by
rw [add_comm, add_zero]
theorem add_neg_cancel (a : M) : a + -a = 0 := by
rw [add_comm, neg_add_cancel]
theorem add_left_comm (a b c : M) : a + (b + c) = b + (a + c) := by
rw [← add_assoc, ← add_assoc, add_comm a]
theorem add_left_inj {a b : M} (c : M) : a + c = b + c ↔ a = b :=
⟨fun h => by simpa [add_assoc, add_neg_cancel, add_zero] using (congrArg (· + -c) h),
 fun g => congrArg (· + c) g⟩
@@ -111,4 +116,17 @@ class NoNatZeroDivisors (α : Type u) [Zero α] [HMul Nat α α] where
export NoNatZeroDivisors (no_nat_zero_divisors)
instance [ToInt α (some lo) (some hi)] [IntModule α] [ToInt.Zero α (some lo) (some hi)] [ToInt.Add α (some lo) (some hi)] : ToInt.Neg α (some lo) (some hi) where
toInt_neg x := by
have := (ToInt.Add.toInt_add (-x) x).symm
rw [IntModule.neg_add_cancel, ToInt.Zero.toInt_zero] at this
rw [ToInt.wrap_eq_wrap_iff] at this
simp at this
rw [← ToInt.wrap_toInt]
rw [ToInt.wrap_eq_wrap_iff]
simpa
instance [ToInt α (some lo) (some hi)] [IntModule α] [ToInt.Add α (some lo) (some hi)] [ToInt.Neg α (some lo) (some hi)] : ToInt.Sub α (some lo) (some hi) :=
ToInt.Sub.of_sub_eq_add_neg IntModule.sub_eq_add_neg
end Lean.Grind


@@ -11,3 +11,4 @@ import Init.Grind.Ordered.Module
import Init.Grind.Ordered.Ring
import Init.Grind.Ordered.Field
import Init.Grind.Ordered.Int
import Init.Grind.Ordered.Linarith


@@ -0,0 +1,502 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
import Init.Grind.Ordered.Module
import Init.Grind.Ordered.Ring
import all Init.Data.Ord
import all Init.Data.AC
import Init.Data.RArray
/-!
Support for the linear arithmetic module for `IntModule` in `grind`
-/
namespace Lean.Grind.Linarith
abbrev Var := Nat
open IntModule
attribute [local simp] add_zero zero_add zero_hmul hmul_zero one_hmul
inductive Expr where
| zero
| var (i : Var)
| add (a b : Expr)
| sub (a b : Expr)
| neg (a : Expr)
| mul (k : Int) (a : Expr)
deriving Inhabited, BEq
abbrev Context (α : Type u) := RArray α
def Var.denote {α} (ctx : Context α) (v : Var) : α :=
ctx.get v
def Expr.denote {α} [IntModule α] (ctx : Context α) : Expr → α
| zero => 0
| .var v => v.denote ctx
| .add a b => denote ctx a + denote ctx b
| .sub a b => denote ctx a - denote ctx b
| .mul k a => k * denote ctx a
| .neg a => -denote ctx a
inductive Poly where
| nil
| add (k : Int) (v : Var) (p : Poly)
deriving BEq
def Poly.denote {α} [IntModule α] (ctx : Context α) (p : Poly) : α :=
match p with
| .nil => 0
| .add k v p => k * v.denote ctx + denote ctx p
/--
Similar to `Poly.denote`, but produces a denotation better for normalization.
-/
def Poly.denote' {α} [IntModule α] (ctx : Context α) (p : Poly) : α :=
match p with
| .nil => 0
| .add 1 v p => go (v.denote ctx) p
| .add k v p => go (k * v.denote ctx) p
where
go (r : α) (p : Poly) : α :=
match p with
| .nil => r
| .add 1 v p => go (r + v.denote ctx) p
| .add k v p => go (r + k * v.denote ctx) p
-- Helper instance for `ac_rfl`
local instance {α} [IntModule α] : Std.Associative (· + · : α → α → α) where
assoc := IntModule.add_assoc
-- Helper instance for `ac_rfl`
local instance {α} [IntModule α] : Std.Commutative (· + · : α → α → α) where
comm := IntModule.add_comm
theorem Poly.denote'_go_eq_denote {α} [IntModule α] (ctx : Context α) (p : Poly) (r : α) : denote'.go ctx r p = p.denote ctx + r := by
induction r, p using denote'.go.induct ctx <;> simp [denote'.go, denote]
next ih => rw [ih]; ac_rfl
next ih => rw [ih]; ac_rfl
theorem Poly.denote'_eq_denote {α} [IntModule α] (ctx : Context α) (p : Poly) : p.denote' ctx = p.denote ctx := by
unfold denote' <;> split <;> simp [denote, denote'_go_eq_denote] <;> ac_rfl
def Poly.insert (k : Int) (v : Var) (p : Poly) : Poly :=
match p with
| .nil => .add k v .nil
| .add k' v' p =>
bif Nat.blt v' v then
.add k v <| .add k' v' p
else bif Nat.beq v v' then
if Int.add k k' == 0 then
p
else
.add (Int.add k k') v' p
else
.add k' v' (insert k v p)
/-- Normalizes the given polynomial by fusing monomial and constants. -/
def Poly.norm (p : Poly) : Poly :=
match p with
| .nil => .nil
| .add k v p => (norm p).insert k v
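-- Illustrative sanity checks (not part of the original change): `norm` fuses
-- coefficients of the same variable and drops monomials that cancel.
#guard (Poly.add 2 0 (.add 3 0 .nil)).norm == .add 5 0 .nil
#guard (Poly.add 2 0 (.add (-2) 0 .nil)).norm == .nil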
def Poly.append (p₁ p₂ : Poly) : Poly :=
match p₁ with
| .nil => p₂
| .add k x p₁ => .add k x (append p₁ p₂)
def Poly.combine' (fuel : Nat) (p₁ p₂ : Poly) : Poly :=
match fuel with
| 0 => p₁.append p₂
| fuel + 1 => match p₁, p₂ with
| .nil, p₂ => p₂
| p₁, .nil => p₁
| .add a₁ x₁ p₁, .add a₂ x₂ p₂ =>
bif Nat.beq x₁ x₂ then
let a := a₁ + a₂
bif a == 0 then
combine' fuel p₁ p₂
else
.add a x₁ (combine' fuel p₁ p₂)
else bif Nat.blt x₂ x₁ then
.add a₁ x₁ (combine' fuel p₁ (.add a₂ x₂ p₂))
else
.add a₂ x₂ (combine' fuel (.add a₁ x₁ p₁) p₂)
def Poly.combine (p₁ p₂ : Poly) : Poly :=
combine' 100000000 p₁ p₂
/-- Converts the given expression into a polynomial. -/
def Expr.toPoly' (e : Expr) : Poly :=
go 1 e .nil
where
go (coeff : Int) : Expr → (Poly → Poly)
| .zero => id
| .var v => (.add coeff v ·)
| .add a b => go coeff a ∘ go coeff b
| .sub a b => go coeff a ∘ go (-coeff) b
| .mul k a => bif k == 0 then id else go (Int.mul coeff k) a
| .neg a => go (-coeff) a
/-- Converts the given expression into a polynomial, and then normalizes it. -/
def Expr.norm (e : Expr) : Poly :=
e.toPoly'.norm
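-- Illustrative sanity checks (not part of the original change): `x - x`
-- normalizes to the empty polynomial, and monomials are collected per variable
-- (larger variable indices first).
#guard (Expr.sub (.var 0) (.var 0)).norm == .nil
#guard (Expr.add (.var 0) (.mul 2 (.var 1))).norm == .add 2 1 (.add 1 0 .nil)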
/--
`p.mul k` multiplies every coefficient of the polynomial `p` by `k`.
-/
def Poly.mul' (p : Poly) (k : Int) : Poly :=
match p with
| .nil => .nil
| .add k' v p => .add (k*k') v (mul' p k)
def Poly.mul (p : Poly) (k : Int) : Poly :=
if k == 0 then
.nil
else
p.mul' k
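-- Illustrative sanity checks (not part of the original change): `mul` scales
-- every coefficient, and multiplication by `0` collapses the polynomial to `nil`.
#guard (Poly.add 2 0 (.add (-1) 1 .nil)).mul 3 == .add 6 0 (.add (-3) 1 .nil)
#guard (Poly.add 2 0 .nil).mul 0 == .nil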
@[simp] theorem Poly.denote_mul {α} [IntModule α] (ctx : Context α) (p : Poly) (k : Int) : (p.mul k).denote ctx = k * p.denote ctx := by
simp [mul]
split
next => simp [*, denote]
next =>
induction p <;> simp [mul', denote, *]
rw [mul_hmul, hmul_add]
theorem Poly.denote_insert {α} [IntModule α] (ctx : Context α) (k : Int) (v : Var) (p : Poly) :
(p.insert k v).denote ctx = p.denote ctx + k * v.denote ctx := by
fun_induction p.insert k v <;> simp [denote]
next => ac_rfl
next h₁ h₂ h₃ =>
simp at h₃; simp at h₂; subst h₂
rw [add_comm, add_assoc, add_hmul, h₃, zero_hmul, zero_add]
next h _ => simp at h; subst h; rw [add_hmul]; ac_rfl
next ih => rw [ih]; ac_rfl
attribute [local simp] Poly.denote_insert
theorem Poly.denote_norm {α} [IntModule α] (ctx : Context α) (p : Poly) : p.norm.denote ctx = p.denote ctx := by
induction p <;> simp [denote, norm, add_comm, *]
attribute [local simp] Poly.denote_norm
theorem Poly.denote_append {α} [IntModule α] (ctx : Context α) (p₁ p₂ : Poly) : (p₁.append p₂).denote ctx = p₁.denote ctx + p₂.denote ctx := by
induction p₁ <;> simp [append, denote, *]; ac_rfl
attribute [local simp] Poly.denote_append
theorem Poly.denote_combine' {α} [IntModule α] (ctx : Context α) (fuel : Nat) (p₁ p₂ : Poly) : (p₁.combine' fuel p₂).denote ctx = p₁.denote ctx + p₂.denote ctx := by
fun_induction p₁.combine' fuel p₂ <;>
simp_all +zetaDelta [denote, Int.add_mul]
next h _ =>
rw [Int.add_comm] at h
rw [add_left_comm, add_assoc, add_assoc, add_hmul, h, zero_hmul, zero_add]
next => rw [add_hmul]; ac_rfl
all_goals ac_rfl
theorem Poly.denote_combine {α} [IntModule α] (ctx : Context α) (p₁ p₂ : Poly) : (p₁.combine p₂).denote ctx = p₁.denote ctx + p₂.denote ctx := by
simp [combine, denote_combine']
attribute [local simp] Poly.denote_combine
theorem Expr.denote_toPoly'_go {α} [IntModule α] {k p} (ctx : Context α) (e : Expr)
: (toPoly'.go k e p).denote ctx = k * e.denote ctx + p.denote ctx := by
induction k, e using Expr.toPoly'.go.induct generalizing p <;> simp [toPoly'.go, denote, Poly.denote, *, hmul_add]
next => ac_rfl
next => rw [sub_eq_add_neg, neg_hmul, hmul_add, hmul_neg]; ac_rfl
next h => simp at h; subst h; simp
next ih => simp at ih; rw [ih, mul_hmul]
next => rw [hmul_neg, neg_hmul]
theorem Expr.denote_norm {α} [IntModule α] (ctx : Context α) (e : Expr) : e.norm.denote ctx = e.denote ctx := by
simp [norm, toPoly', Expr.denote_toPoly'_go, Poly.denote]
attribute [local simp] Expr.denote_norm
instance : LawfulBEq Poly where
eq_of_beq {a} := by
induction a <;> intro b <;> cases b <;> simp_all! [BEq.beq]
next ih =>
intro _ _ h
exact ih h
rfl := by
intro a
induction a <;> simp! [BEq.beq]
assumption
attribute [local simp] Poly.denote'_eq_denote
def Poly.leadCoeff (p : Poly) : Int :=
match p with
| .add a _ _ => a
| _ => 1
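-- Illustrative sanity check (not part of the original change): `leadCoeff`
-- returns the coefficient of the first monomial, defaulting to `1` for `nil`.
#guard (Poly.add 3 2 .nil).leadCoeff == 3
#guard Poly.nil.leadCoeff == 1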
open IntModule.IsOrdered
/-!
Helper theorems for conflict resolution during model construction.
-/
private theorem le_add_le {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] {a b : α}
(h₁ : a ≤ 0) (h₂ : b ≤ 0) : a + b ≤ 0 := by
replace h₁ := add_le_left h₁ b; simp at h₁
exact Preorder.le_trans h₁ h₂
private theorem le_add_lt {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] {a b : α}
(h₁ : a ≤ 0) (h₂ : b < 0) : a + b < 0 := by
replace h₁ := add_le_left h₁ b; simp at h₁
exact Preorder.lt_of_le_of_lt h₁ h₂
private theorem lt_add_lt {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] {a b : α}
(h₁ : a < 0) (h₂ : b < 0) : a + b < 0 := by
replace h₁ := add_lt_left h₁ b; simp at h₁
exact Preorder.lt_trans h₁ h₂
private theorem coe_natAbs_nonneg (a : Int) : (a.natAbs : Int) ≥ 0 := by
exact Int.natCast_nonneg a.natAbs
def le_le_combine_cert (p₁ p₂ p₃ : Poly) : Bool :=
let a₁ := p₁.leadCoeff.natAbs
let a₂ := p₂.leadCoeff.natAbs
p₃ == (p₁.mul a₂ |>.combine (p₂.mul a₁))
theorem le_le_combine {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: le_le_combine_cert p₁ p₂ p₃ → p₁.denote' ctx ≤ 0 → p₂.denote' ctx ≤ 0 → p₃.denote' ctx ≤ 0 := by
simp [le_le_combine_cert]; intro _ h₁ h₂; subst p₃; simp
replace h₁ := hmul_nonpos (coe_natAbs_nonneg p₂.leadCoeff) h₁
replace h₂ := hmul_nonpos (coe_natAbs_nonneg p₁.leadCoeff) h₂
exact le_add_le h₁ h₂
def le_lt_combine_cert (p₁ p₂ p₃ : Poly) : Bool :=
let a₁ := p₁.leadCoeff.natAbs
let a₂ := p₂.leadCoeff.natAbs
a₁ > (0 : Int) && p₃ == (p₁.mul a₂ |>.combine (p₂.mul a₁))
theorem le_lt_combine {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: le_lt_combine_cert p₁ p₂ p₃ → p₁.denote' ctx ≤ 0 → p₂.denote' ctx < 0 → p₃.denote' ctx < 0 := by
simp [-Int.natAbs_pos, -Int.ofNat_pos, le_lt_combine_cert]; intro hp _ h₁ h₂; subst p₃; simp
replace h₁ := hmul_nonpos (coe_natAbs_nonneg p₂.leadCoeff) h₁
replace h₂ := hmul_neg (p₁.leadCoeff.natAbs) h₂ |>.mp hp
exact le_add_lt h₁ h₂
theorem le_eq_combine {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: le_le_combine_cert p₁ p₂ p₃ → p₁.denote' ctx ≤ 0 → p₂.denote' ctx = 0 → p₃.denote' ctx ≤ 0 := by
simp [le_le_combine_cert]; intro _ h₁ h₂; subst p₃; simp [h₂]
replace h₁ := hmul_nonpos (coe_natAbs_nonneg p₂.leadCoeff) h₁
assumption
def lt_lt_combine_cert (p₁ p₂ p₃ : Poly) : Bool :=
let a₁ := p₁.leadCoeff.natAbs
let a₂ := p₂.leadCoeff.natAbs
a₂ > (0 : Int) && a₁ > (0 : Int) && p₃ == (p₁.mul a₂ |>.combine (p₂.mul a₁))
theorem lt_lt_combine {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: lt_lt_combine_cert p₁ p₂ p₃ → p₁.denote' ctx < 0 → p₂.denote' ctx < 0 → p₃.denote' ctx < 0 := by
simp [-Int.natAbs_pos, -Int.ofNat_pos, lt_lt_combine_cert]; intro hp₁ hp₂ _ h₁ h₂; subst p₃; simp
replace h₁ := hmul_neg (p₂.leadCoeff.natAbs) h₁ |>.mp hp₁
replace h₂ := hmul_neg (p₁.leadCoeff.natAbs) h₂ |>.mp hp₂
exact lt_add_lt h₁ h₂
def lt_eq_combine_cert (p₁ p₂ p₃ : Poly) : Bool :=
let a₁ := p₁.leadCoeff.natAbs
let a₂ := p₂.leadCoeff.natAbs
a₂ > (0 : Int) && p₃ == (p₁.mul a₂ |>.combine (p₂.mul a₁))
theorem lt_eq_combine {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: lt_eq_combine_cert p₁ p₂ p₃ → p₁.denote' ctx < 0 → p₂.denote' ctx = 0 → p₃.denote' ctx < 0 := by
simp [-Int.natAbs_pos, -Int.ofNat_pos, lt_eq_combine_cert]; intro hp₁ _ h₁ h₂; subst p₃; simp [h₂]
replace h₁ := hmul_neg (p₂.leadCoeff.natAbs) h₁ |>.mp hp₁
assumption
theorem eq_eq_combine {α} [IntModule α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: le_le_combine_cert p₁ p₂ p₃ → p₁.denote' ctx = 0 → p₂.denote' ctx = 0 → p₃.denote' ctx = 0 := by
simp [le_le_combine_cert]; intro _ h₁ h₂; subst p₃; simp [h₁, h₂]
def diseq_split_cert (p₁ p₂ : Poly) : Bool :=
p₂ == p₁.mul (-1)
-- We need `LinearOrder` to use `trichotomy`
theorem diseq_split {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ : Poly)
: diseq_split_cert p₁ p₂ → p₁.denote' ctx ≠ 0 → p₁.denote' ctx < 0 ∨ p₂.denote' ctx < 0 := by
simp [diseq_split_cert]; intro _ h₁; subst p₂; simp
cases LinearOrder.trichotomy (p₁.denote ctx) 0
next h => exact Or.inl h
next h =>
apply Or.inr
simp [h₁] at h
rw [← neg_pos_iff, neg_hmul, neg_neg, one_hmul]; assumption
theorem diseq_split_resolve {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ : Poly)
: diseq_split_cert p₁ p₂ → p₁.denote' ctx ≠ 0 → ¬p₁.denote' ctx < 0 → p₂.denote' ctx < 0 := by
intro h₁ h₂ h₃
exact (diseq_split ctx p₁ p₂ h₁ h₂).resolve_left h₃
def eq_diseq_combine_cert (p₁ p₂ p₃ : Poly) : Bool :=
let a₁ := p₁.leadCoeff.natAbs
let a₂ := p₂.leadCoeff.natAbs
a₁ ≠ 0 && p₃ == (p₁.mul a₂ |>.combine (p₂.mul a₁))
theorem eq_diseq_combine {α} [IntModule α] [NoNatZeroDivisors α] (ctx : Context α) (p₁ p₂ p₃ : Poly)
: eq_diseq_combine_cert p₁ p₂ p₃ → p₁.denote' ctx = 0 → p₂.denote' ctx ≠ 0 → p₃.denote' ctx ≠ 0 := by
simp [- Int.natAbs_eq_zero, -Int.natCast_eq_zero, eq_diseq_combine_cert]; intro hne _ h₁ h₂; subst p₃
simp [h₁, h₂]; intro h
have := no_nat_zero_divisors (p₁.leadCoeff.natAbs) (p₂.denote ctx) hne h
contradiction
def eq_diseq_combine_cert' (p₁ p₂ p₃ : Poly) (k : Int) : Bool :=
p₃ == (p₁.mul k |>.combine p₂)
/-
Special case of `eq_diseq_combine` where the leading coefficient `c₁` of `p₁` is `-k*c₂`,
with `c₂` the leading coefficient of `p₂`.
-/
theorem eq_diseq_combine' {α} [IntModule α] (ctx : Context α) (p₁ p₂ p₃ : Poly) (k : Int)
: eq_diseq_combine_cert' p₁ p₂ p₃ k → p₁.denote' ctx = 0 → p₂.denote' ctx ≠ 0 → p₃.denote' ctx ≠ 0 := by
simp [eq_diseq_combine_cert']; intro _ h₁ h₂; subst p₃
simp [h₁, h₂]
/-!
Helper theorems for internalizing facts into the linear arithmetic procedure
-/
def norm_cert (lhs rhs : Expr) (p : Poly) :=
p == (lhs.sub rhs).norm
theorem eq_norm {α} [IntModule α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → lhs.denote ctx = rhs.denote ctx → p.denote' ctx = 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
theorem le_of_eq {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → lhs.denote ctx = rhs.denote ctx → p.denote' ctx ≤ 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
apply Preorder.le_refl
theorem diseq_norm {α} [IntModule α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → lhs.denote ctx ≠ rhs.denote ctx → p.denote' ctx ≠ 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
intro h
replace h := congrArg (rhs.denote ctx + ·) h; simp [sub_eq_add_neg] at h
rw [add_left_comm, ← sub_eq_add_neg, sub_self, add_zero] at h
contradiction
theorem le_norm {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → lhs.denote ctx ≤ rhs.denote ctx → p.denote' ctx ≤ 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
replace h₁ := add_le_left h₁ (-rhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
theorem lt_norm {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → lhs.denote ctx < rhs.denote ctx → p.denote' ctx < 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
replace h₁ := add_lt_left h₁ (-rhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
theorem not_le_norm {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert rhs lhs p → ¬ lhs.denote ctx ≤ rhs.denote ctx → p.denote' ctx < 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
replace h₁ := LinearOrder.lt_of_not_le h₁
replace h₁ := add_lt_left h₁ (-lhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
theorem not_lt_norm {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert rhs lhs p → ¬ lhs.denote ctx < rhs.denote ctx → p.denote' ctx ≤ 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]
replace h₁ := LinearOrder.le_of_not_lt h₁
replace h₁ := add_le_left h₁ (-lhs.denote ctx)
simp [← sub_eq_add_neg, sub_self] at h₁
assumption
-- If the module does not have a linear order, we can still put the expressions in polynomial form
theorem not_le_norm' {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → ¬ lhs.denote ctx ≤ rhs.denote ctx → ¬ p.denote' ctx ≤ 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]; intro h
replace h := add_le_right (rhs.denote ctx) h
rw [sub_eq_add_neg, add_left_comm, ← sub_eq_add_neg, sub_self] at h; simp at h
contradiction
theorem not_lt_norm' {α} [IntModule α] [Preorder α] [IntModule.IsOrdered α] (ctx : Context α) (lhs rhs : Expr) (p : Poly)
: norm_cert lhs rhs p → ¬ lhs.denote ctx < rhs.denote ctx → ¬ p.denote' ctx < 0 := by
simp [norm_cert]; intro _ h₁; subst p; simp [Expr.denote, h₁, sub_self]; intro h
replace h := add_lt_right (rhs.denote ctx) h
rw [sub_eq_add_neg, add_left_comm, ← sub_eq_add_neg, sub_self] at h; simp at h
contradiction
/-!
Equality detection
-/
def eq_of_le_ge_cert (p₁ p₂ : Poly) : Bool :=
p₂ == p₁.mul (-1)
theorem eq_of_le_ge {α} [IntModule α] [PartialOrder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ : Poly) (p₂ : Poly)
: eq_of_le_ge_cert p₁ p₂ → p₁.denote' ctx ≤ 0 → p₂.denote' ctx ≤ 0 → p₁.denote' ctx = 0 := by
simp [eq_of_le_ge_cert]
intro; subst p₂; simp
intro h₁ h₂
replace h₂ := add_le_left h₂ (p₁.denote ctx)
rw [add_comm, neg_hmul, one_hmul, ← sub_eq_add_neg, sub_self, zero_add] at h₂
exact PartialOrder.le_antisymm h₁ h₂
/-!
Helper theorems for closing the goal
-/
theorem diseq_unsat {α} [IntModule α] (ctx : Context α) : (Poly.nil).denote ctx ≠ 0 → False := by
simp [Poly.denote]
theorem lt_unsat {α} [IntModule α] [Preorder α] (ctx : Context α) : (Poly.nil).denote ctx < 0 → False := by
simp [Poly.denote]; intro h
have := Preorder.lt_iff_le_not_le.mp h
simp at this
def zero_lt_one_cert (p : Poly) : Bool :=
p == .add (-1) 0 .nil
theorem zero_lt_one {α} [Ring α] [Preorder α] [Ring.IsOrdered α] (ctx : Context α) (p : Poly)
: zero_lt_one_cert p → (0 : Var).denote ctx = One.one → p.denote' ctx < 0 := by
simp [zero_lt_one_cert]; intro _ h; subst p; simp [Poly.denote, h, One.one, neg_hmul]
rw [neg_lt_iff, neg_zero]; apply Ring.IsOrdered.zero_lt_one
/-!
Coefficient normalization
-/
def coeff_cert (p₁ p₂ : Poly) (k : Nat) :=
k > 0 && p₁ == p₂.mul k
theorem eq_coeff {α} [IntModule α] [NoNatZeroDivisors α] (ctx : Context α) (p₁ p₂ : Poly) (k : Nat)
: coeff_cert p₁ p₂ k → p₁.denote' ctx = 0 → p₂.denote' ctx = 0 := by
simp [coeff_cert]; intro h _; subst p₁; simp
exact no_nat_zero_divisors k (p₂.denote ctx) (Nat.ne_zero_of_lt h)
theorem le_coeff {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ : Poly) (k : Nat)
: coeff_cert p₁ p₂ k → p₁.denote' ctx ≤ 0 → p₂.denote' ctx ≤ 0 := by
simp [coeff_cert]; intro h _; subst p₁; simp
have : k > (0 : Int) := Int.natCast_pos.mpr h
intro h₁; apply Classical.byContradiction
intro h₂; replace h₂ := LinearOrder.lt_of_not_le h₂
replace h₂ := IsOrdered.hmul_pos (k) h₂ |>.mp this
exact Preorder.lt_irrefl 0 (Preorder.lt_of_lt_of_le h₂ h₁)
theorem lt_coeff {α} [IntModule α] [LinearOrder α] [IntModule.IsOrdered α] (ctx : Context α) (p₁ p₂ : Poly) (k : Nat)
: coeff_cert p₁ p₂ k → p₁.denote' ctx < 0 → p₂.denote' ctx < 0 := by
simp [coeff_cert]; intro h _; subst p₁; simp
have : k > (0 : Int) := Int.natCast_pos.mpr h
intro h₁; apply Classical.byContradiction
intro h₂; replace h₂ := LinearOrder.le_of_not_lt h₂
replace h₂ := IsOrdered.hmul_nonneg (Int.le_of_lt this) h₂
exact Preorder.lt_irrefl 0 (Preorder.lt_of_le_of_lt h₂ h₁)
theorem diseq_neg {α} [IntModule α] (ctx : Context α) (p p' : Poly) : p' == p.mul (-1) → p.denote' ctx ≠ 0 → p'.denote' ctx ≠ 0 := by
simp; intro _ _; subst p'; simp [neg_hmul]
intro h; replace h := congrArg (- ·) h; simp [neg_neg, neg_zero] at h
contradiction
end Lean.Grind.Linarith


@@ -91,6 +91,19 @@ theorem trichotomy (a b : α) : a < b ∨ a = b ∨ b < a := by
| inl h => right; right; exact h
| inr h => right; left; exact h.symm
theorem le_of_not_lt {α} [LinearOrder α] {a b : α} (h : ¬ a < b) : b ≤ a := by
cases LinearOrder.trichotomy a b
next => contradiction
next h => apply PartialOrder.le_iff_lt_or_eq.mpr; cases h <;> simp [*]
theorem lt_of_not_le {α} [LinearOrder α] {a b : α} (h : ¬ a ≤ b) : b < a := by
cases LinearOrder.trichotomy a b
next h₁ h₂ => have := Preorder.lt_iff_le_not_le.mp h₂; simp [h] at this
next h =>
cases h
next h => subst a; exact False.elim <| h (Preorder.le_refl b)
next => assumption
end LinearOrder
end Lean.Grind


@@ -20,13 +20,13 @@ set_option linter.unusedVariables false in
def node_def (_ : Nat) {α : Sort u} {a : α} : NodeDef := .unit
@[app_unexpander node_def]
def nodeDefUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def nodeDefUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $id:num) => return mkIdent <| Name.mkSimple $ "#" ++ toString id.getNat
| _ => throw ()
@[app_unexpander NodeDef]
def NodeDefUnexpander : PrettyPrinter.Unexpander := fun _ => do
meta def NodeDefUnexpander : PrettyPrinter.Unexpander := fun _ => do
return mkIdent <| Name.mkSimple "NodeDef"
end Lean.Grind


@@ -55,7 +55,7 @@ and stores `c` in `x`, `b` in `a`, and the proof that `c = some b` in `h`.
-/
def genPattern {α : Sort u} (_h : Prop) (x : α) (_val : α) : α := x
/-- Similar to `genPattern` but for the heterogenous case -/
/-- Similar to `genPattern` but for the heterogeneous case -/
def genHEqPattern {α β : Sort u} (_h : Prop) (x : α) (_val : β) : α := x
end Lean.Grind
@@ -175,7 +175,7 @@ structure Config where
-/
zeta := true
/--
When `true` (default: `false`), uses procedure for handling equalities over commutative rings.
When `true` (default: `true`), uses procedure for handling equalities over commutative rings.
-/
ring := true
ringSteps := 10000
@@ -184,6 +184,11 @@ structure Config where
proof terms, instead of a single-step Nullstellensatz certificate
-/
ringNull := false
/--
When `true` (default: `true`), uses the procedure for handling linear arithmetic over `IntModule` and
`CommRing`.
-/
linarith := true
deriving Inhabited, BEq
end Lean.Grind

src/Init/Grind/ToInt.lean (new file, 513 lines)

@@ -0,0 +1,513 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
module
prelude
import Init.Data.Int.DivMod.Basic
import Init.Data.Int.Lemmas
import Init.Data.Int.Order
import Init.Data.Fin.Lemmas
import Init.Data.UInt.Lemmas
import Init.Data.SInt.Lemmas
/-!
# Typeclasses for types that can be embedded into an interval of `Int`.
The typeclass `ToInt α lo? hi?` carries the data of a function `ToInt.toInt : α → Int`
which is injective and lands between the (optional) lower and upper bounds `lo?` and `hi?`.
The function `ToInt.wrap` is the identity if either bound is `none`,
and otherwise wraps the integers into the interval `[lo, hi)`.
The typeclass `ToInt.Add α lo? hi?` then asserts that `toInt (x + y) = wrap lo? hi? (toInt x + toInt y)`.
There are many variants for other operations.
These typeclasses are used solely in the `grind` tactic to lift linear inequalities into `Int`.
-/
namespace Lean.Grind
class ToInt (α : Type u) (lo? hi? : outParam (Option Int)) where
toInt : α → Int
toInt_inj : ∀ x y, toInt x = toInt y → x = y
le_toInt : lo? = some lo → lo ≤ toInt x
toInt_lt : hi? = some hi → toInt x < hi
@[simp]
def ToInt.wrap (lo? hi? : Option Int) (x : Int) : Int :=
match lo?, hi? with
| some lo, some hi => (x - lo) % (hi - lo) + lo
| _, _ => x
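-- Illustrative sanity checks (not part of the original change): with both bounds
-- present, `wrap` reduces its argument into `[lo, hi)`; otherwise it is the identity.
#guard ToInt.wrap (some 0) (some 8) 10 == 2
#guard ToInt.wrap (some 0) (some 8) (-3) == 5
#guard ToInt.wrap none none 123 == 123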
class ToInt.Zero (α : Type u) [Zero α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
toInt_zero : toInt (0 : α) = wrap lo? hi? 0
class ToInt.Add (α : Type u) [Add α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
toInt_add : ∀ x y : α, toInt (x + y) = wrap lo? hi? (toInt x + toInt y)
class ToInt.Neg (α : Type u) [Neg α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
toInt_neg : ∀ x : α, toInt (-x) = wrap lo? hi? (-toInt x)
class ToInt.Sub (α : Type u) [Sub α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
toInt_sub : ∀ x y : α, toInt (x - y) = wrap lo? hi? (toInt x - toInt y)
class ToInt.Mod (α : Type u) [Mod α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
/-- One might expect a `wrap` on the right hand side,
but in practice this stronger statement is usually true. -/
toInt_mod : ∀ x y : α, toInt (x % y) = toInt x % toInt y
class ToInt.LE (α : Type u) [LE α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
le_iff : ∀ x y : α, x ≤ y ↔ toInt x ≤ toInt y
class ToInt.LT (α : Type u) [LT α] (lo? hi? : outParam (Option Int)) [ToInt α lo? hi?] where
lt_iff : ∀ x y : α, x < y ↔ toInt x < toInt y
/-! ## Helper theorems -/
theorem ToInt.wrap_add (lo? hi? : Option Int) (x y : Int) :
ToInt.wrap lo? hi? (x + y) = ToInt.wrap lo? hi? (ToInt.wrap lo? hi? x + ToInt.wrap lo? hi? y) := by
simp only [wrap]
split <;> rename_i lo hi
· dsimp
rw [Int.add_left_inj, Int.emod_eq_emod_iff_emod_sub_eq_zero, Int.emod_def (x - lo), Int.emod_def (y - lo)]
have : (x + y - lo -
(x - lo - (hi - lo) * ((x - lo) / (hi - lo)) + lo +
(y - lo - (hi - lo) * ((y - lo) / (hi - lo)) + lo) - lo)) =
(hi - lo) * ((x - lo) / (hi - lo) + (y - lo) / (hi - lo)) := by
simp only [Int.mul_add]
omega
rw [this]
exact Int.mul_emod_right ..
· simp
@[simp]
theorem ToInt.wrap_toInt (lo? hi? : Option Int) [ToInt α lo? hi?] (x : α) :
ToInt.wrap lo? hi? (ToInt.toInt x) = ToInt.toInt x := by
simp only [wrap]
split
· have := ToInt.le_toInt (x := x) rfl
have := ToInt.toInt_lt (x := x) rfl
rw [Int.emod_eq_of_lt (by omega) (by omega)]
omega
· rfl
theorem ToInt.wrap_eq_bmod {i : Int} (h : 0 ≤ i) :
ToInt.wrap (some (-i)) (some i) x = x.bmod ((2 * i).toNat) := by
match i, h with
| (i : Nat), _ =>
have : (2 * (i : Int)).toNat = 2 * i := by omega
rw [this]
simp [Int.bmod_eq_emod, Int.two_mul]
have : (2 * (i : Int) + 1) / 2 = i := by omega
rw [this]
by_cases h : i = 0
· simp [h]
split
· rw [← Int.sub_eq_add_neg, Int.sub_eq_iff_eq_add, Nat.two_mul, Int.natCast_add,
Int.sub_sub, Int.sub_add_cancel]
rw [Int.emod_eq_iff (by omega)]
refine ⟨?_, ?_, ?_⟩
· omega
· have := Int.emod_lt x (b := 2 * (i : Int)) (by omega)
omega
· rw [Int.emod_def]
have : x - 2 * i * (x / (2 * i)) - i - (x + i) = (2 * (i : Int)) * (- (x / (2 * i)) - 1) := by
simp only [Int.mul_sub, Int.mul_neg]
omega
rw [this]
exact Int.dvd_mul_right ..
· rw [← Int.sub_eq_add_neg, Int.sub_eq_iff_eq_add, Int.natCast_zero, Int.sub_zero]
rw [Int.emod_eq_iff (by omega)]
refine ⟨?_, ?_, ?_⟩
· have := Int.emod_nonneg x (b := 2 * (i : Int)) (by omega)
omega
· omega
· rw [Int.emod_def]
have : x - 2 * i * (x / (2 * i)) + i - (x + i) = (2 * (i : Int)) * (- (x / (2 * i))) := by
simp only [Int.mul_neg]
omega
rw [this]
exact Int.dvd_mul_right ..
theorem ToInt.wrap_eq_wrap_iff :
ToInt.wrap (some lo) (some hi) x = ToInt.wrap (some lo) (some hi) y ↔ (x - y) % (hi - lo) = 0 := by
simp only [wrap]
rw [Int.add_left_inj]
rw [Int.emod_eq_emod_iff_emod_sub_eq_zero]
have : x - lo - (y - lo) = x - y := by omega
rw [this]
/-- Construct a `ToInt.Sub` instance from a `ToInt.Add` and `ToInt.Neg` instance and
a `sub_eq_add_neg` assumption. -/
def ToInt.Sub.of_sub_eq_add_neg {α : Type u} [_root_.Add α] [_root_.Neg α] [_root_.Sub α]
(sub_eq_add_neg : ∀ x y : α, x - y = x + -y)
{lo? hi? : Option Int} [ToInt α lo? hi?] [Add α lo? hi?] [Neg α lo? hi?] : ToInt.Sub α lo? hi? where
toInt_sub x y := by
rw [sub_eq_add_neg, ToInt.Add.toInt_add, ToInt.Neg.toInt_neg, Int.sub_eq_add_neg]
conv => rhs; rw [ToInt.wrap_add, ToInt.wrap_toInt]
/-! ## Instances for concrete types-/
instance : ToInt Int none none where
toInt := id
toInt_inj := by simp
le_toInt := by simp
toInt_lt := by simp
@[simp] theorem toInt_int (x : Int) : ToInt.toInt x = x := rfl
instance : ToInt.Add Int none none where
toInt_add := by simp
instance : ToInt.Neg Int none none where
toInt_neg x := by simp
instance : ToInt.Sub Int none none where
toInt_sub x y := by simp
instance : ToInt.Mod Int none none where
toInt_mod x y := by simp
instance : ToInt.LE Int none none where
le_iff x y := by simp
instance : ToInt.LT Int none none where
lt_iff x y := by simp
instance : ToInt Nat (some 0) none where
toInt := Nat.cast
toInt_inj x y := Int.ofNat_inj.mp
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x
toInt_lt := by simp
@[simp] theorem toInt_nat (x : Nat) : ToInt.toInt x = (x : Int) := rfl
instance : ToInt.Add Nat (some 0) none where
toInt_add := by simp
instance : ToInt.Mod Nat (some 0) none where
toInt_mod x y := by simp
instance : ToInt.LE Nat (some 0) none where
le_iff x y := by simp
instance : ToInt.LT Nat (some 0) none where
lt_iff x y := by simp
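As a small sketch of how these instances are meant to compute (assuming the declarations above elaborate as written), `toInt` on `Nat` is definitionally the cast into `Int`, so the equation holds by `rfl`:

```lean
-- Hypothetical usage of the `ToInt Nat (some 0) none` instance above.
example : ToInt.toInt (3 : Nat) = (3 : Int) := rfl
```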
-- Mathlib will add a `ToInt ℕ+ (some 1) none` instance.
instance : ToInt (Fin n) (some 0) (some n) where
toInt x := x.val
toInt_inj x y w := Fin.eq_of_val_eq (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp only [Option.some.injEq] at w; subst w; exact Int.natCast_nonneg x
toInt_lt {hi x} w := by simp only [Option.some.injEq] at w; subst w; exact Int.ofNat_lt.mpr x.isLt
@[simp] theorem toInt_fin (x : Fin n) : ToInt.toInt x = (x.val : Int) := rfl
instance : ToInt.Add (Fin n) (some 0) (some n) where
toInt_add x y := by rfl
instance [NeZero n] : ToInt.Zero (Fin n) (some 0) (some n) where
toInt_zero := by rfl
-- The `ToInt.Neg` and `ToInt.Sub` instances are generated automatically from the `IntModule (Fin n)` instance.
instance : ToInt.Mod (Fin n) (some 0) (some n) where
toInt_mod x y := by
simp only [toInt_fin, Fin.mod_val, Int.natCast_emod]
instance : ToInt.LE (Fin n) (some 0) (some n) where
le_iff x y := by simpa using Fin.le_def
instance : ToInt.LT (Fin n) (some 0) (some n) where
lt_iff x y := by simpa using Fin.lt_def
instance : ToInt UInt8 (some 0) (some (2^8)) where
toInt x := (x.toNat : Int)
toInt_inj x y w := UInt8.toNat_inj.mp (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by simp at w; subst w; exact Int.lt_toNat.mp (UInt8.toNat_lt x)
@[simp] theorem toInt_uint8 (x : UInt8) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add UInt8 (some 0) (some (2^8)) where
toInt_add x y := by simp
instance : ToInt.Zero UInt8 (some 0) (some (2^8)) where
toInt_zero := by simp
instance : ToInt.Mod UInt8 (some 0) (some (2^8)) where
toInt_mod x y := by simp
instance : ToInt.LE UInt8 (some 0) (some (2^8)) where
le_iff x y := by simpa using UInt8.le_iff_toBitVec_le
instance : ToInt.LT UInt8 (some 0) (some (2^8)) where
lt_iff x y := by simpa using UInt8.lt_iff_toBitVec_lt
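The `some (2^8)` upper bound reflects that `UInt8` arithmetic wraps modulo `256`; a hypothetical concrete check of that wrap-around behavior:

```lean
-- 200 + 100 = 300 overflows and wraps to 300 - 256 = 44 in UInt8.
example : ((200 : UInt8) + 100).toNat = 44 := by decide
```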
instance : ToInt UInt16 (some 0) (some (2^16)) where
toInt x := (x.toNat : Int)
toInt_inj x y w := UInt16.toNat_inj.mp (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by simp at w; subst w; exact Int.lt_toNat.mp (UInt16.toNat_lt x)
@[simp] theorem toInt_uint16 (x : UInt16) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add UInt16 (some 0) (some (2^16)) where
toInt_add x y := by simp
instance : ToInt.Zero UInt16 (some 0) (some (2^16)) where
toInt_zero := by simp
instance : ToInt.Mod UInt16 (some 0) (some (2^16)) where
toInt_mod x y := by simp
instance : ToInt.LE UInt16 (some 0) (some (2^16)) where
le_iff x y := by simpa using UInt16.le_iff_toBitVec_le
instance : ToInt.LT UInt16 (some 0) (some (2^16)) where
lt_iff x y := by simpa using UInt16.lt_iff_toBitVec_lt
instance : ToInt UInt32 (some 0) (some (2^32)) where
toInt x := (x.toNat : Int)
toInt_inj x y w := UInt32.toNat_inj.mp (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by simp at w; subst w; exact Int.lt_toNat.mp (UInt32.toNat_lt x)
@[simp] theorem toInt_uint32 (x : UInt32) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add UInt32 (some 0) (some (2^32)) where
toInt_add x y := by simp
instance : ToInt.Zero UInt32 (some 0) (some (2^32)) where
toInt_zero := by simp
instance : ToInt.Mod UInt32 (some 0) (some (2^32)) where
toInt_mod x y := by simp
instance : ToInt.LE UInt32 (some 0) (some (2^32)) where
le_iff x y := by simpa using UInt32.le_iff_toBitVec_le
instance : ToInt.LT UInt32 (some 0) (some (2^32)) where
lt_iff x y := by simpa using UInt32.lt_iff_toBitVec_lt
instance : ToInt UInt64 (some 0) (some (2^64)) where
toInt x := (x.toNat : Int)
toInt_inj x y w := UInt64.toNat_inj.mp (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by simp at w; subst w; exact Int.lt_toNat.mp (UInt64.toNat_lt x)
@[simp] theorem toInt_uint64 (x : UInt64) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add UInt64 (some 0) (some (2^64)) where
toInt_add x y := by simp
instance : ToInt.Zero UInt64 (some 0) (some (2^64)) where
toInt_zero := by simp
instance : ToInt.Mod UInt64 (some 0) (some (2^64)) where
toInt_mod x y := by simp
instance : ToInt.LE UInt64 (some 0) (some (2^64)) where
le_iff x y := by simpa using UInt64.le_iff_toBitVec_le
instance : ToInt.LT UInt64 (some 0) (some (2^64)) where
lt_iff x y := by simpa using UInt64.lt_iff_toBitVec_lt
instance : ToInt USize (some 0) (some (2^System.Platform.numBits)) where
toInt x := (x.toNat : Int)
toInt_inj x y w := USize.toNat_inj.mp (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by
simp at w; subst w
rw [show (2 : Int) ^ System.Platform.numBits = (2 ^ System.Platform.numBits : Nat) by simp,
Int.ofNat_lt]
exact USize.toNat_lt_two_pow_numBits x
@[simp] theorem toInt_usize (x : USize) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add USize (some 0) (some (2^System.Platform.numBits)) where
toInt_add x y := by simp
instance : ToInt.Zero USize (some 0) (some (2^System.Platform.numBits)) where
toInt_zero := by simp
instance : ToInt.Mod USize (some 0) (some (2^System.Platform.numBits)) where
toInt_mod x y := by simp
instance : ToInt.LE USize (some 0) (some (2^System.Platform.numBits)) where
le_iff x y := by simpa using USize.le_iff_toBitVec_le
instance : ToInt.LT USize (some 0) (some (2^System.Platform.numBits)) where
lt_iff x y := by simpa using USize.lt_iff_toBitVec_lt
instance : ToInt Int8 (some (-2^7)) (some (2^7)) where
toInt x := x.toInt
toInt_inj x y w := Int8.toInt_inj.mp w
le_toInt {lo x} w := by simp at w; subst w; exact Int8.le_toInt x
toInt_lt {hi x} w := by simp at w; subst w; exact Int8.toInt_lt x
@[simp] theorem toInt_int8 (x : Int8) : ToInt.toInt x = (x.toInt : Int) := rfl
instance : ToInt.Add Int8 (some (-2^7)) (some (2^7)) where
toInt_add x y := by
simp [Int.bmod_eq_emod]
split <;> · simp; omega
instance : ToInt.Zero Int8 (some (-2^7)) (some (2^7)) where
toInt_zero := by
-- simp -- FIXME: succeeds, but generates a `(kernel) application type mismatch` error!
change (0 : Int8).toInt = _
rw [Int8.toInt_zero]
decide
-- Note that we can not define `ToInt.Mod` instances for `Int8`,
-- because the condition does not hold unless `0 ≤ x.toInt y.toInt x.toInt y = 0`.
instance : ToInt.LE Int8 (some (-2^7)) (some (2^7)) where
le_iff x y := by simpa using Int8.le_iff_toInt_le
instance : ToInt.LT Int8 (some (-2^7)) (some (2^7)) where
lt_iff x y := by simpa using Int8.lt_iff_toInt_lt
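The signed bounds `[-2^7, 2^7)` mean `Int8` addition wraps via the balanced modulus (`Int.bmod`), as used in the `toInt_add` proof above; a hypothetical concrete check:

```lean
-- 100 + 100 = 200 overflows past 127 and wraps to 200 - 256 = -56.
example : ((100 : Int8) + 100).toInt = -56 := by decide
```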
instance : ToInt Int16 (some (-2^15)) (some (2^15)) where
toInt x := x.toInt
toInt_inj x y w := Int16.toInt_inj.mp w
le_toInt {lo x} w := by simp at w; subst w; exact Int16.le_toInt x
toInt_lt {hi x} w := by simp at w; subst w; exact Int16.toInt_lt x
@[simp] theorem toInt_int16 (x : Int16) : ToInt.toInt x = (x.toInt : Int) := rfl
instance : ToInt.Add Int16 (some (-2^15)) (some (2^15)) where
toInt_add x y := by
simp [Int.bmod_eq_emod]
split <;> · simp; omega
instance : ToInt.Zero Int16 (some (-2^15)) (some (2^15)) where
toInt_zero := by
-- simp -- FIXME: succeeds, but generates a `(kernel) application type mismatch` error!
change (0 : Int16).toInt = _
rw [Int16.toInt_zero]
decide
instance : ToInt.LE Int16 (some (-2^15)) (some (2^15)) where
le_iff x y := by simpa using Int16.le_iff_toInt_le
instance : ToInt.LT Int16 (some (-2^15)) (some (2^15)) where
lt_iff x y := by simpa using Int16.lt_iff_toInt_lt
instance : ToInt Int32 (some (-2^31)) (some (2^31)) where
toInt x := x.toInt
toInt_inj x y w := Int32.toInt_inj.mp w
le_toInt {lo x} w := by simp at w; subst w; exact Int32.le_toInt x
toInt_lt {hi x} w := by simp at w; subst w; exact Int32.toInt_lt x
@[simp] theorem toInt_int32 (x : Int32) : ToInt.toInt x = (x.toInt : Int) := rfl
instance : ToInt.Add Int32 (some (-2^31)) (some (2^31)) where
toInt_add x y := by
simp [Int.bmod_eq_emod]
split <;> · simp; omega
instance : ToInt.Zero Int32 (some (-2^31)) (some (2^31)) where
toInt_zero := by
-- simp -- FIXME: succeeds, but generates a `(kernel) application type mismatch` error!
change (0 : Int32).toInt = _
rw [Int32.toInt_zero]
decide
instance : ToInt.LE Int32 (some (-2^31)) (some (2^31)) where
le_iff x y := by simpa using Int32.le_iff_toInt_le
instance : ToInt.LT Int32 (some (-2^31)) (some (2^31)) where
lt_iff x y := by simpa using Int32.lt_iff_toInt_lt
instance : ToInt Int64 (some (-2^63)) (some (2^63)) where
toInt x := x.toInt
toInt_inj x y w := Int64.toInt_inj.mp w
le_toInt {lo x} w := by simp at w; subst w; exact Int64.le_toInt x
toInt_lt {hi x} w := by simp at w; subst w; exact Int64.toInt_lt x
@[simp] theorem toInt_int64 (x : Int64) : ToInt.toInt x = (x.toInt : Int) := rfl
instance : ToInt.Add Int64 (some (-2^63)) (some (2^63)) where
toInt_add x y := by
simp [Int.bmod_eq_emod]
split <;> · simp; omega
instance : ToInt.Zero Int64 (some (-2^63)) (some (2^63)) where
toInt_zero := by
-- simp -- FIXME: succeeds, but generates a `(kernel) application type mismatch` error!
change (0 : Int64).toInt = _
rw [Int64.toInt_zero]
decide
instance : ToInt.LE Int64 (some (-2^63)) (some (2^63)) where
le_iff x y := by simpa using Int64.le_iff_toInt_le
instance : ToInt.LT Int64 (some (-2^63)) (some (2^63)) where
lt_iff x y := by simpa using Int64.lt_iff_toInt_lt
instance : ToInt (BitVec v) (some 0) (some (2^v)) where
toInt x := (x.toNat : Int)
toInt_inj x y w :=
BitVec.eq_of_toNat_eq (Int.ofNat_inj.mp w)
le_toInt {lo x} w := by simp at w; subst w; exact Int.natCast_nonneg x.toNat
toInt_lt {hi x} w := by
simp at w; subst w;
simpa using Int.ofNat_lt.mpr (BitVec.isLt x)
@[simp] theorem toInt_bitVec (x : BitVec v) : ToInt.toInt x = (x.toNat : Int) := rfl
instance : ToInt.Add (BitVec v) (some 0) (some (2^v)) where
toInt_add x y := by simp
instance : ToInt.Zero (BitVec v) (some 0) (some (2^v)) where
toInt_zero := by simp
instance : ToInt.Mod (BitVec v) (some 0) (some (2^v)) where
toInt_mod x y := by simp
instance : ToInt.LE (BitVec v) (some 0) (some (2^v)) where
le_iff x y := by simpa using BitVec.le_def
instance : ToInt.LT (BitVec v) (some 0) (some (2^v)) where
lt_iff x y := by simpa using BitVec.lt_def
instance : ToInt ISize (some (-2^(System.Platform.numBits-1))) (some (2^(System.Platform.numBits-1))) where
toInt x := x.toInt
toInt_inj x y w := ISize.toInt_inj.mp w
le_toInt {lo x} w := by simp at w; subst w; exact ISize.two_pow_numBits_le_toInt x
toInt_lt {hi x} w := by simp at w; subst w; exact ISize.toInt_lt_two_pow_numBits x
@[simp] theorem toInt_isize (x : ISize) : ToInt.toInt x = x.toInt := rfl
instance : ToInt.Add ISize (some (-2^(System.Platform.numBits-1))) (some (2^(System.Platform.numBits-1))) where
toInt_add x y := by
rw [toInt_isize, ISize.toInt_add, ToInt.wrap_eq_bmod (Int.pow_nonneg (by decide))]
have p₁ : (2 : Int) * 2 ^ (System.Platform.numBits - 1) = 2 ^ System.Platform.numBits := by
have := System.Platform.numBits_pos
have : System.Platform.numBits - 1 + 1 = System.Platform.numBits := by omega
simp [← Int.pow_succ', this]
have p₂ : ((2 : Int) ^ System.Platform.numBits).toNat = 2 ^ System.Platform.numBits := by
rw [Int.toNat_pow_of_nonneg (by decide)]
simp
simp [p₁, p₂]
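The key arithmetic fact `p₁` above can be illustrated at a fixed bit width (a hedged sketch: `8` stands in for `System.Platform.numBits`, which is not a numeral):

```lean
-- 2 * 2 ^ (n - 1) = 2 ^ n, checked at n = 8.
example : (2 : Int) * 2 ^ (8 - 1) = 2 ^ 8 := by decide
```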
instance : ToInt.Zero ISize (some (-2^(System.Platform.numBits-1))) (some (2^(System.Platform.numBits-1))) where
toInt_zero := by
rw [toInt_isize]
rw [ISize.toInt_zero, ToInt.wrap_eq_bmod (Int.pow_nonneg (by decide))]
simp
instance instToIntLEISize : ToInt.LE ISize (some (-2^(System.Platform.numBits-1))) (some (2^(System.Platform.numBits-1))) where
le_iff x y := by simpa using ISize.le_iff_toInt_le
instance instToIntLTISize : ToInt.LT ISize (some (-2^(System.Platform.numBits-1))) (some (2^(System.Platform.numBits-1))) where
lt_iff x y := by simpa using ISize.lt_iff_toInt_lt
end Lean.Grind


@@ -57,25 +57,25 @@ theorem nestedProof_congr (p q : Prop) (h : p = q) (hp : p) (hq : q) : HEq (@nes
subst h; apply HEq.refl
@[app_unexpander nestedProof]
def nestedProofUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def nestedProofUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $p:term) => `($p)
| _ => throw ()
@[app_unexpander MatchCond]
def matchCondUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def matchCondUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $p:term) => `($p)
| _ => throw ()
@[app_unexpander EqMatch]
def eqMatchUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def eqMatchUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $lhs:term $rhs:term) => `($lhs = $rhs)
| _ => throw ()
@[app_unexpander offset]
def offsetUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def offsetUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $lhs:term $rhs:term) => `($lhs + $rhs)
| _ => throw ()
@@ -88,7 +88,7 @@ many `NatCast.natCast` applications which is too verbose. We add the following
unexpander to cope with this issue.
-/
@[app_unexpander NatCast.natCast]
def natCastUnexpander : PrettyPrinter.Unexpander := fun stx => do
meta def natCastUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $a:term) => `($a)
| _ => throw ()


@@ -363,7 +363,7 @@ theorem chain_iterates {f : α → α} (hf : monotone f) : chain (iterates f) :=
· left; apply hf; assumption
· right; apply hf; assumption
case sup c hchain hi ih2 =>
show f x ⊑ csup c ∨ csup c ⊑ f x
change f x ⊑ csup c ∨ csup c ⊑ f x
by_cases h : ∃ z, c z ∧ f x ⊑ z
· left
obtain ⟨z, hz, hfz⟩ := h
@@ -380,7 +380,7 @@ theorem chain_iterates {f : α → α} (hf : monotone f) : chain (iterates f) :=
next => contradiction
next => assumption
case sup c hchain hi ih =>
show rel (csup c) y ∨ rel y (csup c)
change rel (csup c) y ∨ rel y (csup c)
by_cases h : ∃ z, c z ∧ rel y z
· right
obtain ⟨z, hz, hfz⟩ := h


@@ -13,6 +13,8 @@ import Init.MetaTypes
import Init.Syntax
import Init.Data.Array.GetLit
import Init.Data.Option.BasicAux
meta import Init.Data.Array.Basic
meta import Init.Syntax
namespace Lean
@@ -246,7 +248,7 @@ def appendBefore (n : Name) (pre : String) : Name :=
| num p n => Name.mkNum (Name.mkStr p pre) n
protected theorem beq_iff_eq {m n : Name} : m == n ↔ m = n := by
show m.beq n _
change m.beq n _
induction m generalizing n <;> cases n <;> simp_all [Name.beq, And.comm]
instance : LawfulBEq Name where
@@ -1610,7 +1612,7 @@ macro (name := declareSimpLikeTactic) doc?:(docComment)?
else
pure (← `(``dsimp), ← `("dsimp"), ← `($[$doc?:docComment]? syntax (name := $tacName) $tacToken:str optConfig (discharger)? (&" only")? (" [" (simpErase <|> simpLemma),* "]")? (location)? : tactic))
`($stx:command
@[macro $tacName] def expandSimp : Macro := fun s => do
@[macro $tacName] meta def expandSimp : Macro := fun s => do
let cfg ← `(optConfig| $cfg)
let s := s.setKind $kind
let s := s.setArg 0 (mkAtomFrom s[0] $tkn (canonical := true))


@@ -8,7 +8,6 @@ Notation for operators defined at Prelude.lean
module
prelude
import Init.Prelude
import Init.Coe
set_option linter.missingDocs true -- keep it documented
@@ -27,14 +26,14 @@ of a lean file. For example, `def foo := 1` is a `command`, as is
`namespace Foo` and `end Foo`. Commands generally have an effect on the state of
adding something to the environment (like a new definition), as well as
commands like `variable` which modify future commands within a scope. -/
def command : Category := {}
meta def command : Category := {}
/-- `term` is the builtin syntax category for terms. A term denotes an expression
in lean's type theory, for example `2 + 2` is a term. The difference between
`Term` and `Expr` is that the former is a kind of syntax, while the latter is
the result of elaboration. For example `by simp` is also a `Term`, but it elaborates
to different `Expr`s depending on the context. -/
def term : Category := {}
meta def term : Category := {}
/-- `tactic` is the builtin syntax category for tactics. These appear after
`by` in proofs, and they are programs that take in the proof context
@@ -44,28 +43,28 @@ a term of the expected type. For example, `simp` is a tactic, used in:
example : 2 + 2 = 4 := by simp
```
-/
def tactic : Category := {}
meta def tactic : Category := {}
/-- `doElem` is a builtin syntax category for elements that can appear in the `do` notation.
For example, `let x ← e` is a `doElem`, and a `do` block consists of a list of `doElem`s. -/
def doElem : Category := {}
meta def doElem : Category := {}
/-- `structInstFieldDecl` is the syntax category for value declarations for fields in structure instance notation.
For example, the `:= 1` and `| 0 => 0 | n + 1 => n` in `{ x := 1, f | 0 => 0 | n + 1 => n }` are in the `structInstFieldDecl` class. -/
def structInstFieldDecl : Category := {}
meta def structInstFieldDecl : Category := {}
/-- `level` is a builtin syntax category for universe levels.
This is the `u` in `Sort u`: it can contain `max` and `imax`, addition with
constants, and variables. -/
def level : Category := {}
meta def level : Category := {}
/-- `attr` is a builtin syntax category for attributes.
Declarations can be annotated with attributes using the `@[...]` notation. -/
def attr : Category := {}
meta def attr : Category := {}
/-- `stx` is a builtin syntax category for syntax. This is the abbreviated
parser notation used inside `syntax` and `macro` declarations. -/
def stx : Category := {}
meta def stx : Category := {}
/-- `prio` is a builtin syntax category for priorities.
Priorities are used in many different attributes.
@@ -73,7 +72,7 @@ Higher numbers denote higher priority, and for example typeclass search will
try high priority instances before low priority.
In addition to literals like `37`, you can also use `low`, `mid`, `high`, as well as
add and subtract priorities. -/
def prio : Category := {}
meta def prio : Category := {}
/-- `prec` is a builtin syntax category for precedences. A precedence is a value
that expresses how tightly a piece of syntax binds: for example `1 + 2 * 3` is
@@ -84,7 +83,7 @@ In addition to literals like `37`, there are some special named precedence level
* `max` for the highest precedence used in term parsers (not actually the maximum possible value)
* `lead` for the precedence of terms not supposed to be used as arguments
and you can also add and subtract precedences. -/
def prec : Category := {}
meta def prec : Category := {}
end Parser.Category
@@ -531,8 +530,21 @@ is interpreted as `f (g x)` rather than `(f g) x`.
syntax:min term " <| " term:min : term
macro_rules
| `($f $args* <| $a) => `($f $args* $a)
| `($f <| $a) => `($f $a)
| `($f $args* <| $a) =>
if a.raw.isMissing then
-- Ensures that `$f $args* <|` is elaborated as `$f $args*`, not `$f $args* sorry`.
-- For the latter, the elaborator produces `TermInfo` where the missing argument has already
-- been applied as `sorry`, which inhibits some language server functionality that relies
-- on this `TermInfo` (e.g. signature help).
-- The parser will still produce an error for `$f $args* <|` in this case.
`($f $args*)
else
`($f $args* $a)
| `($f <| $a) =>
if a.raw.isMissing then
`($f)
else
`($f $a)
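With these rules, `<|` still behaves as a low-precedence application operator whenever the right-hand argument is actually present; a small usage example:

```lean
#eval List.map (· + 1) <| [1, 2, 3]  -- [2, 3, 4]
```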
/--
Haskell-like pipe operator `|>`. `x |> f` means the same as `f x`,
@@ -553,8 +565,21 @@ is interpreted as `f (g x)` rather than `(f g) x`.
syntax:min term atomic(" $" ws) term:min : term
macro_rules
| `($f $args* $ $a) => `($f $args* $a)
| `($f $ $a) => `($f $a)
| `($f $args* $ $a) =>
if a.raw.isMissing then
-- Ensures that `$f $args* $` is elaborated as `$f $args*`, not `$f $args* sorry`.
-- For the latter, the elaborator produces `TermInfo` where the missing argument has already
-- been applied as `sorry`, which inhibits some language server functionality that relies
-- on this `TermInfo` (e.g. signature help).
-- The parser will still produce an error for `$f $args* $` in this case.
`($f $args*)
else
`($f $args* $a)
| `($f $ $a) =>
if a.raw.isMissing then
`($f)
else
`($f $a)
@[inherit_doc Subtype] syntax "{ " withoutPosition(ident (" : " term)? " // " term) " }" : term


@@ -9,10 +9,11 @@ module
prelude
import Init.Data.ToString.Basic
import Init.Data.Array.Subarray
import Init.Conv
import Init.Meta
import Init.While
meta import Init.Data.Option.Basic
meta import Init.Data.Array.Subarray
namespace Lean
@@ -142,13 +143,13 @@ macro tk:"calc" steps:calcSteps : conv =>
`(conv| tactic => calc%$tk $steps)
end Lean
@[app_unexpander Unit.unit] def unexpandUnit : Lean.PrettyPrinter.Unexpander
@[app_unexpander Unit.unit] meta def unexpandUnit : Lean.PrettyPrinter.Unexpander
| `($(_)) => `(())
@[app_unexpander List.nil] def unexpandListNil : Lean.PrettyPrinter.Unexpander
@[app_unexpander List.nil] meta def unexpandListNil : Lean.PrettyPrinter.Unexpander
| `($(_)) => `([])
@[app_unexpander List.cons] def unexpandListCons : Lean.PrettyPrinter.Unexpander
@[app_unexpander List.cons] meta def unexpandListCons : Lean.PrettyPrinter.Unexpander
| `($(_) $x $tail) =>
match tail with
| `([]) => `([$x])
@@ -157,105 +158,105 @@ end Lean
| _ => throw ()
| _ => throw ()
@[app_unexpander List.toArray] def unexpandListToArray : Lean.PrettyPrinter.Unexpander
@[app_unexpander List.toArray] meta def unexpandListToArray : Lean.PrettyPrinter.Unexpander
| `($(_) [$xs,*]) => `(#[$xs,*])
| _ => throw ()
@[app_unexpander Prod.mk] def unexpandProdMk : Lean.PrettyPrinter.Unexpander
@[app_unexpander Prod.mk] meta def unexpandProdMk : Lean.PrettyPrinter.Unexpander
| `($(_) $x ($y, $ys,*)) => `(($x, $y, $ys,*))
| `($(_) $x $y) => `(($x, $y))
| _ => throw ()
@[app_unexpander ite] def unexpandIte : Lean.PrettyPrinter.Unexpander
@[app_unexpander ite] meta def unexpandIte : Lean.PrettyPrinter.Unexpander
| `($(_) $c $t $e) => `(if $c then $t else $e)
| _ => throw ()
@[app_unexpander Eq.ndrec] def unexpandEqNDRec : Lean.PrettyPrinter.Unexpander
@[app_unexpander Eq.ndrec] meta def unexpandEqNDRec : Lean.PrettyPrinter.Unexpander
| `($(_) $m $h) => `($h $m)
| _ => throw ()
@[app_unexpander Eq.rec] def unexpandEqRec : Lean.PrettyPrinter.Unexpander
@[app_unexpander Eq.rec] meta def unexpandEqRec : Lean.PrettyPrinter.Unexpander
| `($(_) $m $h) => `($h $m)
| _ => throw ()
@[app_unexpander Exists] def unexpandExists : Lean.PrettyPrinter.Unexpander
@[app_unexpander Exists] meta def unexpandExists : Lean.PrettyPrinter.Unexpander
| `($(_) fun $x:ident => ∃ $xs:binderIdent*, $b) => `(∃ $x:ident $xs:binderIdent*, $b)
| `($(_) fun $x:ident => $b) => `(∃ $x:ident, $b)
| `($(_) fun ($x:ident : $t) => $b) => `(∃ ($x:ident : $t), $b)
| _ => throw ()
@[app_unexpander Sigma] def unexpandSigma : Lean.PrettyPrinter.Unexpander
@[app_unexpander Sigma] meta def unexpandSigma : Lean.PrettyPrinter.Unexpander
| `($(_) fun ($x:ident : $t) => $b) => `(($x:ident : $t) × $b)
| _ => throw ()
@[app_unexpander PSigma] def unexpandPSigma : Lean.PrettyPrinter.Unexpander
@[app_unexpander PSigma] meta def unexpandPSigma : Lean.PrettyPrinter.Unexpander
| `($(_) fun ($x:ident : $t) => $b) => `(($x:ident : $t) ×' $b)
| _ => throw ()
@[app_unexpander Subtype] def unexpandSubtype : Lean.PrettyPrinter.Unexpander
@[app_unexpander Subtype] meta def unexpandSubtype : Lean.PrettyPrinter.Unexpander
| `($(_) fun ($x:ident : $type) => $p) => `({ $x : $type // $p })
| `($(_) fun $x:ident => $p) => `({ $x // $p })
| _ => throw ()
@[app_unexpander TSyntax] def unexpandTSyntax : Lean.PrettyPrinter.Unexpander
@[app_unexpander TSyntax] meta def unexpandTSyntax : Lean.PrettyPrinter.Unexpander
| `($f [$k]) => `($f $k)
| _ => throw ()
@[app_unexpander TSyntaxArray] def unexpandTSyntaxArray : Lean.PrettyPrinter.Unexpander
@[app_unexpander TSyntaxArray] meta def unexpandTSyntaxArray : Lean.PrettyPrinter.Unexpander
| `($f [$k]) => `($f $k)
| _ => throw ()
@[app_unexpander Syntax.TSepArray] def unexpandTSepArray : Lean.PrettyPrinter.Unexpander
@[app_unexpander Syntax.TSepArray] meta def unexpandTSepArray : Lean.PrettyPrinter.Unexpander
| `($f [$k] $sep) => `($f $k $sep)
| _ => throw ()
@[app_unexpander GetElem.getElem] def unexpandGetElem : Lean.PrettyPrinter.Unexpander
@[app_unexpander GetElem.getElem] meta def unexpandGetElem : Lean.PrettyPrinter.Unexpander
| `($_ $array $index $_) => `($array[$index])
| _ => throw ()
@[app_unexpander getElem!] def unexpandGetElem! : Lean.PrettyPrinter.Unexpander
@[app_unexpander getElem!] meta def unexpandGetElem! : Lean.PrettyPrinter.Unexpander
| `($_ $array $index) => `($array[$index]!)
| _ => throw ()
@[app_unexpander getElem?] def unexpandGetElem? : Lean.PrettyPrinter.Unexpander
@[app_unexpander getElem?] meta def unexpandGetElem? : Lean.PrettyPrinter.Unexpander
| `($_ $array $index) => `($array[$index]?)
| _ => throw ()
@[app_unexpander Array.empty] def unexpandArrayEmpty : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.empty] meta def unexpandArrayEmpty : Lean.PrettyPrinter.Unexpander
| _ => `(#[])
@[app_unexpander Array.mkArray0] def unexpandMkArray0 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray0] meta def unexpandMkArray0 : Lean.PrettyPrinter.Unexpander
| _ => `(#[])
@[app_unexpander Array.mkArray1] def unexpandMkArray1 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray1] meta def unexpandMkArray1 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1) => `(#[$a1])
| _ => throw ()
@[app_unexpander Array.mkArray2] def unexpandMkArray2 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray2] meta def unexpandMkArray2 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2) => `(#[$a1, $a2])
| _ => throw ()
@[app_unexpander Array.mkArray3] def unexpandMkArray3 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray3] meta def unexpandMkArray3 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3) => `(#[$a1, $a2, $a3])
| _ => throw ()
@[app_unexpander Array.mkArray4] def unexpandMkArray4 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray4] meta def unexpandMkArray4 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3 $a4) => `(#[$a1, $a2, $a3, $a4])
| _ => throw ()
@[app_unexpander Array.mkArray5] def unexpandMkArray5 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray5] meta def unexpandMkArray5 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3 $a4 $a5) => `(#[$a1, $a2, $a3, $a4, $a5])
| _ => throw ()
@[app_unexpander Array.mkArray6] def unexpandMkArray6 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray6] meta def unexpandMkArray6 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3 $a4 $a5 $a6) => `(#[$a1, $a2, $a3, $a4, $a5, $a6])
| _ => throw ()
@[app_unexpander Array.mkArray7] def unexpandMkArray7 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray7] meta def unexpandMkArray7 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3 $a4 $a5 $a6 $a7) => `(#[$a1, $a2, $a3, $a4, $a5, $a6, $a7])
| _ => throw ()
@[app_unexpander Array.mkArray8] def unexpandMkArray8 : Lean.PrettyPrinter.Unexpander
@[app_unexpander Array.mkArray8] meta def unexpandMkArray8 : Lean.PrettyPrinter.Unexpander
| `($(_) $a1 $a2 $a3 $a4 $a5 $a6 $a7 $a8) => `(#[$a1, $a2, $a3, $a4, $a5, $a6, $a7, $a8])
| _ => throw ()
@@ -360,13 +361,13 @@ namespace Lean
/-- Unexpander for the `{ x }` notation. -/
@[app_unexpander singleton]
def singletonUnexpander : Lean.PrettyPrinter.Unexpander
meta def singletonUnexpander : Lean.PrettyPrinter.Unexpander
| `($_ $a) => `({ $a:term })
| _ => throw ()
/-- Unexpander for the `{ x, y, ... }` notation. -/
@[app_unexpander insert]
def insertUnexpander : Lean.PrettyPrinter.Unexpander
meta def insertUnexpander : Lean.PrettyPrinter.Unexpander
| `($_ $a { $ts:term,* }) => `({$a:term, $ts,*})
| _ => throw ()


@@ -149,7 +149,7 @@ Because the resulting value is treated as a side-effect-free term, the compiler
duplicate, or delete calls to this function. The side effect may even be hoisted into a constant,
causing the side effect to occur at initialization time, even if it would otherwise never be called.
-/
@[inline] unsafe def unsafeBaseIO (fn : BaseIO α) : α :=
@[noinline] unsafe def unsafeBaseIO (fn : BaseIO α) : α :=
match fn.run () with
| EStateM.Result.ok a _ => a
@@ -567,7 +567,7 @@ def addHeartbeats (count : Nat) : BaseIO Unit := do
/--
Whether a file should be opened for reading, writing, creation and writing, or appending.
A the operating system level, this translates to the mode of a file handle (i.e., a set of `open`
At the operating system level, this translates to the mode of a file handle (i.e., a set of `open`
flags and an `fdopen` mode).
None of the modes represented by this datatype translate line endings (i.e. `O_BINARY` on Windows).


@@ -535,6 +535,12 @@ syntax (name := change) "change " term (location)? : tactic
-/
syntax (name := changeWith) "change " term " with " term (location)? : tactic
/--
`show t` finds the first goal whose target unifies with `t`. It makes that goal the main goal,
performs the unification, and replaces the target with the unified version of `t`.
-/
syntax (name := «show») "show " term : tactic
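A minimal usage example of `show` (the goal `1 + 1 = 2` is definitionally equal to `2 = 2`, so the targets unify):

```lean
example : 1 + 1 = 2 := by
  show 2 = 2
  rfl
```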
/--
Extracts `let` and `let_fun` expressions from within the target or a local hypothesis,
introducing new local definitions.
@@ -695,7 +701,7 @@ syntax (name := dsimp) "dsimp" optConfig (discharger)? (&" only")?
A `simpArg` is either a `*`, `-lemma` or a simp lemma specification
(which includes the `↑` `↓` `←` specifications for pre, post, reverse rewriting).
-/
def simpArg := simpStar.binary `orelse (simpErase.binary `orelse simpLemma)
meta def simpArg := simpStar.binary `orelse (simpErase.binary `orelse simpLemma)
/-- A simp args list is a list of `simpArg`. This is the main argument to `simp`. -/
syntax simpArgs := " [" simpArg,* "]"
@@ -704,7 +710,7 @@ syntax simpArgs := " [" simpArg,* "]"
A `dsimpArg` is similar to `simpArg`, but it does not have the `simpStar` form
because it does not make sense to use hypotheses in `dsimp`.
-/
def dsimpArg := simpErase.binary `orelse simpLemma
meta def dsimpArg := simpErase.binary `orelse simpLemma
/-- A dsimp args list is a list of `dsimpArg`. This is the main argument to `dsimp`. -/
syntax dsimpArgs := " [" dsimpArg,* "]"
@@ -864,11 +870,6 @@ The `let` tactic is for adding definitions to the local context of the main goal
local variables `x : α`, `y : β`, and `z : γ`.
-/
macro "let " d:letDecl : tactic => `(tactic| refine_lift let $d:letDecl; ?_)
/--
`show t` finds the first goal whose target unifies with `t`. It makes that goal the main goal,
performs the unification, and replaces the target with the unified version of `t`.
-/
macro "show " e:term : tactic => `(tactic| refine_lift show $e from ?_) -- TODO: fix, see comment
/-- `let rec f : t := e` adds a recursive definition `f` to the current goal.
The syntax is the same as term-mode `let rec`. -/
syntax (name := letrec) withPosition(atomic("let " &"rec ") letRecDecls) : tactic
@@ -1917,30 +1918,35 @@ syntax "‹" withoutPosition(term) "›" : term
macro_rules | `(‹$type›) => `((by assumption : $type))
/--
`get_elem_tactic_trivial` is an extensible tactic automatically called
`get_elem_tactic_extensible` is an extensible tactic automatically called
by the notation `arr[i]` to prove any side conditions that arise when
constructing the term (e.g. the index is in bounds of the array).
The default behavior is to just try `trivial` (which handles the case
where `i < arr.size` is in the context) and `simp +arith` and `omega`
The default behavior is to try `simp +arith` and `omega`
(for doing linear arithmetic in the index).
(Note that the core tactic `get_elem_tactic` has already tried
`done` and `assumption` before the extensible tactic is called.)
-/
syntax "get_elem_tactic_extensible" : tactic
/-- `get_elem_tactic_trivial` has been deprecated in favour of `get_elem_tactic_extensible`. -/
syntax "get_elem_tactic_trivial" : tactic
macro_rules | `(tactic| get_elem_tactic_trivial) => `(tactic| omega)
macro_rules | `(tactic| get_elem_tactic_trivial) => `(tactic| simp +arith; done)
macro_rules | `(tactic| get_elem_tactic_trivial) => `(tactic| trivial)
macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| omega)
macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| simp +arith; done)
macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| trivial)
/--
`get_elem_tactic` is the tactic automatically called by the notation `arr[i]`
to prove any side conditions that arise when constructing the term
(e.g. the index is in bounds of the array). It just delegates to
`get_elem_tactic_trivial` and gives a diagnostic error message otherwise;
users are encouraged to extend `get_elem_tactic_trivial` instead of this tactic.
`get_elem_tactic_extensible` and gives a diagnostic error message otherwise;
users are encouraged to extend `get_elem_tactic_extensible` instead of this tactic.
-/
macro "get_elem_tactic" : tactic =>
`(tactic| first
/-
Recall that `macro_rules` (namely, for `get_elem_tactic_trivial`) are tried in reverse order.
Recall that `macro_rules` (namely, for `get_elem_tactic_extensible`) are tried in reverse order.
We first, however, try `done`, since the necessary proof may already have been
found during unification, in which case there is no goal to solve (see #6999).
If a goal is present, we want `assumption` to be tried first.
@@ -1953,15 +1959,15 @@ macro "get_elem_tactic" : tactic =>
If `omega` is used to "fill" this proof, we will have a more complex proof term that
cannot be inferred by unification.
We hardcoded `assumption` here to ensure users cannot accidentally break this IF
they add new `macro_rules` for `get_elem_tactic_trivial`.
they add new `macro_rules` for `get_elem_tactic_extensible`.
TODO: Implement priorities for `macro_rules`.
TODO: Ensure we have **high-priority** macro_rules for `get_elem_tactic_trivial` which are
TODO: Ensure we have **high-priority** macro_rules for `get_elem_tactic_extensible` which are
just `done` and `assumption`.
-/
| done
| assumption
| get_elem_tactic_trivial
| get_elem_tactic_extensible
| fail "failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
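As the docstrings above note, users are encouraged to extend `get_elem_tactic_extensible` rather than `get_elem_tactic` itself. A minimal sketch of a user-site extension, assuming the renamed `get_elem_tactic_extensible` from this diff (the `decide` alternative is an illustrative choice, not part of this PR):

```lean
-- Hypothetical user extension: also try `decide` on the index side
-- condition. `macro_rules` alternatives are tried in reverse
-- registration order, so this runs before the built-in ones.
macro_rules
  | `(tactic| get_elem_tactic_extensible) => `(tactic| decide)

-- `#[10, 20, 30][1]` elaborates via `get_elem_tactic`, which can now
-- also discharge the `1 < 3` side goal with `decide`.
example : Nat := #[10, 20, 30][1]
```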


@@ -41,3 +41,5 @@ import Lean.PrivateName
import Lean.PremiseSelection
import Lean.Namespace
import Lean.EnvExtension
import Lean.ErrorExplanation
import Lean.DefEqAttrib


@@ -64,13 +64,6 @@ Checks whether the declaration was originally declared as a theorem; see also
def wasOriginallyTheorem (env : Environment) (declName : Name) : Bool :=
getOriginalConstKind? env declName |>.map (· matches .thm) |>.getD false
-- HACK: remove together with MutualDef HACK when `[dsimp]` is introduced
private def isSimpleRflProof (proof : Expr) : Bool :=
if let .lam _ _ proof _ := proof then
isSimpleRflProof proof
else
proof.isAppOfArity ``rfl 2
private def looksLikeRelevantTheoremProofType (type : Expr) : Bool :=
if let .forallE _ _ type _ := type then
looksLikeRelevantTheoremProofType type
@@ -84,18 +77,12 @@ def addDecl (decl : Declaration) : CoreM Unit := do
-- namespaces
modifyEnv (decl.getNames.foldl registerNamePrefixes)
if !Elab.async.get (← getOptions) then
return (← addSynchronously)
-- convert `Declaration` to `ConstantInfo` to use as a preliminary value in the environment until
-- kernel checking has finished; not all cases are supported yet
let mut exportedInfo? := none
let (name, info, kind) ← match decl with
| .thmDecl thm =>
let exportProof := !(← getEnv).header.isModule ||
-- We should preserve rfl theorems but also we should not override a decision to hide by the
-- MutualDef elaborator via `withoutExporting`
(← getEnv).isExporting && isSimpleRflProof thm.value ||
-- TODO: this is horrible...
looksLikeRelevantTheoremProofType thm.type
if !exportProof then
@@ -106,7 +93,7 @@ def addDecl (decl : Declaration) : CoreM Unit := do
exportedInfo? := some <| .axiomInfo { defn with isUnsafe := defn.safety == .unsafe }
pure (defn.name, .defnInfo defn, .defn)
| .axiomDecl ax => pure (ax.name, .axiomInfo ax, .axiom)
| _ => return (← addSynchronously)
| _ => return (← doAdd)
if decl.getTopLevelNames.all isPrivateName then
exportedInfo? := none
@@ -125,30 +112,26 @@ def addDecl (decl : Declaration) : CoreM Unit := do
-- report preliminary constant info immediately
async.commitConst async.asyncEnv (some info) (exportedInfo? <|> info)
setEnv async.mainEnv
let cancelTk ← IO.CancelToken.new
let checkAct ← Core.wrapAsyncAsSnapshot (cancelTk? := cancelTk) fun _ => do
let doAddAndCommit := do
setEnv async.asyncEnv
try
doAdd
finally
async.commitCheckEnv (← getEnv)
let t ← BaseIO.mapTask checkAct env.checked
let endRange? := (← getRef).getTailPos?.map fun pos => ⟨pos, pos⟩
Core.logSnapshotTask { stx? := none, reportingRange? := endRange?, task := t, cancelTk? := cancelTk }
if Elab.async.get (← getOptions) then
let cancelTk ← IO.CancelToken.new
let checkAct ← Core.wrapAsyncAsSnapshot (cancelTk? := cancelTk) fun _ => doAddAndCommit
let t ← BaseIO.mapTask checkAct env.checked
let endRange? := (← getRef).getTailPos?.map fun pos => ⟨pos, pos⟩
Core.logSnapshotTask { stx? := none, reportingRange? := endRange?, task := t, cancelTk? := cancelTk }
else
try
doAddAndCommit
finally
setEnv async.mainEnv
where
addSynchronously := do
doAdd
-- make constants known to the elaborator; in the synchronous case, we can simply read them from
-- the kernel env
for n in decl.getNames do
let env ← getEnv
let some info := env.checked.get.find? n | unreachable!
-- do *not* report extensions in synchronous case at this point as they are usually set only
-- after adding the constant itself
let res ← env.addConstAsync (reportExts := false) n (.ofConstantInfo info)
res.commitConst env (info? := info)
res.commitCheckEnv res.asyncEnv
setEnv res.mainEnv
doAdd := do
profileitM Exception "type checking" (← getOptions) do
withTraceNode `Kernel (fun _ => return m!"typechecking declarations {decl.getTopLevelNames}") do


@@ -135,7 +135,9 @@ structure TagAttribute where
deriving Inhabited
def registerTagAttribute (name : Name) (descr : String)
(validate : Name → AttrM Unit := fun _ => pure ()) (ref : Name := by exact decl_name%) (applicationTime := AttributeApplicationTime.afterTypeChecking) : IO TagAttribute := do
(validate : Name → AttrM Unit := fun _ => pure ()) (ref : Name := by exact decl_name%)
(applicationTime := AttributeApplicationTime.afterTypeChecking)
(asyncMode : EnvExtension.AsyncMode := .mainOnly) : IO TagAttribute := do
let ext : PersistentEnvExtension Name Name NameSet ← registerPersistentEnvExtension {
name := ref
mkInitial := pure {}
@@ -145,6 +147,12 @@ def registerTagAttribute (name : Name) (descr : String)
let r : Array Name := es.fold (fun a e => a.push e) #[]
r.qsort Name.quickLt
statsFn := fun s => "tag attribute" ++ Format.line ++ "number of local entries: " ++ format s.size
asyncMode := asyncMode
replay? := some fun _ newState newConsts s =>
newConsts.foldl (init := s) fun s c =>
if newState.contains c then
s.insert c
else s
}
let attrImpl : AttributeImpl := {
ref, name, descr, applicationTime
@@ -153,7 +161,9 @@ def registerTagAttribute (name : Name) (descr : String)
unless kind == AttributeKind.global do throwError "invalid attribute '{name}', must be global"
let env ← getEnv
unless (env.getModuleIdxFor? decl).isNone do
throwError "invalid attribute '{name}', declaration is in an imported module"
throwError "invalid attribute '{name}', declaration {.ofConstName decl} is in an imported module"
unless env.asyncMayContain decl do
throwError "invalid attribute '{name}', declaration {.ofConstName decl} is not from the present async context {env.asyncPrefix?}"
validate decl
modifyEnv fun env => ext.addEntry env decl
}
@@ -162,10 +172,27 @@ def registerTagAttribute (name : Name) (descr : String)
namespace TagAttribute
/-- Sets the attribute (without running `validate`) -/
def setTag [Monad m] [MonadError m] [MonadEnv m] (attr : TagAttribute) (decl : Name) : m Unit := do
let env ← getEnv
unless (env.getModuleIdxFor? decl).isNone do
throwError "invalid attribute '{attr.attr.name}', declaration {.ofConstName decl} is in an imported module"
unless env.asyncMayContain decl do
throwError "invalid attribute '{attr.attr.name}', declaration {.ofConstName decl} is not from the present async context {env.asyncPrefix?}"
modifyEnv fun env => attr.ext.addEntry env decl
def hasTag (attr : TagAttribute) (env : Environment) (decl : Name) : Bool :=
match env.getModuleIdxFor? decl with
| some modIdx => (attr.ext.getModuleEntries env modIdx).binSearchContains decl Name.quickLt
| none => (attr.ext.getState env).contains decl
| none =>
if attr.ext.toEnvExtension.asyncMode matches .async then
-- It seems that the env extension API doesn't quite allow querying attributes in a way
-- that works for realizable constants, but without waiting on proofs to finish.
-- Until then, we use the following overapproximation, to be refined later:
(attr.ext.findStateAsync env decl).contains decl ||
(attr.ext.getState env (asyncMode := .local)).contains decl
else
(attr.ext.getState env).contains decl
end TagAttribute
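For illustration, a tag attribute registered with the `asyncMode` parameter added in this diff might look like the following sketch (the attribute name and description are made up):

```lean
import Lean

open Lean

-- Hypothetical attribute using the new `asyncMode` parameter from this
-- diff; `.async` is the mode queried by `TagAttribute.hasTag` above.
initialize myTagAttr : TagAttribute ←
  registerTagAttribute `my_tag "marks declarations for hypothetical tooling"
    (asyncMode := .async)
```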


@@ -15,7 +15,15 @@ def declareBuiltinDocStringAndRanges (declName : Name) : AttrM Unit := do
if let some declRanges ← findDeclarationRanges? declName then
declareBuiltin (declName ++ `declRange) (mkAppN (mkConst ``addBuiltinDeclarationRanges) #[toExpr declName, toExpr declRanges])
builtin_initialize
/--
Makes the documentation and location of a declaration available as a builtin.
This allows the documentation of core Lean features to be visible without importing the file they
are defined in. This is only useful during bootstrapping and should not be used outside of
the Lean source code.
-/
@[builtin_init, builtin_doc]
private def initFn :=
registerBuiltinAttribute {
name := `builtin_doc
descr := "make the docs and location of this declaration available as a builtin"


@@ -159,7 +159,12 @@ def addClass (env : Environment) (clsName : Name) : Except MessageData Environme
let outParams ← checkOutParam 0 #[] #[] decl.type
return classExtension.addEntry env { name := clsName, outParams }
builtin_initialize
/--
Registers an inductive type or structure as a type class. Using `class` or `class inductive` is
generally preferred over using `@[class] structure` or `@[class] inductive` directly.
-/
@[builtin_init, builtin_doc]
private def init :=
registerBuiltinAttribute {
name := `class
descr := "type class"


@@ -14,6 +14,7 @@ import Lean.Compiler.NeverExtractAttr
import Lean.Compiler.IR
import Lean.Compiler.CSimpAttr
import Lean.Compiler.FFI
import Lean.Compiler.MetaAttr
import Lean.Compiler.NoncomputableAttr
import Lean.Compiler.Main
import Lean.Compiler.AtMostOnce -- TODO: delete after we port code generator to Lean


@@ -48,7 +48,22 @@ def add (declName : Name) (kind : AttributeKind) : CoreM Unit := do
else
throwError "invalid 'csimp' theorem, only constant replacement theorems (e.g., `@f = @g`) are currently supported."
builtin_initialize
/--
Tags compiler simplification theorems, which allow one value to be replaced by another equal value
in compiled code. This is typically used to replace a slow function whose definition is convenient
in proofs with a faster equivalent or to make noncomputable functions computable. In particular,
many operations on lists and arrays are replaced by tail-recursive equivalents.
A compiler simplification theorem cannot take any parameters and must prove a statement `@f = @g`
where `f` and `g` may be arbitrary constants. In functions defined after the theorem tagged
`@[csimp]`, any occurrence of `f` is replaced with `g` in compiled code, but not in the type
theory. In this sense, `@[csimp]` is a safer alternative to `@[implemented_by]`.
However, it is still possible to register unsound `@[csimp]` lemmas by using `unsafe` or unsound
axioms (like `sorryAx`).
-/
@[builtin_init, builtin_doc]
private def initFn :=
registerBuiltinAttribute {
name := `csimp
descr := "simplification theorem for the compiler"


@@ -17,6 +17,36 @@ private def isValidCppName : Name → Bool
| .str p s => isValidCppId s && isValidCppName p
| _ => false
/--
Exports a function under the provided unmangled symbol name. This can be used to refer to Lean
functions from other programming languages like C.
Example:
```
@[export lean_color_from_map]
def colorValue (properties : @& Std.HashMap String String) : UInt32 :=
match properties["color"]? with
| some "red" => 0xff0000
| some "green" => 0x00ff00
| some "blue" => 0x0000ff
| _ => -1
```
C code:
```c
#include <lean/lean.h>
uint32_t lean_color_from_map(b_lean_obj_arg properties);
void fill_rectangle_from_map(b_lean_obj_arg properties) {
uint32_t color = lean_color_from_map(properties);
// ...
}
```
The opposite of this is `@[extern]`, which allows Lean functions to refer to functions from other
programming languages.
-/
@[builtin_doc]
builtin_initialize exportAttr : ParametricAttribute Name ←
registerParametricAttribute {
name := `export,


@@ -9,6 +9,12 @@ import Lean.Compiler.IR.Basic
namespace Lean.IR.UnboxResult
/--
Tags types that the compiler should unbox if they occur in result values.
This attribute currently has no effect.
-/
@[builtin_doc]
builtin_initialize unboxAttr : TagAttribute ←
registerTagAttribute `unbox "compiler tries to unbox result values if their types are tagged with `[unbox]`" fun declName => do
let cinfo ← getConstInfo declName;
@@ -19,6 +25,6 @@ builtin_initialize unboxAttr : TagAttribute ←
| _ => throwError "constant must be an inductive type"
def hasUnboxAttr (env : Environment) (n : Name) : Bool :=
unboxAttr.hasTag env n
unboxAttr.hasTag env n
end Lean.IR.UnboxResult


@@ -11,6 +11,37 @@ import Lean.Elab.InfoTree
namespace Lean.Compiler
/--
Instructs the compiler to use a different function as the implementation of a function. With the
exception of tactics that call native code such as `native_decide`, the kernel and type checking
are unaffected. When this attribute is used on a function, the function is not compiled and all
compiler-related attributes (e.g. `noncomputable`, `@[inline]`) are ignored. Calls to this
function are replaced by calls to its implementation.
The most common use cases of `@[implemented_by]` are to provide an efficient unsafe implementation
and to make an unsafe function accessible in safe code through an opaque function:
```
unsafe def testEqImpl (as bs : Array Nat) : Bool :=
ptrEq as bs || as == bs
@[implemented_by testEqImpl]
def testEq (as bs : Array Nat) : Bool :=
as == bs
unsafe def printAddrImpl {α : Type u} (x : α) : IO Unit :=
IO.println s!"Address: {ptrAddrUnsafe x}"
@[implemented_by printAddrImpl]
opaque printAddr {α : Type u} (x : α) : IO Unit
```
The provided implementation is not checked to be equivalent to the original definition. This makes
it possible to prove `False` with `native_decide` using incorrect implementations. For a safer
variant of this attribute that however doesn't work for unsafe implementations, see `@[csimp]`,
which requires a proof that the two functions are equal.
-/
@[builtin_doc]
builtin_initialize implementedByAttr : ParametricAttribute Name ← registerParametricAttribute {
name := `implemented_by
descr := "name of the Lean (probably unsafe) function that implements opaque constant"


@@ -96,7 +96,28 @@ private opaque registerInitAttrInner (attrName : Name) (runAfterImport : Bool) (
def registerInitAttr (attrName : Name) (runAfterImport : Bool) (ref : Name := by exact decl_name%) : IO (ParametricAttribute Name) :=
registerInitAttrInner attrName runAfterImport ref
/--
Registers an initialization procedure. Initialization procedures are run in files that import the
file they are defined in.
This attribute comes in two kinds: Without arguments, the tagged declaration should have type
`IO Unit` and is simply run during initialization. With a declaration name as an argument, the
tagged declaration should be an opaque constant and the provided declaration name an action in `IO`
that returns a value of the type of the tagged declaration. Such initialization procedures store
the resulting value and make it accessible through the tagged declaration.
The `initialize` command should usually be preferred over using this attribute directly.
-/
@[builtin_doc]
builtin_initialize regularInitAttr : ParametricAttribute Name ← registerInitAttr `init true
/--
Registers a builtin initialization procedure.
This attribute is used internally to define builtin initialization procedures for bootstrapping and
should not be used otherwise.
-/
@[builtin_doc]
builtin_initialize builtinInitAttr : ParametricAttribute Name ← registerInitAttr `builtin_init false
def getInitFnNameForCore? (env : Environment) (attr : ParametricAttribute Name) (fn : Name) : Option Name :=
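As the docstring notes, the `initialize` command is the usual front end for this attribute. A minimal sketch of both kinds of initialization procedure (the names here are hypothetical):

```lean
import Lean

-- Kind 1: an `IO Unit` action that is simply run during initialization.
initialize IO.println "module loaded"

-- Kind 2: an `IO` action whose cached result backs an opaque constant.
initialize myCounter : IO.Ref Nat ← IO.mkRef 0
```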


@@ -14,7 +14,7 @@ inductive InlineAttributeKind where
deriving Inhabited, BEq, Hashable
/--
This is an approximate test for testing whether `declName` can be annotated with the `[macro_inline]` attribute or not.
This is an approximate test for testing whether `declName` can be annotated with the `[macro_inline]` attribute or not.
-/
private def isValidMacroInline (declName : Name) : CoreM Bool := do
let .defnInfo info ← getConstInfo declName
@@ -32,6 +32,27 @@ private def isValidMacroInline (declName : Name) : CoreM Bool := do
return false
return true
/--
Changes the inlining behavior. This attribute comes in several variants:
- `@[inline]`: marks the definition to be inlined when it is appropriate.
- `@[inline_if_reduce]`: marks the definition to be inlined if an application of it after inlining
and applying reduction isn't a `match` expression. This attribute can be used for inlining
structurally recursive functions.
- `@[noinline]`: marks the definition to never be inlined.
- `@[always_inline]`: marks the definition to always be inlined.
- `@[macro_inline]`: marks the definition to always be inlined at the beginning of compilation.
This makes it possible to define functions that evaluate some of their parameters lazily.
Example:
```
@[macro_inline]
def test (x y : Nat) : Nat :=
if x = 42 then x else y
#eval test 42 (2^1000000000000) -- doesn't compute 2^1000000000000
```
Only non-recursive functions may be marked `@[macro_inline]`.
-/
@[builtin_doc]
builtin_initialize inlineAttrs : EnumAttributes InlineAttributeKind ←
registerEnumAttributes
[(`inline, "mark definition to be inlined", .inline),


@@ -166,7 +166,7 @@ it is a free variable, a type (or type former), or `lcErased`.
`Check.lean` contains a substitution validator.
-/
abbrev FVarSubst := Std.HashMap FVarId Expr
abbrev FVarSubst := Std.HashMap FVarId Arg
/--
Replace the free variables in `e` using the given substitution.
@@ -191,7 +191,9 @@ where
if e.hasFVar then
match e with
| .fvar fvarId => match s[fvarId]? with
| some e => if translator then e else go e
| some (.fvar fvarId') => if translator then .fvar fvarId' else go (.fvar fvarId')
| some (.type e) => if translator then e else go e
| some .erased => erasedExpr
| none => e
| .lit .. | .const .. | .sort .. | .mvar .. | .bvar .. => e
| .app f a => e.updateApp! (goApp f) (go a) |>.headBeta
@@ -211,7 +213,7 @@ inductive NormFVarResult where
fvar (fvarId : FVarId)
| /--
Free variable has been erased. This can happen when instantiating polymorphic code
with computationally irrelant stuff. -/
with computationally irrelevant stuff. -/
erased
deriving Inhabited
@@ -230,11 +232,9 @@ private partial def normFVarImp (s : FVarSubst) (fvarId : FVarId) (translator :
.fvar fvarId'
else
normFVarImp s fvarId' translator
| some e =>
if e.isErased then
.erased
else
panic! s!"invalid LCNF substitution of free variable with expression {e}"
-- Types and type formers are only preserved as hints and
-- are erased in computationally relevant contexts.
| some .erased | some (.type _) => .erased
| none => .fvar fvarId
/--
@@ -247,10 +247,9 @@ private partial def normArgImp (s : FVarSubst) (arg : Arg) (translator : Bool) :
| .erased => arg
| .fvar fvarId =>
match s[fvarId]? with
| some (.fvar fvarId') =>
let arg' := .fvar fvarId'
| some (arg'@(.fvar _)) =>
if translator then arg' else normArgImp s arg' translator
| some e => if e.isErased then .erased else .type e
| some (arg'@.erased) | some (arg'@(.type _)) => arg'
| none => arg
| .type e => arg.updateType! (normExprImp s e translator)
@@ -292,21 +291,20 @@ export MonadFVarSubstState (modifySubst)
instance (m n) [MonadLift m n] [MonadFVarSubstState m] : MonadFVarSubstState n where
modifySubst f := liftM (modifySubst f : m _)
/--
Add the substitution `fvarId ↦ e`, `e` must be a valid LCNF `Arg`.
See `Check.lean` for the free variable substitution checker.
-/
@[inline] def addSubst [MonadFVarSubstState m] (fvarId : FVarId) (arg : Arg) : m Unit :=
modifySubst fun s => s.insert fvarId arg
/--
Add the entry `fvarId ↦ fvarId'` to the free variable substitution.
-/
@[inline] def addFVarSubst [MonadFVarSubstState m] (fvarId : FVarId) (fvarId' : FVarId) : m Unit :=
modifySubst fun s => s.insert fvarId (.fvar fvarId')
/--
Add the substitution `fvarId ↦ e`, `e` must be a valid LCNF argument.
That is, it must be a free variable, type (or type former), or `lcErased`.
See `Check.lean` for the free variable substitution checker.
-/
@[inline] def addSubst [MonadFVarSubstState m] (fvarId : FVarId) (e : Expr) : m Unit :=
modifySubst fun s => s.insert fvarId e
@[inline, inherit_doc normFVarImp] def normFVar [MonadFVarSubst m t] [Monad m] (fvarId : FVarId) : m NormFVarResult :=
return normFVarImp (← getSubst) fvarId t


@@ -148,8 +148,13 @@ partial def Decl.extractClosed (decl : Decl) (sccDecls : Array Decl) : CompilerM
def extractClosed : Pass where
phase := .mono
name := `extractClosed
run := fun decls =>
decls.foldlM (init := #[]) fun newDecls decl => return newDecls ++ (← decl.extractClosed decls)
run := fun decls => do
-- Reuse the option from the old compiler for now.
if (← getOptions).getBool `compiler.extract_closed true then
decls.foldlM (init := #[]) fun newDecls decl =>
return newDecls ++ (← decl.extractClosed decls)
else
return decls
builtin_initialize registerTraceClass `Compiler.extractClosed (inherited := true)


@@ -546,13 +546,13 @@ where
let mut newArgs := knownArgs
for (param, arg) in decl.params.zip args do
if let some knownVal := newArgs[param.fvarId]? then
if arg.toExpr != knownVal then
if arg != knownVal then
newArgs := newArgs.erase param.fvarId
modify fun s => { s with jpJmpArgs := s.jpJmpArgs.insert fn newArgs }
else
let folder := fun acc (param, arg) => do
if (← allFVarM (isInJpScope fn) arg) then
return acc.insert param.fvarId arg.toExpr
return acc.insert param.fvarId arg
else
return acc
let interestingArgs ← decl.params.zip args |>.foldlM (init := {}) folder


@@ -35,6 +35,9 @@ structure TrivialStructureInfo where
fieldIdx : Nat
deriving Inhabited, Repr
builtin_initialize trivialStructureInfoExt : CacheExtension Name (Option TrivialStructureInfo) ←
CacheExtension.register
/--
Return `some fieldIdx` if `declName` is the name of an inductive datatype s.t.
- It does not have builtin support in the runtime.
@@ -42,10 +45,19 @@ Return `some fieldIdx` if `declName` is the name of an inductive datatype s.t.
- This constructor has only one computationally relevant field.
-/
def hasTrivialStructure? (declName : Name) : CoreM (Option TrivialStructureInfo) := do
match (← trivialStructureInfoExt.find? declName) with
| some info? => return info?
| none =>
let info? ← fillCache
trivialStructureInfoExt.insert declName info?
return info?
where fillCache : CoreM (Option TrivialStructureInfo) := do
if isRuntimeBultinType declName then return none
let .inductInfo info ← getConstInfo declName | return none
if info.isUnsafe || info.isRec then return none
let [ctorName] := info.ctors | return none
let ctorType ← getOtherDeclBaseType ctorName []
if ctorType.isErased then return none
let mask ← getRelevantCtorFields ctorName
let mut result := none
for h : i in [:mask.size] do
@@ -86,11 +98,7 @@ partial def toMonoType (type : Expr) : CoreM Expr := do
where
visitApp (f : Expr) (args : Array Expr) : CoreM Expr := do
match f with
| .const ``lcErased _ =>
if args.all (·.isErased) then
return erasedExpr
else
return anyExpr
| .const ``lcErased _ => return erasedExpr
| .const ``lcAny _ => return anyExpr
| .const ``Decidable _ => return mkConst ``Bool
| .const declName us =>
@@ -100,6 +108,7 @@ where
else
let mut result := mkConst declName
let mut type ← getOtherDeclBaseType declName us
if type.isErased then return erasedExpr
for arg in args do
let .forallE _ d b _ := type.headBeta | unreachable!
let arg := arg.headBeta

View File

@@ -68,6 +68,7 @@ def builtinPassManager : PassManager := {
toMono,
simp (occurrence := 3) (phase := .mono),
reduceJpArity (phase := .mono),
structProjCases,
extendJoinPointContext (phase := .mono) (occurrence := 0),
floatLetIn (phase := .mono) (occurrence := 1),
reduceArity,
@@ -78,7 +79,6 @@ def builtinPassManager : PassManager := {
lambdaLifting,
extendJoinPointContext (phase := .mono) (occurrence := 1),
simp (occurrence := 5) (phase := .mono),
structProjCases,
cse (occurrence := 2) (phase := .mono),
saveMono, -- End of mono phase
extractClosed

Some files were not shown because too many files have changed in this diff.