Compare commits

...

175 Commits

Author SHA1 Message Date
Kim Morrison
8d92ff842f chore: update stage0 2025-11-21 18:16:57 +11:00
Kim Morrison
f13bbaed5c feat: #grind_lint skip suffix
delete old grind_lint

move exception to separate file

note about stage0
2025-11-21 18:16:53 +11:00
Markus Himmel
51b67385cc refactor: better name for String.replaceStart and variants (#11290)
This PR renames `String.replaceStartEnd` to `String.slice`,
`String.replaceStart` to `String.sliceFrom`, and `String.replaceEnd` to
`String.sliceTo`, and similar for the corresponding functions on
`String.Slice`.
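In use, the renaming looks roughly like this (a sketch; the exact argument shapes are assumptions, not taken from the PR):

```lean
-- Sketch of the renaming (argument shapes are assumptions):
-- s.replaceStartEnd a b  becomes  s.slice a b      -- between two positions
-- s.replaceStart a       becomes  s.sliceFrom a    -- from a position to the end
-- s.replaceEnd b         becomes  s.sliceTo b      -- from the start to a position
```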
2025-11-20 16:42:27 +00:00
Wojciech Różowski
556e96088e feat: add lemmas relating getMin/getMin?/getMin!/getMinD and insertion to the empty (D)TreeMap/TreeSet (#11231)
This PR adds several lemmas that relate
`getMin`/`getMin?`/`getMin!`/`getMinD` and insertion to the empty
(D)TreeMap/TreeSet and their extensional variants.

---------

Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
2025-11-20 16:35:07 +00:00
Paul Reichert
649d0b4eb5 refactor: remove duplicated internal lemmas (#11260)
This PR removes some duplicated internal lemmas of the hash map and tree
map infrastructure.
2025-11-20 16:29:27 +00:00
Sebastian Ullrich
e5e7a89fdc fix: shake: only record used simp theorems as dependencies, plus simprocs (#11287) 2025-11-20 15:43:25 +00:00
Sebastian Ullrich
7ef229d03d chore: shake: re-add attribute rev use (#11288)
Global `attribute` commands on non-local declarations are impossible to
track granularly a priori and so should be preserved by `shake` by
default. A new `shake` option could be added to ignore these
dependencies for evaluation.
2025-11-20 15:39:38 +00:00
Markus Himmel
7267ed707a feat: string patterns for decidable predicates on Char (#11285)
This PR adds `Std.Slice.Pattern` instances for `p : Char -> Prop` as
long as `DecidablePred p`, to allow things like `"hello".dropWhile (· =
'h')`.

To achieve this, we refactor `ForwardPattern` and friends to be
"non-uniform", i.e., the class is now `ForwardPattern pat`, not
`ForwardPattern ρ` (where `pat : ρ`).
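Spelled out as code, the example above might look like this (a sketch; after the string-slice migration the result is a `String.Slice`):

```lean
-- With `p : Char → Prop` and `[DecidablePred p]`, the predicate itself
-- serves as a pattern (example taken from the description above):
def ex := "hello".dropWhile (· = 'h')
-- previously this required a `Bool`-valued predicate such as (· == 'h')
```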
2025-11-20 15:30:37 +00:00
Wojciech Różowski
89d4e9bd4c feat: add intersection for ExtDHashMap/ExtHashMap/ExtHashSet (#11241)
This PR provides an intersection operation for
`ExtDHashMap`/`ExtHashMap`/`ExtHashSet` and proves several lemmas about
it.

---------

Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
2025-11-20 15:24:28 +00:00
Wojciech Różowski
108a3d1b44 feat: add intersection on DTreeMap/TreeMap/TreeSet (#11165)
This PR provides an intersection operation on `DTreeMap`/`TreeMap`/`TreeSet` and
proves several lemmas about it.

---------

Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
2025-11-20 15:08:30 +00:00
Markus Himmel
f7ed158002 chore: introduce and immediately deprecate String.Slice.length (#11286)
This PR adds a function `String.Slice.length` with the following
deprecation message: "There is no constant-time length function on slices.
Use `s.positions.count` instead, or `isEmpty` if you only need to know
whether the slice is empty."
2025-11-20 14:31:46 +00:00
Markus Himmel
cf0e4441e8 chore: create alias String.Slice.any for String.Slice.contains (#11282)
This PR adds the alias `String.Slice.any` for `String.Slice.contains`.

It would probably be even better to only have one, but we don't have a
good mechanism for pointing people looking for one towards the other, so
an alias it is for now.
2025-11-20 13:21:30 +00:00
Markus Himmel
2c12bc9fdf chore: more deprecations for string migration (#11281)
This PR adds a few deprecations for functions that never existed but
that are still helpful for people migrating their code post-#11180.
2025-11-20 13:09:52 +00:00
Paul Reichert
fc6e0454c7 feat: add more lemmas about Array and List slices, support subslices (#11178)
This PR provides more lemmas about `Subarray` and `ListSlice` and it
also adds support for subslices of these two types of slices.
2025-11-20 10:46:17 +00:00
Kim Morrison
a106ea053f test: split grind_lint.lean into 7 smaller files for faster CI (#11271)
This PR splits the single grind_lint.lean test (50+ seconds) into 7
separate files that each run in under 7 seconds:

- grind_lint_list.lean (5.7s): List namespace with exceptions
- grind_lint_array.lean (4.6s): Array namespace
- grind_lint_bitvec.lean (3.9s): BitVec namespace with exceptions
- grind_lint_std_hashmap.lean (6.8s): Std hash map/set namespaces
- grind_lint_std_treemap.lean (~6s): Std tree map/set namespaces
- grind_lint_std_misc.lean (~5s): Std.Do, Std.Range, Std.Tactic
- grind_lint_misc.lean (5.5s): All other non-Lean namespaces

Each file maintains complete namespace coverage and preserves all
existing exceptions. The split enables better CI parallelization and
faster feedback.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-20 05:19:02 +00:00
Leonardo de Moura
00600806ad fix: proof construction in grind ring (#11273)
This PR fixes a bug during proof construction in `grind`.
2025-11-20 04:52:18 +00:00
Aaron Liu
5c8ebd8868 feat: make Option.decidableEqNone coherent with Option.instDecidableEq (#9302)
This PR modifies `Option.instDecidableEq` and `Option.decidableEqNone`
so that the latter can be made into a global instance without causing
diamonds. It also adds `Option.decidableNoneEq`.

See
[Zulip](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Option.2EdecidableEqNone/near/527226250).

---------

Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
Co-authored-by: Rob Simmons <rob@lean-fro.org>
2025-11-20 01:48:42 +00:00
Leonardo de Moura
47228b94fd feat: arbitrary grind parameters (#11268)
This PR implements support for arbitrary `grind` parameters. The feature
is similar to the one available in `simp`, where a proof term is treated
as a local universe-polymorphic lemma. This feature relies on `grind
-revert` (see #11248). For example, users can now write:

```lean
def snd (p : α × β) : β := p.2
theorem snd_eq (a : α) (b : β) : snd (a, b) = b := rfl

/--
trace: [grind.ematch.instance] snd_eq (a + 1): snd (a + 1, Type) = Type
[grind.ematch.instance] snd_eq (a + 1): snd (a + 1, true) = true
-/
#guard_msgs (trace) in
set_option trace.grind.ematch.instance true in
example (a : Nat) : (snd (a + 1, true), snd (a + 1, Type), snd (2, 2)) = (true, Type, snd (2, 2)) := by
  grind [snd_eq (a + 1)]
```

Note that in the example above, `snd_eq` is instantiated only twice, but
with different universe parameters.
As described in #11248, the new feature cannot be used with `grind
+revert`.
2025-11-19 21:01:01 +00:00
Lean stage0 autoupdater
126fca1ec8 chore: update stage0 2025-11-19 19:40:23 +00:00
Leonardo de Moura
2ed025ade8 feat: mark sizeOf theorems as grind theorems (#11265)
This PR marks the automatically generated `sizeOf` theorems as `grind`
theorems.

closes #11259

Note: requested a stage0 update; we need it to be able to solve the example in
the issue above.
```lean
example (a: Nat) (b: Nat): sizeOf a < sizeOf (a, b) := by
  grind
```
2025-11-19 18:38:35 +00:00
Henrik Böving
827a96ade3 fix: several memory leaks in the new String API (#11263)
This PR fixes several memory leaks in the new `String` API.

These leaks are mostly situations where we forgot to put borrowing
annotations. The single exception is the new `String` constructor
`ofByteArray`. It cannot take the `ByteArray` as a borrowed argument
anymore and must thus free it on its own.
2025-11-19 18:23:35 +00:00
Sebastian Ullrich
e0f96208e4 chore: typo in error message (#11262) 2025-11-19 17:15:11 +00:00
Joachim Breitner
5cc0a10346 refactor: use Match.AltParamInfo also for splitters (#11261)
This PR continues the homogenization between matchers and splitters,
following up on #11256. In particular it removes the ambiguity whether
`numParams` includes the `discrEqns` or not.
2025-11-19 16:13:53 +00:00
Lean stage0 autoupdater
1b6fba49c2 chore: update stage0 2025-11-19 15:57:48 +00:00
Joachim Breitner
63bd0b5e77 refactor: introduce Match.altInfos (#11256)
This PR replaces `MatcherInfo.numAltParams` with a more detailed data
structure that allows us, in particular, to distinguish between an
alternative for a constructor with a `Unit` field and the alternative
for a nullary constructor, where an artificial `Unit` argument is
introduced.
2025-11-19 15:09:17 +00:00
Lean stage0 autoupdater
75342961fc chore: update stage0 2025-11-19 13:58:11 +00:00
Henrik Böving
52b687cab4 perf: less allocations when using string patterns (#11255)
This PR reduces the allocations when using string patterns. In
particular
`startsWith`, `dropPrefix?`, `endsWith`, `dropSuffix?` are optimized.
2025-11-19 13:06:27 +00:00
Joachim Breitner
75570f327f refactor: thunk field-less alternatives of casesOnSameCtor (#11254)
This PR adds a `Unit` argument to `casesOnSameCtor` to make it behave
more similarly to a matcher. Follow-up in spirit to #11239.
2025-11-19 09:53:09 +00:00
Markus Himmel
52d05b6972 refactor: use String.split instead of String.splitOn or String.splitToList (#11250)
This PR introduces a function `String.split` which is based on
`String.Slice.split` and therefore supports all pattern types and
returns a `Std.Iter String.Slice`.

This supersedes the functions `String.splitOn` and `String.splitToList`,
and we remove all uses of these functions from core. They will be
deprecated in a future PR.

Migrating from `String.splitOn` and `String.splitToList` is easy: we
introduce functions `Iter.toStringList` and `Iter.toStringArray` that
can be used to conveniently go from `Std.Iter String.Slice` to `List
String` and `Array String`, so for example `s.splitOn "foo"` can be
replaced by `s.split "foo" |>.toStringList`.
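A migration sketch, assuming the API exactly as described above:

```lean
-- old: s.splitOn "foo"  (returned `List String`)
def partsList (s : String) : List String :=
  s.split "foo" |>.toStringList
-- or, when an `Array String` is wanted:
def partsArray (s : String) : Array String :=
  s.split "foo" |>.toStringArray
```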
2025-11-19 09:35:19 +00:00
Joachim Breitner
f7031c7aa9 perf: in match splitters, thunk alts if needed (#11239)
This PR adds a `Unit` assumption to alternatives of the splitter that
would otherwise not have arguments. This fixes #11211.

In practice these argument-less alternatives did not cause wrong
behavior, as the motive when used with `split` is always a function
type. But it is better to be safe here (maybe someone uses splitters in
other ways); it may also increase the effectiveness of #10184 and simplify
#11220.

The perf impact is insignificant in the grand scheme of things on
stdlib, but the change is effective:
```
~/lean4 $ build/release/stage1/bin/lean tests/lean/run/matchSplitStats.lean 
969 splitters found
455 splitters are const defs
~/lean4 $ build/release/stage2/bin/lean tests/lean/run/matchSplitStats.lean 
969 splitters found
829 splitters are const defs
```
2025-11-19 09:08:34 +00:00
Lean stage0 autoupdater
9fc90488ce chore: update stage0 2025-11-19 08:40:32 +00:00
Markus Himmel
59949f89ee chore: add function String.Pos.extract (#11251)
This PR is a preparatory bootstrapping PR for #11240.
2025-11-19 08:05:28 +00:00
Leonardo de Moura
61186629d6 feat: grind -revert (#11248)
This PR implements the option `revert`, which is set to `false` by
default. To recover the old `grind` behavior, you should use `grind
+revert`. Previously, `grind` used the `RevSimpIntro` idiom, i.e., it
would revert all hypotheses and then re-introduce them while simplifying
and applying eager `cases`. This idiom created several problems:

* Users reported that `grind` would include unnecessary parameters. See
[here](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Grind.20aggressively.20includes.20local.20hypotheses.2E/near/554887715).
* Unnecessary section variables were also being introduced. See the new
test contributed by Sebastian Graf.
* Finally, it prevented us from supporting arbitrary parameters as we do
in `simp`. In `simp`, I implemented a mechanism that simulates local
universe-polymorphic theorems, but this approach could not be used in
`grind` because there is no mechanism for reverting (and re-introducing)
local universe-polymorphic theorems. Adding such a mechanism would
require substantial work: I would need to modify the local context
object. I considered maintaining a substitution from the original
variables to the new ones, but this is also tricky, because the mapping
would have to be stored in the `grind` goal objects, and it is not just
a simple mapping. After reverting everything, I would need to keep a
sequence of original variables that must be added to the mapping as we
re-introduce them, but eager case splits complicate this quite a bit.
The whole approach felt overly messy.

The new behavior `grind -revert` addresses all these issues. None of the
`grind` proofs in our test suite broke after we fixed the bugs exposed
by the new feature. That said, the traces and counterexamples produced
by `grind` are different. The new proof terms are also different.
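A minimal sketch of the two modes (the goal is purely illustrative):

```lean
-- New default: hypotheses are not reverted before the search starts.
example (a b : Nat) (h : a = b) : b = a := by
  grind
-- Opt back into the old RevSimpIntro behavior:
example (a b : Nat) (h : a = b) : b = a := by
  grind +revert
```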
2025-11-19 05:28:31 +00:00
Robert J. Simmons
d5ecca995f chore: update some error explanations (#11225)
This PR updates some of the Error Explanations that had gotten out of
sync with actual error messages.
2025-11-19 03:16:40 +00:00
Robert J. Simmons
f81e64936a feat: improve error when an identifier is unbound because autoImplicit is off (#11119)
This PR introduces a clarifying note to "undefined identifier" error
messages when the undefined identifier is in a syntactic position where
autobinding might generally apply, but where autobinding is
disabled. A corresponding note is made in the `lean.unknownIdentifier`
error explanation.

The core intended audience for this error message change is "newcomer
who would otherwise be baffled why the thing that works in this Mathlib
project gets 'unknown identifier' errors in this non-Mathlib project."

## Modified behavior

### Example 1
```lean4
set_option autoImplicit true in
set_option relaxedAutoImplicit false in
def thisBreaks (x : α₂) (y : size₂) := ()
```

Before:
```
Unknown identifier `size₂`
```

After:
```
Unknown identifier `size₂`

Note: It is not possible to treat `size₂` as an implicitly bound variable here because it has multiple characters while the `relaxedAutoImplicit` option is set to `false`.
```

### Example 2
```lean4
set_option autoImplicit false in
def thisAlsoBreaks (x : α₃) (y : size₃) := ()
```

Before:
```
Unknown identifier `α₃`
Unknown identifier `size₃`
```

After:
```
Unknown identifier `α₃`

Note: It is not possible to treat `α₃` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
Unknown identifier `size₃`

Note: It is not possible to treat `size₃` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```

## How this works

The elaboration process knows whether it is considering syntax where we
may be able to auto-bind implicits thanks to information in the
`Lean.Elab.Term.Context`.

Before this PR, this contains:
* `autoBoundImplicit`, a boolean that is true when we are considering
syntax that might be able to auto-bind implicits AND when the
`autoImplicit` flag is set to true
* `autoBoundImplicits`, an array of `Expr` variables that we've
autobound

After this PR, this contains:
* `autoBoundImplicitCtx`, an option which is `some` **whenever** we are
considering syntax that might be able to auto-bind implicits, and carries
the array of exprs as well as a copy of the `autoImplicit` flag's value.
(The latter lets us re-implement the `autoBoundImplicit` flag for
backward compatibility.)

Therefore, rather than having access to "elaboration is in an
autobinding context && flag is enabled", it's possible to recover both
of those individual values, and give different information to the user
in cases where we didn't attempt autobinding but would have if different
options had been set.

## Rationale

The revised error message avoids offering much guidance — it doesn't
actively suggest setting the option to a different value or suggest
adding an implicit binding. Care needs to be taken here to make sure
advice is not misleading; as the accepted RFC in #6462 points out, a
substantial portion of autobinding failures are just going to be
misspellings.

I considered and then rejected a code action here that would add a
local `set_option autoImplicit true`. This seems undesirable or
counterproductive: if a project like Mathlib has proactively disabled
`autoImplicit`, it's odd to be pushing local exceptions.

A hint prompting the user to add an implicit binding would be more
proper, but only in certain circumstances — we want to be conservative
in suggesting specific code actions! In a situation like this one, we'd
want to _avoid_ giving the suggestion of adding a `{HasArr}` binding,
which I think either requires tricky heuristics or means we'd want the
elaboration to play through the consequences of auto-binding and make
sure it doesn't cause any follow-on errors before suggesting adding an
implicit binding.

```
set_option autoImplicit true
set_option relaxedAutoImplicit false
instance has_arr : HasArr Preorder := { Arr := Function }
```

Additionally, it seems like it would make the most sense to offer to
auto-bind _all_ the relevant unknown identifiers at once. To avoid being
misleading, this too would seem to require playing through the
consequences of autobinding before being able to safely suggest the
change. This is enough additional complexity that I'm leaving it for
future work.

---------

Co-authored-by: David Thrane Christiansen <david@davidchristiansen.dk>
2025-11-19 03:11:34 +00:00
Mac Malone
5bb9839887 fix: symbol clashes between packages (#11082)
This PR prevents symbol clashes between (non-`@[export]`) definitions
from different Lean packages.

Previously, if two modules define a function with the same name and were
transitively imported (even privately) by some downstream module,
linking would fail due to a symbol clash. Similarly, if a user defined a
symbol with the same name as one in the `Lean` library, Lean would use
the core symbol even if one did not import `Lean`.

This is solved by changing Lean's name mangling algorithm to include an
optional package identifier. This identifier is provided by Lake via
`--setup` when building a module. This information is weaved through the
elaborator, interpreter, and compiler via a persistent environment
extension that associates modules with their package identifier.

With a package identifier, standard symbols have the form
`lp_<pkg-id>_<mangled-def>`. Without one, the old scheme is used (i.e.,
`l_<mangled-def>`). Module initializers are also prefixed with package
identifier (if any). For example, the initializer for a module `Foo` in
a package `test` is now `initialize_test_Foo` (instead of
`initialize_Foo`). Lake's default for native library names has also been
adjusted accordingly, so that libraries can still, by default, be used
as plugins. Thus, the default library name of the `lean_lib Foo` in
`package test` will now be `libtest_Foo`.
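As a lakefile sketch (Lake DSL; the names come from the example above):

```lean
import Lake
open Lake DSL

package test  -- package identifier is `test`
lean_lib Foo  -- default native library name is now `libtest_Foo`,
              -- and the initializer for module `Foo` is `initialize_test_Foo`
```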

When using Lake to build the Lean core (i.e., `bootstrap = true`), no
package identifier will be used. Thus, definitions in user packages can
never have symbol clashes with core.

Closes #222.
2025-11-19 02:24:44 +00:00
Mac Malone
687698e79d test: module clash across packages (#11246)
This PR adds a test that covers importing modules defined in multiple
packages.

Currently, the module resolves to its first occurrence in the
search order. However, this will soon change, so this test is designed
to analyze that behavior.
2025-11-19 02:23:34 +00:00
Leonardo de Moura
8a0ee9aac7 fix: assigned universe metavars in grind (#11247)
This PR fixes an issue in the `grind` preprocessor. `simp` may introduce
assigned (universe) metavariables (e.g., when performing
zeta-reduction).
2025-11-19 00:19:17 +00:00
Leonardo de Moura
6dd8ad13e5 fix: grind minor issues (#11244)
This PR fixes minor issues in `grind`. In preparation for adding `grind
-revert`.
2025-11-18 22:11:20 +00:00
Markus Himmel
fa5d08b7de refactor: use String.Slice in String.take and variants (#11180)
This PR redefines `String.take` and variants to operate on
`String.Slice`. While previously functions returning a substring of the
input sometimes returned `String` and sometimes returned
`Substring.Raw`, they now uniformly return `String.Slice`.

This is a BREAKING change, because many functions now have a different
return type. So for example, if `s` is a string and `f` is a function
accepting a string, `f (s.drop 1)` will no longer compile because
`s.drop 1` is a `String.Slice`. To fix this, insert a call to `copy` to
restore the old behavior: `f (s.drop 1).copy`.

Of course, in many cases, there will be more efficient options. For
example, don't write `f <| s.drop 1 |>.copy |>.dropEnd 1 |>.copy`, write
`f <| s.drop 1 |>.dropEnd 1 |>.copy` instead. Also, instead of `(s.drop
1).copy = "Hello"`, write `s.drop 1 == "Hello".toSlice` instead.
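A consolidated migration sketch based on the examples above (`f` is a hypothetical consumer of `String`):

```lean
def f (s : String) : Nat := s.length   -- hypothetical consumer of `String`
-- before: f (s.drop 1)                -- no longer compiles: `s.drop 1 : String.Slice`
def fixed (s : String) : Nat := f (s.drop 1).copy
-- prefer a single `copy` at the end of a chain:
def chained (s : String) : Nat := f <| s.drop 1 |>.dropEnd 1 |>.copy
-- and compare without copying at all:
def cmp (s : String) : Bool := s.drop 1 == "Hello".toSlice
```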
2025-11-18 16:13:48 +00:00
Markus Himmel
03eb2f73ac chore: deprecate String.toSubstring (#11232)
This PR deprecates `String.toSubstring` in favor of
`String.toRawSubstring` (cf. #11154).
2025-11-18 13:50:50 +00:00
Wrenna Robson
36a6844625 feat: add Std.Trichotomous (#10945)
This PR adds `Std.Trichotomous r`, a typeclass for relations which identifies
them as trichotomous. It is preferred in all cases to
`Std.Antisymm (¬ r · ·)`, to which it is equivalent.
2025-11-18 13:20:53 +00:00
Lean stage0 autoupdater
4296f8deee chore: update stage0 2025-11-18 11:23:27 +00:00
Markus Himmel
e301f86c6c chore: add String.Pos.next (#11238)
This PR is split from a future PR and adds the function
`String.Pos.next`, an alias (and soon-to-be correct name) of
`String.ValidPos.next`.

This is for boring bootstrapping reasons.
2025-11-18 10:41:22 +00:00
Jovan Gerbscheid
4c972ba0d6 fix: add missing s! in UInt64.fromJson? (#11237)
This PR fixes the error messages thrown by `UInt64.fromJson?` and
`USize.fromJson?` by adding the missing `s!` interpolation prefix.
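The bug class, sketched (illustrative, not the actual source):

```lean
def bad (j : String) : Except String Nat :=
  .error "expected a UInt64, got {j}"    -- missing `s!`: `{j}` stays literal
def good (j : String) : Except String Nat :=
  .error s!"expected a UInt64, got {j}"  -- `s!` performs string interpolation
```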
2025-11-18 10:31:31 +00:00
Joachim Breitner
f6e580ccf8 refactor: extract functionality from Match.MatchEqs (#11236)
This PR extracts two modules from `Match.MatchEqs`, in preparation for #11220
and to use the module system to draw clear boundaries between concerns
here.
2025-11-18 10:02:10 +00:00
Sebastian Graf
51ed5f247c fix: register node kind for elabToSyntax functionality (#11235)
This PR registers a node kind for `Lean.Parser.Term.elabToSyntax` in
order to support the `Lean.Elab.Term.elabToSyntax` functionality without
registering a dedicated parser for user-accessible syntax.
2025-11-18 09:47:08 +00:00
Henrik Böving
1759b83929 test: regression test for #6332 (#11234)
Closes: #6332
2025-11-18 09:47:04 +00:00
Wojciech Różowski
e35d65174c feat: add intersection on DHashMap (#11112)
This PR adds intersection operation on `DHashMap`/`HashMap`/`HashSet`
and provides several lemmas about its behaviour.

---------

Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
2025-11-18 09:40:44 +00:00
Paul Reichert
1a4c3ca35d refactor: small iterator improvements (#11175)
This PR removes duplicated instance parameters in the standard library
and flips lemmas of the form `toList_eq_toListIter` into a form that is
suitable for `simp`.
2025-11-18 09:28:55 +00:00
Lean stage0 autoupdater
1f807969b7 chore: update stage0 2025-11-18 09:08:13 +00:00
Wojciech Różowski
f46c17fa1d feat: add lemmas for DHashMap/HashMap/HashSet about emptyWithCapacity/empty (#11223)
This PR adds missing lemmas relating `emptyWithCapacity`/`empty` and
`toList`/`keys`/`values` for `DHashMap`/`HashMap`/`HashSet`.
2025-11-18 08:17:16 +00:00
Kim Morrison
155db16572 chore: begin dev cycle for v4.27.0 (#11229)
Set LEAN_VERSION_MINOR to 27.
2025-11-18 08:12:49 +00:00
Markus Himmel
f6a9059709 chore: rename String.offsetOfPos to String.Pos.Raw.offsetOfPos (#11218)
This PR renames `String.offsetOfPos` to `String.Pos.Raw.offsetOfPos` to
align with the other `String.Pos.Raw` operations.
2025-11-18 07:24:06 +00:00
Sebastian Graf
59d2d00132 feat: turn a term elaborator into a syntax object with elabToSyntax (#11222)
This PR implements `elabToSyntax` for creating scoped syntax `s :
Syntax` for an arbitrary elaborator `el : Option Expr -> TermElabM Expr`
such that `elabTerm s = el`.

Roundtripping example implementing an elaborator imitating `let`:

```lean
elab "lett " decl:letDecl ";" e:term : term <= ty? => do
  let elabE (ty? : Option Expr) : TermElabM Expr := do elabTerm e ty?
  elabToSyntax elabE fun body => do
    elabTerm (← `(let $decl:letDecl; $body)) ty?

#guard lett x := 42; (x + 1) = 43
```
2025-11-18 07:10:31 +00:00
Leonardo de Moura
5a4226f2bd refactor: remove old grindSearchM framework (#11226)
This PR finally removes the old `grind` framework `SearchM`. It has been
replaced with the new `Action` framework.
2025-11-18 00:33:38 +00:00
Mac Malone
81d716069c fix: lake: improper uses of computeArtifact w/o text (#11216)
This PR ensures that the `text` argument of `computeArtifact` is always
provided in Lake code, fixing a hashing bug with
`buildArtifactUnlessUpToDate` in the process.

Closes #11209
2025-11-17 22:27:19 +00:00
Henrik Böving
033fa8c585 test: add additional regression test for #11131 from #10925 (#11224)
Closes #10925
2025-11-17 21:23:53 +00:00
Joachim Breitner
09001ecad6 fix: let realizeConst run withDeclNameForAuxNaming (#11221)
This PR lets `realizeConst` use `withDeclNameForAuxNaming` so that
auxiliary definitions created there get non-clashing names.
2025-11-17 21:17:16 +00:00
Lean stage0 autoupdater
1c82929c34 chore: update stage0 2025-11-17 19:02:56 +00:00
Joachim Breitner
b67e8a15d0 perf: avoid quadratic calculation of notAlts in match splitter (#11196)
This PR prevents the match splitter calculation from testing all quadratically
many pairs of alternatives for overlaps, by keeping track of possible
overlaps during matcher calculation, storing that information in the
`MatcherInfo`, and using it during splitter calculation.
2025-11-17 18:10:13 +00:00
Lean stage0 autoupdater
be6457284a chore: update stage0 2025-11-17 17:15:47 +00:00
Henrik Böving
07e6b99e2e fix: deallocation for closures in non default configurations (#11217)
This PR fixes fallout of the closure allocator changes in #10982. As far
as we know
this bug only meaningfully manifests in non default build configurations
without mimalloc such as:
`cmake --preset release -DUSE_MIMALLOC=OFF`

The issue is that I forgot to update the deallocation functions for
closures. However, this only
seems to matter if we disable mimalloc which is why this slipped through
testing.
2025-11-17 16:27:20 +00:00
Paul Reichert
8eb0293098 feat: add MPL specs for slice for ... in (#11141)
This PR provides a polymorphic `ForIn` instance for slices and an MPL
`spec` lemma for the iteration over slices using `for ... in`. It also
provides a version specialized to `Subarray`.
2025-11-17 15:58:29 +00:00
Markus Himmel
8671f81aa5 fix: lakefile require syntax in package not found on Reservoir error (#11198)
This PR fixes an error message in Lake which suggested incorrect
lakefile syntax.

The error message (which was very helpful by the way) looked like this:
```
error: TwoFX/batteries: package not found on Reservoir.

  If the package is on GitHub, you can add a Git source. For example:

    require ...
      from git "https://github.com/TwoFX/batteries" @ git "main"

  or, if using TOML:

    [[require]]
    git = "https://github.com/TwoFX/batteries"
    rev = "main"
    ...
```

The suggested Lakefile syntax does not work. The correct syntax,
according to the reference manual and according to my tests, is
```
    require ...
      from git "https://github.com/TwoFX/batteries" @ "main"
```
without the second `git`.
2025-11-17 15:12:23 +00:00
David Thrane Christiansen
5ce1f67261 fix: module docstring header nesting in Verso format (#11215)
This PR fixes an issue where header nesting levels were properly tracked
between, but not within, moduledocs.
2025-11-17 13:57:00 +00:00
Henrik Böving
bef8574b93 fix: be more careful when recording cases in the compiler (#11210)
This PR fixes a bug in the LCNF simplifier unearthed while working on
#11078. In some situations caused by `unsafeCast`, the simplifier would
record incorrect information about `cases`, leading to further bugs down
the line.

Suppose we have `v : NonScalar` due to an `unsafeCast` and we run
`cases` on it, expecting `Prod.mk fst snd`. The current code attempts to
record both the arguments from the constructor application in the case
arm `fst`, `snd` and the parameters for the type by inspecting the discr
`v`. However, `NonScalar` does of course not have any parameters,
causing the simplifier to record wrong information. This patch makes the
`cases` infrastructure more cautious when extracting information from
the type of `v`.
2025-11-17 11:34:16 +00:00
Joachim Breitner
27e5e21bfe perf: use Nat-based bitmask in sparse cases construction (#11200)
This PR changes how sparse case expressions represent the
none-of-the-above information. Instead of many `x.ctorIdx ≠ i`
hypotheses, it introduces a single `Nat.hasNotBit mask x.ctorIdx`
hypothesis which compresses that information into a bitmask. This avoids
a quadratic overhead during splitter generation, where all n assumptions
would be refined through `.subst` and `.cases` constructions for each of
the n assumptions of the splitter alternative.
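A hedged sketch of the encoding (the actual definition lives in `Lean.Meta.HasNotBit` and may differ):

```lean
-- Plausible reading: `Nat.hasNotBit mask i` asserts that bit `i` of `mask`
-- is clear, so a single hypothesis replaces n separate `x.ctorIdx ≠ i` facts.
def hasNotBitSketch (mask i : Nat) : Prop := (mask >>> i) % 2 = 0
```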

The definition of `Nat.hasNotBit` uses `Nat.rightShift` which is fiddly
to get to reduce well, especially on open terms and with `Meta.whnf`.
Some experimentation was needed to find proof terms that work, these are
all put together in the `Lean.Meta.HasNotBit` module.

Fixes #11183

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2025-11-17 10:05:18 +00:00
Rob23oba
eba5a5a6ef fix: consider over-applications in reduceArity compiler pass (#11185)
This PR fixes the `reduceArity` compiler pass to consider
over-applications to functions that have their arity reduced.
Previously, this pass assumed that the number of arguments in
applications was always the same as the number of parameters in the
signature. This is usually true, since the compiler eagerly introduces
parameters as long as the return type is a function type, resulting in a
function with a return type that isn't a function type. However, for
dependent types that sometimes are function types and sometimes not,
this assumption is broken, resulting in the additional arguments being
dropped.

Closes #11131
2025-11-17 07:51:37 +00:00
Kim Morrison
bba399eefe chore: finish dealing with #grind_lint (#11207)
This ensures that no `grind` annotated theorem, simply by being
instantiated, causes a chain of >20 further instantiations, with a small
list of documented exceptions.
2025-11-17 06:58:28 +00:00
Kim Morrison
8b575dcbf2 chore: fixing grind annotations using #grind_lint (#11206)
Slightly more extensive version of #11205, for which I want separate CI.
2025-11-17 05:30:01 +00:00
Kim Morrison
d6f3ca24d3 chore: fixing grind annotations using #grind_lint (#11205) 2025-11-17 04:53:21 +00:00
Kim Morrison
8c7604f550 feat: try? runs tactics with separate heartbeats budgets (#11174)
This PR modifies the `try?` framework, so each subsidiary tactic runs
with a separate `maxHeartbeats` budget.

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2025-11-17 01:30:43 +00:00
Kim Morrison
4b28713a44 feat: #grind_lint check produces a "Try this:" suggestion with #grind_lint inspect commands (#11204)
This PR has `#grind_lint check` produce a "Try this:" suggestion with
`#grind_lint inspect` commands, as this is usually the next step in
dealing with problematic cases. We also fix the grind pattern for one
theorem, as part of testing the workflow. More to follow.
2025-11-17 00:52:57 +00:00
Leonardo de Moura
4c189bc8f2 fix: grind actions (#11203)
This PR fixes a few minor issues in the new `Action` framework used in
`grind`. The goal is to eventually delete the old `SearchM`
infrastructure. The main `solve` function used by `grind` is now based
on the `Action` framework. The PR also deletes dead code in `SearchM`.
2025-11-17 00:37:19 +00:00
Sebastian Ullrich
0b93b3f182 chore: record uses of user-defined attributes as shake dependencies (#11202) 2025-11-16 20:34:23 +00:00
Sebastian Ullrich
ed34ee0cd5 chore: make declMetaExt persistent for shake (#11201) 2025-11-16 20:11:56 +00:00
Joachim Breitner
8ef742647e test: benchmark for large partial match (#11199)
Creates an inductive data type with 100 constructors, and a function
that matches on half of its constructors, with a catch-all for the other
half, and generates the splitter.

Related to #11183.
2025-11-16 11:20:31 +00:00
Lean stage0 autoupdater
65a41c38a0 chore: update stage0 2025-11-16 10:13:26 +00:00
Markus Himmel
bf60550ce5 chore: rename Substring to Substring.Raw (#11154)
This PR renames `Substring` to `Substring.Raw`.

This is to signify its status as a second-class citizen (not deprecated,
but no real plans for verification, like `String.Pos.Raw`) and to free
up the name `Substring` for a possible future type `String.Substring :
String -> Type` so that `s.Substring` is the type of substrings of `s`.

The functions `String.toSubstring` and `String.toSubstring'` will remain
for now for bootstrapping reasons.
2025-11-16 09:30:04 +00:00
Leonardo de Moura
ef1dc21f1c feat: use new grind? infrastructure to implement try? (#11197)
This PR implements `try?` using the new `finish?` infrastructure. It
also removes the old tracing infrastructure, which is now obsolete.
Example:

```lean
/--
info: Try these:
  [apply] grind
  [apply] grind only [findIdx, insert, = mem_indices_of_mem, = getElem?_neg, = getElem?_pos, = HashMap.mem_insert,
    = HashMap.getElem_insert, #1bba]
  [apply] grind only [findIdx, insert, = mem_indices_of_mem, = getElem?_neg, = getElem?_pos, = HashMap.mem_insert,
    = HashMap.getElem_insert]
  [apply] grind =>
    instantiate only [findIdx, insert, = mem_indices_of_mem]
    instantiate only [= getElem?_neg, = getElem?_pos]
    cases #1bba
    · instantiate only [findIdx]
    · instantiate only
      instantiate only [= HashMap.mem_insert, = HashMap.getElem_insert]
-/
#guard_msgs in
example (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
  try?
```
2025-11-16 05:26:17 +00:00
Robert J. Simmons
31f09da88a feat: prioritize stuck synthetic MVar problems to improve error messages (#11184)
This PR modifies the error message that is returned when more than one
synthetic metavariable can't be resolved.

The two heuristics used for prioritization are:
- prefer typeclass problems associated with small ranges over typeclass
problems associated with large ranges (I'm pretty confident in this
heuristic)
- do not prefer typeclass problems over other kinds of errors (not as
confident in this heuristic)
2025-11-16 00:09:48 +00:00
Leonardo de Moura
2f3939f1ea fix: incorrect grind param warning (#11194)
This PR fixes the redundant `grind` parameter warning message. It now
checks the `grind` theorem instantiation constraints too.
2025-11-15 20:17:55 +00:00
Leonardo de Moura
f4cd97ce04 feat: add grind_pattern constraint annotations (#11193)
This PR uses the new `grind_pattern` constraints to fix cases where an
unbounded number of theorem instantiations would be generated for
certain theorems in the standard library.
2025-11-15 19:08:03 +00:00
Joachim Breitner
e39894e62d feat: realizeConst to set CoreM's maxHeartbeat (#11191)
This PR makes sure that inside a `realizeConst` the `maxHeartbeat`
option is effective.
2025-11-15 17:36:09 +00:00
Johannes Tantow
100006fdd0 feat: verify all and any for hash maps (#10765)
This PR extends the `all`/`any` functions from hash sets to hash maps
and dependent hash maps and verifies them.
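A usage sketch, assuming the new hash-map `all`/`any` take a key-and-value predicate like their hash-set counterparts (the exact signatures are assumptions):

```lean
import Std.Data.HashMap

def m := Std.HashMap.ofList [(1, "a"), (2, "b")]

#eval m.all fun k _ => k > 0     -- is every key positive?
#eval m.any fun _ v => v == "b"  -- does some value equal "b"?
```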
2025-11-15 16:59:37 +00:00
Joachim Breitner
a6f4e9156e fix: avoid unknown free variables in match error message (#11190)
This PR avoids running into an “unknown free variable” when printing the
“Failed to compile pattern matching” error. Fixes #11186.
2025-11-15 16:31:24 +00:00
Lean stage0 autoupdater
14625ec114 chore: update stage0 2025-11-15 05:46:38 +00:00
Leonardo de Moura
6f2c04b6a2 feat: grind_pattern constraints (#11189)
This PR implements `grind_pattern` constraints. They are useful for
controlling theorem instantiation in `grind`. As an example, consider
the following two theorems:
```lean
theorem extract_empty {start stop : Nat} :
    (#[] : Array α).extract start stop = #[] := …

theorem extract_extract {as : Array α} {i j k l : Nat} :
    (as.extract i j).extract k l = as.extract (i + k) (min (i + l) j) := …
```

If both are used for theorem instantiation, an unbounded number of
instances is generated as soon as we add the term `#[].extract i j` to
the `grind` context.

We can now prevent this by adding a `grind_pattern` constraint to
`extract_extract`:

```lean
grind_pattern extract_extract => (as.extract i j).extract k l where
  as =/= #[]
```

With this constraint, only one instance is generated, as expected:

```lean
/-- trace: [grind.ematch.instance] extract_empty: #[].extract i j = #[] -/
#guard_msgs (drop error, trace) in
set_option trace.grind.ematch.instance true in
example (as : Array Nat) (h : #[].extract i j = as) : False := by
  grind only [= extract_empty, usr extract_extract]
```
2025-11-15 05:05:04 +00:00
Mac Malone
06f457b48a fix: lake: indeterminism in targets test (#11188)
This PR fixes a source of indeterminism in the `examples/targets` Lake
test (checking the job index).
2025-11-15 04:20:24 +00:00
Mac Malone
8ad0a61169 refactor: lake: scope all module build keys by package (#11169)
This PR changes all module build keys in Lake to be scoped by their
package. This enables building modules with the same name in different
packages (something previously only well-supported for executable
roots).

API-wise, the `BuildKey` definitions `module` and `moduleFacet` have
been deprecated and replaced with `packageModule` and
`packageModuleFacet`. The `moduleTargetIndicator` has also been removed
(with its purpose subsumed by `packageModule`).
2025-11-15 04:13:00 +00:00
Leonardo de Moura
d963d33985 feat: add grind_pattern constraints (#11187)
This PR adds syntax for specifying `grind_pattern` constraints and
extends the `EMatchTheorem` object.

--- 
Note: We need a manual stage0 update because it affects the .olean
files.
2025-11-14 18:27:17 -08:00
Robert J. Simmons
3f4e85413e doc: improved error messages when typeclass errors are stuck (#11179)
This PR removes most cases where an error message explained that it was
"probably due to metavariables," giving more explanation and a hint.

## Example

```
def square x := x * x
```

Before:

```lean
typeclass instance problem is stuck, it is often due to metavariables
  HMul ?m.9 ?m.9 (?m.3 x)
```

After:
```
typeclass instance problem is stuck
  HMul ?m.9 ?m.9 (?m.3 x)

Note: Lean will not try to resolve this typeclass instance problem because the 
first and second type arguments to `HMul` are metavariables. These arguments 
must be fully determined before Lean will try to resolve the typeclass.

Hint: Adding type annotations and supplying implicit arguments to functions 
can give Lean more information for typeclass resolution. For example, if you 
have a variable `x` that you intend to be a `Nat`, but Lean reports it as 
having an unresolved type like `?m`, replacing `x` with `(x : Nat)` can get 
typeclass resolution un-stuck.
```

In addition to providing beginner-and-intermediate-friendly explanation
about **why** typeclass instance problems are treated as "stuck" when
metavariables appear in output positions, this PR provides
potentially-valuable improvement even to expert users: it explains
**which of the typeclass arguments are inputs** and therefore need to be
fully specified before typeclass resolution will be attempted. This
information can be tricky to find otherwise.

## Next steps, but probably after this PR

* error explanation
* detecting when the syntactic source is a binop and giving a
special-cased explanation on the binary operators and their associated
typeclasses
* detecting when the syntactic source is a function call, inspecting the
function call's type somewhat, and replacing the generic "replace `x`
with `(x : Nat)` hint with a specialized "replace `foo` with `foo (tyArg
:= Nat)`" hint
2025-11-14 21:25:46 +00:00
Alexander Bentkamp
bc2aae380c feat: add lemmas about Int range sizes (#11159)
This PR adds lemmas about the sizes of ranges of Ints, analogous to the
Nat lemmas in `Init.Data.Range.Polymorphic.NatLemmas`. See also
https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Reasonning.20about.20PRange.20sizes.20.28with.20.60Int.60.29/with/546466339.

Closes #11158

---------

Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
2025-11-14 13:35:47 +00:00
Paul Reichert
b5b34ee054 feat: List slices (#11019)
This PR introduces slices of lists that are available via slice notation
(e.g., `xs[1...5]`).

* Moved the `take` combinator and the `List` iterator producer to
`Init`.
* Introduced a `toTake` combinator: `it.toTake` behaves like `it`, but
it has the same type as `it.take n`. There is a constant cost per
iteration compared to `it` itself.
* Introduced `List` slices. Their iterators are defined as
`suffixList.iter.take n` for upper-bounded slices and
`suffixList.iter.toTake` for unbounded ones.

Performance characteristics of using the slice `list[a...b]`:

* when creating it: `O(a)`
* every iterator step: `O(1)`
* `toList`: `O(b - a + 1)` (given that a <= b)

Because the slice only stores a suffix of `xs` internally, two slices
can be equal even though the underlying lists differ in an irrelevant
prefix. Because the `stop` field is allowed to be beyond the list's
upper bound, the slices `[1][0...1]` and `[1][0...2]` are not equal,
even though they effectively cover the same range of the same list.
Improving this would require us to call `List.length` when building the
slice, which would iterate through the whole list.
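A minimal usage sketch of the new notation (assuming `toList` on the resulting slice, as described above):

```lean
def xs := [10, 20, 30, 40, 50, 60]

-- Creating the slice costs O(a); materializing it costs O(b - a + 1).
#eval xs[1...4].toList
```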
2025-11-14 11:33:25 +00:00
Sebastian Ullrich
5011b7bd89 chore: make compilation type mismatch error message from non-exposed defs a lot less mysterious (#11177) 2025-11-14 10:50:43 +00:00
Sebastian Ullrich
4602586b6a chore: suggest public meta import on phase check failure, which is more likely to be the correct variant (#11173) 2025-11-14 10:10:04 +00:00
Wojciech Różowski
36ee331ce2 feat: add minimal support for getEntry/getEntry?/getEntry!/getEntryD for DTreeMap (#11161)
This PR adds the `getEntry`/`getEntry?`/`getEntry!`/`getEntryD`
operations on `DTreeMap`.
2025-11-14 09:09:53 +00:00
Markus Himmel
aca297d1c5 chore: some String API cleanup in Lake.Util.Version (#11160)
This PR performs some cleanup in `Lake.Util.Version`.

---------

Co-authored-by: Mac Malone <tydeu@hatpress.net>
2025-11-14 08:56:56 +00:00
Kim Morrison
de073706c5 feat: redefine Int.pow, for faster kernel reduction (#11139)
This PR replaces #11138, which just added a `@[csimp]` lemma for
`Int.pow`, this time actually replacing the definition. This means we
not only get fast runtime behaviour, but take advantage of the special
kernel support for `Nat.pow`.
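As a hedged illustration of the payoff, kernel reduction (e.g. via `decide`) on `Int` powers should now go through the `Nat.pow` fast path:

```lean
-- 2^64 over `Int`, checked by kernel reduction.
example : (2 : Int) ^ 64 = 18446744073709551616 := by decide
```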

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2025-11-14 05:45:19 +00:00
Kim Morrison
f7ead9667b feat: macro for try? (#11170)
This PR adds tactic and term mode macros for `∎` (typed `\qed`) which
expand to `try?`. The term mode version captures any produced
suggestions and prepends `by`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-14 05:27:23 +00:00
Kim Morrison
ffbd744c85 chore: remove simp_all? +suggestions from try? for now (#11172)
This PR removes `simp_all? +suggestions` from `try?` for now. It's
really slow out in Mathlib; too often the suggestions cause `simp` to
loop. Until we have the ability for `try?` to move past a timing-out
tactic (or maybe even until we have parallelism), it needs to be
removed.

Alternatively, we could try modifying `simp` so that e.g. it won't use a
premise more than once. This might help avoid loops, but it would
produce less-reproducible proofs.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-14 04:58:23 +00:00
Kim Morrison
833aaa823e chore: tactics using library suggestions set the caller field (#11171)
This PR ensures that tactics using library suggestions set the caller
field, so the premise selection engine has access to this. We'll later
use this to filter out some modules for grind, which we know have
already been fully annotated.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-14 04:50:55 +00:00
François G. Dorais
7b29d976ed feat: add instances NeZero(n^0) for n : Nat and n : Int (#10739)
This PR adds two missing `NeZero` instances for `n^0` where `n : Nat`
and `n : Int`.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Add NeZero instances for n^0 when n : Nat and n : Int.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
8305e65ba5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
2025-11-14 03:37:17 +00:00
Leonardo de Moura
1e84b6dff9 feat: add #grind_lint check in module <module> (#11167)
This PR implements support for `#grind_lint check in module <module>`.
Mathlib does not use namespaces, so we need to restrict the
`#grind_lint` search space using module (prefix) names. Example:

```lean
/--
info: instantiating `Array.filterMap_some` triggers more than 100 additional `grind` theorem instantiations
---
info: Array.filterMap_some
[thm] instances
  [thm] Array.filterMap_filterMap ↦ 94
  [thm] Array.size_filterMap_le ↦ 5
  [thm] Array.filterMap_some ↦ 1
---
info: instantiating `Array.range_succ` triggers 22 additional `grind` theorem instantiations
-/
#guard_msgs in
#grind_lint check (min := 20) in module Init.Data.Array
```
2025-11-14 01:44:04 +00:00
Kim Morrison
bc9cc05082 feat: include current file in default premise selector (#11168)
This PR changes the default library suggestions (e.g. for `grind
+suggestions` or `simp_all? +suggestions`) to include the theorems from
the current file in addition to the output of Sine Qua Non.
2025-11-14 01:31:30 +00:00
Leonardo de Moura
46ff76aabd feat: #grind_lint refinements (#11166)
This PR implements the following improvements to the `#grind_lint`
command:
1. More informative messages when the number of instances exceeds the
minimum threshold.
2. A code action for `#grind_lint inspect` that inserts
`set_option trace.grind.ematch.instance true` whenever the number of
instances exceeds
   the minimum threshold.
3. Displaying doc strings for `grind` configuration options in
`#grind_lint`.
4. Improve doc strings for `#grind_lint inspect` and `#grind_lint
check`.

Example:
```lean
/--
info: instantiating `Array.filterMap_some` triggers more than 100 additional `grind` theorem instantiations
---
info: Array.filterMap_some
[thm] instances
  [thm] Array.filterMap_filterMap ↦ 94
  [thm] Array.size_filterMap_le ↦ 5
  [thm] Array.filterMap_some ↦ 1
---
info: Try this to display the actual theorem instances:
  [apply] set_option trace.grind.ematch.instance true in
  #grind_lint inspect Array.filterMap_some
-/
#guard_msgs in
#grind_lint inspect Array.filterMap_some
```
2025-11-13 20:36:01 +00:00
Markus Himmel
eb01aaeee4 chore: rename String.Iterator to String.Legacy.Iterator (#11152)
This PR renames `String.Iterator` to `String.Legacy.Iterator`.

From the docstring of `String.Legacy.Iterator`:

> This is a no-longer-supported legacy API that will be removed in a
future release. You should use
> `String.ValidPos` instead, which is similar, but safer. To iterate
over a string `s`, start with
> `p : s.startValidPos`, advance it using `p.next`, access the current
character using `p.get` and
> check if the position is at the end using `p = s.endValidPos` or
`p.IsAtEnd`.
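The docstring's recipe can be rendered as the following hedged sketch; the exact `ValidPos` signatures (e.g. whether `next` or `get` take proof arguments) are assumptions based on the quote above:

```lean
-- Walk a string with `ValidPos` to count its characters, following the
-- docstring's recipe. API details assumed; `partial` sidesteps the
-- termination argument.
partial def countChars (s : String) : Nat :=
  go s.startValidPos 0
where
  go (p : s.ValidPos) (acc : Nat) : Nat :=
    if p = s.endValidPos then acc else go p.next (acc + 1)
```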
2025-11-13 13:46:22 +00:00
Mac Malone
2b85e29cc9 test: version clash w/ diamond deps (#11155)
This PR adds a test replicating Kim's diamond dependency example.

The top-level package, `D`, depends on two intermediate packages, `B`
and `C`, which each require semantically different versions of another
package, `A`. The portion of `A` that `B` and `C` publicly use is
unchanged across the versions, but they both privately make use of
changed API. Currently, this causes a version clash. This will be made
to work without error later this quarter.
2025-11-13 05:40:56 +00:00
David Thrane Christiansen
ceb86b1293 fix: details in Markdown rendering of Verso docstrings (#11151)
This PR fixes some details in the Markdown renderings of Verso
docstrings, and adds tests to keep them correct. Also adds tests for
Verso docstring metadata.
2025-11-13 05:19:30 +00:00
Lean stage0 autoupdater
a00c78beea chore: update stage0 2025-11-13 02:05:09 +00:00
Leonardo de Moura
ff9c35d6ef feat: #grind_lint command (#11157)
This PR implements the `#grind_lint` command, a diagnostic tool for
analyzing the behavior of theorems annotated for theorem instantiation.
The command helps identify problematic theorems that produce excessive
or unbounded instance generation during E-matching, which can lead to
performance issues.
The main entry point is:
```
#grind_lint check
```
which analyzes all theorems marked with the `@[grind]` attribute.
For each theorem, it creates an artificial goal and runs `grind`,
collecting statistics about the number of instances produced.
Results are summarized using info messages, and detailed breakdowns are
shown for lemmas exceeding a configurable threshold.
Additional subcommands are provided for targeted inspection and control:

* `#grind_lint inspect thm`: analyzes one or more specific theorems in
detail
* `#grind_lint mute thm`: excludes a theorem from instantiation during
analysis
* `#grind_lint skip thm`: omits a theorem from being analyzed by
`#grind_lint check`
2025-11-13 00:42:18 +00:00
Kim Morrison
eb675f708b feat: user extensibility in try? (#11149)
This PR adds a user-extension mechanism for the `try?` tactic. You can
either use the `@[try_suggestion]` attribute on a declaration with
signature ``MVarId -> Try.Info -> MetaM (Array (TSyntax `tactic))`` to
produce suggestions, or the `register_try?_tactic <stx>` command with a
fixed piece of syntax. User-extensions are only tried *after* the
built-in try strategies have been tried and failed.

I wanted to ensure that if the user provides a tactic that produces a
"Try this:" suggestion, we both emit the original tactic and the
suggested replacement (this is what we already do with `grind` and
`simp`). I have this working, but it is quite hacky: we grab the message
log and parse it. I fear this will break when the "Try this:" format is
inevitably changed in the future.
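A hypothetical registration, sketched from the signature quoted above (the generator body is illustrative only, not taken from the PR):

```lean
open Lean Meta

-- Always propose plain `simp` as an extra `try?` suggestion (illustrative).
@[try_suggestion]
def suggestSimp : MVarId → Try.Info → MetaM (Array (TSyntax `tactic)) :=
  fun _goal _info => do
    return #[← `(tactic| simp)]
```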


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds user-defined suggestion generators for `try?` via
`@[try_suggestion]` and `register_try?_tactic`, executed after built-ins
with priority and double-suggestion handling.
> 
> - **Parser/Command**:
> - Add command syntax `register_try?_tactic (priority := n)?
<tacticSeq>` in `Lean.Parser.Command`.
> - **Suggestion registry**:
> - Introduce `@[try_suggestion (prio)]` attribute with a scoped env
extension to register generators (`MVarId → Try.Info → MetaM (Array
(TSyntax `tactic))`).
>   - Priority ordering (higher first); supports local/global scope.
> - **Tactic engine (`try?`)**:
> - New unsafe pipeline to collect and run user generators after
built-in tactics; expands nested "Try this" outputs from user tactics.
> - `mkTryEvalSuggestStx` now takes `(goal, info)`; integrates user
tactics as fallback via `attempt_all`.
> - Suppress intermediate "Try this" messages during `evalAndSuggest` by
restoring the message log.
> - **Imports**:
>   - Add `meta import Lean.Elab.Command` for command elaboration.
> - **Tests**:
> - `try_register_builtin.lean`: command availability and warning
without import.
> - `try_user_suggestions.lean`: basic, priority, built-in fallback,
double-suggestion, and command registration cases.
> - Update `versoDocMissing.lean.expected.out` to include
`register_try?_tactic` in expected commands.
> 
2025-11-12 23:49:54 +00:00
Wojciech Różowski
b39ee8a84b feat: add minimal support for getEntry/getEntry?/getEntry!/getEntryD for DHashMap (#11076)
This PR adds the `getEntry`/`getEntry?`/`getEntry!`/`getEntryD`
operations on `DHashMap`.
2025-11-12 16:56:28 +00:00
Paul Reichert
9a3fb90e40 refactor: replace Iter(M).size with Iter(M).count (#10952)
This PR replaces `Iter(M).size` with `Iter(M).count`. While the
former used a special `IteratorSize` type class, the latter relies on
`IteratorLoop`. The `IteratorSize` class is deprecated. The PR also
renames lemmas about ranges by replacing `_Rcc` with `_rcc`, `_Rco` with
`_rco` (and so on) in names, in order to be more consistent with the
naming convention.
2025-11-12 16:41:00 +00:00
Lean stage0 autoupdater
7f7a4d3eaf chore: update stage0 2025-11-12 15:54:53 +00:00
Sebastian Graf
09cf07b71c feat: new do elaborator, part 1: doElem_elab attribute (#11150)
This PR adds a new, inactive and unused `doElem_elab` attribute that
will allow users to register custom elaborators for `doElem`s in the
form of the new type `DoElab`. The old `do` elaborator is active by
default but can be switched off by disabling the new option
`backward.do.legacy`.
2025-11-12 14:25:28 +00:00
Leonardo de Moura
d464b13569 feat: add cases_next to grind tactic mode (#11148)
This PR adds the `cases_next` tactic to the `grind` interactive mode.
2025-11-12 03:26:18 +00:00
Leonardo de Moura
f2b3f90724 refactor: symmetric equality congruence in grind (#11147)
This PR refactors the implementation of the symmetric equality
congruence rule used in `grind`.
2025-11-12 01:10:37 +00:00
Kim Morrison
bc60b1c19d fix: don't suggest deprecated theorems (#11146)
This PR fixes a bug in #11125. Added a test this time ...

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Exclude deprecated declarations from library suggestions and add a
test verifying they are filtered out.
> 
> - **Library Suggestions**:
> - Update `isDeniedPremise` in `src/Lean/LibrarySuggestions/Basic.lean`
to treat `Lean.Linter.isDeprecated` as denied (`true`), filtering
deprecated constants from suggestions.
> - **Tests**:
> - Add `tests/lean/run/library_suggestions_deprecated.lean` to verify
deprecated theorems (e.g., `deprecatedTheorem`) are not suggested by
`currentFile`, while non-deprecated ones are.
> 
2025-11-12 00:58:47 +00:00
Leonardo de Moura
fa3c85ee84 fix: missing condition in isMatchCondCandidate (#11145)
This PR fixes a bug in `isMatchCondCandidate` used in `grind`. The
missing condition was causing a "not internalized term" `grind` internal
error.
2025-11-12 00:20:37 +00:00
Wojciech Różowski
34f9798b4b feat: add DTreeMap/TreeMap/TreeSet iterators and slices (#10776)
This PR adds iterators and slices for `DTreeMap`/`TreeMap`/`TreeSet`
based on zippers and provides basic lemmas about them.

---------

Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
2025-11-11 17:49:50 +00:00
Wojciech Różowski
e0af5122f7 feat: add union on ExtDTreeMap/ExtTreeMap/ExtTreeSet (#11070)
This PR adds a union operation on ExtDTreeMap/ExtTreeMap/ExtTreeSet and
provides lemmas about union operations.

Stacked on top of #10946.
2025-11-11 16:52:07 +00:00
Markus Himmel
f1224277e2 perf: improve performance of String.ValidPos (#11142)
This PR aims to bring the performance of `String.ValidPos` closer to
that of `String.Pos.Raw` by adding/correcting `extern` annotations as
needed.

This is in response to a regression observed after #11127. The changes
to the `String` `Parsec` module lead to different compiler behavior for
functions like `strCore` and `natCore`. The new IR *looks* better than
the old IR, but the
[numbers](1e438647ba)
are a bit mixed.
2025-11-11 15:30:47 +00:00
Marc Huisinga
c2647cdbf5 fix: pre-filter completion items mod ascii casing (#11140)
This PR ensures that we pre-filter auto-completion items modulo ASCII
casing for consistency with the VS Code fuzzy matching.
2025-11-11 14:11:05 +00:00
dependabot[bot]
aaceb3dbf5 chore: CI: bump actions/upload-artifact from 4 to 5 (#11052)
Bumps
[actions/upload-artifact](https://github.com/actions/upload-artifact)
from 4 to 5.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-11 12:41:14 +00:00
dependabot[bot]
3ae409cf81 chore: CI: bump actions/download-artifact from 5 to 6 (#11053)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 5 to 6.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-11 12:40:56 +00:00
dependabot[bot]
a7f47db134 chore: CI: bump softprops/action-gh-release from 2.3.3 to 2.4.1 (#11054)
Bumps
[softprops/action-gh-release](https://github.com/softprops/action-gh-release)
from 2.3.3 to 2.4.1.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-11 12:40:23 +00:00
Markus Himmel
2c2fcff4f8 refactor: do not use String.Iterator (#11127)
This PR removes all uses of `String.Iterator` from core, preferring
`String.ValidPos` instead.

In an upcoming PR, `String.Iterator` will be renamed to
`String.Legacy.Iterator`.
2025-11-11 11:46:58 +00:00
Kim Morrison
d1e19f2aa0 feat: support for induction in try? (#11136)
This PR adds support for `try?` to use induction; it will only perform
induction on inductive types defined in the current namespace and/or
module; so in particular for now it will not induct on built-in
inductives such as `Nat` or `List`.

This is stacked on top of #11132, and there are overlapping changes.
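A sketch of the new behavior on a custom inductive defined in the current module (the concrete suggestion text shown in the comment is an assumption):

```lean
inductive MyTree where
  | leaf
  | node (l r : MyTree)

def size : MyTree → Nat
  | .leaf => 1
  | .node l r => size l + size r + 1

example (t : MyTree) : 1 ≤ size t := by
  try?  -- may now suggest e.g. `induction t <;> grind [size]`
```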

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds vanilla induction suggestions to `try?`, updates collection of
inductive candidates, and tests the new behavior on custom inductive
types.
> 
> - **Try tactic pipeline**:
> - Add vanilla induction generators (`mkIndStx`, `mkAllIndStx`) that
try `induction <var> <;> …`, with fallback via `expose_names` when
needed.
> - Integrate induction into `mkTryEvalSuggestStx`, alongside existing
atomic, suggestions, and function-induction options.
> - **Collector updates (`Try/Collect.lean`)**:
> - Enhance `checkInductive` to `whnf` the type and use `getAppFn` to
detect inductive heads, populating `indCandidates`.
> - **Tests**:
> - New `tests/lean/run/try_induction.lean` covering suggestions for
`induction` on custom inductives, interaction with `grind`, and
coexistence with `fun_induction`.
> 

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-11 09:29:59 +00:00
Kim Morrison
838be605ac feat: replace Int.pow with a @[csimp] lemma (#11138)
This PR adds a `csimp` lemma for faster runtime evaluation of `Int.pow`
in terms of `Nat.pow`.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Replaces `Int.pow` evaluation with a `@[csimp]` lemma using `Nat.pow`
and adds supporting lemmas (`pow_mul`, `neg_pow`, nonneg results).
> 
> - **Performance/runtime**:
> - Introduce `powImp` and `@[csimp]` theorem `pow_eq_powImp` to
evaluate `Int.pow` via `Nat.pow` with sign handling.
> - **Math lemmas (supporting)**:
>   - `Int.pow_mul`: `a ^ (n * m) = (a ^ n) ^ m`.
>   - `Int.sq_nonnneg`: nonnegativity of `m ^ 2`.
>   - `Int.pow_nonneg_of_even`: nonnegativity for even exponents.
>   - `Int.neg_pow`: `(-m)^n = (-1)^(n % 2) * m^n`.
> 
2025-11-11 06:39:10 +00:00
Kim Morrison
02b141ca15 feat: add library suggestions support to try? tactic (#11132)
This PR adds support for `grind +suggestions` and `simp_all?
+suggestions` in `try?`. It outputs `grind only [X, Y, Z]` or `simp_all
only [X, Y, Z]` suggestions (rather than just `+suggestions`).

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-11 06:38:28 +00:00
Kim Morrison
fe8238c76c feat: grind cases on Sum (#11087)
This PR enables `grind` to case bash on `Sum` and `PSum`.
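For example, goals that require splitting on a `Sum` hypothesis should now be in reach; this is a hedged sketch, not a test taken from the PR:

```lean
-- `grind` can now perform the case split on `s : Nat ⊕ Bool`.
example (s : Nat ⊕ Bool) (h : ∀ n, s ≠ .inl n) : ∃ b, s = .inr b := by
  grind
```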
2025-11-11 04:50:34 +00:00
François G. Dorais
7f77bfef4c feat: add List.mem_finRange (#9515)
This PR adds a missing lemma for the `List` API.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Add `[simp]` lemma `List.mem_finRange` proving any `x : Fin n` is in
`finRange n`.
> 

---------

Co-authored-by: Markus Himmel <markus@lean-fro.org>
Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
2025-11-11 04:16:08 +00:00
Leonardo de Moura
e7e85e5e17 fix: stackoverflow during proof construction in grind (#11137)
This PR fixes a stackoverflow during proof construction in `grind`.

Closes #11134
2025-11-11 03:23:43 +00:00
Leonardo de Moura
1b5fb2fa50 fix: check exponent in grind lia and grind ring (#11135)
This PR ensures that `checkExp` is used in `grind lia` (formerly known
as `grind cutsat`) and `grind ring` to prevent stack overflows.

closes #11130
2025-11-11 02:28:55 +00:00
Leonardo de Moura
0e455f5347 fix: disequality ctor propagation in grind (#11133)
This PR fixes disequality propagation for constructor applications in
`grind`. The equivalence class representatives may be distinct
constructor applications, but we must ensure they have the same type.
Examples that were panic'ing before this PR:
```lean
example (a b : List Nat)
    : a ≍ ([] : List Int) → b ≍ ([1] : List Int) → a = b ∨ p → p := by
  grind

example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → a = b ∨ p → p := by
  grind

example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → b ≍ [(1 : Int)] → a = b ∨ p → p := by
  grind

example (a b : List Nat)
    : a = [] → b = [1] → a = b ∨ p → p := by
  grind

example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → a = b ∨ p → p := by
  grind
```

Closes #11124
2025-11-11 01:28:54 +00:00
Leonardo de Moura
f74e21e302 fix: grind injection should not fail at clear (#11126)
This PR ensures `grind` does not fail when applying `injection` to a
hypothesis that cannot be cleared because of forward dependencies.
2025-11-10 14:50:18 +00:00
Wojciech Różowski
c08fcf6c28 feat: add union on ExtDHashMap/ExtHashMap/ExtHashSet (#10946)
This PR adds a union operation on ExtDHashMap/ExtHashMap/ExtHashSet and
provides lemmas about union operations.
2025-11-10 13:48:36 +00:00
Benjamin Shi
ecae85e77b doc: fix typo in List.finIdxOf? (#11111)
This PR fixes a typo in the doc string of `List.finIdxOf?`. The first
line of the doc string previously said the function returns the size of
the list if no element is equal to `a`, but both the examples in the doc
string and the actual run-time behavior indicate it returns `none` in
this case.

Closes #11110
2025-11-10 10:04:07 +00:00
ecyrbe
6008c0d523 feat: add min and max list operations (#11060)
This PR adds list `min` and `max` operations to complement the `min?`
and `max?` ones, in the same vein as `head?` and `head`.

This was discussed on
[Zulip](https://leanprover.zulipchat.com/#narrow/channel/217875-Is-there-code-for-X.3F/topic/maximum.20of.20a.20list.20known.20to.20be.20nonempty/with/548296389).

It also adds small unit tests for `min`, `max`, `min?`, and `max?`.
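For reference, the existing `Option`-valued versions in action (the exact signatures of the new proof-carrying `min`/`max` are not quoted in the message, so they are omitted here):

```lean
#eval [3, 1, 2].min?        -- some 1
#eval ([] : List Nat).max?  -- none
```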
2025-11-10 09:56:59 +00:00
Kim Morrison
d47b474e41 feat: suggestions don't include deprecated theorems (#11125)
This PR adds a filter for premise selectors to ensure deprecated
theorems are not returned.
2025-11-10 04:24:06 +00:00
Kim Morrison
c7652413db feat: link docstrings for diamond inheritance (#11122)
This PR fixes a problem for structures with diamond inheritance: rather
than copying doc-strings (which are not available unless `.server.olean`
is loaded), we link to them. Adds tests.
2025-11-10 01:05:01 +00:00
Kim Morrison
08d0ae1e8a feat: add foldl_flatMap and foldr_flatMap theorems (#11123)
This PR adds theorems about folds over flatMaps, for
`List`/`Array`/`Vector`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-09 23:00:29 +00:00
Kim Morrison
6fc48d14c0 feat: missing lemmas about List.findIdx (#11113)
This PR adds some small missing lemmas.
2025-11-09 21:16:11 +00:00
Mac Malone
80409a9ceb feat: lake: Job.sync & other touchups (#11118)
This PR adds `Job.sync` as a standard way of declaring a synchronous
job.

It makes some non-behavior changes to related Job APIs to improve
compilation.
2025-11-08 04:35:05 +00:00
Mac Malone
590ff23e71 fix: lake: moreLinkObjs|Libs on a lean_exe (#11117)
This PR fixes a bug where Lake ignored `moreLinkObjs` and `moreLinkLibs`
on a `lean_exe`.
2025-11-08 04:20:42 +00:00
Joachim Breitner
f843837bfa test: test missing cases error (#11107)
This PR tests the missing cases error.

I thought I broke this, but it seems I did not (or at least not this
way, maybe there is a way to trigger it).
2025-11-06 14:38:55 +00:00
Joachim Breitner
d41f39fb10 perf: sparse case splitting in match compilation (#10823)
This PR lets the match compilation procedure use sparse case analysis
when the patterns only match on some but not all constructors of an
inductive type. This way, less code is produced. Previously, the code
handling each of the other cases was optimized and deduplicated by later
stages of the compilation pipeline, which was wasteful.

In some cases this will prevent Lean from noticing that a match
statement is complete, because it performs less case splitting for the
unreachable cases. In that case, give explicit patterns to force the
deeper split, with `by contradiction` as the right-hand side.

At least temporarily, there is also the option to disable this behaviour
with
```
set_option backwards.match.sparseCases false
```
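
For instance (a hypothetical sketch, not from the PR): when a hypothesis rules out some constructors, the unreachable pattern can be spelled out explicitly so the match is manifestly complete:

```lean
-- The hypothesis `n ≠ 0` makes the zero case unreachable; we close it
-- explicitly instead of relying on deep case splitting.
def pred' : (n : Nat) → n ≠ 0 → Nat
  | n + 1, _ => n
  | 0, h => absurd rfl h
```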
2025-11-06 13:46:35 +00:00
Joachim Breitner
7459304e98 refactor: bv_decide: remove verifyEnum et al. (#11068)
This PR removes the `verifyEnum` functions from the bv_decide frontend.
These functions looked at the implementation of matchers to see if they
really do the matching that they claim to do. This breaks that
abstraction barrier, and should not be necessary, as only functions with
a `MatcherInfo` env entry are considered here, which should all play
nicely.
2025-11-06 09:22:36 +00:00
Kim Morrison
e6b1f1984c feat: suggestions tactic generates hovers (#11098)
This PR updates the `suggestions` tactic so the printed message includes
hoverable type information (and displays scores and flags when
relevant).
2025-11-06 06:31:04 +00:00
Kim Morrison
6d2af21aa0 feat: add Int.ediv_pow and related lemmas (#11100)
This PR adds `theorem Int.ediv_pow {a b : Int} {n : Nat} (hab : b ∣ a) :
(a / b) ^ n = a ^ n / b ^ n` and related lemmas.
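
A quick numeric check of the identity when `b ∣ a` holds (sketch):

```lean
#eval ((8 : Int) / 2) ^ 3    -- 4 ^ 3 = 64
#eval (8 : Int) ^ 3 / 2 ^ 3  -- 512 / 8 = 64
```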

---------

Co-authored-by: Bhavik Mehta <bhavikmehta8@gmail.com>
2025-11-06 06:16:18 +00:00
Kim Morrison
3a4e64fe94 feat: some missing Array grind annotations (#11102)
This PR adds some annotations missing in the Array bootstrapping files.
2025-11-06 05:22:40 +00:00
Leonardo de Moura
0d7ca700ad fix: Function.Injective initialization in grind (#11101)
This PR fixes an initialization issue for local `Function.Injective f`
hypotheses.

Closes #11088
2025-11-06 04:26:57 +00:00
Leonardo de Moura
f401f8b46e fix: universe meta-variable support in grind (#11099)
This PR improves the support for universe-metavariables in `grind`.

Closes #11086
2025-11-06 03:38:59 +00:00
Sebastian Ullrich
ae86c18ac1 chore: backward.privateInPublic should not break irrelevance of proofs for rebuilds (#11097) 2025-11-05 23:00:04 +00:00
Sebastian Ullrich
ea2b745e57 chore: new module system adjustments for the Mathlib port (#11093) 2025-11-05 22:17:53 +00:00
Kim Morrison
28a3cf9a6c chore: grind attributes for Prod (#11085) 2025-11-05 20:52:28 +00:00
Joachim Breitner
343887e480 perf: use hasIndepIndices (#11095)
This PR makes use of `hasIndepIndices`. That function was unused since
commit 54f6517ca3, but it seems it should
be used.
2025-11-05 18:41:23 +00:00
Joachim Breitner
d8a67095d6 chore: make workspaceSymbol benchmarks modules (#11094)
This PR makes workspaceSymbol benchmarks `module`s, so that they are
less sensitive to additions of private symbols in the standard library.
2025-11-05 18:40:39 +00:00
Joachim Breitner
0cb79868f4 feat: sparse casesOn constructions (#11072)
This PR adds “sparse casesOn” constructions. They are similar to
`.casesOn`, but have arms only for some constructors and a catch-all
(providing `t.ctorIdx ≠ 42` assumptions). The compiler has native
support for these constructions and now (because of the similarity) also
for the per-constructor elimination principles.
Leonardo de Moura
ccecac5a56 chore: use abbrev in denote functions (#11092)
This PR ensures that `grind ac` denotation functions used in proof by
reflection are marked as `abbrev`.
2025-11-05 13:51:36 +00:00
Marc Huisinga
8b43fc54b2 doc: clarify server protocol violations around initialize (#11091) 2025-11-05 09:53:39 +00:00
Leonardo de Moura
e7f4f98071 fix: stackoverflow during proof construction in grind (#11084)
This PR fixes a stack overflow that occurs when constructing a proof
term in `grind`.

Closes #11081
2025-11-05 02:35:05 +00:00
Leonardo de Moura
52e37e0d55 refactor: denote functions in grind (#11071)
This PR ensures that the `denote` functions used to implement
proof-by-reflection terms in `grind` are abbreviations. This change
eliminates the need for the `withAbstractAtoms` gadget.
2025-11-04 23:34:17 +00:00
Leonardo de Moura
a4e073f565 fix: panic during equality propagation in grind ring (#11080)
This PR fixes a panic during equality propagation in the `grind ring`
module. If the maximum number of steps has been reached, the polynomials
may not be fully simplified.

Closes #11073
2025-11-04 23:20:38 +00:00
Sebastian Ullrich
18131de438 fix: evalConst meta check and auxiliary IR decls (#11079)
Uncovered in Mathlib through new boxed decls from `BaseIO` changes
2025-11-04 21:29:49 +00:00
Leonardo de Moura
e430626d8a fix: anchor values in grind? (#11077)
This PR fixes the anchor values produced by `grind?`.
2025-11-04 13:03:18 +00:00
Kim Morrison
e76bbef79b feat: simp? +suggestions handles ambiguity (#11075)
This PR updates `simp? +suggestions` so that if a name is ambiguous
(because of namespaces) all alternatives are used, rather than erroring.
2025-11-04 05:26:51 +00:00
Kim Morrison
04d72fe346 chore: basic dev instructions for Claude (#11074)
This PR adds a `.claude/claude.md`, with basic development instructions
for Claude Code to operate in this repository.
2025-11-04 04:07:53 +00:00
Sebastian Ullrich
e4fb780f8a perf: remove unused argument to ExternEntry.opaque (#11066)
This used to create quite a few unique objects in public .olean
2025-11-03 17:26:32 +00:00
Wojciech Różowski
00e29075f3 feat: add union on DTreeMap/TreeMap/TreeSet (#10896)
This PR adds union operations on DTreeMap/TreeMap/TreeSet and their raw
variants and provides lemmas about union operations.

---------

Co-authored-by: Paul Reichert <6992158+datokrat@users.noreply.github.com>
2025-11-03 13:47:44 +00:00
Kim Morrison
ec775907e4 chore: update stage0 2025-11-03 23:26:40 +11:00
Kim Morrison
8d603d34dc feat: make set_library_suggestions persistent 2025-11-03 23:26:40 +11:00
Mac Malone
528c0dd2e4 feat: lake: require dependencies by semver range (#10959)
This PR enables Lake users to require Reservoir dependencies by a
semantic version range. On a `lake update`, Lake will fetch the
package's version information from Reservoir and select the newest
version of the package that satisfies the range.

### Using Version Ranges

Version ranges can be specified through the `version` field of a TOML
`require` or the `@` clause of a Lean `require`. They are only
meaningful on Reservoir dependencies.

**lakefile.lean**
```lean-4
require "Seasawher" / "mdgen" @ "2.*"
```

**lakefile.toml**
```toml
[[require]]
name = "mdgen"
scope = "Seasawher"
version = "2.*"
```

The syntax for these version ranges is a mix of
[Rust's](https://doc.rust-lang.org/stable/cargo/reference/specifying-dependencies.html?highlight=caret#version-requirement-syntax)
and
[Node's](https://github.com/npm/node-semver/tree/v7.7.3?tab=readme-ov-file#ranges)
with some Lean-friendly deviations.

### Comparators

The basic unit of a semantic version range is the version comparator. A
comparator pairs a comparison operator with a base version and matches
the versions that compare accordingly against that base. Lake supports
the following comparison operators.

* `<`, `<=` / `≤`, `>`, `>=` / `≥`, `=`, `!=` / `≠`

Unlike Rust and Node, Lake supports Unicode alternatives for the
operators. It also adds the not-equal operator (`!=` / `≠`) to make
excluding broken versions easier.

Comparators can be combined into clauses via conjunction or disjunction:

* **AND clauses**: Rust-style `≥1.2.3, <1.8.0` or Node-style `1.2.3
<1.8.0`
* **OR clauses**: Node-style `1.2.7 || >=1.2.9, <2.0.0`

When the base version of a comparator has a `-` suffix (e.g.,
`>1.2.3-alpha.3`), it will match versions with the same core (`1.2.3`) whose
suffixes compare lexicographically (e.g., `1.2.3-alpha.7` or
`1.2.3-beta.2`), but it will not match suffixed versions with different
cores (e.g., `3.4.5-rc5`). An empty `-` suffix can be used to disable this
behavior. For example, `<2.0.0-` will match `1.2.3-beta.2` and
`2.0.0-alpha.1`.
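
Putting comparators together, a hypothetical TOML `require` that stays on the 1.x line while skipping a (made-up) broken release might look like:

```toml
[[require]]
name = "mdgen"
scope = "Seasawher"
# AND clause of comparators: any 1.x from 1.2.0 on, except 1.4.0
version = ">=1.2.0, <2.0.0-, !=1.4.0"
```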

### Range Macros

In addition to the basic comparators, Lake also supports standard
shorthand for specifying more complex ranges. Namely, it supports the
caret (`^`) and tilde (`~`) operator along with wildcard ranges.

**Caret Ranges**
* `^1` => `≥1.0.0, <2.0.0-`
* `^1.2` => `≥1.2.0, <2.0.0-`
* `^1.2.3` => `≥1.2.3, <2.0.0-`
* `^1.2.3-beta.2` => `≥1.2.3-beta.2, <2.0.0-`
* `^0.2` => `≥0.2.0, <0.3.0-`
* `^0.2.3` => `≥0.2.3, <0.3.0-`
* `^0.0.3` => `≥0.0.3, <0.0.4-`
* `^0` => `≥0.0.0, <1.0.0-`
* `^0.0` => `≥0.0.0, <0.1.0-`

**Tilde Ranges**
* `~1` => `≥1.0.0, <2.0.0-`
* `~1.2` => `≥1.2.0, <1.3.0-`
* `~1.2.3` => `≥1.2.3, <1.3.0-`
* `~1.2.3-beta.2` => `≥1.2.3-beta.2, <1.3.0-`
* `~0` => `≥0.0.0, <1.0.0-`
* `~0.0` => `≥0.0.0, <0.1.0-`
* `~0.0.0` => `≥0.0.0, <0.1.0-`

**Wildcard Ranges**
* `*` => `≥0.0.0`
* `1.x` => `≥1.0.0, <2.0.0-`
* `1.*.x` => `≥1.0.0, <2.0.0-`
* `1.2.*` => `≥1.2.0, <1.3.0-`

These ranges closely follow Rust's and Node's syntax. Like Node but
unlike Rust, wildcard ranges support `x` and `X` as alternative syntax
for wildcards.
2025-11-03 04:18:24 +00:00
1902 changed files with 33235 additions and 7410 deletions

.claude/CLAUDE.md Normal file
View File

@@ -0,0 +1,14 @@
When asked to implement new features:
* begin by reviewing existing relevant code and tests
* write comprehensive tests first (expecting that these will initially fail)
* and then iterate on the implementation until the tests pass.
To build Lean you should use `make -j$(nproc) -C build/release`.
To run a test you should use `cd tests/lean/run && ./test_single.sh example_test.lean`.
*Never* report success on a task unless you have verified both a clean build without errors, and that the relevant tests pass. You have to keep working until you have verified both of these.
All new tests should go in `tests/lean/run/`. Note that these tests don't have expected output, and just run on a success or failure basis. So you should use `#guard_msgs` to check for specific messages.
If you are not following best practices specific to this repository and the user expresses frustration, stop and ask them to help update this `.claude/CLAUDE.md` file with the missing guidance.
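
A minimal test in this style (hypothetical) pins the expected message with `#guard_msgs`:

```lean
/-- error: unknown identifier 'nonexistent' -/
#guard_msgs in
#check nonexistent
```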

View File

@@ -213,7 +213,7 @@ jobs:
else
${{ matrix.tar || 'tar' }} cf - $dir | zstd -T0 --no-progress -o pack/$dir.tar.zst
fi
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v5
if: matrix.release
with:
name: build-${{ matrix.name }}

View File

@@ -375,11 +375,11 @@ jobs:
runs-on: ubuntu-latest
needs: build
steps:
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
path: artifacts
- name: Release
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
files: artifacts/*/*
fail_on_unmatched_files: true
@@ -407,7 +407,7 @@ jobs:
# Doesn't seem to be working when additionally fetching from lean4-nightly
#filter: tree:0
token: ${{ secrets.PUSH_NIGHTLY_TOKEN }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
path: artifacts
- name: Prepare Nightly Release
@@ -425,7 +425,7 @@ jobs:
echo -e "\n*Full commit log*\n" >> diff.md
git log --oneline "$last_tag"..HEAD | sed 's/^/* /' >> diff.md
- name: Release Nightly
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
body_path: diff.md
prerelease: true

View File

@@ -71,7 +71,7 @@ jobs:
GH_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Release (short format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }}
# There are coredumps files here as well, but all in deeper subdirectories.
@@ -86,7 +86,7 @@ jobs:
- name: Release (SHA-suffixed format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }} (${{ steps.workflow-info.outputs.sourceHeadSha }})
# There are coredumps files here as well, but all in deeper subdirectories.

View File

@@ -129,8 +129,7 @@ For all other modules imported by `lean`, the initializer is run without `builti
Thus `[init]` functions are run iff their module is imported, regardless of whether they have native code available or not, while `[builtin_init]` functions are only run for native executable or plugins, regardless of whether their module is imported or not.
`lean` uses built-in initializers for e.g. registering basic parsers that should be available even without importing their module (which is necessary for bootstrapping).
The initializer for module `A.B` is called `initialize_A_B` and will automatically initialize any imported modules.
Module initializers are idempotent (when run with the same `builtin` flag), but not thread-safe.
The initializer for module `A.B` in a package `foo` is called `initialize_foo_A_B`. For modules in the Lean core (e.g., `Init.Prelude`), the initializer is called `initialize_Init_Prelude`. Module initializers will automatically initialize any imported modules. They are also idempotent (when run with the same `builtin` flag), but not thread-safe.
**Important for process-related functionality**: If your application needs to use process-related functions from libuv, such as `Std.Internal.IO.Process.getProcessTitle` and `Std.Internal.IO.Process.setProcessTitle`, you must call `lean_setup_args(argc, argv)` (which returns a potentially modified `argv` that must be used in place of the original) **before** calling `lean_initialize()` or `lean_initialize_runtime_module()`. This sets up process handling capabilities correctly, which is essential for certain system-level operations that Lean's runtime may depend on.

View File

@@ -1,132 +0,0 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
import Lean
namespace Lean.Meta.Grind.Analyzer
/-!
A simple E-matching annotation analyzer.
For each theorem annotated as an E-matching candidate, it creates an artificial goal, executes `grind` and shows the
number of instances created.
For a theorem of the form `params -> type`, the artificial goal is of the form `params -> type -> False`.
-/
/--
`grind` configuration for the analyzer. We disable case-splits and lookahead,
increase the number of generations, and limit the number of instances generated.
-/
def config : Grind.Config := {
splits := 0
lookahead := false
mbtc := false
ematch := 20
instances := 100
gen := 10
}
structure Config where
/-- Minimum number of instantiations to trigger summary report -/
min : Nat := 10
/-- Minimum number of instantiations to trigger detailed report -/
detailed : Nat := 50
def mkParams : MetaM Params := do
let params ← Grind.mkParams config
let ematch ← getEMatchTheorems
let casesTypes ← Grind.getCasesTypes
return { params with ematch, casesTypes }
/-- Returns the total number of generated instances. -/
private def sum (cs : PHashMap Origin Nat) : Nat := Id.run do
let mut r := 0
for (_, c) in cs do
r := r + c
return r
private def thmsToMessageData (thms : PHashMap Origin Nat) : MetaM MessageData := do
let data := thms.toArray.filterMap fun (origin, c) =>
match origin with
| .decl declName => some (declName, c)
| _ => none
let data := data.qsort fun (d₁, c₁) (d₂, c₂) => if c₁ == c₂ then Name.lt d₁ d₂ else c₁ > c₂
let data ← data.mapM fun (declName, counter) =>
return .trace { cls := `thm } m!"{.ofConst (← mkConstWithLevelParams declName)} ↦ {counter}" #[]
return .trace { cls := `thm } "instances" data
/--
Analyzes theorem `declName`. That is, creates the artificial goal based on `declName` type,
and invokes `grind` on it.
-/
def analyzeEMatchTheorem (declName : Name) (c : Config) : MetaM Unit := do
let info ← getConstInfo declName
let mvarId ← forallTelescope info.type fun _ type => do
withLocalDeclD `h type fun _ => do
return (← mkFreshExprMVar (mkConst ``False)).mvarId!
let result ← Grind.main mvarId (← mkParams) (pure ())
let thms := result.counters.thm
let s := sum thms
if s > c.min then
IO.println s!"{declName} : {s}"
if s > c.detailed then
logInfo m!"{declName}\n{← thmsToMessageData thms}"
-- Not sure why this is failing: `down_pure` perhaps has an unnecessary universe parameter?
run_meta analyzeEMatchTheorem ``Std.Do.SPred.down_pure {}
/-- Analyzes all theorems in the standard library marked as E-matching theorems. -/
def analyzeEMatchTheorems (c : Config := {}) : MetaM Unit := do
let origins := (← getEMatchTheorems).getOrigins
let decls := origins.filterMap fun | .decl declName => some declName | _ => none
for declName in decls.mergeSort Name.lt do
try
analyzeEMatchTheorem declName c
catch e =>
logError m!"{declName} failed with {e.toMessageData}"
logInfo m!"Finished analyzing {decls.length} theorems"
/-- Macro for analyzing E-match theorems with unlimited heartbeats -/
macro "#analyzeEMatchTheorems" : command => `(
set_option maxHeartbeats 0 in
run_meta analyzeEMatchTheorems
)
#analyzeEMatchTheorems
-- -- We can analyze specific theorems using commands such as
set_option trace.grind.ematch.instance true
-- 1. grind immediately sees `(#[] : Array α) = ([] : List α).toArray` but probably this should be hidden.
-- 2. `Vector.toArray_empty` keys on `Array.mk []` rather than `#v[].toArray`
-- I guess we could add `(#[].extract _ _).extract _ _` as a stop pattern.
run_meta analyzeEMatchTheorem ``Array.extract_empty {}
-- Neither `Option.bind_some` nor `Option.bind_fun_some` fire, because the terms appear inside
-- lambdas. So we get crazy things like:
-- `fun x => ((some x).bind some).bind fun x => (some x).bind fun x => (some x).bind some`
-- We could consider replacing `filterMap_some` with
-- `filterMap g (filterMap f xs) = filterMap (f >=> g) xs`
-- to avoid the lambda that `grind` struggles with, but this would require more API around the fish.
run_meta analyzeEMatchTheorem ``Array.filterMap_some {}
-- Not entirely certain what is wrong here, but certainly
-- `eq_empty_of_append_eq_empty` is firing too often.
-- Ideally we could instantiate this when we find `xs ++ ys` in the same equivalence class,
-- not just as soon as we see `xs ++ ys`.
-- I've tried removing this in https://github.com/leanprover/lean4/pull/10162
run_meta analyzeEMatchTheorem ``Array.range'_succ {}
-- Perhaps the same story here.
run_meta analyzeEMatchTheorem ``Array.range_succ {}
-- `zip_map_left` and `zip_map_right` are bad grind lemmas,
-- checking if they can be removed in https://github.com/leanprover/lean4/pull/10163
run_meta analyzeEMatchTheorem ``Array.zip_map {}
-- It seems crazy to me that as soon as we have `0 >>> n = 0`, we instantiate based on the
-- pattern `0 >>> n >>> m` by substituting `0` into `0 >>> n` to produce the `0 >>> n >>> n`.
-- I don't think any forbidden subterms can help us here. I don't know what to do. :-(
run_meta analyzeEMatchTheorem ``Int.zero_shiftRight {}

View File

@@ -60,7 +60,7 @@ if (arity == fixed + {n}) \{
for j in [n:max + 1] do
let fs := mkFsArgs (j - n)
let sep := if j = n then "" else ", "
emit s!" case {j}: \{ obj* r = FN{j}(f)({fs}{sep}{args}); lean_free_small_object(f); return r; }\n"
emit s!" case {j}: \{ obj* r = FN{j}(f)({fs}{sep}{args}); lean_free_object(f); return r; }\n"
emit " }
}
switch (arity) {\n"
@@ -162,7 +162,7 @@ static obj* fix_args(obj* f, unsigned n, obj*const* as) {
for (unsigned i = 0; i < fixed; i++, source++, target++) {
*target = *source;
}
lean_free_small_object(f);
lean_free_object(f);
}
for (unsigned i = 0; i < n; i++, as++, target++) {
*target = *as;

View File

@@ -10,7 +10,7 @@ endif()
include(ExternalProject)
project(LEAN CXX C)
set(LEAN_VERSION_MAJOR 4)
set(LEAN_VERSION_MINOR 26)
set(LEAN_VERSION_MINOR 27)
set(LEAN_VERSION_PATCH 0)
set(LEAN_VERSION_IS_RELEASE 0) # This number is 1 in the release revision, and 0 otherwise.
set(LEAN_SPECIAL_VERSION_DESC "" CACHE STRING "Additional version description like 'nightly-2018-03-11'")

View File

@@ -148,6 +148,23 @@ This is the inverse of `ExceptT.mk`.
@[always_inline, inline, expose]
def ExceptT.run {ε : Type u} {m : Type u → Type v} {α : Type u} (x : ExceptT ε m α) : m (Except ε α) := x
/--
Use a monadic action that may throw an exception by providing explicit success and failure
continuations.
-/
@[always_inline, inline, expose]
def ExceptT.runK [Monad m] (x : ExceptT ε m α) (ok : α → m β) (error : ε → m β) : m β :=
x.run >>= (·.casesOn error ok)
/--
Returns the value of a computation, forgetting whether it was an exception or a success.
This corresponds to early return.
-/
@[always_inline, inline, expose]
def ExceptT.runCatch [Monad m] (x : ExceptT α m α) : m α :=
x.runK pure pure
namespace ExceptT
variable {ε : Type u} {m : Type u → Type v} [Monad m]
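
A sketch of how the new combinators compose (hypothetical usage, with `Id` as the base monad):

```lean
-- `runCatch` merges the exception and success channels when both carry `α`;
-- throwing then acts like an early return.
def demo : ExceptT Nat Id Nat := do
  throw 42

#eval demo.runCatch  -- 42
```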

View File

@@ -1468,6 +1468,8 @@ def Prod.map {α₁ : Type u₁} {α₂ : Type u₂} {β₁ : Type v₁} {β₂
@[simp] theorem Prod.map_apply (f : α → β) (g : γ → δ) (x) (y) :
Prod.map f g (x, y) = (f x, g y) := rfl
-- We add `@[grind =]` to these in `Init.Data.Prod`.
@[simp] theorem Prod.map_fst (f : α → β) (g : γ → δ) (x) : (Prod.map f g x).1 = f x.1 := rfl
@[simp] theorem Prod.map_snd (f : α → β) (g : γ → δ) (x) : (Prod.map f g x).2 = g x.2 := rfl
@@ -1588,7 +1590,7 @@ gen_injective_theorems% PSum
gen_injective_theorems% Sigma
gen_injective_theorems% String
gen_injective_theorems% String.Pos.Raw
gen_injective_theorems% Substring
gen_injective_theorems% Substring.Raw
gen_injective_theorems% Subtype
gen_injective_theorems% Sum
gen_injective_theorems% Task
@@ -2505,8 +2507,7 @@ class Antisymm (r : α → α → Prop) : Prop where
/-- An antisymmetric relation `r` satisfies `r a b → r b a → a = b`. -/
antisymm (a b : α) : r a b → r b a → a = b
/-- `Asymm r` means that the binary relation `r` is asymmetric, that is,
`r a b → ¬ r b a`. -/
/-- `Asymm r` means that the binary relation `r` is asymmetric, that is, `r a b → ¬ r b a`. -/
class Asymm (r : α → α → Prop) : Prop where
/-- An asymmetric relation satisfies `r a b → ¬ r b a`. -/
asymm : ∀ a b, r a b → ¬r b a
@@ -2516,16 +2517,19 @@ class Symm (r : α → α → Prop) : Prop where
/-- A symmetric relation satisfies `r a b → r b a`. -/
symm : ∀ a b, r a b → r b a
/-- `Total X r` means that the binary relation `r` on `X` is total, that is, that for any
`x y : X` we have `r x y` or `r y x`. -/
/-- `Total X r` means that the binary relation `r` on `X` is total, that is, `r a b` or `r b a`. -/
class Total (r : α → α → Prop) : Prop where
/-- A total relation satisfies `r a b ∨ r b a`. -/
/-- A total relation satisfies `r a b` or `r b a`. -/
total : ∀ a b, r a b ∨ r b a
/-- `Irrefl r` means the binary relation `r` is irreflexive, that is, `r x x` never
holds. -/
/-- `Irrefl r` means the binary relation `r` is irreflexive, that is, `r x x` never holds. -/
class Irrefl (r : α → α → Prop) : Prop where
/-- An irreflexive relation satisfies `¬ r a a`. -/
irrefl : ∀ a, ¬r a a
/-- `Trichotomous r` says that `r` is trichotomous, that is, `¬ r a b → ¬ r b a → a = b`. -/
class Trichotomous (r : α → α → Prop) : Prop where
/-- A trichotomous relation `r` satisfies `¬ r a b → ¬ r b a → a = b`. -/
trichotomous (a b : α) : ¬ r a b → ¬ r b a → a = b
end Std

View File

@@ -226,7 +226,7 @@ def swap (xs : Array α) (i j : @& Nat) (hi : i < xs.size := by get_elem_tactic)
let xs' := xs.set i v₂
xs'.set j v₁ (Nat.lt_of_lt_of_eq hj (size_set _).symm)
@[simp] theorem size_swap {xs : Array α} {i j : Nat} {hi hj} : (xs.swap i j hi hj).size = xs.size := by
@[simp, grind =] theorem size_swap {xs : Array α} {i j : Nat} {hi hj} : (xs.swap i j hi hj).size = xs.size := by
change ((xs.set i xs[j]).set j xs[i]
(Nat.lt_of_lt_of_eq hj (size_set _).symm)).size = xs.size
rw [size_set, size_set]
@@ -448,7 +448,7 @@ Examples:
-/
abbrev take (xs : Array α) (i : Nat) : Array α := extract xs 0 i
@[simp] theorem take_eq_extract {xs : Array α} {i : Nat} : xs.take i = xs.extract 0 i := rfl
@[simp, grind =] theorem take_eq_extract {xs : Array α} {i : Nat} : xs.take i = xs.extract 0 i := rfl
/--
Removes the first `i` elements of `xs`. If `xs` has fewer than `i` elements, the new array is empty.
@@ -462,7 +462,7 @@ Examples:
-/
abbrev drop (xs : Array α) (i : Nat) : Array α := extract xs i xs.size
@[simp] theorem drop_eq_extract {xs : Array α} {i : Nat} : xs.drop i = xs.extract i xs.size := rfl
@[simp, grind =] theorem drop_eq_extract {xs : Array α} {i : Nat} : xs.drop i = xs.extract i xs.size := rfl
@[inline]
unsafe def modifyMUnsafe [Monad m] (xs : Array α) (i : Nat) (f : α → m α) : m (Array α) := do
@@ -1295,7 +1295,7 @@ decreasing_by simp_wf; decreasing_trivial_pre_omega
/--
Returns the index of the first element equal to `a`, or the size of the array if no element is equal
Returns the index of the first element equal to `a`, or `none` if no element is equal
to `a`. The index is returned as a `Fin`, which guarantees that it is in bounds.
Examples:
@@ -1704,7 +1704,7 @@ def popWhile (p : α → Bool) (as : Array α) : Array α :=
as
decreasing_by simp_wf; decreasing_trivial_pre_omega
@[simp] theorem popWhile_empty {p : α Bool} :
@[simp, grind =] theorem popWhile_empty {p : α Bool} :
popWhile p #[] = #[] := by
simp [popWhile]
@@ -1751,7 +1751,8 @@ termination_by xs.size - i
decreasing_by simp_wf; exact Nat.sub_succ_lt_self _ _ h
-- This is required in `Lean.Data.PersistentHashMap`.
@[simp] theorem size_eraseIdx {xs : Array α} (i : Nat) (h) : (xs.eraseIdx i h).size = xs.size - 1 := by
@[simp, grind =]
theorem size_eraseIdx {xs : Array α} (i : Nat) (h) : (xs.eraseIdx i h).size = xs.size - 1 := by
induction xs, i, h using Array.eraseIdx.induct with
| @case1 xs i h h' xs' ih =>
unfold eraseIdx

View File

@@ -100,9 +100,15 @@ abbrev push_toList := @toList_push
@[simp, grind =] theorem empty_append {xs : Array α} : #[] ++ xs = xs := by
apply ext'; simp only [toList_append, List.nil_append]
@[simp, grind _=_] theorem append_assoc {xs ys zs : Array α} : xs ++ ys ++ zs = xs ++ (ys ++ zs) := by
@[simp] theorem append_assoc {xs ys zs : Array α} : xs ++ ys ++ zs = xs ++ (ys ++ zs) := by
apply ext'; simp only [toList_append, List.append_assoc]
grind_pattern append_assoc => (xs ++ ys) ++ zs where
xs =/= #[]; ys =/= #[]; zs =/= #[]
grind_pattern append_assoc => xs ++ (ys ++ zs) where
xs =/= #[]; ys =/= #[]; zs =/= #[]
@[simp] theorem appendList_eq_append {xs : Array α} {l : List α} : xs.appendList l = xs ++ l := rfl
@[simp, grind =] theorem toList_appendList {xs : Array α} {l : List α} :
@@ -110,6 +116,4 @@ abbrev push_toList := @toList_push
rw [ appendList_eq_append]; unfold Array.appendList
induction l generalizing xs <;> simp [*]
end Array

View File

@@ -200,7 +200,7 @@ theorem getElem?_extract_of_succ {as : Array α} {j : Nat} :
simp [getElem?_extract]
omega
@[simp, grind =] theorem extract_extract {as : Array α} {i j k l : Nat} :
@[simp] theorem extract_extract {as : Array α} {i j k l : Nat} :
(as.extract i j).extract k l = as.extract (i + k) (min (i + l) j) := by
ext m h₁ h₂
· simp
@@ -208,6 +208,9 @@ theorem getElem?_extract_of_succ {as : Array α} {j : Nat} :
· simp only [size_extract] at h₁ h₂
simp [Nat.add_assoc]
grind_pattern extract_extract => (as.extract i j).extract k l where
as =/= #[]
theorem extract_eq_empty_of_eq_empty {as : Array α} {i j : Nat} (h : as = #[]) :
as.extract i j = #[] := by
simp [h]

View File

@@ -1628,12 +1628,15 @@ theorem filterMap_eq_filter {p : α → Bool} (w : stop = as.size) :
cases as
simp
@[grind =]
theorem filterMap_filterMap {f : α Option β} {g : β Option γ} {xs : Array α} :
filterMap g (filterMap f xs) = filterMap (fun x => (f x).bind g) xs := by
cases xs
simp [List.filterMap_filterMap]
grind_pattern filterMap_filterMap => filterMap g (filterMap f xs) where
f =/= some
g =/= some
@[grind =]
theorem map_filterMap {f : α Option β} {g : β γ} {xs : Array α} :
map g (filterMap f xs) = filterMap (fun x => (f x).map g) xs := by
@@ -2228,8 +2231,8 @@ theorem push_eq_flatten_iff {xss : Array (Array α)} {ys : Array α} {y : α} :
-- zs = cs ++ ds.flatten := by sorry
/-- Two arrays of subarrays are equal iff their flattens coincide, as well as the sizes of the
subarrays. -/
/-- Two arrays of arrays are equal iff their flattens coincide, as well as the sizes of the
arrays. -/
theorem eq_iff_flatten_eq {xss₁ xss₂ : Array (Array α)} :
xss₁ = xss₂ xss₁.flatten = xss₂.flatten map size xss₁ = map size xss₂ := by
cases xss₁ using array₂_induction with
@@ -3325,6 +3328,16 @@ theorem foldr_filterMap {f : α → Option β} {g : β → γ → γ} {xs : Arra
(xs.filterMap f).foldr g init = xs.foldr (fun x y => match f x with | some b => g b y | none => y) init := by
simp [foldr_filterMap']
theorem foldl_flatMap {f : α → Array β} {g : γ → β → γ} {xs : Array α} {init : γ} :
(xs.flatMap f).foldl g init = xs.foldl (fun acc x => (f x).foldl g acc) init := by
rcases xs with l
simp [List.foldl_flatMap]
theorem foldr_flatMap {f : α → Array β} {g : β → γ → γ} {xs : Array α} {init : γ} :
(xs.flatMap f).foldr g init = xs.foldr (fun x acc => (f x).foldr g acc) init := by
rcases xs with l
simp [List.foldr_flatMap]
theorem foldl_map_hom' {g : α → β} {f : α → α → α} {f' : β → β → β} {a : α} {xs : Array α}
{stop : Nat} (h : ∀ x y, f' (g x) (g y) = g (f x y)) (w : stop = xs.size) :
(xs.map g).foldl f' (g a) 0 stop = g (xs.foldl f a) := by

View File

@@ -75,11 +75,11 @@ private theorem cons_lex_cons [BEq α] {lt : α → α → Bool} {a b : α} {xs
Nat.add_min_add_left, Nat.add_lt_add_iff_left, Std.Rco.forIn'_eq_forIn'_toList]
conv =>
lhs; congr; congr
rw [cons_lex_cons.forIn'_congr_aux Std.Rco.toList_eq_if rfl (fun _ _ _ => rfl)]
rw [cons_lex_cons.forIn'_congr_aux Std.Rco.toList_eq_if_roo rfl (fun _ _ _ => rfl)]
simp only [bind_pure_comp, map_pure]
rw [cons_lex_cons.forIn'_congr_aux (if_pos (by omega)) rfl (fun _ _ _ => rfl)]
simp only [Std.toList_Roo_eq_toList_Rco_of_isSome_succ? (lo := 0) (h := rfl),
Std.PRange.UpwardEnumerable.succ?, Nat.add_comm 1, Std.PRange.Nat.toList_Rco_succ_succ,
simp only [Std.toList_roo_eq_toList_rco_of_isSome_succ? (lo := 0) (h := rfl),
Std.PRange.UpwardEnumerable.succ?, Nat.add_comm 1, Std.PRange.Nat.toList_rco_succ_succ,
Option.get_some, List.forIn'_cons, List.size_toArray, List.length_cons, List.length_nil,
Nat.lt_add_one, getElem_append_left, List.getElem_toArray, List.getElem_cons_zero]
cases lt a b
@@ -151,7 +151,7 @@ protected theorem lt_of_le_of_lt [LE α] [LT α] [LawfulOrderLT α] [IsLinearOrd
@[deprecated Array.lt_of_le_of_lt (since := "2025-08-01")]
protected theorem lt_of_le_of_lt' [LT α]
[i₁ : Std.Asymm (· < · : α → α → Prop)]
[i₂ : Std.Antisymm (¬ · < · : α → α → Prop)]
[i₂ : Std.Trichotomous (· < · : α → α → Prop)]
[i₃ : Trans (¬ · < · : α → α → Prop) (¬ · < ·) (¬ · < ·)]
{xs ys zs : Array α} (h₁ : xs ≤ ys) (h₂ : ys < zs) : xs < zs :=
letI := LE.ofLT α
@@ -165,7 +165,7 @@ protected theorem le_trans [LE α] [LT α] [LawfulOrderLT α] [IsLinearOrder α]
@[deprecated Array.le_trans (since := "2025-08-01")]
protected theorem le_trans' [LT α]
[i₁ : Std.Asymm (· < · : α → α → Prop)]
[i₂ : Std.Antisymm (¬ · < · : α → α → Prop)]
[i₂ : Std.Trichotomous (· < · : α → α → Prop)]
[i₃ : Trans (¬ · < · : α → α → Prop) (¬ · < ·) (¬ · < ·)]
{xs ys zs : Array α} (h₁ : xs ≤ ys) (h₂ : ys ≤ zs) : xs ≤ zs :=
letI := LE.ofLT α
@@ -196,7 +196,7 @@ protected theorem le_of_lt [LT α]
protected theorem le_iff_lt_or_eq [LT α]
[Std.Irrefl (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Std.Asymm (· < · : α → α → Prop)]
{xs ys : Array α} : xs ≤ ys ↔ xs < ys ∨ xs = ys := by
simpa using List.le_iff_lt_or_eq (l₁ := xs.toList) (l₂ := ys.toList)
@@ -285,7 +285,7 @@ protected theorem lt_iff_exists [LT α] {xs ys : Array α} :
protected theorem le_iff_exists [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)] {xs ys : Array α} :
[Std.Trichotomous (· < · : α → α → Prop)] {xs ys : Array α} :
xs ≤ ys ↔
(xs = ys.take xs.size) ∨
(∃ (i : Nat) (h₁ : i < xs.size) (h₂ : i < ys.size),
@@ -304,7 +304,7 @@ theorem append_left_lt [LT α] {xs ys zs : Array α} (h : ys < zs) :
theorem append_left_le [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
{xs ys zs : Array α} (h : ys ≤ zs) :
xs ++ ys ≤ xs ++ zs := by
cases xs
@@ -327,9 +327,9 @@ protected theorem map_lt [LT α] [LT β]
protected theorem map_le [LT α] [LT β]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Std.Asymm (· < · : β → β → Prop)]
[Std.Antisymm (¬ · < · : β → β → Prop)]
[Std.Trichotomous (· < · : β → β → Prop)]
{xs ys : Array α} {f : α → β} (w : ∀ x y, x < y → f x < f y) (h : xs ≤ ys) :
map f xs ≤ map f ys := by
cases xs

View File

@@ -55,8 +55,12 @@ instance leAntisymm : Std.Antisymm (· ≤ · : Char → Char → Prop) where
antisymm _ _ := Char.le_antisymm
-- This instance is useful while setting up instances for `String`.
instance ltTrichotomous : Std.Trichotomous (· < · : Char → Char → Prop) where
trichotomous _ _ h₁ h₂ := Char.le_antisymm (by simpa using h₂) (by simpa using h₁)
@[deprecated ltTrichotomous (since := "2025-10-27")]
def notLTAntisymm : Std.Antisymm (¬ · < · : Char → Char → Prop) where
antisymm _ _ h₁ h₂ := Char.le_antisymm (by simpa using h₂) (by simpa using h₁)
antisymm := Char.ltTrichotomous.trichotomous
instance ltAsymm : Std.Asymm (· < · : Char Char Prop) where
asymm _ _ := Char.lt_asymm

View File

@@ -440,11 +440,11 @@ theorem toDyadic_mkRat (a : Int) (b : Nat) (prec : Int) :
cases prec
· simp only [Rat.toDyadic, Int.ofNat_eq_natCast, Int.toNat_natCast, Int.toNat_neg_natCast,
shiftLeft_zero, Int.natCast_mul]
rw [Int.mul_comm d, Int.ediv_ediv (by simp), Int.shiftLeft_mul,
rw [Int.mul_comm d, Int.ediv_ediv_of_nonneg (by simp), Int.shiftLeft_mul,
Int.mul_ediv_cancel _ (by simpa using hm)]
· simp only [Rat.toDyadic, Int.natCast_shiftLeft, Int.negSucc_eq, Int.natCast_add_one,
Int.toNat_neg_natCast, Int.shiftLeft_zero, Int.neg_neg, Int.toNat_natCast, Int.natCast_mul]
rw [Int.mul_comm d, Int.mul_shiftLeft, Int.ediv_ediv (by simp),
rw [Int.mul_comm d, Int.mul_shiftLeft, Int.ediv_ediv_of_nonneg (by simp),
Int.mul_ediv_cancel _ (by simpa using hm)]
theorem toDyadic_eq_ofIntWithPrec (x : Rat) (prec : Int) :
@@ -472,7 +472,7 @@ theorem toRat_toDyadic (x : Rat) (prec : Int) :
Rat.den_ofNat, Nat.one_pow, Nat.mul_one]
split
· simp_all
· rw [Int.ediv_ediv (Int.natCast_nonneg _)]
· rw [Int.ediv_ediv_of_nonneg (Int.natCast_nonneg _)]
congr 1
rw [Int.natCast_ediv, Int.mul_ediv_cancel']
rw [Int.natCast_dvd_natCast]
@@ -495,7 +495,7 @@ theorem toRat_toDyadic (x : Rat) (prec : Int) :
simp only [this, Int.mul_one]
split
· simp_all
· rw [Int.ediv_ediv (Int.natCast_nonneg _)]
· rw [Int.ediv_ediv_of_nonneg (Int.natCast_nonneg _)]
congr 1
rw [Int.natCast_ediv, Int.mul_ediv_cancel']
· simp

View File

@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.Array.Basic
import Init.Data.String.Basic
import Init.Data.String.Search
public section
@@ -47,7 +47,7 @@ Converts a string to a pretty-printer document, replacing newlines in the string
`Std.Format.line`.
-/
def String.toFormat (s : String) : Std.Format :=
Std.Format.joinSep (s.splitOn "\n") Std.Format.line
Std.Format.joinSep (s.split '\n').toList Std.Format.line
instance : ToFormat String.Pos.Raw where
format p := format p.byteIdx

View File

@@ -392,9 +392,9 @@ Examples:
* `(0 : Int) ^ 10 = 0`
* `(-7 : Int) ^ 3 = -343`
-/
protected def pow (m : Int) : Nat → Int
| 0 => 1
| succ n => Int.pow m n * m
protected def pow : Int → Nat → Int
| (m : Nat), n => Int.ofNat (m ^ n)
| m@-[_+1], n => if n % 2 = 0 then Int.ofNat (m.natAbs ^ n) else - Int.ofNat (m.natAbs ^ n)
instance : NatPow Int where
pow := Int.pow
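Two concrete checks of the parity branch (`n % 2`) in the redefined `Int.pow`; these are editor-added illustrations, not part of the diff:

```lean
-- Negative base: odd exponents keep the sign, even exponents drop it.
example : (-2 : Int) ^ 3 = -8 := by decide
example : (-2 : Int) ^ 4 = 16 := by decide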

View File

@@ -24,12 +24,17 @@ theorem natCast_shiftRight (n s : Nat) : n >>> s = (n : Int) >>> s := rfl
theorem negSucc_shiftRight (m n : Nat) :
-[m+1] >>> n = -[m >>>n +1] := rfl
@[grind _=_]
theorem shiftRight_add (i : Int) (m n : Nat) :
i >>> (m + n) = i >>> m >>> n := by
simp only [shiftRight_eq, Int.shiftRight]
cases i <;> simp [Nat.shiftRight_add]
grind_pattern shiftRight_add => i >>> (m + n) where
i =/= 0
grind_pattern shiftRight_add => i >>> m >>> n where
i =/= 0
theorem shiftRight_eq_div_pow (m : Int) (n : Nat) :
m >>> n = m / ((2 ^ n) : Nat) := by
simp only [shiftRight_eq, Int.shiftRight, Nat.shiftRight_eq_div_pow]
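An illustrative instance of `shiftRight_eq_div_pow` (editor addition, not part of the diff): on negative integers, `>>>` agrees with Euclidean division by a power of two, i.e. it rounds toward negative infinity.

```lean
example : (-9 : Int) >>> 1 = -9 / 2 := by decide
example : (-9 : Int) / 2 = -5 := by decide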

View File

@@ -9,3 +9,4 @@ prelude
public import Init.Data.Int.DivMod.Basic
public import Init.Data.Int.DivMod.Bootstrap
public import Init.Data.Int.DivMod.Lemmas
public import Init.Data.Int.DivMod.Pow

View File

@@ -145,6 +145,12 @@ theorem dvd_of_mul_dvd_mul_left {a m n : Int} (ha : a ≠ 0) (h : a * m ∣ a *
theorem dvd_of_mul_dvd_mul_right {a m n : Int} (ha : a ≠ 0) (h : m * a ∣ n * a) : m ∣ n :=
dvd_of_mul_dvd_mul_left ha (by simpa [Int.mul_comm] using h)
theorem dvd_mul_of_dvd_right {a b c : Int} (h : a ∣ c) : a ∣ b * c :=
Int.dvd_trans h (Int.dvd_mul_left b c)
theorem dvd_mul_of_dvd_left {a b c : Int} (h : a ∣ b) : a ∣ b * c :=
Int.dvd_trans h (Int.dvd_mul_right b c)
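A small usage sketch of the new divisibility lemma (editor addition, not part of the diff):

```lean
-- From 3 ∣ 6 (witnessed by 6 = 3 * 2) we get 3 ∣ 5 * 6.
example : (3 : Int) ∣ 5 * 6 := Int.dvd_mul_of_dvd_right ⟨2, by decide⟩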
@[norm_cast] theorem natCast_dvd_natCast {m n : Nat} : (m : Int) ∣ n ↔ m ∣ n where
mp := by
rintro ⟨a, h⟩
@@ -1229,7 +1235,7 @@ private theorem ediv_ediv_of_pos {x y z : Int} (hy : 0 < y) (hz : 0 < z) :
· rw [Int.mul_comm y, Int.mul_assoc, Int.add_mul, Int.mul_comm _ z]
exact Int.lt_mul_of_ediv_lt hy (Int.lt_mul_ediv_self_add hz)
theorem ediv_ediv {x y z : Int} (hy : 0 ≤ y) : x / y / z = x / (y * z) := by
theorem ediv_ediv_of_nonneg {x y z : Int} (hy : 0 ≤ y) : x / y / z = x / (y * z) := by
rcases y with (_ | a) | a
· simp
· rcases z with (_ | b) | b
@@ -1238,6 +1244,21 @@ theorem ediv_ediv {x y z : Int} (hy : 0 ≤ y) : x / y / z = x / (y * z) := by
· simp [Int.negSucc_eq, Int.mul_neg, ediv_ediv_of_pos]
· simp at hy
theorem ediv_ediv {x y z : Int} : x / y / z = x / (y * z) - if y < 0 ∧ ¬ z ∣ x / y then z.sign else 0 := by
rcases y with y | y
· rw [ediv_ediv_of_nonneg (by simp), if_neg (by simp; omega)]
simp
· rw [Int.negSucc_eq, Int.ediv_neg, Int.neg_mul, Int.ediv_neg, Int.neg_ediv, ediv_ediv_of_nonneg (by omega)]
simp
theorem ediv_mul {x y z : Int} : x / (y * z) = x / y / z + if y < 0 ∧ ¬ z ∣ x / y then z.sign else 0 := by
have := ediv_ediv (x := x) (y := y) (z := z)
omega
theorem ediv_mul_of_nonneg {x y z : Int} (hy : 0 ≤ y) : x / (y * z) = x / y / z := by
have := ediv_ediv_of_nonneg (x := x) (y := y) (z := z) hy
omega
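A concrete instance of the `z.sign` correction term in the generalized `ediv_ediv` (editor addition, not part of the diff): with negative `y`, iterated division can differ from dividing by the product by exactly that term.

```lean
-- Here y = -2 < 0 and 2 ∤ 7 / (-2) = -3, so the two sides differ by z.sign = 1.
example : (7 : Int) / (-2) / 2 = -2 := by decide
example : (7 : Int) / ((-2) * 2) = -1 := by decide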
/-! ### tdiv -/
-- `tdiv` analogues of `ediv` lemmas from `Bootstrap.lean`

View File

@@ -0,0 +1,35 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kim Morrison
-/
module
prelude
public import Init.Data.Int.DivMod.Lemmas
public import Init.Data.Int.Pow
/-!
# Lemmas about divisibility of powers
-/
namespace Int
theorem dvd_pow {a b : Int} {n : Nat} (hab : b ∣ a) : b ^ n ∣ a ^ n := by
rcases hab with ⟨c, rfl⟩
rw [Int.mul_pow]
exact Int.dvd_mul_right (b ^ n) (c ^ n)
theorem ediv_pow {a b : Int} {n : Nat} (hab : b ∣ a) :
(a / b) ^ n = a ^ n / b ^ n := by
obtain ⟨c, rfl⟩ := hab
by_cases b = 0
· by_cases n = 0 <;> simp [*, Int.zero_pow]
· simp [Int.mul_pow, Int.pow_ne_zero, *]
theorem tdiv_pow {a b : Int} {n : Nat} (hab : b ∣ a) :
(a.tdiv b) ^ n = (a ^ n).tdiv (b ^ n) := by
rw [Int.tdiv_eq_ediv_of_dvd hab, ediv_pow hab, Int.tdiv_eq_ediv_of_dvd (dvd_pow hab)]
theorem fdiv_pow {a b : Int} {n : Nat} (hab : b ∣ a) :
(a.fdiv b) ^ n = (a ^ n).fdiv (b ^ n) := by
rw [Int.fdiv_eq_ediv_of_dvd hab, ediv_pow hab, Int.fdiv_eq_ediv_of_dvd (dvd_pow hab)]
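An illustrative check of `ediv_pow` on concrete values (editor addition, not part of the diff):

```lean
-- (-8 / 2)^2 = 16 and (-8)^2 / 2^2 = 64 / 4 = 16.
example : ((-8 : Int) / 2) ^ 2 = (-8 : Int) ^ 2 / 2 ^ 2 := by decide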

View File

@@ -552,7 +552,7 @@ protected theorem mul_eq_zero {a b : Int} : a * b = 0 ↔ a = 0 ∨ b = 0 := by
exact match a, b, h with
| .ofNat 0, _, _ => by simp
| _, .ofNat 0, _ => by simp
| .ofNat (a+1), .negSucc b, h => by cases h
| .ofNat (_+1), .negSucc _, h => by cases h
protected theorem mul_ne_zero {a b : Int} (a0 : a 0) (b0 : b 0) : a * b 0 :=
Or.rec a0 b0 Int.mul_eq_zero.mp

View File

@@ -4,7 +4,6 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Data.Int.LemmasAux
public import Init.Data.Int.Cooper
@@ -12,9 +11,7 @@ import all Init.Data.Int.Gcd
public import Init.Data.AC
import all Init.Data.AC
import Init.LawfulBEqTactics
public section
namespace Int.Linear
/-! Helper definitions and theorems for constructing linear arithmetic proofs. -/
@@ -22,8 +19,7 @@ namespace Int.Linear
abbrev Var := Nat
abbrev Context := Lean.RArray Int
@[expose]
def Var.denote (ctx : Context) (v : Var) : Int :=
abbrev Var.denote (ctx : Context) (v : Var) : Int :=
ctx.get v
inductive Expr where
@@ -36,8 +32,7 @@ inductive Expr where
| mulR (a : Expr) (k : Int)
deriving Inhabited, @[expose] BEq
@[expose]
def Expr.denote (ctx : Context) : Expr → Int
abbrev Expr.denote (ctx : Context) : Expr → Int
| .add a b => denote ctx a + denote ctx b
| .sub a b => denote ctx a - denote ctx b
| .neg a => - denote ctx a
@@ -46,6 +41,9 @@ def Expr.denote (ctx : Context) : Expr → Int
| .mulL k e => k * denote ctx e
| .mulR e k => denote ctx e * k
set_option allowUnsafeReducibility true
attribute [semireducible] Var.denote Expr.denote
inductive Poly where
| num (k : Int)
| add (k : Int) (v : Var) (p : Poly)
@@ -68,35 +66,36 @@ protected noncomputable def Poly.beq' (p₁ : Poly) : Poly → Bool :=
intro _ _; subst k₁ v₁
simp [← ih p₂, Bool.and'_eq_and]; rfl
@[expose]
def Poly.denote (ctx : Context) (p : Poly) : Int :=
abbrev Poly.denote (ctx : Context) (p : Poly) : Int :=
match p with
| .num k => k
| .add k v p => k * v.denote ctx + denote ctx p
noncomputable abbrev Poly.denote'.go (ctx : Context) (p : Poly) : Int → Int :=
Poly.rec
(fun k r => Bool.rec
(r + k)
r
(Int.beq' k 0))
(fun k v _ ih r => Bool.rec
(ih (r + k * v.denote ctx))
(ih (r + v.denote ctx))
(Int.beq' k 1))
p
/--
Similar to `Poly.denote`, but produces a denotation better for `simp +arith`.
Remark: we used to convert `Poly` back into `Expr` to achieve that.
-/
@[expose] noncomputable def Poly.denote' (ctx : Context) (p : Poly) : Int :=
noncomputable abbrev Poly.denote' (ctx : Context) (p : Poly) : Int :=
Poly.rec (fun k => k)
(fun k v p _ => Bool.rec
(go p (k * v.denote ctx))
(go p (v.denote ctx))
(denote'.go ctx p (k * v.denote ctx))
(denote'.go ctx p (v.denote ctx))
(Int.beq' k 1))
p
where
go (p : Poly) : Int → Int :=
Poly.rec
(fun k r => Bool.rec
(r + k)
r
(Int.beq' k 0))
(fun k v _ ih r => Bool.rec
(ih (r + k * v.denote ctx))
(ih (r + v.denote ctx))
(Int.beq' k 1))
p
attribute [semireducible] Poly.denote Poly.denote' Poly.denote'.go
@[simp] theorem Poly.denote'_go_eq_denote (ctx : Context) (p : Poly) (r : Int) : denote'.go ctx p r = p.denote ctx + r := by
induction p generalizing r

View File

@@ -14,9 +14,20 @@ namespace Int
/-! # pow -/
@[simp] protected theorem pow_zero (b : Int) : b^0 = 1 := rfl
@[simp, norm_cast]
theorem natCast_pow (m n : Nat) : (m ^ n : Nat) = (m : Int) ^ n := rfl
theorem negSucc_pow (m n : Nat) : (-[m+1] : Int) ^ n = if n % 2 = 0 then Int.ofNat (m.succ ^ n) else Int.negOfNat (m.succ ^ n) := rfl
@[simp] protected theorem pow_zero (m : Int) : m ^ 0 = 1 := by cases m <;> simp [← natCast_pow, negSucc_pow]
protected theorem pow_succ (m : Int) (n : Nat) : m ^ n.succ = m ^ n * m := by
rcases m with _ | a
· rfl
· simp only [negSucc_pow, Nat.succ_mod_succ_eq_zero_iff, Nat.reduceAdd, Nat.mod_two_ne_zero,
Nat.pow_succ, ofNat_eq_natCast, @negOfNat_eq (_ * _), ite_not, apply_ite (· * -[a+1]),
ofNat_mul_negSucc, negOfNat_mul_negSucc]
protected theorem pow_succ (b : Int) (e : Nat) : b ^ (e+1) = (b ^ e) * b := rfl
protected theorem pow_succ' (b : Int) (e : Nat) : b ^ (e+1) = b * (b ^ e) := by
rw [Int.mul_comm, Int.pow_succ]
@@ -32,29 +43,46 @@ protected theorem zero_pow {n : Nat} (h : n ≠ 0) : (0 : Int) ^ n = 0 := by
protected theorem one_pow {n : Nat} : (1 : Int) ^ n = 1 := by
induction n with simp_all [Int.pow_succ]
protected theorem mul_pow {a b : Int} {n : Nat} : (a * b) ^ n = a ^ n * b ^ n := by
induction n with
| zero => simp
| succ n ih =>
rw [Int.pow_succ, Int.pow_succ, Int.pow_succ, ih, Int.mul_assoc, Int.mul_assoc,
Int.mul_left_comm (b^n)]
protected theorem pow_one (a : Int) : a ^ 1 = a := by
rw [Int.pow_succ, Int.pow_zero, Int.one_mul]
protected theorem pow_mul {a : Int} {n m : Nat} : a ^ (n * m) = (a ^ n) ^ m := by
induction m with
| zero => simp
| succ m ih =>
rw [Int.pow_succ, Nat.mul_add_one, Int.pow_add, ih]
protected theorem pow_pos {n : Int} {m : Nat} : 0 < n → 0 < n ^ m := by
induction m with
| zero => simp
| succ m ih => exact fun h => Int.mul_pos (ih h) h
| succ m ih =>
simp only [Int.pow_succ]
exact fun h => Int.mul_pos (ih h) h
protected theorem pow_nonneg {n : Int} {m : Nat} : 0 ≤ n → 0 ≤ n ^ m := by
induction m with
| zero => simp
| succ m ih => exact fun h => Int.mul_nonneg (ih h) h
| succ m ih =>
simp only [Int.pow_succ]
exact fun h => Int.mul_nonneg (ih h) h
protected theorem pow_ne_zero {n : Int} {m : Nat} : n ≠ 0 → n ^ m ≠ 0 := by
induction m with
| zero => simp
| succ m ih => exact fun h => Int.mul_ne_zero (ih h) h
| succ m ih =>
simp only [Int.pow_succ]
exact fun h => Int.mul_ne_zero (ih h) h
instance {n : Int} {m : Nat} [NeZero n] : NeZero (n ^ m) := Int.pow_ne_zero (NeZero.ne _)
@[simp, norm_cast]
protected theorem natCast_pow (b n : Nat) : ((b^n : Nat) : Int) = (b : Int) ^ n := by
match n with
| 0 => rfl
| n + 1 =>
simp only [Nat.pow_succ, Int.pow_succ, Int.natCast_mul, Int.natCast_pow _ n]
instance {n : Int} : NeZero (n^0) := by simp
@[simp]
protected theorem two_pow_pred_sub_two_pow {w : Nat} (h : 0 < w) :
@@ -77,7 +105,7 @@ theorem pow_lt_pow_of_lt {a : Int} {b c : Nat} (ha : 1 < a) (hbc : b < c):
omega
@[simp] theorem natAbs_pow (n : Int) : ∀ (k : Nat), (n ^ k).natAbs = n.natAbs ^ k
| 0 => rfl
| 0 => by simp
| k + 1 => by rw [Int.pow_succ, natAbs_mul, natAbs_pow, Nat.pow_succ]
theorem toNat_pow_of_nonneg {x : Int} (h : 0 ≤ x) (k : Nat) : (x ^ k).toNat = x.toNat ^ k := by
@@ -86,4 +114,21 @@ theorem toNat_pow_of_nonneg {x : Int} (h : 0 ≤ x) (k : Nat) : (x ^ k).toNat =
| succ k ih =>
rw [Int.pow_succ, Int.toNat_mul (Int.pow_nonneg h) h, ih, Nat.pow_succ]
protected theorem sq_nonnneg (m : Int) : 0 ≤ m ^ 2 := by
rw [Int.pow_succ, Int.pow_one]
cases m
· apply Int.mul_nonneg <;> simp
· apply Int.mul_nonneg_of_nonpos_of_nonpos <;> exact negSucc_le_zero _
protected theorem pow_nonneg_of_even {m : Int} {n : Nat} (h : n % 2 = 0) : 0 ≤ m ^ n := by
rw [← Nat.mod_add_div n 2, h, Nat.zero_add, Int.pow_mul]
apply Int.pow_nonneg
exact Int.sq_nonnneg m
protected theorem neg_pow {m : Int} {n : Nat} : (-m)^n = (-1)^(n % 2) * m^n := by
rw [Int.neg_eq_neg_one_mul, Int.mul_pow]
rw (occs := [1]) [← Nat.mod_add_div n 2]
rw [Int.pow_add, Int.pow_mul]
simp [Int.one_pow]
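Two concrete instances of `Int.neg_pow` (editor addition, not part of the diff): the sign factor depends only on `n % 2`, the magnitude is unchanged.

```lean
example : (-(3 : Int)) ^ 3 = (-1) ^ (3 % 2) * 3 ^ 3 := by decide
example : (-(3 : Int)) ^ 4 = (-1) ^ (4 % 2) * 3 ^ 4 := by decide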
end Int

View File

@@ -9,6 +9,7 @@ prelude
public import Init.Data.Iterators.Basic
public import Init.Data.Iterators.PostconditionMonad
public import Init.Data.Iterators.Consumers
public import Init.Data.Iterators.Producers
public import Init.Data.Iterators.Combinators
public import Init.Data.Iterators.Lemmas
public import Init.Data.Iterators.ToIterator

View File

@@ -817,6 +817,24 @@ def IterM.TerminationMeasures.Productive.Rel
TerminationMeasures.Productive α m → TerminationMeasures.Productive α m → Prop :=
Relation.TransGen <| InvImage IterM.IsPlausibleSkipSuccessorOf IterM.TerminationMeasures.Productive.it
theorem IterM.TerminationMeasures.Finite.Rel.of_productive
{α : Type w} {m : Type w → Type w'} {β : Type w} [Iterator α m β] {a b : Finite α m} :
Productive.Rel ⟨a.it⟩ ⟨b.it⟩ → Finite.Rel a b := by
generalize ha' : Productive.mk a.it = a'
generalize hb' : Productive.mk b.it = b'
have ha : a = a'.it := by simp [← ha']
have hb : b = b'.it := by simp [← hb']
rw [ha, hb]
clear ha hb ha' hb' a b
rw [Productive.Rel, Finite.Rel]
intro h
induction h
· rename_i ih
exact .single ⟨_, rfl, ih⟩
· rename_i hab ih
refine .trans ih ?_
exact .single ⟨_, rfl, hab⟩
instance {α : Type w} {m : Type w → Type w'} {β : Type w} [Iterator α m β]
[Productive α m] : WellFoundedRelation (IterM.TerminationMeasures.Productive α m) where
rel := IterM.TerminationMeasures.Productive.Rel

View File

@@ -9,4 +9,5 @@ prelude
public import Init.Data.Iterators.Combinators.Monadic
public import Init.Data.Iterators.Combinators.FilterMap
public import Init.Data.Iterators.Combinators.FlatMap
public import Init.Data.Iterators.Combinators.Take
public import Init.Data.Iterators.Combinators.ULift

View File

@@ -8,4 +8,5 @@ module
prelude
public import Init.Data.Iterators.Combinators.Monadic.FilterMap
public import Init.Data.Iterators.Combinators.Monadic.FlatMap
public import Init.Data.Iterators.Combinators.Monadic.Take
public import Init.Data.Iterators.Combinators.Monadic.ULift

View File

@@ -106,16 +106,6 @@ instance Attach.instIteratorLoopPartial {α β : Type w} {m : Type w → Type w'
IteratorLoopPartial (Attach α m P) m n :=
.defaultImplementation
instance {α β : Type w} {m : Type w → Type w'} [Monad m]
{P : β → Prop} [Iterator α m β] [IteratorSize α m] :
IteratorSize (Attach α m P) m where
size it := IteratorSize.size it.internalState.inner
instance {α β : Type w} {m : Type w → Type w'} [Monad m]
{P : β → Prop} [Iterator α m β] [IteratorSizePartial α m] :
IteratorSizePartial (Attach α m P) m where
size it := IteratorSizePartial.size it.internalState.inner
end Types
/--

View File

@@ -604,30 +604,4 @@ def IterM.filter {α β : Type w} {m : Type w → Type w'} [Iterator α m β] [M
(f : β Bool) (it : IterM (α := α) m β) :=
(it.filterMap (fun b => if f b then some b else none) : IterM m β)
instance {α β γ : Type w} {m : Type w → Type w'}
{n : Type w → Type w''} [Monad n] [Iterator α m β] {lift : ⦃α : Type w⦄ → m α → n α}
{f : β → PostconditionT n (Option γ)} [Finite α m] :
IteratorSize (FilterMap α m n lift f) n :=
.defaultImplementation
instance {α β γ : Type w} {m : Type w → Type w'}
{n : Type w → Type w''} [Monad n] [Iterator α m β] {lift : ⦃α : Type w⦄ → m α → n α}
{f : β → PostconditionT n (Option γ)} :
IteratorSizePartial (FilterMap α m n lift f) n :=
.defaultImplementation
instance {α β γ : Type w} {m : Type w → Type w'}
{n : Type w → Type w''} [Monad n] [Iterator α m β]
{lift : ⦃α : Type w⦄ → m α → n α}
{f : β → PostconditionT n γ} [IteratorSize α m] :
IteratorSize (Map α m n lift f) n where
size it := lift (IteratorSize.size it.internalState.inner)
instance {α β γ : Type w} {m : Type w → Type w'}
{n : Type w → Type w''} [Monad n] [Iterator α m β]
{lift : ⦃α : Type w⦄ → m α → n α}
{f : β → PostconditionT n γ} [IteratorSizePartial α m] :
IteratorSizePartial (Map α m n lift f) n where
size it := lift (IteratorSizePartial.size it.internalState.inner)
end Std.Iterators

View File

@@ -0,0 +1,223 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Nat.Lemmas
public import Init.Data.Iterators.Consumers.Monadic.Collect
public import Init.Data.Iterators.Consumers.Monadic.Loop
public import Init.Data.Iterators.Internal.Termination
@[expose] public section
/-!
This module provides the iterator combinator `IterM.take`.
-/
namespace Std.Iterators
variable {α : Type w} {m : Type w → Type w'} {β : Type w}
/--
The internal state of the `IterM.take` iterator combinator.
-/
@[unbox]
structure Take (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
/--
Internal implementation detail of the iterator library.
Caution: For `take n`, `countdown` is `n + 1`.
If `countdown` is zero, the combinator only terminates when `inner` terminates.
-/
countdown : Nat
/-- Internal implementation detail of the iterator library -/
inner : IterM (α := α) m β
/--
Internal implementation detail of the iterator library.
This proof term ensures that a `take` always produces a finite iterator from a productive one.
-/
finite : countdown > 0 ∨ Finite α m
/--
Given an iterator `it` and a natural number `n`, `it.take n` is an iterator that outputs
up to the first `n` of `it`'s values in order and then terminates.
**Marble diagram:**
```text
it ---a----b---c--d-e--
it.take 3 ---a----b---c⊥
it ---a--
it.take 3 ---a--
```
**Termination properties:**
* `Finite` instance: only if `it` is productive
* `Productive` instance: only if `it` is productive
**Performance:**
This combinator incurs an additional O(1) cost with each output of `it`.
-/
@[always_inline, inline]
def IterM.take [Iterator α m β] (n : Nat) (it : IterM (α := α) m β) :=
toIterM (Take.mk (n + 1) it (Or.inl <| Nat.zero_lt_succ _)) m β
/--
This combinator is only useful for advanced use cases.
Given a finite iterator `it`, returns an iterator that behaves exactly like `it` but is of the same
type as `it.take n`.
**Marble diagram:**
```text
it ---a----b---c--d-e--
it.toTake ---a----b---c--d-e--
```
**Termination properties:**
* `Finite` instance: always
* `Productive` instance: always
**Performance:**
This combinator incurs an additional O(1) cost with each output of `it`.
-/
@[always_inline, inline]
def IterM.toTake [Iterator α m β] [Finite α m] (it : IterM (α := α) m β) :=
toIterM (Take.mk 0 it (Or.inr inferInstance)) m β
theorem IterM.take.surjective_of_zero_lt {α : Type w} {m : Type w → Type w'} {β : Type w}
[Iterator α m β] (it : IterM (α := Take α m) m β) (h : 0 < it.internalState.countdown) :
∃ (it₀ : IterM (α := α) m β) (k : Nat), it = it₀.take k := by
refine ⟨it.internalState.inner, it.internalState.countdown - 1, ?_⟩
simp only [take, Nat.sub_add_cancel (m := 1) (n := it.internalState.countdown) (by omega)]
rfl
inductive Take.PlausibleStep [Iterator α m β] (it : IterM (α := Take α m) m β) :
(step : IterStep (IterM (α := Take α m) m β) β) → Prop where
| yield : ∀ {it' out}, it.internalState.inner.IsPlausibleStep (.yield it' out) →
(h : it.internalState.countdown ≠ 1) → PlausibleStep it (.yield ⟨it.internalState.countdown - 1, it', it.internalState.finite.imp_left (by omega)⟩ out)
| skip : ∀ {it'}, it.internalState.inner.IsPlausibleStep (.skip it') →
it.internalState.countdown ≠ 1 → PlausibleStep it (.skip ⟨it.internalState.countdown, it', it.internalState.finite⟩)
| done : it.internalState.inner.IsPlausibleStep .done → PlausibleStep it .done
| depleted : it.internalState.countdown = 1 →
PlausibleStep it .done
@[always_inline, inline]
instance Take.instIterator [Monad m] [Iterator α m β] : Iterator (Take α m) m β where
IsPlausibleStep := Take.PlausibleStep
step it :=
if h : it.internalState.countdown = 1 then
pure <| .deflate <| .done (.depleted h)
else do
match (← it.internalState.inner.step).inflate with
| .yield it' out h' =>
pure <| .deflate <| .yield ⟨it.internalState.countdown - 1, it', (it.internalState.finite.imp_left (by omega))⟩ out (.yield h' h)
| .skip it' h' => pure <| .deflate <| .skip ⟨it.internalState.countdown, it', it.internalState.finite⟩ (.skip h' h)
| .done h' => pure <| .deflate <| .done (.done h')
def Take.Rel (m : Type w → Type w') [Monad m] [Iterator α m β] [Productive α m] :
IterM (α := Take α m) m β → IterM (α := Take α m) m β → Prop :=
open scoped Classical in
if _ : Finite α m then
InvImage (Prod.Lex Nat.lt_wfRel.rel IterM.TerminationMeasures.Finite.Rel)
(fun it => (it.internalState.countdown, it.internalState.inner.finitelyManySteps))
else
InvImage (Prod.Lex Nat.lt_wfRel.rel IterM.TerminationMeasures.Productive.Rel)
(fun it => (it.internalState.countdown, it.internalState.inner.finitelyManySkips))
theorem Take.rel_of_countdown [Monad m] [Iterator α m β] [Productive α m]
{it it' : IterM (α := Take α m) m β}
(h : it'.internalState.countdown < it.internalState.countdown) : Take.Rel m it' it := by
simp only [Rel]
split <;> exact Prod.Lex.left _ _ h
theorem Take.rel_of_inner [Monad m] [Iterator α m β] [Productive α m] {remaining : Nat}
{it it' : IterM (α := α) m β}
(h : it'.finitelyManySkips.Rel it.finitelyManySkips) :
Take.Rel m (it'.take remaining) (it.take remaining) := by
simp only [Rel]
split
· exact Prod.Lex.right _ (.of_productive h)
· exact Prod.Lex.right _ h
theorem Take.rel_of_zero_of_inner [Monad m] [Iterator α m β]
{it it' : IterM (α := Take α m) m β}
(h : it.internalState.countdown = 0) (h' : it'.internalState.countdown = 0)
(h'' : haveI := it.internalState.finite.resolve_left (by omega); it'.internalState.inner.finitelyManySteps.Rel it.internalState.inner.finitelyManySteps) :
haveI := it.internalState.finite.resolve_left (by omega)
Take.Rel m it' it := by
haveI := it.internalState.finite.resolve_left (by omega)
simp only [Rel, this, reduceDIte, InvImage, h, h']
exact Prod.Lex.right _ h''
private def Take.instFinitenessRelation [Monad m] [Iterator α m β]
[Productive α m] :
FinitenessRelation (Take α m) m where
rel := Take.Rel m
wf := by
rw [Rel]
split
all_goals
apply InvImage.wf
refine fun (a, b) => Prod.lexAccessible (WellFounded.apply ?_ a) (WellFounded.apply ?_) b
· exact WellFoundedRelation.wf
· exact WellFoundedRelation.wf
subrelation {it it'} h := by
obtain ⟨step, h, h'⟩ := h
cases h'
case yield it' out k h' h'' =>
cases h
cases it.internalState.finite
· apply rel_of_countdown
simp only
omega
· by_cases h : it.internalState.countdown = 0
· simp only [h, Nat.zero_le, Nat.sub_eq_zero_of_le]
apply rel_of_zero_of_inner h rfl
exact .single ⟨_, rfl, h'⟩
· apply rel_of_countdown
simp only
omega
case skip it' out k h' h'' =>
cases h
by_cases h : it.internalState.countdown = 0
· simp only [h]
apply Take.rel_of_zero_of_inner h rfl
exact .single ⟨_, rfl, h'⟩
· obtain ⟨it, k, rfl⟩ := IterM.take.surjective_of_zero_lt it (by omega)
apply Take.rel_of_inner
exact IterM.TerminationMeasures.Productive.rel_of_skip h'
case done _ =>
cases h
case depleted _ =>
cases h
instance Take.instFinite [Monad m] [Iterator α m β] [Productive α m] :
Finite (Take α m) m :=
by exact Finite.of_finitenessRelation instFinitenessRelation
instance Take.instIteratorCollect {n : Type w → Type w'} [Monad m] [Monad n] [Iterator α m β] :
IteratorCollect (Take α m) m n :=
.defaultImplementation
instance Take.instIteratorCollectPartial {n : Type w → Type w'} [Monad m] [Monad n] [Iterator α m β] :
IteratorCollectPartial (Take α m) m n :=
.defaultImplementation
instance Take.instIteratorLoop {n : Type x → Type x'} [Monad m] [Monad n] [Iterator α m β] :
IteratorLoop (Take α m) m n :=
.defaultImplementation
instance Take.instIteratorLoopPartial [Monad m] [Monad n] [Iterator α m β] :
IteratorLoopPartial (Take α m) m n :=
.defaultImplementation
end Std.Iterators

View File

@@ -74,7 +74,7 @@ variable {α : Type u} {m : Type u → Type u'} {n : Type max u v → Type v'}
/--
Transforms a step of the base iterator into a step of the `uLift` iterator.
-/
@[always_inline, inline]
@[always_inline, inline, expose]
def Types.ULiftIterator.Monadic.modifyStep (step : IterStep (IterM (α := α) m β) β) :
IterStep (IterM (α := ULiftIterator.{v} α m n β lift) n (ULift.{v} β)) (ULift.{v} β) :=
match step with
@@ -140,15 +140,6 @@ instance Types.ULiftIterator.instIteratorCollectPartial {o} [Monad n] [Monad o]
IteratorCollectPartial (ULiftIterator α m n β lift) n o :=
.defaultImplementation
instance Types.ULiftIterator.instIteratorSize [Monad n] [Iterator α m β] [IteratorSize α m]
[Finite (ULiftIterator α m n β lift) n] :
IteratorSize (ULiftIterator α m n β lift) n :=
.defaultImplementation
instance Types.ULiftIterator.instIteratorSizePartial [Monad n] [Iterator α m β] [IteratorSize α m] :
IteratorSizePartial (ULiftIterator α m n β lift) n :=
.defaultImplementation
/--
Transforms an `m`-monadic iterator with values in `β` into an `n`-monadic iterator with
values in `ULift β`. Requires a `MonadLift m (ULiftT n)` instance.

View File

@@ -0,0 +1,70 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Combinators.Monadic.Take
@[expose] public section
namespace Std.Iterators
/--
Given an iterator `it` and a natural number `n`, `it.take n` is an iterator that outputs
up to the first `n` of `it`'s values in order and then terminates.
**Marble diagram:**
```text
it ---a----b---c--d-e--
it.take 3 ---a----b---c⊥
it ---a--
it.take 3 ---a--
```
**Termination properties:**
* `Finite` instance: only if `it` is productive
* `Productive` instance: only if `it` is productive
**Performance:**
This combinator incurs an additional O(1) cost with each output of `it`.
-/
@[always_inline, inline]
def Iter.take {α : Type w} {β : Type w} [Iterator α Id β] (n : Nat) (it : Iter (α := α) β) :
Iter (α := Take α Id) β :=
it.toIterM.take n |>.toIter
/--
This combinator is only useful for advanced use cases.
Given a finite iterator `it`, returns an iterator that behaves exactly like `it` but is of the same
type as `it.take n`.
**Marble diagram:**
```text
it ---a----b---c--d-e--
it.toTake ---a----b---c--d-e--
```
**Termination properties:**
* `Finite` instance: always
* `Productive` instance: always
**Performance:**
This combinator incurs an additional O(1) cost with each output of `it`.
-/
@[always_inline, inline]
def Iter.toTake {α : Type w} {β : Type w} [Iterator α Id β] [Finite α Id] (it : Iter (α := α) β) :
Iter (α := Take α Id) β :=
it.toIterM.toTake.toIter
end Std.Iterators
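A usage sketch for `Iter.take` (editor addition, not part of the diff; the list producer `List.iter` and the consumer `Iter.toList` are assumed to exist in this API, and the names are illustrative if they differ):

```lean
-- Take at most two outputs from a list-backed iterator:
-- #eval ([1, 2, 3, 4].iter.take 2).toList  -- expected to yield [1, 2]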

View File

@@ -255,28 +255,52 @@ def Iter.Partial.find? {α β : Type w} [Iterator α Id β] [IteratorLoopPartial
(it : Iter.Partial (α := α) β) (f : β → Bool) : Option β :=
Id.run (it.findM? (pure <| .up <| f ·))
@[always_inline, inline, expose, inherit_doc IterM.size]
def Iter.size {α : Type w} {β : Type w} [Iterator α Id β] [IteratorSize α Id]
(it : Iter (α := α) β) : Nat :=
(IteratorSize.size it.toIterM).run.down
/--
Steps through the whole iterator, counting the number of outputs emitted.
@[always_inline, inline, inherit_doc IterM.Partial.size]
def Iter.Partial.size {α : Type w} {β : Type w} [Iterator α Id β] [IteratorSizePartial α Id]
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, expose]
def Iter.count {α : Type w} {β : Type w} [Iterator α Id β] [Finite α Id] [IteratorLoop α Id Id]
(it : Iter (α := α) β) : Nat :=
(IteratorSizePartial.size it.toIterM).run.down
it.toIterM.count.run.down
/--
`LawfulIteratorSize α m` ensures that the `size` function of an iterator behaves as if it
iterated over the whole iterator, counting its elements and causing all the monadic side effects
of the iterations. This is a fairly strong condition for monadic iterators and it will be false
for many efficient implementations of `size` that compute the size without actually iterating.
This class is experimental and users of the iterator API should not explicitly depend on it.
-/
class LawfulIteratorSize (α : Type w) {β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorSize α Id] where
size_eq_size_toArray {it : Iter (α := α) β} : it.size =
haveI : IteratorCollect α Id Id := .defaultImplementation
it.toArray.size
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, expose, deprecated Iter.count (since := "2025-10-29")]
def Iter.size {α : Type w} {β : Type w} [Iterator α Id β] [Finite α Id] [IteratorLoop α Id Id]
(it : Iter (α := α) β) : Nat :=
it.count
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, expose]
def Iter.Partial.count {α : Type w} {β : Type w} [Iterator α Id β] [IteratorLoopPartial α Id Id]
(it : Iter.Partial (α := α) β) : Nat :=
it.it.toIterM.allowNontermination.count.run.down
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, expose, deprecated Iter.Partial.count (since := "2025-10-29")]
def Iter.Partial.size {α : Type w} {β : Type w} [Iterator α Id β] [IteratorLoopPartial α Id Id]
(it : Iter.Partial (α := α) β) : Nat :=
it.count
end Std.Iterators
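
For reference, the renamed consumer can be exercised on a list iterator (a migration sketch, assuming the `List.iter` producer; `Iter.size` now merely delegates to `Iter.count`):

```lean
-- Counts the outputs by stepping through the whole iterator.
#eval [1, 2, 3].iter.count   -- expected: 3
-- Deprecated spelling; still compiles but emits a deprecation warning.
#eval [1, 2, 3].iter.size    -- expected: 3
```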

View File

@@ -88,27 +88,6 @@ class IteratorLoopPartial (α : Type w) (m : Type w → Type w') {β : Type w} [
(it : IterM (α := α) m β) → γ →
((b : β) → it.IsPlausibleIndirectOutput b → (c : γ) → n (ForInStep γ)) → n γ
/--
`IteratorSize α m` provides an implementation of the `IterM.size` function.
This class is experimental and users of the iterator API should not explicitly depend on it.
They can, however, assume that consumers that require an instance will work for all iterators
provided by the standard library.
-/
class IteratorSize (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
size : IterM (α := α) m β → m (ULift Nat)
/--
`IteratorSizePartial α m` provides an implementation of the `IterM.Partial.size` function that
can be used as `it.allowNontermination.size`.
This class is experimental and users of the iterator API should not explicitly depend on it.
They can, however, assume that consumers that require an instance will work for all iterators
provided by the standard library.
-/
class IteratorSizePartial (α : Type w) (m : Type w → Type w') {β : Type w} [Iterator α m β] where
size : IterM (α := α) m β → m (ULift Nat)
end Typeclasses
/-- Internal implementation detail of the iterator library. -/
@@ -315,7 +294,7 @@ instance {m : Type w → Type w'} {n : Type w → Type w''}
forM it f := forIn it PUnit.unit (fun out _ => do f out; return .yield .unit)
instance {m : Type w → Type w'} {n : Type w → Type w''}
{α : Type w} {β : Type w} [Iterator α m β] [Finite α m] [IteratorLoopPartial α m n]
{α : Type w} {β : Type w} [Iterator α m β] [IteratorLoopPartial α m n]
[MonadLiftT m n] :
ForM n (IterM.Partial (α := α) m β) β where
forM it f := forIn it PUnit.unit (fun out _ => do f out; return .yield .unit)
@@ -633,86 +612,58 @@ def IterM.Partial.find? {α β : Type w} {m : Type w → Type w'} [Monad m] [Ite
m (Option β) :=
it.findM? (pure <| .up <| f ·)
section Size
section Count
/--
This is the implementation of the default instance `IteratorSize.defaultImplementation`.
-/
@[always_inline, inline]
def IterM.DefaultConsumers.size {α : Type w} {m : Type w → Type w'} [Monad m] {β : Type w}
[Iterator α m β] [IteratorLoop α m m] [Finite α m] (it : IterM (α := α) m β) :
m (ULift Nat) :=
it.fold (init := .up 0) fun acc _ => .up (acc.down + 1)
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline]
def IterM.count {α : Type w} {m : Type w → Type w'} {β : Type w} [Iterator α m β] [Finite α m]
[IteratorLoop α m m]
[Monad m] (it : IterM (α := α) m β) : m (ULift Nat) :=
it.fold (init := .up 0) fun acc _ => .up (acc.down + 1)
/--
This is the implementation of the default instance `IteratorSizePartial.defaultImplementation`.
-/
@[always_inline, inline]
def IterM.DefaultConsumers.sizePartial {α : Type w} {m : Type w → Type w'} [Monad m] {β : Type w}
[Iterator α m β] [IteratorLoopPartial α m m] (it : IterM (α := α) m β) :
m (ULift Nat) :=
it.allowNontermination.fold (init := .up 0) fun acc _ => .up (acc.down + 1)
/--
This is the default implementation of the `IteratorSize` class.
It simply iterates using `IteratorLoop` and counts the elements.
For certain iterators, more efficient implementations are possible and should be used instead.
-/
@[always_inline, inline]
def IteratorSize.defaultImplementation {α β : Type w} {m : Type w → Type w'} [Monad m]
[Iterator α m β] [Finite α m] [IteratorLoop α m m] :
IteratorSize α m where
size := IterM.DefaultConsumers.size
/--
This is the default implementation of the `IteratorSizePartial` class.
It simply iterates using `IteratorLoopPartial` and counts the elements.
For certain iterators, more efficient implementations are possible and should be used instead.
-/
@[always_inline, inline]
instance IteratorSizePartial.defaultImplementation {α β : Type w} {m : Type w → Type w'} [Monad m]
[Iterator α m β] [IteratorLoopPartial α m m] :
IteratorSizePartial α m where
size := IterM.DefaultConsumers.sizePartial
/--
Computes how many elements the iterator returns. In monadic situations, it is unclear which effects
are caused by calling `size`, and if the monad is nondeterministic, it is also unclear what the
returned value should be. The reference implementation, `IteratorSize.defaultImplementation`,
simply iterates over the whole iterator monadically, counting the number of emitted values.
An `IteratorSize` instance is considered lawful if it is equal to the reference implementation.
**Performance**:
Default performance is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline]
def IterM.size {α : Type} {m : Type → Type w'} {β : Type} [Iterator α m β] [Monad m]
(it : IterM (α := α) m β) [IteratorSize α m] : m Nat :=
ULift.down <$> IteratorSize.size it
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, deprecated IterM.count (since := "2025-10-29")]
def IterM.size {α : Type w} {m : Type w → Type w'} {β : Type w} [Iterator α m β] [Finite α m]
[IteratorLoop α m m]
[Monad m] (it : IterM (α := α) m β) : m (ULift Nat) :=
it.count
/--
Computes how many elements the iterator emits.
With monadic iterators (`IterM`), it is unclear which effects
are caused by calling `size`, and if the monad is nondeterministic, it is also unclear what the
returned value should be. The reference implementation, `IteratorSize.defaultImplementation`,
simply iterates over the whole iterator monadically, counting the number of emitted values.
An `IteratorSize` instance is considered lawful if it is equal to the reference implementation.
This is the partial version of `size`. It does not require a proof of finiteness and might loop
forever. It is not possible to verify the behavior in Lean because it uses `partial`.
**Performance**:
Default performance is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline]
def IterM.Partial.size {α : Type} {m : Type → Type w'} {β : Type} [Iterator α m β] [Monad m]
(it : IterM.Partial (α := α) m β) [IteratorSizePartial α m] : m Nat :=
ULift.down <$> IteratorSizePartial.size it.it
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline]
def IterM.Partial.count {α : Type w} {m : Type w → Type w'} {β : Type w} [Iterator α m β]
[IteratorLoopPartial α m m] [Monad m] (it : IterM.Partial (α := α) m β) : m (ULift Nat) :=
it.fold (init := .up 0) fun acc _ => .up (acc.down + 1)
end Size
/--
Steps through the whole iterator, counting the number of outputs emitted.
**Performance**:
This function's runtime is linear in the number of steps taken by the iterator.
-/
@[always_inline, inline, deprecated IterM.Partial.count (since := "2025-10-29")]
def IterM.Partial.size {α : Type w} {m : Type w Type w'} {β : Type w} [Iterator α m β]
[IteratorLoopPartial α m m] [Monad m] (it : IterM.Partial (α := α) m β) : m (ULift Nat) :=
it.count
end Count
end Std.Iterators
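
In the monadic setting the count is wrapped in `ULift Nat` inside the monad. A minimal sketch using `Id` (assuming the `List.iterM` producer that appears later in this diff):

```lean
-- `IterM.count` returns `m (ULift Nat)`; in `Id` we can run and unlift it.
#eval (([1, 2, 3].iterM Id).count).run.down   -- expected: 3
```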

View File

@@ -8,3 +8,4 @@ module
prelude
public import Init.Data.Iterators.Lemmas.Consumers
public import Init.Data.Iterators.Lemmas.Combinators
public import Init.Data.Iterators.Lemmas.Producers

View File

@@ -10,4 +10,5 @@ public import Init.Data.Iterators.Lemmas.Combinators.Attach
public import Init.Data.Iterators.Lemmas.Combinators.Monadic
public import Init.Data.Iterators.Lemmas.Combinators.FilterMap
public import Init.Data.Iterators.Lemmas.Combinators.FlatMap
public import Init.Data.Iterators.Lemmas.Combinators.Take
public import Init.Data.Iterators.Lemmas.Combinators.ULift

View File

@@ -11,6 +11,7 @@ import all Init.Data.Iterators.Combinators.Attach
import all Init.Data.Iterators.Combinators.Monadic.Attach
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.Attach
public import Init.Data.Iterators.Lemmas.Consumers.Collect
public import Init.Data.Iterators.Lemmas.Consumers.Loop
public import Init.Data.Array.Attach
public section
@@ -82,4 +83,14 @@ theorem Iter.toArray_attachWith [Iterator α Id β]
simpa only [Array.toList_inj]
simp [Iter.toList_toArray]
@[simp]
theorem Iter.count_attachWith [Iterator α Id β]
{it : Iter (α := α) β} {hP}
[Finite α Id] [IteratorLoop α Id Id]
[LawfulIteratorLoop α Id Id] :
(it.attachWith P hP).count = it.count := by
letI : IteratorCollect α Id Id := .defaultImplementation
rw [← Iter.length_toList_eq_count, toList_attachWith]
simp
end Std.Iterators

View File

@@ -467,6 +467,17 @@ theorem Iter.fold_map {α β γ : Type w} {δ : Type x}
end Fold
section Count
@[simp]
theorem Iter.count_map {α β β' : Type w} [Iterator α Id β]
[IteratorLoop α Id Id] [Finite α Id] [LawfulIteratorLoop α Id Id]
{it : Iter (α := α) β} {f : β → β'} :
(it.map f).count = it.count := by
simp [map_eq_toIter_map_toIterM, count_eq_count_toIterM]
end Count
theorem Iter.anyM_filterMapM {α β β' : Type w} {m : Type w → Type w'}
[Iterator α Id β] [Finite α Id] [Monad m] [LawfulMonad m]
{it : Iter (α := α) β} {f : β → m (Option β')} {p : β' → m (ULift Bool)} :

View File

@@ -9,4 +9,5 @@ prelude
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.Attach
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.FilterMap
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.FlatMap
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.Take
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.ULift

View File

@@ -9,6 +9,7 @@ prelude
public import Init.Data.Iterators.Combinators.Monadic.Attach
import all Init.Data.Iterators.Combinators.Monadic.Attach
public import Init.Data.Iterators.Lemmas.Consumers.Monadic.Collect
public import Init.Data.Iterators.Lemmas.Consumers.Monadic.Loop
public section
@@ -59,4 +60,14 @@ theorem IterM.map_unattach_toArray_attachWith [Iterator α m β] [Monad m] [Mona
rw [← toArray_toList, ← toArray_toList, map_unattach_toList_attachWith (it := it) (hP := hP)]
simp [-map_unattach_toList_attachWith, -IterM.toArray_toList]
@[simp]
theorem IterM.count_attachWith [Iterator α m β] [Monad m] [Monad n]
{it : IterM (α := α) m β} {hP}
[Finite α m] [IteratorLoop α m m] [LawfulMonad m] [LawfulIteratorLoop α m m] :
(it.attachWith P hP).count = it.count := by
letI : IteratorCollect α m m := .defaultImplementation
rw [← up_length_toList_eq_count, ← up_length_toList_eq_count,
map_unattach_toList_attachWith (it := it) (P := P) (hP := hP)]
simp only [Functor.map_map, List.length_unattach]
end Std.Iterators

View File

@@ -895,6 +895,23 @@ theorem IterM.fold_map {α β γ δ : Type w} {m : Type w → Type w'}
end Fold
section Count
@[simp]
theorem IterM.count_map {α β β' : Type w} {m : Type w → Type w'} [Iterator α m β] [Monad m]
[IteratorLoop α m m] [Finite α m] [LawfulMonad m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} {f : β → β'} :
(it.map f).count = it.count := by
induction it using IterM.inductSteps with | step it ihy ihs
rw [count_eq_match_step, count_eq_match_step, step_map, bind_assoc]
apply bind_congr; intro step
cases step.inflate using PlausibleIterStep.casesOn
· simp [ihy _]
· simp [ihs _]
· simp
end Count
section AnyAll
theorem IterM.anyM_filterMapM {α β β' : Type w} {m : Type w → Type w'} {n : Type w → Type w''}

View File

@@ -0,0 +1,77 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Combinators.Monadic.Take
public import Init.Data.Iterators.Lemmas.Consumers.Monadic
@[expose] public section
namespace Std.Iterators
theorem Take.isPlausibleStep_take_yield [Monad m] [Iterator α m β] {n : Nat}
{it : IterM (α := α) m β} (h : it.IsPlausibleStep (.yield it' out)) :
(it.take (n + 1)).IsPlausibleStep (.yield (it'.take n) out) :=
(.yield h (by simp [IterM.take]))
theorem Take.isPlausibleStep_take_skip [Monad m] [Iterator α m β] {n : Nat}
{it : IterM (α := α) m β} (h : it.IsPlausibleStep (.skip it')) :
(it.take (n + 1)).IsPlausibleStep (.skip (it'.take (n + 1))) :=
(.skip h (by simp [IterM.take]))
theorem IterM.step_take {α m β} [Monad m] [Iterator α m β] {n : Nat}
{it : IterM (α := α) m β} :
(it.take n).step = (match n with
| 0 => pure <| .deflate <| .done (.depleted rfl)
| k + 1 => do
match (← it.step).inflate with
| .yield it' out h => pure <| .deflate <| .yield (it'.take k) out (Take.isPlausibleStep_take_yield h)
| .skip it' h => pure <| .deflate <| .skip (it'.take (k + 1)) (Take.isPlausibleStep_take_skip h)
| .done h => pure <| .deflate <| .done (.done h)) := by
simp only [take, step, Iterator.step, internalState_toIterM]
cases n
case zero => rfl
case succ k =>
apply bind_congr
intro step
cases step.inflate using PlausibleIterStep.casesOn <;> rfl
theorem IterM.toList_take_zero {α m β} [Monad m] [LawfulMonad m] [Iterator α m β]
[Finite (Take α m) m]
[IteratorCollect (Take α m) m m] [LawfulIteratorCollect (Take α m) m m]
{it : IterM (α := α) m β} :
(it.take 0).toList = pure [] := by
rw [toList_eq_match_step]
simp [step_take]
theorem IterM.step_toTake {α m β} [Monad m] [Iterator α m β] [Finite α m]
{it : IterM (α := α) m β} :
it.toTake.step = (do
match (← it.step).inflate with
| .yield it' out h => pure <| .deflate <| .yield it'.toTake out (.yield h Nat.zero_ne_one)
| .skip it' h => pure <| .deflate <| .skip it'.toTake (.skip h Nat.zero_ne_one)
| .done h => pure <| .deflate <| .done (.done h)) := by
simp only [toTake, step, Iterator.step, internalState_toIterM]
apply bind_congr
intro step
cases step.inflate using PlausibleIterStep.casesOn <;> rfl
@[simp]
theorem IterM.toList_toTake {α m β} [Monad m] [LawfulMonad m] [Iterator α m β] [Finite α m]
[IteratorCollect α m m] [LawfulIteratorCollect α m m]
{it : IterM (α := α) m β} :
it.toTake.toList = it.toList := by
induction it using IterM.inductSteps with | step it ihy ihs
rw [toList_eq_match_step, toList_eq_match_step]
simp only [step_toTake, bind_assoc]
apply bind_congr; intro step
cases step.inflate using PlausibleIterStep.casesOn
· simp [ihy _]
· simp [ihs _]
· simp
end Std.Iterators

View File

@@ -7,8 +7,8 @@ module
prelude
public import Init.Data.Iterators.Combinators.Monadic.ULift
import all Init.Data.Iterators.Combinators.Monadic.ULift
public import Init.Data.Iterators.Lemmas.Consumers.Monadic.Collect
public import Init.Data.Iterators.Lemmas.Consumers.Monadic.Loop
public section
@@ -20,9 +20,13 @@ variable {α : Type u} {m : Type u → Type u'} {n : Type max u v → Type v'}
theorem IterM.step_uLift [Iterator α m β] [Monad n] {it : IterM (α := α) m β}
[MonadLiftT m (ULiftT n)] :
(it.uLift n).step = (do
let step := (← (monadLift it.step : ULiftT n _).run).down
return .deflate ⟨Types.ULiftIterator.Monadic.modifyStep step.inflate.val, step.inflate.val, step.inflate.property, rfl⟩) :=
rfl
match (← (monadLift it.step : ULiftT n _).run).down.inflate with
| .yield it' out h => return .deflate (.yield (it'.uLift n) (.up out) ⟨_, h, rfl⟩)
| .skip it' h => return .deflate (.skip (it'.uLift n) ⟨_, h, rfl⟩)
| .done h => return .deflate (.done ⟨_, h, rfl⟩)) := by
simp only [IterM.step, Iterator.step, IterM.uLift]
apply bind_congr; intro step
split <;> simp [Types.ULiftIterator.Monadic.modifyStep, *]
@[simp]
theorem IterM.toList_uLift [Iterator α m β] [Monad m] [Monad n] {it : IterM (α := α) m β}
@@ -33,14 +37,11 @@ theorem IterM.toList_uLift [Iterator α m β] [Monad m] [Monad n] {it : IterM (
(fun l => l.down.map ULift.up) <$> (monadLift it.toList : ULiftT n _).run := by
induction it using IterM.inductSteps with | step it ihy ihs
rw [IterM.toList_eq_match_step, IterM.toList_eq_match_step, step_uLift]
simp only [bind_pure_comp, bind_map_left, liftM_bind, ULiftT.run_bind, map_bind]
apply bind_congr
intro step
simp [Types.ULiftIterator.Monadic.modifyStep]
simp only [bind_assoc, map_eq_pure_bind, monadLift_bind, ULiftT.run_bind]
apply bind_congr; intro step
cases step.down.inflate using PlausibleIterStep.casesOn
· simp only [uLift] at ihy
simp [ihy _]
· exact ihs _
· simp [ihy _]
· simp [ihs _]
· simp
@[simp]
@@ -63,4 +64,20 @@ theorem IterM.toArray_uLift [Iterator α m β] [Monad m] [Monad n] {it : IterM (
rw [← toArray_toList, ← toArray_toList, toList_uLift, monadLift_map]
simp
@[simp]
theorem IterM.count_uLift [Iterator α m β] [Monad m] [Monad n] {it : IterM (α := α) m β}
[MonadLiftT m (ULiftT n)] [Finite α m] [IteratorLoop α m m]
[LawfulMonad m] [LawfulMonad n] [LawfulIteratorLoop α m m]
[LawfulMonadLiftT m (ULiftT n)] :
(it.uLift n).count =
(.up ·.down.down) <$> (monadLift (n := ULiftT n) it.count).run := by
induction it using IterM.inductSteps with | step it ihy ihs
rw [count_eq_match_step, count_eq_match_step, monadLift_bind, map_eq_pure_bind, step_uLift]
simp only [bind_assoc, ULiftT.run_bind]
apply bind_congr; intro step
cases step.down.inflate using PlausibleIterStep.casesOn
· simp [ihy _]
· simp [ihs _]
· simp
end Std.Iterators

View File

@@ -6,8 +6,8 @@ Authors: Paul Reichert
module
prelude
public import Std.Data.Iterators.Combinators.Take
public import Std.Data.Iterators.Lemmas.Combinators.Monadic.Take
public import Init.Data.Iterators.Combinators.Take
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.Take
public import Init.Data.Iterators.Lemmas.Consumers
@[expose] public section
@@ -19,14 +19,19 @@ theorem Iter.take_eq_toIter_take_toIterM {α β} [Iterator α Id β] {n : Nat}
it.take n = (it.toIterM.take n).toIter :=
rfl
theorem Iter.toTake_eq_toIter_toTake_toIterM {α β} [Iterator α Id β] [Finite α Id]
{it : Iter (α := α) β} :
it.toTake = it.toIterM.toTake.toIter :=
rfl
theorem Iter.step_take {α β} [Iterator α Id β] {n : Nat}
{it : Iter (α := α) β} :
(it.take n).step = (match n with
| 0 => .done (.depleted rfl)
| k + 1 =>
match it.step with
| .yield it' out h => .yield (it'.take k) out (.yield h rfl)
| .skip it' h => .skip (it'.take (k + 1)) (.skip h rfl)
| .yield it' out h => .yield (it'.take k) out (Take.isPlausibleStep_take_yield h)
| .skip it' h => .skip (it'.take (k + 1)) (Take.isPlausibleStep_take_skip h)
| .done h => .done (.done h)) := by
simp only [Iter.step, Iter.step, Iter.take_eq_toIter_take_toIterM, IterM.step_take, toIterM_toIter]
cases n
@@ -88,11 +93,29 @@ theorem Iter.toArray_take_of_finite {α β} [Iterator α Id β] {n : Nat}
@[simp]
theorem Iter.toList_take_zero {α β} [Iterator α Id β]
[Finite (Take α Id β) Id]
[IteratorCollect (Take α Id β) Id Id] [LawfulIteratorCollect (Take α Id β) Id Id]
[Finite (Take α Id) Id]
[IteratorCollect (Take α Id) Id Id] [LawfulIteratorCollect (Take α Id) Id Id]
{it : Iter (α := α) β} :
(it.take 0).toList = [] := by
rw [toList_eq_match_step]
simp [step_take]
theorem Iter.step_toTake {α β} [Iterator α Id β] [Finite α Id]
{it : Iter (α := α) β} :
it.toTake.step = (
match it.step with
| .yield it' out h => .yield it'.toTake out (.yield h Nat.zero_ne_one)
| .skip it' h => .skip it'.toTake (.skip h Nat.zero_ne_one)
| .done h => .done (.done h)) := by
simp only [toTake_eq_toIter_toTake_toIterM, Iter.step, toIterM_toIter, IterM.step_toTake,
Id.run_bind]
cases it.toIterM.step.run.inflate using PlausibleIterStep.casesOn <;> simp
@[simp]
theorem Iter.toList_toTake {α β} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
{it : Iter (α := α) β} :
it.toTake.toList = it.toList := by
simp [toTake_eq_toIter_toTake_toIterM, toList_eq_toList_toIterM]
end Std.Iterators

View File

@@ -10,6 +10,7 @@ public import Init.Data.Iterators.Combinators.ULift
import all Init.Data.Iterators.Combinators.ULift
public import Init.Data.Iterators.Lemmas.Combinators.Monadic.ULift
public import Init.Data.Iterators.Lemmas.Consumers.Collect
public import Init.Data.Iterators.Lemmas.Consumers.Loop
public section
@@ -22,14 +23,16 @@ theorem Iter.uLift_eq_toIter_uLift_toIterM {it : Iter (α := α) β} :
rfl
theorem Iter.step_uLift [Iterator α Id β] {it : Iter (α := α) β} :
it.uLift.step =
⟨Types.ULiftIterator.modifyStep it.step.val,
it.step.val.mapIterator Iter.toIterM, it.step.property,
by simp [Types.ULiftIterator.modifyStep]⟩ := by
it.uLift.step = match it.step with
| .yield it' out h => .yield it'.uLift (.up out) ⟨_, h, rfl⟩
| .skip it' h => .skip it'.uLift ⟨_, h, rfl⟩
| .done h => .done ⟨_, h, rfl⟩ := by
rw [Subtype.ext_iff]
simp only [uLift_eq_toIter_uLift_toIterM, step, IterM.Step.toPure, toIterM_toIter,
IterM.step_uLift, bind_pure_comp, Id.run_map, toIter_toIterM]
simp [Types.ULiftIterator.modifyStep, monadLift]
IterM.step_uLift, toIter_toIterM]
simp only [monadLift, ULiftT.run_pure, PlausibleIterStep.yield, PlausibleIterStep.skip,
PlausibleIterStep.done, pure_bind]
cases it.toIterM.step.run.inflate using PlausibleIterStep.casesOn <;> simp
@[simp]
theorem Iter.toList_uLift [Iterator α Id β] {it : Iter (α := α) β}
@@ -55,4 +58,12 @@ theorem Iter.toArray_uLift [Iterator α Id β] {it : Iter (α := α) β}
rw [ toArray_toList, toArray_toList, toList_uLift]
simp [-toArray_toList]
@[simp]
theorem Iter.count_uLift [Iterator α Id β] {it : Iter (α := α) β}
[Finite α Id] [IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id] :
it.uLift.count = it.count := by
simp only [monadLift, uLift_eq_toIter_uLift_toIterM, count_eq_count_toIterM, toIterM_toIter]
rw [IterM.count_uLift]
simp [monadLift]
end Std.Iterators

View File

@@ -68,8 +68,7 @@ theorem Iter.forIn_eq {α β : Type w} [Iterator α Id β] [Finite α Id]
rw [← h]
theorem Iter.forIn'_eq_forIn'_toIterM {α β : Type w} [Iterator α Id β]
[Finite α Id] {m : Type w → Type w''} [Monad m] [LawfulMonad m]
[IteratorLoop α Id m] [LawfulIteratorLoop α Id m]
[Finite α Id] {m : Type w → Type w''} [Monad m] [LawfulMonad m] [IteratorLoop α Id m]
{γ : Type w} {it : Iter (α := α) β} {init : γ}
{f : (out : β) → _ → γ → m (ForInStep γ)} :
letI : ForIn' m (Iter (α := α) β) β _ := Iter.instForIn'
@@ -81,7 +80,7 @@ theorem Iter.forIn'_eq_forIn'_toIterM {α β : Type w} [Iterator α Id β]
theorem Iter.forIn_eq_forIn_toIterM {α β : Type w} [Iterator α Id β]
[Finite α Id] {m : Type w → Type w''} [Monad m] [LawfulMonad m]
[IteratorLoop α Id m] [LawfulIteratorLoop α Id m]
[IteratorLoop α Id m]
{γ : Type w} {it : Iter (α := α) β} {init : γ}
{f : β → γ → m (ForInStep γ)} :
ForIn.forIn it init f =
@@ -331,7 +330,7 @@ theorem Iter.foldM_eq_forIn {α β : Type w} {γ : Type x} [Iterator α Id β] [
theorem Iter.foldM_eq_foldM_toIterM {α β : Type w} [Iterator α Id β]
[Finite α Id] {m : Type w → Type w''} [Monad m] [LawfulMonad m]
[IteratorLoop α Id m] [LawfulIteratorLoop α Id m]
[IteratorLoop α Id m]
{γ : Type w} {it : Iter (α := α) β} {init : γ} {f : γ → β → m γ} :
it.foldM (init := init) f = it.toIterM.foldM (init := init) f := by
simp [foldM_eq_forIn, IterM.foldM_eq_forIn, forIn_eq_forIn_toIterM]
@@ -396,7 +395,7 @@ theorem Iter.fold_eq_foldM {α β : Type w} {γ : Type x} [Iterator α Id β]
simp [foldM_eq_forIn, fold_eq_forIn]
theorem Iter.fold_eq_fold_toIterM {α β : Type w} {γ : Type w} [Iterator α Id β]
[Finite α Id] [IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
[Finite α Id] [IteratorLoop α Id Id]
{f : γ → β → γ} {init : γ} {it : Iter (α := α) β} :
it.fold (init := init) f = (it.toIterM.fold (init := init) f).run := by
rw [fold_eq_foldM, foldM_eq_foldM_toIterM, IterM.fold_eq_foldM]
@@ -422,8 +421,9 @@ theorem Iter.fold_eq_match_step {α β : Type w} {γ : Type x} [Iterator α Id
cases step using PlausibleIterStep.casesOn <;> simp
-- The argument `f : γ₁ → γ₂` is intentionally explicit, as it is sometimes not found by unification.
theorem Iter.fold_hom [Iterator α Id β] [Finite α Id]
[IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
theorem Iter.fold_hom {γ₁ : Type x₁} {γ₂ : Type x₂} [Iterator α Id β] [Finite α Id]
[IteratorLoop α Id Id.{x₁}] [LawfulIteratorLoop α Id Id.{x₁}]
[IteratorLoop α Id Id.{x₂}] [LawfulIteratorLoop α Id Id.{x₂}]
{it : Iter (α := α) β}
(f : γ₁ → γ₂) {g₁ : γ₁ → β → γ₁} {g₂ : γ₂ → β → γ₂} {init : γ₁}
(H : ∀ x y, g₂ (f x) y = f (g₁ x y)) :
@@ -469,30 +469,72 @@ theorem Iter.foldl_toArray {α β : Type w} {γ : Type x} [Iterator α Id β] [F
it.toArray.foldl (init := init) f = it.fold (init := init) f := by
rw [fold_eq_foldM, Array.foldl_eq_foldlM, Iter.foldlM_toArray]
@[simp]
theorem Iter.size_toArray_eq_size {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorSize α Id] [LawfulIteratorSize α]
{it : Iter (α := α) β} :
it.toArray.size = it.size := by
simp only [toArray_eq_toArray_toIterM, LawfulIteratorCollect.toArray_eq]
simp [← toArray_eq_toArray_toIterM, LawfulIteratorSize.size_eq_size_toArray]
theorem Iter.count_eq_count_toIterM {α β : Type w} [Iterator α Id β]
[Finite α Id] [IteratorLoop α Id Id.{w}] {it : Iter (α := α) β} :
it.count = it.toIterM.count.run.down :=
(rfl)
theorem Iter.count_eq_fold {α β : Type w} [Iterator α Id β]
[Finite α Id] [IteratorLoop α Id Id.{w}] [LawfulIteratorLoop α Id Id.{w}]
[IteratorLoop α Id Id.{0}] [LawfulIteratorLoop α Id Id.{0}]
{it : Iter (α := α) β} :
it.count = it.fold (γ := Nat) (init := 0) (fun acc _ => acc + 1) := by
rw [count_eq_count_toIterM, IterM.count_eq_fold, fold_eq_fold_toIterM]
rw [← fold_hom (f := ULift.down)]
simp
theorem Iter.count_eq_forIn {α β : Type w} [Iterator α Id β]
[Finite α Id] [IteratorLoop α Id Id.{w}] [LawfulIteratorLoop α Id Id.{w}]
[IteratorLoop α Id Id.{0}] [LawfulIteratorLoop α Id Id.{0}]
{it : Iter (α := α) β} :
it.count = (ForIn.forIn (m := Id) it 0 (fun _ acc => return .yield (acc + 1))).run := by
rw [count_eq_fold, forIn_pure_yield_eq_fold, Id.run_pure]
theorem Iter.count_eq_match_step {α β : Type w} [Iterator α Id β]
[Finite α Id] [IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
{it : Iter (α := α) β} :
it.count = (match it.step.val with
| .yield it' _ => it'.count + 1
| .skip it' => it'.count
| .done => 0) := by
simp only [count_eq_count_toIterM]
rw [IterM.count_eq_match_step]
simp only [bind_pure_comp, id_map', Id.run_bind, Iter.step]
cases it.toIterM.step.run.inflate using PlausibleIterStep.casesOn <;> simp
@[simp]
theorem Iter.length_toList_eq_size {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorSize α Id] [LawfulIteratorSize α]
{it : Iter (α := α) β} :
it.toList.length = it.size := by
rw [← toList_toArray, Array.length_toList, size_toArray_eq_size]
@[simp]
theorem Iter.size_toArray_eq_count {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
{it : Iter (α := α) β} :
it.toArray.size = it.count := by
simp only [toArray_eq_toArray_toIterM, count_eq_count_toIterM, Id.run_map,
IterM.up_size_toArray_eq_count]
@[deprecated Iter.size_toArray_eq_count (since := "2025-10-29")]
def Iter.size_toArray_eq_size := @size_toArray_eq_count
@[simp]
theorem Iter.length_toListRev_eq_size {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorSize α Id] [LawfulIteratorSize α]
{it : Iter (α := α) β} :
it.toListRev.length = it.size := by
rw [toListRev_eq, List.length_reverse, length_toList_eq_size]
@[simp]
theorem Iter.length_toList_eq_count {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
{it : Iter (α := α) β} :
it.toList.length = it.count := by
rw [← toList_toArray, Array.length_toList, size_toArray_eq_count]
@[deprecated Iter.length_toList_eq_count (since := "2025-10-29")]
def Iter.length_toList_eq_size := @length_toList_eq_count
@[simp]
theorem Iter.length_toListRev_eq_count {α β : Type w} [Iterator α Id β] [Finite α Id]
[IteratorCollect α Id Id] [LawfulIteratorCollect α Id Id]
[IteratorLoop α Id Id] [LawfulIteratorLoop α Id Id]
{it : Iter (α := α) β} :
it.toListRev.length = it.count := by
rw [toListRev_eq, List.length_reverse, length_toList_eq_count]
@[deprecated Iter.length_toListRev_eq_count (since := "2025-10-29")]
def Iter.length_toListRev_eq_size := @length_toListRev_eq_count
theorem Iter.anyM_eq_forIn {α β : Type w} {m : Type → Type w'} [Iterator α Id β]
[Finite α Id] [Monad m] [LawfulMonad m] [IteratorLoop α Id m] [LawfulIteratorLoop α Id m]

View File

@@ -335,6 +335,73 @@ theorem IterM.drain_eq_map_toArray {α β : Type w} {m : Type w → Type w'} [It
it.drain = (fun _ => .unit) <$> it.toList := by
simp [IterM.drain_eq_map_toList]
theorem IterM.count_eq_fold {α β : Type w} {m : Type w → Type w'} [Iterator α m β]
[Finite α m] [Monad m] [LawfulMonad m] [IteratorLoop α m m]
{it : IterM (α := α) m β} :
it.count = it.fold (init := .up 0) (fun acc _ => .up <| acc.down + 1) :=
(rfl)
theorem IterM.count_eq_forIn {α β : Type w} {m : Type w → Type w'} [Iterator α m β]
[Finite α m] [Monad m] [LawfulMonad m] [IteratorLoop α m m]
{it : IterM (α := α) m β} :
it.count = ForIn.forIn it (.up 0) (fun _ acc => return .yield (.up (acc.down + 1))) :=
(rfl)
theorem IterM.count_eq_match_step {α β : Type w} {m : Type w → Type w'} [Iterator α m β]
[Finite α m] [Monad m] [LawfulMonad m] [IteratorLoop α m m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} :
it.count = (do
match (← it.step).inflate.val with
| .yield it' _ => return .up ((← it'.count).down + 1)
| .skip it' => return .up (← it'.count).down
| .done => return .up 0) := by
simp only [count_eq_fold]
have (acc : Nat) (it' : IterM (α := α) m β) :
it'.fold (init := ULift.up acc) (fun acc _ => .up (acc.down + 1)) =
(ULift.up <| ·.down + acc) <$>
it'.fold (init := ULift.up 0) (fun acc _ => .up (acc.down + 1)) := by
rw [← fold_hom]
· simp only [Nat.zero_add]; rfl
· simp only [ULift.up.injEq]; omega
rw [fold_eq_match_step]
apply bind_congr; intro step
cases step.inflate using PlausibleIterStep.casesOn
· simp only [Nat.zero_add, bind_pure_comp]
rw [this 1]
· simp
· simp
@[simp]
theorem IterM.up_size_toArray_eq_count {α β : Type w} [Iterator α m β] [Finite α m]
[Monad m] [LawfulMonad m]
[IteratorCollect α m m] [LawfulIteratorCollect α m m]
[IteratorLoop α m m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} :
(.up <| ·.size) <$> it.toArray = it.count := by
rw [toArray_eq_fold, count_eq_fold, fold_hom]
· simp only [List.size_toArray, List.length_nil]; rfl
· simp
@[simp]
theorem IterM.up_length_toList_eq_count {α β : Type w} [Iterator α m β] [Finite α m]
[Monad m] [LawfulMonad m]
[IteratorCollect α m m] [LawfulIteratorCollect α m m]
[IteratorLoop α m m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} :
(.up <| ·.length) <$> it.toList = it.count := by
rw [toList_eq_fold, count_eq_fold, fold_hom]
· simp only [List.length_nil]; rfl
· simp
@[simp]
theorem IterM.up_length_toListRev_eq_count {α β : Type w} [Iterator α m β] [Finite α m]
[Monad m] [LawfulMonad m]
[IteratorCollect α m m] [LawfulIteratorCollect α m m]
[IteratorLoop α m m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} :
(.up <| ·.length) <$> it.toListRev = it.count := by
simp only [toListRev_eq, Functor.map_map, List.length_reverse, up_length_toList_eq_count]
theorem IterM.anyM_eq_forIn {α β : Type w} {m : Type w → Type w'} [Iterator α m β]
[Finite α m] [Monad m] [LawfulMonad m] [IteratorLoop α m m] [LawfulIteratorLoop α m m]
{it : IterM (α := α) m β} {p : β → m (ULift Bool)} :


@@ -0,0 +1,10 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Lemmas.Producers.Monadic
public import Init.Data.Iterators.Lemmas.Producers.List


@@ -7,8 +7,8 @@ module
prelude
public import Init.Data.Iterators.Lemmas.Consumers.Collect
public import Std.Data.Iterators.Producers.List
public import Std.Data.Iterators.Lemmas.Producers.Monadic.List
public import Init.Data.Iterators.Producers.List
public import Init.Data.Iterators.Lemmas.Producers.Monadic.List
@[expose] public section


@@ -0,0 +1,9 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Lemmas.Producers.Monadic.List


@@ -0,0 +1,71 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Lemmas.Consumers.Monadic
public import Init.Data.Iterators.Producers.Monadic.List
@[expose] public section
/-!
# Lemmas about list iterators
This module provides lemmas about the interactions of `List.iterM` with `IterM.step` and various
collectors.
-/
namespace Std.Iterators
open Std.Internal
variable {m : Type w → Type w'} {n : Type w → Type w''} [Monad m] {β : Type w}
@[simp]
theorem _root_.List.step_iterM_nil :
(([] : List β).iterM m).step = pure (.deflate .done, rfl) := by
simp only [IterM.step, Iterator.step]; rfl
@[simp]
theorem _root_.List.step_iterM_cons {x : β} {xs : List β} :
((x :: xs).iterM m).step = pure (.deflate .yield (xs.iterM m) x, rfl) := by
simp only [List.iterM, IterM.step, Iterator.step]; rfl
theorem _root_.List.step_iterM {l : List β} :
(l.iterM m).step = match l with
| [] => pure (.deflate .done, rfl)
| x :: xs => pure (.deflate .yield (xs.iterM m) x, rfl) := by
cases l <;> simp [List.step_iterM_cons, List.step_iterM_nil]
theorem ListIterator.toArrayMapped_iterM [Monad n] [LawfulMonad n]
{β : Type w} {γ : Type w} {lift : ∀ δ : Type w, m δ → n δ}
[LawfulMonadLiftFunction lift] {f : β → n γ} {l : List β} :
IteratorCollect.toArrayMapped lift f (l.iterM m) (m := m) = List.toArray <$> l.mapM f := by
rw [LawfulIteratorCollect.toArrayMapped_eq]
induction l with
| nil =>
rw [IterM.DefaultConsumers.toArrayMapped_eq_match_step]
simp [List.step_iterM_nil, LawfulMonadLiftFunction.lift_pure]
| cons x xs ih =>
rw [IterM.DefaultConsumers.toArrayMapped_eq_match_step]
simp [List.step_iterM_cons, List.mapM_cons, pure_bind, ih, LawfulMonadLiftFunction.lift_pure]
@[simp]
theorem _root_.List.toArray_iterM [LawfulMonad m] {l : List β} :
(l.iterM m).toArray = pure l.toArray := by
simp only [IterM.toArray, ListIterator.toArrayMapped_iterM]
rw [List.mapM_pure, map_pure, List.map_id']
@[simp]
theorem _root_.List.toList_iterM [LawfulMonad m] {l : List β} :
(l.iterM m).toList = pure l := by
rw [← IterM.toList_toArray, List.toArray_iterM, map_pure, List.toList_toArray]
@[simp]
theorem _root_.List.toListRev_iterM [LawfulMonad m] {l : List β} :
(l.iterM m).toListRev = pure l.reverse := by
simp [IterM.toListRev_eq, List.toList_iterM]
end Std.Iterators
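
The collector lemmas above make collection of concrete list iterators compute by `simp`. A minimal sketch of hypothetical usage (assuming the required `IteratorCollect`/`LawfulMonad` instances for `Id` are in scope, as the imports above suggest):

```lean
-- Collecting a list iterator in `Id` round-trips, closed by the
-- `@[simp]` lemma `List.toList_iterM` proved above.
example : (([1, 2, 3] : List Nat).iterM Id).toList = pure [1, 2, 3] := by
  simp
```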


@@ -0,0 +1,10 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Producers.Monadic
public import Init.Data.Iterators.Producers.List


@@ -6,7 +6,7 @@ Authors: Paul Reichert
module
prelude
public import Std.Data.Iterators.Producers.Monadic.List
public import Init.Data.Iterators.Producers.Monadic.List
@[expose] public section


@@ -0,0 +1,9 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Iterators.Producers.Monadic.List


@@ -87,12 +87,4 @@ instance {α : Type w} [Monad m] {n : Type x → Type x'} [Monad n] :
IteratorLoopPartial (ListIterator α) m n :=
.defaultImplementation
@[always_inline, inline]
instance {α : Type w} [Monad m] : IteratorSize (ListIterator α) m :=
.defaultImplementation
@[always_inline, inline]
instance {α : Type w} [Monad m] : IteratorSizePartial (ListIterator α) m :=
.defaultImplementation
end Std.Iterators


@@ -89,6 +89,12 @@ instance {x : γ} {State : Type w} {iter}
IteratorCollect (α := i.State) m n :=
inferInstanceAs <| IteratorCollect (α := State) m n
instance {x : γ} {State : Type w} {iter} [Monad m] [Monad n]
[Iterator (α := State) m β] [IteratorCollect State m n] [LawfulIteratorCollect State m n] :
letI i : ToIterator x m β := .ofM State iter
LawfulIteratorCollect (α := i.State) m n :=
inferInstanceAs <| LawfulIteratorCollect (α := State) m n
instance {x : γ} {State : Type w} {iter}
[Iterator (α := State) m β] [IteratorCollectPartial State m n] :
letI i : ToIterator x m β := .ofM State iter
@@ -101,22 +107,22 @@ instance {x : γ} {State : Type w} {iter}
IteratorLoop (α := i.State) m n :=
inferInstanceAs <| IteratorLoop (α := State) m n
instance {x : γ} {State : Type w} {iter} [Monad m] [Monad n]
[Iterator (α := State) m β] [IteratorLoop State m n] [LawfulIteratorLoop State m n] :
letI i : ToIterator x m β := .ofM State iter
LawfulIteratorLoop (α := i.State) m n :=
inferInstanceAs <| LawfulIteratorLoop (α := State) m n
instance {x : γ} {State : Type w} {iter}
[Iterator (α := State) m β] [IteratorLoopPartial State m n] :
letI i : ToIterator x m β := .ofM State iter
IteratorLoopPartial (α := i.State) m n :=
inferInstanceAs <| IteratorLoopPartial (α := State) m n
instance {x : γ} {State : Type w} {iter}
[Iterator (α := State) m β] [IteratorSize State m] :
letI i : ToIterator x m β := .ofM State iter
IteratorSize (α := i.State) m :=
inferInstanceAs <| IteratorSize (α := State) m
instance {x : γ} {State : Type w} {iter}
[Iterator (α := State) m β] [IteratorSizePartial State m] :
letI i : ToIterator x m β := .ofM State iter
IteratorSizePartial (α := i.State) m :=
inferInstanceAs <| IteratorSizePartial (α := State) m
@[simp]
theorem ToIterator.state_eq {x : γ} {State : Type w} {iter} :
haveI : ToIterator x Id β := .of State iter
ToIterator.State x Id = State :=
rfl
end Std.Iterators


@@ -622,11 +622,17 @@ instance : Std.LawfulIdentity (α := List α) (· ++ ·) [] where
| nil => simp
| cons _ as ih => simp [ih, Nat.succ_add]
@[simp, grind _=_] theorem append_assoc (as bs cs : List α) : (as ++ bs) ++ cs = as ++ (bs ++ cs) := by
@[simp] theorem append_assoc (as bs cs : List α) : (as ++ bs) ++ cs = as ++ (bs ++ cs) := by
induction as with
| nil => rfl
| cons a as ih => simp [ih]
grind_pattern append_assoc => (as ++ bs) ++ cs where
as =/= []; bs =/= []; cs =/= []
grind_pattern append_assoc => as ++ (bs ++ cs) where
as =/= []; bs =/= []; cs =/= []
instance : Std.Associative (α := List α) (· ++ ·) := append_assoc
-- Arguments are explicit as there is often ambiguity inferring the arguments.
@@ -2086,6 +2092,18 @@ def min? [Min α] : List α → Option α
| [] => none
| a::as => some <| as.foldl min a
/-! ### min -/
/--
Returns the smallest element of a non-empty list.
Examples:
* `[4].min (by decide) = 4`
* `[1, 4, 2, 10, 6].min (by decide) = 1`
-/
protected def min [Min α] : (l : List α) → (h : l ≠ []) → α
| a::as, _ => as.foldl min a
/-! ### max? -/
/--
@@ -2100,6 +2118,18 @@ def max? [Max α] : List α → Option α
| [] => none
| a::as => some <| as.foldl max a
/-! ### max -/
/--
Returns the largest element of a non-empty list.
Examples:
* `[4].max (by decide) = 4`
* `[1, 4, 2, 10, 6].max (by decide) = 10`
-/
protected def max [Max α] : (l : List α) → (h : l ≠ []) → α
| a::as, _ => as.foldl max a
/-! ## Other list operations
The functions are currently mostly used in meta code,


@@ -272,8 +272,8 @@ theorem sizeOf_get [SizeOf α] (as : List α) (i : Fin as.length) : sizeOf (as.g
apply Nat.lt_trans ih
simp +arith
theorem not_lex_antisymm [DecidableEq α] {r : α → α → Prop} [DecidableRel r]
(antisymm : ∀ x y : α, ¬ r x y → ¬ r y x → x = y)
theorem lex_trichotomous [DecidableEq α] {r : α → α → Prop} [DecidableRel r]
(trichotomous : ∀ x y : α, ¬ r x y → ¬ r y x → x = y)
{as bs : List α} (h₁ : ¬ Lex r bs as) (h₂ : ¬ Lex r as bs) : as = bs :=
match as, bs with
| [], [] => rfl
@@ -286,20 +286,26 @@ theorem not_lex_antisymm [DecidableEq α] {r : α → α → Prop} [DecidableRel
· subst eq
have h₁ : ¬ Lex r bs as := fun h => h₁ (List.Lex.cons h)
have h₂ : ¬ Lex r as bs := fun h => h₂ (List.Lex.cons h)
simp [not_lex_antisymm antisymm h₁ h₂]
simp [lex_trichotomous trichotomous h₁ h₂]
· exfalso
by_cases hba : r b a
· exact h₁ (Lex.rel hba)
· exact eq (antisymm _ _ hab hba)
· exact eq (trichotomous _ _ hab hba)
@[deprecated lex_trichotomous (since := "2025-10-27")]
theorem not_lex_antisymm [DecidableEq α] {r : α → α → Prop} [DecidableRel r]
(antisymm : ∀ x y : α, ¬ r x y → ¬ r y x → x = y)
{as bs : List α} (h₁ : ¬ Lex r bs as) (h₂ : ¬ Lex r as bs) : as = bs :=
lex_trichotomous antisymm h₁ h₂
protected theorem le_antisymm [LT α]
[i : Std.Antisymm (¬ · < · : α → α → Prop)]
[i : Std.Trichotomous (· < · : α → α → Prop)]
{as bs : List α} (h₁ : as ≤ bs) (h₂ : bs ≤ as) : as = bs :=
open Classical in
not_lex_antisymm i.antisymm h₁ h₂
lex_trichotomous i.trichotomous h₁ h₂
instance [LT α]
[s : Std.Antisymm (¬ · < · : α → α → Prop)] :
[s : Std.Trichotomous (· < · : α → α → Prop)] :
Std.Antisymm (· ≤ · : List α → List α → Prop) where
antisymm _ _ h₁ h₂ := List.le_antisymm h₁ h₂


@@ -291,9 +291,11 @@ theorem eraseP_comm {l : List α} (h : ∀ a ∈ l, ¬ p a ∨ ¬ q a) :
· simp [h₁, h₂]
· simp [h₁, h₂, ih (fun b m => h b (mem_cons_of_mem _ m))]
@[grind ]
theorem head_eraseP_mem {xs : List α} {p : α → Bool} (h) : (xs.eraseP p).head h ∈ xs :=
eraseP_sublist.head_mem h
@[grind ]
theorem getLast_eraseP_mem {xs : List α} {p : α → Bool} (h) : (xs.eraseP p).getLast h ∈ xs :=
eraseP_sublist.getLast_mem h


@@ -60,6 +60,10 @@ theorem finRange_reverse {n} : (finRange n).reverse = (finRange n).map Fin.rev :
congr 2; funext
simp [Fin.rev_succ]
@[simp, grind ]
theorem mem_finRange {n} (x : Fin n) : x ∈ finRange n := by
simp [finRange]
end List
namespace Fin


@@ -659,6 +659,18 @@ theorem findIdx_eq {p : α → Bool} {xs : List α} {i : Nat} (h : i < xs.length
simp at h3
simp_all [not_of_lt_findIdx h3]
@[simp]
theorem lt_findIdx_iff (xs : List α) (p : α → Bool) (i : Nat) :
i < xs.findIdx p ↔ ∃ h : i < xs.length, ∀ j, (hj : j ≤ i) → p xs[j] = false :=
⟨fun h => ⟨by have := findIdx_le_length (xs := xs) (p := p); omega,
fun j hj => by apply not_of_lt_findIdx; omega⟩,
fun ⟨h, w⟩ => by apply lt_findIdx_of_not h; simpa using w⟩
@[simp, grind =]
theorem findIdx_map (xs : List α) (f : α → β) (p : β → Bool) :
(xs.map f).findIdx p = xs.findIdx (p ∘ f) := by
induction xs <;> simp_all [findIdx_cons]
@[grind =]
theorem findIdx_append {p : α → Bool} {l₁ l₂ : List α} :
(l₁ ++ l₂).findIdx p =


@@ -38,7 +38,7 @@ The following operations are still missing `@[csimp]` replacements:
The following operations are not recursive to begin with
(or are defined in terms of recursive primitives):
`isEmpty`, `isSuffixOf`, `isSuffixOf?`, `rotateLeft`, `rotateRight`, `insert`, `zip`, `enum`,
`min?`, `max?`, and `removeAll`.
`min?`, `max?`, `min`, `max`, and `removeAll`.
The following operations were already given `@[csimp]` replacements in `Init/Data/List/Basic.lean`:
`length`, `map`, `filter`, `replicate`, `leftPad`, `unzip`, `range'`, `iota`, `intersperse`.


@@ -60,7 +60,7 @@ See also
* `Init.Data.List.Erase` for lemmas about `List.eraseP` and `List.erase`.
* `Init.Data.List.Find` for lemmas about `List.find?`, `List.findSome?`, `List.findIdx`,
`List.findIdx?`, and `List.indexOf`
* `Init.Data.List.MinMax` for lemmas about `List.min?` and `List.max?`.
* `Init.Data.List.MinMax` for lemmas about `List.min?`, `List.min`, `List.max?`, and `List.max`.
* `Init.Data.List.Pairwise` for lemmas about `List.Pairwise` and `List.Nodup`.
* `Init.Data.List.Sublist` for lemmas about `List.Subset`, `List.Sublist`, `List.IsPrefix`,
`List.IsSuffix`, and `List.IsInfix`.
@@ -298,6 +298,12 @@ theorem ext_getElem {l₁ l₂ : List α} (hl : length l₁ = length l₂)
have h₁ := Nat.le_of_not_lt h₁
rw [getElem?_eq_none h₁, getElem?_eq_none]; rwa [ hl]
theorem ext_getElem_iff {l₁ l₂ : List α} :
l₁ = l₂ ↔ l₁.length = l₂.length ∧ ∀ (i : Nat) (h₁ : i < l₁.length) (h₂ : i < l₂.length), l₁[i]'h₁ = l₂[i]'h₂ := by
constructor
· simp +contextual
· exact fun h => ext_getElem h.1 h.2
@[simp] theorem getElem_concat_length {l : List α} {a : α} {i : Nat} (h : i = l.length) (w) :
(l ++ [a])[i]'w = a := by
subst h; simp
@@ -1217,9 +1223,13 @@ theorem tailD_map {f : α → β} {l l' : List α} :
theorem getLastD_map {f : α β} {l : List α} {a : α} : (map f l).getLastD (f a) = f (l.getLastD a) := by
simp
@[simp, grind _=_] theorem map_map {g : β → γ} {f : α → β} {l : List α} :
@[simp] theorem map_map {g : β → γ} {f : α → β} {l : List α} :
map g (map f l) = map (g ∘ f) l := by induction l <;> simp_all
grind_pattern map_map => map g (map f l) where
g =/= List.reverse
f =/= List.reverse
/-! ### filter -/
@[simp] theorem filter_cons_of_pos {p : α → Bool} {a : α} {l} (pa : p a) :
@@ -1428,13 +1438,16 @@ theorem filterMap_eq_filter {p : α → Bool} :
| nil => rfl
| cons a l IH => by_cases pa : p a <;> simp [Option.guard, pa, IH]
@[grind =]
theorem filterMap_filterMap {f : α → Option β} {g : β → Option γ} {l : List α} :
filterMap g (filterMap f l) = filterMap (fun x => (f x).bind g) l := by
induction l with
| nil => rfl
| cons a l IH => cases h : f a <;> simp [filterMap_cons, *]
grind_pattern filterMap_filterMap => filterMap g (filterMap f l) where
f =/= some
g =/= some
@[grind =]
theorem map_filterMap {f : α → Option β} {g : β → γ} {l : List α} :
map g (filterMap f l) = filterMap (fun x => (f x).map g) l := by
@@ -2456,16 +2469,28 @@ theorem getLast_of_mem_getLast? {l : List α} (hx : x ∈ l.getLast?) :
simp only [reverse_cons, filterMap_append, filterMap_cons, ih]
split <;> simp_all
@[simp, grind _=_] theorem reverse_append {as bs : List α} : (as ++ bs).reverse = bs.reverse ++ as.reverse := by
@[simp] theorem reverse_append {as bs : List α} : (as ++ bs).reverse = bs.reverse ++ as.reverse := by
induction as <;> simp_all
grind_pattern reverse_append => (as ++ bs).reverse where
as =/= []
bs =/= []
grind_pattern reverse_append => bs.reverse ++ as.reverse where
as =/= []
bs =/= []
@[simp] theorem reverse_eq_append_iff {xs ys zs : List α} :
xs.reverse = ys ++ zs ↔ xs = zs.reverse ++ ys.reverse := by
rw [reverse_eq_iff, reverse_append]
@[grind _=_] theorem reverse_concat {l : List α} {a : α} : (l ++ [a]).reverse = a :: l.reverse := by
theorem reverse_concat {l : List α} {a : α} : (l ++ [a]).reverse = a :: l.reverse := by
rw [reverse_append]; rfl
grind_pattern reverse_concat => (l ++ [a]).reverse where
l =/= []
grind_pattern reverse_concat => a :: l.reverse where
l =/= []
theorem reverse_eq_concat {xs ys : List α} {a : α} :
xs.reverse = ys ++ [a] ↔ xs = a :: ys.reverse := by
rw [reverse_eq_iff, reverse_concat]
@@ -2483,9 +2508,15 @@ theorem flatten_reverse {L : List (List α)} :
@[grind =] theorem reverse_flatMap {β} {l : List α} {f : α → List β} : (l.flatMap f).reverse = l.reverse.flatMap (reverse ∘ f) := by
induction l <;> simp_all
@[grind =] theorem flatMap_reverse {β} {l : List α} {f : α → List β} : (l.reverse.flatMap f) = (l.flatMap (reverse ∘ f)).reverse := by
grind_pattern reverse_flatMap => (l.flatMap f).reverse where
f =/= List.reverse _
theorem flatMap_reverse {β} {l : List α} {f : α → List β} : l.reverse.flatMap f = (l.flatMap (reverse ∘ f)).reverse := by
induction l <;> simp_all
grind_pattern flatMap_reverse => l.reverse.flatMap f where
f =/= List.reverse _
@[simp] theorem reverseAux_eq {as bs : List α} : reverseAux as bs = reverse as ++ bs :=
reverseAux_eq_append ..
@@ -2632,6 +2663,22 @@ theorem foldr_map_hom {g : α → β} {f : α → α → α} {f' : β → β →
@[simp, grind _=_] theorem foldr_append {f : α → β → β} {b : β} {l l' : List α} :
(l ++ l').foldr f b = l.foldr f (l'.foldr f b) := by simp [foldr_eq_foldrM, -foldrM_pure]
theorem foldl_flatMap {f : α → List β} {g : γ → β → γ} {l : List α} {init : γ} :
(l.flatMap f).foldl g init = l.foldl (fun acc x => (f x).foldl g acc) init := by
induction l generalizing init
· simp
next a l ih =>
simp only [flatMap_cons, foldl_cons]
rw [foldl_append, ih]
theorem foldr_flatMap {f : α → List β} {g : β → γ → γ} {l : List α} {init : γ} :
(l.flatMap f).foldr g init = l.foldr (fun x acc => (f x).foldr g acc) init := by
induction l generalizing init
· simp
next a l ih =>
simp only [flatMap_cons, foldr_cons]
rw [foldr_append, ih]
@[grind =] theorem foldl_flatten {f : β → α → β} {b : β} {L : List (List α)} :
(flatten L).foldl f b = L.foldl (fun b l => l.foldl f b) b := by
induction L generalizing b <;> simp_all


@@ -98,7 +98,7 @@ theorem not_cons_lex_cons_iff [DecidableEq α] [DecidableRel r] {a b} {l₁ l₂
theorem cons_le_cons_iff [LT α]
[i₁ : Std.Asymm (· < · : α → α → Prop)]
[i₂ : Std.Antisymm (¬ · < · : α → α → Prop)]
[i₂ : Std.Trichotomous (· < · : α → α → Prop)]
{a b} {l₁ l₂ : List α} :
(a :: l₁) ≤ (b :: l₂) ↔ a < b ∨ a = b ∧ l₁ ≤ l₂ := by
dsimp only [instLE, instLT, List.le, List.lt]
@@ -110,12 +110,12 @@ theorem cons_le_cons_iff [LT α]
apply Decidable.byContradiction
intro h₃
apply h₂
exact i₂.antisymm _ _ h₁ h₃
exact i₂.trichotomous _ _ h₁ h₃
· if h₃ : a < b then
exact .inl h₃
else
right
exact ⟨i₂.antisymm _ _ h₃ h₁, h₂⟩
exact ⟨i₂.trichotomous _ _ h₃ h₁, h₂⟩
· rintro (h | ⟨h₁, h₂⟩)
· left
exact ⟨i₁.asymm _ _ h, fun w => Irrefl.irrefl _ (w ▸ h)⟩
@@ -124,7 +124,7 @@ theorem cons_le_cons_iff [LT α]
theorem not_lt_of_cons_le_cons [LT α]
[i₁ : Std.Asymm (· < · : α → α → Prop)]
[i₂ : Std.Antisymm (¬ · < · : α → α → Prop)]
[i₂ : Std.Trichotomous (· < · : α → α → Prop)]
{a b : α} {l₁ l₂ : List α} (h : a :: l₁ ≤ b :: l₂) : ¬ b < a := by
rw [cons_le_cons_iff] at h
rcases h with h | ⟨rfl, h⟩
@@ -138,7 +138,7 @@ theorem left_le_left_of_cons_le_cons [LT α] [LE α] [IsLinearOrder α]
theorem le_of_cons_le_cons [LT α]
[i₀ : Std.Irrefl (· < · : α → α → Prop)]
[i₁ : Std.Asymm (· < · : α → α → Prop)]
[i₂ : Std.Antisymm (¬ · < · : α → α → Prop)]
[i₂ : Std.Trichotomous (· < · : α → α → Prop)]
{a} {l₁ l₂ : List α} (h : a :: l₁ ≤ a :: l₂) : l₁ ≤ l₂ := by
rw [cons_le_cons_iff] at h
rcases h with h | ⟨_, h⟩
@@ -212,7 +212,7 @@ protected theorem lt_of_le_of_lt [LT α] [LE α] [IsLinearOrder α] [LawfulOrder
@[deprecated List.lt_of_le_of_lt (since := "2025-08-01")]
protected theorem lt_of_le_of_lt' [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Trans (¬ · < · : α → α → Prop) (¬ · < ·) (¬ · < ·)]
{l₁ l₂ l₃ : List α} (h₁ : l₁ ≤ l₂) (h₂ : l₂ < l₃) : l₁ < l₃ :=
letI : LE α := .ofLT α
@@ -226,7 +226,7 @@ protected theorem le_trans [LT α] [LE α] [IsLinearOrder α] [LawfulOrderLT α]
@[deprecated List.le_trans (since := "2025-08-01")]
protected theorem le_trans' [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Trans (¬ · < · : α → α → Prop) (¬ · < ·) (¬ · < ·)]
{l₁ l₂ l₃ : List α} (h₁ : l₁ ≤ l₂) (h₂ : l₂ ≤ l₃) : l₁ ≤ l₃ :=
letI := LE.ofLT α
@@ -298,7 +298,7 @@ protected theorem le_of_lt [LT α]
protected theorem le_iff_lt_or_eq [LT α]
[Std.Irrefl (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Std.Asymm (· < · : α → α → Prop)]
{l₁ l₂ : List α} : l₁ ≤ l₂ ↔ l₁ < l₂ ∨ l₁ = l₂ := by
constructor
@@ -484,7 +484,7 @@ protected theorem lt_iff_exists [LT α] {l₁ l₂ : List α} :
protected theorem le_iff_exists [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)] {l₁ l₂ : List α} :
[Std.Trichotomous (· < · : α → α → Prop)] {l₁ l₂ : List α} :
l₁ ≤ l₂ ↔
(l₁ = l₂.take l₁.length) ∨
(∃ (i : Nat) (h₁ : i < l₁.length) (h₂ : i < l₂.length),
@@ -497,7 +497,7 @@ protected theorem le_iff_exists [LT α]
conv => lhs; simp +singlePass [exists_comm]
· simpa using Std.Irrefl.irrefl
· simpa using Std.Asymm.asymm
· simpa using Std.Antisymm.antisymm
· simpa using Std.Trichotomous.trichotomous
theorem append_left_lt [LT α] {l₁ l₂ l₃ : List α} (h : l₂ < l₃) :
l₁ ++ l₂ < l₁ ++ l₃ := by
@@ -507,7 +507,7 @@ theorem append_left_lt [LT α] {l₁ l₂ l₃ : List α} (h : l₂ < l₃) :
theorem append_left_le [LT α]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
{l₁ l₂ l₃ : List α} (h : l₂ ≤ l₃) :
l₁ ++ l₂ ≤ l₁ ++ l₃ := by
induction l₁ with
@@ -540,9 +540,9 @@ protected theorem map_lt [LT α] [LT β]
protected theorem map_le [LT α] [LT β]
[Std.Asymm (· < · : α → α → Prop)]
[Std.Antisymm (¬ · < · : α → α → Prop)]
[Std.Trichotomous (· < · : α → α → Prop)]
[Std.Asymm (· < · : β → β → Prop)]
[Std.Antisymm (¬ · < · : β → β → Prop)]
[Std.Trichotomous (· < · : β → β → Prop)]
{l₁ l₂ : List α} {f : α → β} (w : ∀ x y, x < y → f x < f y) (h : l₁ ≤ l₂) :
map f l₁ ≤ map f l₂ := by
rw [List.le_iff_exists] at h


@@ -46,6 +46,9 @@ theorem isSome_min?_of_mem {l : List α} [Min α] {a : α} (h : a ∈ l) :
l.min?.isSome := by
cases l <;> simp_all [min?_cons']
theorem isSome_min?_of_ne_nil [Min α] : ∀ {l : List α} (hl : l ≠ []), l.min?.isSome
| x::xs, h => by simp [min?_cons']
theorem min?_eq_head? {α : Type u} [Min α] {l : List α}
(h : l.Pairwise (fun a b => min a b = a)) : l.min? = l.head? := by
cases l with
@@ -155,6 +158,48 @@ theorem foldl_min [Min α] [Std.IdempotentOp (min : α → α → α)] [Std.Asso
{l : List α} {a : α} : l.foldl (init := a) min = min a (l.min?.getD a) := by
cases l <;> simp [min?, foldl_assoc, Std.IdempotentOp.idempotent]
/-! ### min -/
theorem min?_eq_some_min [Min α] : ∀ {l : List α} (hl : l ≠ []),
l.min? = some (l.min hl)
| a::as, _ => by simp [List.min, List.min?_cons']
theorem min_eq_get_min? [Min α] : ∀ (l : List α) (hl : l ≠ []),
l.min hl = l.min?.get (isSome_min?_of_ne_nil hl)
| a::as, _ => by simp [List.min, List.min?_cons']
theorem min_eq_head {α : Type u} [Min α] {l : List α} (hl : l ≠ [])
(h : l.Pairwise (fun a b => min a b = a)) : l.min hl = l.head hl := by
apply Option.some.inj
rw [← min?_eq_some_min, ← head?_eq_some_head]
exact min?_eq_head? h
theorem min_mem [Min α] [MinEqOr α] {l : List α} (hl : l ≠ []) : l.min hl ∈ l :=
min?_mem (min?_eq_some_min hl)
theorem min_le_of_mem [Min α] [LE α] [Std.IsLinearOrder α] [Std.LawfulOrderMin α]
{l : List α} {a : α} (ha : a ∈ l) :
l.min (ne_nil_of_mem ha) ≤ a :=
(min?_eq_some_iff.mp (min?_eq_some_min (List.ne_nil_of_mem ha))).right a ha
protected theorem le_min_iff [Min α] [LE α] [LawfulOrderInf α]
{l : List α} (hl : l ≠ []) : ∀ {x}, x ≤ l.min hl ↔ ∀ b, b ∈ l → x ≤ b :=
le_min?_iff (min?_eq_some_min hl)
theorem min_eq_iff [Min α] [LE α] {l : List α} [IsLinearOrder α] [LawfulOrderMin α] (hl : l ≠ []) :
l.min hl = a ↔ a ∈ l ∧ ∀ b, b ∈ l → a ≤ b := by
simpa [min?_eq_some_min hl] using (min?_eq_some_iff (xs := l))
@[simp] theorem min_replicate [Min α] [MinEqOr α] {n : Nat} {a : α} (h : replicate n a ≠ []) :
(replicate n a).min h = a := by
have n_pos : 0 < n := Nat.pos_of_ne_zero (fun hn => by simp [hn] at h)
simpa [min?_eq_some_min h] using (min?_replicate_of_pos (a := a) n_pos)
theorem foldl_min_eq_min [Min α] [Std.IdempotentOp (min : α → α → α)] [Std.Associative (min : α → α → α)]
{l : List α} (hl : l ≠ []) {a : α} :
l.foldl min a = min a (l.min hl) := by
simpa [min?_eq_some_min hl] using foldl_min (l := l)
/-! ### max? -/
@[simp] theorem max?_nil [Max α] : ([] : List α).max? = none := rfl
@@ -174,6 +219,9 @@ theorem isSome_max?_of_mem {l : List α} [Max α] {a : α} (h : a ∈ l) :
l.max?.isSome := by
cases l <;> simp_all [max?_cons']
theorem isSome_max?_of_ne_nil [Max α] : ∀ {l : List α} (hl : l ≠ []), l.max?.isSome
| x::xs, h => by simp [max?_cons']
theorem max?_eq_head? {α : Type u} [Max α] {l : List α}
(h : l.Pairwise (fun a b => max a b = a)) : l.max? = l.head? := by
cases l with
@@ -296,4 +344,46 @@ theorem foldl_max [Max α] [Std.IdempotentOp (max : α → α → α)] [Std.Asso
{l : List α} {a : α} : l.foldl (init := a) max = max a (l.max?.getD a) := by
cases l <;> simp [max?, foldl_assoc, Std.IdempotentOp.idempotent]
/-! ### max -/
theorem max?_eq_some_max [Max α] : ∀ {l : List α} (hl : l ≠ []),
l.max? = some (l.max hl)
| a::as, _ => by simp [List.max, List.max?_cons']
theorem max_eq_get_max? [Max α] : ∀ (l : List α) (hl : l ≠ []),
l.max hl = l.max?.get (isSome_max?_of_ne_nil hl)
| a::as, _ => by simp [List.max, List.max?_cons']
theorem max_eq_head {α : Type u} [Max α] {l : List α} (hl : l ≠ [])
(h : l.Pairwise (fun a b => max a b = a)) : l.max hl = l.head hl := by
apply Option.some.inj
rw [← max?_eq_some_max, ← head?_eq_some_head]
exact max?_eq_head? h
theorem max_mem [Max α] [MaxEqOr α] {l : List α} (hl : l ≠ []) : l.max hl ∈ l :=
max?_mem (max?_eq_some_max hl)
protected theorem max_le_iff [Max α] [LE α] [LawfulOrderSup α]
{l : List α} (hl : l ≠ []) : ∀ {x}, l.max hl ≤ x ↔ ∀ b, b ∈ l → b ≤ x :=
max?_le_iff (max?_eq_some_max hl)
theorem max_eq_iff [Max α] [LE α] {l : List α} [IsLinearOrder α] [LawfulOrderMax α] (hl : l ≠ []) :
l.max hl = a ↔ a ∈ l ∧ ∀ b, b ∈ l → b ≤ a := by
simpa [max?_eq_some_max hl] using (max?_eq_some_iff (xs := l))
theorem le_max_of_mem [Max α] [LE α] [Std.IsLinearOrder α] [Std.LawfulOrderMax α]
{l : List α} {a : α} (ha : a ∈ l) :
a ≤ l.max (List.ne_nil_of_mem ha) :=
(max?_eq_some_iff.mp (max?_eq_some_max (List.ne_nil_of_mem ha))).right a ha
@[simp] theorem max_replicate [Max α] [MaxEqOr α] {n : Nat} {a : α} (h : replicate n a ≠ []) :
(replicate n a).max h = a := by
have n_pos : 0 < n := Nat.pos_of_ne_zero (fun hn => by simp [hn] at h)
simpa [max?_eq_some_max h] using (max?_replicate_of_pos (a := a) n_pos)
theorem foldl_max_eq_max [Max α] [Std.IdempotentOp (max : α → α → α)] [Std.Associative (max : α → α → α)]
{l : List α} (hl : l ≠ []) {a : α} :
l.foldl max a = max a (l.max hl) := by
simpa [max?_eq_some_max hl] using foldl_max (l := l)
end List


@@ -68,10 +68,13 @@ theorem getElem?_modifyHead {l : List α} {f : α → α} {i} :
@[simp, grind =] theorem tail_modifyHead {f : α → α} {l : List α} :
(l.modifyHead f).tail = l.tail := by cases l <;> simp
@[simp, grind =] theorem take_modifyHead {f : α → α} {l : List α} {i} :
@[simp] theorem take_modifyHead {f : α → α} {l : List α} {i} :
(l.modifyHead f).take i = (l.take i).modifyHead f := by
cases l <;> cases i <;> simp
grind_pattern take_modifyHead => (l.modifyHead f).take i where
i =/= 0
@[simp] theorem drop_modifyHead_of_pos {f : α → α} {l : List α} {i} (h : 0 < i) :
(l.modifyHead f).drop i = l.drop i := by
cases l <;> cases i <;> simp_all
@@ -103,7 +106,9 @@ theorem eraseIdx_eq_modifyTailIdx : ∀ i (l : List α), eraseIdx l i = l.modify
| _+1, [] => rfl
| _+1, _ :: _ => congrArg (cons _) (eraseIdx_eq_modifyTailIdx _ _)
@[simp, grind =] theorem length_modifyTailIdx (f : List α → List α) (H : ∀ l, (f l).length = l.length) :
-- This is not suitable as a `@[grind =]` lemma:
-- as soon as it is instantiated the hypothesis `H` causes an infinite chain of instantiations.
@[simp] theorem length_modifyTailIdx (f : List α → List α) (H : ∀ l, (f l).length = l.length) :
∀ (l : List α) i, (l.modifyTailIdx i f).length = l.length
| _, 0 => H _
| [], _+1 => rfl
@@ -213,7 +218,6 @@ theorem modify_eq_self {f : α → α} {i} {l : List α} (h : l.length ≤ i) :
intro h
omega
@[grind =]
theorem modify_modify_eq (f g : α → α) (i) (l : List α) :
(l.modify i f).modify i g = l.modify i (g f) := by
apply ext_getElem
@@ -222,6 +226,9 @@ theorem modify_modify_eq (f g : α → α) (i) (l : List α) :
simp only [getElem_modify, Function.comp_apply]
split <;> simp
grind_pattern modify_modify_eq => (l.modify i f).modify i g where
l =/= []
theorem modify_modify_ne (f g : α → α) {i j} (l : List α) (h : i ≠ j) :
(l.modify i f).modify j g = (l.modify j g).modify i f := by
apply ext_getElem


@@ -374,6 +374,22 @@ theorem drop_take : ∀ {i j : Nat} {l : List α}, drop i (take j l) = take (j -
rw [drop_take]
simp
@[simp]
theorem drop_eq_drop_iff :
∀ {l : List α} {i j : Nat}, l.drop i = l.drop j ↔ min i l.length = min j l.length
| [], i, j => by simp
| _ :: xs, 0, 0 => by simp
| x :: xs, i + 1, 0 => by
rw [List.ext_getElem_iff]
simp [succ_min_succ, show ¬ xs.length - i = xs.length + 1 by omega]
| x :: xs, 0, j + 1 => by
rw [List.ext_getElem_iff]
simp [succ_min_succ, show ¬ xs.length + 1 = xs.length - j by omega]
| x :: xs, i + 1, j + 1 => by simp [succ_min_succ, drop_eq_drop_iff]
theorem drop_eq_drop_min {l : List α} {i : Nat} : l.drop i = l.drop (min i l.length) := by
simp
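
As a quick sanity check of `drop_eq_drop_iff` (a hypothetical example, not part of the patch): two drop counts that both clamp to the list's length produce the same suffix, here the empty one, and the corresponding minima agree:

```lean
-- Both drops exhaust the three-element list, and min 5 3 = min 3 3 = 3.
example : ([1, 2, 3] : List Nat).drop 5 = [1, 2, 3].drop 3 := by decide
example : min 5 3 = min 3 3 := by decide
```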
theorem take_reverse {α} {xs : List α} {i : Nat} :
xs.reverse.take i = (xs.drop (xs.length - i)).reverse := by
by_cases h : i ≤ xs.length


@@ -98,14 +98,18 @@ theorem eq_nil_of_subset_nil {l : List α} : l ⊆ [] → l = [] := subset_nil.m
theorem map_subset {l₁ l₂ : List α} (f : α → β) (h : l₁ ⊆ l₂) : map f l₁ ⊆ map f l₂ :=
fun x => by simp only [mem_map]; exact .imp fun a => .imp_left (@h _)
grind_pattern map_subset => l₁ ⊆ l₂, map f l₁
grind_pattern map_subset => l₁ ⊆ l₂, map f l₂
grind_pattern map_subset => l₁ ⊆ l₂, map f l₁ where
l₂ =/= List.map _ _
grind_pattern map_subset => l₁ ⊆ l₂, map f l₂ where
l₁ =/= List.map _ _
theorem filter_subset {l₁ l₂ : List α} (p : α → Bool) (H : l₁ ⊆ l₂) : filter p l₁ ⊆ filter p l₂ :=
fun x => by simp_all [mem_filter, subset_def.1 H]
grind_pattern filter_subset => l₁ ⊆ l₂, filter p l₁
grind_pattern filter_subset => l₁ ⊆ l₂, filter p l₂
grind_pattern filter_subset => l₁ ⊆ l₂, filter p l₁ where
l₂ =/= List.filter _ _
grind_pattern filter_subset => l₁ ⊆ l₂, filter p l₂ where
l₁ =/= List.filter _ _
theorem filterMap_subset {l₁ l₂ : List α} (f : α → Option β) (H : l₁ ⊆ l₂) :
filterMap f l₁ ⊆ filterMap f l₂ := by
@@ -114,8 +118,10 @@ theorem filterMap_subset {l₁ l₂ : List α} (f : α → Option β) (H : l₁
rintro ⟨a, h, w⟩
exact ⟨a, H h, w⟩
grind_pattern filterMap_subset => l₁ ⊆ l₂, filterMap f l₁
grind_pattern filterMap_subset => l₁ ⊆ l₂, filterMap f l₂
grind_pattern filterMap_subset => l₁ ⊆ l₂, filterMap f l₁ where
l₂ =/= List.filterMap _ _
grind_pattern filterMap_subset => l₁ ⊆ l₂, filterMap f l₂ where
l₁ =/= List.filterMap _ _
theorem subset_append_left (l₁ l₂ : List α) : l₁ ⊆ l₁ ++ l₂ := fun _ => mem_append_left _
@@ -206,13 +212,11 @@ theorem Sublist.head_mem (s : ys <+ xs) (h) : ys.head h ∈ xs :=
s.mem (List.head_mem h)
grind_pattern Sublist.head_mem => ys <+ xs, ys.head h
grind_pattern Sublist.head_mem => ys.head h ∈ xs -- This is somewhat aggressive, as it initiates sublist based reasoning.
theorem Sublist.getLast_mem (s : ys <+ xs) (h) : ys.getLast h ∈ xs :=
s.mem (List.getLast_mem h)
grind_pattern Sublist.getLast_mem => ys <+ xs, ys.getLast h
grind_pattern Sublist.getLast_mem => ys.getLast h ∈ xs -- This is somewhat aggressive, as it initiates sublist based reasoning.
instance : Trans (@Sublist α) Subset Subset :=
⟨fun h₁ h₂ => trans h₁.subset h₂⟩
@@ -282,20 +286,28 @@ protected theorem Sublist.map (f : α → β) {l₁ l₂} (s : l₁ <+ l₂) : m
grind_pattern Sublist.map => l₁ <+ l₂, map f l₁
grind_pattern Sublist.map => l₁ <+ l₂, map f l₂
@[grind ]
protected theorem Sublist.filterMap (f : α → Option β) (s : l₁ <+ l₂) :
filterMap f l₁ <+ filterMap f l₂ := by
induction s <;> simp [filterMap_cons] <;> split <;> simp [*, cons]
grind_pattern Sublist.filterMap => l₁ <+ l₂, filterMap f l₁
grind_pattern Sublist.filterMap => l₁ <+ l₂, filterMap f l₂
grind_pattern Sublist.filterMap => filterMap f l₁ <+ filterMap f l₂ where
l₁ =/= List.filterMap _ _
l₂ =/= List.filterMap _ _
grind_pattern Sublist.filterMap => l₁ <+ l₂, filterMap f l₁ where
l₂ =/= List.filterMap _ _
grind_pattern Sublist.filterMap => l₁ <+ l₂, filterMap f l₂ where
l₁ =/= List.filterMap _ _
@[grind ]
protected theorem Sublist.filter (p : α → Bool) {l₁ l₂} (s : l₁ <+ l₂) : filter p l₁ <+ filter p l₂ := by
rw [← filterMap_eq_filter]; apply s.filterMap
grind_pattern Sublist.filter => l₁ <+ l₂, l₁.filter p
grind_pattern Sublist.filter => l₁ <+ l₂, l₂.filter p
grind_pattern Sublist.filter => filter p l₁ <+ filter p l₂ where
l₁ =/= List.filter _ _
l₂ =/= List.filter _ _
grind_pattern Sublist.filter => l₁ <+ l₂, l₁.filter p where
l₂ =/= List.filter _ _
grind_pattern Sublist.filter => l₁ <+ l₂, l₂.filter p where
l₁ =/= List.filter _ _
theorem head_filter_mem (xs : List α) (p : α → Bool) (h) : (xs.filter p).head h ∈ xs :=
filter_sublist.head_mem h
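The lemma above can be exercised directly; a hedged sketch (assuming the declarations are in scope under the `List` namespace as in this file):

```lean
-- Hedged sketch: the nonemptiness hypothesis required by `List.head`
-- is exactly the side condition `h` of `head_filter_mem`.
example (xs : List Nat) (h : xs.filter (· % 2 == 0) ≠ []) :
    (xs.filter (· % 2 == 0)).head h ∈ xs :=
  List.head_filter_mem xs _ h
```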

View File

@@ -473,8 +473,13 @@ protected theorem eq_iff_le_and_ge : ∀{a b : Nat}, a = b ↔ a ≤ b ∧ b ≤
instance : Std.Antisymm (· ≤ · : Nat → Nat → Prop) where
antisymm _ _ h₁ h₂ := Nat.le_antisymm h₁ h₂
instance : Std.Antisymm (¬ · < · : Nat → Nat → Prop) where
antisymm _ _ h₁ h₂ := Nat.le_antisymm (Nat.ge_of_not_lt h₂) (Nat.ge_of_not_lt h₁)
instance : Std.Trichotomous (· < · : Nat → Nat → Prop) where
trichotomous _ _ h₁ h₂ := Nat.le_antisymm (Nat.ge_of_not_lt h₂) (Nat.ge_of_not_lt h₁)
set_option linter.missingDocs false in
@[deprecated Nat.instTrichotomousLt (since := "2025-10-27")]
def Nat.instAntisymmNotLt : Std.Antisymm (¬ · < · : Nat → Nat → Prop) where
antisymm := Nat.instTrichotomousLt.trichotomous
protected theorem add_le_add_left {n m : Nat} (h : n ≤ m) (k : Nat) : k + n ≤ k + m :=
match le.dest h with
@@ -817,6 +822,8 @@ protected theorem two_pow_pos (w : Nat) : 0 < 2^w := Nat.pow_pos (by decide)
instance {n m : Nat} [NeZero n] : NeZero (n^m) :=
Nat.ne_zero_iff_zero_lt.mpr (Nat.pow_pos (pos_of_neZero _))
instance {n : Nat} : NeZero (n^0) := Nat.one_ne_zero
protected theorem mul_pow (a b n : Nat) : (a * b) ^ n = a ^ n * b ^ n := by
induction n with
| zero => simp [Nat.pow_zero]

View File

@@ -139,4 +139,12 @@ Returns `true` if the `(n+1)`th least significant bit is `1`, or `false` if it i
-- `1 &&& n` is faster than `n &&& 1` for big `n`.
1 &&& (m >>> n) != 0
/--
Asserts that the `(n+1)`th least significant bit of `m` is not set.
(This definition is used by Lean internally for compact bitmaps.)
-/
@[expose, reducible] protected def hasNotBit (m n : Nat) : Prop :=
Nat.land 1 (Nat.shiftRight m n) ≠ 1
end Nat
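As a quick sanity check of the definition just added (a hedged sketch; `hasNotBit` unfolds to a decidable statement about `Nat`, so `decide` can evaluate it):

```lean
-- Bit 1 of 5 = 0b101 is clear, while bit 0 is set.
example : Nat.hasNotBit 5 1 := by decide
example : ¬ Nat.hasNotBit 5 0 := by decide
```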

View File

@@ -485,11 +485,16 @@ protected theorem and_comm (x y : Nat) : x &&& y = y &&& x := by
apply Nat.eq_of_testBit_eq
simp [Bool.and_comm]
@[grind _=_]
protected theorem and_assoc (x y z : Nat) : (x &&& y) &&& z = x &&& (y &&& z) := by
apply Nat.eq_of_testBit_eq
simp [Bool.and_assoc]
grind_pattern Nat.and_assoc => (x &&& y) &&& z where
x =/= 0; y =/= 0; z =/= 0
grind_pattern Nat.and_assoc => x &&& (y &&& z) where
x =/= 0; y =/= 0; z =/= 0
instance : Std.Associative (α := Nat) (· &&& ·) where
assoc := Nat.and_assoc

View File

@@ -1719,11 +1719,16 @@ theorem shiftRight_succ_inside : ∀m n, m >>> (n+1) = (m/2) >>> n
| 0 => by simp
| n + 1 => by simp [zero_shiftRight n, shiftRight_succ]
@[grind _=_]
theorem shiftLeft_add (m n : Nat) : ∀ k, m <<< (n + k) = (m <<< n) <<< k
| 0 => rfl
| k + 1 => by simp [← Nat.add_assoc, shiftLeft_add _ _ k, shiftLeft_succ]
grind_pattern shiftLeft_add => m <<< (n + k) where
m =/= 0
grind_pattern shiftLeft_add => (m <<< n) <<< k where
m =/= 0
@[simp] theorem shiftLeft_shiftRight (x n : Nat) : x <<< n >>> n = x := by
rw [Nat.shiftLeft_eq, Nat.shiftRight_eq_div_pow, Nat.mul_div_cancel _ (Nat.two_pow_pos _)]
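A hedged usage sketch of `shiftLeft_add` (the guarded patterns above merely keep `grind` from instantiating it at `m = 0`, where the result is trivial):

```lean
-- Splitting a shift count: `5` and `2 + 3` agree definitionally.
example (m : Nat) : m <<< 5 = (m <<< 2) <<< 3 :=
  Nat.shiftLeft_add m 2 3
```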

View File

@@ -28,10 +28,10 @@ abbrev Context := Lean.RArray Nat
/--
When encoding polynomials, we use `fixedVar` for encoding numerals.
The denotation of `fixedVar` is always `1`. -/
def fixedVar := 100000000 -- Any big number should work here
abbrev fixedVar := 100000000 -- Any big number should work here
def Var.denote (ctx : Context) (v : Var) : Nat :=
bif v == fixedVar then 1 else ctx.get v
noncomputable abbrev Var.denote (ctx : Context) (v : Var) : Nat :=
Bool.rec (ctx.get v) 1 (Nat.beq v fixedVar)
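A hedged sketch of the invariant stated in the docstring, using the new `Bool.rec`-based definition (fully qualified names assumed):

```lean
-- `fixedVar` denotes `1` in every context; ordinary variables read the context.
example (ctx : Nat.Linear.Context) :
    Nat.Linear.Var.denote ctx Nat.Linear.fixedVar = 1 := rfl
```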
inductive Expr where
| num (v : Nat)
@@ -41,7 +41,7 @@ inductive Expr where
| mulR (a : Expr) (k : Nat)
deriving Inhabited, BEq
def Expr.denote (ctx : Context) : Expr → Nat
noncomputable abbrev Expr.denote (ctx : Context) : Expr → Nat
| .add a b => Nat.add (denote ctx a) (denote ctx b)
| .num k => k
| .var v => v.denote ctx
@@ -50,7 +50,7 @@ def Expr.denote (ctx : Context) : Expr → Nat
abbrev Poly := List (Nat × Var)
def Poly.denote (ctx : Context) (p : Poly) : Nat :=
noncomputable abbrev Poly.denote (ctx : Context) (p : Poly) : Nat :=
match p with
| [] => 0
| (k, v) :: p => Nat.add (Nat.mul k (v.denote ctx)) (denote ctx p)
@@ -113,9 +113,14 @@ def Poly.isNonZero (p : Poly) : Bool :=
| [] => false
| (k, v) :: p => bif v == fixedVar then k > 0 else isNonZero p
def Poly.denote_eq (ctx : Context) (mp : Poly × Poly) : Prop := mp.1.denote ctx = mp.2.denote ctx
abbrev Poly.denote_eq (ctx : Context) (mp : Poly × Poly) : Prop :=
mp.1.denote ctx = mp.2.denote ctx
def Poly.denote_le (ctx : Context) (mp : Poly × Poly) : Prop := mp.1.denote ctx ≤ mp.2.denote ctx
abbrev Poly.denote_le (ctx : Context) (mp : Poly × Poly) : Prop :=
mp.1.denote ctx ≤ mp.2.denote ctx
set_option allowUnsafeReducibility true
attribute [semireducible] Poly.denote_eq Poly.denote_le
def Expr.toPoly (e : Expr) :=
go 1 e []
@@ -146,7 +151,7 @@ structure ExprCnstr where
lhs : Expr
rhs : Expr
def PolyCnstr.denote (ctx : Context) (c : PolyCnstr) : Prop :=
abbrev PolyCnstr.denote (ctx : Context) (c : PolyCnstr) : Prop :=
bif c.eq then
Poly.denote_eq ctx (c.lhs, c.rhs)
else
@@ -168,7 +173,7 @@ def PolyCnstr.isValid (c : PolyCnstr) : Bool :=
else
c.lhs.isZero
def ExprCnstr.denote (ctx : Context) (c : ExprCnstr) : Prop :=
abbrev ExprCnstr.denote (ctx : Context) (c : ExprCnstr) : Prop :=
bif c.eq then
c.lhs.denote ctx = c.rhs.denote ctx
else

View File

@@ -4,12 +4,9 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Data.List.BasicAux
public section
namespace Nat.SOM
open Linear (Var hugeFuel Context Var.denote)
@@ -21,7 +18,9 @@ inductive Expr where
| mul (a b : Expr)
deriving Inhabited
def Expr.denote (ctx : Context) : Expr → Nat
set_option allowUnsafeReducibility true
noncomputable abbrev Expr.denote (ctx : Context) : Expr → Nat
| num n => n
| var v => v.denote ctx
| add a b => Nat.add (a.denote ctx) (b.denote ctx)
@@ -29,10 +28,12 @@ def Expr.denote (ctx : Context) : Expr → Nat
abbrev Mon := List Var
def Mon.denote (ctx : Context) : Mon → Nat
noncomputable abbrev Mon.denote (ctx : Context) : Mon → Nat
| [] => 1
| v::vs => Nat.mul (v.denote ctx) (denote ctx vs)
attribute [semireducible] Expr.denote Mon.denote
def Mon.mul (m₁ m₂ : Mon) : Mon :=
go hugeFuel m₁ m₂
where
@@ -53,10 +54,12 @@ where
abbrev Poly := List (Nat × Mon)
def Poly.denote (ctx : Context) : Poly → Nat
noncomputable abbrev Poly.denote (ctx : Context) : Poly → Nat
| [] => 0
| (k, m) :: p => Nat.add (Nat.mul k (m.denote ctx)) (denote ctx p)
attribute [semireducible] Poly.denote
def Poly.add (p₁ p₂ : Poly) : Poly :=
go hugeFuel p₁ p₂
where

View File

@@ -15,7 +15,39 @@ public section
namespace Option
deriving instance DecidableEq for Option
/- We write the instance manually so that it is coherent with `decidableEqNone` and
`decidableNoneEq`.
TODO: adjust the `deriving instance DecidableEq` handler to generate something coherent. -/
instance instDecidableEq {α} [DecidableEq α] : DecidableEq (Option α) := fun a b =>
match a with
| none => match b with
| none => .isTrue rfl
| some _ => .isFalse Option.noConfusion
| some a => match b with
| none => .isFalse Option.noConfusion
| some b => decidable_of_decidable_of_eq (Option.some.injEq a b).symm
/--
Equality with `none` is decidable even if the wrapped type does not have decidable equality.
-/
instance decidableEqNone (o : Option α) : Decidable (o = none) :=
/- We use a `match` instead of transferring from `isNone_iff_eq_none` for
compatibility with the `DecidableEq` instance. -/
match o with
| none => .isTrue rfl
| some _ => .isFalse Option.noConfusion
/--
Equality with `none` is decidable even if the wrapped type does not have decidable equality.
-/
instance decidableNoneEq (o : Option α) : Decidable (none = o) :=
/- We use a `match` instead of transferring from `isNone_iff_eq_none` for
compatibility with the `DecidableEq` instance. -/
match o with
| none => .isTrue rfl
| some _ => .isFalse Option.noConfusion
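The coherence the comment asks for can be sketched as follows (hedged; `decide` routes through the hand-written instance):

```lean
-- Both sides reduce by `rfl` because the instances match on the
-- constructors directly rather than transporting along an `iff`.
example : decide ((none : Option Nat) = none) = true := rfl
example : decide ((some 1 : Option Nat) = none) = false := rfl
```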
deriving instance BEq for Option
@[simp, grind =] theorem getD_none : getD none a = a := rfl

View File

@@ -35,17 +35,6 @@ instance [DecidableEq α] (j : α) (o : Option α) : Decidable (j ∈ o) :=
theorem some_inj {a b : α} : some a = some b ↔ a = b := by simp; rfl
/--
Equality with `none` is decidable even if the wrapped type does not have decidable equality.
This is not an instance because it is not definitionally equal to the standard instance of
`DecidableEq (Option α)`, which can cause problems. It can be locally bound if needed.
Try to use the Boolean comparisons `Option.isNone` or `Option.isSome` instead.
-/
@[inline] def decidableEqNone {o : Option α} : Decidable (o = none) :=
decidable_of_decidable_of_iff isNone_iff_eq_none
instance decidableForallMem {p : α → Prop} [DecidablePred p] :
∀ o : Option α, Decidable (∀ a, a ∈ o → p a)
| none => isTrue nofun

View File

@@ -212,10 +212,13 @@ theorem bind_comm {f : α → β → Option γ} (a : Option α) (b : Option β)
(a.bind fun x => b.bind (f x)) = b.bind fun y => a.bind fun x => f x y := by
cases a <;> cases b <;> rfl
@[grind =]
theorem bind_assoc (x : Option α) (f : α → Option β) (g : β → Option γ) :
(x.bind f).bind g = x.bind fun y => (f y).bind g := by cases x <;> rfl
grind_pattern bind_assoc => (x.bind f).bind g where
f =/= some
g =/= some
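The guard above stops `grind` from instantiating `bind_assoc` when either function is literally `some`; the associativity itself is unconditional. A hedged sketch:

```lean
example (x : Option Nat) :
    ((x.bind fun a => some (a + 1)).bind fun b => some (b * 2)) =
      x.bind fun a => some ((a + 1) * 2) := by
  cases x <;> rfl
```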
theorem bind_congr {α β} {o : Option α} {f g : α → Option β} :
(h : ∀ a, o = some a → f a = g a) → o.bind f = o.bind g := by
cases o <;> simp

View File

@@ -234,13 +234,13 @@ If an `LT α` instance is asymmetric and its negation is transitive and antisymm
public theorem IsLinearOrder.of_lt {α : Type u} [LT α]
(lt_asymm : Asymm (α := α) (· < ·) := by exact inferInstance)
(not_lt_trans : Trans (α := α) (¬ · < ·) (¬ · < ·) (¬ · < ·) := by exact inferInstance)
(not_lt_antisymm : Antisymm (α := α) (¬ · < ·) := by exact inferInstance) :
(lt_trichotomous : Trichotomous (α := α) (· < ·) := by exact inferInstance) :
haveI := LE.ofLT α
IsLinearOrder α :=
letI := LE.ofLT α
haveI : IsLinearPreorder α := .of_lt
{ le_antisymm := by
simpa [LE.ofLT] using fun a b hab hba => not_lt_antisymm.antisymm a b hba hab }
simpa [LE.ofLT] using fun a b hab hba => lt_trichotomous.trichotomous a b hba hab }
/--
This lemma characterizes in terms of `LT α` when a `Min α` instance

View File

@@ -22,17 +22,53 @@ section AxiomaticInstances
public instance (r : α → α → Prop) [Asymm r] : Irrefl r where
irrefl a h := Asymm.asymm a a h h
public instance {r : α → α → Prop} [Total r] : Refl r where
public instance (r : α → α → Prop) [Total r] : Refl r where
refl a := by simpa using Total.total a a
public instance (r : α → α → Prop) [Asymm r] : Antisymm r where
antisymm a b h h' := (Asymm.asymm a b h h').elim
public instance (r : α → α → Prop) [Total r] : Trichotomous r where
trichotomous a b h h' := by simpa [h, h'] using Total.total (r := r) a b
public theorem Trichotomous.rel_or_eq_or_rel_swap {r : α → α → Prop} [i : Trichotomous r] {a b} :
r a b ∨ a = b ∨ r b a := match Classical.em (r a b) with
| .inl hab => .inl hab | .inr hab => match Classical.em (r b a) with
| .inl hba => .inr <| .inr hba
| .inr hba => .inr <| .inl <| i.trichotomous _ _ hab hba
public theorem trichotomous_of_rel_or_eq_or_rel_swap {r : α → α → Prop}
(h : ∀ {a b}, r a b ∨ a = b ∨ r b a) : Trichotomous r where
trichotomous _ _ hab hba := (h.resolve_left hab).resolve_right hba
public instance Antisymm.trichotomous_of_antisymm_not {r : α → α → Prop} [i : Antisymm (¬ r · ·)] :
Trichotomous r where trichotomous := i.antisymm
public theorem Trichotomous.antisymm_not {r : α → α → Prop} [i : Trichotomous r] :
Antisymm (¬ r · ·) where antisymm := i.trichotomous
public theorem Total.rel_of_not_rel_swap {r : α → α → Prop} [Total r] {a b} (h : ¬ r a b) : r b a :=
(Total.total a b).elim (fun h' => (h h').elim) (·)
public theorem total_of_not_rel_swap_imp_rel {r : α → α → Prop} (h : ∀ {a b}, ¬ r a b → r b a) :
Total r where
total a b := match Classical.em (r a b) with | .inl hab => .inl hab | .inr hab => .inr (h hab)
public theorem total_of_refl_of_trichotomous (r : α → α → Prop) [Refl r] [Trichotomous r] :
Total r where
total a b := (Trichotomous.rel_or_eq_or_rel_swap (a := a) (b := b) (r := r)).elim Or.inl <|
fun h => h.elim (fun h => h ▸ Or.inl (Refl.refl _)) Or.inr
public theorem asymm_of_irrefl_of_antisymm (r : α → α → Prop) [Irrefl r] [Antisymm r] :
Asymm r where asymm a b h h' := Irrefl.irrefl _ (Antisymm.antisymm a b h h' ▸ h)
public instance Total.asymm_of_total_not {r : α → α → Prop} [i : Total (¬ r · ·)] : Asymm r where
asymm a b h := by cases i.total a b <;> trivial
asymm a b h := (i.total a b).resolve_left (· h)
public theorem Asymm.total_not {r : α → α → Prop} [i : Asymm r] : Total (¬ r · ·) where
total a b := by
apply Classical.byCases (p := r a b) <;> intro hab
· exact Or.inr <| i.asymm a b hab
· exact Or.inl hab
total a b := match Classical.em (r b a) with
| .inl hba => .inl <| i.asymm b a hba
| .inr hba => .inr hba
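Taken together, these conversions let facts flow between the axiomatic classes; a hedged sketch (class names as above, assuming the `Std` namespace):

```lean
-- An asymmetric relation is total in the negated sense.
example {r : Nat → Nat → Prop} [Std.Asymm r] : Std.Total (¬ r · ·) :=
  Std.Asymm.total_not
```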
public instance {α : Type u} [LE α] [IsPartialOrder α] :
Antisymm (α := α) (· ≤ ·) where
@@ -74,9 +110,7 @@ public theorem le_total {α : Type u} [LE α] [Std.Total (α := α) (· ≤ ·)]
Std.Total.total a b
public theorem le_of_not_ge {α : Type u} [LE α] [Std.Total (α := α) (· ≤ ·)] {a b : α} :
¬ b ≤ a → a ≤ b := by
intro h
simpa [h] using Std.Total.total a b (r := (· ≤ ·))
¬ b ≤ a → a ≤ b := Total.rel_of_not_rel_swap
end LE
@@ -90,18 +124,30 @@ public theorem lt_iff_le_and_not_ge {α : Type u} [LT α] [LE α] [LawfulOrderLT
a < b ↔ a ≤ b ∧ ¬ b ≤ a :=
LawfulOrderLT.lt_iff a b
public theorem not_lt {α : Type u} [LT α] [LE α] [Std.Total (α := α) (· ≤ ·)] [LawfulOrderLT α]
{a b : α} : ¬ a < b ↔ b ≤ a := by
simp [lt_iff_le_and_not_ge, Classical.not_not, Std.Total.total]
public theorem not_lt_iff_not_le_or_ge {α : Type u} [LT α] [LE α] [LawfulOrderLT α]
{a b : α} : ¬ a < b ↔ ¬ a ≤ b ∨ b ≤ a := by
simp only [lt_iff_le_and_not_ge, Classical.not_and_iff_not_or_not, Classical.not_not]
public theorem not_le_of_gt {α : Type u} [LT α] [LE α] [LawfulOrderLT α] {a b : α}
(h : a < b) : ¬ b ≤ a := (lt_iff_le_and_not_ge.1 h).2
public theorem not_lt_of_ge {α : Type u} [LT α] [LE α] [LawfulOrderLT α] {a b : α}
(h : a ≤ b) : ¬ b < a := imp_not_comm.1 not_le_of_gt h
public instance {α : Type u} {_ : LE α} [LT α] [LawfulOrderLT α]
[Trichotomous (α := α) (· < ·)] : Antisymm (α := α) (· ≤ ·) where
antisymm _ _ hab hba := Trichotomous.trichotomous _ _ (not_lt_of_ge hba) (not_lt_of_ge hab)
public theorem not_gt_of_lt {α : Type u} [LT α] [i : Std.Asymm (α := α) (· < ·)] {a b : α}
(h : a < b) : ¬ b < a :=
i.asymm a b h
public theorem le_of_lt {α : Type u} [LT α] [LE α] [LawfulOrderLT α] {a b : α} (h : a < b) :
a ≤ b := by
simp only [LawfulOrderLT.lt_iff] at h
exact h.1
a ≤ b := (lt_iff_le_and_not_ge.1 h).1
public instance {α : Type u} {_ : LT α} [LE α] [LawfulOrderLT α]
[Antisymm (α := α) (· ≤ ·)] : Antisymm (α := α) (· < ·) where
antisymm _ _ hab hba := Antisymm.antisymm _ _ (le_of_lt hab) (le_of_lt hba)
public instance {α : Type u} [LT α] [LE α] [LawfulOrderLT α] :
Std.Asymm (α := α) (· < ·) where
@@ -110,8 +156,9 @@ public instance {α : Type u} [LT α] [LE α] [LawfulOrderLT α] :
intro h h'
exact h.2.elim h'.1
public instance {α : Type u} [LT α] [LE α] [LawfulOrderLT α] :
Std.Irrefl (α := α) (· < ·) := inferInstance
@[deprecated instIrreflOfAsymm (since := "2025-10-24")]
public theorem instIrreflLtOfIsPreorderOfLawfulOrderLT {α : Type u} [LT α] [LE α]
[LawfulOrderLT α] : Std.Irrefl (α := α) (· < ·) := inferInstance
public instance {α : Type u} [LT α] [LE α] [Trans (α := α) (· ≤ ·) (· ≤ ·) (· ≤ ·)]
[LawfulOrderLT α] : Trans (α := α) (· < ·) (· < ·) (· < ·) where
@@ -122,10 +169,19 @@ public instance {α : Type u} [LT α] [LE α] [Trans (α := α) (· ≤ ·) (·
· intro hca
exact hab.2.elim (le_trans hbc.1 hca)
public theorem not_lt {α : Type u} [LT α] [LE α] [Std.Total (α := α) (· ≤ ·)] [LawfulOrderLT α]
{a b : α} : ¬ a < b ↔ b ≤ a := by
simp [not_lt_iff_not_le_or_ge]
exact le_of_not_ge
public theorem not_le {α : Type u} [LT α] [LE α] [Std.Total (α := α) (· ≤ ·)] [LawfulOrderLT α]
{a b : α} : ¬ a ≤ b ↔ b < a := by
simp [lt_iff_le_and_not_ge]
exact le_of_not_ge
public instance {α : Type u} {_ : LT α} [LE α] [LawfulOrderLT α]
[Total (α := α) (· ≤ ·)] [Antisymm (α := α) (· ≤ ·)] :
Antisymm (α := α) (¬ · < ·) where
antisymm a b hab hba := by
[Total (α := α) (· ≤ ·)] [Antisymm (α := α) (· ≤ ·)] : Trichotomous (α := α) (· < ·) where
trichotomous a b hab hba := by
simp only [not_lt] at hab hba
exact Antisymm.antisymm (r := (· ≤ ·)) a b hba hab
@@ -136,9 +192,9 @@ public instance {α : Type u} {_ : LT α} [LE α] [LawfulOrderLT α]
simp only [not_lt] at hab hbc
exact le_trans hbc hab
public instance {α : Type u} {_ : LT α} [LE α] [LawfulOrderLT α] [Total (α := α) (· ≤ ·)] :
Total (α := α) (¬ · < ·) where
total a b := by simp [not_lt, Std.Total.total]
@[deprecated Asymm.total_not (since := "2025-10-24")]
public theorem instTotalNotLtOfLawfulOrderLTOfLe {α : Type u} {_ : LT α} [LE α] [LawfulOrderLT α]
: Total (α := α) (¬ · < ·) := Asymm.total_not
public theorem lt_of_le_of_lt {α : Type u} [LE α] [LT α]
[Trans (α := α) (· ≤ ·) (· ≤ ·) (· ≤ ·)] [LawfulOrderLT α] {a b c : α} (hab : a ≤ b)

View File

@@ -12,6 +12,8 @@ public section
namespace Prod
attribute [grind =] Prod.map_fst Prod.map_snd
instance [BEq α] [BEq β] [ReflBEq α] [ReflBEq β] : ReflBEq (α × β) where
rfl {a} := by cases a; simp [BEq.beq]

View File

@@ -36,7 +36,7 @@ inductive RArray (α : Type u) : Type u where
variable {α : Type u}
/-- The crucial operation, written with very little abstractional overhead -/
noncomputable def RArray.get (a : RArray α) (n : Nat) : α :=
noncomputable abbrev RArray.get (a : RArray α) (n : Nat) : α :=
RArray.rec (fun x => x) (fun p _ _ l r => (Nat.ble p n).rec l r) a
private theorem RArray.get_eq_def (a : RArray α) (n : Nat) :

View File

@@ -18,6 +18,7 @@ public import Init.Data.Range.Polymorphic.UInt
public import Init.Data.Range.Polymorphic.SInt
public import Init.Data.Range.Polymorphic.NatLemmas
public import Init.Data.Range.Polymorphic.IntLemmas
public import Init.Data.Range.Polymorphic.GetElemTactic
public section

View File

@@ -0,0 +1,39 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Alexander Bentkamp
-/
module
prelude
public import Init.Data.Range.Polymorphic.Int
public import Init.Data.Range.Polymorphic.Lemmas
public section
namespace Std.PRange.Int
@[simp]
theorem size_rco {a b : Int} :
(a...b).size = (b - a).toNat := by
simp only [Rco.size, Rxo.HasSize.size, Rxc.HasSize.size]
omega
@[simp]
theorem size_rcc {a b : Int} :
(a...=b).size = (b + 1 - a).toNat := by
simp [Rcc.size, Rxc.HasSize.size]
@[simp]
theorem size_roc {a b : Int} :
(a<...=b).size = (b - a).toNat := by
simp only [Roc.size, Rxc.HasSize.size]
omega
@[simp]
theorem size_roo {a b : Int} :
(a<...b).size = (b - a - 1).toNat := by
simp [Roo.size, Rxo.HasSize.size, Rxc.HasSize.size]
omega
end Std.PRange.Int
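A hedged sketch of how the clamping through `Int.toNat` behaves, including an empty range (evaluated with `decide`, since the size instances are computable):

```lean
example : ((-2 : Int)...3).size = 5 := by decide
example : ((5 : Int)...3).size = 0 := by decide
```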

View File

@@ -19,45 +19,6 @@ open Std.Iterators
namespace Std
open PRange
namespace Rxc
/--
Iterators for right-closed ranges implementing {name}`Rxc.HasSize` support {name}`Iter.size`.
-/
instance [Rxc.HasSize α] [UpwardEnumerable α] [LE α] [DecidableLE α] :
IteratorSize (Rxc.Iterator α) Id where
size it := match it.internalState.next with
| none => pure (.up 0)
| some next => pure (.up (Rxc.HasSize.size next it.internalState.upperBound))
end Rxc
namespace Rxo
/--
Iterators for ranges implementing {name}`Rxo.HasSize` support {name}`Iter.size`.
-/
instance [Rxo.HasSize α] [UpwardEnumerable α] [LT α] [DecidableLT α] :
IteratorSize (Rxo.Iterator α) Id where
size it := match it.internalState.next with
| none => pure (.up 0)
| some next => pure (.up (Rxo.HasSize.size next it.internalState.upperBound))
end Rxo
namespace Rxi
/--
Iterators for ranges implementing {name}`Rxi.HasSize` support {name}`Iter.size`.
-/
instance [Rxi.HasSize α] [UpwardEnumerable α] :
IteratorSize (Rxi.Iterator α) Id where
size it := match it.internalState.next with
| none => pure (.up 0)
| some next => pure (.up (Rxi.HasSize.size next))
end Rxi
namespace Rcc
variable {α : Type u}
@@ -92,8 +53,8 @@ def toArray [LE α] [DecidableLE α] [UpwardEnumerable α] [LawfulUpwardEnumerab
Returns the number of elements contained in the given closed range.
-/
@[always_inline, inline]
def size [Rxc.HasSize α] [UpwardEnumerable α] [LE α] [DecidableLE α] (r : Rcc α) : Nat :=
Internal.iter r |>.size
def size [Rxc.HasSize α] (r : Rcc α) : Nat :=
Rxc.HasSize.size r.lower r.upper
section Iterator
@@ -178,8 +139,8 @@ def toArray [LT α] [DecidableLT α] [UpwardEnumerable α] [LawfulUpwardEnumerab
Returns the number of elements contained in the given left-closed right-open range.
-/
@[always_inline, inline]
def size [Rxo.HasSize α] [UpwardEnumerable α] [LT α] [DecidableLT α] (r : Rco α) : Nat :=
Internal.iter r |>.size
def size [Rxo.HasSize α] (r : Rco α) : Nat :=
Rxo.HasSize.size r.lower r.upper
section Iterator
@@ -265,8 +226,8 @@ def toArray [UpwardEnumerable α] [LawfulUpwardEnumerable α] [Rxi.IsAlwaysFinit
Returns the number of elements contained in the given left-closed right-unbounded range.
-/
@[always_inline, inline]
def size [Rxi.HasSize α] [UpwardEnumerable α] (r : Rci α) : Nat :=
Internal.iter r |>.size
def size [Rxi.HasSize α] (r : Rci α) : Nat :=
Rxi.HasSize.size r.lower
section Iterator
@@ -349,8 +310,10 @@ def toArray [LE α] [DecidableLE α] [UpwardEnumerable α] [LawfulUpwardEnumerab
Returns the number of elements contained in the given left-open right-closed range.
-/
@[always_inline, inline]
def size [Rxc.HasSize α] [UpwardEnumerable α] [LE α] [DecidableLE α] (r : Roc α) : Nat :=
Internal.iter r |>.size
def size [Rxc.HasSize α] [UpwardEnumerable α] (r : Roc α) : Nat :=
match UpwardEnumerable.succ? r.lower with
| none => 0
| some lower => Rxc.HasSize.size lower r.upper
section Iterator
@@ -428,8 +391,10 @@ def toArray [LT α] [DecidableLT α] [UpwardEnumerable α] [LawfulUpwardEnumerab
Returns the number of elements contained in the given open range.
-/
@[always_inline, inline]
def size [Rxo.HasSize α] [UpwardEnumerable α] [LT α] [DecidableLT α] (r : Roo α) : Nat :=
Internal.iter r |>.size
def size [Rxo.HasSize α] [UpwardEnumerable α] (r : Roo α) : Nat :=
match UpwardEnumerable.succ? r.lower with
| none => 0
| some lower => Rxo.HasSize.size lower r.upper
section Iterator
@@ -507,7 +472,9 @@ Returns the number of elements contained in the given left-open right-unbounded
-/
@[always_inline, inline]
def size [Rxi.HasSize α] [UpwardEnumerable α] (r : Roi α) : Nat :=
Internal.iter r |>.size
match UpwardEnumerable.succ? r.lower with
| none => 0
| some lower => Rxi.HasSize.size lower
section Iterator
@@ -580,8 +547,10 @@ def toArray [Least? α] [LE α] [DecidableLE α] [UpwardEnumerable α] [LawfulUp
Returns the number of elements contained in the given closed range.
-/
@[always_inline, inline]
def size [Rxc.HasSize α] [UpwardEnumerable α] [Least? α] [LE α] [DecidableLE α] (r : Ric α) : Nat :=
Internal.iter r |>.size
def size [Rxc.HasSize α] [Least? α] (r : Ric α) : Nat :=
match Least?.least? (α := α) with
| none => 0
| some least => Rxc.HasSize.size least r.upper
section Iterator
@@ -653,8 +622,10 @@ def toArray [Least? α] [LT α] [DecidableLT α] [UpwardEnumerable α] [LawfulUp
Returns the number of elements contained in the given closed range.
-/
@[always_inline, inline]
def size [Rxo.HasSize α] [UpwardEnumerable α] [Least? α] [LT α] [DecidableLT α] (r : Rio α) : Nat :=
Internal.iter r |>.size
def size [Rxo.HasSize α] [Least? α] (r : Rio α) : Nat :=
match Least?.least? (α := α) with
| none => 0
| some least => Rxo.HasSize.size least r.upper
section Iterator
@@ -727,8 +698,10 @@ def toArray {α} [UpwardEnumerable α] [Least? α] (r : Rii α)
Returns the number of elements contained in the full range.
-/
@[always_inline, inline]
def size [UpwardEnumerable α] [Least? α] (r : Rii α) [IteratorSize (Rxi.Iterator α) Id] : Nat :=
Internal.iter r |>.size
def size (_ : Rii α) [Least? α] [Rxi.HasSize α] : Nat :=
match Least?.least? (α := α) with
| none => 0
| some least => Rxi.HasSize.size least
section Iterator

File diff suppressed because it is too large

View File

@@ -16,52 +16,64 @@ namespace Std.PRange.Nat
theorem succ_eq {n : Nat} : succ n = n + 1 :=
rfl
theorem toList_Rco_succ_succ {m n : Nat} :
theorem toList_rco_succ_succ {m n : Nat} :
((m+1)...(n+1)).toList = (m...n).toList.map (· + 1) := by
simp only [← succ_eq]
rw [Std.Rco.toList_succ_succ_eq_map]
@[deprecated toList_Rco_succ_succ (since := "2025-08-22")]
theorem ClosedOpen.toList_succ_succ {m n : Nat} :
((m+1)...(n+1)).toList = (m...n).toList.map (· + 1) := toList_Rco_succ_succ
@[deprecated toList_rco_succ_succ (since := "2025-10-30")]
def toList_Rco_succ_succ := @toList_rco_succ_succ
@[deprecated toList_rco_succ_succ (since := "2025-08-22")]
def ClosedOpen.toList_succ_succ := @toList_rco_succ_succ
@[simp]
theorem size_Rcc {a b : Nat} :
theorem size_rcc {a b : Nat} :
(a...=b).size = b + 1 - a := by
simp [Rcc.size, Std.Iterators.Iter.size, Std.Iterators.IteratorSize.size,
Rcc.Internal.iter, Std.Iterators.Iter.toIterM, Rxc.HasSize.size]
simp [Rcc.size, Rxc.HasSize.size]
@[deprecated size_rcc (since := "2025-10-30")]
def size_Rcc := @size_rcc
@[simp]
theorem size_Rco {a b : Nat} :
theorem size_rco {a b : Nat} :
(a...b).size = b - a := by
simp only [Rco.size, Iterators.Iter.size, Iterators.IteratorSize.size, Iterators.Iter.toIterM,
Rco.Internal.iter, Rxo.HasSize.size, Rxc.HasSize.size, Id.run_pure]
simp only [Rco.size, Rxo.HasSize.size, Rxc.HasSize.size]
omega
@[deprecated size_rco (since := "2025-10-30")]
def size_Rco := @size_rco
@[simp]
theorem size_Roc {a b : Nat} :
theorem size_roc {a b : Nat} :
(a<...=b).size = b - a := by
simp [Roc.size, Std.Iterators.Iter.size, Std.Iterators.IteratorSize.size,
Roc.Internal.iter, Std.Iterators.Iter.toIterM, Rxc.HasSize.size]
simp [Roc.size, Rxc.HasSize.size]
@[deprecated size_roc (since := "2025-10-30")]
def size_Roc := @size_roc
@[simp]
theorem size_Roo {a b : Nat} :
theorem size_roo {a b : Nat} :
(a<...b).size = b - a - 1 := by
simp only [Roo.size, Iterators.Iter.size, Iterators.IteratorSize.size, Iterators.Iter.toIterM,
Roo.Internal.iter, Rxo.HasSize.size, Rxc.HasSize.size, Id.run_pure]
omega
simp [Roo.size, Rxo.HasSize.size, Rxc.HasSize.size]
@[deprecated size_roo (since := "2025-10-30")]
def size_Roo := @size_roo
@[simp]
theorem size_Ric {b : Nat} :
theorem size_ric {b : Nat} :
(*...=b).size = b + 1 := by
simp [Ric.size, Std.Iterators.Iter.size, Std.Iterators.IteratorSize.size,
Ric.Internal.iter, Std.Iterators.Iter.toIterM, Rxc.HasSize.size]
simp [Ric.size, Rxc.HasSize.size]
@[deprecated size_ric (since := "2025-10-30")]
def size_Ric := @size_ric
@[simp]
theorem size_Rio {b : Nat} :
theorem size_rio {b : Nat} :
(*...b).size = b := by
simp only [Rio.size, Iterators.Iter.size, Iterators.IteratorSize.size, Iterators.Iter.toIterM,
Rio.Internal.iter, Rxo.HasSize.size, Rxc.HasSize.size, Id.run_pure]
omega
simp [Rio.size, Rxo.HasSize.size, Rxc.HasSize.size]
@[deprecated size_rio (since := "2025-10-30")]
def size_Rio := @size_rio
end Std.PRange.Nat
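With the simp attributes above, sizes of concrete `Nat` ranges reduce by `simp`; a hedged sketch:

```lean
example : (2...7).size = 5 := by simp
example : (2...=7).size = 6 := by simp
example : (2<...7).size = 4 := by simp
```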

View File

@@ -551,7 +551,9 @@ theorem pow_def (q : Rat) (n : Nat) :
@[simp] theorem num_pow (q : Rat) (n : Nat) : (q ^ n).num = q.num ^ n := rfl
@[simp] theorem den_pow (q : Rat) (n : Nat) : (q ^ n).den = q.den ^ n := rfl
@[simp] protected theorem pow_zero (q : Rat) : q ^ 0 = 1 := rfl
@[simp] protected theorem pow_zero (q : Rat) : q ^ 0 = 1 := by
simp only [pow_def, Int.pow_zero, Nat.pow_zero, mk_den_one]
rfl
protected theorem pow_succ (q : Rat) (n : Nat) : q ^ (n + 1) = q ^ n * q := by
rcases q with ⟨n, d, hn, r⟩
@@ -567,7 +569,8 @@ protected theorem zpow_natCast (q : Rat) (n : Nat) : q ^ (n : Int) = q ^ n := rf
protected theorem zpow_neg (q : Rat) (n : Int) : q ^ (-n : Int) = (q ^ n)⁻¹ := by
rcases n with (_ | n) | n
· with_unfolding_all rfl
· simp only [Int.ofNat_eq_natCast, Int.cast_ofNat_Int, Int.neg_zero, Rat.zpow_zero]
with_unfolding_all rfl
· rfl
· exact (Rat.inv_inv _).symm

View File

@@ -414,8 +414,8 @@ instance : Repr String where
instance : Repr String.Pos.Raw where
reprPrec p _ := "{ byteIdx := " ++ repr p.byteIdx ++ " }"
instance : Repr Substring where
reprPrec s _ := Format.text <| String.Internal.append (String.quote (Substring.Internal.toString s)) ".toSubstring"
instance : Repr Substring.Raw where
reprPrec s _ := Format.text <| String.Internal.append (String.quote (Substring.Raw.Internal.toString s)) ".toRawSubstring"
instance (n : Nat) : Repr (Fin n) where
reprPrec f _ := repr f.val

View File

@@ -10,6 +10,7 @@ public import Init.Data.Slice.Basic
public import Init.Data.Slice.Notation
public import Init.Data.Slice.Operations
public import Init.Data.Slice.Array
public import Init.Data.Slice.List
public import Init.Data.Slice.Lemmas
public section

View File

@@ -23,45 +23,82 @@ variable {α : Type u}
instance : Rcc.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rcc.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray range.lower (range.upper + 1)
instance : Rco.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rco.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray range.lower range.upper
instance : Rci.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rci.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray halfOpenRange.lower halfOpenRange.upper
instance : Roc.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roc.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray (range.lower + 1) (range.upper + 1)
instance : Roo.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roo.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray (range.lower + 1) range.upper
instance : Roi.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roi.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray halfOpenRange.lower halfOpenRange.upper
instance : Ric.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Ric.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray 0 (range.upper + 1)
instance : Rio.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rio.HasRcoIntersection.intersection range 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray 0 range.upper
instance : Rii.Sliceable (Array α) Nat (Subarray α) where
mkSlice xs _ :=
let halfOpenRange := 0...<xs.size
(xs.toSubarray halfOpenRange.lower halfOpenRange.upper)
xs.toSubarray 0 xs.size
instance : Rcc.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rcc.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Rco.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rco.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Rci.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rci.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Roc.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roc.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Roo.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roo.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Roi.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Roi.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Ric.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Ric.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Rio.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs range :=
let halfOpenRange := Rio.HasRcoIntersection.intersection range 0...<xs.size
xs.array[(halfOpenRange.lower + xs.start)...(halfOpenRange.upper + xs.start)]
instance : Rii.Sliceable (Subarray α) Nat (Subarray α) where
mkSlice xs _ :=
xs
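Taken together, these instances give every range shape a `Subarray` slice. A minimal usage sketch, assuming the `Sliceable` instances above are in scope:

```lean
-- Sketch only: exercises the Array slice notation defined above.
def ys : Array Nat := #[10, 20, 30, 40, 50]

#eval ys[1...4].toArray   -- half-open range: indices 1, 2, 3
#eval ys[1...=3].toArray  -- closed range: indices 1 through 3
#eval ys[2<...*].toArray  -- everything after index 2
#eval ys[*...*].toArray   -- the whole array as a slice
```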

View File

@@ -7,25 +7,26 @@ module
prelude
public import Init.Data.Slice.Array.Basic
public import Init.Data.Slice.Operations
import Init.Data.Iterators.Combinators.Attach
public import Init.Data.Iterators.Combinators.ULift
import all Init.Data.Range.Polymorphic.Basic
public import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Slice.Operations
import Init.Omega
import Init.Data.Iterators.Lemmas.Combinators.Monadic.FilterMap
public section
/-!
This module provides slice notation for array slices (a.k.a. `Subarray`) and implements an iterator
for those slices.
This module implements an iterator for array slices (`Subarray`).
-/
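A minimal sketch of what the iterator enables, assuming the `ForIn` instance derived below:

```lean
-- Sketch: sums the elements of a subarray by iterating over it.
def sumSlice (xs : Subarray Nat) : Nat := Id.run do
  let mut acc := 0
  for x in xs do  -- uses the slice iterator through `ForIn`
    acc := acc + x
  return acc
```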
open Std Slice PRange Iterators
variable {shape : RangeShape} {α : Type u}
instance {s : Subarray α} : ToIterator s Id α :=
instance {s : Slice (Internal.SubarrayData α)} : ToIterator s Id α :=
.of _
(Rco.Internal.iter (s.internalRepresentation.start...<s.internalRepresentation.stop)
|>.attachWith (· < s.internalRepresentation.array.size) ?h
@@ -40,23 +41,24 @@ where finally
universe v w
@[no_expose] instance {s : Subarray α} : Iterator (ToIterator.State s Id) Id α := inferInstance
@[no_expose] instance {s : Subarray α} : Finite (ToIterator.State s Id) Id := inferInstance
@[no_expose] instance {s : Subarray α} : IteratorCollect (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : Subarray α} : IteratorCollectPartial (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : Subarray α} {m : Type v → Type w} [Monad m] :
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} : Iterator (ToIterator.State s Id) Id α := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} : Finite (ToIterator.State s Id) Id := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} : IteratorCollect (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} : LawfulIteratorCollect (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} : IteratorCollectPartial (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} {m : Type v → Type w} [Monad m] :
IteratorLoop (ToIterator.State s Id) Id m := inferInstance
@[no_expose] instance {s : Subarray α} {m : Type v → Type w} [Monad m] :
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} {m : Type v → Type w} [Monad m] :
LawfulIteratorLoop (ToIterator.State s Id) Id m := inferInstance
@[no_expose] instance {s : Slice (Internal.SubarrayData α)} {m : Type v → Type w} [Monad m] :
IteratorLoopPartial (ToIterator.State s Id) Id m := inferInstance
@[no_expose] instance {s : Subarray α} :
IteratorSize (ToIterator.State s Id) Id := inferInstance
@[no_expose] instance {s : Subarray α} :
IteratorSizePartial (ToIterator.State s Id) Id := inferInstance
@[no_expose]
instance {α : Type u} {m : Type v → Type w} :
ForIn m (Subarray α) α where
forIn xs init f := forIn (Std.Slice.Internal.iter xs) init f
instance : SliceSize (Internal.SubarrayData α) where
size s := s.internalRepresentation.stop - s.internalRepresentation.start
instance {α : Type u} {m : Type v → Type w} [Monad m] :
ForIn m (Subarray α) α :=
inferInstance
/-!
Without defining the following function `Subarray.foldlM`, it is still possible to call

View File

@@ -8,21 +8,23 @@ module
prelude
import all Init.Data.Array.Subarray
import all Init.Data.Slice.Array.Basic
import Init.Data.Slice.Lemmas
public import Init.Data.Slice.Array.Iterator
import all Init.Data.Slice.Array.Iterator
import all Init.Data.Slice.Operations
import all Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Range.Polymorphic.Lemmas
import all Init.Data.Range.Polymorphic.Lemmas
public import Init.Data.Slice.Lemmas
public import Init.Data.Iterators.Lemmas
import Init.Data.Slice.List.Lemmas
import Init.Data.Range.Polymorphic.NatLemmas
public section
open Std Std.Iterators Std.PRange Std.Slice
open Std.Iterators Std.PRange
namespace Subarray
namespace Std.Slice.Array
private theorem internalIter_Rco_eq {α : Type u} {s : Subarray α} :
theorem internalIter_eq {α : Type u} {s : Subarray α} :
Internal.iter s = (Rco.Internal.iter (s.start...<s.stop)
|>.attachWith (· < s.array.size)
(fun out h => h
@@ -33,16 +35,666 @@ private theorem internalIter_Rco_eq {α : Type u} {s : Subarray α} :
|>.map fun | .up i => s.array[i.1]) := by
simp [Internal.iter, ToIterator.iter_eq, Subarray.start, Subarray.stop, Subarray.array]
private theorem toList_internalIter {α : Type u} {s : Subarray α} :
theorem toList_internalIter {α : Type u} {s : Subarray α} :
(Internal.iter s).toList =
((s.start...s.stop).toList
|>.attachWith (· < s.array.size)
(fun out h => h
|> Rco.mem_toList_iff_mem.mp
|> Rco.lt_upper_of_mem
|> (Nat.lt_of_lt_of_le · s.stop_le_array_size))
|>.map fun i => s.array[i.1]) := by
rw [internalIter_Rco_eq, Iter.toList_map, Iter.toList_uLift, Iter.toList_attachWith]
|>.attach
|>.map fun i => s.array[i.1]'(i.property
|> Rco.mem_toList_iff_mem.mp
|> Rco.lt_upper_of_mem
|> (Nat.lt_of_lt_of_le · s.stop_le_array_size))) := by
rw [internalIter_eq, Iter.toList_map, Iter.toList_uLift, Iter.toList_attachWith]
simp [Rco.toList]
end Std.Slice.Array
public instance : LawfulSliceSize (Internal.SubarrayData α) where
lawful s := by
simp [SliceSize.size, ToIterator.iter_eq, Iter.toIter_toIterM,
Iter.size_toArray_eq_count, Rco.Internal.toArray_eq_toArray_iter,
Rco.size_toArray, Rco.size, Rxo.HasSize.size, Rxc.HasSize.size]
omega
public theorem toArray_eq_sliceToArray {α : Type u} {s : Subarray α} :
s.toArray = Slice.toArray s := by
simp [Subarray.toArray, Array.ofSubarray]
@[simp]
public theorem forIn_toList {α : Type u} {s : Subarray α}
{m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
{f : α → γ → m (ForInStep γ)} :
ForIn.forIn s.toList init f = ForIn.forIn s init f :=
Slice.forIn_toList
@[simp]
public theorem forIn_toArray {α : Type u} {s : Subarray α}
{m : Type v → Type w} [Monad m] [LawfulMonad m] {γ : Type v} {init : γ}
{f : α → γ → m (ForInStep γ)} :
ForIn.forIn s.toArray init f = ForIn.forIn s init f :=
Slice.forIn_toArray
end Subarray
public theorem Array.toSubarray_eq_toSubarray_of_min_eq_min {xs : Array α}
{start stop stop' : Nat} (h : min stop xs.size = min stop' xs.size) :
xs.toSubarray start stop = xs.toSubarray start stop' := by
simp only [Array.toSubarray]
split
· split
· have h₁ : start ≤ xs.size := by omega
have h₂ : start ≤ stop' := by omega
simp only [dif_pos h₁, dif_pos h₂]
split
· simp_all
· simp_all [Nat.min_eq_right (Nat.le_of_lt ‹_›)]
· simp only [Nat.min_eq_left, *] at h
split
· simp only [Nat.min_eq_left, *] at h
simp [h]
omega
· simp only [ge_iff_le, not_false_eq_true, Nat.min_eq_right (Nat.le_of_not_ge ‹_›), *] at h
simp [h]
omega
· split
· split
· simp only [ge_iff_le, not_false_eq_true, Nat.min_eq_right (Nat.le_of_not_ge ‹_›),
Nat.min_eq_left, *] at h
simp_all
omega
· simp
· simp [Nat.min_eq_right (Nat.le_of_not_ge ‹_›), *] at h
split
· simp only [Nat.min_eq_left, *] at h
simp_all
omega
· simp
public theorem Array.toSubarray_eq_min {xs : Array α} {lo hi : Nat} :
xs.toSubarray lo hi = ⟨xs, min lo (min hi xs.size), min hi xs.size, Nat.min_le_right _ _,
Nat.min_le_right _ _⟩ := by
simp only [Array.toSubarray]
split <;> split <;> simp [Nat.min_eq_right (Nat.le_of_not_ge ‹_›), *]
@[simp]
public theorem Array.array_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).array = xs := by
simp [toSubarray_eq_min, Subarray.array]
@[simp]
public theorem Array.start_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).start = min lo (min hi xs.size) := by
simp [toSubarray_eq_min, Subarray.start]
@[simp]
public theorem Array.stop_toSubarray {xs : Array α} {lo hi : Nat} :
(xs.toSubarray lo hi).stop = min hi xs.size := by
simp [toSubarray_eq_min, Subarray.stop]
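These projection lemmas make the clamping behavior of `Array.toSubarray` visible to `simp`; a hedged sketch of a goal about an out-of-range bound that they discharge:

```lean
-- Sketch: `stop_toSubarray` is a simp lemma, so `simp` normalizes the
-- out-of-range stop index 100 to `min 100 xs.size`.
example {xs : Array Nat} : (xs.toSubarray 2 100).stop = min 100 xs.size := by
  simp
```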
theorem Subarray.toList_eq {xs : Subarray α} :
xs.toList = (xs.array.extract xs.start xs.stop).toList := by
let aslice := xs
obtain ⟨array, start, stop, h₁, h₂⟩ := xs
let lslice : ListSlice α := ⟨array.toList.drop start, some (stop - start)⟩
simp only [Subarray.start, Subarray.stop, Subarray.array]
change aslice.toList = _
have : aslice.toList = lslice.toList := by
simp [ListSlice.toList_eq, lslice, aslice]
simp only [Std.Slice.toList, toList_internalIter]
apply List.ext_getElem
· have : stop - start ≤ array.size - start := by omega
simp [Subarray.start, Subarray.stop, Std.PRange.Nat.size_rco, *]
· intros
simp [Subarray.array, Subarray.start, Subarray.stop, Std.Rco.getElem_toList_eq, succMany?]
simp [this, ListSlice.toList_eq, lslice]
@[simp]
public theorem Subarray.toArray_toList {xs : Subarray α} :
xs.toList.toArray = xs.toArray := by
simp [Std.Slice.toList, Subarray.toArray, Array.ofSubarray, Std.Slice.toArray]
@[simp]
public theorem Subarray.toList_toArray {xs : Subarray α} :
xs.toArray.toList = xs.toList := by
simp [Std.Slice.toList, Subarray.toArray, Array.ofSubarray, Std.Slice.toArray]
@[simp]
public theorem Subarray.length_toList {xs : Subarray α} :
xs.toList.length = xs.size := by
simp [Subarray.toList_eq, Subarray.size]
have : xs.start ≤ xs.stop := xs.internalRepresentation.start_le_stop
have : xs.stop ≤ xs.array.size := xs.internalRepresentation.stop_le_array_size
omega
@[simp]
public theorem Subarray.size_toArray {xs : Subarray α} :
xs.toArray.size = xs.size := by
rw [← Subarray.toArray_toList, List.size_toArray, length_toList]
namespace Array
@[simp]
public theorem array_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].array = xs := by
simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].start = min lo (min hi xs.size) := by
simp [Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].stop = min hi xs.size := by
simp [Std.Rco.Sliceable.mkSlice]
public theorem mkSlice_rco_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
xs[lo...hi] = xs[(min lo (min hi xs.size))...(min hi xs.size)] := by
simp [Std.Rco.Sliceable.mkSlice, Array.toSubarray_eq_min]
@[simp]
public theorem toList_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
rw [List.take_eq_take_min, List.drop_eq_drop_min]
simp [Std.Rco.Sliceable.mkSlice, Subarray.toList_eq, List.take_drop,
Nat.add_sub_of_le (Nat.min_le_right _ _)]
@[simp]
public theorem toArray_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.extract lo hi := by
simp only [← Subarray.toArray_toList, toList_mkSlice_rco]
rw [show xs = xs.toList.toArray by simp, List.extract_toArray, List.extract_eq_drop_take]
simp only [List.take_drop, mk.injEq]
by_cases h : lo ≤ hi
· congr 1
rw [List.take_eq_take_iff, Nat.add_sub_cancel' h]
· rw [List.drop_eq_nil_of_le, List.drop_eq_nil_of_le]
· simp; omega
· simp; omega
@[simp]
public theorem size_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...hi].size = min hi xs.size - lo := by
simp [← Subarray.length_toList]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
public theorem mkSlice_rcc_eq_mkSlice_rco_min {xs : Array α} {lo hi : Nat} :
xs[lo...=hi] = xs[(min lo (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem array_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].array = xs := by
simp [Std.Rcc.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].start = min lo (min (hi + 1) xs.size) := by
simp [Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].stop = min (hi + 1) xs.size := by
simp [Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].toList = (xs.toList.take (hi + 1)).drop lo := by
rw [mkSlice_rcc_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].toArray = xs.extract lo (hi + 1) := by
simp
@[simp]
public theorem size_mkSlice_rcc {xs : Array α} {lo hi : Nat} :
xs[lo...=hi].size = min (hi + 1) xs.size - lo := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].array = xs := by
simp [Std.Rci.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].start = min lo xs.size := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
@[simp]
public theorem stop_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].stop = xs.size := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
@[simp]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Array α} {lo : Nat} :
xs[lo...*] = xs[lo...xs.size] := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
public theorem mkSlice_rci_eq_mkSlice_rco_min {xs : Array α} {lo : Nat} :
xs[lo...*] = xs[(min lo xs.size)...xs.size] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].toList = xs.toList.drop lo := by
rw [mkSlice_rci_eq_mkSlice_rco, toList_mkSlice_rco, Array.length_toList, List.take_length]
@[simp]
public theorem toArray_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].toArray = xs.extract lo := by
simp
@[simp]
public theorem size_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo...*].size = xs.size - lo := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].array = xs := by
simp [Std.Roo.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].start = min (lo + 1) (min hi xs.size) := by
simp [Std.Roo.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].stop = min hi xs.size := by
simp [Std.Roo.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Array α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
public theorem mkSlice_roo_eq_mkSlice_roo_min {xs : Array α} {lo hi : Nat} :
xs[lo<...hi] = xs[(min (lo + 1) (min hi xs.size))...(min hi xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].toList = (xs.toList.take hi).drop (lo + 1) := by
rw [mkSlice_roo_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].toArray = xs.extract (lo + 1) hi := by
rw [mkSlice_roo_eq_mkSlice_rco, toArray_mkSlice_rco]
@[simp]
public theorem size_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...hi].size = min hi xs.size - (lo + 1) := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].array = xs := by
simp [Std.Roc.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].start = min (lo + 1) (min (hi + 1) xs.size) := by
simp [Std.Roc.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].stop = min (hi + 1) xs.size := by
simp [Std.Roc.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_roc_eq_mkSlice_roo {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]
public theorem mkSlice_roc_eq_mkSlice_roo_min {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(min (lo + 1) (min (hi + 1) xs.size))...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.toList.take (hi + 1)).drop (lo + 1) := by
rw [mkSlice_roc_eq_mkSlice_roo, toList_mkSlice_roo]
@[simp]
public theorem toArray_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].toArray = xs.extract (lo + 1) (hi + 1) := by
rw [mkSlice_roc_eq_mkSlice_roo, toArray_mkSlice_roo]
@[simp]
public theorem size_mkSlice_roc {xs : Array α} {lo hi : Nat} :
xs[lo<...=hi].size = min (hi + 1) xs.size - (lo + 1) := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].array = xs := by
simp [Std.Roi.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].start = min (lo + 1) xs.size := by
simp [Std.Roi.Sliceable.mkSlice, Std.Roi.HasRcoIntersection.intersection]
@[simp]
public theorem stop_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].stop = xs.size := by
simp [Std.Roi.Sliceable.mkSlice, Std.Roi.HasRcoIntersection.intersection]
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Roi.HasRcoIntersection.intersection,
Std.Rci.Sliceable.mkSlice, Std.Rci.HasRcoIntersection.intersection]
public theorem mkSlice_roi_eq_mkSlice_roo {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[lo<...xs.size] := by
simp [mkSlice_rci_eq_mkSlice_rco]
public theorem mkSlice_roi_eq_mkSlice_roo_min {xs : Array α} {lo : Nat} :
xs[lo<...*] = xs[(min (lo + 1) xs.size)...xs.size] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
rw [mkSlice_roi_eq_mkSlice_rci, toList_mkSlice_rci]
@[simp]
public theorem toArray_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].toArray = xs.drop (lo + 1) := by
rw [mkSlice_roi_eq_mkSlice_rci, toArray_mkSlice_rci]
@[simp]
public theorem size_mkSlice_roi {xs : Array α} {lo : Nat} :
xs[lo<...*].size = xs.size - (lo + 1) := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].array = xs := by
simp [Std.Rio.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].start = 0 := by
simp [Std.Rio.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].stop = min hi xs.size := by
simp [Std.Rio.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Array α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
public theorem mkSlice_rio_eq_mkSlice_rio_min {xs : Array α} {hi : Nat} :
xs[*...hi] = xs[*...(min hi xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].toList = xs.toList.take hi := by
rw [mkSlice_rio_eq_mkSlice_rco, toList_mkSlice_rco, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].toArray = xs.extract 0 hi := by
rw [mkSlice_rio_eq_mkSlice_rco, toArray_mkSlice_rco]
@[simp]
public theorem size_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...hi].size = min hi xs.size := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].array = xs := by
simp [Std.Ric.Sliceable.mkSlice, Array.toSubarray, apply_dite, Subarray.array]
@[simp]
public theorem start_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].start = 0 := by
simp [Std.Ric.Sliceable.mkSlice]
@[simp]
public theorem stop_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].stop = min (hi + 1) xs.size := by
simp [Std.Ric.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rio {xs : Array α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]
public theorem mkSlice_ric_eq_mkSlice_rio_min {xs : Array α} {hi : Nat} :
xs[*...=hi] = xs[*...(min (hi + 1) xs.size)] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].toList = xs.toList.take (hi + 1) := by
rw [mkSlice_ric_eq_mkSlice_rio, toList_mkSlice_rio]
@[simp]
public theorem toArray_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].toArray = xs.extract 0 (hi + 1) := by
rw [mkSlice_ric_eq_mkSlice_rio, toArray_mkSlice_rio]
@[simp]
public theorem size_mkSlice_ric {xs : Array α} {hi : Nat} :
xs[*...=hi].size = min (hi + 1) xs.size := by
simp [← Subarray.length_toList]
@[simp]
public theorem mkSlice_rii_eq_mkSlice_rci {xs : Array α} :
xs[*...*] = xs[0...*] := by
simp [Std.Rii.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice,
Std.Rci.HasRcoIntersection.intersection]
public theorem mkSlice_rii_eq_mkSlice_rio {xs : Array α} :
xs[*...*] = xs[*...xs.size] := by
simp [mkSlice_rci_eq_mkSlice_rco]
public theorem mkSlice_rii_eq_mkSlice_rio_min {xs : Array α} :
xs[*...*] = xs[*...xs.size] := by
simp [mkSlice_rco_eq_mkSlice_rco_min]
@[simp]
public theorem toList_mkSlice_rii {xs : Array α} :
xs[*...*].toList = xs.toList := by
rw [mkSlice_rii_eq_mkSlice_rci, toList_mkSlice_rci, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rii {xs : Array α} :
xs[*...*].toArray = xs := by
simp
@[simp]
public theorem size_mkSlice_rii {xs : Array α} :
xs[*...*].size = xs.size := by
simp [← Subarray.length_toList]
@[simp]
public theorem array_mkSlice_rii {xs : Array α} :
xs[*...*].array = xs := by
simp
@[simp]
public theorem start_mkSlice_rii {xs : Array α} :
xs[*...*].start = 0 := by
simp
@[simp]
public theorem stop_mkSlice_rii {xs : Array α} :
xs[*...*].stop = xs.size := by
simp [Std.Rii.Sliceable.mkSlice]
end Array
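The `@[simp]` lemmas above reduce any array slice query to `extract`/`take`/`drop`. A small sketch of the kind of goal they close automatically:

```lean
-- Sketch: `toArray_mkSlice_roc` rewrites the open-closed slice
-- xs[lo<...=hi] to xs.extract (lo + 1) (hi + 1).
example {xs : Array Nat} : xs[1<...=3].toArray = xs.extract 2 4 := by
  simp
```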
section SubarraySlices
namespace Subarray
@[simp]
public theorem toList_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
simp only [instSliceableSubarrayNat_1, Std.Rco.HasRcoIntersection.intersection,
Std.Rco.Sliceable.mkSlice, toList_eq, Array.start_toSubarray, Array.stop_toSubarray,
Array.toList_extract, List.take_drop, List.take_take]
rw [Nat.add_sub_cancel' (by omega)]
simp [Subarray.size, Array.length_toList, List.take_eq_take_min, Nat.add_comm xs.start]
@[simp]
public theorem toArray_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.toArray.extract lo hi := by
simp [← Subarray.toArray_toList, List.drop_take]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Rcc.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
xs[lo...=hi].toList = (xs.toList.take (hi + 1)).drop lo := by
rw [mkSlice_rcc_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
xs[lo...=hi].toArray = xs.toArray.extract lo (hi + 1) := by
simp
@[simp]
public theorem mkSlice_rci_eq_mkSlice_rco {xs : Subarray α} {lo : Nat} :
xs[lo...*] = xs[lo...xs.size] := by
simp [Std.Rci.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Rci.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_rci {xs : Subarray α} {lo : Nat} :
xs[lo...*].toList = xs.toList.drop lo := by
rw [mkSlice_rci_eq_mkSlice_rco, toList_mkSlice_rco, Subarray.length_toList, List.take_length]
@[simp]
public theorem toArray_mkSlice_rci {xs : Subarray α} {lo : Nat} :
xs[lo...*].toArray = xs.toArray.extract lo := by
simp
@[simp]
public theorem mkSlice_roc_eq_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice,
Std.Roc.HasRcoIntersection.intersection, Std.Roo.HasRcoIntersection.intersection]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : Subarray α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Roo.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
xs[lo<...hi].toList = (xs.toList.take hi).drop (lo + 1) := by
rw [mkSlice_roo_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_roo {xs : Subarray α} {lo hi : Nat} :
xs[lo<...hi].toArray = xs.toArray.extract (lo + 1) hi := by
simp
@[simp]
public theorem mkSlice_roc_eq_mkSlice_rcc {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Roc.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_roc {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.toList.take (hi + 1)).drop (lo + 1) := by
rw [mkSlice_roc_eq_mkSlice_rcc, toList_mkSlice_rcc]
@[simp]
public theorem toArray_mkSlice_roc {xs : Subarray α} {lo hi : Nat} :
xs[lo<...=hi].toArray = xs.toArray.extract (lo + 1) (hi + 1) := by
simp
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : Subarray α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice,
Std.Roi.HasRcoIntersection.intersection, Std.Rci.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_roi {xs : Subarray α} {lo : Nat} :
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
rw [mkSlice_roi_eq_mkSlice_rci, toList_mkSlice_rci]
@[simp]
public theorem toArray_mkSlice_roi {xs : Subarray α} {lo : Nat} :
xs[lo<...*].toArray = xs.toArray.extract (lo + 1) := by
simp
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rio {xs : Subarray α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice,
Std.Ric.HasRcoIntersection.intersection, Std.Rio.HasRcoIntersection.intersection]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : Subarray α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Rio.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_rio {xs : Subarray α} {hi : Nat} :
xs[*...hi].toList = xs.toList.take hi := by
rw [mkSlice_rio_eq_mkSlice_rco, toList_mkSlice_rco, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rio {xs : Subarray α} {hi : Nat} :
xs[*...hi].toArray = xs.toArray.extract 0 hi := by
simp
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rcc {xs : Subarray α} {hi : Nat} :
xs[*...=hi] = xs[0...=hi] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice,
Std.Ric.HasRcoIntersection.intersection, Std.Rco.HasRcoIntersection.intersection]
@[simp]
public theorem toList_mkSlice_ric {xs : Subarray α} {hi : Nat} :
xs[*...=hi].toList = xs.toList.take (hi + 1) := by
rw [mkSlice_ric_eq_mkSlice_rcc, toList_mkSlice_rcc, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_ric {xs : Subarray α} {hi : Nat} :
xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
simp
@[simp]
public theorem mkSlice_rii {xs : Subarray α} :
xs[*...*] = xs := by
simp [Std.Rii.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rii {xs : Subarray α} :
xs[*...*].toList = xs.toList := by
rw [mkSlice_rii]
@[simp]
public theorem toArray_mkSlice_rii {xs : Subarray α} :
xs[*...*].toArray = xs.toArray := by
rw [mkSlice_rii]
end Subarray
end SubarraySlices
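The `Subarray` variants mirror the `Array` ones, so re-slicing a slice also normalizes; a sketch relying on the lemmas in this section:

```lean
-- Sketch: slicing a Subarray reduces to `extract` on its `toArray`.
example {xs : Subarray Nat} {lo hi : Nat} :
    xs[lo<...hi].toArray = xs.toArray.extract (lo + 1) hi := by
  simp
```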

View File

@@ -8,7 +8,9 @@ module
prelude
public import Init.Data.Slice.Operations
import all Init.Data.Slice.Operations
import Init.Data.Iterators.Consumers
import Init.Data.Iterators.Lemmas.Consumers
public import Init.Data.List.Control
public section
@@ -23,11 +25,54 @@ theorem Internal.iter_eq_toIteratorIter {γ : Type u} {s : Slice γ}
Internal.iter s = ToIterator.iter s :=
(rfl)
theorem Internal.size_eq_size_iter {s : Slice γ} [ToIterator s Id β]
[Iterator (ToIterator.State s Id) Id β] [IteratorSize (ToIterator.State s Id) Id] :
s.size = (Internal.iter s).size :=
theorem forIn_internalIter {γ : Type u} {β : Type v}
{m : Type w → Type x} [Monad m] {δ : Type w}
[∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β]
[∀ s : Slice γ, IteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, LawfulIteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, Finite (ToIterator.State s Id) Id] {s : Slice γ}
{init : δ} {f : β → δ → m (ForInStep δ)} :
ForIn.forIn (Internal.iter s) init f = ForIn.forIn s init f :=
(rfl)
@[simp]
public theorem forIn_toList {γ : Type u} {β : Type v}
{m : Type w → Type x} [Monad m] [LawfulMonad m] {δ : Type w}
[∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β]
[∀ s : Slice γ, IteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, LawfulIteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, IteratorCollect (ToIterator.State s Id) Id Id]
[∀ s : Slice γ, LawfulIteratorCollect (ToIterator.State s Id) Id Id]
[∀ s : Slice γ, Finite (ToIterator.State s Id) Id] {s : Slice γ}
{init : δ} {f : β → δ → m (ForInStep δ)} :
ForIn.forIn s.toList init f = ForIn.forIn s init f := by
rw [← forIn_internalIter, ← Iter.forIn_toList, Slice.toList]
@[simp]
public theorem forIn_toArray {γ : Type u} {β : Type v}
{m : Type w → Type x} [Monad m] [LawfulMonad m] {δ : Type w}
[∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β]
[∀ s : Slice γ, IteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, LawfulIteratorLoop (ToIterator.State s Id) Id m]
[∀ s : Slice γ, IteratorCollect (ToIterator.State s Id) Id Id]
[∀ s : Slice γ, LawfulIteratorCollect (ToIterator.State s Id) Id Id]
[∀ s : Slice γ, Finite (ToIterator.State s Id) Id] {s : Slice γ}
{init : δ} {f : β → δ → m (ForInStep δ)} :
ForIn.forIn s.toArray init f = ForIn.forIn s init f := by
rw [← forIn_internalIter, ← Iter.forIn_toArray, Slice.toArray]
theorem Internal.size_eq_count_iter [∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β] {s : Slice γ}
[Finite (ToIterator.State s Id) Id]
[IteratorLoop (ToIterator.State s Id) Id Id] [LawfulIteratorLoop (ToIterator.State s Id) Id Id]
[SliceSize γ] [LawfulSliceSize γ] :
s.size = (Internal.iter s).count := by
letI : IteratorCollect (ToIterator.State s Id) Id Id := .defaultImplementation
simp only [Slice.size, iter, LawfulSliceSize.lawful, Iter.length_toList_eq_count]
theorem Internal.toArray_eq_toArray_iter {s : Slice γ} [ToIterator s Id β]
[Iterator (ToIterator.State s Id) Id β] [IteratorCollect (ToIterator.State s Id) Id Id]
[Finite (ToIterator.State s Id) Id] :
@@ -46,33 +91,33 @@ theorem Internal.toListRev_eq_toListRev_iter {s : Slice γ} [ToIterator s Id β]
(rfl)
@[simp]
theorem size_toArray_eq_size {s : Slice γ} [ToIterator s Id β]
[Iterator (ToIterator.State s Id) Id β] [IteratorSize (ToIterator.State s Id) Id]
theorem size_toArray_eq_size [∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β] {s : Slice γ}
[SliceSize γ] [LawfulSliceSize γ]
[IteratorCollect (ToIterator.State s Id) Id Id] [Finite (ToIterator.State s Id) Id]
[LawfulIteratorSize (ToIterator.State s Id)]
[LawfulIteratorCollect (ToIterator.State s Id) Id Id] :
s.toArray.size = s.size := by
simp [Internal.size_eq_size_iter, Internal.toArray_eq_toArray_iter,
Iter.size_toArray_eq_size]
letI : IteratorLoop (ToIterator.State s Id) Id Id := .defaultImplementation
rw [Internal.size_eq_count_iter, Internal.toArray_eq_toArray_iter, Iter.size_toArray_eq_count]
@[simp]
theorem length_toList_eq_size {s : Slice γ} [ToIterator s Id β]
[Iterator (ToIterator.State s Id) Id β] [IteratorSize (ToIterator.State s Id) Id]
[IteratorCollect (ToIterator.State s Id) Id Id] [Finite (ToIterator.State s Id) Id]
[LawfulIteratorSize (ToIterator.State s Id)]
[LawfulIteratorCollect (ToIterator.State s Id) Id Id] :
theorem length_toList_eq_size [∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β] {s : Slice γ}
[SliceSize γ] [LawfulSliceSize γ] [IteratorCollect (ToIterator.State s Id) Id Id]
[Finite (ToIterator.State s Id) Id] [LawfulIteratorCollect (ToIterator.State s Id) Id Id] :
s.toList.length = s.size := by
simp [Internal.size_eq_size_iter, Internal.toList_eq_toList_iter,
Iter.length_toList_eq_size]
letI : IteratorLoop (ToIterator.State s Id) Id Id := .defaultImplementation
rw [Internal.size_eq_count_iter, Internal.toList_eq_toList_iter, Iter.length_toList_eq_count]
@[simp]
theorem length_toListRev_eq_size {s : Slice γ} [ToIterator s Id β]
[Iterator (ToIterator.State s Id) Id β] [IteratorSize (ToIterator.State s Id) Id]
[IteratorCollect (ToIterator.State s Id) Id Id] [Finite (ToIterator.State s Id) Id]
[LawfulIteratorSize (ToIterator.State s Id)]
[LawfulIteratorCollect (ToIterator.State s Id) Id Id] :
theorem length_toListRev_eq_size [∀ s : Slice γ, ToIterator s Id β]
[∀ s : Slice γ, Iterator (ToIterator.State s Id) Id β] {s : Slice γ}
[IteratorLoop (ToIterator.State s Id) Id Id.{v}] [SliceSize γ] [LawfulSliceSize γ]
[Finite (ToIterator.State s Id) Id]
[LawfulIteratorLoop (ToIterator.State s Id) Id Id] :
s.toListRev.length = s.size := by
simp [Internal.size_eq_size_iter, Internal.toListRev_eq_toListRev_iter,
Iter.length_toListRev_eq_size]
letI : IteratorCollect (ToIterator.State s Id) Id Id := .defaultImplementation
rw [Internal.size_eq_count_iter, Internal.toListRev_eq_toListRev_iter,
Iter.length_toListRev_eq_count]
end Std.Slice


@@ -0,0 +1,11 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Slice.List.Basic
public import Init.Data.Slice.List.Iterator
public import Init.Data.Slice.List.Lemmas


@@ -0,0 +1,159 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Slice.Basic
public import Init.Data.Slice.Notation
public import Init.Data.Range.Polymorphic.Nat
set_option linter.missingDocs true
/-!
This module provides slice notation for list slices (a.k.a. `Sublist`) and implements an iterator
for those slices.
-/
open Std Std.Slice Std.PRange
/--
Internal representation of `ListSlice`, which is an abbreviation for `Slice ListSliceData`.
-/
public class Std.Slice.Internal.ListSliceData (α : Type u) where
/-- The relevant suffix of the underlying list. -/
list : List α
/-- The maximum length of the slice, starting at the beginning of `list`. -/
stop : Option Nat
/--
A region of some underlying list. List slices can be used to avoid copying or allocating space,
while being more convenient than tracking the bounds by hand.
A list slice only stores the suffix of the underlying list, starting from the range's lower bound
so that the cost of operations on the slice does not depend on the start position. However,
the cost of creating a list slice is linear in the start position.
-/
public abbrev ListSlice (α : Type u) := Slice (Internal.ListSliceData α)
variable {α : Type u}
/--
Returns a slice of a list with the given bounds.
If `start` or `stop` are not valid bounds for a sublist, then they are clamped to the list's length.
Additionally, the starting index is clamped to the ending index.
This function is linear in `start` because it stores `as.drop start` in the slice.
-/
public def List.toSlice (as : List α) (start : Nat) (stop : Nat) : ListSlice α :=
if start < stop then
{ list := as.drop start, stop := some (stop - start) }
else
{ list := [], stop := some 0 }
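A sketch of the clamping behaviour just described, assuming `ListSlice.toList` (defined later in this diff) to observe the result:

```lean
-- The stored suffix is `as.drop start`; `stop - start` caps the length.
#eval ([10, 20, 30, 40].toSlice 1 3).toList   -- [20, 30]
#eval ([10, 20, 30].toSlice 2 100).toList     -- `stop` clamped to the length: [30]
#eval ([10, 20, 30].toSlice 2 1).toList       -- `start ≥ stop` yields the empty slice: []
```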
/--
Returns a slice of a list with the given lower bound.
If `start` is not a valid bound for a sublist, then it is clamped to the list's length.
This function is linear in `start` because it stores `as.drop start` in the slice.
-/
public def List.toUnboundedSlice (as : List α) (start : Nat) : ListSlice α :=
{ list := as.drop start, stop := none }
public instance : Rcc.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice range.lower (range.upper + 1)
public instance : Rco.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice range.lower range.upper
public instance : Rci.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toUnboundedSlice range.lower
public instance : Roc.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice (range.lower + 1) (range.upper + 1)
public instance : Roo.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice (range.lower + 1) range.upper
public instance : Roi.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toUnboundedSlice (range.lower + 1)
public instance : Ric.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice 0 (range.upper + 1)
public instance : Rio.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs range :=
xs.toSlice 0 range.upper
public instance : Rii.Sliceable (List α) Nat (ListSlice α) where
mkSlice xs _ :=
xs.toUnboundedSlice 0
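The instances above wire each range shape to `toSlice` or `toUnboundedSlice`. A few hedged examples of the resulting notation:

```lean
#eval [0, 1, 2, 3, 4][1...4].toList    -- half-open range: [1, 2, 3]
#eval [0, 1, 2, 3, 4][1...=3].toList   -- closed upper bound: [1, 2, 3]
#eval [0, 1, 2, 3, 4][2...*].toList    -- unbounded: [2, 3, 4]
```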
public instance : Rcc.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper + 1
| some stop => min stop (range.upper + 1)
xs.internalRepresentation.list[range.lower...stop]
public instance : Rco.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper
| some stop => min stop range.upper
xs.internalRepresentation.list[range.lower...stop]
public instance : Rci.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
match xs.internalRepresentation.stop with
| none => xs.internalRepresentation.list[range.lower...*]
| some stop => xs.internalRepresentation.list[range.lower...stop]
public instance : Roc.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper + 1
| some stop => min stop (range.upper + 1)
xs.internalRepresentation.list[range.lower<...stop]
public instance : Roo.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper
| some stop => min stop range.upper
xs.internalRepresentation.list[range.lower<...stop]
public instance : Roi.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
match xs.internalRepresentation.stop with
| none => xs.internalRepresentation.list[range.lower<...*]
| some stop => xs.internalRepresentation.list[range.lower<...stop]
public instance : Ric.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper + 1
| some stop => min stop (range.upper + 1)
xs.internalRepresentation.list[*...stop]
public instance : Rio.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs range :=
let stop := match xs.internalRepresentation.stop with
| none => range.upper
| some stop => min stop range.upper
xs.internalRepresentation.list[*...stop]
public instance : Rii.Sliceable (ListSlice α) Nat (ListSlice α) where
mkSlice xs _ :=
xs
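As the instances above show, re-slicing a slice intersects the new upper bound with the stored `stop`, so the result never extends past the original slice. A sketch:

```lean
-- Inner stop `10` is intersected with the stored stop `4`:
#eval ([0, 1, 2, 3, 4, 5][1...5])[1...10].toList   -- [2, 3, 4]
```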


@@ -0,0 +1,79 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
public import Init.Data.Slice.List.Basic
public import Init.Data.Iterators.Producers.List
public import Init.Data.Iterators.Combinators.Take
import all Init.Data.Range.Polymorphic.Basic
public import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Slice.Operations
import Init.Omega
public section
/-!
This module implements an iterator for list slices.
-/
open Std Slice PRange Iterators
variable {shape : RangeShape} {α : Type u}
instance {s : ListSlice α} : ToIterator s Id α :=
.of _ (match s.internalRepresentation.stop with
| some n => s.internalRepresentation.list.iter.take n
| none => s.internalRepresentation.list.iter.toTake)
universe v w
@[no_expose] instance {s : ListSlice α} : Iterator (ToIterator.State s Id) Id α := inferInstance
@[no_expose] instance {s : ListSlice α} : Finite (ToIterator.State s Id) Id := inferInstance
@[no_expose] instance {s : ListSlice α} : IteratorCollect (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : ListSlice α} : IteratorCollectPartial (ToIterator.State s Id) Id Id := inferInstance
@[no_expose] instance {s : ListSlice α} {m : Type v → Type w} [Monad m] :
IteratorLoop (ToIterator.State s Id) Id m := inferInstance
@[no_expose] instance {s : ListSlice α} {m : Type v → Type w} [Monad m] :
IteratorLoopPartial (ToIterator.State s Id) Id m := inferInstance
instance : SliceSize (Internal.ListSliceData α) where
size s := (Internal.iter s).count
@[no_expose]
instance {α : Type u} {m : Type v → Type w} :
ForIn m (ListSlice α) α where
forIn xs init f := forIn (Internal.iter xs) init f
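The `ForIn` instance above delegates to the slice's iterator, so list slices work directly in `for` loops. A minimal sketch:

```lean
-- `for` over a `ListSlice` via the `ForIn` instance above.
def sumSlice (s : ListSlice Nat) : Nat := Id.run do
  let mut acc := 0
  for x in s do
    acc := acc + x
  return acc

#eval sumSlice ([1, 2, 3, 4, 5].toSlice 1 4)   -- 2 + 3 + 4 = 9
```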
namespace List
/-- Allocates a new list that contains the contents of the slice. -/
def ofSlice (s : ListSlice α) : List α :=
s.toList
docs_to_verso ofSlice
instance : Append (ListSlice α) where
append x y :=
let a := x.toList ++ y.toList
a.toSlice 0 a.length
/-- `ListSlice` representation. -/
protected def ListSlice.repr [Repr α] (s : ListSlice α) : Std.Format :=
let xs := s.toList
repr xs ++ ".toSlice 0 " ++ toString xs.length
instance [Repr α] : Repr (ListSlice α) where
reprPrec s _ := ListSlice.repr s
instance [ToString α] : ToString (ListSlice α) where
toString s := toString s.toArray
end List
@[inherit_doc List.ofSlice]
def ListSlice.toList (s : ListSlice α) : List α :=
List.ofSlice s


@@ -0,0 +1,406 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Paul Reichert
-/
module
prelude
import all Init.Data.Slice.List.Basic
public import Init.Data.Slice.List.Iterator
import all Init.Data.Slice.List.Iterator
import all Init.Data.Slice.Operations
import all Init.Data.Range.Polymorphic.Iterators
import all Init.Data.Range.Polymorphic.Lemmas
public import Init.Data.Slice.Lemmas
public import Init.Data.Iterators.Lemmas
open Std.Iterators Std.PRange
namespace Std.Slice.List
theorem internalIter_eq {α : Type u} {s : ListSlice α} :
Internal.iter s = match s.internalRepresentation.stop with
| some stop => s.internalRepresentation.list.iter.take stop
| none => s.internalRepresentation.list.iter.toTake := by
simp only [Internal.iter, ToIterator.iter_eq]; rfl
theorem toList_internalIter {α : Type u} {s : ListSlice α} :
(Internal.iter s).toList = match s.internalRepresentation.stop with
| some stop => s.internalRepresentation.list.take stop
| none => s.internalRepresentation.list := by
simp only [internalIter_eq]
split <;> simp
theorem internalIter_eq_toIteratorIter {α : Type u} {s : ListSlice α} :
Internal.iter s = ToIterator.iter s :=
rfl
public instance : LawfulSliceSize (Internal.ListSliceData α) where
lawful s := by
simp [← internalIter_eq_toIteratorIter, SliceSize.size]
end Std.Slice.List
public theorem ListSlice.toList_eq {xs : ListSlice α} :
xs.toList = match xs.internalRepresentation.stop with
| some stop => xs.internalRepresentation.list.take stop
| none => xs.internalRepresentation.list := by
simp only [toList, List.ofSlice, Std.Slice.toList, ToIterator.state_eq]
rw [Std.Slice.List.toList_internalIter]
rfl
public theorem ListSlice.toArray_toList {xs : ListSlice α} :
xs.toList.toArray = xs.toArray := by
simp [ListSlice.toList, Std.Slice.toArray, List.ofSlice, Std.Slice.toList]
public theorem ListSlice.toList_toArray {xs : ListSlice α} :
xs.toArray.toList = xs.toList := by
simp [ListSlice.toList, Std.Slice.toArray, List.ofSlice, Std.Slice.toList]
@[simp]
public theorem ListSlice.length_toList {xs : ListSlice α} :
xs.toList.length = xs.size := by
simp [ListSlice.toList_eq, Std.Slice.size, Std.Slice.SliceSize.size, Iter.length_toList_eq_count]
rw [Std.Slice.List.toList_internalIter]
rfl
@[simp]
public theorem ListSlice.size_toArray {xs : ListSlice α} :
xs.toArray.size = xs.size := by
simp [← ListSlice.toArray_toList]
namespace List
@[simp]
public theorem toList_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.take hi).drop lo := by
rw [List.take_eq_take_min, List.drop_eq_drop_min]
simp only [Std.Rco.Sliceable.mkSlice, toSlice, ListSlice.toList_eq]
by_cases h : lo < hi
· have : lo ≤ hi := by omega
simp [h, List.take_drop, Nat.add_sub_cancel' _, List.take_eq_take_min]
· have : min hi xs.length ≤ lo := by omega
simp [h, Nat.min_eq_right this]
@[simp]
public theorem toArray_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].toArray = ((xs.take hi).drop lo).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...hi].size = min hi xs.length - lo := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rcc {xs : List α} {lo hi : Nat} :
xs[lo...=hi].toList = (xs.take (hi + 1)).drop lo := by
rw [mkSlice_rcc_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_rcc {xs : List α} {lo hi : Nat} :
xs[lo...=hi].toArray = ((xs.take (hi + 1)).drop lo).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_rcc {xs : List α} {lo hi : Nat} :
xs[lo...=hi].size = min (hi + 1) xs.length - lo := by
simp [← ListSlice.length_toList]
@[simp]
public theorem toList_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo...*].toList = xs.drop lo := by
rw [List.drop_eq_drop_min]
simp [ListSlice.toList_eq, Std.Rci.Sliceable.mkSlice, List.toUnboundedSlice]
@[simp]
public theorem toArray_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo...*].toArray = (xs.drop lo).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo...*].size = xs.length - lo := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : List α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roo {xs : List α} {lo hi : Nat} :
xs[lo<...hi].toList = (xs.take hi).drop (lo + 1) := by
rw [mkSlice_roo_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_roo {xs : List α} {lo hi : Nat} :
xs[lo<...hi].toArray = ((xs.take hi).drop (lo + 1)).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_roo {xs : List α} {lo hi : Nat} :
xs[lo<...hi].size = min hi xs.length - (lo + 1) := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_roc_eq_mkSlice_roo {xs : List α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roc {xs : List α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.take (hi + 1)).drop (lo + 1) := by
rw [mkSlice_roc_eq_mkSlice_roo, toList_mkSlice_roo]
@[simp]
public theorem toArray_mkSlice_roc {xs : List α} {lo hi : Nat} :
xs[lo<...=hi].toArray = ((xs.take (hi + 1)).drop (lo + 1)).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_roc {xs : List α} {lo hi : Nat} :
xs[lo<...=hi].size = min (hi + 1) xs.length - (lo + 1) := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : List α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roi {xs : List α} {lo : Nat} :
xs[lo<...*].toList = xs.drop (lo + 1) := by
rw [mkSlice_roi_eq_mkSlice_rci, toList_mkSlice_rci]
@[simp]
public theorem toArray_mkSlice_roi {xs : List α} {lo : Nat} :
xs[lo<...*].toArray = (xs.drop (lo + 1)).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_roi {xs : List α} {lo : Nat} :
xs[lo<...*].size = xs.length - (lo + 1) := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : List α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rio {xs : List α} {hi : Nat} :
xs[*...hi].toList = xs.take hi := by
rw [mkSlice_rio_eq_mkSlice_rco, toList_mkSlice_rco, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rio {xs : List α} {hi : Nat} :
xs[*...hi].toArray = (xs.take hi).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_rio {xs : List α} {hi : Nat} :
xs[*...hi].size = min hi xs.length := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rio {xs : List α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_ric {xs : List α} {hi : Nat} :
xs[*...=hi].toList = xs.take (hi + 1) := by
rw [mkSlice_ric_eq_mkSlice_rio, toList_mkSlice_rio]
@[simp]
public theorem toArray_mkSlice_ric {xs : List α} {hi : Nat} :
xs[*...=hi].toArray = (xs.take (hi + 1)).toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_ric {xs : List α} {hi : Nat} :
xs[*...=hi].size = min (hi + 1) xs.length := by
simp [← ListSlice.length_toList]
@[simp]
public theorem mkSlice_rii_eq_mkSlice_rci {xs : List α} :
xs[*...*] = xs[0...*] := by
simp [Std.Rii.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rii {xs : List α} :
xs[*...*].toList = xs := by
rw [mkSlice_rii_eq_mkSlice_rci, toList_mkSlice_rci, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rii {xs : List α} :
xs[*...*].toArray = xs.toArray := by
simp [← ListSlice.toArray_toList]
@[simp]
public theorem size_mkSlice_rii {xs : List α} :
xs[*...*].size = xs.length := by
simp [← ListSlice.length_toList]
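The `@[simp]` lemmas above rewrite slice operations on lists into `take`/`drop` and `min` facts. A sketch of concrete goals they should discharge (the exact simp set needed may vary):

```lean
example : ([0, 1, 2, 3] : List Nat)[1...3].toList = [1, 2] := by simp
example : ([0, 1, 2, 3] : List Nat)[1...3].size = 2 := by simp
```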
end List
section ListSubslices
namespace ListSlice
@[simp]
public theorem toList_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...hi].toList = (xs.toList.take hi).drop lo := by
simp only [instSliceableListSliceNat_1, List.toList_mkSlice_rco, ListSlice.toList_eq (xs := xs)]
obtain ⟨xs, stop⟩ := xs
cases stop
· simp
· simp [List.take_take, Nat.min_comm]
@[simp]
public theorem toArray_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...hi].toArray = xs.toArray.extract lo hi := by
simp [← toArray_toList, List.drop_take]
@[simp]
public theorem mkSlice_rcc_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo...=hi] = xs[lo...(hi + 1)] := by
simp [Std.Rcc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rcc {xs : ListSlice α} {lo hi : Nat} :
xs[lo...=hi].toList = (xs.toList.take (hi + 1)).drop lo := by
rw [mkSlice_rcc_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_rcc {xs : ListSlice α} {lo hi : Nat} :
xs[lo...=hi].toArray = xs.toArray.extract lo (hi + 1) := by
simp [← ListSlice.toArray_toList, List.drop_take]
@[simp]
public theorem toList_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
xs[lo...*].toList = xs.toList.drop lo := by
simp only [instSliceableListSliceNat_2, ListSlice.toList_eq (xs := xs)]
obtain ⟨xs, stop⟩ := xs
simp only
split <;> simp
@[simp]
public theorem toArray_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
xs[lo...*].toArray = xs.toArray.extract lo := by
simp only [← toArray_toList, toList_mkSlice_rci]
rw (occs := [1]) [← List.take_length (l := List.drop lo xs.toList)]
simp
@[simp]
public theorem mkSlice_roo_eq_mkSlice_rco {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...hi] = xs[(lo + 1)...hi] := by
simp [Std.Roo.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roo {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...hi].toList = (xs.toList.take hi).drop (lo + 1) := by
rw [mkSlice_roo_eq_mkSlice_rco, toList_mkSlice_rco]
@[simp]
public theorem toArray_mkSlice_roo {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...hi].toArray = xs.toArray.extract (lo + 1) hi := by
simp [← toArray_toList, List.drop_take]
@[simp]
public theorem mkSlice_roc_eq_mkSlice_roo {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi] = xs[lo<...(hi + 1)] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Roo.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_roc_eq_mkSlice_rcc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi] = xs[(lo + 1)...=hi] := by
simp [Std.Roc.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi].toList = (xs.toList.take (hi + 1)).drop (lo + 1) := by
rw [mkSlice_roc_eq_mkSlice_rcc, toList_mkSlice_rcc]
@[simp]
public theorem toArray_mkSlice_roc {xs : ListSlice α} {lo hi : Nat} :
xs[lo<...=hi].toArray = xs.toArray.extract (lo + 1) (hi + 1) := by
simp [← toArray_toList, List.drop_take]
@[simp]
public theorem mkSlice_roi_eq_mkSlice_rci {xs : ListSlice α} {lo : Nat} :
xs[lo<...*] = xs[(lo + 1)...*] := by
simp [Std.Roi.Sliceable.mkSlice, Std.Rci.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toList = xs.toList.drop (lo + 1) := by
rw [mkSlice_roi_eq_mkSlice_rci, toList_mkSlice_rci]
@[simp]
public theorem toArray_mkSlice_roi {xs : ListSlice α} {lo : Nat} :
xs[lo<...*].toArray = xs.toArray.extract (lo + 1) := by
simp only [← toArray_toList, toList_mkSlice_roi]
rw (occs := [1]) [← List.take_length (l := List.drop (lo + 1) xs.toList)]
simp
@[simp]
public theorem mkSlice_rio_eq_mkSlice_rco {xs : ListSlice α} {hi : Nat} :
xs[*...hi] = xs[0...hi] := by
simp [Std.Rio.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rio {xs : ListSlice α} {hi : Nat} :
xs[*...hi].toList = xs.toList.take hi := by
rw [mkSlice_rio_eq_mkSlice_rco, toList_mkSlice_rco, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_rio {xs : ListSlice α} {hi : Nat} :
xs[*...hi].toArray = xs.toArray.extract 0 hi := by
simp [← toArray_toList]
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rio {xs : ListSlice α} {hi : Nat} :
xs[*...=hi] = xs[*...(hi + 1)] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rio.Sliceable.mkSlice]
@[simp]
public theorem mkSlice_ric_eq_mkSlice_rcc {xs : ListSlice α} {hi : Nat} :
xs[*...=hi] = xs[0...=hi] := by
simp [Std.Ric.Sliceable.mkSlice, Std.Rco.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
xs[*...=hi].toList = xs.toList.take (hi + 1) := by
rw [mkSlice_ric_eq_mkSlice_rcc, toList_mkSlice_rcc, List.drop_zero]
@[simp]
public theorem toArray_mkSlice_ric {xs : ListSlice α} {hi : Nat} :
xs[*...=hi].toArray = xs.toArray.extract 0 (hi + 1) := by
simp [← toArray_toList]
@[simp]
public theorem mkSlice_rii {xs : ListSlice α} :
xs[*...*] = xs := by
simp [Std.Rii.Sliceable.mkSlice]
@[simp]
public theorem toList_mkSlice_rii {xs : ListSlice α} :
xs[*...*].toList = xs.toList := by
rw [mkSlice_rii]
@[simp]
public theorem toArray_mkSlice_rii {xs : ListSlice α} :
xs[*...*].toArray = xs.toArray := by
rw [mkSlice_rii]
end ListSlice
end ListSubslices

Some files were not shown because too many files have changed in this diff.