Compare commits

...

115 Commits

Author SHA1 Message Date
Leonardo de Moura
98ab206665 feat: helper theorems for grind order
This PR adds helper theorems for justifying proofs constructed by
`grind order`.
2025-09-26 20:57:10 -07:00
Mac Malone
18832eb600 fix: lake: ill-formed build output handling (#10586)
This PR makes Lake no longer error if build outputs found in a trace
file (or in the artifact cache) are ill-formed.

This caused a problem with the CI cache and was just generally too
strict.
2025-09-27 03:35:57 +00:00
Mac Malone
05300f7b51 chore: restoreAllArtifacts = true for core (#10582)
This PR sets `restoreAllArtifacts = true` in the core build Lake
configuration file.
2025-09-27 03:30:21 +00:00
Leonardo de Moura
0bf7741a3e feat: multiple grind propagators per declaration (#10583)
This PR allows users to declare additional `grind` constraint
propagators for declarations that already include propagators in core.
2025-09-27 02:04:03 +00:00
Mac Malone
f80d6e7d38 refactor: lake: libPrefixOnWindows on libName (#10579)
This PR alters `libPrefixOnWindows` behavior to add the `lib` prefix to
the library's `libName` rather than just the file path. This means that
Lake's `-l` will now have the prefix on Windows. While this should not
matter to an MSYS2 build (which accepts both `lib`-prefixed and
unprefixed variants), it should ensure compatibility with MSVC (if that
is ever an issue).
2025-09-27 01:54:23 +00:00
Mac Malone
5b8d4d7210 chore: invalidate Lake CI cache (#10587)
This PR invalidates the CI cache for the Linux Lake build job by bumping
the version of the CI cache key.

The CI cache is broken due to a change in the output format in build
traces. This will be fixed in #10586, but this should prevent further
breakages of PRs in the meantime.
2025-09-27 01:11:23 +00:00
Lean stage0 autoupdater
db8c77a8fa chore: update stage0 2025-09-26 22:39:54 +00:00
Mac Malone
7ee3079afb feat: lake: CMake build types (#10578)
This PR adds support for the CMake spelling of a build type (i.e.,
capitalized) to Lake's `buildType` configuration option.
2025-09-26 21:06:12 +00:00
Mac Malone
c3d9d0d931 feat: lake: restoreAllArtifacts (#10576)
This PR adds a new package configuration option: `restoreAllArtifacts`.
When set to `true` and the Lake local artifact cache is enabled, Lake
will copy all cached artifacts into the build directory. This ensures
they are available for external consumers who expect build results to be
in the build directory.
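A minimal sketch of how this might look in a Lean-based Lake configuration; the package name `demo` is made up, and per the description the option only takes effect when the artifact cache is enabled:

```lean
import Lake
open Lake DSL

-- Hypothetical lakefile.lean; `demo` is a made-up package name.
package demo where
  enableArtifactCache := true  -- the local artifact cache must be enabled
  restoreAllArtifacts := true  -- copy all cached artifacts into the build directory
```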
2025-09-26 20:58:32 +00:00
Mac Malone
e98d7dd603 feat: lake: Reservoir-versioned dependencies (#10551)
This PR enables Reservoir packages to be required as dependencies at a
specific package version (i.e., the `version` specified in the package's
configuration file).
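A sketch of what such a requirement might look like in a Lean configuration file; the scope, package name, and version are invented, and the `@ "1.2.0"` version syntax is an assumption for illustration, not taken from the PR:

```lean
import Lake
open Lake DSL

package demo

-- Hypothetical: require a Reservoir package at a specific package version.
require "someScope" / "somePkg" @ "1.2.0"
```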
2025-09-26 20:52:54 +00:00
Mac Malone
6102f00322 chore: rm src/lake/lakefile.toml (#10580)
This file is essentially just for me and can cause problems with the
language server, so I have removed it from the committed code (and left
an ignored version on my own setup).
2025-09-26 20:51:02 +00:00
Sebastian Ullrich
646f2fabbf fix: allow meta decls in #eval (#10545) 2025-09-26 15:10:33 +00:00
Sebastian Ullrich
f4a0259344 chore: cleanups uncovered by Shake (#10572)
* Wrap proof subterms in `by exact` so dependencies can be demoted to
private `import`s
* Remove trivial instance re-definitions that may cause name collisions
on import changes
* Remove unused `open`s that may fail on import removals
2025-09-26 14:38:30 +00:00
Sebastian Graf
3f816156cc fix: immediately replace main goal in SPred proof mode tactics (#10571)
This PR ensures that `SPred` proof mode tactics such as `mspec`,
`mintro`, etc. immediately replace the main goal when entering the proof
mode. This prevents `No goals to be solved` errors.
2025-09-26 13:41:38 +00:00
Sebastian Ullrich
c92ec361cd chore: CommandElabM.liftCoreM should not reset InfoState.lazyAssignment (#10518)
Fixes #10408
2025-09-26 13:37:40 +00:00
Sebastian Ullrich
49cff79712 fix: privacy checks and import all (#10550)
This PR ensures private declarations are accessible from the private
scope iff they are local or imported through an `import all` chain,
including for anonymous notation and structure instance notation.
2025-09-26 13:26:10 +00:00
Sebastian Ullrich
2677ca8fb4 fix: import-merging theorems under the module system (#10556) 2025-09-26 13:02:51 +00:00
Sebastian Graf
78b09d5dcc feat: support case label like syntax in mvcgen invariants (#10570)
This PR adds support for case-label-like syntax in `mvcgen invariants`
in order to refer to inaccessible names. Example:

```lean
def copy (l : List Nat) : Id (Array Nat) := do
  let mut acc := #[]
  for x in l do
    acc := acc.push x
  return acc

theorem copy_labelled_invariants (l : List Nat) : ⦃⌜True⌝⦄ copy l ⦃⇓ r => ⌜r = l.toArray⌝⦄ := by
  mvcgen [copy] invariants
  | inv1 acc => ⇓ ⟨xs, letMuts⟩ => ⌜acc = l.toArray⌝
  with admit
```
2025-09-26 12:57:49 +00:00
Sebastian Ullrich
a164ae5073 chore: overhaul meta error messages (#10569) 2025-09-26 12:56:46 +00:00
Sebastian Ullrich
2c54386555 fix: Prop instances should be elaborated in the private scope (#10568) 2025-09-26 12:16:09 +00:00
Sebastian Graf
62fd973b28 fix: make getArg!' compute the correct arg index to access (#10567)
This PR fixes argument index calculation in `Lean.Expr.getArg!'`.
2025-09-26 11:54:49 +00:00
Sebastian Graf
71e09ca883 feat: concrete invariant? suggestions based on start and end (#10566)
This PR improves `mvcgen invariants?` to suggest concrete invariants
based on how invariants are used in VCs.
These suggestions are intentionally simplistic and boil down to "this
holds at the start of the loop and this must hold at the end of the
loop":

```lean
def mySum (l : List Nat) : Nat := Id.run do
  let mut acc := 0
  for x in l do
    acc := acc + x
  return acc

/--
info: Try this:
  invariants
    · ⇓⟨xs, letMuts⟩ => ⌜xs.prefix = [] ∧ letMuts = 0 ∨ xs.suffix = [] ∧ letMuts = l.sum⌝
-/
#guard_msgs (info) in
theorem mySum_suggest_invariant (l : List Nat) : mySum l = l.sum := by
  generalize h : mySum l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit
```

It is still the user's job to weaken this invariant so that it
interpolates over all loop iterations, but it *is* a good starting point
for iterating. It is also useful because the user does not need to
remember the exact syntax.
2025-09-26 11:37:14 +00:00
Kim Morrison
e6dd41255b feat: upstream ReduceEval instances from quote4 (#10563)
This PR moves some `ReduceEval` instances about basic types up from the
`quote4` library.
2025-09-26 04:02:55 +00:00
Leonardo de Moura
cfc46ac17f feat: internalization for grind order (#10562)
This PR simplifies the `grind order` module, and internalizes the order
constraints. It removes the `Offset` type class because it introduced
too much complexity. We now cover the same use cases with a simpler
approach:
- Any type that implements at least `Std.IsPreorder`.
- Arbitrary ordered rings.
- `Nat` by the `Nat.ToInt` adapter.
2025-09-26 03:49:06 +00:00
Mac Malone
7c0868d562 refactor: lake: introduce LogConfig (#10468)
This PR refactors the Lake log monads to take a `LogConfig` structure
when run (rather than multiple arguments). This breaking change should
help minimize future breakages due to changes in configuration options.

In addition, the CLI logging monad stack has been polished up and
`LogIO` now supports the `failLv` configuration option.
2025-09-26 02:44:51 +00:00
Mac Malone
28fb4bb1b2 feat: lake cache (& remote cache support) (#10188)
This PR adds support for remote artifact caches (e.g., Reservoir) to
Lake. As part of this support, a new suite of `lake cache` CLI commands
has been introduced to help manage Lake's cache. Also, the existing
local cache support has been overhauled for better interplay with the
new remote support.

**Cache CLI**

Artifacts are uploaded to a remote cache via `lake cache put`. This
command takes a JSON Lines input-to-outputs file which describes the
output artifacts for a build (indexed by its input hash). This file can
be produced by a run of `lake build` with the new `-o` option. Lake will
write the input-to-outputs mappings of the root package artifacts
traversed by the build to the file specified via `-o`. This file can
then be passed to `lake cache put` to upload both it and the built
artifacts from the local cache to the remote cache.

The remote cache service can be customized using the following
environment variables:

* `LAKE_CACHE_KEY`: This is the authorization key for the remote cache.
Lake uploads artifacts via `curl` using the AWS Signature Version 4
protocol, so this should be the S3 `<key>:<secret>` pair expected by
`curl`.

* `LAKE_CACHE_ARTIFACT_ENDPOINT`: This is the base URL to upload (or
download) artifacts to a given remote cache. Artifacts will be stored at
`<endpoint>/<scope>/<content-hash>.art`.

* `LAKE_CACHE_REVISION_ENDPOINT`: This is the base URL to upload (or
download) input-to-output mappings to a given remote cache. Mappings are
indexed by the Git revision of the package, and are stored at
`<endpoint>/<scope>/<rev>.jsonl`.

The `<scope>` is provided through the `--scope` option to `lake cache
put`. This option is used to prevent one package from overwriting the
artifacts/mappings of another. Lake artifact hashes and Git revision
hashes are not cryptographically secure, so it is not safe for a service
to store untrusted files across packages in a single flat store.

Once artifacts are available in a remote cache, the `lake cache get`
command can be used to retrieve them. By default, it will fetch
artifacts for the root package's dependencies from Reservoir using its
API. But, like `cache put`, it can be configured to use a custom
endpoint with the above environment variables and an explicit `--scope`.
When so configured, `cache get` will instead download artifacts for the
root package. Lake only downloads artifacts for a single package in this
case, because it cannot deduce the necessary package scopes without
Reservoir.

**Significant local cache changes**

* Lake now always has a cache directory. If Lake cannot find a good
candidate directory on the system for the cache, it will instead store
the cache at `.lake/cache` within the workspace.

* If the local cache is disabled, Lake will not save built artifacts to
the cache. However, Lake will nonetheless always attempt to look up
build artifacts in the cache. If found, the cached artifact will be
copied to the build location ("restored").

* Input-to-outputs mappings in the local cache are no longer stored in a
single file for a package, but rather in individual files per input (in
the `outputs` subdirectory of the cache).

* Outputs in a trace file, outputs file, or mappings file are now an
`ArtifactDescr`, which is currently composed of both the content hash
and the file extension.

* Trace files now contain a date-based `schemaVersion` to help make
version-to-version migration easier. Hashes in JSON and in artifact
names now use a 16-digit hexadecimal encoding (instead of a variable
decimal encoding).

* `buildArtifactUnlessUpToDate` now returns an `Artifact` instead of a
`FilePath`.

**NOTE:** The Lake local cache is still disabled by default. This means
that built artifacts, by default, will not be placed in the cache
directory, and thus will not be available for `lake cache put` to
upload. Users must first explicitly enable the cache by either setting
the `LAKE_ARTIFACT_CACHE` environment variable to a truthy value or by
setting the `enableArtifactCache` package configuration option to
`true`.
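For reference, a minimal sketch of opting in via the package configuration (the package name is made up):

```lean
import Lake
open Lake DSL

-- Hypothetical lakefile.lean enabling the local artifact cache
-- (equivalently, set the LAKE_ARTIFACT_CACHE environment variable).
package demo where
  enableArtifactCache := true
```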
2025-09-26 01:13:43 +00:00
Robert J. Simmons
2231d9b488 feat: improve error messages for ambiguous 3.toDecimal syntax (#10488)
This PR changes the way that scientific numerals are parsed in order to
give better error messages for (invalid) syntax like `32.succ`.

Example:

```lean4
#check 32.succ
```

Before, the error message is:

```
unexpected identifier; expected command
```

This is because `32.` parses as a complete float, and `#check 32.`
parses as a complete command, so `succ` is being read as the start of a
new command.

With this change, the error message moves from the `succ` token to the
`32` token (which isn't totally ideal from my perspective) but gives a
less misleading error message and a corresponding suggestion:

```
unexpected identifier after decimal point; consider parenthesizing the number
```
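For example, following the suggestion and parenthesizing the numeral makes the member access parse as intended (a small sketch, not taken from the PR's tests):

```lean
-- `32.succ` is ambiguous with a float literal; parenthesizing disambiguates:
#check (32 : Nat).succ  -- a `Nat`, as intended
```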
2025-09-26 01:12:10 +00:00
David Thrane Christiansen
e72bf59385 feat: more metadata for Verso docstrings (#10560)
This PR adds highlighted Lean code to Verso docstrings and fixes smaller
quality-of-life issues.
2025-09-25 23:51:51 +00:00
Mac Malone
343328b7df feat: lake: rename dependencies (#10452)
This PR refactors Lake's package naming procedure to allow packages to
be renamed by the consumer. With this, users can now require a package
using a different name than the one it was defined with.

This support will be used in the future to enable seamlessly including
the same package at multiple different versions within the same
workspace.

In a Lake package configuration file written in Lean, the current
package's assigned name is now accessed through `__name__` instead of
the previous `_package.name`. A deprecation warning has been added to
`_package.name` to assist in migration.
2025-09-25 22:10:39 +00:00
Leonardo de Moura
5b9befcdbf feat: infrastructure for grind order (#10553)
This PR implements infrastructure for the new `grind order` module.
2025-09-25 17:53:43 +00:00
Alex Keizer
188ef680da chore: ensure pass refers to SpecResult.pass in GuardMsgs (#10539)
This PR adds a `.` in front of `pass` in the `#guard_msgs`
implementation.

Previously, the match arm read `| pass => ...`. Presumably, `pass` was
intended to mean `SpecResult.pass`, but this isn't in scope, so `pass`
here is instead a catch-all variable. By adding a dot, we ensure we
actually refer to the constant. Note that this was the last case in the
pattern-match, and since all other constructors were correctly
referenced, the only case that went to the fallback was
`SpecResult.pass`, so the code did the right thing. Still, by fixing
this, we prevent a surprise in the event that a new `SpecResult`
constructor is added.
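A minimal illustration of the pitfall with a toy inductive (not the actual `SpecResult`):

```lean
inductive Toy | pass | fail | skip

def isPass : Toy → Bool
  | .fail => false
  | .skip => false
  | pass  => true  -- `pass` is a fresh variable matching anything, not `Toy.pass`;
                   -- here it is only reached by `.pass`, so the code happens to be
                   -- right, but a new constructor would silently fall through to it
```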
2025-09-25 13:50:46 +00:00
Henrik Böving
5fd8c1b94d feat: new String.Slice API (#10514)
This PR defines the new `String.Slice` API.

Many of the core design principles of the API are taken over from Rust's
[string
library](https://doc.rust-lang.org/stable/std/string/struct.String.html).
2025-09-25 12:18:52 +00:00
Sebastian Ullrich
5ef7b45afa doc: meta modifier (#10554) 2025-09-25 11:45:54 +00:00
Mario Carneiro
9f41f3324a fix: make Substring.beq reflexive (#10552)
This PR ensures that `Substring.beq` is reflexive, and in particular
satisfies the equivalence `ss1 == ss2 <-> ss1.toString = ss2.toString`.

Closes #10511.

Note: I also fixed a strange line in the `String.extract` documentation
which looks like it may have been a copypasta, and added another example
to show how invalid UTF8 positions work, but the doc also makes a point
of saying that it is unspecified so maybe it would be better not to have
the example? 🤷
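As a small sanity check of the stated equivalence (not from the PR's test suite), two substrings with equal textual content but different underlying strings should now compare equal:

```lean
-- `"xabcy".toSubstring.drop 1 |>.take 3` denotes "abc" inside a larger string;
-- per the equivalence above, this should evaluate to `true`.
#eval "abc".toSubstring == ("xabcy".toSubstring.drop 1 |>.take 3)
```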
2025-09-25 05:08:41 +00:00
Henrik Böving
055060990c fix: use _Exit in the language server (#10538)
This PR fixes deadlocking `exit` calls in the language server.

We have previously observed deadlocking calls to `exit` inside the
language server and deemed them irrelevant. However, child processes of
these deadlocked exiting processes can continue to consume a large
amount of CPU as they try to compile a library, etc. Hence, this PR
switches to the MT-safe `_Exit` inside the language server, in order to
ensure the server finishes when it is told to.
2025-09-24 14:44:16 +00:00
Sebastian Graf
4c44f4ef7c chore: add fixed test case for #9363 (#10547) 2025-09-24 14:32:08 +00:00
Markus Himmel
d6cd738ab4 feat: redefine String, part two (#10457)
This PR introduces safe alternatives to `String.Pos` and `Substring`
that can only represent valid positions/slices.

Specifically, the PR

- introduces the predicate `String.Pos.IsValid`;
- proves several nontrivial equivalent conditions for
`String.Pos.IsValid`;
- introduces `String.ValidPos`, which is a `String.Pos` with an
`IsValid` proof;
- introduces `String.Slice`, which is like `Substring` but made from
`String.ValidPos` instead of `Pos`;
- introduces `String.Pos.IsValidForSlice`, which is like
`String.Pos.IsValid` but for slices;
- introduces `String.Slice.Pos`, which is like `String.ValidPos` but for
slices;
- introduces various functions for converting between the two types of
positions.

The API added in this PR is not complete. It will be expanded in future
PRs with additional operations and verification.
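Conceptually, the new types bundle a position with a proof of its validity. A rough toy model of the shape (the predicate here is a deliberately weaker stand-in, not the PR's actual `IsValid`):

```lean
-- Toy stand-in for `String.Pos.IsValid`; the real predicate also requires
-- the position to land on a UTF-8 character boundary.
def ToyIsValid (s : String) (p : String.Pos) : Prop :=
  p.byteIdx ≤ s.utf8ByteSize

-- Toy model of `String.ValidPos`: a position that carries its validity proof.
structure ToyValidPos (s : String) where
  pos : String.Pos
  valid : ToyIsValid s pos
```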
2025-09-24 13:36:55 +00:00
Markus Himmel
68409ef6fd chore: turn some crashes into errors (#8402)
This PR prevents some nonsensical code from crashing the server.

Specifically, the kernel is changed to
- properly check that passed expressions do not contain loose bvars,
which could lead to a segmentation fault on a well-crafted input
(discovered through fuzzing), and
- check that constants generated when creating a new inductive type do
not overwrite each other, which could lead to the kernel taking
something out of the environment and then casting it to something it
isn't.

Partially addresses #8258, but let's keep that one open until the error
message is a little better.

Fixes #10492.
2025-09-24 13:04:18 +00:00
Joachim Breitner
ca1101dddd feat: #print T.rec to show more information (#10543)
This PR lets `#print T.rec` show more information about a recursor, in
particular its reduction rules.
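For example (the exact output depends on the new rendering):

```lean
-- After this change, printing a recursor also shows its reduction rules:
#print Nat.rec
```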
2025-09-24 12:22:00 +00:00
Sebastian Graf
ce7a4f50be chore: add spec lemmas for MonadControl (#10544) 2025-09-24 12:16:06 +00:00
Sebastian Graf
eb9dd9a9e3 chore: add some missing spec lemmas (#10540) 2025-09-24 12:08:12 +00:00
Markus Himmel
b6198434f2 fix: String regressions (#10523)
This PR fixes some regressions introduced by #10304.
2025-09-24 12:01:50 +00:00
Joachim Breitner
1374445081 chore: update bench/riskv-ast.lean (#10505)
This PR disables `trace.profiler` in `bench/riskv-ast.lean`. We don't
want to optimize the trace profiler, but normal code.

While at it, I removed the `#exit` to cover more of the file.

While at it, also import the latest version from upstream.
2025-09-24 11:46:26 +00:00
Joachim Breitner
9df345e322 fix: .congr_simp for non-defs (#10508)
This PR allows `.congr_simp` theorems to be created not just for
definitions, but for any constant. This is important to make the
machinery work across module boundaries.

It also moves the `enableRealizationsForConst` for constructors to a
more sensible place, and enables it for axioms.
2025-09-24 11:45:49 +00:00
Kim Morrison
3b2705d0df feat: helper functions for premise selection API (#10512)
This PR adds some helper functions for the premise selection API, to
assist implementers.

---------

Co-authored-by: Thomas Zhu <thomas.zhu.sh@hotmail.com>
2025-09-24 11:45:40 +00:00
Sebastian Ullrich
44a2b085c4 feat: scripts/Modulize.lean (#10460)
This PR introduces a simple script that adjusts module headers in a
package for use of the module system, without further minimizing import
or annotation use.

---------

Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
2025-09-24 11:40:17 +00:00
Joachim Breitner
7f18c734eb fix: simpHaveTelescope: calculate used fvars transitively (#10536)
This PR fixes `simp` in `-zeta -zetaUnused` mode producing incorrect
proofs when, in a `have` telescope, a variable occurs in the type of the
body only transitively. Fixes #10353.
2025-09-24 11:30:09 +00:00
Sebastian Ullrich
ac6ae51bce chore: minor module system fixes from batteries port (#10496) 2025-09-24 08:59:23 +00:00
Lean stage0 autoupdater
fd4a8c5407 chore: update stage0 2025-09-24 08:23:57 +00:00
Henrik Böving
2e5bbf4596 fix: #guard should work with the module system (#10535)
This PR ensures that `#guard` can be called under the module system
without issues.
2025-09-24 07:38:10 +00:00
David Thrane Christiansen
00b74e02cd feat: docstring role for module names, plus improved suggestions (#10533)
This PR adds a docstring role for module names, called `module`. It also
improves the suggestions provided for code elements, making them more
relevant and proposing `lit`.
2025-09-24 07:32:27 +00:00
Marc Huisinga
90db9ef006 feat: unknown identifier code actions for auto-implicits (#10442)
This PR ensures that unknown identifier code actions are provided on
auto-implicits.

Closes #8837.
2025-09-24 07:28:06 +00:00
Kim Morrison
3ddda9ae4d chore: adjust List.countP grind annotations (#10532) 2025-09-24 07:07:11 +00:00
Kim Morrison
ac0b82933f chore: add variant of Rat.ofScientific_def for grind (#10534) 2025-09-24 06:37:46 +00:00
Kim Morrison
d8219a37ef feat: grind linarith synthesis issues explain changes in behaviour (#10448)
This PR modifies the "issues" diagnostics that `grind` prints.
Previously we would just describe synthesis failures. These messages
were confusing to users, since the linarith module in fact continues to
work, just less capably. For most of the issues, we now explain the
resulting change in behaviour. There is still a TODO to explain the
change when `IsOrderedRing` is not available.
2025-09-24 04:02:35 +00:00
thorimur
7ea7acc687 chore: lower monad of addSuggestion(s) to CoreM (#10526) 2025-09-24 03:35:34 +00:00
Sofia Rodrigues
161a1c06a2 feat: add Std.Notify type (#10368)
This PR adds `Notify`, a structure similar to `CondVar` but intended
for concurrency. The main difference is that `Std.Condvar` depends on a
`Std.Mutex` and blocks the entire thread that the `Task` is using while
waiting, whereas `Std.Sync.Notify` does not. If I try to use `Condvar`
with async and a lot of `Task`s like this:

```lean
def condvar : Async Unit := do
  let condvar ← Std.Condvar.new
  let mutex ← Std.Mutex.new false

  for i in [0:threads] do
    background do
      IO.println s!"start {i + 1}"
      await =<< (show IO (ETask _ _) from IO.asTask (mutex.atomically (condvar.wait mutex)))
      IO.println s!"end {i + 1}"

  IO.sleep 2000
  condvar.notifyAll
```

It causes some weird behavior: some tasks start running and get
notified, while others don't, because `condvar.wait` blocks the entire
`Task`, and right now, as far as I know, it blocks an entire thread and
cannot be paused while doing blocking operations like that.

`Notify` uses `Promise`s, so it's better suited for concurrency. The
`Task` is not blocked while waiting for a notification, which makes it
simpler for use cases that just involve notifying:

```lean
def notify : Async Unit := do
  let notify ← Std.Notify.new

  for i in [0:threads] do
    background do
      IO.println s!"start {i}"
      notify.wait
      IO.println s!"end {i}"

  IO.sleep 2000
  notify.notify
```

This PR depends on: #10366, #10367 and #10370.
2025-09-24 03:35:08 +00:00
Kim Morrison
781e3c6add chore: remove unhelpful grind annotations (#10435)
This PR removes some `grind` annotations for `Array.attach` and related
functions. These lemmas introduce a lambda on the right-hand side, which
`grind` can't do much with. I've added a test file verifying that the
theorems with removed annotations can already be proved by `grind`.
Removing the annotations will help with excessive instantiation.
2025-09-24 03:02:46 +00:00
Leonardo de Moura
b73b8a7edf feat: helper ordered ring theorems (#10529)
This PR adds some helper theorems for the upcoming `grind order` solver.
2025-09-24 03:01:19 +00:00
Sofia Rodrigues
94e5b66dfe feat: add AsyncStream, AsyncWrite and AsyncRead type classes (#10370)
This PR adds async type classes for streams.
2025-09-23 23:30:33 +00:00
Joachim Breitner
8443600762 chore: assert hasLooseBVar before shifting (#10528)
This assumption seems to be violated in #10353, so it seems worth
asserting it here to stumble over it more quickly.
2025-09-23 20:49:04 +00:00
Garmelon
8b64425033 chore: set temci tags for the radar bench script (#10527)
The radar bench scripts at
https://github.com/leanprover/radar-bench-lean4/ split up the benchmarks
between the two runners based on the tags: One runner filters by the tag
`stdlib` while the other filters by the tag `other`. Only benchmarks
using one of these tags will be run, and any benchmark tagged with both
will waste electricity.

As far as I know, the tags are unused otherwise, so I just replaced all
the old tags.
2025-09-23 19:51:10 +00:00
David Thrane Christiansen
d96fd949ff fix: invalid docstring suggestions for attributes (#10522)
This PR fixes invalid docstring suggestions for attributes. The fix also
exposed an issue with `#guard_msgs` in Verso mode where the docstring
would log parse errors as if it contained Verso, even though it actually
worked. This has been fixed, and error messages have been improved as
well.
2025-09-23 16:18:21 +00:00
Sebastian Ullrich
d33aece210 feat: list definitions in defeq problems that could not be unfolded for lack of @[expose] (#10158)
This PR adds information about definitions blocked from unfolding via
the module system to type defeq errors.
2025-09-23 16:13:39 +00:00
Sebastian Graf
9a7bab5f90 chore: add documentation for mvcgen related definitions (#10525) 2025-09-23 15:59:58 +00:00
Kim Morrison
e2f87ed215 chore: lemma for unfolding eraseIdxIfInBounds (#10520) 2025-09-23 13:08:41 +00:00
Tom Levy
e42892cfb6 doc: fix comment about String.fromUTF8 replacing invalid chars (#10240)
Hi, the doc of `String.fromUTF8` previously said invalid characters are
replaced with 'A'. But the parameter `h : validateUTF8 a` guarantees
there are no invalid characters, so that explanation doesn't make sense
to me. This PR deletes that explanation (and fixes some unrelated
typos).

I also have a patch that uses `h` to prove each of the characters is
valid, eliminating the need for a default character
([pr/chore-String-fromUTF8-prove-valid](27f1ff36b2)),
would you be interested in merging that?

<details>
<summary>Notes on invalid characters from unchecked C++</summary>
I don't know if this function may be called from unchecked C++ with
invalid characters. If it may, I'm not sure what would happen with my
patched function... I'm not familiar with Lean's safety model, but it
seems like a bad idea to have a Lean function that takes a proof of a
proposition but is expected to operate in a certain way even if the
proposition is false. I think the safe approach is to have two functions
-- one that takes a proof and is only called from Lean, and another that
doesn't take a proof and replaces invalid chars (for use from C++, not
sure whether it's useful from Lean); I'd prefer to go even further and
report an error instead of silently replacing invalid characters (I'm
not sure if there is any easy way to report errors/panic in Lean code
called from C++).
</details>
2025-09-23 10:19:20 +00:00
Sebastian Ullrich
cc5c070328 fix: inline/specialize may only refer to publicly imported decls for now (#10494)
This PR resolves a potential bad interaction between the compiler and
the module system where references to declarations not imported are
brought into scope by inlining or specializing. We now proactively check
that declarations to be inlined/specialized only reference public
imports. The intention is to later resolve this limitation by moving out
compilation into a separate build step with its own import/incremental
system.
2025-09-23 09:58:14 +00:00
David Thrane Christiansen
f122454ef6 chore: cleanup and better docs for #10479 (#10504)
This PR cleans up a half-reverted refactor and adds documentation to
#10479.
2025-09-23 09:02:07 +00:00
Sebastian Graf
02f482129a fix: Use @[tactic_alt] for bv_decide, mvcgen and similar tactics (#10506)
This PR annotates the shadowing main definitions of `bv_decide`,
`mvcgen` and similar tactics in `Std` with the semantically richer
`tactic_alt` attribute so that `verso` will not warn about overloads.

This fixes leanprover/verso#535.
2025-09-23 07:40:02 +00:00
Kim Morrison
0807f73171 feat: basic premise selection algorithm based on MePo (#7844)
This PR adds a simple implementation of MePo, from "Lightweight
relevance filtering for machine-generated resolution problems" by Meng
and Paulson.

This needs tuning, but is already useful as a baseline or test case.

---------

Co-authored-by: Thomas Zhu <thomas.zhu.sh@hotmail.com>
2025-09-23 06:40:22 +00:00
Lean stage0 autoupdater
27fa5b0bb5 chore: update stage0 2025-09-23 06:22:49 +00:00
Alex Meiburg
8f9966ba74 doc: fix to new name for "Associative" in ac_rfl / ac_nf docstring (#10458)
This PR fixes the docstring for ac_rfl and ac_nf to correctly refer to
`Std.Associative` instead of the old name `Associative`; ditto
`Commutative`.
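For reference, these tactics close goals that hold up to associativity and commutativity, using the `Std.Associative` and `Std.Commutative` instances in scope, e.g.:

```lean
example (a b c : Nat) : a + (b + c) = (c + b) + a := by ac_rfl
```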
2025-09-23 05:52:05 +00:00
Sofia Rodrigues
eabd7309b7 feat: add vectored write and fix rc issue in tcp and udp cancel function (#10487)
This PR adds vectored write and fix rc issues in tcp and udp cancel
functions.
2025-09-22 17:02:57 +00:00
Sebastian Graf
795d13ddce feat: account for tactic_alt in missing docs linter (#10507)
This PR makes the missing docs linter aware of `tactic_alt`.
2025-09-22 16:23:24 +00:00
Kim Morrison
2b23afdfab chore: remove >6 month old deprecations (#10446) 2025-09-22 12:47:11 +00:00
Kim Morrison
20b0bd0a20 chore: upstream rangeOfStx? from Batteries (#10490)
This PR upstreams a helper function that is used in ProofWidgets.

---------

Co-authored-by: Marc Huisinga <mhuisi@protonmail.com>
2025-09-22 12:21:14 +00:00
Kim Morrison
979c2b4af0 chore: add grind annotations for List.not_mem_nil (#10493) 2025-09-22 12:18:03 +00:00
Kim Morrison
b3cd5999e7 chore: normalize empty ByteArrays to .empty (#10501) 2025-09-22 12:06:29 +00:00
Kim Morrison
a4dcb25f69 chore: add limited public API for builtinRpcProcedures (#10499)
This is not a complete public API, just enough to avoid an `open
private` in ProofWidgets.
2025-09-22 11:20:25 +00:00
Henrik Böving
85ce814689 fix: constant folding for UIntX (#10495)
This PR fixes constant folding for UIntX in the code generator. This
optimization was previously simply dead code due to the way that uint
literals are encoded.
2025-09-22 10:06:24 +00:00
Leonardo de Moura
9fc18b8ab4 doc: extra grind docstrings (#10486)
This PR adds and expands `grind` related docstrings.
2025-09-22 03:27:48 +00:00
Leonardo de Moura
852a3db447 chore: improve grind imports (#10491) 2025-09-22 01:25:41 +00:00
Lean stage0 autoupdater
d0d5d4ca39 chore: update stage0 2025-09-21 11:13:12 +00:00
Sebastian Ullrich
b32f3e8930 chore: revert "feat: add vectored write and fix rc issue in tcp and udp cancel functions" (#10485)
Reverts leanprover/lean4#10367 due to Windows build failure
2025-09-21 10:43:46 +00:00
David Thrane Christiansen
34f5fba54d chore: remove bootstrapping workaround (#10484)
This PR removes temporary bootstrapping workarounds introduced in PR
#10479.
2025-09-21 07:36:49 +00:00
Leonardo de Moura
4c9601e60f feat: support for injective functions in grind (#10483)
This PR completes support for injective functions in grind. See
examples:
```lean

/-! Add some injectivity theorems. -/

def double (x : Nat) := 2*x

@[grind inj] theorem double_inj : Function.Injective double := by
  grind [Function.Injective, double]

structure InjFn (α : Type) (β : Type) where
  f : α → β
  h : Function.Injective f

instance : CoeFun (InjFn α β) (fun _ => α → β) where
  coe s := s.f

@[grind inj] theorem fn_inj (F : InjFn α β) : Function.Injective (F : α → β) := by
  grind [Function.Injective, cases InjFn]

def toList (a : α) : List α := [a]

@[grind inj] theorem toList_inj : Function.Injective (toList : α → List α) := by
  grind [Function.Injective, toList]

/-! Examples -/

example (x y : Nat) : toList (double x) = toList (double y) → x = y := by
  grind

example (f : InjFn (List Nat) α) (x y z : Nat)
    : f (toList (double x)) = f (toList y) →
      y = double z →
      x = z := by
  grind
```
2025-09-21 06:31:46 +00:00
Leonardo de Moura
42be7bb5c7 fix: [grind inj] attribute (#10482)
This PR fixes symbol collection for the `@[grind inj]` attribute.
2025-09-21 04:14:17 +00:00
Leonardo de Moura
5f68c1662d refactor: generalize theorem activation in grind (#10481)
This PR generalizes the theorem activation function used in `grind`. 
The goal is to reuse it to implement the injective function module.
2025-09-21 02:50:55 +00:00
Leonardo de Moura
2d14d51935 fix: equality resolution in grind (#10480)
This PR fixes a bug in the equality resolution frontend used in `grind`.
2025-09-21 02:40:38 +00:00
Lean stage0 autoupdater
7cbeb14e46 chore: update stage0 2025-09-20 22:47:35 +00:00
David Thrane Christiansen
cee2886154 feat: improvements to Verso docstrings (#10479)
This PR implements module docstrings in Verso syntax, as well as adding
a number of improvements and fixes to Verso docstrings in general. In
particular, they now have language server support and are parsed at
parse time rather than elaboration time, so the snapshot's syntax tree
includes the parsed documentation.
2025-09-20 22:05:57 +00:00
Leonardo de Moura
35764213fc fix: grind sort internalization (#10477)
This PR ensures sorts are internalized by `grind`.
2025-09-20 18:31:20 +00:00
Sofia Rodrigues
6b92cbdfa4 feat: add vectored write and fix rc issue in tcp and udp cancel functions (#10367)
This PR adds vectored writes for TCP and UDP (which avoids copying the
arrays over and over) and fixes an RC issue in the TCP and UDP cancel
functions caused by the line `lean_dec((lean_object*)udp_socket);` and
by a similar one that decremented the object inside the `socket`.
2025-09-20 17:01:20 +00:00
Leonardo de Moura
72bb7cf364 fix: infer_let in the kernel (#10476)
This PR fixes the dead `let` elimination code in the kernel's
`infer_let` function.

Closes #10475
2025-09-20 16:26:46 +00:00
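A minimal Lean sketch of the dead-`let` shape involved (illustrative only; this is not the reproducer from #10475):

```lean
-- `x` does not occur in the body, so when the kernel infers the type of
-- this `let` expression it may drop the binding entirely
-- ("dead `let` elimination"), which `infer_let` must handle correctly.
example : Nat :=
  let x := 0
  1
```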
Sofia Rodrigues
4881c3042e refactor: replace Task with Async and minor changes to some basic Async functions (#10366)
This PR refactors the Async module to use the `Async` type in all of the
`Async` files.
2025-09-20 16:23:06 +00:00
Leonardo de Moura
ec7add0b48 doc: ! modifier in grind parameters (#10474)
This PR adds a doc string for the `!` parameter modifier in `grind`.
2025-09-20 08:06:05 +00:00
Leonardo de Moura
9b842b7554 fix: message context in grind code actions (#10473)
This PR ensures the code action messages produced by `grind` include the
full context.
2025-09-20 08:02:12 +00:00
Leonardo de Moura
fc718eac88 feat: code action for grind parameters (#10472)
This PR adds a code action for `grind` parameters; it must be enabled
with `set_option grind.param.codeAction true`. The PR also adds a
modifier that instructs `grind` to use the "default" pattern inference
strategy.
2025-09-20 07:30:39 +00:00
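A minimal sketch of enabling the option (the goal below is just an illustrative placeholder):

```lean
-- With the option enabled, `grind` can offer a code action that inserts
-- suggested parameters into the `grind` call.
set_option grind.param.codeAction true in
example (x y : Nat) (h : x = y) : y = x := by
  grind
```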
Michał Dobranowski
8b3c82cce2 fix: lake: GH action template condition (#10459)
This PR fixes a conditional check in a GitHub Action template generated
by Lake.

Closes #10420.
2025-09-20 06:33:42 +00:00
Mac Malone
0d1b7e6c88 chore: lake: fix tests/lean (#10470)
The ordering of the `--setup` JSON object changed at some point,
breaking this test. This PR fixes the test by making it insensitive to
such ordering changes.
2025-09-20 02:11:50 +00:00
Leonardo de Moura
d898c9ed17 fix: grind canonicalizer (#10469)
This PR fixes an incorrect optimization in the `grind` canonicalizer.
See the new test for an example that exposes the problem.
2025-09-20 01:24:54 +00:00
Leonardo de Moura
c6abc3c036 feat: improve grind diagnostics (#10466)
This PR reduces noise in the 'Equivalence classes' section of the
`grind` diagnostics. It now uses a notion of *support expressions*.
Right now, it is hard-coded, but we will probably make it extensible in
the future. The current definition is

- `match`, `ite` and `dite`-applications. They have builtin support in
`grind`.
- Cast-like applications used by `grind`: `toQ`, `toInt`, `Nat.cast`,
`Int.cast`, and `cast`
- `grind` gadget applications (e.g., `Grind.nestedDecidable`)
- Projections of constructors (e.g., `{ x := 1, y := 2}.x`)
- Auxiliary arithmetic terms constructed by solvers such as `cutsat` and
`ring`.

If an equivalence class contains at most one non-support term, it goes
into the “others” bucket. Otherwise, we display the non-support elements
and place the support terms in a child node.

**BEFORE**:
<img width="1397" height="1558" alt="image"
src="https://github.com/user-attachments/assets/4fd4de31-7300-4158-908b-247024381243"
/>

**AFTER**:
<img width="840" height="340" alt="image"
src="https://github.com/user-attachments/assets/05020f34-4ade-49bf-8ccc-9eb0ba53c861"
/>

**Remark**: No information is lost; it is just grouped differently.
2025-09-19 23:44:30 +00:00
Leonardo de Moura
d07862db2a chore: skip cast-like operations in grind mbtc (#10465)
This PR skips cast-like helper `grind` functions during `grind mbtc`.
2025-09-19 21:12:25 +00:00
Leonardo de Moura
8a79ef3633 chore: missing grind normalization (#10463)
This PR adds `Nat.sub_zero` as a `grind` normalization rule.
2025-09-19 18:50:39 +00:00
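An illustrative consequence (a sketch; `grind` could also close this goal through its arithmetic solver):

```lean
-- With `Nat.sub_zero` as a normalization rule, `x - 0` is rewritten to
-- `x` before the solver modules run.
example (x y : Nat) (h : x - 0 = y) : x = y := by
  grind
```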
Leonardo de Moura
b1c82f776b chore: mbtc in grind cutsat (#10462)
Minor improvement to `grind mbtc` in `cutsat`.
2025-09-19 18:47:10 +00:00
Leonardo de Moura
f278f31469 fix: unnecessary case-splits in grind mbtc (#10461)
This PR fixes unnecessary case splits generated by the `grind mbtc`
module. Here, `mbtc` stands for model-based theory combination.
2025-09-19 17:24:57 +00:00
Joachim Breitner
38b4062edb feat: linear-size Ord instance (#10270)
This PR adds an alternative implementation of `Deriving Ord` based on
comparing `.ctorIdx` and using a dedicated matcher for comparing same
constructors (added in #10152). The new option
`deriving.ord.linear_construction_threshold` sets the constructor count
threshold (10 by default) for using the new construction.

It also (unconditionally) changes the implementation for enumeration
types to simply compare the `ctorIdx`.
2025-09-19 14:13:57 +00:00
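A sketch of the enumeration case, which now compares `ctorIdx` directly; the `set_option` usage is an assumption based on the option name given above:

```lean
set_option deriving.ord.linear_construction_threshold 1 in
inductive Color where
  | red | green | blue
  deriving Ord

-- The derived `Ord` orders by constructor index, so `red < green < blue`.
example : compare Color.red Color.blue = .lt := by decide
```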
Sebastian Graf
ae8dc414c3 feat: mvcgen invariants? to scaffold initial invariants (#10456)
This PR implements `mvcgen invariants?` for providing initial invariant
skeletons for the user to flesh out. When the loop body has an early
return, it will helpfully suggest `Invariant.withEarlyReturn ...` as a
skeleton.

```lean
def mySum (l : List Nat) : Nat := Id.run do
  let mut acc := 0
  for x in l do
    acc := acc + x
  return acc

/--
info: Try this:
  invariants
    · ⇓⟨xs, acc⟩ => _
-/
#guard_msgs (info) in
theorem mySum_suggest_invariant (l : List Nat) : mySum l = l.sum := by
  generalize h : mySum l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit

def nodup (l : List Int) : Bool := Id.run do
  let mut seen : HashSet Int := ∅
  for x in l do
    if x ∈ seen then
      return false
    seen := seen.insert x
  return true

/--
info: Try this:
  invariants
    · Invariant.withEarlyReturn (onReturn := fun r acc => _) (onContinue := fun xs acc => _)
-/
#guard_msgs (info) in
theorem nodup_suggest_invariant (l : List Int) : nodup l ↔ l.Nodup := by
  generalize h : nodup l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit
```
2025-09-19 14:05:24 +00:00
Sebastian Ullrich
7822ee4500 fix: check that compiler does not infer inconsistent types between modules (#10418)
This PR fixes a potential miscompilation when using non-exposed type
definitions using the module system by turning it into a static error. A
future revision may lift the restriction by making the compiler metadata
independent of the current module.
2025-09-19 12:36:47 +00:00
Joachim Breitner
8f22c56420 refactor: less public section in Eqns.lean files (#10454) 2025-09-19 11:52:56 +00:00
Joachim Breitner
0e122870be perf: mkNoConfusionCtors: cheaper inferType (#10455)
This PR changes `mkNoConfusionCtors` so that its use of `inferType` does
not have to reduce `noConfusionType`, to make #10315 really effective.
2025-09-19 10:51:17 +00:00
Sebastian Graf
13c23877d4 fix: reduce through lets in mvcgen main loop (#10453)
This PR makes `mvcgen` reduce through `let`s, so that it progresses over
`(have t := 42; fun _ => foo t) 23` by reduction to `have t := 42; foo
t` and then introducing `t`.
2025-09-19 08:21:04 +00:00
Kim Morrison
7fba12f8f7 chore: add test for website primes example (#10451)
(This test is currently failing. We either need to change the failing
line to `grind [!factorial_pos]`, or again change the behaviour of
grind.)
2025-09-19 04:56:28 +00:00
Kim Morrison
abb487a0c0 chore: include lean-lang.org in release checklist (#10450) 2025-09-19 03:46:32 +00:00
1264 changed files with 16628 additions and 5124 deletions


@@ -116,10 +116,10 @@ jobs:
build/stage1/**/*.ir
build/stage1/**/*.c
build/stage1/**/*.c.o*' || '' }}
key: ${{ matrix.name }}-build-v3-${{ github.sha }}
key: ${{ matrix.name }}-build-v4-${{ github.sha }}
# fall back to (latest) previous cache
restore-keys: |
${{ matrix.name }}-build-v3
${{ matrix.name }}-build-v4
# open nix-shell once for initial setup
- name: Setup
run: |


@@ -69,10 +69,10 @@ jobs:
build/stage1/**/*.ir
build/stage1/**/*.c
build/stage1/**/*.c.o*
key: Linux Lake-build-v3-${{ github.sha }}
key: Linux Lake-build-v4-${{ github.sha }}
# fall back to (latest) previous cache
restore-keys: |
Linux Lake-build-v3
Linux Lake-build-v4
- if: env.should_update_stage0 == 'yes'
# sync options with `Linux Lake` to ensure cache reuse
run: |


@@ -8,6 +8,9 @@
},
{
"path": "tests"
},
{
"path": "script"
}
],
"settings": {

91
script/Modulize.lean Normal file

@@ -0,0 +1,91 @@
import Lake.CLI.Main

/-!
A simple script that inserts `module` and `@[expose] public section` into un-modulized files and
bumps their imports to `public`.
-/

open Lean Parser.Module

def main (args : List String) : IO Unit := do
  initSearchPath (← findSysroot)
  let mods := args.toArray.map (·.toName)
  -- Determine default module(s) to run modulize on
  let defaultTargetModules : Array Name ← try
    let (elanInstall?, leanInstall?, lakeInstall?) ← Lake.findInstall?
    let config ← Lake.MonadError.runEIO <| Lake.mkLoadConfig { elanInstall?, leanInstall?, lakeInstall? }
    let some workspace ← Lake.loadWorkspace config |>.toBaseIO
      | throw <| IO.userError "failed to load Lake workspace"
    let defaultTargetModules := workspace.root.defaultTargets.flatMap fun target =>
      if let some lib := workspace.root.findLeanLib? target then
        lib.roots
      else if let some exe := workspace.root.findLeanExe? target then
        #[exe.config.root]
      else
        #[]
    pure defaultTargetModules
  catch _ =>
    pure #[]
  -- the list of root modules
  let mods := if mods.isEmpty then defaultTargetModules else mods
  -- Only submodules of `pkg` will be edited or have info reported on them
  let pkg := mods[0]!.components.head!
  -- Load all the modules
  let imps := mods.map ({ module := · })
  let env ← importModules imps {}
  let srcSearchPath ← getSrcSearchPath
  for mod in env.header.moduleNames do
    if !pkg.isPrefixOf mod then
      continue
    -- Parse the input file
    let some path ← srcSearchPath.findModuleWithExt "lean" mod
      | throw <| .userError s!"error: failed to find source file for {mod}"
    let mut text ← IO.FS.readFile path
    let inputCtx := Parser.mkInputContext text path.toString
    let (header, parserState, msgs) ← Parser.parseHeader inputCtx
    if !msgs.toList.isEmpty then -- skip this file if there are parse errors
      msgs.forM fun msg => msg.toString >>= IO.println
      throw <| .userError "parse errors in file"
    let `(header| $[module%$moduleTk?]? $imps:import*) := header
      | throw <| .userError s!"unexpected header syntax of {path}"
    if moduleTk?.isSome then
      continue
    let looksMeta := mod.components.any (· ∈ [`Tactic, `Linter])
    -- initial whitespace if empty header
    let startPos := header.raw.getPos? |>.getD parserState.pos
    -- insert section if any trailing text
    if header.raw.getTrailingTailPos?.all (· < text.endPos) then
      let insertPos := header.raw.getTailPos? |>.getD startPos -- empty header
      let mut sec := if looksMeta then
        "public meta section"
      else
        "@[expose] public section"
      if !imps.isEmpty then
        sec := "\n\n" ++ sec
      if header.raw.getTailPos?.isNone then
        sec := sec ++ "\n\n"
      text := text.extract 0 insertPos ++ sec ++ text.extract insertPos text.endPos
    -- prepend each import with `public `
    for imp in imps.reverse do
      let insertPos := imp.raw.getPos?.get!
      let prfx := if looksMeta then "public meta " else "public "
      text := text.extract 0 insertPos ++ prfx ++ text.extract insertPos text.endPos
    -- insert `module` header
    let mut initText := text.extract 0 startPos
    if !initText.trim.isEmpty then
      -- If there is a header comment, preserve it and put `module` in the line after
      initText := initText.trimRight ++ "\n"
    text := initText ++ "module\n\n" ++ text.extract startPos text.endPos
    IO.FS.writeFile path text

5
script/lakefile.toml Normal file

@@ -0,0 +1,5 @@
name = "scripts"
[[lean_exe]]
name = "modulize"
root = "Modulize"

1
script/lean-toolchain Normal file

@@ -0,0 +1 @@
lean4


@@ -112,3 +112,11 @@ repositories:
branch: master
dependencies:
- mathlib4
- name: lean-fro.org
url: https://github.com/leanprover/lean-fro.org
toolchain-tag: false
stable-branch: false
branch: master
dependencies:
- verso


@@ -377,6 +377,18 @@ def execute_release_steps(repo, version, config):
except subprocess.CalledProcessError as e:
print(red("Tests failed, but continuing with PR creation..."))
print(red(f"Test error: {e}"))
elif repo_name == "lean-fro.org":
# Update lean-toolchain in examples/hero
print(blue("Updating examples/hero/lean-toolchain..."))
docs_toolchain = repo_path / "examples" / "hero" / "lean-toolchain"
with open(docs_toolchain, "w") as f:
f.write(f"leanprover/lean4:{version}\n")
print(green(f"Updated examples/hero/lean-toolchain to leanprover/lean4:{version}"))
print(blue("Running `lake update`..."))
run_command("lake update", cwd=repo_path, stream_output=True)
print(blue("Running `lake update` in examples/hero..."))
run_command("lake update", cwd=repo_path / "examples" / "hero", stream_output=True)
elif repo_name == "cslib":
print(blue("Updating lakefile.toml..."))
run_command(f'perl -pi -e \'s/"v4\\.[0-9]+(\\.[0-9]+)?(-rc[0-9]+)?"/"' + version + '"/g\' lakefile.*', cwd=repo_path)


@@ -95,16 +95,16 @@ well-founded recursion mechanism to prove that the function terminates.
intro a m h₁ h₂
congr
@[simp, grind =] theorem pmap_empty {P : α → Prop} (f : ∀ a, P a → β) : pmap f #[] (by simp) = #[] := rfl
@[simp] theorem pmap_empty {P : α → Prop} (f : ∀ a, P a → β) : pmap f #[] (by simp) = #[] := rfl
@[simp, grind =] theorem pmap_push {P : α → Prop} (f : ∀ a, P a → β) (a : α) (xs : Array α) (h : ∀ b ∈ xs.push a, P b) :
@[simp] theorem pmap_push {P : α → Prop} (f : ∀ a, P a → β) (a : α) (xs : Array α) (h : ∀ b ∈ xs.push a, P b) :
    pmap f (xs.push a) h =
      (pmap f xs (fun a m => by simp at h; exact h a (.inl m))).push (f a (h a (by simp))) := by
  simp [pmap]
@[simp, grind =] theorem attach_empty : (#[] : Array α).attach = #[] := rfl
@[simp] theorem attach_empty : (#[] : Array α).attach = #[] := rfl
@[simp, grind =] theorem attachWith_empty {P : α → Prop} (H : ∀ x ∈ #[], P x) : (#[] : Array α).attachWith P H = #[] := rfl
@[simp] theorem attachWith_empty {P : α → Prop} (H : ∀ x ∈ #[], P x) : (#[] : Array α).attachWith P H = #[] := rfl
@[simp] theorem _root_.List.attachWith_mem_toArray {l : List α} :
    l.attachWith (fun x => x ∈ l.toArray) (fun x h => by simpa using h) =
@@ -125,13 +125,11 @@ theorem pmap_congr_left {p q : α → Prop} {f : ∀ a, p a → β} {g : ∀ a,
simp only [List.pmap_toArray, mk.injEq]
rw [List.pmap_congr_left _ h]
@[grind =]
theorem map_pmap {p : α → Prop} {g : β → γ} {f : ∀ a, p a → β} {xs : Array α} (H) :
    map g (pmap f xs H) = pmap (fun a h => g (f a h)) xs H := by
  cases xs
  simp [List.map_pmap]
@[grind =]
theorem pmap_map {p : β → Prop} {g : ∀ b, p b → γ} {f : α → β} {xs : Array α} (H) :
    pmap g (map f xs) H = pmap (fun a h => g (f a) h) xs fun _ h => H _ (mem_map_of_mem h) := by
  cases xs
@@ -147,14 +145,14 @@ theorem attachWith_congr {xs ys : Array α} (w : xs = ys) {P : α → Prop} {H :
subst w
simp
@[simp, grind =] theorem attach_push {a : α} {xs : Array α} :
@[simp] theorem attach_push {a : α} {xs : Array α} :
    (xs.push a).attach =
      (xs.attach.map (fun ⟨x, h⟩ => ⟨x, mem_push_of_mem a h⟩)).push ⟨a, by simp⟩ := by
  cases xs
  rw [attach_congr (List.push_toArray _ _)]
  simp [Function.comp_def]
@[simp, grind =] theorem attachWith_push {a : α} {xs : Array α} {P : α → Prop} {H : ∀ x ∈ xs.push a, P x} :
@[simp] theorem attachWith_push {a : α} {xs : Array α} {P : α → Prop} {H : ∀ x ∈ xs.push a, P x} :
    (xs.push a).attachWith P H =
      (xs.attachWith P (fun x h => by simp at H; exact H x (.inl h))).push ⟨a, H a (by simp)⟩ := by
  cases xs
@@ -176,9 +174,6 @@ theorem attach_map_val (xs : Array α) (f : α → β) :
cases xs
simp
@[deprecated attach_map_val (since := "2025-02-17")]
abbrev attach_map_coe := @attach_map_val
-- The argument `xs : Array α` is explicit to allow rewriting from right to left.
theorem attach_map_subtype_val (xs : Array α) : xs.attach.map Subtype.val = xs := by
cases xs; simp
@@ -187,9 +182,6 @@ theorem attachWith_map_val {p : α → Prop} {f : α → β} {xs : Array α} (H
((xs.attachWith p H).map fun (i : { i // p i}) => f i) = xs.map f := by
cases xs; simp
@[deprecated attachWith_map_val (since := "2025-02-17")]
abbrev attachWith_map_coe := @attachWith_map_val
theorem attachWith_map_subtype_val {p : α → Prop} {xs : Array α} (H : ∀ a ∈ xs, p a) :
(xs.attachWith p H).map Subtype.val = xs := by
cases xs; simp
@@ -294,25 +286,23 @@ theorem getElem_attach {xs : Array α} {i : Nat} (h : i < xs.attach.size) :
    xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
  getElem_attachWith h
@[simp, grind =] theorem pmap_attach {xs : Array α} {p : {x // x ∈ xs} → Prop} {f : ∀ a, p a → β} (H) :
@[simp] theorem pmap_attach {xs : Array α} {p : {x // x ∈ xs} → Prop} {f : ∀ a, p a → β} (H) :
    pmap f xs.attach H =
      xs.pmap (P := fun a => ∃ h : a ∈ xs, p ⟨a, h⟩)
        (fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
  ext <;> simp
@[simp, grind =] theorem pmap_attachWith {xs : Array α} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
@[simp] theorem pmap_attachWith {xs : Array α} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
    pmap f (xs.attachWith q H₁) H₂ =
      xs.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
        (fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
  ext <;> simp
@[grind =]
theorem foldl_pmap {xs : Array α} {P : α → Prop} {f : (a : α) → P a → β}
    (H : ∀ (a : α), a ∈ xs → P a) (g : γ → β → γ) (x : γ) :
    (xs.pmap f H).foldl g x = xs.attach.foldl (fun acc a => g acc (f a.1 (H _ a.2))) x := by
  rw [pmap_eq_map_attach, foldl_map]
@[grind =]
theorem foldr_pmap {xs : Array α} {P : α → Prop} {f : (a : α) → P a → β}
    (H : ∀ (a : α), a ∈ xs → P a) (g : β → γ → γ) (x : γ) :
    (xs.pmap f H).foldr g x = xs.attach.foldr (fun a acc => g (f a.1 (H _ a.2)) acc) x := by
@@ -370,20 +360,18 @@ theorem foldr_attach {xs : Array α} {f : α → β → β} {b : β} :
ext
simpa using fun a => List.mem_of_getElem? a
@[grind =]
theorem attach_map {xs : Array α} {f : α → β} :
    (xs.map f).attach = xs.attach.map (fun ⟨x, h⟩ => ⟨f x, mem_map_of_mem h⟩) := by
  cases xs
  ext <;> simp
@[grind =]
theorem attachWith_map {xs : Array α} {f : α → β} {P : β → Prop} (H : ∀ (b : β), b ∈ xs.map f → P b) :
    (xs.map f).attachWith P H = (xs.attachWith (P ∘ f) (fun _ h => H _ (mem_map_of_mem h))).map
      fun ⟨x, h⟩ => ⟨f x, h⟩ := by
  cases xs
  simp [List.attachWith_map]
@[simp, grind =] theorem map_attachWith {xs : Array α} {P : α → Prop} {H : ∀ (a : α), a ∈ xs → P a}
@[simp] theorem map_attachWith {xs : Array α} {P : α → Prop} {H : ∀ (a : α), a ∈ xs → P a}
    {f : { x // P x } → β} :
    (xs.attachWith P H).map f = xs.attach.map fun ⟨x, h⟩ => f ⟨x, H _ h⟩ := by
  cases xs <;> simp_all
@@ -401,9 +389,6 @@ theorem map_attach_eq_pmap {xs : Array α} {f : { x // x ∈ xs } → β} :
cases xs
ext <;> simp
@[deprecated map_attach_eq_pmap (since := "2025-02-09")]
abbrev map_attach := @map_attach_eq_pmap
@[grind =]
theorem attach_filterMap {xs : Array α} {f : α → Option β} :
    (xs.filterMap f).attach = xs.attach.filterMap
@@ -439,7 +424,6 @@ theorem filter_attachWith {q : α → Prop} {xs : Array α} {p : {x // q x} →
cases xs
simp [Function.comp_def, List.filter_map]
@[grind =]
theorem pmap_pmap {p : α → Prop} {q : β → Prop} {g : ∀ a, p a → β} {f : ∀ b, q b → γ} {xs} (H₁ H₂) :
    pmap f (pmap g xs H₁) H₂ =
      pmap (α := { x // x ∈ xs }) (fun a h => f (g a h) (H₂ (g a h) (mem_pmap_of_mem a.2))) xs.attach
@@ -447,7 +431,7 @@ theorem pmap_pmap {p : α → Prop} {q : β → Prop} {g : ∀ a, p a → β} {f
cases xs
simp [List.pmap_pmap, List.pmap_map]
@[simp, grind =] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {xs ys : Array ι}
@[simp] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {xs ys : Array ι}
    (h : ∀ a ∈ xs ++ ys, p a) :
    (xs ++ ys).pmap f h =
      (xs.pmap f fun a ha => h a (mem_append_left ys ha)) ++
@@ -462,7 +446,7 @@ theorem pmap_append' {p : α → Prop} {f : ∀ a : α, p a → β} {xs ys : Arr
xs.pmap f h₁ ++ ys.pmap f h₂ :=
pmap_append _
@[simp, grind =] theorem attach_append {xs ys : Array α} :
@[simp] theorem attach_append {xs ys : Array α} :
    (xs ++ ys).attach = xs.attach.map (fun ⟨x, h⟩ => ⟨x, mem_append_left ys h⟩) ++
      ys.attach.map fun ⟨x, h⟩ => ⟨x, mem_append_right xs h⟩ := by
  cases xs
@@ -470,62 +454,59 @@ theorem pmap_append' {p : α → Prop} {f : ∀ a : α, p a → β} {xs ys : Arr
rw [attach_congr (List.append_toArray _ _)]
simp [List.attach_append, Function.comp_def]
@[simp, grind =] theorem attachWith_append {P : α → Prop} {xs ys : Array α}
@[simp] theorem attachWith_append {P : α → Prop} {xs ys : Array α}
    {H : ∀ (a : α), a ∈ xs ++ ys → P a} :
    (xs ++ ys).attachWith P H = xs.attachWith P (fun a h => H a (mem_append_left ys h)) ++
      ys.attachWith P (fun a h => H a (mem_append_right xs h)) := by
  simp [attachWith]
@[simp, grind =] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
@[simp] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
    (H : ∀ (a : α), a ∈ xs.reverse → P a) :
    xs.reverse.pmap f H = (xs.pmap f (fun a h => H a (by simpa using h))).reverse := by
  induction xs <;> simp_all
@[grind =]
theorem reverse_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
    (H : ∀ (a : α), a ∈ xs → P a) :
    (xs.pmap f H).reverse = xs.reverse.pmap f (fun a h => H a (by simpa using h)) := by
  rw [pmap_reverse]
@[simp, grind =] theorem attachWith_reverse {P : α → Prop} {xs : Array α}
@[simp] theorem attachWith_reverse {P : α → Prop} {xs : Array α}
    {H : ∀ (a : α), a ∈ xs.reverse → P a} :
    xs.reverse.attachWith P H =
      (xs.attachWith P (fun a h => H a (by simpa using h))).reverse := by
  cases xs
  simp
@[grind =]
theorem reverse_attachWith {P : α → Prop} {xs : Array α}
    {H : ∀ (a : α), a ∈ xs → P a} :
    (xs.attachWith P H).reverse = (xs.reverse.attachWith P (fun a h => H a (by simpa using h))) := by
  cases xs
  simp
@[simp, grind =] theorem attach_reverse {xs : Array α} :
@[simp] theorem attach_reverse {xs : Array α} :
    xs.reverse.attach = xs.attach.reverse.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
  cases xs
  rw [attach_congr List.reverse_toArray]
  simp
@[grind =]
theorem reverse_attach {xs : Array α} :
    xs.attach.reverse = xs.reverse.attach.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
  cases xs
  simp
@[simp, grind =] theorem back?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
@[simp] theorem back?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Array α}
    (H : ∀ (a : α), a ∈ xs → P a) :
    (xs.pmap f H).back? = xs.attach.back?.map fun ⟨a, m⟩ => f a (H a m) := by
  cases xs
  simp
@[simp, grind =] theorem back?_attachWith {P : α → Prop} {xs : Array α}
@[simp] theorem back?_attachWith {P : α → Prop} {xs : Array α}
    {H : ∀ (a : α), a ∈ xs → P a} :
    (xs.attachWith P H).back? = xs.back?.pbind (fun a h => some ⟨a, H _ (mem_of_back? h)⟩) := by
  cases xs
  simp
@[simp, grind =]
@[simp]
theorem back?_attach {xs : Array α} :
    xs.attach.back? = xs.back?.pbind fun a h => some ⟨a, mem_of_back? h⟩ := by
  cases xs


@@ -129,20 +129,11 @@ end Array
namespace List
@[deprecated Array.toArray_toList (since := "2025-02-17")]
abbrev toArray_toList := @Array.toArray_toList
-- This does not need to be a simp lemma, as already after the `whnfR` the right hand side is `as`.
theorem toList_toArray {as : List α} : as.toArray.toList = as := rfl
@[deprecated toList_toArray (since := "2025-02-17")]
abbrev _root_.Array.toList_toArray := @List.toList_toArray
@[simp, grind =] theorem size_toArray {as : List α} : as.toArray.size = as.length := by simp [Array.size]
@[deprecated size_toArray (since := "2025-02-17")]
abbrev _root_.Array.size_toArray := @List.size_toArray
@[simp, grind =] theorem getElem_toArray {xs : List α} {i : Nat} (h : i < xs.toArray.size) :
xs.toArray[i] = xs[i]'(by simpa using h) := rfl
@@ -412,10 +403,6 @@ that requires a proof the array is non-empty.
def back? (xs : Array α) : Option α :=
xs[xs.size - 1]?
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), expose]
def get? (xs : Array α) (i : Nat) : Option α :=
if h : i < xs.size then some xs[i] else none
/--
Swaps a new element with the element at the given index.
@@ -1812,7 +1799,6 @@ Examples:
* `#["apple", "pear", "orange"].eraseIdxIfInBounds 3 = #["apple", "pear", "orange"]`
* `#["apple", "pear", "orange"].eraseIdxIfInBounds 5 = #["apple", "pear", "orange"]`
-/
@[grind]
def eraseIdxIfInBounds (xs : Array α) (i : Nat) : Array α :=
if h : i < xs.size then xs.eraseIdx i h else xs


@@ -24,29 +24,6 @@ set_option linter.indexVariables true -- Enforce naming conventions for index va
namespace Array
/--
Use the indexing notation `a[i]` instead.
Access an element from an array without needing a runtime bounds checks,
using a `Nat` index and a proof that it is in bounds.
This function does not use `get_elem_tactic` to automatically find the proof that
the index is in bounds. This is because the tactic itself needs to look up values in
arrays.
-/
@[deprecated "Use indexing notation `as[i]` instead" (since := "2025-02-17")]
def get {α : Type u} (xs : @& Array α) (i : @& Nat) (h : LT.lt i xs.size) : α :=
  xs.toList.get ⟨i, h⟩
/--
Use the indexing notation `a[i]!` instead.
Access an element from an array, or panic if the index is out of bounds.
-/
@[deprecated "Use indexing notation `as[i]!` instead" (since := "2025-02-17"), expose]
def get! {α : Type u} [Inhabited α] (xs : @& Array α) (i : @& Nat) : α :=
Array.getD xs i default
theorem foldlM_toList.aux [Monad m]
    {f : β → α → m β} {xs : Array α} {i j} (H : xs.size ≤ i + j) {b} :
foldlM.loop f xs xs.size (Nat.le_refl _) i j b = (xs.toList.drop j).foldlM f b := by
@@ -108,9 +85,6 @@ abbrev push_toList := @toList_push
@[simp, grind =] theorem toList_pop {xs : Array α} : xs.pop.toList = xs.toList.dropLast := rfl
@[deprecated toList_pop (since := "2025-02-17")]
abbrev pop_toList := @Array.toList_pop
@[simp] theorem append_eq_append {xs ys : Array α} : xs.append ys = xs ++ ys := rfl
@[simp, grind =] theorem toList_append {xs ys : Array α} :


@@ -63,7 +63,7 @@ theorem size_eq_countP_add_countP {xs : Array α} : xs.size = countP p xs + coun
  rcases xs with ⟨xs⟩
simp [List.length_eq_countP_add_countP (p := p)]
@[grind _=_]
@[grind =]
theorem countP_eq_size_filter {xs : Array α} : countP p xs = (filter p xs).size := by
  rcases xs with ⟨xs⟩
simp [List.countP_eq_length_filter]


@@ -324,6 +324,13 @@ abbrev erase_mkArray_ne := @erase_replicate_ne
end erase
/-! ### eraseIdxIfInBounds -/
@[grind =]
theorem eraseIdxIfInBounds_eq {xs : Array α} {i : Nat} :
xs.eraseIdxIfInBounds i = if h : i < xs.size then xs.eraseIdx i else xs := by
simp [eraseIdxIfInBounds]
/-! ### eraseIdx -/
theorem eraseIdx_eq_eraseIdxIfInBounds {xs : Array α} {i : Nat} (h : i < xs.size) :


@@ -278,9 +278,6 @@ theorem find?_flatten_eq_none_iff {xss : Array (Array α)} {p : α → Bool} :
    xss.flatten.find? p = none ↔ ∀ ys ∈ xss, ∀ x ∈ ys, !p x := by
simp
@[deprecated find?_flatten_eq_none_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_none := @find?_flatten_eq_none_iff
/--
If `find? p` returns `some a` from `xs.flatten`, then `p a` holds, and
some array in `xs` contains `a`, and no earlier element of that array satisfies `p`.
@@ -306,9 +303,6 @@ theorem find?_flatten_eq_some_iff {xss : Array (Array α)} {p : α → Bool} {a
zs.toList, bs.toList.map Array.toList, by simpa using h,
by simpa using h₁, by simpa using h₂
@[deprecated find?_flatten_eq_some_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_some := @find?_flatten_eq_some_iff
@[simp, grind =] theorem find?_flatMap {xs : Array α} {f : α Array β} {p : β Bool} :
(xs.flatMap f).find? p = xs.findSome? (fun x => (f x).find? p) := by
cases xs
@@ -318,17 +312,11 @@ theorem find?_flatMap_eq_none_iff {xs : Array α} {f : α → Array β} {p : β
    (xs.flatMap f).find? p = none ↔ ∀ x ∈ xs, ∀ y ∈ f x, !p y := by
simp
@[deprecated find?_flatMap_eq_none_iff (since := "2025-02-03")]
abbrev find?_flatMap_eq_none := @find?_flatMap_eq_none_iff
@[grind =]
theorem find?_replicate :
    find? p (replicate n a) = if n = 0 then none else if p a then some a else none := by
  simp [← List.toArray_replicate, List.find?_replicate]
@[deprecated find?_replicate (since := "2025-03-18")]
abbrev find?_mkArray := @find?_replicate
@[simp] theorem find?_replicate_of_size_pos (h : 0 < n) :
find? p (replicate n a) = if p a then some a else none := by
simp [find?_replicate, Nat.ne_of_gt h]
@@ -346,34 +334,19 @@ abbrev find?_mkArray_of_pos := @find?_replicate_of_pos
@[simp] theorem find?_replicate_of_neg (h : ¬ p a) : find? p (replicate n a) = none := by
simp [find?_replicate, h]
@[deprecated find?_replicate_of_neg (since := "2025-03-18")]
abbrev find?_mkArray_of_neg := @find?_replicate_of_neg
-- This isn't a `@[simp]` lemma since there is already a lemma for `l.find? p = none` for any `l`.
theorem find?_replicate_eq_none_iff {n : Nat} {a : α} {p : α → Bool} :
    (replicate n a).find? p = none ↔ n = 0 ∨ !p a := by
  simp [← List.toArray_replicate, Classical.or_iff_not_imp_left]
@[deprecated find?_replicate_eq_none_iff (since := "2025-03-18")]
abbrev find?_mkArray_eq_none_iff := @find?_replicate_eq_none_iff
@[simp] theorem find?_replicate_eq_some_iff {n : Nat} {a b : α} {p : α → Bool} :
    (replicate n a).find? p = some b ↔ n ≠ 0 ∧ p a ∧ a = b := by
  simp [← List.toArray_replicate]
@[deprecated find?_replicate_eq_some_iff (since := "2025-03-18")]
abbrev find?_mkArray_eq_some_iff := @find?_replicate_eq_some_iff
@[deprecated find?_replicate_eq_some_iff (since := "2025-02-03")]
abbrev find?_mkArray_eq_some := @find?_replicate_eq_some_iff
@[simp] theorem get_find?_replicate {n : Nat} {a : α} {p : α → Bool} (h) :
    ((replicate n a).find? p).get h = a := by
  simp [← List.toArray_replicate]
@[deprecated get_find?_replicate (since := "2025-03-18")]
abbrev get_find?_mkArray := @get_find?_replicate
@[grind =]
theorem find?_pmap {P : α Prop} {f : (a : α) P a β} {xs : Array α}
(H : (a : α), a xs P a) {p : β Bool} :


@@ -80,9 +80,6 @@ theorem ne_empty_of_size_pos (h : 0 < xs.size) : xs ≠ #[] := by
@[simp] theorem size_eq_zero_iff : xs.size = 0 ↔ xs = #[] :=
  ⟨eq_empty_of_size_eq_zero, fun h => h ▸ rfl⟩
@[deprecated size_eq_zero_iff (since := "2025-02-24")]
abbrev size_eq_zero := @size_eq_zero_iff
theorem eq_empty_iff_size_eq_zero : xs = #[] ↔ xs.size = 0 :=
size_eq_zero_iff.symm
@@ -107,17 +104,10 @@ theorem exists_mem_of_size_eq_add_one {xs : Array α} (h : xs.size = n + 1) :
theorem size_pos_iff {xs : Array α} : 0 < xs.size ↔ xs ≠ #[] :=
Nat.pos_iff_ne_zero.trans (not_congr size_eq_zero_iff)
@[deprecated size_pos_iff (since := "2025-02-24")]
abbrev size_pos := @size_pos_iff
theorem size_eq_one_iff {xs : Array α} : xs.size = 1 ↔ ∃ a, xs = #[a] := by
cases xs
simpa using List.length_eq_one_iff
@[deprecated size_eq_one_iff (since := "2025-02-24")]
abbrev size_eq_one := @size_eq_one_iff
/-! ## L[i] and L[i]? -/
theorem getElem?_eq_none_iff {xs : Array α} : xs[i]? = none ↔ xs.size ≤ i := by
@@ -371,6 +361,7 @@ abbrev getElem?_mkArray := @getElem?_replicate
/-! ### mem -/
@[grind ]
theorem not_mem_empty (a : α) : ¬ a ∈ #[] := by simp
@[simp, grind =] theorem mem_push {xs : Array α} {x y : α} : x ∈ xs.push y ↔ x ∈ xs ∨ x = y := by
@@ -542,18 +533,12 @@ theorem isEmpty_eq_false_iff_exists_mem {xs : Array α} :
@[simp] theorem isEmpty_iff {xs : Array α} : xs.isEmpty ↔ xs = #[] := by
cases xs <;> simp
@[deprecated isEmpty_iff (since := "2025-02-17")]
abbrev isEmpty_eq_true := @isEmpty_iff
@[grind ]
theorem empty_of_isEmpty {xs : Array α} (h : xs.isEmpty) : xs = #[] := Array.isEmpty_iff.mp h
@[simp] theorem isEmpty_eq_false_iff {xs : Array α} : xs.isEmpty = false ↔ xs ≠ #[] := by
cases xs <;> simp
@[deprecated isEmpty_eq_false_iff (since := "2025-02-17")]
abbrev isEmpty_eq_false := @isEmpty_eq_false_iff
theorem isEmpty_iff_size_eq_zero {xs : Array α} : xs.isEmpty ↔ xs.size = 0 := by
  rw [isEmpty_iff, size_eq_zero_iff]
@@ -2996,11 +2981,6 @@ theorem _root_.List.toArray_drop {l : List α} {k : Nat} :
(l.drop k).toArray = l.toArray.extract k := by
rw [List.drop_eq_extract, List.extract_toArray, List.size_toArray]
@[deprecated extract_size (since := "2025-02-27")]
theorem take_size {xs : Array α} : xs.take xs.size = xs := by
cases xs
simp
/-! ### shrink -/
@[simp] private theorem size_shrink_loop {xs : Array α} {n : Nat} : (shrink.loop n xs).size = xs.size - n := by
@@ -3586,8 +3566,6 @@ theorem foldr_eq_foldl_reverse {xs : Array α} {f : α → β → β} {b} :
subst w
rw [foldr_eq_foldl_reverse, foldl_push_eq_append rfl, map_reverse]
@[deprecated foldr_push_eq_append (since := "2025-02-09")] abbrev foldr_flip_push_eq_append := @foldr_push_eq_append
theorem foldl_assoc {op : α → α → α} [ha : Std.Associative op] {xs : Array α} {a₁ a₂} :
    xs.foldl op (op a₁ a₂) = op a₁ (xs.foldl op a₂) := by
  rcases xs with ⟨l⟩
@@ -4712,44 +4690,3 @@ namespace List
simp_all
end List
/-! ### Deprecations -/
namespace Array
set_option linter.deprecated false in
@[deprecated "`get?` is deprecated" (since := "2025-02-12"), simp]
theorem get?_eq_getElem? (xs : Array α) (i : Nat) : xs.get? i = xs[i]? := rfl
@[deprecated getD_eq_getD_getElem? (since := "2025-02-12")] abbrev getD_eq_get? := @getD_eq_getD_getElem?
set_option linter.deprecated false in
@[deprecated getElem!_eq_getD (since := "2025-02-12")]
theorem get!_eq_getD [Inhabited α] (xs : Array α) : xs.get! n = xs.getD n default := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead of `a.get! i`." (since := "2025-02-12")]
theorem get!_eq_getD_getElem? [Inhabited α] (xs : Array α) (i : Nat) :
xs.get! i = xs[i]?.getD default := by
by_cases p : i < xs.size <;>
simp [get!, getD_eq_getD_getElem?, p]
set_option linter.deprecated false in
@[deprecated get!_eq_getD_getElem? (since := "2025-02-12")] abbrev get!_eq_getElem? := @get!_eq_getD_getElem?
set_option linter.deprecated false in
@[deprecated "`Array.get?` is deprecated, use `a[i]?` instead." (since := "2025-02-12")]
theorem get?_eq_get?_toList (xs : Array α) (i : Nat) : xs.get? i = xs.toList.get? i := by
  simp [← getElem?_toList]
set_option linter.deprecated false in
@[deprecated get!_eq_getD_getElem? (since := "2025-02-12")] abbrev get!_eq_get? := @get!_eq_getD_getElem?
/-! ### set -/
@[deprecated getElem?_set_self (since := "2025-02-27")] abbrev get?_set_eq := @getElem?_set_self
@[deprecated getElem?_set_ne (since := "2025-02-27")] abbrev get?_set_ne := @getElem?_set_ne
@[deprecated getElem?_set (since := "2025-02-27")] abbrev get?_set := @getElem?_set
@[deprecated get_set (since := "2025-02-27")] abbrev get_set := @getElem_set
@[deprecated get_set_ne (since := "2025-02-27")] abbrev get_set_ne := @getElem_set_ne
end Array


@@ -9,8 +9,8 @@ prelude
public import Init.Core
import Init.Data.Array.Basic
import Init.Data.Nat.Lemmas
import Init.Data.Range.Polymorphic.Iterators
import Init.Data.Range.Polymorphic.Nat
public import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Range.Polymorphic.Nat
import Init.Data.Iterators.Consumers
public section


@@ -7,6 +7,7 @@ module
prelude
public import Init.GetElem
public import Init.Data.Array.Basic
import Init.Data.Array.GetLit
public import Init.Data.Slice.Basic


@@ -29,10 +29,6 @@ set_option linter.missingDocs true
namespace BitVec
@[inline, deprecated BitVec.ofNatLT (since := "2025-02-13"), inherit_doc BitVec.ofNatLT]
protected def ofNatLt {n : Nat} (i : Nat) (p : i < 2 ^ n) : BitVec n :=
BitVec.ofNatLT i p
section Nat
/--


@@ -49,11 +49,11 @@ instance instDecidableForallBitVecSucc (P : BitVec (n+1) → Prop) [DecidablePre
instance instDecidableExistsBitVecZero (P : BitVec 0 → Prop) [Decidable (P 0#0)] :
    Decidable (∃ v, P v) :=
  decidable_of_iff (¬ ∀ v, ¬ P v) Classical.not_forall_not
  decidable_of_iff (¬ ∀ v, ¬ P v) (by exact Classical.not_forall_not)
instance instDecidableExistsBitVecSucc (P : BitVec (n+1) → Prop) [DecidablePred P]
    [Decidable (∀ (x : Bool) (v : BitVec n), ¬ P (v.cons x))] : Decidable (∃ v, P v) :=
  decidable_of_iff (¬ ∀ v, ¬ P v) Classical.not_forall_not
  decidable_of_iff (¬ ∀ v, ¬ P v) (by exact Classical.not_forall_not)
/--
For small numerals this isn't necessary (as typeclass search can use the above two instances),

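These instances recurse on the bitvector width, so `decide` can settle quantified goals over small `BitVec` types by exhaustive enumeration. A minimal sketch of the intended use (the concrete goals below are illustrative, not taken from the patch):

```lean
-- Quantifiers over small `BitVec` widths are decidable via the
-- zero/succ instances above, so `decide` enumerates all cases.
example : ∀ v : BitVec 2, v &&& 0#2 = 0#2 := by decide
example : ∃ v : BitVec 3, v + v = 0#3 := by decide
```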

@@ -74,10 +74,6 @@ theorem some_eq_getElem?_iff {l : BitVec w} : some a = l[n]? ↔ ∃ h : n < w,
theorem getElem_of_getElem? {l : BitVec w} : l[n]? = some a → ∃ h : n < w, l[n] = a :=
  getElem?_eq_some_iff.mp
set_option linter.missingDocs false in
@[deprecated getElem?_eq_some_iff (since := "2025-02-17")]
abbrev getElem?_eq_some := @getElem?_eq_some_iff
theorem getElem?_eq_none_iff {l : BitVec w} : l[n]? = none ↔ w ≤ n := by
  simp
@@ -350,25 +346,14 @@ theorem ofBool_eq_iff_eq : ∀ {b b' : Bool}, BitVec.ofBool b = BitVec.ofBool b'
@[simp] theorem ofBool_xor_ofBool : ofBool b ^^^ ofBool b' = ofBool (b ^^ b') := by
cases b <;> cases b' <;> rfl
@[deprecated toNat_ofNatLT (since := "2025-02-13")]
theorem toNat_ofNatLt (x : Nat) (p : x < 2^w) : (x#'p).toNat = x := rfl
@[simp, grind =] theorem getLsbD_ofNatLT {n : Nat} (x : Nat) (lt : x < 2^n) (i : Nat) :
getLsbD (x#'lt) i = x.testBit i := by
simp [getLsbD, BitVec.ofNatLT]
@[deprecated getLsbD_ofNatLT (since := "2025-02-13")]
theorem getLsbD_ofNatLt {n : Nat} (x : Nat) (lt : x < 2^n) (i : Nat) :
getLsbD (x#'lt) i = x.testBit i := getLsbD_ofNatLT x lt i
@[simp, grind =] theorem getMsbD_ofNatLT {n x i : Nat} (h : x < 2^n) :
getMsbD (x#'h) i = (decide (i < n) && x.testBit (n - 1 - i)) := by
simp [getMsbD, getLsbD]
@[deprecated getMsbD_ofNatLT (since := "2025-02-13")]
theorem getMsbD_ofNatLt {n x i : Nat} (h : x < 2^n) :
getMsbD (x#'h) i = (decide (i < n) && x.testBit (n - 1 - i)) := getMsbD_ofNatLT h
@[grind =]
theorem ofNatLT_eq_ofNat {w : Nat} {n : Nat} (hn) : BitVec.ofNatLT n hn = BitVec.ofNat w n :=
eq_of_toNat_eq (by simp [Nat.mod_eq_of_lt hn])
@@ -6361,15 +6346,4 @@ theorem two_pow_ctz_le_toNat_of_ne_zero {x : BitVec w} (hx : x ≠ 0#w) :
have hclz := getLsbD_true_ctz_of_ne_zero (x := x) hx
exact Nat.ge_two_pow_of_testBit hclz
/-! ### Deprecations -/
set_option linter.missingDocs false
@[deprecated toFin_uShiftRight (since := "2025-02-18")]
abbrev toFin_uShiftRight := @toFin_ushiftRight
end BitVec


@@ -87,6 +87,7 @@ def copySlice (src : @& ByteArray) (srcOff : Nat) (dest : ByteArray) (destOff le
def extract (a : ByteArray) (b e : Nat) : ByteArray :=
a.copySlice b empty 0 (e - b)
@[inline]
protected def fastAppend (a : ByteArray) (b : ByteArray) : ByteArray :=
-- we assume that `append`s may be repeated, so use asymptotic growing; use `copySlice` directly to customize
b.copySlice 0 a a.size b.size false

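Because `fastAppend` goes through `copySlice` with the final argument `false`, the destination buffer may overallocate and grow geometrically, so a loop of appends stays amortized linear rather than quadratic. An illustrative sketch (assuming, as in core, that `++` on `ByteArray` is wired to this append):

```lean
-- Illustrative only: folding `++` over many chunks reallocates
-- geometrically rather than copying the accumulator once per append.
def concatChunks (chunks : List ByteArray) : ByteArray :=
  chunks.foldl (· ++ ·) ByteArray.empty
```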

@@ -11,6 +11,13 @@ public import Init.Data.Array.Extract
public section
-- At present the preferred normal form for empty byte arrays is `ByteArray.empty`
@[simp]
theorem emptyc_eq_empty : (∅ : ByteArray) = ByteArray.empty := rfl
@[simp]
theorem emptyWithCapacity_eq_empty : ByteArray.emptyWithCapacity 0 = ByteArray.empty := rfl
@[simp]
theorem ByteArray.data_empty : ByteArray.empty.data = #[] := rfl
@@ -160,9 +167,9 @@ theorem ByteArray.append_inj_left {xs₁ xs₂ ys₁ ys₂ : ByteArray} (h : xs
simp only [ByteArray.ext_iff, ByteArray.size_data, ByteArray.data_append] at *
exact Array.append_inj_left h hl
theorem ByteArray.extract_append_eq_right {a b : ByteArray} {i : Nat} (hi : i = a.size) :
    (a ++ b).extract i (a ++ b).size = b := by
  subst hi
theorem ByteArray.extract_append_eq_right {a b : ByteArray} {i j : Nat} (hi : i = a.size) (hj : j = a.size + b.size) :
    (a ++ b).extract i j = b := by
  subst hi hj
  ext1
  simp [← size_data]

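The generalization replaces the fixed upper bound `(a ++ b).size` with any `j` equal to `a.size + b.size`, which makes the lemma easier to apply when the bound appears in that arithmetic form. A hedged sketch of a use site (illustrative instantiation, not from the patch):

```lean
-- Both index hypotheses close by `rfl` at the canonical bounds.
example (a b : ByteArray) :
    (a ++ b).extract a.size (a.size + b.size) = b :=
  ByteArray.extract_append_eq_right rfl rfl
```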

@@ -140,7 +140,7 @@ Modulus of bounded numbers, usually invoked via the `%` operator.
The resulting value is that computed by the `%` operator on `Nat`.
-/
protected def mod : Fin n → Fin n → Fin n
  | ⟨a, h⟩, ⟨b, _⟩ => ⟨a % b, Nat.lt_of_le_of_lt (Nat.mod_le _ _) h⟩
  | ⟨a, h⟩, ⟨b, _⟩ => ⟨a % b, by exact Nat.lt_of_le_of_lt (Nat.mod_le _ _) h⟩
/--
Division of bounded numbers, usually invoked via the `/` operator.
@@ -154,7 +154,7 @@ Examples:
* `(5 : Fin 10) / (7 : Fin 10) = (0 : Fin 10)`
-/
protected def div : Fin n → Fin n → Fin n
  | ⟨a, h⟩, ⟨b, _⟩ => ⟨a / b, Nat.lt_of_le_of_lt (Nat.div_le_self _ _) h⟩
  | ⟨a, h⟩, ⟨b, _⟩ => ⟨a / b, by exact Nat.lt_of_le_of_lt (Nat.div_le_self _ _) h⟩
/--
Modulus of bounded numbers with respect to a `Nat`.
@@ -162,7 +162,7 @@ Modulus of bounded numbers with respect to a `Nat`.
The resulting value is that computed by the `%` operator on `Nat`.
-/
def modn : Fin n → Nat → Fin n
  | ⟨a, h⟩, m => ⟨a % m, Nat.lt_of_le_of_lt (Nat.mod_le _ _) h⟩
  | ⟨a, h⟩, m => ⟨a % m, by exact Nat.lt_of_le_of_lt (Nat.mod_le _ _) h⟩
/--
Bitwise and.

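Wrapping each bound proof in `by exact` leaves the computational behavior of `mod`, `div`, and `modn` unchanged; the first example below is the one from the docstring, the second is an illustrative check:

```lean
-- The `by exact` wrappers only package the proofs differently;
-- the underlying `Nat` arithmetic is the same.
example : (5 : Fin 10) / (7 : Fin 10) = (0 : Fin 10) := by decide
example : (9 : Fin 10) % (4 : Fin 10) = (1 : Fin 10) := by decide
```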

@@ -206,9 +206,6 @@ theorem ediv_nonneg_iff_of_pos {a b : Int} (h : 0 < b) : 0 ≤ a / b ↔ 0 ≤ a
| Int.ofNat (b+1), _ =>
rcases a with a <;> simp [Int.ediv, -natCast_ediv]
@[deprecated ediv_nonneg_iff_of_pos (since := "2025-02-28")]
abbrev div_nonneg_iff_of_pos := @ediv_nonneg_iff_of_pos
/-! ### emod -/
theorem emod_nonneg : ∀ (a : Int) {b : Int}, b ≠ 0 → 0 ≤ a % b


@@ -122,8 +122,8 @@ theorem eq_one_of_mul_eq_one_right {a b : Int} (H : 0 ≤ a) (H' : a * b = 1) :
theorem eq_one_of_mul_eq_one_left {a b : Int} (H : 0 ≤ b) (H' : a * b = 1) : b = 1 :=
  eq_one_of_mul_eq_one_right (b := a) H <| by rw [Int.mul_comm, H']
instance decidableDvd : DecidableRel (α := Int) (· ∣ ·) := fun _ _ =>
  decidable_of_decidable_of_iff (dvd_iff_emod_eq_zero ..).symm
instance decidableDvd : DecidableRel (α := Int) (· ∣ ·) := fun a b =>
  decidable_of_decidable_of_iff (p := b % a = 0) (by exact (dvd_iff_emod_eq_zero ..).symm)
protected theorem mul_dvd_mul_iff_left {a b c : Int} (h : a ≠ 0) : (a * b) ∣ (a * c) ↔ b ∣ c :=
  ⟨by rintro ⟨d, h'⟩; exact ⟨d, by rw [Int.mul_assoc] at h'; exact (mul_eq_mul_left_iff h).mp h'⟩,


@@ -88,6 +88,20 @@ theorem finVal {n : Nat} {a : Fin n} {a' : Int}
    (h₁ : Lean.Grind.ToInt.toInt a = a') : NatCast.natCast (a.val) = a' := by
  rw [← h₁, Lean.Grind.ToInt.toInt, Lean.Grind.instToIntFinCoOfNatIntCast]
theorem eq_eq {a b : Nat} {a' b' : Int}
    (h₁ : NatCast.natCast a = a') (h₂ : NatCast.natCast b = b') : (a = b) = (a' = b') := by
  simp [← h₁, ← h₂]; constructor
  next => intro; subst a; rfl
  next => simp [Int.natCast_inj]
theorem lt_eq {a b : Nat} {a' b' : Int}
    (h₁ : NatCast.natCast a = a') (h₂ : NatCast.natCast b = b') : (a < b) = (a' < b') := by
  simp only [← h₁, ← h₂, Int.ofNat_lt]
theorem le_eq {a b : Nat} {a' b' : Int}
    (h₁ : NatCast.natCast a = a') (h₂ : NatCast.natCast b = b') : (a ≤ b) = (a' ≤ b') := by
  simp only [← h₁, ← h₂, Int.ofNat_le]
end Nat.ToInt
namespace Int.Nonneg


@@ -1347,8 +1347,6 @@ theorem neg_of_sign_eq_neg_one : ∀ {a : Int}, sign a = -1 → a < 0
| 0 => Int.mul_zero _
| -[_+1] => Int.mul_neg_one _
@[deprecated mul_sign_self (since := "2025-02-24")] abbrev mul_sign := @mul_sign_self
@[simp] theorem sign_mul_self (i : Int) : sign i * i = natAbs i := by
rw [Int.mul_comm, mul_sign_self]


@@ -50,14 +50,9 @@ protected theorem pow_ne_zero {n : Int} {m : Nat} : n ≠ 0 → n ^ m ≠ 0 := b
instance {n : Int} {m : Nat} [NeZero n] : NeZero (n ^ m) := Int.pow_ne_zero (NeZero.ne _)
@[deprecated Nat.pow_le_pow_left (since := "2025-02-17")]
abbrev pow_le_pow_of_le_left := @Nat.pow_le_pow_left
@[deprecated Nat.pow_le_pow_right (since := "2025-02-17")]
abbrev pow_le_pow_of_le_right := @Nat.pow_le_pow_right
-- This can't be removed until the next update-stage0
@[deprecated Nat.pow_pos (since := "2025-02-17")]
abbrev pos_pow_of_pos := @Nat.pow_pos
abbrev _root_.Nat.pos_pow_of_pos := @Nat.pow_pos
@[simp, norm_cast]
protected theorem natCast_pow (b n : Nat) : ((b^n : Nat) : Int) = (b : Int) ^ n := by


@@ -19,6 +19,7 @@ universe v u v' u'
section ULiftT
/-- `ULiftT.{v, u}` shrinks a monad on `Type max u v` to a monad on `Type u`. -/
@[expose] -- for codegen
def ULiftT (n : Type max u v → Type v') (α : Type u) := n (ULift.{v} α)
/-- Returns the underlying `n`-monadic representation of a `ULiftT n α` value. -/


@@ -95,14 +95,12 @@ theorem pmap_congr_left {p q : α → Prop} {f : ∀ a, p a → β} {g : ∀ a,
| cons x l ih =>
rw [pmap, pmap, h _ mem_cons_self, ih fun a ha => h a (mem_cons_of_mem _ ha)]
@[grind =]
theorem map_pmap {p : α → Prop} {g : β → γ} {f : ∀ a, p a → β} {l : List α} (H) :
    map g (pmap f l H) = pmap (fun a h => g (f a h)) l H := by
  induction l
  · rfl
  · simp only [*, pmap, map]
@[grind =]
theorem pmap_map {p : β → Prop} {g : ∀ b, p b → γ} {f : α → β} {l : List α} (H) :
    pmap g (map f l) H = pmap (fun a h => g (f a) h) l fun _ h => H _ (mem_map_of_mem h) := by
induction l
@@ -149,9 +147,6 @@ theorem attach_map_val {l : List α} {f : α → β} :
    (l.attach.map fun (i : {i // i ∈ l}) => f i) = l.map f := by
  rw [attach, attachWith, map_pmap]; exact pmap_eq_map _
@[deprecated attach_map_val (since := "2025-02-17")]
abbrev attach_map_coe := @attach_map_val
-- The argument `l : List α` is explicit to allow rewriting from right to left.
theorem attach_map_subtype_val (l : List α) : l.attach.map Subtype.val = l :=
attach_map_val.trans (List.map_id _)
@@ -160,9 +155,6 @@ theorem attachWith_map_val {p : α → Prop} {f : α → β} {l : List α} (H :
((l.attachWith p H).map fun (i : { i // p i}) => f i) = l.map f := by
rw [attachWith, map_pmap]; exact pmap_eq_map _
@[deprecated attachWith_map_val (since := "2025-02-17")]
abbrev attachWith_map_coe := @attachWith_map_val
theorem attachWith_map_subtype_val {p : α → Prop} {l : List α} (H : ∀ a ∈ l, p a) :
    (l.attachWith p H).map Subtype.val = l :=
  (attachWith_map_val _).trans (List.map_id _)
@@ -254,13 +246,6 @@ theorem getElem?_pmap {p : α → Prop} {f : ∀ a, p a → β} {l : List α} (h
· simp
· simp only [pmap, getElem?_cons_succ, hl]
set_option linter.deprecated false in
@[deprecated List.getElem?_pmap (since := "2025-02-12")]
theorem get?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) (n : Nat) :
    get? (pmap f l h) n = Option.pmap f (get? l n) fun x H => h x (mem_of_get? H) := by
simp only [get?_eq_getElem?]
simp [getElem?_pmap]
-- The argument `f` is explicit to allow rewriting from right to left.
@[simp, grind =]
theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) {i : Nat}
@@ -277,15 +262,6 @@ theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h
· simp
· simp [hl]
@[deprecated getElem_pmap (since := "2025-02-13")]
theorem get_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) {n : Nat}
    (hn : n < (pmap f l h).length) :
    get (pmap f l h) ⟨n, hn⟩ =
      f (get l ⟨n, @length_pmap _ _ p f l h ▸ hn⟩)
        (h _ (getElem_mem (@length_pmap _ _ p f l h ▸ hn))) := by
simp only [get_eq_getElem]
simp [getElem_pmap]
@[simp, grind =]
theorem getElem?_attachWith {xs : List α} {i : Nat} {P : α → Prop} {H : ∀ a ∈ xs, P a} :
    (xs.attachWith P H)[i]? = xs[i]?.pmap Subtype.mk (fun _ a => H _ (mem_of_getElem? a)) :=
@@ -307,13 +283,13 @@ theorem getElem_attach {xs : List α} {i : Nat} (h : i < xs.attach.length) :
    xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
  getElem_attachWith h
@[simp, grind =] theorem pmap_attach {l : List α} {p : {x // x ∈ l} → Prop} {f : ∀ a, p a → β} (H) :
@[simp] theorem pmap_attach {l : List α} {p : {x // x ∈ l} → Prop} {f : ∀ a, p a → β} (H) :
    pmap f l.attach H =
      l.pmap (P := fun a => ∃ h : a ∈ l, p ⟨a, h⟩)
        (fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
  apply ext_getElem <;> simp
@[simp, grind =] theorem pmap_attachWith {l : List α} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
@[simp] theorem pmap_attachWith {l : List α} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
    pmap f (l.attachWith q H₁) H₂ =
      l.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
        (fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
@@ -371,26 +347,24 @@ theorem getElem_attach {xs : List α} {i : Nat} (h : i < xs.attach.length) :
    xs.attach.tail = xs.tail.attach.map (fun ⟨x, h⟩ => ⟨x, mem_of_mem_tail h⟩) := by
  cases xs <;> simp
@[grind =]
theorem foldl_pmap {l : List α} {P : α → Prop} {f : (a : α) → P a → β}
    (H : ∀ (a : α), a ∈ l → P a) (g : γ → β → γ) (x : γ) :
    (l.pmap f H).foldl g x = l.attach.foldl (fun acc a => g acc (f a.1 (H _ a.2))) x := by
  rw [pmap_eq_map_attach, foldl_map]
@[grind =]
theorem foldr_pmap {l : List α} {P : α → Prop} {f : (a : α) → P a → β}
    (H : ∀ (a : α), a ∈ l → P a) (g : β → γ → γ) (x : γ) :
    (l.pmap f H).foldr g x = l.attach.foldr (fun a acc => g (f a.1 (H _ a.2)) acc) x := by
  rw [pmap_eq_map_attach, foldr_map]
@[simp, grind =] theorem foldl_attachWith
@[simp] theorem foldl_attachWith
    {l : List α} {q : α → Prop} (H : ∀ a, a ∈ l → q a) {f : β → { x // q x } → β} {b} :
    (l.attachWith q H).foldl f b = l.attach.foldl (fun b ⟨a, h⟩ => f b ⟨a, H _ h⟩) b := by
  induction l generalizing b with
  | nil => simp
  | cons a l ih => simp [ih, foldl_map]
@[simp, grind =] theorem foldr_attachWith
@[simp] theorem foldr_attachWith
    {l : List α} {q : α → Prop} (H : ∀ a, a ∈ l → q a) {f : { x // q x } → β → β} {b} :
    (l.attachWith q H).foldr f b = l.attach.foldr (fun a acc => f ⟨a.1, H _ a.2⟩ acc) b := by
  induction l generalizing b with
@@ -429,18 +403,16 @@ theorem foldr_attach {l : List α} {f : α → β → β} {b : β} :
| nil => simp
| cons a l ih => rw [foldr_cons, attach_cons, foldr_cons, foldr_map, ih]
@[grind =]
theorem attach_map {l : List α} {f : α → β} :
    (l.map f).attach = l.attach.map (fun ⟨x, h⟩ => ⟨f x, mem_map_of_mem h⟩) := by
  induction l <;> simp [*]
@[grind =]
theorem attachWith_map {l : List α} {f : α → β} {P : β → Prop} (H : ∀ (b : β), b ∈ l.map f → P b) :
    (l.map f).attachWith P H = (l.attachWith (P ∘ f) (fun _ h => H _ (mem_map_of_mem h))).map
      fun ⟨x, h⟩ => ⟨f x, h⟩ := by
  induction l <;> simp [*]
@[simp, grind =] theorem map_attachWith {l : List α} {P : α → Prop} {H : ∀ (a : α), a ∈ l → P a}
@[simp] theorem map_attachWith {l : List α} {P : α → Prop} {H : ∀ (a : α), a ∈ l → P a}
    {f : { x // P x } → β} :
    (l.attachWith P H).map f = l.attach.map fun ⟨x, h⟩ => f ⟨x, H _ h⟩ := by
  induction l <;> simp_all
@@ -466,10 +438,6 @@ theorem map_attach_eq_pmap {l : List α} {f : { x // x ∈ l } → β} :
apply pmap_congr_left
simp
@[deprecated map_attach_eq_pmap (since := "2025-02-09")]
abbrev map_attach := @map_attach_eq_pmap
@[grind =]
theorem attach_filterMap {l : List α} {f : α → Option β} :
    (l.filterMap f).attach = l.attach.filterMap
      fun ⟨x, h⟩ => (f x).pbind (fun b m => some ⟨b, mem_filterMap.mpr ⟨x, h, m⟩⟩) := by
@@ -500,7 +468,6 @@ theorem attach_filterMap {l : List α} {f : α → Option β} :
ext
simp
@[grind =]
theorem attach_filter {l : List α} (p : α → Bool) :
    (l.filter p).attach = l.attach.filterMap
      fun x => if w : p x.1 then some ⟨x.1, mem_filter.mpr ⟨x.2, w⟩⟩ else none := by
@@ -512,7 +479,7 @@ theorem attach_filter {l : List α} (p : α → Bool) :
-- We are still missing here `attachWith_filterMap` and `attachWith_filter`.
@[simp, grind =]
@[simp]
theorem filterMap_attachWith {q : α → Prop} {l : List α} {f : {x // q x} → Option β} (H) :
    (l.attachWith q H).filterMap f = l.attach.filterMap (fun ⟨x, h⟩ => f ⟨x, H _ h⟩) := by
induction l with
@@ -521,7 +488,7 @@ theorem filterMap_attachWith {q : α → Prop} {l : List α} {f : {x // q x} →
simp only [attachWith_cons, filterMap_cons]
split <;> simp_all [Function.comp_def]
@[simp, grind =]
@[simp]
theorem filter_attachWith {q : α → Prop} {l : List α} {p : {x // q x} → Bool} (H) :
    (l.attachWith q H).filter p =
      (l.attach.filter (fun ⟨x, h⟩ => p ⟨x, H _ h⟩)).map (fun ⟨x, h⟩ => ⟨x, H _ h⟩) := by
@@ -531,14 +498,13 @@ theorem filter_attachWith {q : α → Prop} {l : List α} {p : {x // q x} → Bo
simp only [attachWith_cons, filter_cons]
split <;> simp_all [Function.comp_def, filter_map]
@[grind =]
theorem pmap_pmap {p : α → Prop} {q : β → Prop} {g : ∀ a, p a → β} {f : ∀ b, q b → γ} {l} (H₁ H₂) :
    pmap f (pmap g l H₁) H₂ =
      pmap (α := { x // x ∈ l }) (fun a h => f (g a h) (H₂ (g a h) (mem_pmap_of_mem a.2))) l.attach
        (fun a _ => H₁ a a.2) := by
  simp [pmap_eq_map_attach, attach_map]
@[simp, grind =] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {l₁ l₂ : List ι}
@[simp] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {l₁ l₂ : List ι}
    (h : ∀ a ∈ l₁ ++ l₂, p a) :
    (l₁ ++ l₂).pmap f h =
      (l₁.pmap f fun a ha => h a (mem_append_left l₂ ha)) ++
@@ -555,50 +521,47 @@ theorem pmap_append' {p : α → Prop} {f : ∀ a : α, p a → β} {l₁ l₂ :
l₁.pmap f h₁ ++ l₂.pmap f h₂ :=
pmap_append _
@[simp, grind =] theorem attach_append {xs ys : List α} :
@[simp] theorem attach_append {xs ys : List α} :
    (xs ++ ys).attach = xs.attach.map (fun ⟨x, h⟩ => ⟨x, mem_append_left ys h⟩) ++
      ys.attach.map fun ⟨x, h⟩ => ⟨x, mem_append_right xs h⟩ := by
  simp only [attach, attachWith, map_pmap, pmap_append]
  congr 1 <;>
    exact pmap_congr_left _ fun _ _ _ _ => rfl
@[simp, grind =] theorem attachWith_append {P : α → Prop} {xs ys : List α}
@[simp] theorem attachWith_append {P : α → Prop} {xs ys : List α}
    {H : ∀ (a : α), a ∈ xs ++ ys → P a} :
    (xs ++ ys).attachWith P H = xs.attachWith P (fun a h => H a (mem_append_left ys h)) ++
      ys.attachWith P (fun a h => H a (mem_append_right xs h)) := by
  simp only [attachWith, pmap_append]
@[simp, grind =] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : List α}
@[simp] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : List α}
    (H : ∀ (a : α), a ∈ xs.reverse → P a) :
    xs.reverse.pmap f H = (xs.pmap f (fun a h => H a (by simpa using h))).reverse := by
  induction xs <;> simp_all
@[grind =]
theorem reverse_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : List α}
    (H : ∀ (a : α), a ∈ xs → P a) :
    (xs.pmap f H).reverse = xs.reverse.pmap f (fun a h => H a (by simpa using h)) := by
  rw [pmap_reverse]
@[simp, grind =] theorem attachWith_reverse {P : α → Prop} {xs : List α}
@[simp] theorem attachWith_reverse {P : α → Prop} {xs : List α}
    {H : ∀ (a : α), a ∈ xs.reverse → P a} :
    xs.reverse.attachWith P H =
      (xs.attachWith P (fun a h => H a (by simpa using h))).reverse :=
  pmap_reverse ..
@[grind =]
theorem reverse_attachWith {P : α → Prop} {xs : List α}
    {H : ∀ (a : α), a ∈ xs → P a} :
    (xs.attachWith P H).reverse = (xs.reverse.attachWith P (fun a h => H a (by simpa using h))) :=
  reverse_pmap ..
@[simp, grind =] theorem attach_reverse {xs : List α} :
@[simp] theorem attach_reverse {xs : List α} :
    xs.reverse.attach = xs.attach.reverse.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
  simp only [attach, attachWith, reverse_pmap, map_pmap]
  apply pmap_congr_left
  intros
  rfl
@[grind =]
theorem reverse_attach {xs : List α} :
    xs.attach.reverse = xs.reverse.attach.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
  simp only [attach, attachWith, reverse_pmap, map_pmap]
@@ -652,7 +615,7 @@ theorem countP_attachWith {p : α → Prop} {q : α → Bool} {l : List α} (H :
    (l.attachWith p H).countP (fun a : {x // p x} => q a) = l.countP q := by
  simp only [← Function.comp_apply (g := Subtype.val), countP_map, attachWith_map_subtype_val]
@[simp]
@[simp, grind =]
theorem count_attach [BEq α] {l : List α} {a : {x // x ∈ l}} :
    l.attach.count a = l.count a :=
  Eq.trans (countP_congr fun _ _ => by simp) <| countP_attach


@@ -289,16 +289,6 @@ theorem cons_lex_nil [BEq α] {a} {as : List α} : lex (a :: as) [] lt = false :
@[simp] theorem lex_nil [BEq α] {as : List α} : lex as [] lt = false := by
cases as <;> simp [nil_lex_nil, cons_lex_nil]
@[deprecated nil_lex_nil (since := "2025-02-10")]
theorem lex_nil_nil [BEq α] : lex ([] : List α) [] lt = false := rfl
@[deprecated nil_lex_cons (since := "2025-02-10")]
theorem lex_nil_cons [BEq α] {b} {bs : List α} : lex [] (b :: bs) lt = true := rfl
@[deprecated cons_lex_nil (since := "2025-02-10")]
theorem lex_cons_nil [BEq α] {a} {as : List α} : lex (a :: as) [] lt = false := rfl
@[deprecated cons_lex_cons (since := "2025-02-10")]
theorem lex_cons_cons [BEq α] {a b} {as bs : List α} :
lex (a :: as) (b :: bs) lt = (lt a b || (a == b && lex as bs lt)) := rfl
/-! ## Alternative getters -/
/-! ### getLast -/


@@ -21,65 +21,6 @@ namespace List
/-! ## Alternative getters -/
/-! ### get? -/
/--
Returns the `i`-th element in the list (zero-based).
If the index is out of bounds (`i ≥ as.length`), this function returns `none`.
Also see `get`, `getD` and `get!`.
-/
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), expose]
def get? : (as : List α) → (i : Nat) → Option α
  | a::_, 0 => some a
  | _::as, n+1 => get? as n
  | _, _ => none
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), simp]
theorem get?_nil : @get? α [] n = none := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), simp]
theorem get?_cons_zero : @get? α (a::l) 0 = some a := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), simp]
theorem get?_cons_succ : @get? α (a::l) (n+1) = get? l n := rfl
set_option linter.deprecated false in
@[deprecated "Use `List.ext_getElem?`." (since := "2025-02-12")]
theorem ext_get? : ∀ {l₁ l₂ : List α}, (∀ n, l₁.get? n = l₂.get? n) → l₁ = l₂
  | [], [], _ => rfl
  | _ :: _, [], h => nomatch h 0
  | [], _ :: _, h => nomatch h 0
  | a :: l₁, a' :: l₂, h => by
    have h0 : some a = some a' := h 0
    injection h0 with aa; simp only [aa, ext_get? fun n => h (n+1)]
/-! ### get! -/
/--
Returns the `i`-th element in the list (zero-based).
If the index is out of bounds (`i ≥ as.length`), this function panics when executed, and returns
`default`. See `get?` and `getD` for safer alternatives.
-/
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12"), expose]
def get! [Inhabited α] : (as : List α) → (i : Nat) → α
  | a::_, 0 => a
  | _::as, n+1 => get! as n
  | _, _ => panic! "invalid index"
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12")]
theorem get!_nil [Inhabited α] (n : Nat) : [].get! n = (default : α) := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12")]
theorem get!_cons_succ [Inhabited α] (l : List α) (a : α) (n : Nat) :
(a::l).get! (n+1) = get! l n := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12")]
theorem get!_cons_zero [Inhabited α] (l : List α) (a : α) : (a::l).get! 0 = a := rfl
/-! ### getD -/
/--
@@ -281,17 +222,6 @@ theorem getElem_append_right {as bs : List α} {i : Nat} (h₁ : as.length ≤ i
cases i with simp [Nat.succ_sub_succ] <;> simp at h₁
| succ i => apply ih; simp [h₁]
@[deprecated "Deprecated without replacement." (since := "2025-02-13")]
theorem get_last {as : List α} {i : Fin (length (as ++ [a]))} (h : ¬ i.1 < as.length) : (as ++ [a] : List _).get i = a := by
cases i; rename_i i h'
induction as generalizing i with
| nil => cases i with
| zero => simp [List.get]
| succ => simp +arith at h'
| cons a as ih =>
cases i with simp at h
| succ i => apply ih; simp [h]
theorem sizeOf_lt_of_mem [SizeOf α] {as : List α} (h : a ∈ as) : sizeOf a < sizeOf as := by
induction h with
| head => simp +arith


@@ -13,7 +13,7 @@ public section
/-!
# Lemmas about `List.countP` and `List.count`.
Because we mark `countP_eq_length_filter` and `count_eq_countP` with `@[grind _=_]`,
Because we mark `countP_eq_length_filter` with `@[grind =]`,
we don't need many other `@[grind]` annotations here.
-/
@@ -66,7 +66,7 @@ theorem length_eq_countP_add_countP (p : α → Bool) {l : List α} : length l =
· rfl
· simp [h]
@[grind _=_] -- This is quite aggressive, as it introduces `filter`-based reasoning whenever we see `countP`.
@[grind =] -- This is quite aggressive, as it introduces `filter`-based reasoning whenever we see `countP`.
theorem countP_eq_length_filter {l : List α} : countP p l = (filter p l).length := by
induction l with
| nil => rfl

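`countP_eq_length_filter` is the bridge that lets `simp` and `grind` move between counting and filtering. An illustrative check of both the lemma and a concrete count (these examples are not part of the patch):

```lean
-- Counting via the filter characterization:
example : [1, 2, 3, 4, 5].countP (· % 2 == 1) =
    ([1, 2, 3, 4, 5].filter (· % 2 == 1)).length :=
  List.countP_eq_length_filter
example : [1, 2, 3, 4, 5].countP (· % 2 == 1) = 3 := by decide
```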

@@ -360,9 +360,6 @@ theorem find?_flatten_eq_none_iff {xs : List (List α)} {p : α → Bool} :
    xs.flatten.find? p = none ↔ ∀ ys ∈ xs, ∀ x ∈ ys, !p x := by
  simp
@[deprecated find?_flatten_eq_none_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_none := @find?_flatten_eq_none_iff
/--
If `find? p` returns `some a` from `xs.flatten`, then `p a` holds, and
some list in `xs` contains `a`, and no earlier element of that list satisfies `p`.
@@ -403,9 +400,6 @@ theorem find?_flatten_eq_some_iff {xs : List (List α)} {p : α → Bool} {a :
· exact h₁ l ml a m
· exact h₂ a m
@[deprecated find?_flatten_eq_some_iff (since := "2025-02-03")]
abbrev find?_flatten_eq_some := @find?_flatten_eq_some_iff
@[simp, grind =] theorem find?_flatMap {xs : List α} {f : α → List β} {p : β → Bool} :
    (xs.flatMap f).find? p = xs.findSome? (fun x => (f x).find? p) := by
  simp [flatMap_def, findSome?_map]; rfl
@@ -434,16 +428,10 @@ theorem find?_replicate_eq_none_iff {n : Nat} {a : α} {p : α → Bool} :
    (replicate n a).find? p = none ↔ n = 0 ∨ !p a := by
  simp [Classical.or_iff_not_imp_left]
@[deprecated find?_replicate_eq_none_iff (since := "2025-02-03")]
abbrev find?_replicate_eq_none := @find?_replicate_eq_none_iff
@[simp] theorem find?_replicate_eq_some_iff {n : Nat} {a b : α} {p : α → Bool} :
    (replicate n a).find? p = some b ↔ n ≠ 0 ∧ p a ∧ a = b := by
  cases n <;> simp
@[deprecated find?_replicate_eq_some_iff (since := "2025-02-03")]
abbrev find?_replicate_eq_some := @find?_replicate_eq_some_iff
@[simp] theorem get_find?_replicate {n : Nat} {a : α} {p : α → Bool} (h) : ((replicate n a).find? p).get h = a := by
cases n with
| zero => simp at h
@@ -836,9 +824,6 @@ theorem of_findIdx?_eq_some {xs : List α} {p : α → Bool} (w : xs.findIdx? p
simp_all only [findIdx?_cons]
split at w <;> cases i <;> simp_all
@[deprecated of_findIdx?_eq_some (since := "2025-02-02")]
abbrev findIdx?_of_eq_some := @of_findIdx?_eq_some
theorem of_findIdx?_eq_none {xs : List α} {p : α → Bool} (w : xs.findIdx? p = none) :
    ∀ i : Nat, match xs[i]? with | some a => ¬ p a | none => true := by
intro i
@@ -854,9 +839,6 @@ theorem of_findIdx?_eq_none {xs : List α} {p : α → Bool} (w : xs.findIdx? p
apply ih
split at w <;> simp_all
@[deprecated of_findIdx?_eq_none (since := "2025-02-02")]
abbrev findIdx?_of_eq_none := @of_findIdx?_eq_none
@[simp, grind _=_] theorem findIdx?_map {f : β → α} {l : List β} : findIdx? p (l.map f) = l.findIdx? (p ∘ f) := by
induction l with
| nil => simp


@@ -107,9 +107,6 @@ theorem ne_nil_of_length_pos (_ : 0 < length l) : l ≠ [] := fun _ => nomatch l
@[simp] theorem length_eq_zero_iff : length l = 0 ↔ l = [] :=
  ⟨eq_nil_of_length_eq_zero, fun h => h ▸ rfl⟩
@[deprecated length_eq_zero_iff (since := "2025-02-24")]
abbrev length_eq_zero := @length_eq_zero_iff
theorem eq_nil_iff_length_eq_zero : l = [] ↔ length l = 0 :=
  length_eq_zero_iff.symm
@@ -142,18 +139,12 @@ theorem exists_cons_of_length_eq_add_one :
theorem length_pos_iff {l : List α} : 0 < length l ↔ l ≠ [] :=
  Nat.pos_iff_ne_zero.trans (not_congr length_eq_zero_iff)
@[deprecated length_pos_iff (since := "2025-02-24")]
abbrev length_pos := @length_pos_iff
theorem ne_nil_iff_length_pos {l : List α} : l ≠ [] ↔ 0 < length l :=
  length_pos_iff.symm
theorem length_eq_one_iff {l : List α} : length l = 1 ↔ ∃ a, l = [a] :=
  ⟨fun h => match l, h with | [_], _ => ⟨_, rfl⟩, fun ⟨_, h⟩ => by simp [h]⟩
@[deprecated length_eq_one_iff (since := "2025-02-24")]
abbrev length_eq_one := @length_eq_one_iff
/-! ### cons -/
-- The arguments here are intentionally explicit.
@@ -198,38 +189,6 @@ We simplify `l.get i` to `l[i.1]'i.2` and `l.get? i` to `l[i]?`.
@[simp, grind =]
theorem get_eq_getElem {l : List α} {i : Fin l.length} : l.get i = l[i.1]'i.2 := rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12")]
theorem get?_eq_none : ∀ {l : List α} {n}, length l ≤ n → l.get? n = none
  | [], _, _ => rfl
  | _ :: l, _+1, h => get?_eq_none (l := l) <| Nat.le_of_succ_le_succ h
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12")]
theorem get?_eq_get : ∀ {l : List α} {n} (h : n < l.length), l.get? n = some (get l ⟨n, h⟩)
  | _ :: _, 0, _ => rfl
  | _ :: l, _+1, _ => get?_eq_get (l := l) _
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12")]
theorem get?_eq_some_iff : l.get? n = some a ↔ ∃ h, get l ⟨n, h⟩ = a :=
  ⟨fun e =>
    have : n < length l := Nat.gt_of_not_le fun hn => by cases get?_eq_none hn ▸ e
    ⟨this, by rwa [get?_eq_get this, Option.some.injEq] at e⟩,
  fun ⟨_, e⟩ => e ▸ get?_eq_get _⟩
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12")]
theorem get?_eq_none_iff : l.get? n = none ↔ length l ≤ n :=
  ⟨fun e => Nat.ge_of_not_lt (fun h' => by cases e ▸ get?_eq_some_iff.2 ⟨h', rfl⟩), get?_eq_none⟩
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12"), simp]
theorem get?_eq_getElem? {l : List α} {i : Nat} : l.get? i = l[i]? := by
simp only [getElem?_def]; split
· exact (get?_eq_get ‹_›)
· exact (get?_eq_none_iff.2 <| Nat.not_lt.1 ‹_›)
/-! ### getElem!
We simplify `l[i]!` to `(l[i]?).getD default`.
@@ -373,26 +332,9 @@ theorem getD_eq_getElem?_getD {l : List α} {i : Nat} {a : α} : getD l i a = (l
theorem getD_cons_zero : getD (x :: xs) 0 d = x := by simp
theorem getD_cons_succ : getD (x :: xs) (n + 1) d = getD xs n d := by simp
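The two `getD` lemmas above can be exercised on concrete lists (illustrative only, not part of the diff):

```lean
-- Illustrative only: `getD` returns the element at the index when it is
-- in bounds, and the supplied default otherwise.
example : [1, 2, 3].getD 1 0 = 2 := by decide
example : [1, 2, 3].getD 5 0 = 0 := by decide
```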
/-! ### get!
We simplify `l.get! i` to `l[i]!`.
-/
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12")]
theorem get!_eq_getD [Inhabited α] : ∀ (l : List α) i, l.get! i = l.getD i default
| [], _ => rfl
| _a::_, 0 => by simp [get!]
| _a::l, n+1 => by simpa using get!_eq_getD l n
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12"), simp]
theorem get!_eq_getElem! [Inhabited α] (l : List α) (i) : l.get! i = l[i]! := by
simp [get!_eq_getD]
/-! ### mem -/
@[simp] theorem not_mem_nil {a : α} : ¬ a ∈ [] := nofun
@[simp, grind →] theorem not_mem_nil {a : α} : ¬ a ∈ [] := nofun
@[simp, grind =] theorem mem_cons : a ∈ b :: l ↔ a = b ∨ a ∈ l :=
⟨fun h => by cases h <;> simp [Membership.mem, *],
@@ -581,15 +523,9 @@ theorem elem_eq_mem [BEq α] [LawfulBEq α] (a : α) (as : List α) :
@[grind →]
theorem nil_of_isEmpty {l : List α} (h : l.isEmpty) : l = [] := List.isEmpty_iff.mp h
@[deprecated isEmpty_iff (since := "2025-02-17")]
abbrev isEmpty_eq_true := @isEmpty_iff
@[simp] theorem isEmpty_eq_false_iff {l : List α} : l.isEmpty = false ↔ l ≠ [] := by
cases l <;> simp
@[deprecated isEmpty_eq_false_iff (since := "2025-02-17")]
abbrev isEmpty_eq_false := @isEmpty_eq_false_iff
theorem isEmpty_eq_false_iff_exists_mem {xs : List α} :
xs.isEmpty = false ↔ ∃ x, x ∈ xs := by
cases xs <;> simp
@@ -2881,9 +2817,6 @@ theorem getLast_eq_head_reverse {l : List α} (h : l ≠ []) :
l.getLast h = l.reverse.head (by simp_all) := by
rw [← head_reverse]
@[deprecated getLast_eq_iff_getLast?_eq_some (since := "2025-02-17")]
abbrev getLast_eq_iff_getLast_eq_some := @getLast_eq_iff_getLast?_eq_some
@[simp] theorem getLast?_eq_none_iff {xs : List α} : xs.getLast? = none ↔ xs = [] := by
rw [getLast?_eq_head?_reverse, head?_eq_none_iff, reverse_eq_nil_iff]
@@ -3687,10 +3620,6 @@ theorem get_cons_succ' {as : List α} {i : Fin as.length} :
theorem get_mk_zero : ∀ {l : List α} (h : 0 < l.length), l.get ⟨0, h⟩ = l.head (length_pos_iff.mp h)
| _::_, _ => rfl
set_option linter.deprecated false in
@[deprecated "Use `a[0]?` instead." (since := "2025-02-12")]
theorem get?_zero (l : List α) : l.get? 0 = l.head? := by cases l <;> rfl
/--
If one has `l.get i` in an expression (with `i : Fin l.length`) and `h : l = l'`,
`rw [h]` will give a "motive is not type correct" error, as it cannot rewrite the
@@ -3700,18 +3629,6 @@ such a rewrite, with `rw [get_of_eq h]`.
theorem get_of_eq {l l' : List α} (h : l = l') (i : Fin l.length) :
get l i = get l' ⟨i, h ▸ i.2⟩ := by cases h; rfl
set_option linter.deprecated false in
@[deprecated "Use `a[i]?` instead." (since := "2025-02-12")]
theorem get!_of_get? [Inhabited α] : ∀ {l : List α} {n}, get? l n = some a → get! l n = a
| _a::_, 0, rfl => rfl
| _::l, _+1, e => get!_of_get? (l := l) e
set_option linter.deprecated false in
@[deprecated "Use `a[i]!` instead." (since := "2025-02-12")]
theorem get!_len_le [Inhabited α] : ∀ {l : List α} {n}, length l ≤ n → l.get! n = (default : α)
| [], _, _ => rfl
| _ :: l, _+1, h => get!_len_le (l := l) <| Nat.le_of_succ_le_succ h
theorem getElem!_nil [Inhabited α] {n : Nat} : ([] : List α)[n]! = default := rfl
theorem getElem!_cons_zero [Inhabited α] {l : List α} : (a::l)[0]! = a := by
@@ -3737,30 +3654,11 @@ theorem get_of_mem {a} {l : List α} (h : a ∈ l) : ∃ n, get l n = a := by
obtain ⟨n, h, e⟩ := getElem_of_mem h
exact ⟨⟨n, h⟩, e⟩
set_option linter.deprecated false in
@[deprecated getElem?_of_mem (since := "2025-02-12")]
theorem get?_of_mem {a} {l : List α} (h : a ∈ l) : ∃ n, l.get? n = some a :=
let ⟨⟨n, _⟩, e⟩ := get_of_mem h; ⟨n, e ▸ get?_eq_get _⟩
theorem get_mem : ∀ (l : List α) n, get l n ∈ l
| _ :: _, 0, _ => .head ..
| _ :: l, _+1, _ => .tail _ (get_mem l ..)
set_option linter.deprecated false in
@[deprecated mem_of_getElem? (since := "2025-02-12")]
theorem mem_of_get? {l : List α} {n a} (e : l.get? n = some a) : a ∈ l :=
let ⟨_, e⟩ := get?_eq_some_iff.1 e; e ▸ get_mem ..
theorem mem_iff_get {a} {l : List α} : a ∈ l ↔ ∃ n, get l n = a :=
⟨get_of_mem, fun ⟨_, e⟩ => e ▸ get_mem ..⟩
set_option linter.deprecated false in
@[deprecated mem_iff_getElem? (since := "2025-02-12")]
theorem mem_iff_get? {a} {l : List α} : a ∈ l ↔ ∃ n, l.get? n = some a := by
simp [getElem?_eq_some_iff, Fin.exists_iff, mem_iff_get]
/-! ### Deprecations -/
end List


@@ -105,9 +105,6 @@ theorem length_leftpad {n : Nat} {a : α} {l : List α} :
(leftpad n a l).length = max n l.length := by
simp only [leftpad, length_append, length_replicate, Nat.sub_add_eq_max]
@[deprecated length_leftpad (since := "2025-02-24")]
abbrev leftpad_length := @length_leftpad
theorem length_rightpad {n : Nat} {a : α} {l : List α} :
(rightpad n a l).length = max n l.length := by
simp [rightpad]


@@ -196,9 +196,6 @@ theorem getElem_insertIdx_of_gt {l : List α} {x : α} {i j : Nat} (hn : i < j)
| zero => omega
| succ j => simp
@[deprecated getElem_insertIdx_of_gt (since := "2025-02-04")]
abbrev getElem_insertIdx_of_ge := @getElem_insertIdx_of_gt
@[grind =]
theorem getElem_insertIdx {l : List α} {x : α} {i j : Nat} (h : j < (l.insertIdx i x).length) :
(l.insertIdx i x)[j] =
@@ -261,9 +258,6 @@ theorem getElem?_insertIdx_of_gt {l : List α} {x : α} {i j : Nat} (h : i < j)
(l.insertIdx i x)[j]? = l[j - 1]? := by
rw [getElem?_insertIdx, if_neg (by omega), if_neg (by omega)]
@[deprecated getElem?_insertIdx_of_gt (since := "2025-02-04")]
abbrev getElem?_insertIdx_of_ge := @getElem?_insertIdx_of_gt
end InsertIdx
end List


@@ -117,9 +117,6 @@ theorem take_set_of_le {a : α} {i j : Nat} {l : List α} (h : j ≤ i) :
next h' => rw [getElem?_set_ne (by omega)]
· rfl
@[deprecated take_set_of_le (since := "2025-02-04")]
abbrev take_set_of_lt := @take_set_of_le
@[simp, grind =] theorem take_replicate {a : α} : ∀ {i n : Nat}, take i (replicate n a) = replicate (min i n) a
| n, 0 => by simp
| 0, m => by simp
@@ -165,8 +162,8 @@ theorem take_eq_take_iff :
| x :: xs, 0, j + 1 => by simp [succ_min_succ]
| x :: xs, i + 1, j + 1 => by simp [succ_min_succ, take_eq_take_iff]
@[deprecated take_eq_take_iff (since := "2025-02-16")]
abbrev take_eq_take := @take_eq_take_iff
theorem take_eq_take_min {l : List α} {i : Nat} : l.take i = l.take (min i l.length) := by
simp
@[grind =]
theorem take_add {l : List α} {i j : Nat} : l.take (i + j) = l.take i ++ (l.drop i).take j := by


@@ -165,9 +165,6 @@ theorem take_set {l : List α} {i j : Nat} {a : α} :
| nil => simp
| cons hd tl => cases j <;> simp_all
@[deprecated take_set (since := "2025-02-17")]
abbrev set_take := @take_set
theorem drop_set {l : List α} {i j : Nat} {a : α} :
(l.set j a).drop i = if j < i then l.drop i else (l.drop i).set (j - i) a := by
induction i generalizing l j with


@@ -791,18 +791,6 @@ protected theorem pow_le_pow_right {n : Nat} (hx : n > 0) {i : Nat} : ∀ {j}, i
| Or.inr h =>
h.symm ▸ Nat.le_refl _
set_option linter.missingDocs false in
@[deprecated Nat.pow_le_pow_left (since := "2025-02-17")]
abbrev pow_le_pow_of_le_left := @Nat.pow_le_pow_left
set_option linter.missingDocs false in
@[deprecated Nat.pow_le_pow_right (since := "2025-02-17")]
abbrev pow_le_pow_of_le_right := @Nat.pow_le_pow_right
set_option linter.missingDocs false in
@[deprecated Nat.pow_pos (since := "2025-02-17")]
abbrev pos_pow_of_pos := @Nat.pow_pos
@[simp] theorem zero_pow_of_pos (n : Nat) (h : 0 < n) : 0 ^ n = 0 := by
cases n with
| zero => cases h
@@ -882,9 +870,6 @@ protected theorem ne_zero_of_lt (h : b < a) : a ≠ 0 := by
exact absurd h (Nat.not_lt_zero _)
apply Nat.noConfusion
@[deprecated Nat.ne_zero_of_lt (since := "2025-02-06")]
theorem not_eq_zero_of_lt (h : b < a) : a ≠ 0 := Nat.ne_zero_of_lt h
theorem pred_lt_of_lt {n m : Nat} (h : m < n) : pred n < n :=
pred_lt (Nat.ne_zero_of_lt h)


@@ -75,10 +75,7 @@ theorem attach_map_val (o : Option α) (f : α → β) :
(o.attach.map fun (i : {i // o = some i}) => f i) = o.map f := by
cases o <;> simp
@[deprecated attach_map_val (since := "2025-02-17")]
abbrev attach_map_coe := @attach_map_val
@[simp, grind =] theorem attach_map_subtype_val (o : Option α) :
@[simp] theorem attach_map_subtype_val (o : Option α) :
o.attach.map Subtype.val = o :=
(attach_map_val _ _).trans (congrFun Option.map_id _)
@@ -86,10 +83,7 @@ theorem attachWith_map_val {p : α → Prop} (f : α → β) (o : Option α) (H
((o.attachWith p H).map fun (i : { i // p i}) => f i.val) = o.map f := by
cases o <;> simp
@[deprecated attachWith_map_val (since := "2025-02-17")]
abbrev attachWith_map_coe := @attachWith_map_val
@[simp, grind =] theorem attachWith_map_subtype_val {p : α → Prop} (o : Option α) (H : ∀ a, o = some a → p a) :
@[simp] theorem attachWith_map_subtype_val {p : α → Prop} (o : Option α) (H : ∀ a, o = some a → p a) :
(o.attachWith p H).map Subtype.val = o :=
(attachWith_map_val _ _ _).trans (congrFun Option.map_id _)
@@ -174,23 +168,23 @@ theorem toArray_attachWith {p : α → Prop} {o : Option α} {h} :
o.toList.attach = (o.attach.map fun ⟨a, h⟩ => ⟨a, by simpa using h⟩).toList := by
cases o <;> simp [toList]
@[grind =] theorem attach_map {o : Option α} (f : α → β) :
@[grind =]
theorem attach_map {o : Option α} (f : α → β) :
(o.map f).attach = o.attach.map (fun ⟨x, h⟩ => ⟨f x, map_eq_some_iff.2 ⟨_, h, rfl⟩⟩) := by
cases o <;> simp
@[grind =] theorem attachWith_map {o : Option α} (f : α → β) {P : β → Prop} {H : ∀ (b : β), o.map f = some b → P b} :
@[grind =]
theorem attachWith_map {o : Option α} (f : α → β) {P : β → Prop} {H : ∀ (b : β), o.map f = some b → P b} :
(o.map f).attachWith P H = (o.attachWith (P ∘ f) (fun _ h => H _ (map_eq_some_iff.2 ⟨_, h, rfl⟩))).map
fun ⟨x, h⟩ => ⟨f x, h⟩ := by
cases o <;> simp
@[grind =] theorem map_attach_eq_pmap {o : Option α} (f : { x // o = some x } → β) :
@[grind =]
theorem map_attach_eq_pmap {o : Option α} (f : { x // o = some x } → β) :
o.attach.map f = o.pmap (fun a (h : o = some a) => f ⟨a, h⟩) (fun _ h => h) := by
cases o <;> simp
@[deprecated map_attach_eq_pmap (since := "2025-02-09")]
abbrev map_attach := @map_attach_eq_pmap
@[simp, grind =] theorem map_attachWith {l : Option α} {P : α → Prop} {H : ∀ (a : α), l = some a → P a}
@[simp] theorem map_attachWith {l : Option α} {P : α → Prop} {H : ∀ (a : α), l = some a → P a}
(f : { x // P x } → β) :
(l.attachWith P H).map f = l.attach.map fun ⟨x, h⟩ => f ⟨x, H _ h⟩ := by
cases l <;> simp_all
@@ -206,12 +200,12 @@ theorem map_attach_eq_attachWith {o : Option α} {p : α → Prop} (f : ∀ a, o
o.attach.map (fun x => ⟨x.1, f x.1 x.2⟩) = o.attachWith p f := by
cases o <;> simp_all
@[grind =] theorem attach_bind {o : Option α} {f : α → Option β} :
theorem attach_bind {o : Option α} {f : α → Option β} :
(o.bind f).attach =
o.attach.bind fun ⟨x, h⟩ => (f x).attach.map fun ⟨y, h'⟩ => ⟨y, bind_eq_some_iff.2 ⟨_, h, h'⟩⟩ := by
cases o <;> simp
@[grind =] theorem bind_attach {o : Option α} {f : {x // o = some x} → Option β} :
theorem bind_attach {o : Option α} {f : {x // o = some x} → Option β} :
o.attach.bind f = o.pbind fun a h => f ⟨a, h⟩ := by
cases o <;> simp
@@ -219,7 +213,7 @@ theorem pbind_eq_bind_attach {o : Option α} {f : (a : α) → o = some a → Op
o.pbind f = o.attach.bind fun ⟨x, h⟩ => f x h := by
cases o <;> simp
@[grind =] theorem attach_filter {o : Option α} {p : α → Bool} :
theorem attach_filter {o : Option α} {p : α → Bool} :
(o.filter p).attach =
o.attach.bind fun ⟨x, h⟩ => if h' : p x then some ⟨x, by simp_all⟩ else none := by
cases o with
@@ -235,12 +229,12 @@ theorem pbind_eq_bind_attach {o : Option α} {f : (a : α) → o = some a → Op
· rintro ⟨h, rfl⟩
simp [h]
@[grind =] theorem filter_attachWith {P : α → Prop} {o : Option α} {h : ∀ x, o = some x → P x} {q : α → Bool} :
theorem filter_attachWith {P : α → Prop} {o : Option α} {h : ∀ x, o = some x → P x} {q : α → Bool} :
(o.attachWith P h).filter q =
(o.filter q).attachWith P (fun _ h' => h _ (eq_some_of_filter_eq_some h')) := by
cases o <;> simp [filter_some] <;> split <;> simp
@[grind =] theorem filter_attach {o : Option α} {p : {x // o = some x} → Bool} :
theorem filter_attach {o : Option α} {p : {x // o = some x} → Bool} :
o.attach.filter p = o.pbind fun a h => if p ⟨a, h⟩ then some ⟨a, h⟩ else none := by
cases o <;> simp [filter_some]
@@ -284,7 +278,7 @@ theorem toArray_pmap {p : α → Prop} {o : Option α} {f : (a : α) → p a →
(o.pmap f h).toArray = o.attach.toArray.map (fun x => f x.1 (h _ x.2)) := by
cases o <;> simp
@[grind =] theorem attach_pfilter {o : Option α} {p : (a : α) → o = some a → Bool} :
theorem attach_pfilter {o : Option α} {p : (a : α) → o = some a → Bool} :
(o.pfilter p).attach =
o.attach.pbind fun x h => if h' : p x.1 (by simp_all) then
some ⟨x.1, by simpa [pfilter_eq_some_iff] using ⟨_, h'⟩⟩ else none := by


@@ -49,7 +49,6 @@ instance : LawfulUpwardEnumerableLE (BitVec n) where
· rintro ⟨n, hn, rfl⟩
simp [BitVec.ofNatLT]
instance : LawfulOrderLT (BitVec n) := inferInstance
instance : LawfulUpwardEnumerableLT (BitVec n) := inferInstance
instance : LawfulUpwardEnumerableLT (BitVec n) := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed (BitVec n) := inferInstance


@@ -57,7 +57,6 @@ instance : LawfulUpwardEnumerableLE UInt8 where
simpa [upwardEnumerableLE_ofBitVec, UInt8.le_iff_toBitVec_le] using
LawfulUpwardEnumerableLE.le_iff _ _
instance : LawfulOrderLT UInt8 := inferInstance
instance : LawfulUpwardEnumerableLT UInt8 := inferInstance
instance : LawfulUpwardEnumerableLT UInt8 := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed UInt8 := inferInstance
@@ -130,7 +129,6 @@ instance : LawfulUpwardEnumerableLE UInt16 where
simpa [upwardEnumerableLE_ofBitVec, UInt16.le_iff_toBitVec_le] using
LawfulUpwardEnumerableLE.le_iff _ _
instance : LawfulOrderLT UInt16 := inferInstance
instance : LawfulUpwardEnumerableLT UInt16 := inferInstance
instance : LawfulUpwardEnumerableLT UInt16 := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed UInt16 := inferInstance
@@ -203,7 +201,6 @@ instance : LawfulUpwardEnumerableLE UInt32 where
simpa [upwardEnumerableLE_ofBitVec, UInt32.le_iff_toBitVec_le] using
LawfulUpwardEnumerableLE.le_iff _ _
instance : LawfulOrderLT UInt32 := inferInstance
instance : LawfulUpwardEnumerableLT UInt32 := inferInstance
instance : LawfulUpwardEnumerableLT UInt32 := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed UInt32 := inferInstance
@@ -276,7 +273,6 @@ instance : LawfulUpwardEnumerableLE UInt64 where
simpa [upwardEnumerableLE_ofBitVec, UInt64.le_iff_toBitVec_le] using
LawfulUpwardEnumerableLE.le_iff _ _
instance : LawfulOrderLT UInt64 := inferInstance
instance : LawfulUpwardEnumerableLT UInt64 := inferInstance
instance : LawfulUpwardEnumerableLT UInt64 := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed UInt64 := inferInstance
@@ -349,7 +345,6 @@ instance : LawfulUpwardEnumerableLE USize where
simpa [upwardEnumerableLE_ofBitVec, USize.le_iff_toBitVec_le] using
LawfulUpwardEnumerableLE.le_iff _ _
instance : LawfulOrderLT USize := inferInstance
instance : LawfulUpwardEnumerableLT USize := inferInstance
instance : LawfulUpwardEnumerableLT USize := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed USize := inferInstance


@@ -181,12 +181,12 @@ because you don't want to unfold it. Use `Rat.inv_def` instead.)
@[irreducible] protected def inv (a : Rat) : Rat :=
if h : a.num < 0 then
{ num := -a.den, den := a.num.natAbs
den_nz := Nat.ne_of_gt (Int.natAbs_pos.2 (Int.ne_of_lt h))
reduced := Int.natAbs_neg a.den ▸ a.reduced.symm }
den_nz := by exact Nat.ne_of_gt (Int.natAbs_pos.2 (Int.ne_of_lt h))
reduced := by exact Int.natAbs_neg a.den ▸ a.reduced.symm }
else if h : a.num > 0 then
{ num := a.den, den := a.num.natAbs
den_nz := Nat.ne_of_gt (Int.natAbs_pos.2 (Int.ne_of_gt h))
reduced := a.reduced.symm }
den_nz := by exact Nat.ne_of_gt (Int.natAbs_pos.2 (Int.ne_of_gt h))
reduced := by exact a.reduced.symm }
else
a


@@ -600,6 +600,7 @@ theorem ofScientific_true_def : Rat.ofScientific m true e = mkRat m (10 ^ e) :=
theorem ofScientific_false_def : Rat.ofScientific m false e = (m * 10 ^ e : Nat) := by
unfold Rat.ofScientific; rfl
-- See also `ofScientific_def'` below.
theorem ofScientific_def : Rat.ofScientific m s e =
if s then mkRat m (10 ^ e) else (m * 10 ^ e : Nat) := by
cases s; exact ofScientific_false_def; exact ofScientific_true_def
@@ -1007,6 +1008,21 @@ theorem intCast_neg_iff {a : Int} :
(a : Rat) < 0 ↔ a < 0 :=
Rat.intCast_lt_intCast
/--
Alternative statement of `ofScientific_def`.
-/
@[grind =]
theorem ofScientific_def' :
(OfScientific.ofScientific m s e : Rat) = m * (10 ^ (if s then -e else e : Int)) := by
change Rat.ofScientific _ _ _ = _
rw [ofScientific_def]
split
· rw [Rat.zpow_neg, Rat.div_def, Rat.zpow_natCast, mkRat_eq_div]
push_cast
rfl
· push_cast
rfl
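As a concrete instance of `ofScientific_true_def` (illustrative only, not part of the diff):

```lean
-- Illustrative only: a decimal literal such as `1.5` elaborates to
-- `OfScientific.ofScientific 15 true 1`, which by `ofScientific_true_def`
-- is `mkRat 15 (10 ^ 1)`, the normalized rational `3/2`.
example : Rat.ofScientific 15 true 1 = mkRat 15 (10 ^ 1) :=
  Rat.ofScientific_true_def
```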
theorem floor_def (a : Rat) : a.floor = a.num / a.den := by
rw [Rat.floor]
split <;> simp_all


@@ -96,8 +96,7 @@ theorem Int8.toBitVec.inj : {x y : Int8} → x.toBitVec = y.toBitVec → x = y
/-- Obtains the `Int8` that is 2's complement equivalent to the `UInt8`. -/
@[inline] def UInt8.toInt8 (i : UInt8) : Int8 := Int8.ofUInt8 i
@[inline, deprecated UInt8.toInt8 (since := "2025-02-13"), inherit_doc UInt8.toInt8]
def Int8.mk (i : UInt8) : Int8 := UInt8.toInt8 i
/--
Converts an arbitrary-precision integer to an 8-bit integer, wrapping on overflow or underflow.
@@ -159,8 +158,7 @@ Converts an 8-bit signed integer to a natural number, mapping all negative numbe
Use `Int8.toBitVec` to obtain the two's complement representation.
-/
@[inline] def Int8.toNatClampNeg (i : Int8) : Nat := i.toInt.toNat
@[inline, deprecated Int8.toNatClampNeg (since := "2025-02-13"), inherit_doc Int8.toNatClampNeg]
def Int8.toNat (i : Int8) : Nat := i.toInt.toNat
/-- Obtains the `Int8` whose 2's complement representation is the given `BitVec 8`. -/
@[inline] def Int8.ofBitVec (b : BitVec 8) : Int8 := b
/--
@@ -449,8 +447,7 @@ theorem Int16.toBitVec.inj : {x y : Int16} → x.toBitVec = y.toBitVec → x = y
/-- Obtains the `Int16` that is 2's complement equivalent to the `UInt16`. -/
@[inline] def UInt16.toInt16 (i : UInt16) : Int16 := Int16.ofUInt16 i
@[inline, deprecated UInt16.toInt16 (since := "2025-02-13"), inherit_doc UInt16.toInt16]
def Int16.mk (i : UInt16) : Int16 := UInt16.toInt16 i
/--
Converts an arbitrary-precision integer to a 16-bit signed integer, wrapping on overflow or underflow.
@@ -514,8 +511,7 @@ Converts a 16-bit signed integer to a natural number, mapping all negative numbe
Use `Int16.toBitVec` to obtain the two's complement representation.
-/
@[inline] def Int16.toNatClampNeg (i : Int16) : Nat := i.toInt.toNat
@[inline, deprecated Int16.toNatClampNeg (since := "2025-02-13"), inherit_doc Int16.toNatClampNeg]
def Int16.toNat (i : Int16) : Nat := i.toInt.toNat
/-- Obtains the `Int16` whose 2's complement representation is the given `BitVec 16`. -/
@[inline] def Int16.ofBitVec (b : BitVec 16) : Int16 := b
/--
@@ -820,8 +816,7 @@ theorem Int32.toBitVec.inj : {x y : Int32} → x.toBitVec = y.toBitVec → x = y
/-- Obtains the `Int32` that is 2's complement equivalent to the `UInt32`. -/
@[inline] def UInt32.toInt32 (i : UInt32) : Int32 := Int32.ofUInt32 i
@[inline, deprecated UInt32.toInt32 (since := "2025-02-13"), inherit_doc UInt32.toInt32]
def Int32.mk (i : UInt32) : Int32 := UInt32.toInt32 i
/--
Converts an arbitrary-precision integer to a 32-bit integer, wrapping on overflow or underflow.
@@ -886,8 +881,7 @@ Converts a 32-bit signed integer to a natural number, mapping all negative numbe
Use `Int32.toBitVec` to obtain the two's complement representation.
-/
@[inline] def Int32.toNatClampNeg (i : Int32) : Nat := i.toInt.toNat
@[inline, deprecated Int32.toNatClampNeg (since := "2025-02-13"), inherit_doc Int32.toNatClampNeg]
def Int32.toNat (i : Int32) : Nat := i.toInt.toNat
/-- Obtains the `Int32` whose 2's complement representation is the given `BitVec 32`. -/
@[inline] def Int32.ofBitVec (b : BitVec 32) : Int32 := b
/--
@@ -1207,8 +1201,7 @@ theorem Int64.toBitVec.inj : {x y : Int64} → x.toBitVec = y.toBitVec → x = y
/-- Obtains the `Int64` that is 2's complement equivalent to the `UInt64`. -/
@[inline] def UInt64.toInt64 (i : UInt64) : Int64 := Int64.ofUInt64 i
@[inline, deprecated UInt64.toInt64 (since := "2025-02-13"), inherit_doc UInt64.toInt64]
def Int64.mk (i : UInt64) : Int64 := UInt64.toInt64 i
/--
Converts an arbitrary-precision integer to a 64-bit integer, wrapping on overflow or underflow.
@@ -1278,8 +1271,7 @@ Converts a 64-bit signed integer to a natural number, mapping all negative numbe
Use `Int64.toBitVec` to obtain the two's complement representation.
-/
@[inline] def Int64.toNatClampNeg (i : Int64) : Nat := i.toInt.toNat
@[inline, deprecated Int64.toNatClampNeg (since := "2025-02-13"), inherit_doc Int64.toNatClampNeg]
def Int64.toNat (i : Int64) : Nat := i.toInt.toNat
/-- Obtains the `Int64` whose 2's complement representation is the given `BitVec 64`. -/
@[inline] def Int64.ofBitVec (b : BitVec 64) : Int64 := b
/--
@@ -1613,8 +1605,7 @@ theorem ISize.toBitVec.inj : {x y : ISize} → x.toBitVec = y.toBitVec → x = y
/-- Obtains the `ISize` that is 2's complement equivalent to the `USize`. -/
@[inline] def USize.toISize (i : USize) : ISize := ISize.ofUSize i
@[inline, deprecated USize.toISize (since := "2025-02-13"), inherit_doc USize.toISize]
def ISize.mk (i : USize) : ISize := USize.toISize i
/--
Converts an arbitrary-precision integer to a word-sized signed integer, wrapping around on over- or
underflow.
@@ -1647,8 +1638,7 @@ Converts a word-sized signed integer to a natural number, mapping all negative n
Use `ISize.toBitVec` to obtain the two's complement representation.
-/
@[inline] def ISize.toNatClampNeg (i : ISize) : Nat := i.toInt.toNat
@[inline, deprecated ISize.toNatClampNeg (since := "2025-02-13"), inherit_doc ISize.toNatClampNeg]
def ISize.toNat (i : ISize) : Nat := i.toInt.toNat
/-- Obtains the `ISize` whose 2's complement representation is the given `BitVec`. -/
@[inline] def ISize.ofBitVec (b : BitVec System.Platform.numBits) : ISize := b
/--


@@ -10,6 +10,7 @@ public import Init.Data.Slice.Basic
public import Init.Data.Slice.Notation
public import Init.Data.Slice.Operations
public import Init.Data.Slice.Array
public import Init.Data.Slice.Lemmas
public section


@@ -10,7 +10,7 @@ public import Init.Core
public import Init.Data.Slice.Array.Basic
import Init.Data.Iterators.Combinators.Attach
import Init.Data.Iterators.Combinators.FilterMap
import Init.Data.Iterators.Combinators.ULift
public import Init.Data.Iterators.Combinators.ULift
public import Init.Data.Iterators.Consumers.Collect
public import Init.Data.Iterators.Consumers.Loop
public import Init.Data.Range.Polymorphic.Basic
@@ -31,7 +31,6 @@ open Std Slice PRange Iterators
variable {shape : RangeShape} {α : Type u}
@[no_expose]
instance {s : Subarray α} : ToIterator s Id α :=
.of _
(PRange.Internal.iter (s.internalRepresentation.start...<s.internalRepresentation.stop)


@@ -13,5 +13,7 @@ public import Init.Data.String.Extra
public import Init.Data.String.Lemmas
public import Init.Data.String.Repr
public import Init.Data.String.Bootstrap
public import Init.Data.String.Slice
public import Init.Data.String.Pattern
public section

File diff suppressed because it is too large


@@ -141,7 +141,7 @@ Examples:
* `[].asString = ""`
* `['a', 'a', 'a'].asString = "aaa"`
-/
@[expose]
@[expose, inline]
def List.asString (s : List Char) : String :=
String.mk s
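A usage example (illustrative only, not part of the diff); with `@[inline]` the call compiles down to a plain `String.mk`:

```lean
#eval ['L', 'e', 'a', 'n'].asString  -- "Lean"
```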


@@ -363,6 +363,36 @@ theorem toBitVec_eq_of_isInvalidContinuationByte_eq_false {b : UInt8} (hb : isIn
b.toBitVec = 0b10#2 ++ b.toBitVec.setWidth 6 := by
exact helper₂ 6 b.toBitVec (isInvalidContinuationByte_eq_false_iff_toBitVec.1 hb)
theorem parseFirstByte_eq_invalid_of_isInvalidContinuationByte_eq_false {b : UInt8}
(hb : isInvalidContinuationByte b = false) : parseFirstByte b = .invalid := by
replace hb := toBitVec_eq_of_isInvalidContinuationByte_eq_false hb
match h : parseFirstByte b with
| .done =>
rw [toBitVec_eq_of_parseFirstByte_eq_done h] at hb
have := congrArg (·[7]) hb
simp only at this
rw [BitVec.getElem_append, BitVec.getElem_append] at this
simp at this
| .oneMore =>
rw [toBitVec_eq_of_parseFirstByte_eq_oneMore h] at hb
have := congrArg (·[6]) hb
simp only at this
rw [BitVec.getElem_append, BitVec.getElem_append] at this
simp at this
| .twoMore =>
rw [toBitVec_eq_of_parseFirstByte_eq_twoMore h] at hb
have := congrArg (·[6]) hb
simp only at this
rw [BitVec.getElem_append, BitVec.getElem_append] at this
simp at this
| .threeMore =>
rw [toBitVec_eq_of_parseFirstByte_eq_threeMore h] at hb
have := congrArg (·[6]) hb
simp only at this
rw [BitVec.getElem_append, BitVec.getElem_append] at this
simp at this
| .invalid => rfl
/-! # `parseFirstByte`, `isInvalidContinuationByte` and `utf8EncodeChar` -/
theorem parseFirstByte_utf8EncodeChar_eq_done {c : Char} (hc : c.utf8Size = 1) :
@@ -1177,6 +1207,7 @@ public theorem ByteArray.isSome_utf8DecodeChar?_append {b : ByteArray} {i : Nat}
obtain ⟨c, hc⟩ := Option.isSome_iff_exists.1 h
rw [utf8DecodeChar?_append_eq_some hc, Option.isSome_some]
@[inline, expose]
public def ByteArray.utf8DecodeChar (bytes : ByteArray) (i : Nat) (h : (utf8DecodeChar? bytes i).isSome) : Char :=
(utf8DecodeChar? bytes i).get h
@@ -1236,3 +1267,115 @@ public theorem List.utf8DecodeChar?_utf8Encode_cons {l : List Char} {c : Char} :
public theorem List.utf8DecodeChar_utf8Encode_cons {l : List Char} {c : Char} {h} :
ByteArray.utf8DecodeChar (c::l).utf8Encode 0 h = c := by
simp [ByteArray.utf8DecodeChar]
/-! # `UInt8.IsUtf8FirstByte` -/
namespace UInt8
/--
Predicate for whether a byte can appear as the first byte of the UTF-8 encoding of a Unicode
scalar value.
-/
@[expose]
public def IsUtf8FirstByte (c : UInt8) : Prop :=
c &&& 0x80 = 0 ∨ c &&& 0xe0 = 0xc0 ∨ c &&& 0xf0 = 0xe0 ∨ c &&& 0xf8 = 0xf0
public instance {c : UInt8} : Decidable c.IsUtf8FirstByte :=
inferInstanceAs <| Decidable (c &&& 0x80 = 0 ∨ c &&& 0xe0 = 0xc0 ∨ c &&& 0xf0 = 0xe0 ∨ c &&& 0xf8 = 0xf0)
theorem isUtf8FirstByte_iff_parseFirstByte_ne_invalid {c : UInt8} :
c.IsUtf8FirstByte ↔ parseFirstByte c ≠ FirstByte.invalid := by
fun_cases parseFirstByte with simp_all [IsUtf8FirstByte]
@[simp]
public theorem isUtf8FirstByte_getElem_utf8EncodeChar {c : Char} {i : Nat} {hi : i < (String.utf8EncodeChar c).length} :
(String.utf8EncodeChar c)[i].IsUtf8FirstByte ↔ i = 0 := by
obtain (rfl|hi₀) := Nat.eq_zero_or_pos i
· simp only [isUtf8FirstByte_iff_parseFirstByte_ne_invalid, iff_true]
match h : c.utf8Size, c.utf8Size_pos, c.utf8Size_le_four with
| 1, _, _ => simp [parseFirstByte_utf8EncodeChar_eq_done h]
| 2, _, _ => simp [parseFirstByte_utf8EncodeChar_eq_oneMore h]
| 3, _, _ => simp [parseFirstByte_utf8EncodeChar_eq_twoMore h]
| 4, _, _ => simp [parseFirstByte_utf8EncodeChar_eq_threeMore h]
· simp only [isUtf8FirstByte_iff_parseFirstByte_ne_invalid, ne_eq, Nat.ne_of_lt' hi₀, iff_false,
Classical.not_not]
apply parseFirstByte_eq_invalid_of_isInvalidContinuationByte_eq_false
simp only [String.length_utf8EncodeChar] at hi
match h : c.utf8Size, c.utf8Size_pos, c.utf8Size_le_four, i, hi₀, hi with
| 1, _, _, 1, _, _ => contradiction
| 2, _, _, 1, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_one_of_utf8Size_eq_two h]
| 3, _, _, 1, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_one_of_utf8Size_eq_three h]
| 3, _, _, 2, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_two_of_utf8Size_eq_three h]
| 4, _, _, 1, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_one_of_utf8Size_eq_four h]
| 4, _, _, 2, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_two_of_utf8Size_eq_four h]
| 4, _, _, 3, _, _ => simp [isInvalidContinuationByte_getElem_utf8EncodeChar_three_of_utf8Size_eq_four h]
public theorem isUtf8FirstByte_getElem_zero_utf8EncodeChar {c : Char} :
((String.utf8EncodeChar c)[0]'(by simp [c.utf8Size_pos])).IsUtf8FirstByte := by
simp
@[expose]
public def utf8ByteSize (c : UInt8) (_h : c.IsUtf8FirstByte) : String.Pos :=
if c &&& 0x80 = 0 then
1
else if c &&& 0xe0 = 0xc0 then
2
else if c &&& 0xf0 = 0xe0 then
3
else
4
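A sanity check of the new definition (illustrative only, not part of the diff), using the `Decidable` instance above:

```lean
-- Illustrative only: 0xE2 = 0b1110_0010 satisfies `c &&& 0xf0 = 0xe0`,
-- so it opens a three-byte sequence (it is the first byte of '€').
example : ((0xE2 : UInt8).utf8ByteSize (by decide)).byteIdx = 3 := by decide
```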
def _root_.ByteArray.utf8DecodeChar?.FirstByte.utf8ByteSize : FirstByte → String.Pos
| .invalid => 0
| .done => 1
| .oneMore => 2
| .twoMore => 3
| .threeMore => 4
theorem utf8ByteSize_eq_utf8ByteSize_parseFirstByte {c : UInt8} {h : c.IsUtf8FirstByte} :
c.utf8ByteSize h = (parseFirstByte c).utf8ByteSize := by
simp only [utf8ByteSize, FirstByte.utf8ByteSize, parseFirstByte, beq_iff_eq]
split
· simp
· split
· simp
· split
· simp
· obtain (h|h|h|h) := h
· contradiction
· contradiction
· contradiction
· simp [h]
end UInt8
public theorem ByteArray.isUtf8FirstByte_getElem_zero_utf8EncodeChar_append {c : Char} {b : ByteArray} :
(((String.utf8EncodeChar c).toByteArray ++ b)[0]'(by simp; have := c.utf8Size_pos; omega)).IsUtf8FirstByte := by
rw [ByteArray.getElem_append_left (by simp [c.utf8Size_pos]),
List.getElem_toByteArray, UInt8.isUtf8FirstByte_getElem_utf8EncodeChar]
public theorem ByteArray.isUtf8FirstByte_of_isSome_utf8DecodeChar? {b : ByteArray} {i : Nat}
(h : (utf8DecodeChar? b i).isSome) : (b[i]'(lt_size_of_isSome_utf8DecodeChar? h)).IsUtf8FirstByte := by
rw [utf8DecodeChar?_eq_utf8DecodeChar?_extract] at h
suffices ((b.extract i b.size)[0]'(lt_size_of_isSome_utf8DecodeChar? h)).IsUtf8FirstByte by
simpa [ByteArray.getElem_extract, Nat.add_zero] using this
obtain ⟨c, hc⟩ := Option.isSome_iff_exists.1 h
conv => congr; congr; rw [eq_of_utf8DecodeChar?_eq_some hc]
exact isUtf8FirstByte_getElem_zero_utf8EncodeChar_append
theorem Char.byteIdx_utf8ByteSize_getElem_utf8EncodeChar {c : Char} :
(((String.utf8EncodeChar c)[0]'(by simp [c.utf8Size_pos])).utf8ByteSize
UInt8.isUtf8FirstByte_getElem_zero_utf8EncodeChar).byteIdx = c.utf8Size := by
rw [UInt8.utf8ByteSize_eq_utf8ByteSize_parseFirstByte]
obtain (hc|hc|hc|hc) := c.utf8Size_eq
· rw [parseFirstByte_utf8EncodeChar_eq_done hc, FirstByte.utf8ByteSize, hc]
· rw [parseFirstByte_utf8EncodeChar_eq_oneMore hc, FirstByte.utf8ByteSize, hc]
· rw [parseFirstByte_utf8EncodeChar_eq_twoMore hc, FirstByte.utf8ByteSize, hc]
· rw [parseFirstByte_utf8EncodeChar_eq_threeMore hc, FirstByte.utf8ByteSize, hc]
public theorem ByteArray.utf8Size_utf8DecodeChar {b : ByteArray} {i} {h} :
(utf8DecodeChar b i h).utf8Size =
((b[i]'(lt_size_of_isSome_utf8DecodeChar? h)).utf8ByteSize (isUtf8FirstByte_of_isSome_utf8DecodeChar? h)).byteIdx := by
rw [← Char.byteIdx_utf8ByteSize_getElem_utf8EncodeChar]
simp only [List.getElem_eq_getElem_toByteArray, utf8EncodeChar_utf8DecodeChar]
simp [ByteArray.getElem_extract]


@@ -104,8 +104,7 @@ where
/--
Decodes an array of bytes that encode a string as [UTF-8](https://en.wikipedia.org/wiki/UTF-8) into
the corresponding string. Invalid UTF-8 characters in the byte array result in `(default : Char)`,
or `'A'`, in the string.
the corresponding string.
-/
@[extern "lean_string_from_utf8_unchecked"]
def fromUTF8 (a : @& ByteArray) (h : validateUTF8 a) : String :=
@@ -143,15 +142,6 @@ def toUTF8 (a : @& String) : ByteArray :=
@[simp] theorem size_toUTF8 (s : String) : s.toUTF8.size = s.utf8ByteSize := by
rfl
/--
Accesses the indicated byte in the UTF-8 encoding of a string.
At runtime, this function is implemented by efficient, constant-time code.
-/
@[extern "lean_string_get_byte_fast"]
def getUtf8Byte (s : @& String) (n : Nat) (h : n < s.utf8ByteSize) : UInt8 :=
(toUTF8 s)[n]'(size_toUTF8 _ h)
theorem Iterator.sizeOf_next_lt_of_hasNext (i : String.Iterator) (h : i.hasNext) : sizeOf i.next < sizeOf i := by
cases i; rename_i s pos; simp [Iterator.next, Iterator.sizeOf_eq]; simp [Iterator.hasNext] at h
exact Nat.sub_lt_sub_left h (String.lt_next s pos)


@@ -0,0 +1,12 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Init.Data.String.Pattern.Basic
public import Init.Data.String.Pattern.Char
public import Init.Data.String.Pattern.String
public import Init.Data.String.Pattern.Pred

View File

@@ -0,0 +1,221 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Init.Data.String.Basic
public import Init.Data.Iterators.Basic
set_option doc.verso true
/-!
This module defines the notion of patterns, which is central to the {name}`String.Slice` and
{name}`String` API. All functions on {name}`String.Slice` and {name}`String` that
"search for something" are polymorphic over a pattern instead of taking just one particular kind
of pattern such as a {name}`Char`. The key components are:
- {name (scope := "Init.Data.String.Pattern.Basic")}`ToForwardSearcher`
- {name (scope := "Init.Data.String.Pattern.Basic")}`ForwardPattern`
- {name (scope := "Init.Data.String.Pattern.Basic")}`ToBackwardSearcher`
- {name (scope := "Init.Data.String.Pattern.Basic")}`BackwardPattern`
-/
public section
namespace String.Slice.Pattern
/--
A step taken during the traversal of a {name}`Slice` by a forward or backward searcher.
-/
inductive SearchStep (s : Slice) where
/--
The subslice starting at {name}`startPos` and ending at {name}`endPos` did not match the pattern.
-/
| rejected (startPos endPos : s.Pos)
/--
The subslice starting at {name}`startPos` and ending at {name}`endPos` matched the pattern.
-/
| matched (startPos endPos : s.Pos)
deriving Inhabited
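As an informal illustration of this contract (the concrete steps below are a sketch, not output produced by this file): a forward character searcher visits a slice one character at a time, classifying each single-character range.

```lean
-- Sketch: stepping a forward searcher for the pattern 'l' over "hello"
-- would emit, one character per step:
--   .rejected 'h'   .rejected 'e'   .matched 'l'   .matched 'l'   .rejected 'o'
-- The reported ranges are adjacent, non-overlapping, and together cover the
-- whole slice, as required of every searcher below.
```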
/--
Provides a conversion from a pattern to an iterator of {name}`SearchStep` searching for matches of
the pattern from the start towards the end of a {name}`Slice`.
-/
class ToForwardSearcher (ρ : Type) (σ : outParam (Slice → Type)) where
/--
Build an iterator of {name}`SearchStep` corresponding to matches of {name}`pat` along the slice
{name}`s`. The {name}`SearchStep`s returned by this iterator must contain ranges that are
adjacent, non-overlapping and cover all of {name}`s`.
-/
toSearcher : (s : Slice) → (pat : ρ) → Std.Iter (α := σ s) (SearchStep s)
/--
Provides simple pattern matching capabilities from the start of a {name}`Slice`.
While these operations can be implemented on top of {name}`ToForwardSearcher`, some patterns allow
for more efficient implementations, so this class can be used to specialise for them. If there is
no need to specialise in this fashion,
{name (scope := "Init.Data.String.Pattern.Basic")}`ForwardPattern.defaultImplementation` can be used
to automatically derive an instance.
-/
class ForwardPattern (ρ : Type) where
/--
Checks whether the slice starts with the pattern.
-/
startsWith : Slice → ρ → Bool
/--
Checks whether the slice starts with the pattern. If it does, returns the slice with the prefix
removed, otherwise {name}`none`.
-/
dropPrefix? : Slice → ρ → Option Slice
namespace Internal
@[extern "lean_slice_memcmp"]
def memcmp (lhs rhs : @& Slice) (lstart : @& String.Pos) (rstart : @& String.Pos)
(len : @& String.Pos) (h1 : lstart + len ≤ lhs.utf8ByteSize)
(h2 : rstart + len ≤ rhs.utf8ByteSize) : Bool :=
go 0
where
go (curr : String.Pos) : Bool :=
if h : curr < len then
have hl := by
simp [Pos.le_iff] at h h1
omega
have hr := by
simp [Pos.le_iff] at h h2
omega
if lhs.getUtf8Byte (lstart + curr) hl == rhs.getUtf8Byte (rstart + curr) hr then
go curr.inc
else
false
else
true
termination_by len.byteIdx - curr.byteIdx
decreasing_by
simp at h
omega
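Informally (this reading of `go` is a sketch, not a theorem stated in this file), `memcmp` decides bounded byte-range equality:

```lean
-- `memcmp lhs rhs lstart rstart len h1 h2 = true` holds exactly when every
-- byte in the two compared windows agrees:
--   ∀ i, i < len → lhs.getUtf8Byte (lstart + i) _ = rhs.getUtf8Byte (rstart + i) _
-- The pure definition walks the windows byte by byte; the
-- `@[extern "lean_slice_memcmp"]` binding lets compiled code replace the
-- loop with a native comparison.
```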
variable {ρ : Type} {σ : Slice → Type}
variable [∀ s, Std.Iterators.Iterator (σ s) Id (SearchStep s)]
variable [∀ s, Std.Iterators.Finite (σ s) Id]
/--
Tries to advance the {name}`searcher` until the next {name}`SearchStep.matched` and returns it. If
no match is found before the end, returns {name}`none`.
-/
@[inline]
def nextMatch (searcher : Std.Iter (α := σ s) (SearchStep s)) :
Option (Std.Iter (α := σ s) (SearchStep s) × s.Pos × s.Pos) :=
go searcher
where
go [∀ s, Std.Iterators.Finite (σ s) Id] (searcher : Std.Iter (α := σ s) (SearchStep s)) :
Option (Std.Iter (α := σ s) (SearchStep s) × s.Pos × s.Pos) :=
match searcher.step with
| .yield it (.matched startPos endPos) _ => some (it, startPos, endPos)
| .yield it (.rejected ..) _ | .skip it .. => go it
| .done .. => none
termination_by Std.Iterators.Iter.finitelyManySteps searcher
/--
Tries to advance the {name}`searcher` until the next {name}`SearchStep.rejected` and returns it. If
no reject is found before the end, returns {name}`none`.
-/
@[inline]
def nextReject (searcher : Std.Iter (α := σ s) (SearchStep s)) :
Option (Std.Iter (α := σ s) (SearchStep s) × s.Pos × s.Pos) :=
go searcher
where
go [∀ s, Std.Iterators.Finite (σ s) Id] (searcher : Std.Iter (α := σ s) (SearchStep s)) :
Option (Std.Iter (α := σ s) (SearchStep s) × s.Pos × s.Pos) :=
match searcher.step with
| .yield it (.rejected startPos endPos) _ => some (it, startPos, endPos)
| .yield it (.matched ..) _ | .skip it .. => go it
| .done .. => none
termination_by Std.Iterators.Iter.finitelyManySteps searcher
end Internal
namespace ForwardPattern
variable {ρ : Type} {σ : Slice → Type}
variable [∀ s, Std.Iterators.Iterator (σ s) Id (SearchStep s)]
variable [ToForwardSearcher ρ σ]
@[specialize pat]
def defaultStartsWith (s : Slice) (pat : ρ) : Bool :=
let searcher := ToForwardSearcher.toSearcher s pat
match searcher.step with
| .yield _ (.matched start ..) _ => s.startPos = start
| _ => false
@[specialize pat]
def defaultDropPrefix? (s : Slice) (pat : ρ) : Option Slice :=
let searcher := ToForwardSearcher.toSearcher s pat
match searcher.step with
| .yield _ (.matched _ endPos) _ => some (s.replaceStart endPos)
| _ => none
@[always_inline, inline]
def defaultImplementation : ForwardPattern ρ where
startsWith := defaultStartsWith
dropPrefix? := defaultDropPrefix?
end ForwardPattern
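As a hypothetical sketch of how a new pattern type plugs in (`MyPat` and `MySearcher` are illustrative names, not part of this file): once the searcher state has `Iterator`/`Finite` instances and a `ToForwardSearcher` instance exists, the defaulted operations come for free:

```lean
-- Hypothetical custom pattern (illustrative names only):
--   structure MyPat where ...
--   def MySearcher (s : Slice) : Type := ...
--   instance (s : Slice) : Std.Iterators.Iterator (MySearcher s) Id (SearchStep s) := ...
--   instance (s : Slice) : Std.Iterators.Finite (MySearcher s) Id := ...
--   instance : ToForwardSearcher MyPat MySearcher := ⟨myIter⟩
-- With those in place, the generic machinery derives the rest:
--   instance : ForwardPattern MyPat := ForwardPattern.defaultImplementation
-- A pattern with a faster direct check can instead implement
-- `startsWith`/`dropPrefix?` by hand, as the slice pattern instance does via
-- `Internal.memcmp`.
```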
/--
Provides a conversion from a pattern to an iterator of {name}`SearchStep` searching for matches of
the pattern from the end towards the start of a {name}`Slice`.
-/
class ToBackwardSearcher (ρ : Type) (σ : outParam (Slice → Type)) where
/--
Build an iterator of {name}`SearchStep` corresponding to matches of {lean}`pat` along the slice
{name}`s`. The {name}`SearchStep`s returned by this iterator must contain ranges that are
adjacent, non-overlapping and cover all of {name}`s`.
-/
toSearcher : (s : Slice) → (pat : ρ) → Std.Iter (α := σ s) (SearchStep s)
/--
Provides simple pattern matching capabilities from the end of a {name}`Slice`.
While these operations can be implemented on top of {name}`ToBackwardSearcher`, some patterns allow
for more efficient implementations, so this class can be used to specialise for them. If there is
no need to specialise in this fashion,
{name (scope := "Init.Data.String.Pattern.Basic")}`BackwardPattern.defaultImplementation` can be
used to automatically derive an instance.
-/
class BackwardPattern (ρ : Type) where
endsWith : Slice → ρ → Bool
dropSuffix? : Slice → ρ → Option Slice
namespace ToBackwardSearcher
variable {ρ : Type} {σ : Slice → Type}
variable [∀ s, Std.Iterators.Iterator (σ s) Id (SearchStep s)]
variable [ToBackwardSearcher ρ σ]
@[specialize pat]
def defaultEndsWith (s : Slice) (pat : ρ) : Bool :=
let searcher := ToBackwardSearcher.toSearcher s pat
match searcher.step with
| .yield _ (.matched _ endPos) _ => s.endPos = endPos
| _ => false
@[specialize pat]
def defaultDropSuffix? (s : Slice) (pat : ρ) : Option Slice :=
let searcher := ToBackwardSearcher.toSearcher s pat
match searcher.step with
| .yield _ (.matched startPos _) _ => some (s.replaceEnd startPos)
| _ => none
@[always_inline, inline]
def defaultImplementation : BackwardPattern ρ where
endsWith := defaultEndsWith
dropSuffix? := defaultDropSuffix?
end ToBackwardSearcher
end String.Slice.Pattern

View File

@@ -0,0 +1,160 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Init.Data.String.Pattern.Basic
public import Init.Data.Iterators.Internal.Termination
public import Init.Data.Iterators.Consumers.Monadic.Loop
set_option doc.verso true
/-!
This module defines the necessary instances to register {name}`Char` with the pattern framework.
-/
public section
namespace String.Slice.Pattern
structure ForwardCharSearcher (s : Slice) where
currPos : s.Pos
needle : Char
deriving Inhabited
namespace ForwardCharSearcher
@[inline]
def iter (s : Slice) (c : Char) : Std.Iter (α := ForwardCharSearcher s) (SearchStep s) :=
{ internalState := { currPos := s.startPos, needle := c }}
instance (s : Slice) : Std.Iterators.Iterator (ForwardCharSearcher s) Id (SearchStep s) where
IsPlausibleStep it
| .yield it' out =>
it.internalState.needle = it'.internalState.needle ∧
∃ h1 : it.internalState.currPos ≠ s.endPos,
it'.internalState.currPos = it.internalState.currPos.next h1 ∧
match out with
| .matched startPos endPos =>
it.internalState.currPos = startPos ∧
it'.internalState.currPos = endPos ∧
it.internalState.currPos.get h1 = it.internalState.needle
| .rejected startPos endPos =>
it.internalState.currPos = startPos ∧
it'.internalState.currPos = endPos ∧
it.internalState.currPos.get h1 ≠ it.internalState.needle
| .skip _ => False
| .done => it.internalState.currPos = s.endPos
step := fun ⟨⟨currPos, needle⟩⟩ =>
if h1 : currPos = s.endPos then
pure ⟨.done, by simp [h1]⟩
else
let nextPos := currPos.next h1
let nextIt := ⟨⟨nextPos, needle⟩⟩
if h2 : currPos.get h1 = needle then
pure ⟨.yield nextIt (.matched currPos nextPos), by simp [h1, h2, nextIt, nextPos]⟩
else
pure ⟨.yield nextIt (.rejected currPos nextPos), by simp [h1, h2, nextIt, nextPos]⟩
def finitenessRelation : Std.Iterators.FinitenessRelation (ForwardCharSearcher s) Id where
rel := InvImage WellFoundedRelation.rel
(fun it => s.utf8ByteSize.byteIdx - it.internalState.currPos.offset.byteIdx)
wf := InvImage.wf _ WellFoundedRelation.wf
subrelation {it it'} h := by
simp_wf
obtain ⟨step, h, h'⟩ := h
cases step
· cases h
obtain ⟨_, h1, h2, _⟩ := h'
have h3 := Char.utf8Size_pos (it.internalState.currPos.get h1)
have h4 := it.internalState.currPos.isValidForSlice.le_utf8ByteSize
simp [Pos.ext_iff, String.Pos.ext_iff, Pos.le_iff] at h1 h2 h4
omega
· cases h'
· cases h
instance : Std.Iterators.Finite (ForwardCharSearcher s) Id :=
.of_finitenessRelation finitenessRelation
instance : Std.Iterators.IteratorLoop (ForwardCharSearcher s) Id Id :=
.defaultImplementation
instance : ToForwardSearcher Char ForwardCharSearcher where
toSearcher := iter
instance : ForwardPattern Char := .defaultImplementation
end ForwardCharSearcher
structure BackwardCharSearcher (s : Slice) where
currPos : s.Pos
needle : Char
deriving Inhabited
namespace BackwardCharSearcher
@[inline]
def iter (s : Slice) (c : Char) : Std.Iter (α := BackwardCharSearcher s) (SearchStep s) :=
{ internalState := { currPos := s.endPos, needle := c }}
instance (s : Slice) : Std.Iterators.Iterator (BackwardCharSearcher s) Id (SearchStep s) where
IsPlausibleStep it
| .yield it' out =>
it.internalState.needle = it'.internalState.needle ∧
∃ h1 : it.internalState.currPos ≠ s.startPos,
it'.internalState.currPos = it.internalState.currPos.prev h1 ∧
match out with
| .matched startPos endPos =>
it.internalState.currPos = endPos ∧
it'.internalState.currPos = startPos ∧
(it.internalState.currPos.prev h1).get Pos.prev_ne_endPos = it.internalState.needle
| .rejected startPos endPos =>
it.internalState.currPos = endPos ∧
it'.internalState.currPos = startPos ∧
(it.internalState.currPos.prev h1).get Pos.prev_ne_endPos ≠ it.internalState.needle
| .skip _ => False
| .done => it.internalState.currPos = s.startPos
step := fun ⟨⟨currPos, needle⟩⟩ =>
if h1 : currPos = s.startPos then
pure ⟨.done, by simp [h1]⟩
else
let nextPos := currPos.prev h1
let nextIt := ⟨⟨nextPos, needle⟩⟩
if h2 : nextPos.get Pos.prev_ne_endPos = needle then
pure ⟨.yield nextIt (.matched nextPos currPos), by simp [h1, h2, nextIt, nextPos]⟩
else
pure ⟨.yield nextIt (.rejected nextPos currPos), by simp [h1, h2, nextIt, nextPos]⟩
def finitenessRelation : Std.Iterators.FinitenessRelation (BackwardCharSearcher s) Id where
rel := InvImage WellFoundedRelation.rel
(fun it => it.internalState.currPos.offset.byteIdx)
wf := InvImage.wf _ WellFoundedRelation.wf
subrelation {it it'} h := by
simp_wf
obtain ⟨step, h, h'⟩ := h
cases step
· cases h
obtain ⟨_, h1, h2, _⟩ := h'
have h3 := Pos.offset_prev_lt_offset (h := h1)
simp [Pos.ext_iff, String.Pos.ext_iff] at h2 h3
omega
· cases h'
· cases h
instance : Std.Iterators.Finite (BackwardCharSearcher s) Id :=
.of_finitenessRelation finitenessRelation
instance : Std.Iterators.IteratorLoop (BackwardCharSearcher s) Id Id :=
.defaultImplementation
instance : ToBackwardSearcher Char BackwardCharSearcher where
toSearcher := iter
instance : BackwardPattern Char := ToBackwardSearcher.defaultImplementation
end BackwardCharSearcher
end String.Slice.Pattern

View File

@@ -0,0 +1,162 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Init.Data.String.Pattern.Basic
public import Init.Data.Iterators.Internal.Termination
public import Init.Data.Iterators.Consumers.Monadic.Loop
set_option doc.verso true
/-!
This module defines the necessary instances to register {lean}`Char → Bool` with the pattern
framework.
-/
public section
namespace String.Slice.Pattern
structure ForwardCharPredSearcher (s : Slice) where
currPos : s.Pos
needle : Char → Bool
deriving Inhabited
namespace ForwardCharPredSearcher
@[inline]
def iter (s : Slice) (p : Char → Bool) : Std.Iter (α := ForwardCharPredSearcher s) (SearchStep s) :=
{ internalState := { currPos := s.startPos, needle := p }}
instance (s : Slice) : Std.Iterators.Iterator (ForwardCharPredSearcher s) Id (SearchStep s) where
IsPlausibleStep it
| .yield it' out =>
it.internalState.needle = it'.internalState.needle ∧
∃ h1 : it.internalState.currPos ≠ s.endPos,
it'.internalState.currPos = it.internalState.currPos.next h1 ∧
match out with
| .matched startPos endPos =>
it.internalState.currPos = startPos ∧
it'.internalState.currPos = endPos ∧
it.internalState.needle (it.internalState.currPos.get h1)
| .rejected startPos endPos =>
it.internalState.currPos = startPos ∧
it'.internalState.currPos = endPos ∧
¬ it.internalState.needle (it.internalState.currPos.get h1)
| .skip _ => False
| .done => it.internalState.currPos = s.endPos
step := fun ⟨⟨currPos, needle⟩⟩ =>
if h1 : currPos = s.endPos then
pure ⟨.done, by simp [h1]⟩
else
let nextPos := currPos.next h1
let nextIt := ⟨⟨nextPos, needle⟩⟩
if h2 : needle <| currPos.get h1 then
pure ⟨.yield nextIt (.matched currPos nextPos), by simp [h1, h2, nextPos, nextIt]⟩
else
pure ⟨.yield nextIt (.rejected currPos nextPos), by simp [h1, h2, nextPos, nextIt]⟩
def finitenessRelation : Std.Iterators.FinitenessRelation (ForwardCharPredSearcher s) Id where
rel := InvImage WellFoundedRelation.rel
(fun it => s.utf8ByteSize.byteIdx - it.internalState.currPos.offset.byteIdx)
wf := InvImage.wf _ WellFoundedRelation.wf
subrelation {it it'} h := by
simp_wf
obtain ⟨step, h, h'⟩ := h
cases step
· cases h
obtain ⟨_, h1, h2, _⟩ := h'
have h3 := Char.utf8Size_pos (it.internalState.currPos.get h1)
have h4 := it.internalState.currPos.isValidForSlice.le_utf8ByteSize
simp [Pos.ext_iff, String.Pos.ext_iff, Pos.le_iff] at h1 h2 h4
omega
· cases h'
· cases h
instance : Std.Iterators.Finite (ForwardCharPredSearcher s) Id :=
.of_finitenessRelation finitenessRelation
instance : Std.Iterators.IteratorLoop (ForwardCharPredSearcher s) Id Id :=
.defaultImplementation
instance : ToForwardSearcher (Char → Bool) ForwardCharPredSearcher where
toSearcher := iter
instance : ForwardPattern (Char → Bool) := .defaultImplementation
end ForwardCharPredSearcher
structure BackwardCharPredSearcher (s : Slice) where
currPos : s.Pos
needle : Char → Bool
deriving Inhabited
namespace BackwardCharPredSearcher
@[inline]
def iter (s : Slice) (c : Char → Bool) : Std.Iter (α := BackwardCharPredSearcher s) (SearchStep s) :=
{ internalState := { currPos := s.endPos, needle := c }}
instance (s : Slice) : Std.Iterators.Iterator (BackwardCharPredSearcher s) Id (SearchStep s) where
IsPlausibleStep it
| .yield it' out =>
it.internalState.needle = it'.internalState.needle ∧
∃ h1 : it.internalState.currPos ≠ s.startPos,
it'.internalState.currPos = it.internalState.currPos.prev h1 ∧
match out with
| .matched startPos endPos =>
it.internalState.currPos = endPos ∧
it'.internalState.currPos = startPos ∧
it.internalState.needle ((it.internalState.currPos.prev h1).get Pos.prev_ne_endPos)
| .rejected startPos endPos =>
it.internalState.currPos = endPos ∧
it'.internalState.currPos = startPos ∧
¬ it.internalState.needle ((it.internalState.currPos.prev h1).get Pos.prev_ne_endPos)
| .skip _ => False
| .done => it.internalState.currPos = s.startPos
step := fun ⟨⟨currPos, needle⟩⟩ =>
if h1 : currPos = s.startPos then
pure ⟨.done, by simp [h1]⟩
else
let nextPos := currPos.prev h1
let nextIt := ⟨⟨nextPos, needle⟩⟩
if h2 : needle <| nextPos.get Pos.prev_ne_endPos then
pure ⟨.yield nextIt (.matched nextPos currPos), by simp [h1, h2, nextIt, nextPos]⟩
else
pure ⟨.yield nextIt (.rejected nextPos currPos), by simp [h1, h2, nextIt, nextPos]⟩
def finitenessRelation : Std.Iterators.FinitenessRelation (BackwardCharPredSearcher s) Id where
rel := InvImage WellFoundedRelation.rel
(fun it => it.internalState.currPos.offset.byteIdx)
wf := InvImage.wf _ WellFoundedRelation.wf
subrelation {it it'} h := by
simp_wf
obtain ⟨step, h, h'⟩ := h
cases step
· cases h
obtain ⟨_, h1, h2, _⟩ := h'
have h3 := Pos.offset_prev_lt_offset (h := h1)
simp [Pos.ext_iff, String.Pos.ext_iff] at h2 h3
omega
· cases h'
· cases h
instance : Std.Iterators.Finite (BackwardCharPredSearcher s) Id :=
.of_finitenessRelation finitenessRelation
instance : Std.Iterators.IteratorLoop (BackwardCharPredSearcher s) Id Id :=
.defaultImplementation
instance : ToBackwardSearcher (Char → Bool) BackwardCharPredSearcher where
toSearcher := iter
instance : BackwardPattern (Char → Bool) := ToBackwardSearcher.defaultImplementation
end BackwardCharPredSearcher
end String.Slice.Pattern

View File

@@ -0,0 +1,273 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
module
prelude
public import Init.Data.String.Pattern.Basic
public import Init.Data.Iterators.Internal.Termination
public import Init.Data.Iterators.Consumers.Monadic.Loop
set_option doc.verso true
/-!
This module defines the necessary instances to register {name}`String` and {name}`String.Slice`
with the pattern framework.
-/
public section
namespace String.Slice.Pattern
inductive ForwardSliceSearcher (s : Slice) where
| empty (pos : s.Pos)
| proper (needle : Slice) (table : Array String.Pos) (stackPos : String.Pos) (needlePos : String.Pos)
| atEnd
deriving Inhabited
namespace ForwardSliceSearcher
partial def buildTable (pat : Slice) : Array String.Pos :=
if pat.utf8ByteSize == 0 then
#[]
else
let arr := Array.emptyWithCapacity pat.utf8ByteSize.byteIdx
let arr := arr.push 0
go 1 arr
where
go (pos : String.Pos) (table : Array String.Pos) :=
if h : pos < pat.utf8ByteSize then
let patByte := pat.getUtf8Byte pos h
let distance := computeDistance table[table.size - 1]! patByte table
let distance := if patByte = pat.getUtf8Byte! distance then distance.inc else distance
go pos.inc (table.push distance)
else
table
computeDistance (distance : String.Pos) (patByte : UInt8) (table : Array String.Pos) :
String.Pos :=
if distance > 0 && patByte != pat.getUtf8Byte! distance then
computeDistance table[distance.byteIdx - 1]! patByte table
else
distance
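`buildTable` computes the classic Knuth–Morris–Pratt failure function over the pattern's bytes: `table[i]` is the length of the longest proper border (prefix that is also a suffix) of the first `i + 1` bytes. A hand-computed illustration for the ASCII pattern `"abab"`:

```lean
-- pos 0: table := #[0]                         (seeded before the loop)
-- pos 1: 'b' ≠ pat[0], distance stays 0      → table := #[0, 0]
-- pos 2: 'a' = pat[0], distance bumps to 1   → table := #[0, 0, 1]
-- pos 3: 'b' = pat[1], distance bumps to 2   → table := #[0, 0, 1, 2]
-- so the table for "abab" is #[0, 0, 1, 2], letting the searcher resume a
-- partial match after a mismatch instead of rescanning from scratch.
```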
@[inline]
def iter (s : Slice) (pat : Slice) : Std.Iter (α := ForwardSliceSearcher s) (SearchStep s) :=
if pat.utf8ByteSize == 0 then
{ internalState := .empty s.startPos }
else
{ internalState := .proper pat (buildTable pat) s.startPos.offset pat.startPos.offset }
partial def backtrackIfNecessary (pat : Slice) (table : Array String.Pos) (stackByte : UInt8)
(needlePos : String.Pos) : String.Pos :=
if needlePos != 0 && stackByte != pat.getUtf8Byte! needlePos then
backtrackIfNecessary pat table stackByte table[needlePos.byteIdx - 1]!
else
needlePos
instance (s : Slice) : Std.Iterators.Iterator (ForwardSliceSearcher s) Id (SearchStep s) where
IsPlausibleStep it
| .yield it' out =>
match it.internalState with
| .empty pos =>
(∃ newPos, pos.offset < newPos.offset ∧ it'.internalState = .empty newPos) ∨
it'.internalState = .atEnd
| .proper needle table stackPos needlePos =>
(∃ newStackPos newNeedlePos,
stackPos < newStackPos ∧
newStackPos ≤ s.utf8ByteSize ∧
it'.internalState = .proper needle table newStackPos newNeedlePos) ∨
it'.internalState = .atEnd
| .atEnd => False
| .skip _ => False
| .done => True
step := fun ⟨iter⟩ =>
match iter with
| .empty pos =>
let res := .matched pos pos
if h : pos ≠ s.endPos then
pure ⟨.yield ⟨.empty (pos.next h)⟩ res, by simp [Char.utf8Size_pos]⟩
else
pure ⟨.yield ⟨.atEnd⟩ res, by simp⟩
| .proper needle table stackPos needlePos =>
let rec findNext (startPos : String.Pos)
(currStackPos : String.Pos) (needlePos : String.Pos) (h : stackPos ≤ currStackPos) :=
if h1 : currStackPos < s.utf8ByteSize then
let stackByte := s.getUtf8Byte currStackPos h1
let needlePos := backtrackIfNecessary needle table stackByte needlePos
let patByte := needle.getUtf8Byte! needlePos
if stackByte != patByte then
let nextStackPos := s.findNextPos currStackPos h1 |>.offset
let res := .rejected (s.pos! startPos) (s.pos! nextStackPos)
have hiter := by
left
exists nextStackPos
have haux := lt_offset_findNextPos h1
simp only [pos_lt_eq, proper.injEq, true_and, exists_and_left, exists_eq', and_true,
nextStackPos]
constructor
· simp [String.Pos.le_iff] at h haux
omega
· apply Pos.IsValidForSlice.le_utf8ByteSize
apply Pos.isValidForSlice
⟨.yield ⟨.proper needle table nextStackPos needlePos⟩ res, hiter⟩
else
let needlePos := needlePos.inc
if needlePos == needle.utf8ByteSize then
let nextStackPos := currStackPos.inc
let res := .matched (s.pos! startPos) (s.pos! nextStackPos)
have hiter := by
left
exists nextStackPos
simp only [pos_lt_eq, Pos.byteIdx_inc, proper.injEq, true_and, exists_and_left,
exists_eq', and_true, nextStackPos]
constructor
· simp [String.Pos.le_iff] at h
omega
· simp [String.Pos.le_iff] at h1
omega
⟨.yield ⟨.proper needle table nextStackPos 0⟩ res, hiter⟩
else
have hinv := by
simp [String.Pos.le_iff] at h
omega
findNext startPos currStackPos.inc needlePos hinv
else
if startPos != s.utf8ByteSize then
let res := .rejected (s.pos! startPos) (s.pos! currStackPos)
⟨.yield ⟨.atEnd⟩ res, by simp⟩
else
⟨.done, by simp⟩
termination_by s.utf8ByteSize.byteIdx - currStackPos.byteIdx
decreasing_by
simp at h1
omega
findNext stackPos stackPos needlePos (by simp)
| .atEnd => pure ⟨.done, by simp⟩
private def toPair : ForwardSliceSearcher s → (Nat × Nat)
| .empty pos => (1, s.utf8ByteSize.byteIdx - pos.offset.byteIdx)
| .proper _ _ sp _ => (1, s.utf8ByteSize.byteIdx - sp.byteIdx)
| .atEnd => (0, 0)
private instance : WellFoundedRelation (ForwardSliceSearcher s) where
rel s1 s2 := Prod.Lex (· < ·) (· < ·) s1.toPair s2.toPair
wf := by
apply InvImage.wf
apply (Prod.lex _ _).wf
private def finitenessRelation :
Std.Iterators.FinitenessRelation (ForwardSliceSearcher s) Id where
rel := InvImage WellFoundedRelation.rel (fun it => it.internalState)
wf := InvImage.wf _ WellFoundedRelation.wf
subrelation {it it'} h := by
simp_wf
obtain ⟨step, h, h'⟩ := h
cases step
· cases h
simp only [Std.Iterators.IterM.IsPlausibleStep, Std.Iterators.Iterator.IsPlausibleStep] at h'
split at h'
· next heq =>
rw [heq]
rcases h' with ⟨np, h1', h2'⟩ | h'
· rw [h2']
apply Prod.Lex.right'
· simp
· have haux := np.isValidForSlice.le_utf8ByteSize
simp [String.Pos.le_iff] at h1' haux
omega
· apply Prod.Lex.left
simp [h']
· next heq =>
rw [heq]
rcases h' with ⟨np, sp, h1', h2', h3'⟩ | h'
· rw [h3']
apply Prod.Lex.right'
· simp
· simp [String.Pos.le_iff] at h1' h2'
omega
· apply Prod.Lex.left
simp [h']
· contradiction
· cases h'
· cases h
@[no_expose]
instance : Std.Iterators.Finite (ForwardSliceSearcher s) Id :=
.of_finitenessRelation finitenessRelation
instance : Std.Iterators.IteratorLoop (ForwardSliceSearcher s) Id Id :=
.defaultImplementation
instance : ToForwardSearcher Slice ForwardSliceSearcher where
toSearcher := iter
@[inline]
def startsWith (s : Slice) (pat : Slice) : Bool :=
if h : pat.utf8ByteSize ≤ s.utf8ByteSize then
have hs := by
simp [Pos.le_iff] at h
omega
have hp := by
simp [Pos.le_iff]
Internal.memcmp s pat s.startPos.offset pat.startPos.offset pat.utf8ByteSize hs hp
else
false
@[inline]
def dropPrefix? (s : Slice) (pat : Slice) : Option Slice :=
if startsWith s pat then
some <| s.replaceStart <| s.pos! <| s.startPos.offset + pat.utf8ByteSize
else
none
instance : ForwardPattern Slice where
startsWith := startsWith
dropPrefix? := dropPrefix?
instance : ToForwardSearcher String ForwardSliceSearcher where
toSearcher slice pat := iter slice pat.toSlice
instance : ForwardPattern String where
startsWith s pat := startsWith s pat.toSlice
dropPrefix? s pat := dropPrefix? s pat.toSlice
end ForwardSliceSearcher
namespace BackwardSliceSearcher
@[inline]
def endsWith (s : Slice) (pat : Slice) : Bool :=
if h : pat.utf8ByteSize ≤ s.utf8ByteSize then
let sStart := s.endPos.offset - pat.utf8ByteSize
let patStart := pat.startPos.offset
have hs := by
simp [sStart, Pos.le_iff] at h
omega
have hp := by
simp [patStart, Pos.le_iff] at h
Internal.memcmp s pat sStart patStart pat.utf8ByteSize hs hp
else
false
@[inline]
def dropSuffix? (s : Slice) (pat : Slice) : Option Slice :=
if endsWith s pat then
some <| s.replaceEnd <| s.pos! <| s.endPos.offset - pat.utf8ByteSize
else
none
instance : BackwardPattern Slice where
endsWith := endsWith
dropSuffix? := dropSuffix?
instance : BackwardPattern String where
endsWith s pat := endsWith s pat.toSlice
dropSuffix? s pat := dropSuffix? s pat.toSlice
end BackwardSliceSearcher
end String.Slice.Pattern

File diff suppressed because it is too large

View File

@@ -7,7 +7,7 @@ module
prelude
public import Init.Meta
import Init.Data.String.Extra
public import Init.Data.String.Extra
/-!
Here we give the implementation of `Name.toString`. There is also a private implementation in
@@ -26,7 +26,7 @@ namespace Lean.Name
-- If you change this, also change the corresponding function in `Init.Meta`.
private partial def needsNoEscapeAsciiRest (s : String) (i : Nat) : Bool :=
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
let c := s.getUtf8Byte i h
isIdRestAscii c && needsNoEscapeAsciiRest s (i + 1)
else
true

View File

@@ -21,12 +21,7 @@ open Nat
/-- Converts a `Fin UInt8.size` into the corresponding `UInt8`. -/
@[inline] def UInt8.ofFin (a : Fin UInt8.size) : UInt8 := ⟨⟨a⟩⟩
@[deprecated UInt8.ofBitVec (since := "2025-02-12"), inherit_doc UInt8.ofBitVec]
def UInt8.mk (bitVec : BitVec 8) : UInt8 :=
UInt8.ofBitVec bitVec
@[inline, deprecated UInt8.ofNatLT (since := "2025-02-13"), inherit_doc UInt8.ofNatLT]
def UInt8.ofNatCore (n : Nat) (h : n < UInt8.size) : UInt8 :=
UInt8.ofNatLT n h
/-- Converts an `Int` to a `UInt8` by taking the (non-negative) remainder of the division by `2 ^ 8`. -/
def UInt8.ofInt (x : Int) : UInt8 := ofNat (x % 2 ^ 8).toNat
@@ -188,14 +183,16 @@ def Bool.toUInt8 (b : Bool) : UInt8 := if b then 1 else 0
instance : Max UInt8 := maxOfLe
instance : Min UInt8 := minOfLe
/--
If `b` is the ASCII value of an uppercase character, return the corresponding
lowercase value, otherwise leave it untouched.
-/
@[inline]
def UInt8.toAsciiLower (b : UInt8) : UInt8 :=
if b >= 65 && b <= 90 then (b + 32) else b
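For example, using standard ASCII values (hand-checked against the definition):

```lean
#eval UInt8.toAsciiLower 65  -- 'A' lies in [65, 90], so 65 + 32 = 97 ('a')
#eval UInt8.toAsciiLower 97  -- 'a' is outside the uppercase range, returned unchanged: 97
#eval UInt8.toAsciiLower 48  -- '0' is likewise untouched: 48
```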
/-- Converts a `Fin UInt16.size` into the corresponding `UInt16`. -/
@[inline] def UInt16.ofFin (a : Fin UInt16.size) : UInt16 := ⟨⟨a⟩⟩
@[deprecated UInt16.ofBitVec (since := "2025-02-12"), inherit_doc UInt16.ofBitVec]
def UInt16.mk (bitVec : BitVec 16) : UInt16 :=
UInt16.ofBitVec bitVec
@[inline, deprecated UInt16.ofNatLT (since := "2025-02-13"), inherit_doc UInt16.ofNatLT]
def UInt16.ofNatCore (n : Nat) (h : n < UInt16.size) : UInt16 :=
UInt16.ofNatLT n h
/-- Converts an `Int` to a `UInt16` by taking the (non-negative) remainder of the division by `2 ^ 16`. -/
def UInt16.ofInt (x : Int) : UInt16 := ofNat (x % 2 ^ 16).toNat
@@ -406,12 +403,6 @@ instance : Min UInt16 := minOfLe
/-- Converts a `Fin UInt32.size` into the corresponding `UInt32`. -/
@[inline] def UInt32.ofFin (a : Fin UInt32.size) : UInt32 := ⟨⟨a⟩⟩
@[deprecated UInt32.ofBitVec (since := "2025-02-12"), inherit_doc UInt32.ofBitVec]
def UInt32.mk (bitVec : BitVec 32) : UInt32 :=
UInt32.ofBitVec bitVec
@[inline, deprecated UInt32.ofNatLT (since := "2025-02-13"), inherit_doc UInt32.ofNatLT]
def UInt32.ofNatCore (n : Nat) (h : n < UInt32.size) : UInt32 :=
UInt32.ofNatLT n h
/-- Converts an `Int` to a `UInt32` by taking the (non-negative) remainder of the division by `2 ^ 32`. -/
def UInt32.ofInt (x : Int) : UInt32 := ofNat (x % 2 ^ 32).toNat
@@ -585,12 +576,6 @@ def Bool.toUInt32 (b : Bool) : UInt32 := if b then 1 else 0
/-- Converts a `Fin UInt64.size` into the corresponding `UInt64`. -/
@[inline] def UInt64.ofFin (a : Fin UInt64.size) : UInt64 := ⟨⟨a⟩⟩
@[deprecated UInt64.ofBitVec (since := "2025-02-12"), inherit_doc UInt64.ofBitVec]
def UInt64.mk (bitVec : BitVec 64) : UInt64 :=
UInt64.ofBitVec bitVec
@[inline, deprecated UInt64.ofNatLT (since := "2025-02-13"), inherit_doc UInt64.ofNatLT]
def UInt64.ofNatCore (n : Nat) (h : n < UInt64.size) : UInt64 :=
UInt64.ofNatLT n h
/-- Converts an `Int` to a `UInt64` by taking the (non-negative) remainder of the division by `2 ^ 64`. -/
def UInt64.ofInt (x : Int) : UInt64 := ofNat (x % 2 ^ 64).toNat
@@ -799,12 +784,6 @@ instance : Min UInt64 := minOfLe
/-- Converts a `Fin USize.size` into the corresponding `USize`. -/
@[inline] def USize.ofFin (a : Fin USize.size) : USize := ⟨⟨a⟩⟩
@[deprecated USize.ofBitVec (since := "2025-02-12"), inherit_doc USize.ofBitVec]
def USize.mk (bitVec : BitVec System.Platform.numBits) : USize :=
USize.ofBitVec bitVec
@[inline, deprecated USize.ofNatLT (since := "2025-02-13"), inherit_doc USize.ofNatLT]
def USize.ofNatCore (n : Nat) (h : n < USize.size) : USize :=
USize.ofNatLT n h
/-- Converts an `Int` to a `USize` by taking the (non-negative) remainder of the division by `2 ^ numBits`. -/
def USize.ofInt (x : Int) : USize := ofNat (x % 2 ^ System.Platform.numBits).toNat
@@ -812,14 +791,6 @@ def USize.ofInt (x : Int) : USize := ofNat (x % 2 ^ System.Platform.numBits).toN
@[simp] theorem USize.le_size : 2 ^ 32 ≤ USize.size := by cases USize.size_eq <;> simp_all
@[simp] theorem USize.size_le : USize.size ≤ 2 ^ 64 := by cases USize.size_eq <;> simp_all
@[deprecated USize.size_le (since := "2025-02-24")]
theorem usize_size_le : USize.size ≤ 18446744073709551616 :=
USize.size_le
@[deprecated USize.le_size (since := "2025-02-24")]
theorem le_usize_size : 4294967296 ≤ USize.size :=
USize.le_size
/--
Multiplies two word-sized unsigned integers, wrapping around on overflow. Usually accessed via the
`*` operator.

View File

@@ -26,8 +26,6 @@ open Nat
/-- Converts a `UInt8` into the corresponding `Fin UInt8.size`. -/
def UInt8.toFin (x : UInt8) : Fin UInt8.size := x.toBitVec.toFin
@[deprecated UInt8.toFin (since := "2025-02-12"), inherit_doc UInt8.toFin]
def UInt8.val (x : UInt8) : Fin UInt8.size := x.toFin
/--
Converts a natural number to an 8-bit unsigned integer, returning the largest representable value if
@@ -67,8 +65,6 @@ instance UInt8.instOfNat : OfNat UInt8 n := ⟨UInt8.ofNat n⟩
/-- Converts a `UInt16` into the corresponding `Fin UInt16.size`. -/
def UInt16.toFin (x : UInt16) : Fin UInt16.size := x.toBitVec.toFin
@[deprecated UInt16.toFin (since := "2025-02-12"), inherit_doc UInt16.toFin]
def UInt16.val (x : UInt16) : Fin UInt16.size := x.toFin
/--
Converts a natural number to a 16-bit unsigned integer, wrapping on overflow.
@@ -134,8 +130,6 @@ instance UInt16.instOfNat : OfNat UInt16 n := ⟨UInt16.ofNat n⟩
/-- Converts a `UInt32` into the corresponding `Fin UInt32.size`. -/
def UInt32.toFin (x : UInt32) : Fin UInt32.size := x.toBitVec.toFin
@[deprecated UInt32.toFin (since := "2025-02-12"), inherit_doc UInt32.toFin]
def UInt32.val (x : UInt32) : Fin UInt32.size := x.toFin
/--
Converts a natural number to a 32-bit unsigned integer, wrapping on overflow.
@@ -149,8 +143,7 @@ Examples:
-/
@[extern "lean_uint32_of_nat"]
def UInt32.ofNat (n : @& Nat) : UInt32 := ⟨BitVec.ofNat 32 n⟩
@[inline, deprecated UInt32.ofNatLT (since := "2025-02-13"), inherit_doc UInt32.ofNatLT]
def UInt32.ofNat' (n : Nat) (h : n < UInt32.size) : UInt32 := UInt32.ofNatLT n h
/--
Converts a natural number to a 32-bit unsigned integer, returning the largest representable value if
the number is too large.
@@ -210,23 +203,14 @@ theorem UInt32.ofNatLT_lt_of_lt {n m : Nat} (h1 : n < UInt32.size) (h2 : m < UIn
simp only [(· < ·), BitVec.toNat, ofNatLT, BitVec.ofNatLT, ofNat, BitVec.ofNat,
Fin.Internal.ofNat_eq_ofNat, Fin.ofNat, Nat.mod_eq_of_lt h2, imp_self]
@[deprecated UInt32.ofNatLT_lt_of_lt (since := "2025-02-13")]
theorem UInt32.ofNat'_lt_of_lt {n m : Nat} (h1 : n < UInt32.size) (h2 : m < UInt32.size) :
n < m → UInt32.ofNatLT n h1 < UInt32.ofNat m := UInt32.ofNatLT_lt_of_lt h1 h2
theorem UInt32.lt_ofNatLT_of_lt {n m : Nat} (h1 : n < UInt32.size) (h2 : m < UInt32.size) :
m < n → UInt32.ofNat m < UInt32.ofNatLT n h1 := by
simp only [(· < ·), BitVec.toNat, ofNatLT, BitVec.ofNatLT, ofNat, BitVec.ofNat, Fin.Internal.ofNat_eq_ofNat,
Fin.ofNat, Nat.mod_eq_of_lt h2, imp_self]
@[deprecated UInt32.lt_ofNatLT_of_lt (since := "2025-02-13")]
theorem UInt32.lt_ofNat'_of_lt {n m : Nat} (h1 : n < UInt32.size) (h2 : m < UInt32.size) :
m < n → UInt32.ofNat m < UInt32.ofNatLT n h1 := UInt32.lt_ofNatLT_of_lt h1 h2
/-- Converts a `UInt64` into the corresponding `Fin UInt64.size`. -/
def UInt64.toFin (x : UInt64) : Fin UInt64.size := x.toBitVec.toFin
@[deprecated UInt64.toFin (since := "2025-02-12"), inherit_doc UInt64.toFin]
def UInt64.val (x : UInt64) : Fin UInt64.size := x.toFin
/--
Converts a natural number to a 64-bit unsigned integer, wrapping on overflow.
@@ -315,18 +299,9 @@ def UInt32.toUInt64 (a : UInt32) : UInt64 := ⟨⟨a.toNat, Nat.lt_trans a.toBit
instance UInt64.instOfNat : OfNat UInt64 n := ⟨UInt64.ofNat n⟩
@[deprecated USize.size_eq (since := "2025-02-24")]
theorem usize_size_eq : USize.size = 4294967296 ∨ USize.size = 18446744073709551616 :=
USize.size_eq
@[deprecated USize.size_pos (since := "2025-02-24")]
theorem usize_size_pos : 0 < USize.size :=
USize.size_pos
/-- Converts a `USize` into the corresponding `Fin USize.size`. -/
def USize.toFin (x : USize) : Fin USize.size := x.toBitVec.toFin
@[deprecated USize.toFin (since := "2025-02-12"), inherit_doc USize.toFin]
def USize.val (x : USize) : Fin USize.size := x.toFin
/--
Converts an arbitrary-precision natural number to an unsigned word-sized integer, wrapping around on
overflow.

View File

@@ -42,9 +42,6 @@ macro "declare_uint_theorems" typeName:ident bits:term:arg : command => do
@[simp] theorem toNat_ofBitVec : (ofBitVec a).toNat = a.toNat := (rfl)
@[deprecated toNat_ofBitVec (since := "2025-02-12")]
theorem toNat_mk : (ofBitVec a).toNat = a.toNat := (rfl)
@[simp] theorem toNat_ofNat' {n : Nat} : (ofNat n).toNat = n % 2 ^ $bits := BitVec.toNat_ofNat ..
-- Not `simp` because we have simprocs which will avoid the modulo.
@@ -52,34 +49,14 @@ macro "declare_uint_theorems" typeName:ident bits:term:arg : command => do
@[simp] theorem toNat_ofNatLT {n : Nat} {h : n < size} : (ofNatLT n h).toNat = n := BitVec.toNat_ofNatLT ..
@[deprecated toNat_ofNatLT (since := "2025-02-13")]
theorem toNat_ofNatCore {n : Nat} {h : n < size} : (ofNatLT n h).toNat = n := BitVec.toNat_ofNatLT ..
@[simp] theorem toFin_val (x : $typeName) : x.toFin.val = x.toNat := (rfl)
@[deprecated toFin_val (since := "2025-02-12")]
theorem val_val_eq_toNat (x : $typeName) : x.toFin.val = x.toNat := (rfl)
@[simp] theorem toNat_toBitVec (x : $typeName) : x.toBitVec.toNat = x.toNat := (rfl)
@[simp] theorem toFin_toBitVec (x : $typeName) : x.toBitVec.toFin = x.toFin := (rfl)
@[deprecated toNat_toBitVec (since := "2025-02-21")]
theorem toNat_toBitVec_eq_toNat (x : $typeName) : x.toBitVec.toNat = x.toNat := (rfl)
@[simp] theorem ofBitVec_toBitVec : ∀ (a : $typeName), ofBitVec a.toBitVec = a
| _, _ => rfl
@[deprecated ofBitVec_toBitVec (since := "2025-02-21")]
theorem ofBitVec_toBitVec_eq : ∀ (a : $typeName), ofBitVec a.toBitVec = a :=
ofBitVec_toBitVec
@[deprecated ofBitVec_toBitVec_eq (since := "2025-02-12")]
theorem mk_toBitVec_eq : ∀ (a : $typeName), ofBitVec a.toBitVec = a
| _, _ => rfl
@[deprecated "Use `toNat_toBitVec` and `toNat_ofNat_of_lt`." (since := "2025-03-05")]
theorem toBitVec_eq_of_lt {a : Nat} : a < size → (ofNat a).toBitVec.toNat = a :=
Nat.mod_eq_of_lt
theorem toBitVec_ofNat' (n : Nat) : (ofNat n).toBitVec = BitVec.ofNat _ n := (rfl)
theorem toNat_ofNat_of_lt' {n : Nat} (h : n < size) : (ofNat n).toNat = n := by
@@ -140,17 +117,10 @@ macro "declare_uint_theorems" typeName:ident bits:term:arg : command => do
protected theorem eq_of_toFin_eq {a b : $typeName} (h : a.toFin = b.toFin) : a = b := by
rcases a with _; rcases b with _; simp_all [toFin]
open $typeName (eq_of_toFin_eq) in
@[deprecated eq_of_toFin_eq (since := "2025-02-12")]
protected theorem eq_of_val_eq {a b : $typeName} (h : a.toFin = b.toFin) : a = b :=
eq_of_toFin_eq h
open $typeName (eq_of_toFin_eq) in
protected theorem toFin_inj {a b : $typeName} : a.toFin = b.toFin ↔ a = b :=
Iff.intro eq_of_toFin_eq (congrArg toFin)
open $typeName (toFin_inj) in
@[deprecated toFin_inj (since := "2025-02-12")]
protected theorem val_inj {a b : $typeName} : a.toFin = b.toFin ↔ a = b :=
toFin_inj
open $typeName (eq_of_toBitVec_eq) in
protected theorem toBitVec_ne_of_ne {a b : $typeName} (h : a ≠ b) : a.toBitVec ≠ b.toBitVec :=
@@ -236,8 +206,6 @@ instance : LawfulOrderLT $typeName where
@[simp]
theorem toFin_ofNat (n : Nat) : toFin (no_index (OfNat.ofNat n)) = OfNat.ofNat n := (rfl)
@[deprecated toFin_ofNat (since := "2025-02-12")]
theorem val_ofNat (n : Nat) : toFin (no_index (OfNat.ofNat n)) = OfNat.ofNat n := (rfl)
@[simp, int_toBitVec]
theorem toBitVec_ofNat (n : Nat) : toBitVec (no_index (OfNat.ofNat n)) = BitVec.ofNat _ n := (rfl)
@@ -245,9 +213,6 @@ instance : LawfulOrderLT $typeName where
@[simp]
theorem ofBitVec_ofNat (n : Nat) : ofBitVec (BitVec.ofNat _ n) = OfNat.ofNat n := (rfl)
@[deprecated ofBitVec_ofNat (since := "2025-02-12")]
theorem mk_ofNat (n : Nat) : ofBitVec (BitVec.ofNat _ n) = OfNat.ofNat n := (rfl)
@[simp, int_toBitVec] protected theorem toBitVec_add {a b : $typeName} : (a + b).toBitVec = a.toBitVec + b.toBitVec := (rfl)
@[simp, int_toBitVec] protected theorem toBitVec_sub {a b : $typeName} : (a - b).toBitVec = a.toBitVec - b.toBitVec := (rfl)
@[simp, int_toBitVec] protected theorem toBitVec_mul {a b : $typeName} : (a * b).toBitVec = a.toBitVec * b.toBitVec := (rfl)

View File

@@ -97,16 +97,16 @@ Unsafe implementation of `attachWith`, taking advantage of the fact that the rep
intro a m h₁ h₂
congr
@[simp, grind =] theorem pmap_empty {P : α → Prop} {f : ∀ a, P a → β} : pmap f #v[] (by simp) = #v[] := rfl
@[simp] theorem pmap_empty {P : α → Prop} {f : ∀ a, P a → β} : pmap f #v[] (by simp) = #v[] := rfl
@[simp, grind =] theorem pmap_push {P : α → Prop} {f : ∀ a, P a → β} {a : α} {xs : Vector α n} {h : ∀ b ∈ xs.push a, P b} :
@[simp] theorem pmap_push {P : α → Prop} {f : ∀ a, P a → β} {a : α} {xs : Vector α n} {h : ∀ b ∈ xs.push a, P b} :
pmap f (xs.push a) h =
(pmap f xs (fun a m => by simp at h; exact h a (.inl m))).push (f a (h a (by simp))) := by
simp [pmap]
@[simp, grind =] theorem attach_empty : (#v[] : Vector α 0).attach = #v[] := rfl
@[simp] theorem attach_empty : (#v[] : Vector α 0).attach = #v[] := rfl
@[simp, grind =] theorem attachWith_empty {P : α → Prop} (H : ∀ x ∈ #v[], P x) : (#v[] : Vector α 0).attachWith P H = #v[] := rfl
@[simp] theorem attachWith_empty {P : α → Prop} (H : ∀ x ∈ #v[], P x) : (#v[] : Vector α 0).attachWith P H = #v[] := rfl
@[simp]
theorem pmap_eq_map {p : α → Prop} {f : α → β} {xs : Vector α n} (H) :
@@ -120,13 +120,11 @@ theorem pmap_congr_left {p q : α → Prop} {f : ∀ a, p a → β} {g : ∀ a,
apply Array.pmap_congr_left
simpa using h
@[grind =]
theorem map_pmap {p : α → Prop} {g : β → γ} {f : ∀ a, p a → β} {xs : Vector α n} (H) :
map g (pmap f xs H) = pmap (fun a h => g (f a h)) xs H := by
rcases xs with ⟨xs, rfl⟩
simp [Array.map_pmap]
@[grind =]
theorem pmap_map {p : β → Prop} {g : ∀ b, p b → γ} {f : α → β} {xs : Vector α n} (H) :
pmap g (map f xs) H = pmap (fun a h => g (f a) h) xs fun _ h => H _ (mem_map_of_mem h) := by
rcases xs with ⟨xs, rfl⟩
@@ -142,13 +140,13 @@ theorem attachWith_congr {xs ys : Vector α n} (w : xs = ys) {P : α → Prop} {
subst w
simp
@[simp, grind =] theorem attach_push {a : α} {xs : Vector α n} :
@[simp] theorem attach_push {a : α} {xs : Vector α n} :
(xs.push a).attach =
(xs.attach.map (fun ⟨x, h⟩ => ⟨x, mem_push_of_mem a h⟩)).push ⟨a, by simp⟩ := by
rcases xs with ⟨xs, rfl⟩
simp [Array.map_attach_eq_pmap]
@[simp, grind =] theorem attachWith_push {a : α} {xs : Vector α n} {P : α → Prop} {H : ∀ x ∈ xs.push a, P x} :
@[simp] theorem attachWith_push {a : α} {xs : Vector α n} {P : α → Prop} {H : ∀ x ∈ xs.push a, P x} :
(xs.push a).attachWith P H =
(xs.attachWith P (fun x h => by simp at H; exact H x (.inl h))).push ⟨a, H a (by simp)⟩ := by
rcases xs with ⟨xs, rfl⟩
@@ -172,9 +170,6 @@ theorem attach_map_val (xs : Vector α n) (f : α → β) :
rcases xs with ⟨xs, rfl⟩
simp
@[deprecated attach_map_val (since := "2025-02-17")]
abbrev attach_map_coe := @attach_map_val
-- The argument `xs : Vector α n` is explicit to allow rewriting from right to left.
theorem attach_map_subtype_val (xs : Vector α n) : xs.attach.map Subtype.val = xs := by
rcases xs with ⟨xs, rfl⟩
@@ -185,9 +180,6 @@ theorem attachWith_map_val {p : α → Prop} {f : α → β} {xs : Vector α n}
rcases xs with ⟨xs, rfl⟩
simp
@[deprecated attachWith_map_val (since := "2025-02-17")]
abbrev attachWith_map_coe := @attachWith_map_val
theorem attachWith_map_subtype_val {p : α → Prop} {xs : Vector α n} (H : ∀ a ∈ xs, p a) :
(xs.attachWith p H).map Subtype.val = xs := by
rcases xs with ⟨xs, rfl⟩
@@ -258,26 +250,24 @@ theorem getElem_attach {xs : Vector α n} {i : Nat} (h : i < n) :
xs.attach[i] = ⟨xs[i]'(by simpa using h), getElem_mem (by simpa using h)⟩ :=
getElem_attachWith h
@[simp, grind =] theorem pmap_attach {xs : Vector α n} {p : {x // x ∈ xs} → Prop} {f : ∀ a, p a → β} (H) :
@[simp] theorem pmap_attach {xs : Vector α n} {p : {x // x ∈ xs} → Prop} {f : ∀ a, p a → β} (H) :
pmap f xs.attach H =
xs.pmap (P := fun a => ∃ h : a ∈ xs, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨h, H ⟨a, h⟩ (by simp)⟩) := by
rcases xs with ⟨xs, rfl⟩
ext <;> simp
@[simp, grind =] theorem pmap_attachWith {xs : Vector α n} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
@[simp] theorem pmap_attachWith {xs : Vector α n} {p : {x // q x} → Prop} {f : ∀ a, p a → β} (H₁ H₂) :
pmap f (xs.attachWith q H₁) H₂ =
xs.pmap (P := fun a => ∃ h : q a, p ⟨a, h⟩)
(fun a h => f ⟨a, h.1⟩ h.2) (fun a h => ⟨H₁ _ h, H₂ ⟨a, H₁ _ h⟩ (by simpa)⟩) := by
ext <;> simp
@[grind =]
theorem foldl_pmap {xs : Vector α n} {P : α → Prop} {f : (a : α) → P a → β}
(H : ∀ (a : α), a ∈ xs → P a) (g : γ → β → γ) (x : γ) :
(xs.pmap f H).foldl g x = xs.attach.foldl (fun acc a => g acc (f a.1 (H _ a.2))) x := by
rw [pmap_eq_map_attach, foldl_map]
@[grind =]
theorem foldr_pmap {xs : Vector α n} {P : α → Prop} {f : (a : α) → P a → β}
(H : ∀ (a : α), a ∈ xs → P a) (g : β → γ → γ) (x : γ) :
(xs.pmap f H).foldr g x = xs.attach.foldr (fun a acc => g (f a.1 (H _ a.2)) acc) x := by
@@ -313,20 +303,18 @@ theorem foldr_attach {xs : Vector α n} {f : α → β → β} {b : β} :
rcases xs with ⟨xs, rfl⟩
simp
@[grind =]
theorem attach_map {xs : Vector α n} {f : α → β} :
(xs.map f).attach = xs.attach.map (fun ⟨x, h⟩ => ⟨f x, mem_map_of_mem h⟩) := by
cases xs
ext <;> simp
@[grind =]
theorem attachWith_map {xs : Vector α n} {f : α → β} {P : β → Prop} (H : ∀ (b : β), b ∈ xs.map f → P b) :
(xs.map f).attachWith P H = (xs.attachWith (P ∘ f) (fun _ h => H _ (mem_map_of_mem h))).map
fun ⟨x, h⟩ => ⟨f x, h⟩ := by
rcases xs with ⟨xs, rfl⟩
simp [Array.attachWith_map]
@[simp, grind =] theorem map_attachWith {xs : Vector α n} {P : α → Prop} {H : ∀ (a : α), a ∈ xs → P a}
@[simp] theorem map_attachWith {xs : Vector α n} {P : α → Prop} {H : ∀ (a : α), a ∈ xs → P a}
{f : { x // P x } → β} :
(xs.attachWith P H).map f = xs.attach.map fun ⟨x, h⟩ => f ⟨x, H _ h⟩ := by
rcases xs with ⟨xs, rfl⟩
@@ -345,10 +333,6 @@ theorem map_attach_eq_pmap {xs : Vector α n} {f : { x // x ∈ xs } → β} :
rcases xs with ⟨xs, rfl⟩
ext <;> simp
@[deprecated map_attach_eq_pmap (since := "2025-02-09")]
abbrev map_attach := @map_attach_eq_pmap
@[grind =]
theorem pmap_pmap {p : α → Prop} {q : β → Prop} {g : ∀ a, p a → β} {f : ∀ b, q b → γ} {xs : Vector α n} (H₁ H₂) :
pmap f (pmap g xs H₁) H₂ =
pmap (α := { x // x ∈ xs }) (fun a h => f (g a h) (H₂ (g a h) (mem_pmap_of_mem a.2))) xs.attach
@@ -356,7 +340,7 @@ theorem pmap_pmap {p : α → Prop} {q : β → Prop} {g : ∀ a, p a → β} {f
rcases xs with ⟨xs, rfl⟩
ext <;> simp
@[simp, grind =] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {xs : Vector ι n} {ys : Vector ι m}
@[simp] theorem pmap_append {p : ι → Prop} {f : ∀ a : ι, p a → α} {xs : Vector ι n} {ys : Vector ι m}
(h : ∀ a ∈ xs ++ ys, p a) :
(xs ++ ys).pmap f h =
(xs.pmap f fun a ha => h a (mem_append_left ys ha)) ++
@@ -371,69 +355,66 @@ theorem pmap_append' {p : α → Prop} {f : ∀ a : α, p a → β} {xs : Vector
xs.pmap f h₁ ++ ys.pmap f h₂ :=
pmap_append _
@[simp, grind =] theorem attach_append {xs : Vector α n} {ys : Vector α m} :
@[simp] theorem attach_append {xs : Vector α n} {ys : Vector α m} :
(xs ++ ys).attach = xs.attach.map (fun ⟨x, h⟩ => (⟨x, mem_append_left ys h⟩ : { x // x ∈ xs ++ ys })) ++
ys.attach.map (fun ⟨y, h⟩ => (⟨y, mem_append_right xs h⟩ : { y // y ∈ xs ++ ys })) := by
rcases xs with ⟨xs, rfl⟩
rcases ys with ⟨ys, rfl⟩
simp [Array.map_attach_eq_pmap]
@[simp, grind =] theorem attachWith_append {P : α → Prop} {xs : Vector α n} {ys : Vector α m}
@[simp] theorem attachWith_append {P : α → Prop} {xs : Vector α n} {ys : Vector α m}
{H : ∀ (a : α), a ∈ xs ++ ys → P a} :
(xs ++ ys).attachWith P H = xs.attachWith P (fun a h => H a (mem_append_left ys h)) ++
ys.attachWith P (fun a h => H a (mem_append_right xs h)) := by
simp [attachWith]
@[simp, grind =] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
@[simp] theorem pmap_reverse {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
(H : ∀ (a : α), a ∈ xs.reverse → P a) :
xs.reverse.pmap f H = (xs.pmap f (fun a h => H a (by simpa using h))).reverse := by
induction xs <;> simp_all
@[grind =]
theorem reverse_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
(H : ∀ (a : α), a ∈ xs → P a) :
(xs.pmap f H).reverse = xs.reverse.pmap f (fun a h => H a (by simpa using h)) := by
rw [pmap_reverse]
@[simp, grind =] theorem attachWith_reverse {P : α → Prop} {xs : Vector α n}
@[simp] theorem attachWith_reverse {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs.reverse → P a} :
xs.reverse.attachWith P H =
(xs.attachWith P (fun a h => H a (by simpa using h))).reverse := by
cases xs
simp
@[grind =]
theorem reverse_attachWith {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).reverse = (xs.reverse.attachWith P (fun a h => H a (by simpa using h))) := by
cases xs
simp
@[simp, grind =] theorem attach_reverse {xs : Vector α n} :
@[simp] theorem attach_reverse {xs : Vector α n} :
xs.reverse.attach = xs.attach.reverse.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
cases xs
rw [attach_congr (reverse_mk ..)]
simp [Array.map_attachWith]
@[grind =]
theorem reverse_attach {xs : Vector α n} :
xs.attach.reverse = xs.reverse.attach.map fun ⟨x, h⟩ => ⟨x, by simpa using h⟩ := by
cases xs
simp [Array.map_attach_eq_pmap]
@[simp, grind =] theorem back?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
@[simp] theorem back?_pmap {P : α → Prop} {f : (a : α) → P a → β} {xs : Vector α n}
(H : ∀ (a : α), a ∈ xs → P a) :
(xs.pmap f H).back? = xs.attach.back?.map fun ⟨a, m⟩ => f a (H a m) := by
cases xs
simp
@[simp, grind =] theorem back?_attachWith {P : α → Prop} {xs : Vector α n}
@[simp] theorem back?_attachWith {P : α → Prop} {xs : Vector α n}
{H : ∀ (a : α), a ∈ xs → P a} :
(xs.attachWith P H).back? = xs.back?.pbind (fun a h => some ⟨a, H _ (mem_of_back? h)⟩) := by
cases xs
simp
@[simp, grind =]
@[simp]
theorem back?_attach {xs : Vector α n} :
xs.attach.back? = xs.back?.pbind fun a h => some ⟨a, mem_of_back? h⟩ := by
cases xs

View File

@@ -879,11 +879,13 @@ set_option linter.indexVariables false in
rcases xs with ⟨xs, rfl⟩
simp
@[grind =]
theorem getElem_push {xs : Vector α n} {x : α} {i : Nat} (h : i < n + 1) :
(xs.push x)[i] = if h : i < n then xs[i] else x := by
rcases xs with ⟨xs, rfl⟩
simp [Array.getElem_push]
@[grind =]
theorem getElem?_push {xs : Vector α n} {x : α} {i : Nat} : (xs.push x)[i]? = if i = n then some x else xs[i]? := by
simp [getElem?_def, getElem_push]
(repeat' split) <;> first | rfl | omega
@@ -907,6 +909,8 @@ theorem getElem?_singleton {a : α} {i : Nat} : #v[a][i]? = if i = 0 then some a
grind_pattern getElem_mem => xs[i] xs
@[grind ]
theorem not_mem_empty (a : α) : ¬ a ∈ #v[] := nofun
@[simp, grind =] theorem mem_push {xs : Vector α n} {x y : α} : x ∈ xs.push y ↔ x ∈ xs ∨ x = y := by

View File

@@ -250,11 +250,6 @@ theorem getElem_of_getElem? [GetElem? cont idx elem dom] [LawfulGetElem cont idx
(c[i]? = some c[i]) True := by
simp [h]
@[deprecated getElem?_eq_none_iff (since := "2025-02-17")]
abbrev getElem?_eq_none := @getElem?_eq_none_iff
@[simp, grind =] theorem isSome_getElem? [GetElem? cont idx elem dom] [LawfulGetElem cont idx elem dom]
(c : cont) (i : idx) [Decidable (dom c i)] : c[i]?.isSome = dom c i := by
simp only [getElem?_def]

View File

@@ -24,3 +24,4 @@ public import Init.Grind.Attr
public import Init.Data.Int.OfNat -- This may not have otherwise been imported, breaking `grind` proofs.
public import Init.Grind.AC
public import Init.Grind.Injective
public import Init.Grind.Order

View File

@@ -129,6 +129,14 @@ If `grind!` is used, then only minimal indexable subexpressions are considered.
-/
syntax grindLR := patternIgnore("→" <|> "=>")
/--
The `.` modifier instructs `grind` to select a multi-pattern by traversing the conclusion of the
theorem, and then the hypotheses from left to right. We say this is the default modifier.
Each time it encounters a subexpression which covers an argument which was not
previously covered, it adds that subexpression as a pattern, until all arguments have been covered.
If `grind!` is used, then only minimal indexable subexpressions are considered.
-/
syntax grindDef := patternIgnore("." <|> "·") (grindGen)?
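
A hedged sketch of what the default modifier looks like in use (the declarations `f`, `g`, and `h_comm` below are hypothetical, introduced only for illustration):

```lean
-- With no explicit modifier (or an explicit `.`), `grind` traverses the
-- conclusion first, then the hypotheses left to right, collecting
-- subexpressions as patterns until all arguments are covered.
@[grind .] theorem h_comm (a b : Nat) : f (g a b) = f (g b a) := by
  sorry
```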
/--
The `usr` modifier indicates that this theorem was applied using a
**user-defined instantiation pattern**. Such patterns are declared with
the `grind_pattern` command, which lets you specify how `grind` should
@@ -206,9 +214,48 @@ syntax grindMod :=
grindEqBoth <|> grindEqRhs <|> grindEq <|> grindEqBwd <|> grindBwd
<|> grindFwd <|> grindRL <|> grindLR <|> grindUsr <|> grindCasesEager
<|> grindCases <|> grindIntro <|> grindExt <|> grindGen <|> grindSym <|> grindInj
<|> grindDef
/--
Marks a theorem or definition for use by the `grind` tactic.
An optional modifier (e.g. `=`, `→`, `←`, `cases`, `intro`, `ext`, `inj`, etc.)
controls how `grind` uses the declaration:
* whether it is applied forwards, backwards, or both,
* whether equalities are used on the left, right, or both sides,
* whether case-splits, constructors, extensionality, or injectivity are applied,
* or whether custom instantiation patterns are used.
See the individual modifier docstrings for details.
-/
syntax (name := grind) "grind" (ppSpace grindMod)? : attr
/--
Like `@[grind]`, but enforces the **minimal indexable subexpression condition**:
when several subterms cover the same free variables, `grind!` chooses the smallest one.
This influences E-matching pattern selection.
### Example
```lean
theorem fg_eq (h : x > 0) : f (g x) = x
@[grind <-] theorem fg_eq (h : x > 0) : f (g x) = x
-- Pattern selected: `f (g x)`
-- With minimal subexpression:
@[grind! <-] theorem fg_eq (h : x > 0) : f (g x) = x
-- Pattern selected: `g x`
```
-/
syntax (name := grind!) "grind!" (ppSpace grindMod)? : attr
/--
Like `@[grind]`, but also prints the pattern(s) selected by `grind`
as info messages. Useful for debugging annotations and modifiers.
-/
syntax (name := grind?) "grind?" (ppSpace grindMod)? : attr
/--
Like `@[grind!]`, but also prints the pattern(s) selected by `grind`
as info messages. Combines minimal subexpression selection with debugging output.
-/
syntax (name := grind!?) "grind!?" (ppSpace grindMod)? : attr
end Attr
end Lean.Parser

View File

@@ -7,7 +7,7 @@ module
prelude
public import Init.Data.Function
public import Init.Classical
public section
namespace Lean.Grind
open Function
@@ -31,4 +31,11 @@ noncomputable def leftInv {α : Sort u} {β : Sort v} (f : α → β) (hf : Inje
theorem leftInv_eq {α : Sort u} {β : Sort v} (f : α → β) (hf : Injective f) [Nonempty α] (a : α) : leftInv f hf (f a) = a :=
Classical.choose_spec (hf.leftInverse f) a
@[app_unexpander leftInv]
meta def leftInvUnexpander : PrettyPrinter.Unexpander := fun stx => do
match stx with
| `($_ $f:term $_) => `($f⁻¹)
| `($_ $f:term $_ $a:term) => `($f⁻¹ $a)
| _ => throw ()
end Lean.Grind

View File

@@ -182,7 +182,7 @@ init_grind_norm
Nat.add_eq Nat.sub_eq Nat.mul_eq Nat.zero_eq Nat.le_eq
Nat.div_zero Nat.mod_zero Nat.div_one Nat.mod_one
Nat.sub_sub Nat.pow_zero Nat.pow_one Nat.sub_self
Nat.one_pow Nat.zero_sub
Nat.one_pow Nat.zero_sub Nat.sub_zero
-- Int
Int.lt_eq
Int.emod_neg Int.ediv_neg

215
src/Init/Grind/Order.lean Normal file
View File

@@ -0,0 +1,215 @@
/-
Copyright (c) 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Grind.Ordered.Ring
import Init.Grind.Ring
public section
namespace Lean.Grind.Order
/-!
Helper theorems to assert constraints
-/
theorem le_of_eq {α} [LE α] [Std.IsPreorder α]
(a b : α) : a = b → a ≤ b := by
intro h; subst a; apply Std.IsPreorder.le_refl
theorem le_of_not_le {α} [LE α] [Std.IsLinearPreorder α]
(a b : α) : ¬ a ≤ b → b ≤ a := by
intro h
have := Std.IsLinearPreorder.le_total a b
cases this; contradiction; assumption
theorem lt_of_not_le {α} [LE α] [LT α] [Std.IsLinearPreorder α] [Std.LawfulOrderLT α]
(a b : α) : ¬ a ≤ b → b < a := by
intro h
rw [Std.LawfulOrderLT.lt_iff]
have := Std.IsLinearPreorder.le_total a b
cases this; contradiction; simp [*]
theorem le_of_not_lt {α} [LE α] [LT α] [Std.IsLinearPreorder α] [Std.LawfulOrderLT α]
(a b : α) : ¬ a < b → b ≤ a := by
rw [Std.LawfulOrderLT.lt_iff]
open Classical in
rw [Classical.not_and_iff_not_or_not, Classical.not_not]
intro h; cases h
next =>
have := Std.IsLinearPreorder.le_total a b
cases this; contradiction; assumption
next => assumption
theorem int_lt (x y k : Int) : x < y + k ↔ x ≤ y + (k-1) := by
omega
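
Because `Int` is discrete, `int_lt` lets the solver tighten a strict bound into a non-strict one with the offset decremented, which is how integer constraints are normalized here. A small usage sketch (the hypothesis name is illustrative; the proof simply appeals to `omega`, the same decision procedure that justifies `int_lt` itself):

```lean
-- Over Int, x < y + 3 is equivalent to x ≤ y + 2:
example (x y : Int) (h : x < y + 3) : x ≤ y + 2 := by
  omega
```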
/-!
Helper theorem for equality propagation
-/
theorem eq_of_le_of_le {α} [LE α] [Std.IsPartialOrder α] {a b : α} : a ≤ b → b ≤ a → a = b :=
Std.IsPartialOrder.le_antisymm _ _
/-!
Transitivity
-/
theorem le_trans {α} [LE α] [Std.IsPreorder α] {a b c : α} (h₁ : a ≤ b) (h₂ : b ≤ c) : a ≤ c :=
Std.IsPreorder.le_trans _ _ _ h₁ h₂
theorem lt_trans {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] {a b c : α} (h₁ : a < b) (h₂ : b < c) : a < c :=
Preorder.lt_trans h₁ h₂
theorem le_lt_trans {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] {a b c : α} (h₁ : a ≤ b) (h₂ : b < c) : a < c :=
Preorder.lt_of_le_of_lt h₁ h₂
theorem lt_le_trans {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] {a b c : α} (h₁ : a < b) (h₂ : b ≤ c) : a < c :=
Preorder.lt_of_lt_of_le h₁ h₂
theorem lt_unsat {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] (a : α) : a < a → False :=
Preorder.lt_irrefl a
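
These transitivity helpers are the building blocks `grind order` chains together when closing a cycle of order facts; for instance (an illustrative sketch using the names introduced above):

```lean
open Lean.Grind.Order in
-- From a ≤ b and b < c, conclude a < c by le_lt_trans.
example {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α]
    {a b c : α} (h₁ : a ≤ b) (h₂ : b < c) : a < c :=
  le_lt_trans h₁ h₂
```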
/-!
Transitivity with offsets
-/
attribute [local instance] Ring.intCast
theorem le_trans_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b c : α) (k₁ k₂ k : Int) (h₁ : a ≤ b + k₁) (h₂ : b ≤ c + k₂) : k == k₂ + k₁ → a ≤ c + k := by
intro h; simp at h; subst k
replace h₂ := OrderedAdd.add_le_left_iff (M := α) k₁ |>.mp h₂
have := le_trans h₁ h₂
simp [Ring.intCast_add, Semiring.add_assoc, this]
theorem lt_trans_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b c : α) (k₁ k₂ k : Int) (h₁ : a < b + k₁) (h₂ : b < c + k₂) : k == k₂ + k₁ → a < c + k := by
intro h; simp at h; subst k
replace h₂ := OrderedAdd.add_lt_left_iff (M := α) k₁ |>.mp h₂
have := lt_trans h₁ h₂
simp [Ring.intCast_add, Semiring.add_assoc, this]
theorem le_lt_trans_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b c : α) (k₁ k₂ k : Int) (h₁ : a ≤ b + k₁) (h₂ : b < c + k₂) : k == k₂ + k₁ → a < c + k := by
intro h; simp at h; subst k
replace h₂ := OrderedAdd.add_lt_left_iff (M := α) k₁ |>.mp h₂
have := le_lt_trans h₁ h₂
simp [Ring.intCast_add, Semiring.add_assoc, this]
theorem lt_le_trans_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b c : α) (k₁ k₂ k : Int) (h₁ : a < b + k₁) (h₂ : b ≤ c + k₂) : k == k₂ + k₁ → a < c + k := by
intro h; simp at h; subst k
replace h₂ := OrderedAdd.add_le_left_iff (M := α) k₁ |>.mp h₂
have := lt_le_trans h₁ h₂
simp [Ring.intCast_add, Semiring.add_assoc, this]
/-!
Unsat detection
-/
theorem le_unsat_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a : α) (k : Int) : k.blt' 0 → a ≤ a + k → False := by
simp; intro h₁ h₂
replace h₂ := OrderedAdd.add_le_left_iff (-a) |>.mp h₂
rw [AddCommGroup.add_neg_cancel, Semiring.add_assoc, Semiring.add_comm _ (-a)] at h₂
rw [← Semiring.add_assoc, AddCommGroup.add_neg_cancel, Semiring.add_comm, Semiring.add_zero] at h₂
rw [← Ring.intCast_zero] at h₂
replace h₂ := OrderedRing.le_of_intCast_le_intCast _ _ h₂
omega
theorem lt_unsat_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a : α) (k : Int) : k.ble' 0 → a < a + k → False := by
simp; intro h₁ h₂
replace h₂ := OrderedAdd.add_lt_left_iff (-a) |>.mp h₂
rw [AddCommGroup.add_neg_cancel, Semiring.add_assoc, Semiring.add_comm _ (-a)] at h₂
rw [← Semiring.add_assoc, AddCommGroup.add_neg_cancel, Semiring.add_comm, Semiring.add_zero] at h₂
rw [← Ring.intCast_zero] at h₂
replace h₂ := OrderedRing.lt_of_intCast_lt_intCast _ _ h₂
omega
/-!
Helper theorems
-/
private theorem add_lt_add_of_le_of_lt {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
{a b c d : α} (hab : a ≤ b) (hcd : c < d) : a + c < b + d :=
lt_le_trans (OrderedAdd.add_lt_right a hcd) (OrderedAdd.add_le_left hab d)
private theorem add_lt_add_of_lt_of_le {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
{a b c d : α} (hab : a < b) (hcd : c ≤ d) : a + c < b + d :=
le_lt_trans (OrderedAdd.add_le_right a hcd) (OrderedAdd.add_lt_left hab d)
/-! Theorems for propagating constraints to `True` -/
theorem le_eq_true_of_le_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : k₁.ble' k₂ → a ≤ b + k₁ → (a ≤ b + k₂) = True := by
simp; intro h₁ h₂
replace h₁ : 0 ≤ k₂ - k₁ := by omega
replace h₁ := OrderedRing.nonneg_intCast_of_nonneg (R := α) _ h₁
replace h₁ := OrderedAdd.add_le_add h₂ h₁
rw [Semiring.add_zero, Semiring.add_assoc, Int.sub_eq_add_neg, Int.add_comm] at h₁
rw [Ring.intCast_add, Ring.intCast_neg, Semiring.add_assoc (k₁ : α)] at h₁
rw [AddCommGroup.add_neg_cancel, Semiring.add_comm 0, Semiring.add_zero] at h₁
assumption
theorem le_eq_true_of_lt_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : k₁.ble' k₂ → a < b + k₁ → (a ≤ b + k₂) = True := by
intro h₁ h₂
replace h₂ := Std.le_of_lt h₂
apply le_eq_true_of_le_k <;> assumption
theorem lt_eq_true_of_lt_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : k₁.ble' k₂ → a < b + k₁ → (a < b + k₂) = True := by
simp; intro h₁ h₂
replace h₁ : 0 ≤ k₂ - k₁ := by omega
replace h₁ := OrderedRing.nonneg_intCast_of_nonneg (R := α) _ h₁
replace h₁ := add_lt_add_of_le_of_lt h₁ h₂
rw [Semiring.add_comm, Semiring.add_zero, Semiring.add_comm, Semiring.add_assoc, Int.sub_eq_add_neg, Int.add_comm] at h₁
rw [Ring.intCast_add, Ring.intCast_neg, Semiring.add_assoc (k₁ : α)] at h₁
rw [AddCommGroup.add_neg_cancel, Semiring.add_comm 0, Semiring.add_zero] at h₁
assumption
theorem lt_eq_true_of_le_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : k₁.blt' k₂ → a ≤ b + k₁ → (a < b + k₂) = True := by
simp; intro h₁ h₂
replace h₁ : 0 < k₂ - k₁ := by omega
replace h₁ := OrderedRing.pos_intCast_of_pos (R := α) _ h₁
replace h₁ := add_lt_add_of_le_of_lt h₂ h₁
rw [Semiring.add_zero, Semiring.add_assoc, Int.sub_eq_add_neg, Int.add_comm] at h₁
rw [Ring.intCast_add, Ring.intCast_neg, Semiring.add_assoc (k₁ : α)] at h₁
rw [AddCommGroup.add_neg_cancel, Semiring.add_comm 0, Semiring.add_zero] at h₁
assumption
/-! Theorems for propagating constraints to `False` -/
theorem le_eq_false_of_le_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : (k₂ + k₁).blt' 0 → a ≤ b + k₁ → (b ≤ a + k₂) = False := by
intro h₁; simp; intro h₂ h₃
have h := le_trans_k _ _ _ _ _ (k₂ + k₁) h₂ h₃
simp at h
apply le_unsat_k _ _ h₁ h
theorem lt_eq_false_of_le_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : (k₂ + k₁).ble' 0 → a ≤ b + k₁ → (b < a + k₂) = False := by
intro h₁; simp; intro h₂ h₃
have h := le_lt_trans_k _ _ _ _ _ (k₂ + k₁) h₂ h₃
simp at h
apply lt_unsat_k _ _ h₁ h
theorem lt_eq_false_of_lt_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : (k₂ + k₁).ble' 0 → a < b + k₁ → (b < a + k₂) = False := by
intro h₁; simp; intro h₂ h₃
have h := lt_trans_k _ _ _ _ _ (k₂ + k₁) h₂ h₃
simp at h
apply lt_unsat_k _ _ h₁ h
theorem le_eq_false_of_lt_k {α} [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
(a b : α) (k₁ k₂ : Int) : (k₂ + k₁).ble' 0 → a < b + k₁ → (b ≤ a + k₂) = False := by
intro h₁; simp; intro h₂ h₃
have h := lt_le_trans_k _ _ _ _ _ (k₂ + k₁) h₂ h₃
simp at h
apply lt_unsat_k _ _ h₁ h
end Lean.Grind.Order

View File

@@ -73,6 +73,9 @@ theorem add_lt_left_iff {a b : M} (c : M) : a < b ↔ a + c < b + c := by
theorem add_lt_right_iff {a b : M} (c : M) : a < b c + a < c + b := by
rw [add_comm c a, add_comm c b, add_lt_left_iff]
theorem add_lt_add {a b c d : M} (hab : a < b) (hcd : c < d) : a + c < b + d :=
Preorder.lt_trans (add_lt_right a hcd) (add_lt_left hab d)
end
section

View File

@@ -53,6 +53,151 @@ theorem ofNat_nonneg (x : Nat) : (OfNat.ofNat x : R) ≥ 0 := by
have := Preorder.lt_of_lt_of_le this ih
exact Preorder.le_of_lt this
attribute [local instance] Semiring.natCast Ring.intCast
theorem le_of_natCast_le_natCast (a b : Nat) : (a : R) ≤ (b : R) → a ≤ b := by
induction a generalizing b <;> cases b <;> simp
next n ih =>
simp [Semiring.natCast_succ, Semiring.natCast_zero]
intro h
have : (n:R) ≤ 0 := by
have := OrderedRing.zero_lt_one (R := R)
replace this := OrderedAdd.add_le_right (M := R) (n:R) (Std.le_of_lt this)
rw [Semiring.add_zero] at this
exact Std.IsPreorder.le_trans _ _ _ this h
replace ih := ih 0
simp [Semiring.natCast_zero] at ih
replace ih := ih this
subst n
clear this
have := OrderedRing.zero_lt_one (R := R)
rw [Semiring.natCast_zero, Semiring.add_comm, Semiring.add_zero] at h
replace this := Std.lt_of_lt_of_le this h
have := Preorder.lt_irrefl (0:R)
contradiction
next ih m =>
simp [Semiring.natCast_succ]
intro h
have := OrderedAdd.add_le_left_iff _ |>.mpr h
exact ih _ this
theorem le_of_intCast_le_intCast (a b : Int) : (a : R) ≤ (b : R) → a ≤ b := by
intro h
replace h := OrderedAdd.sub_nonneg_iff.mpr h
rw [← Ring.intCast_sub] at h
suffices 0 ≤ b - a by omega
revert h
generalize b - a = x
cases x <;> simp [Ring.intCast_natCast, Int.negSucc_eq, Ring.intCast_neg, Ring.intCast_add]
intro h
replace h := OrderedAdd.neg_nonneg_iff.mp h
rw [Ring.intCast_one, Semiring.natCast_one, Semiring.natCast_add, Semiring.natCast_zero] at h
replace h := le_of_natCast_le_natCast _ _ h
omega
theorem lt_of_natCast_lt_natCast (a b : Nat) : (a : R) < (b : R) → a < b := by
induction a generalizing b <;> cases b <;> simp
next =>
simp [Semiring.natCast_zero]
exact Preorder.lt_irrefl (0:R)
next n ih =>
simp [Semiring.natCast_succ, Semiring.natCast_zero]
intro h
have : (n:R) < 0 := by
have := OrderedRing.zero_lt_one (R := R)
replace this := OrderedAdd.add_le_right (M := R) (n:R) (Std.le_of_lt this)
rw [Semiring.add_zero] at this
exact Std.lt_of_le_of_lt this h
replace ih := ih 0
simp [Semiring.natCast_zero] at ih
exact ih this
next ih m =>
simp [Semiring.natCast_succ]
intro h
have := OrderedAdd.add_lt_left_iff _ |>.mpr h
exact ih _ this
theorem lt_of_intCast_lt_intCast (a b : Int) : (a : R) < (b : R) → a < b := by
intro h
replace h := OrderedAdd.sub_pos_iff.mpr h
rw [← Ring.intCast_sub] at h
suffices 0 < b - a by omega
revert h
generalize b - a = x
cases x <;> simp [Ring.intCast_natCast, Int.negSucc_eq, Ring.intCast_neg, Ring.intCast_add]
next => intro h; rw [← Semiring.natCast_zero] at h; exact lt_of_natCast_lt_natCast _ _ h
next =>
intro h
replace h := OrderedAdd.neg_pos_iff.mp h
rw [Ring.intCast_one, Semiring.natCast_one, Semiring.natCast_add, Semiring.natCast_zero] at h
replace h := lt_of_natCast_lt_natCast _ _ h
omega
theorem natCast_le_natCast_of_le (a b : Nat) : a ≤ b → (a : R) ≤ (b : R) := by
induction a generalizing b <;> cases b <;> simp
next => simp [Semiring.natCast_zero, Std.IsPreorder.le_refl]
next n =>
have := ofNat_nonneg (R := R) n
simp [Semiring.ofNat_eq_natCast] at this
rw [Semiring.natCast_zero] at this
simp [Semiring.natCast_zero, Semiring.natCast_add, Semiring.natCast_one]
replace this := OrderedAdd.add_le_left_iff 1 |>.mp this
rw [Semiring.add_comm, Semiring.add_zero] at this
replace this := Std.lt_of_lt_of_le zero_lt_one this
exact Std.le_of_lt this
next n ih m =>
intro h
replace ih := ih _ h
simp [Semiring.natCast_add, Semiring.natCast_one]
exact OrderedAdd.add_le_left_iff _ |>.mp ih
theorem natCast_lt_natCast_of_lt (a b : Nat) : a < b → (a : R) < (b : R) := by
induction a generalizing b <;> cases b <;> simp
next n =>
have := ofNat_nonneg (R := R) n
simp [Semiring.ofNat_eq_natCast] at this
rw [Semiring.natCast_zero] at this
simp [Semiring.natCast_zero, Semiring.natCast_add, Semiring.natCast_one]
replace this := OrderedAdd.add_le_left_iff 1 |>.mp this
rw [Semiring.add_comm, Semiring.add_zero] at this
exact Std.lt_of_lt_of_le zero_lt_one this
next n ih m =>
intro h
replace ih := ih _ h
simp [Semiring.natCast_add, Semiring.natCast_one]
exact OrderedAdd.add_lt_left_iff _ |>.mp ih
theorem pos_natCast_of_pos (a : Nat) : 0 < a → 0 < (a : R) := by
induction a
next => simp
next n ih =>
simp; cases n
next => simp +arith; rw [Semiring.natCast_one]; apply zero_lt_one
next =>
simp at ih
replace ih := OrderedAdd.add_lt_add ih zero_lt_one
rw [Semiring.add_zero, Semiring.natCast_one, Semiring.natCast_add] at ih
assumption
theorem pos_intCast_of_pos (a : Int) : 0 < a → 0 < (a : R) := by
cases a
next n =>
intro h
replace h : 0 < n := by cases n; simp at h; simp
replace h := pos_natCast_of_pos (R := R) _ h
rw [Int.ofNat_eq_natCast, Ring.intCast_natCast]
assumption
next => omega
theorem nonneg_intCast_of_nonneg (a : Int) : 0 ≤ a → 0 ≤ (a : R) := by
cases a
next n =>
intro; rw [Int.ofNat_eq_natCast, Ring.intCast_natCast]
have := ofNat_nonneg (R := R) n
rw [Semiring.ofNat_eq_natCast] at this
assumption
next => omega
instance [Ring R] [LE R] [LT R] [LawfulOrderLT R] [IsPreorder R] [OrderedRing R] :
IsCharP R 0 := IsCharP.mk' _ _ <| by
intro x


@@ -1739,4 +1739,75 @@ noncomputable def norm_int_cert (e : Expr) (p : Poly) : Bool :=
theorem norm_int (ctx : Context Int) (e : Expr) (p : Poly) : norm_int_cert e p → e.denote ctx = p.denote' ctx := by
simp [norm_int_cert, Poly.denote'_eq_denote]; intro; subst p; simp [Expr.denote_toPoly]
/-!
Helper theorems for normalizing ring constraints in the `grind order` module.
-/
noncomputable def norm_cnstr_cert (lhs rhs lhs' rhs' : Expr) : Bool :=
(rhs.sub lhs).toPoly_k.beq' (rhs'.sub lhs').toPoly_k
theorem le_norm_expr {α} [CommRing α] [LE α] [LT α] [IsPreorder α] [OrderedRing α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_cnstr_cert lhs rhs lhs' rhs' → (lhs.denote ctx ≤ rhs.denote ctx) = (lhs'.denote ctx ≤ rhs'.denote ctx) := by
simp [norm_cnstr_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly] at h
replace h : rhs.denote ctx - lhs.denote ctx = rhs'.denote ctx - lhs'.denote ctx := h
rw [← OrderedAdd.sub_nonneg_iff, h, OrderedAdd.sub_nonneg_iff]
theorem lt_norm_expr {α} [CommRing α] [LE α] [LT α] [LawfulOrderLT α] [IsPreorder α] [OrderedRing α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_cnstr_cert lhs rhs lhs' rhs' → (lhs.denote ctx < rhs.denote ctx) = (lhs'.denote ctx < rhs'.denote ctx) := by
simp [norm_cnstr_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly] at h
replace h : rhs.denote ctx - lhs.denote ctx = rhs'.denote ctx - lhs'.denote ctx := h
rw [← OrderedAdd.sub_pos_iff, h, OrderedAdd.sub_pos_iff]
noncomputable def norm_eq_cert (lhs rhs lhs' rhs' : Expr) : Bool :=
(lhs.sub rhs).toPoly_k.beq' (lhs'.sub rhs').toPoly_k
theorem eq_norm_expr {α} [CommRing α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_eq_cert lhs rhs lhs' rhs' → (lhs.denote ctx = rhs.denote ctx) = (lhs'.denote ctx = rhs'.denote ctx) := by
simp [norm_eq_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly] at h
replace h : lhs.denote ctx - rhs.denote ctx = lhs'.denote ctx - rhs'.denote ctx := h
rw [← AddCommGroup.sub_eq_zero_iff, h, AddCommGroup.sub_eq_zero_iff]
noncomputable def norm_cnstr_nc_cert (lhs rhs lhs' rhs' : Expr) : Bool :=
(rhs.sub lhs).toPoly_nc.beq' (rhs'.sub lhs').toPoly_nc
theorem le_norm_expr_nc {α} [Ring α] [LE α] [LT α] [IsPreorder α] [OrderedRing α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_cnstr_nc_cert lhs rhs lhs' rhs' → (lhs.denote ctx ≤ rhs.denote ctx) = (lhs'.denote ctx ≤ rhs'.denote ctx) := by
simp [norm_cnstr_nc_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly_nc] at h
replace h : rhs.denote ctx - lhs.denote ctx = rhs'.denote ctx - lhs'.denote ctx := h
rw [← OrderedAdd.sub_nonneg_iff, h, OrderedAdd.sub_nonneg_iff]
theorem lt_norm_expr_nc {α} [Ring α] [LE α] [LT α] [LawfulOrderLT α] [IsPreorder α] [OrderedRing α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_cnstr_nc_cert lhs rhs lhs' rhs' → (lhs.denote ctx < rhs.denote ctx) = (lhs'.denote ctx < rhs'.denote ctx) := by
simp [norm_cnstr_nc_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly_nc] at h
replace h : rhs.denote ctx - lhs.denote ctx = rhs'.denote ctx - lhs'.denote ctx := h
rw [← OrderedAdd.sub_pos_iff, h, OrderedAdd.sub_pos_iff]
noncomputable def norm_eq_nc_cert (lhs rhs lhs' rhs' : Expr) : Bool :=
(lhs.sub rhs).toPoly_nc.beq' (lhs'.sub rhs').toPoly_nc
theorem eq_norm_expr_nc {α} [Ring α] (ctx : Context α) (lhs rhs : Expr) (lhs' rhs' : Expr)
: norm_eq_nc_cert lhs rhs lhs' rhs' → (lhs.denote ctx = rhs.denote ctx) = (lhs'.denote ctx = rhs'.denote ctx) := by
simp [norm_eq_nc_cert]; intro h
replace h := congrArg (Poly.denote ctx) h; simp [Expr.denote_toPoly_nc] at h
replace h : lhs.denote ctx - rhs.denote ctx = lhs'.denote ctx - rhs'.denote ctx := h
rw [← AddCommGroup.sub_eq_zero_iff, h, AddCommGroup.sub_eq_zero_iff]
/-!
Helper theorems for quick normalization
-/
theorem le_norm0 {α} [Ring α] [LE α] (lhs rhs : α) : (lhs ≤ rhs) = (lhs ≤ rhs + Int.cast (R := α) 0) := by
rw [Ring.intCast_zero, Semiring.add_zero]
theorem lt_norm0 {α} [Ring α] [LT α] (lhs rhs : α) : (lhs < rhs) = (lhs < rhs + Int.cast (R := α) 0) := by
rw [Ring.intCast_zero, Semiring.add_zero]
theorem eq_norm0 {α} [Ring α] (lhs rhs : α) : (lhs = rhs) = (lhs = rhs + Int.cast (R := α) 0) := by
rw [Ring.intCast_zero, Semiring.add_zero]
end Lean.Grind.CommRing
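As a sanity check for the quick-normalization lemmas, here is a hedged sketch of how `le_norm0` is meant to be applied. It reuses the statement exactly as given in this hunk; note that `Ring` here refers to the `Lean.Grind.Ring` class used throughout this file, not any external library's class.

```lean
-- Sketch only: applies `le_norm0` as stated above to rewrite a `≤` constraint
-- into the `rhs + (0 : Int)` form expected by the normalizer.
open Lean.Grind.CommRing in
example {α} [Lean.Grind.Ring α] [LE α] (lhs rhs : α) :
    (lhs ≤ rhs) = (lhs ≤ rhs + Int.cast (R := α) 0) :=
  le_norm0 lhs rhs
```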


@@ -4,11 +4,9 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
module
prelude
public import Init.Grind.Attr
public import Init.Core
public section
namespace Lean.Grind
@@ -98,8 +96,16 @@ structure Config where
zeta := true
/--
When `true` (default: `true`), uses a procedure for handling equalities over commutative rings.
This solver also supports commutative semirings and fields, and can normalize non-commutative
rings and semirings.
-/
ring := true
/--
Maximum number of steps performed by the `ring` solver.
A step is counted whenever one polynomial is used to simplify another.
For example, given `x^2 + 1` and `x^2 * y^3 + x * y`, the first can be
used to simplify the second to `-1 * y^3 + x * y`.
-/
ringSteps := 10000
/--
When `true` (default: `true`), uses a procedure for handling linear arithmetic for `IntModule`, and
@@ -114,6 +120,12 @@ structure Config where
When `true` (default: `true`), uses a procedure for handling associative (and commutative) operators.
-/
ac := true
/--
Maximum number of steps performed by the `ac` solver.
A step is counted whenever one AC equation is used to simplify another.
For example, given `max x z = w` and `max x (max y z) = x`, the first can be
used to simplify the second to `max w y = x`.
-/
acSteps := 1000
/--
Maximum exponent eagerly evaluated while computing bounds for `ToInt` and
@@ -124,6 +136,19 @@ structure Config where
When `true` (default: `true`), automatically creates an auxiliary theorem to store the proof.
-/
abstractProof := true
/--
When `true` (default: `true`), enables the procedure for handling injective functions.
In this mode, `grind` takes into account theorems such as:
```
@[grind inj] theorem double_inj : Function.Injective double
```
-/
inj := true
/--
When `true` (default: `true`), enables the procedure for handling orders that implement
at least `Std.IsPreorder`.
-/
order := true
deriving Inhabited, BEq
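The new `ringSteps`, `acSteps`, `inj`, and `order` fields are set at the call site like any other `grind` config field; a hedged sketch, assuming `grind`'s usual `(field := value)` config syntax:

```lean
-- Sketch only: caps the ring solver at 100 simplification steps and disables
-- the new order procedure for this single call.
example (x y : Int) (h : x = y) : x + 1 = y + 1 := by
  grind (ringSteps := 100) (order := false)
```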
/--
@@ -182,7 +207,10 @@ namespace Lean.Parser.Tactic
syntax grindErase := "-" ident
syntax grindLemma := ppGroup((Attr.grindMod ppSpace)? ident)
-- `!` for enabling minimal indexable subexpression restriction
/--
The `!` modifier instructs `grind` to consider only minimal indexable subexpressions
when selecting patterns.
-/
syntax grindLemmaMin := ppGroup("!" (Attr.grindMod ppSpace)? ident)
syntax grindParam := grindErase <|> grindLemma <|> grindLemmaMin


@@ -855,7 +855,7 @@ which would include `#guard_msgs` itself, and would cause duplicate and/or uncap
The top-level command elaborator only runs the linters if `#guard_msgs` is not present.
-/
syntax (name := guardMsgsCmd)
(docComment)? "#guard_msgs" (ppSpace guardMsgsSpec)? " in" ppLine command : command
(plainDocComment)? "#guard_msgs" (ppSpace guardMsgsSpec)? " in" ppLine command : command
/--
Format and print the info trees for a given command.


@@ -3391,12 +3391,15 @@ of bytes using the UTF-8 encoding. Both the size in bytes (`String.utf8ByteSize`
modifications when the reference to the string is unique.
-/
structure String where ofByteArray ::
/-- The bytes of the UTF-8 encoding of the string. -/
/-- The bytes of the UTF-8 encoding of the string. Since strings have a special representation in
the runtime, this function actually takes linear time and space at runtime. For efficient access
to the string's bytes, use `String.utf8ByteSize` and `String.getUtf8Byte`. -/
bytes : ByteArray
/-- The bytes of the string form valid UTF-8. -/
isValidUtf8 : ByteArray.IsValidUtf8 bytes
attribute [extern "lean_string_to_utf8"] String.bytes
attribute [extern "lean_string_from_utf8_unchecked"] String.ofByteArray
/--
Decides whether two strings are equal. Normally used via the `DecidableEq String` instance and the
@@ -3446,8 +3449,8 @@ convenient than tracking the bounds by hand.
Using its constructor explicitly, it is possible to construct a `Substring` in which one or both of
the positions is invalid for the string. Many operations will return unexpected or confusing results
if the start and stop positions are not valid. Instead, it's better to use API functions that ensure
the validity of the positions in a substring to create and manipulate them.
if the start and stop positions are not valid. For this reason, `Substring` will be deprecated in
favor of `String.Slice`, which always represents a valid substring.
-/
structure Substring where
/-- The underlying string. -/


@@ -599,7 +599,7 @@ theorem Decidable.or_congr_right' [Decidable a] (h : ¬a → (b ↔ c)) : a
**Important**: this function should be used instead of `rw` on `Decidable b`, because the
kernel will get stuck reducing the usage of `propext` otherwise,
and `decide` will not work. -/
@[inline] def decidable_of_iff (a : Prop) (h : a b) [Decidable a] : Decidable b :=
@[inline, expose] def decidable_of_iff (a : Prop) (h : a b) [Decidable a] : Decidable b :=
decidable_of_decidable_of_iff h
/-- Transfer decidability of `b` to decidability of `a`, if the propositions are equivalent.


@@ -1513,6 +1513,15 @@ indicate failure.
-/
@[extern "lean_io_exit"] opaque exit : UInt8 → IO α
/--
Terminates the current process with the provided exit code. `0` indicates success, all other values
indicate failure.
Calling this function is equivalent to calling
[`std::_Exit`](https://en.cppreference.com/w/cpp/utility/program/_Exit.html).
-/
@[extern "lean_io_force_exit"] opaque forceExit : UInt8 → IO α
end Process
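A minimal usage sketch of the new primitive, assuming it mirrors `IO.Process.exit` apart from the documented `std::_Exit` semantics:

```lean
-- Sketch only: terminates immediately via `std::_Exit`, so static destructors
-- and pending finalization (e.g. stream flushing) are skipped.
def main : IO Unit := do
  IO.println "output after this point is not guaranteed to be flushed"
  IO.Process.forceExit 0
```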
/-- Returns the thread ID of the calling thread. -/


@@ -16,10 +16,6 @@ namespace IO
private opaque PromisePointed : NonemptyType.{0}
private structure PromiseImpl (α : Type) : Type where
prom : PromisePointed.type
h : Nonempty α
/--
`Promise α` allows you to create a `Task α` whose value is provided later by calling `resolve`.
@@ -33,7 +29,9 @@ Typical usage is as follows:
If the promise is dropped without ever being resolved, `promise.result?.get` will return `none`.
See `Promise.result!/resultD` for other ways to handle this case.
-/
def Promise (α : Type) : Type := PromiseImpl α
structure Promise (α : Type) : Type where
private prom : PromisePointed.type
private h : Nonempty α
instance [s : Nonempty α] : Nonempty (Promise α) :=
by exact Nonempty.intro { prom := Classical.choice PromisePointed.property, h := s }
@@ -73,9 +71,6 @@ def Promise.result! (promise : @& Promise α) : Task α :=
let _ : Nonempty α := promise.h
promise.result?.map (sync := true) Option.getOrBlock!
@[inherit_doc Promise.result!, deprecated Promise.result! (since := "2025-02-05")]
def Promise.result := @Promise.result!
/--
Like `Promise.result`, but resolves to `dflt` if the promise is dropped without ever being resolved.
-/


@@ -436,8 +436,8 @@ macro "rfl'" : tactic => `(tactic| set_option smartUnfolding false in with_unfol
/--
`ac_rfl` proves equalities up to application of an associative and commutative operator.
```
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩
instance : Std.Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Std.Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩
example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by ac_rfl
```
@@ -1025,7 +1025,7 @@ The form
```
fun_induction f
```
(with no arguments to `f`) searches the goal for an unique eligible application of `f`, and uses
(with no arguments to `f`) searches the goal for a unique eligible application of `f`, and uses
these arguments. An application of `f` is eligible if it is saturated and the arguments that will
become targets are free variables.
@@ -1058,7 +1058,7 @@ The form
```
fun_cases f
```
(with no arguments to `f`) searches the goal for an unique eligible application of `f`, and uses
(with no arguments to `f`) searches the goal for a unique eligible application of `f`, and uses
these arguments. An application of `f` is eligible if it is saturated and the arguments that will
become targets are free variables.
@@ -1563,8 +1563,8 @@ syntax (name := normCastAddElim) "norm_cast_add_elim" ident : command
list of hypotheses in the local context. In the latter case, a turnstile `⊢` or `|-`
can also be used, to signify the target of the goal.
```
instance : Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩
instance : Std.Associative (α := Nat) (.+.) := ⟨Nat.add_assoc⟩
instance : Std.Commutative (α := Nat) (.+.) := ⟨Nat.add_comm⟩
example (a b c d : Nat) : a + b + c + d = d + (b + c) + a := by
ac_nf
@@ -2176,6 +2176,8 @@ macro (name := mspecMacro) (priority:=low) "mspec" : tactic =>
`mvcgen` will break down a Hoare triple proof goal like `⦃P⦄ prog ⦃Q⦄` into verification conditions,
provided that all functions used in `prog` have specifications registered with `@[spec]`.
### Verification Conditions and specifications
A verification condition is an entailment in the stateful logic of `Std.Do.SPred`
in which the original program `prog` no longer occurs.
Verification conditions are introduced by the `mspec` tactic; see the `mspec` tactic for what they
@@ -2183,6 +2185,8 @@ look like.
When there's no applicable `mspec` spec, `mvcgen` will try and rewrite an application
`prog = f a b c` with the simp set registered via `@[spec]`.
### Features
When used like `mvcgen +noLetElim [foo_spec, bar_def, instBEqFloat]`, `mvcgen` will additionally
* add a Hoare triple specification `foo_spec : ... → ⦃P⦄ foo ... ⦃Q⦄` to `spec` set for a
@@ -2191,11 +2195,68 @@ When used like `mvcgen +noLetElim [foo_spec, bar_def, instBEqFloat]`, `mvcgen` w
* unfold any method of the `instBEqFloat : BEq Float` instance in `prog`.
* it will no longer substitute away `let`-expressions that occur at most once in `P`, `Q` or `prog`.
Furthermore, `mvcgen` tries to close trivial verification conditions by `SPred.entails.rfl` or
the tactic sequence `try (mpure_intro; trivial)`. The variant `mvcgen_no_trivial` does not do this.
### Config options
For debugging purposes there is also `mvcgen_step 42` which will do at most 42 VC generation
steps. This is useful for bisecting issues with the generated VCs.
`+noLetElim` is just one config option of many. Check out `Lean.Elab.Tactic.Do.VCGen.Config` for all
options. Of particular note is `stepLimit = some 42`, which is useful for bisecting bugs in
`mvcgen` and tracing its execution.
### Extended syntax
Often, `mvcgen` will be used like this:
```
mvcgen [...]
case inv1 => by exact I1
case inv2 => by exact I2
all_goals (mleave; try grind)
```
There is special syntax for this:
```
mvcgen [...] invariants
· I1
· I2
with grind
```
When `I1` and `I2` need to refer to inaccessibles (`mvcgen` will introduce a lot of them for program
variables), you can use case label syntax:
```
mvcgen [...] invariants
| inv1 _ acc _ => I1 acc
| _ => I2
with grind
```
This is more convenient than the equivalent `· by rename_i _ acc _; exact I1 acc`.
### Invariant suggestions
`mvcgen` will suggest invariants for you if you use the `invariants?` keyword.
```
mvcgen [...] invariants?
```
This is useful if you do not recall the exact syntax to construct invariants.
Furthermore, it will suggest a concrete invariant encoding "this holds at the start of the loop and
this must hold at the end of the loop" by looking at the corresponding VCs.
Although the suggested invariant is a good starting point, it is too strong and requires users to
interpolate it such that the inductive step can be proved. Example:
```
def mySum (l : List Nat) : Nat := Id.run do
let mut acc := 0
for x in l do
acc := acc + x
return acc
/--
info: Try this:
invariants
· ⇓⟨xs, letMuts⟩ => ⌜xs.prefix = [] ∧ letMuts = 0 ∨ xs.suffix = [] ∧ letMuts = l.sum⌝
-/
#guard_msgs (info) in
theorem mySum_suggest_invariant (l : List Nat) : mySum l = l.sum := by
generalize h : mySum l = r
apply Id.of_wp_run_eq h
mvcgen invariants?
all_goals admit
```
-/
macro (name := mvcgenMacro) (priority:=low) "mvcgen" : tactic =>
Macro.throwError "to use `mvcgen`, please include `import Std.Tactic.Do`"


@@ -112,7 +112,7 @@ def run (declNames : Array Name) : CompilerM (Array IR.Decl) := withAtLeastMaxRe
-- Check meta accesses now before optimizations may obscure references. This check should stay in
-- `lean` if some compilation is moved out.
for decl in decls do
checkMeta (isMeta (← getEnv) decl.name) decl
checkMeta decl
let decls := markRecDecls decls
let manager ← getPassManager
let isCheckEnabled := compiler.check.get (← getOptions)


@@ -85,6 +85,9 @@ def builtinPassManager : PassManager := {
-/
simp { etaPoly := true, inlinePartial := true, implementedBy := true } (occurrence := 1),
eagerLambdaLifting,
-- Should be as early as possible but after `eagerLambdaLifting` to make sure instances are
-- checked without nested functions whose bodies specialization does not require access to.
checkTemplateVisibility,
specialize,
simp (occurrence := 2),
cse (shouldElimFunDecls := false) (occurrence := 1),


@@ -101,24 +101,32 @@ instance : Literal Bool where
getLit := getBoolLit
mkLit := mkBoolLit
private partial def getLitAux [Inhabited α] (fvarId : FVarId) (ofNat : Nat → α) (ofNatName : Name) : CompilerM (Option α) := do
private def getLitAux (fvarId : FVarId) (ofNat : Nat → α) (ofNatName : Name) : CompilerM (Option α) := do
let some (.const declName _ #[.fvar fvarId]) ← findLetValue? fvarId | return none
unless declName == ofNatName do return none
let some natLit ← getLit fvarId | return none
return ofNat natLit
def mkNatWrapperInstance [Inhabited α] (ofNat : Nat → α) (ofNatName : Name) (toNat : α → Nat) : Literal α where
def mkNatWrapperInstance (ofNat : Nat → α) (ofNatName : Name) (toNat : α → Nat) : Literal α where
getLit := (getLitAux · ofNat ofNatName)
mkLit x := do
let helperId ← mkAuxLit <| toNat x
return .const ofNatName [] #[.fvar helperId]
instance : Literal UInt8 := mkNatWrapperInstance UInt8.ofNat ``UInt8.ofNat UInt8.toNat
instance : Literal UInt16 := mkNatWrapperInstance UInt16.ofNat ``UInt16.ofNat UInt16.toNat
instance : Literal UInt32 := mkNatWrapperInstance UInt32.ofNat ``UInt32.ofNat UInt32.toNat
instance : Literal UInt64 := mkNatWrapperInstance UInt64.ofNat ``UInt64.ofNat UInt64.toNat
instance : Literal Char := mkNatWrapperInstance Char.ofNat ``Char.ofNat Char.toNat
def mkUIntInstance (matchLit : LitValue → Option α) (litValueCtor : α → LitValue) : Literal α where
getLit fvarId := do
let some (.lit litVal) ← findLetValue? fvarId | return none
return matchLit litVal
mkLit x :=
return .lit <| litValueCtor x
instance : Literal UInt8 := mkUIntInstance (fun | .uint8 x => some x | _ => none) .uint8
instance : Literal UInt16 := mkUIntInstance (fun | .uint16 x => some x | _ => none) .uint16
instance : Literal UInt32 := mkUIntInstance (fun | .uint32 x => some x | _ => none) .uint32
instance : Literal UInt64 := mkUIntInstance (fun | .uint64 x => some x | _ => none) .uint64
end Literals
/--
@@ -386,7 +394,7 @@ def relationFolders : List (Name × Folder) := [
(``UInt64.decLt, Folder.mkBinaryDecisionProcedure UInt64.decLt),
(``UInt64.decLe, Folder.mkBinaryDecisionProcedure UInt64.decLe),
(``Bool.decEq, Folder.mkBinaryDecisionProcedure Bool.decEq),
(``Bool.decEq, Folder.mkBinaryDecisionProcedure String.decEq)
(``String.decEq, Folder.mkBinaryDecisionProcedure String.decEq)
]
def conversionFolders : List (Name × Folder) := [


@@ -123,27 +123,49 @@ open Meta in
Convert a Lean type into a LCNF type used by the code generator.
-/
partial def toLCNFType (type : Expr) : MetaM Expr := do
if isProp type then
return erasedExpr
let type ← whnfEta type
match type with
| .sort u => return .sort u
| .const .. => visitApp type #[]
| .lam n d b bi =>
withLocalDecl n bi d fun x => do
let d ← toLCNFType d
let b ← toLCNFType (b.instantiate1 x)
if b.isErased then
return b
else
return Expr.lam n d (b.abstract #[x]) bi
| .forallE .. => visitForall type #[]
| .app .. => type.withApp visitApp
| .fvar .. => visitApp type #[]
| .proj ``Subtype 0 (.const ``IO.RealWorld.nonemptyType []) =>
return mkConst ``lcRealWorld
| _ => return mkConst ``lcAny
let res ← go type
if (← getEnv).header.isModule then
-- Under the module system, `type` may reduce differently in different modules, leading to
-- IR-level mismatches and thus undefined behavior. We want to make this part independent of the
-- current module and its view of the environment but until then, we force the user to make
-- involved type definitions exposed by checking whether we would infer a different type in the
-- public scope. We ignore failed inference in the public scope because it can easily fail when
-- compiling private declarations using private types, and even if that private code should
-- escape into different modules, it can only generate a static error there, not a
-- miscompilation.
-- Note that always using `withExporting` would not always be correct either because it is
-- ignored in non-`module`s and thus mismatches with upstream `module`s may again occur.
let res'? ← observing <| withExporting <| go type
if let .ok res' := res'? then
if res != res' then
throwError "Compilation failed, locally inferred compilation type{indentD res}\n\
differs from type{indentD res'}\n\
that would be inferred in other modules. This usually means that a type `def` involved \
with the mentioned declarations needs to be `@[expose]`d. This is a current compiler \
limitation for `module`s that may be lifted in the future."
return res
where
go type := do
if isProp type then
return erasedExpr
let type ← whnfEta type
match type with
| .sort u => return .sort u
| .const .. => visitApp type #[]
| .lam n d b bi =>
withLocalDecl n bi d fun x => do
let d ← go d
let b ← go (b.instantiate1 x)
if b.isErased then
return b
else
return Expr.lam n d (b.abstract #[x]) bi
| .forallE .. => visitForall type #[]
| .app .. => type.withApp visitApp
| .fvar .. => visitApp type #[]
| .proj ``Subtype 0 (.const ``IO.RealWorld.nonemptyType []) =>
return mkConst ``lcRealWorld
| _ => return mkConst ``lcAny
whnfEta (type : Expr) : MetaM Expr := do
let type ← whnf type
let type' := type.eta
@@ -158,12 +180,12 @@ where
let d := d.instantiateRev xs
withLocalDecl n bi d fun x => do
let isBorrowed := isMarkedBorrowed d
let mut d := (← toLCNFType d).abstract xs
let mut d := (← go d).abstract xs
if isBorrowed then
d := markBorrowed d
return .forallE n d (← visitForall b (xs.push x)) bi
| _ =>
let e ← toLCNFType (e.instantiateRev xs)
let e ← go (e.instantiateRev xs)
return e.abstract xs
visitApp (f : Expr) (args : Array Expr) := do
@@ -178,7 +200,7 @@ where
if isProp arg <||> isPropFormer arg then
result := mkApp result erasedExpr
else if isTypeFormer arg then
result := mkApp result (← toLCNFType arg)
result := mkApp result (← go arg)
else
result := mkApp result (mkConst ``lcAny)
return result


@@ -9,6 +9,7 @@ prelude
public import Lean.Compiler.LCNF.PhaseExt
public import Lean.Compiler.MetaAttr
public import Lean.Compiler.ImplementedByAttr
import Lean.Compiler.Options
public section
@@ -57,35 +58,61 @@ partial def markDeclPublicRec (phase : Phase) (decl : Decl) : CompilerM Unit :=
markDeclPublicRec phase refDecl
/-- Checks whether references in the given declaration adhere to phase distinction. -/
partial def checkMeta (isMeta : Bool) (origDecl : Decl) : CompilerM Unit := do
if !(← getEnv).header.isModule then
partial def checkMeta (origDecl : Decl) : CompilerM Unit := do
if !(← getEnv).header.isModule || !compiler.checkMeta.get (← getOptions) then
return
let irPhases := getIRPhases (← getEnv) origDecl.name
if irPhases == .all then
return
-- If the meta decl is public, we want to ensure it can only refer to public meta imports so that
-- references to private imports cannot escape the current module. In particular, we check that
-- decls with relevant global attrs are public (`Lean.ensureAttrDeclIsMeta`).
let isPublic := !isPrivateName origDecl.name
go isPublic origDecl |>.run' {}
where go (isPublic : Bool) (decl : Decl) : StateT NameSet CompilerM Unit := do
go (irPhases == .comptime) isPublic origDecl |>.run' {}
where go (isMeta isPublic : Bool) (decl : Decl) : StateT NameSet CompilerM Unit := do
decl.value.forCodeM fun code =>
for ref in collectUsedDecls code do
if (← get).contains ref then
continue
modify (·.insert decl.name)
modify (·.insert ref)
let env ← getEnv
if isMeta && isPublic then
if let some modIdx := (← getEnv).getModuleIdxFor? ref then
if (← getEnv).header.modules[modIdx]?.any (!·.isExported) then
throwError "Invalid `meta` definition `{.ofConstName origDecl.name}`, `{.ofConstName ref}` not publicly marked or imported as `meta`"
match getIRPhases (← getEnv) ref, isMeta with
if let some modIdx := env.getModuleIdxFor? ref then
if Lean.isMeta env ref then
if env.header.modules[modIdx]?.any (!·.isExported) then
throwError "Invalid public `meta` definition `{.ofConstName origDecl.name}`, \
`{.ofConstName ref}` is not accessible here; consider adding \
`public import {env.header.moduleNames[modIdx]!}`"
else
-- TODO: does not account for `public import` + `meta import`, which is not the same
if env.header.modules[modIdx]?.any (!·.isExported) then
throwError "Invalid public `meta` definition `{.ofConstName origDecl.name}`, \
`{.ofConstName ref}` is not accessible here; consider adding \
`public meta import {env.header.moduleNames[modIdx]!}`"
match getIRPhases env ref, isMeta with
| .runtime, true =>
throwError "Invalid `meta` definition `{.ofConstName origDecl.name}`, may not access declaration `{.ofConstName ref}` not marked or imported as `meta`"
if let some modIdx := env.getModuleIdxFor? ref then
throwError "Invalid `meta` definition `{.ofConstName origDecl.name}`, \
`{.ofConstName ref}` is not accessible here; consider adding \
`meta import {env.header.moduleNames[modIdx]!}`"
else
throwError "Invalid `meta` definition `{.ofConstName origDecl.name}`, \
`{.ofConstName ref}` not marked `meta`"
| .comptime, false =>
throwError "Invalid definition `{.ofConstName origDecl.name}`, may not access declaration `{.ofConstName ref}` marked or imported as `meta`"
| _, _ =>
if let some modIdx := env.getModuleIdxFor? ref then
if !Lean.isMeta env ref then
throwError "Invalid definition `{.ofConstName origDecl.name}`, may not access \
declaration `{.ofConstName ref}` imported as `meta`; consider adding \
`import {env.header.moduleNames[modIdx]!}`"
throwError "Invalid definition `{.ofConstName origDecl.name}`, may not access \
declaration `{.ofConstName ref}` marked as `meta`"
| irPhases, _ =>
-- We allow auxiliary defs to be used in either phase but we need to recursively check
-- *their* references in this case. We also need to do this for non-auxiliary defs in case a
-- public meta def tries to use a private meta import via a local private meta def :/ .
if let some refDecl ← getLocalDecl? ref then
go isPublic refDecl
if irPhases == .all || isPublic && isPrivateName ref then
if let some refDecl ← getLocalDecl? ref then
go isMeta isPublic refDecl
/--
Checks meta availability just before `evalConst`. This is a "last line of defense" as accesses
@@ -104,13 +131,48 @@ where go (decl : Decl) : StateT NameSet (Except String) Unit :=
for ref in collectUsedDecls code do
if (← get).contains ref then
continue
modify (·.insert decl.name)
modify (·.insert ref)
if let some localDecl := baseExt.getState env |>.find? ref then
go localDecl
else
if getIRPhases env ref == .runtime then
throw s!"Cannot evaluate constant `{declName}` as it uses `{ref}` which is neither marked nor imported as `meta`"
/--
Checks that imports necessary for inlining/specialization are public as otherwise we may run into
unknown declarations at the point of inlining/specializing. This is a limitation that we want to
lift in the future by moving main compilation into a different process that can use a different
import/incremental system.
-/
partial def checkTemplateVisibility : Pass where
occurrence := 0
phase := .base
name := `checkTemplateVisibility
run decls := do
if (← getEnv).header.isModule then
for decl in decls do
-- A private template-like decl cannot directly be used by a different module. If it could be used
-- indirectly via a public template-like, we do a recursive check when checking the latter.
if !isPrivateName decl.name && (← decl.isTemplateLike) then
let isMeta := isMeta (← getEnv) decl.name
go decl decl |>.run' {}
return decls
where go (origDecl decl : Decl) : StateT NameSet CompilerM Unit := do
decl.value.forCodeM fun code =>
for ref in collectUsedDecls code do
if (← get).contains ref then
continue
modify (·.insert decl.name)
if let some localDecl := baseExt.getState (← getEnv) |>.find? ref then
-- check transitively through local decls
if isPrivateName localDecl.name && (← localDecl.isTemplateLike) then
go origDecl localDecl
else if let some modIdx := (← getEnv).getModuleIdxFor? ref then
if (← getEnv).header.modules[modIdx]?.any (!·.isExported) then
throwError "Cannot compile inline/specializing declaration `{.ofConstName origDecl.name}` as \
it uses `{.ofConstName ref}` of module `{(← getEnv).header.moduleNames[modIdx]!}` \
which must be imported publicly. This limitation may be lifted in the future."
def inferVisibility (phase : Phase) : Pass where
occurrence := 0
phase

View File

@@ -19,4 +19,11 @@ register_builtin_option compiler.check : Bool := {
descr := "type check code after each compiler step (this is useful for debugging purses)"
}
register_builtin_option compiler.checkMeta : Bool := {
defValue := true
descr := "Check that `meta` declarations only refer to other `meta` declarations and ditto for \
non-`meta` declarations. Disabling this option may lead to delayed compiler errors and is \
intended only for debugging purposes."
}
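The new option can be toggled per file; a minimal sketch of how a user would disable it while debugging (the option name and default come from the registration above):

```lean
-- Minimal sketch: temporarily disable the meta-reference check.
-- With `compiler.checkMeta` off, a non-`meta` definition that refers to a
-- `meta`-only declaration compiles here, but the error surfaces later,
-- e.g. at `evalConst` time. Intended only for debugging.
set_option compiler.checkMeta false
```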
end Lean.Compiler

View File

@@ -8,8 +8,8 @@ module
prelude
public import Init.Prelude
import Init.Data.Stream
import Init.Data.Range.Polymorphic.Nat
import Init.Data.Range.Polymorphic.Iterators
public import Init.Data.Range.Polymorphic.Nat
public import Init.Data.Range.Polymorphic.Iterators
namespace Array

View File

@@ -64,7 +64,7 @@ private def needEscape (s : String) : Bool :=
where
go (s : String) (i : Nat) : Bool :=
if h : i < s.utf8ByteSize then
let byte := s.getUtf8Byte i h
have h1 : byte.toNat < 256 := UInt8.toNat_lt_size byte
have h2 : escapeTable.val.size = 256 := escapeTable.property
if escapeTable.val.get byte.toNat (Nat.lt_of_lt_of_eq h1 h2.symm) == 0 then

View File

@@ -94,6 +94,10 @@ def utf8PosToLspPos (text : FileMap) (pos : String.Pos) : Lsp.Position :=
def utf8RangeToLspRange (text : FileMap) (range : String.Range) : Lsp.Range :=
{ start := text.utf8PosToLspPos range.start, «end» := text.utf8PosToLspPos range.stop }
/-- Gets the LSP range of syntax `stx`. -/
def lspRangeOfStx? (text : FileMap) (stx : Syntax) (canonicalOnly := false) : Option Lsp.Range :=
text.utf8RangeToLspRange <$> stx.getRange? canonicalOnly
def lspRangeToUtf8Range (text : FileMap) (range : Lsp.Range) : String.Range :=
{ start := text.lspPosToUtf8Pos range.start, stop := text.lspPosToUtf8Pos range.end }
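The new `lspRangeToUtf8Range` inverts `utf8RangeToLspRange` above. A sketch of the intended round-trip, stated but not proved here (it assumes the range's endpoints are valid in-bounds UTF-8 positions in the file):

```lean
-- Converting an in-bounds UTF-8 range to an LSP range and back should
-- recover the original range:
example (text : FileMap) (range : String.Range) : Prop :=
  text.lspRangeToUtf8Range (text.utf8RangeToLspRange range) = range
```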

View File

@@ -8,5 +8,3 @@ module
prelude
public import Lean.Data.NameMap.Basic
public import Lean.Data.NameMap.AdditionalOperations
public section

View File

@@ -63,7 +63,7 @@ instance : Inhabited (Trie α) where
partial def upsert (t : Trie α) (s : String) (f : Option α → α) : Trie α :=
let rec insertEmpty (i : Nat) : Trie α :=
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
let t := insertEmpty (i + 1)
node1 none c t
else
@@ -71,14 +71,14 @@ partial def upsert (t : Trie α) (s : String) (f : Option α → α) : Trie α :
let rec loop
| i, leaf v =>
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
let t := insertEmpty (i + 1)
node1 v c t
else
leaf (f v)
| i, node1 v c' t' =>
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
if c == c'
then node1 v c' (loop (i + 1) t')
else
@@ -88,7 +88,7 @@ partial def upsert (t : Trie α) (s : String) (f : Option α → α) : Trie α :
node1 (f v) c' t'
| i, node v cs ts =>
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
match cs.findIdx? (· == c) with
| none =>
let t := insertEmpty (i + 1)
@@ -113,7 +113,7 @@ partial def find? (t : Trie α) (s : String) : Option α :=
val
| i, node1 val c' t' =>
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
if c == c'
then loop (i + 1) t'
else none
@@ -121,7 +121,7 @@ partial def find? (t : Trie α) (s : String) : Option α :=
val
| i, node val cs ts =>
if h : i < s.utf8ByteSize then
let c := s.getUtf8Byte i h
match cs.findIdx? (· == c) with
| none => none
| some idx => loop (i + 1) ts[idx]!
@@ -150,7 +150,7 @@ partial def findPrefix (t : Trie α) (pre : String) : Array α := go t 0
where
go (t : Trie α) (i : Nat) : Array α :=
if h : i < pre.utf8ByteSize then
let c := pre.getUtf8Byte i h
match t with
| leaf _val => .empty
| node1 _val c' t' =>
@@ -175,7 +175,7 @@ partial def matchPrefix (s : String) (t : Trie α) (i : String.Pos)
| node1 v c' t', i, res =>
let res := if v.isSome then v else res
if h : i < endByte then
let c := s.getUtf8Byte i (by omega)
let c := s.getUtf8Byte i (by simp; omega)
if c == c'
then loop t' (i + 1) res
else res
@@ -184,7 +184,7 @@ partial def matchPrefix (s : String) (t : Trie α) (i : String.Pos)
| node v cs ts, i, res =>
let res := if v.isSome then v else res
if h : i < endByte then
let c := s.getUtf8Byte i (by omega)
let c := s.getUtf8Byte i (by simp; omega)
match cs.findIdx? (· == c) with
| none => res
| some idx => loop ts[idx]! (i + 1) res

View File

@@ -14,7 +14,7 @@ import Lean.Elab.DocString
import Lean.DocString.Extension
import Lean.DocString.Links
import Lean.Parser.Types
import Lean.DocString.Parser
public import Lean.DocString.Parser
import Lean.ResolveName
public import Lean.Elab.Term.TermElabM
import Std.Data.HashMap
@@ -50,23 +50,20 @@ def validateDocComment
else
logError err
open Lean.Doc in
open Parser in
open Lean.Parser Command in
/--
Adds a Verso docstring to the specified declaration, which should already be present in the
environment.
`binders` should be the syntax of the parameters to the constant that is being documented, as a null
node that contains a sequence of bracketed binders. It is used to allow interactive features such as
document highlights and “find references” to work for documented parameters. If no parameter binders
are available, pass `Syntax.missing` or an empty null node.
Parses a docstring as Verso, returning the syntax if successful.
When not successful, parser errors are logged.
-/
def versoDocString
(declName : Name) (binders : Syntax) (docComment : TSyntax `Lean.Parser.Command.docComment) :
TermElabM (Array (Doc.Block ElabInline ElabBlock) × Array (Doc.Part ElabInline ElabBlock Empty)) := do
def parseVersoDocString
[Monad m] [MonadFileMap m] [MonadError m] [MonadEnv m] [MonadOptions m] [MonadLog m]
[MonadResolveName m]
(docComment : TSyntax [``docComment, ``moduleDoc]) : m (Option Syntax) := do
if docComment.raw.getKind == ``docComment then
match docComment.raw[0] with
| docStx@(.node _ ``versoCommentBody _) => return docStx[1]?
| _ => pure ()
let text ← getFileMap
-- TODO fallback to string version without nice interactivity
let some startPos := docComment.raw[1].getPos? (canonicalOnly := true)
@@ -93,7 +90,7 @@ def versoDocString
}
let s := mkParserState text.source |>.setPos startPos
-- TODO parse one block at a time for error recovery purposes
let s := (Doc.Parser.document).run ictx pmctx (getTokenTable env) s
let s := Doc.Parser.document.run ictx pmctx (getTokenTable env) s
if !s.allErrors.isEmpty then
for (pos, _, err) in s.allErrors do
@@ -103,11 +100,42 @@ def versoDocString
-- TODO end position
data := err.toString
}
return (#[], #[])
else
let stx := s.stxStack.back
let stx := stx.getArgs
Doc.elabBlocks (stx.map (⟨·⟩)) |>.exec declName binders
return none
return some s.stxStack.back
open Lean.Doc in
open Lean.Parser.Command in
/--
Elaborates a Verso docstring for the specified declaration, which should already be present in the
environment.
`binders` should be the syntax of the parameters to the constant that is being documented, as a null
node that contains a sequence of bracketed binders. It is used to allow interactive features such as
document highlights and “find references” to work for documented parameters. If no parameter binders
are available, pass `Syntax.missing` or an empty null node.
-/
def versoDocString
(declName : Name) (binders : Syntax) (docComment : TSyntax ``docComment) :
TermElabM (Array (Doc.Block ElabInline ElabBlock) × Array (Doc.Part ElabInline ElabBlock Empty)) := do
if let some stx ← parseVersoDocString docComment then
Doc.elabBlocks (stx.getArgs.map (⟨·⟩)) |>.exec declName binders
else return (#[], #[])
open Lean.Doc Parser in
open Lean.Parser.Command in
/--
Parses and elaborates a Verso module docstring.
-/
def versoModDocString
(range : DeclarationRange) (doc : TSyntax ``document) :
TermElabM VersoModuleDocs.Snippet := do
let level := getVersoModuleDocs (← getEnv) |>.terminalNesting |>.map (· + 1)
Doc.elabModSnippet range (doc.raw.getArgs.map (⟨·⟩)) (level.getD 0) |>.execForModule
open Lean.Doc in
open Parser in
@@ -131,7 +159,7 @@ def versoDocStringFromString
}
let s := mkParserState docComment
-- TODO parse one block at a time for error recovery purposes
let s := (Doc.Parser.document).run ictx pmctx (getTokenTable env) s
let s := Doc.Parser.document.run ictx pmctx (getTokenTable env) s
if !s.allErrors.isEmpty then
for (pos, _, err) in s.allErrors do
@@ -185,6 +213,19 @@ def addVersoDocStringCore [Monad m] [MonadEnv m] [MonadLiftT BaseIO m] [MonadErr
modifyEnv fun env =>
versoDocStringExt.insert env declName docs
/--
Adds an elaborated Verso module docstring to the environment.
-/
def addVersoModDocStringCore [Monad m] [MonadEnv m] [MonadLiftT BaseIO m] [MonadError m]
(docs : VersoModuleDocs.Snippet) : m Unit := do
if (getMainModuleDoc (← getEnv)).isEmpty then
match addVersoModuleDocSnippet (← getEnv) docs with
| .error e => throwError "Error adding module docs: {indentD <| toMessageData e}"
| .ok env' => setEnv env'
else
throwError m!"Can't add Verso-format module docs because there is already Markdown-format content present."
open Lean.Parser.Command in
/--
Adds a Verso docstring to the environment.
@@ -194,7 +235,7 @@ document highlights and “find references” to work for documented parameters.
are available, pass `Syntax.missing` or an empty null node.
-/
def addVersoDocString
(declName : Name) (binders : Syntax) (docComment : TSyntax `Lean.Parser.Command.docComment) :
(declName : Name) (binders : Syntax) (docComment : TSyntax ``docComment) :
TermElabM Unit := do
unless (← getEnv).getModuleIdxFor? declName |>.isNone do
throwError s!"invalid doc string, declaration '{declName}' is in an imported module"
@@ -278,3 +319,20 @@ def addDocString'
match docString? with
| some docString => addDocString declName binders docString
| none => return ()
open Lean.Parser.Command in
open Lean.Doc.Parser in
/--
Adds a Verso docstring to the environment.
`binders` should be the syntax of the parameters to the constant that is being documented, as a null
node that contains a sequence of bracketed binders. It is used to allow interactive features such as
document highlights and “find references” to work for documented parameters. If no parameter binders
are available, pass `Syntax.missing` or an empty null node.
-/
def addVersoModDocString
(range : DeclarationRange) (docComment : TSyntax ``document) :
TermElabM Unit := do
let snippet ← versoModDocString range docComment
addVersoModDocStringCore snippet

View File

@@ -12,7 +12,7 @@ public import Lean.DocString.Links
public import Lean.MonadEnv
public import Init.Data.String.Extra
public import Lean.DocString.Types
import Lean.DocString.Markdown
public import Lean.DocString.Markdown
public section
@@ -38,7 +38,7 @@ instance : Repr ElabInline where
.group (.nestD ("{ name :=" ++ .line ++ repr v.name)) ++ .line ++
.group (.nestD ("val :=" ++ .line ++ "Dynamic.mk " ++ repr v.val.typeName ++ " _ }"))
private instance : Doc.MarkdownInline ElabInline where
instance : Doc.MarkdownInline ElabInline where
-- TODO extensibility
toMarkdown go _i content := content.forM go
@@ -59,7 +59,7 @@ instance : Repr ElabBlock where
-- TODO extensible toMarkdown
private instance : Doc.MarkdownBlock ElabInline ElabBlock where
instance : Doc.MarkdownBlock ElabInline ElabBlock where
toMarkdown _goI goB _b content := content.forM goB
structure VersoDocString where
@@ -214,5 +214,211 @@ def getModuleDoc? (env : Environment) (moduleName : Name) : Option (Array Module
def getDocStringText [Monad m] [MonadError m] (stx : TSyntax `Lean.Parser.Command.docComment) : m String :=
match stx.raw[1] with
| Syntax.atom _ val => return val.extract 0 (val.endPos - 2)
| _ => throwErrorAt stx "unexpected doc string{indentD stx.raw[1]}"
| Syntax.atom _ val =>
return val.extract 0 (val.endPos - 2)
| Syntax.node _ `Lean.Parser.Command.versoCommentBody _ =>
match stx.raw[1][0] with
| Syntax.atom _ val =>
return val.extract 0 (val.endPos - 2)
| _ =>
throwErrorAt stx "unexpected doc string{indentD stx}"
| _ =>
throwErrorAt stx "unexpected doc string{indentD stx}"
/--
A snippet of a Verso module text.
A snippet consists of text followed by subsections. Because the sequence of snippets that occur in a
source file are conceptually a single document, they have a consistent header nesting structure.
This means that initial textual content of a snippet is a continuation of the text at the end of the
prior snippet.
The actual hierarchical structure of the document is reconstructed from the sequence of snippets.
The _terminal nesting_ of a sequence of snippets is 0 if there are no sections in the sequence.
Otherwise, it is one greater than the nesting of the last snippet's last section. The module
docstring elaborator maintains the invariant that each snippet's first section's level is at most
the terminal nesting of the preceding snippets, and that the level of each section within a snippet
is at most one greater than the preceding section's level.
-/
structure VersoModuleDocs.Snippet where
/-- Text to be inserted after the prior snippet's ending text. -/
text : Array (Doc.Block ElabInline ElabBlock) := #[]
/--
A sequence of parts with their absolute document nesting levels and header positions.
None of these parts may contain sub-parts.
-/
sections : Array (Nat × DeclarationRange × Doc.Part ElabInline ElabBlock Empty) := #[]
/--
The location of the snippet in the source file.
-/
declarationRange : DeclarationRange
deriving Inhabited, Repr
namespace VersoModuleDocs.Snippet
def canNestIn (level : Nat) (snippet : Snippet) : Bool :=
if let some s := snippet.sections[0]? then s.1 ≤ level + 1 else true
def terminalNesting (snippet : Snippet) : Option Nat :=
if let some s := snippet.sections.back? then some s.1
else none
def addBlock (snippet : Snippet) (block : Doc.Block ElabInline ElabBlock) : Snippet :=
if h : snippet.sections.size = 0 then
{ snippet with text := snippet.text.push block }
else
{ snippet with
sections[snippet.sections.size - 1].2.2.content :=
snippet.sections[snippet.sections.size - 1].2.2.content.push block }
def addPart (snippet : Snippet) (level : Nat) (range : DeclarationRange) (part : Doc.Part ElabInline ElabBlock Empty) : Snippet :=
{ snippet with
sections := snippet.sections.push (level, range, part) }
end VersoModuleDocs.Snippet
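The invariant in the docstring above can be illustrated with hypothetical snippets; values are sketched in comments, and `r`, `intro`, and `detail` are placeholders, not names from this PR:

```lean
-- `canNestIn` checks the *first* section's level; `terminalNesting`
-- reports the *last* section's level.
--
--   s₁ : Snippet := { sections := #[(0, r, intro)],  declarationRange := r }
--   s₂ : Snippet := { sections := #[(1, r, detail)], declarationRange := r }
--
--   s₁.terminalNesting  -- some 0
--   s₂.canNestIn 0      -- true:  1 ≤ 0 + 1, so s₂ may follow s₁
--   { s₂ with sections := #[(3, r, detail)] }.canNestIn 0
--                       -- false: a level-3 section can't follow level 0
```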
open Lean Doc ToMarkdown MarkdownM in
instance : ToMarkdown VersoModuleDocs.Snippet where
toMarkdown
| {text, sections, ..} => do
text.forM toMarkdown
endBlock
for (level, _, part) in sections do
push ("".pushn '#' (level + 1))
push " "
for i in part.title do toMarkdown i
endBlock
for b in part.content do toMarkdown b
endBlock
structure VersoModuleDocs where
snippets : PersistentArray VersoModuleDocs.Snippet := {}
terminalNesting : Option Nat := snippets.findSomeRev? (·.terminalNesting)
deriving Inhabited
instance : Repr VersoModuleDocs where
reprPrec v _ :=
.group <| .nest 2 <|
"{ " ++
(.group <| .nest 2 <| "snippets := " ++ .line ++ repr v.snippets.toArray) ++ .line ++
(.group <| .nest 2 <| "snippets := " ++ .line ++ repr v.snippets.toArray) ++
" }"
namespace VersoModuleDocs
def isEmpty (docs : VersoModuleDocs) : Bool := docs.snippets.isEmpty
def canAdd (docs : VersoModuleDocs) (snippet : Snippet) : Bool :=
if let some level := docs.terminalNesting then
snippet.canNestIn level
else true
def add (docs : VersoModuleDocs) (snippet : Snippet) : Except String VersoModuleDocs := do
unless docs.canAdd snippet do
throw "Can't nest this snippet here"
return { docs with
snippets := docs.snippets.push snippet,
terminalNesting := snippet.terminalNesting
}
def add! (docs : VersoModuleDocs) (snippet : Snippet) : VersoModuleDocs :=
let ok :=
if let some level := docs.terminalNesting then
snippet.canNestIn level
else true
if not ok then
panic! "Can't nest this snippet here"
else
{ docs with
snippets := docs.snippets.push snippet,
terminalNesting := snippet.terminalNesting
}
private structure DocFrame where
content : Array (Doc.Block ElabInline ElabBlock)
priorParts : Array (Doc.Part ElabInline ElabBlock Empty)
titleString : String
title : Array (Doc.Inline ElabInline)
private structure DocContext where
content : Array (Doc.Block ElabInline ElabBlock)
priorParts : Array (Doc.Part ElabInline ElabBlock Empty)
context : Array DocFrame
private def DocContext.level (ctx : DocContext) : Nat := ctx.context.size
private def DocContext.close (ctx : DocContext) : Except String DocContext := do
if h : ctx.context.size = 0 then
throw "Can't close a section: none are open"
else
let last := ctx.context.back
pure {
content := last.content,
priorParts := last.priorParts.push {
title := last.title,
titleString := last.titleString,
metadata := none,
content := ctx.content,
subParts := ctx.priorParts,
},
context := ctx.context.pop
}
private partial def DocContext.closeAll (ctx : DocContext) : Except String DocContext := do
if ctx.context.size = 0 then
return ctx
else
(← ctx.close).closeAll
private partial def DocContext.addPart (ctx : DocContext) (partLevel : Nat) (part : Doc.Part ElabInline ElabBlock Empty) : Except String DocContext := do
if partLevel > ctx.level then throw s!"Invalid nesting: expected at most {ctx.level} but got {partLevel}"
else if partLevel = ctx.level then pure { ctx with priorParts := ctx.priorParts.push part }
else
let ctx ← ctx.close
ctx.addPart partLevel part
private def DocContext.addBlocks (ctx : DocContext) (blocks : Array (Doc.Block ElabInline ElabBlock)) : Except String DocContext := do
if ctx.priorParts.isEmpty then pure { ctx with content := ctx.content ++ blocks }
else throw "Can't add content after sub-parts"
private def DocContext.addSnippet (ctx : DocContext) (snippet : Snippet) : Except String DocContext := do
let mut ctx ← ctx.addBlocks snippet.text
for (l, _, p) in snippet.sections do
ctx ← ctx.addPart l p
return ctx
def assemble (docs : VersoModuleDocs) : Except String VersoDocString := do
let mut ctx : DocContext := {content := #[], priorParts := #[], context := #[]}
for snippet in docs.snippets do
ctx ← ctx.addSnippet snippet
ctx ← ctx.closeAll
return { text := ctx.content, subsections := ctx.priorParts }
end VersoModuleDocs
private builtin_initialize versoModuleDocExt :
SimplePersistentEnvExtension VersoModuleDocs.Snippet VersoModuleDocs ← registerSimplePersistentEnvExtension {
addImportedFn := fun _ => {}
addEntryFn := fun s e => s.add! e
exportEntriesFnEx? := some fun _ _ es level =>
if level < .server then
#[]
else
es.toArray
}
def getVersoModuleDocs (env : Environment) : VersoModuleDocs :=
versoModuleDocExt.getState env
def addVersoModuleDocSnippet (env : Environment) (snippet : VersoModuleDocs.Snippet) : Except String Environment :=
let docs := getVersoModuleDocs env
if docs.canAdd snippet then
pure <| versoModuleDocExt.addEntry env snippet
else throw s!"Can't add - incorrect nesting {docs.terminalNesting.map (s!"(expected at most {·})") |>.getD ""}"
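A minimal sketch of using the new API together; the helper name `tryAddSnippet` is illustrative, not part of this PR:

```lean
-- Append a snippet and report the updated terminal nesting. The `Except`
-- propagates the nesting error from `addVersoModuleDocSnippet`.
def tryAddSnippet (env : Environment) (snippet : VersoModuleDocs.Snippet) :
    Except String (Option Nat) := do
  let env' ← addVersoModuleDocSnippet env snippet
  return (getVersoModuleDocs env').terminalNesting
```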

View File

@@ -0,0 +1,224 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: David Thrane Christiansen
-/
module
prelude
public import Lean.PrettyPrinter.Formatter
import Lean.DocString.Parser
namespace Lean.Doc.Parser
open Lean.PrettyPrinter Formatter
open Lean.Syntax.MonadTraverser
open Lean.Doc.Syntax
def atomString : Syntax → String
| .node _ _ #[x] => atomString x
| .atom _ x => x
| stx => s!"NON-ATOM {stx}"
def pushAtomString : Formatter := do
push <| atomString (← getCur)
goLeft
def pushAtomStrLit : Formatter := do
push <| (Syntax.decodeStrLit (atomString (← getCur))).getD ""
goLeft
def identString : Syntax → String
| .node _ _ #[x] => identString x
| .ident _ _ x _ => toString x
| stx => s!"NON-IDENT {stx}"
def pushIdent : Formatter := do
push <| identString (← getCur)
goLeft
def rep (f : Formatter) : Formatter := concat do
let count := (← getCur).getArgs.size
visitArgs do count.forM fun _ _ => do f
partial def versoSyntaxToString' (stx : Syntax) : ReaderT Nat (StateM String) Unit := do
if stx.getKind == nullKind then
stx.getArgs.forM versoSyntaxToString'
else
match stx with
| `(arg_val|$s:str) => out <| atomString s
| `(arg_val|$n:num) => out <| atomString n
| `(arg_val|$x:ident) => out <| identString x
| `(doc_arg|($x := $v)) | `(doc_arg|$x:ident := $v) =>
out "("
out <| identString x
out " := "
versoSyntaxToString' v
out ")"
| `(doc_arg|+%$tk$x:ident) | `(doc_arg|-%$tk$x:ident) =>
out <| atomString tk
out <| identString x
| `(doc_arg|$x:arg_val) => versoSyntaxToString' x
| `(inline|$s:str) => out s.getString
| `(inline|_[%$tk1 $inl* ]%$tk2) | `(inline|*[%$tk1 $inl* ]%$tk2) =>
out <| atomString tk1
inl.forM versoSyntaxToString'
out <| atomString tk2
| `(inline|link[%$tk1 $inl* ]%$tk2 $tgt) =>
out <| atomString tk1
inl.forM versoSyntaxToString'
out <| atomString tk2
versoSyntaxToString' tgt
| `(inline|image(%$tk1 $alt )%$tk2 $tgt) =>
out <| atomString tk1
out <| (Syntax.decodeStrLit (atomString alt)).getD ""
out <| atomString tk2
versoSyntaxToString' tgt
| `(inline|role{$name $args*}[$inls*]) =>
out "{"
out <| identString name
for arg in args do
out " "
versoSyntaxToString' arg
out "}["
inls.forM versoSyntaxToString'
out "]"
| `(inline|code(%$tk1$s)%$tk2) =>
out <| atomString tk1
out <| (Syntax.decodeStrLit (atomString s)).getD ""
out <| atomString tk2
| `(inline|footnote(%$tk1 $ref )%$tk2) =>
out <| atomString tk1
out <| (Syntax.decodeStrLit (atomString ref)).getD ""
out <| atomString tk2
| `(inline|line! $s) =>
out <| (Syntax.decodeStrLit (atomString s)).getD ""
| `(inline|\math%$tk1 code(%$tk2 $s )%$tk3)
| `(inline|\displaymath%$tk1 code(%$tk2 $s )%$tk3) =>
out <| atomString tk1
out <| atomString tk2
out <| (Syntax.decodeStrLit (atomString s)).getD ""
out <| atomString tk3
| `(link_target|[%$tk1 $ref ]%$tk2) =>
out <| atomString tk1
out <| (Syntax.decodeStrLit (atomString ref)).getD ""
out <| atomString tk2
| `(link_target|(%$tk1 $url )%$tk2) =>
out <| atomString tk1
out <| (Syntax.decodeStrLit (atomString url)).getD ""
out <| atomString tk2
| `(block|header($n){$inl*}) =>
out <| "#".pushn '#' n.getNat ++ " "
inl.forM versoSyntaxToString'
endBlock
| `(block|para[$inl*]) =>
startBlock
inl.forM versoSyntaxToString'
endBlock
| `(block|ul{$items*}) =>
startBlock
items.forM fun
| `(list_item|* $blks*) => do
out "* "
withReader (· + 2) (blks.forM versoSyntaxToString')
endBlock
| _ => pure ()
endBlock
| `(block|ol($n){$items*}) =>
startBlock
let mut n := n.getNat
for item in items do
match item with
| `(list_item|* $blks*) => do
out s!"{n}. "
withReader (· + 2) (blks.forM versoSyntaxToString')
endBlock
n := n + 1
| _ => pure ()
endBlock
| `(block| > $blks*) =>
startBlock
out "> "
withReader (· + 2) (blks.forM versoSyntaxToString')
endBlock
| `(block| ```%$tk1 $name $args* | $s ```%$tk2) =>
startBlock
out <| atomString tk1
out <| identString name
for arg in args do
out " "
versoSyntaxToString' arg
out "\n"
let i ← read
let s := Syntax.decodeStrLit (atomString s) |>.getD ""
|>.split (· == '\n')
|>.map ("".pushn ' ' i ++ · ) |> "\n".intercalate
out s
out <| "".pushn ' ' i
out <| atomString tk2
endBlock
| `(block| :::%$tk1 $name $args* {$blks*}%$tk2) =>
startBlock
out <| atomString tk1
out " "
out <| identString name
for arg in args do
out " "
versoSyntaxToString' arg
out "\n"
blks.forM versoSyntaxToString'
let i ← read
out <| "".pushn ' ' i
out <| atomString tk2
endBlock
| `(block|command{ $name $args* }) =>
startBlock
out <| "{"
out <| identString name
for arg in args do
out " "
versoSyntaxToString' arg
out "}"
endBlock
| other => out (toString other)
where
out (s : String) : ReaderT Nat (StateM String) Unit := modify (· ++ s)
nl : ReaderT Nat (StateM String) Unit := read >>= fun n => modify (· ++ "\n".pushn ' ' n)
startBlock : ReaderT Nat (StateM String) Unit := do
let s ← get
if s.endsWith "\n" then
let i ← read
out ("".pushn ' ' i)
endBlock : ReaderT Nat (StateM String) Unit := do
let s ← get
if s.endsWith "\n\n" then return
else if s.endsWith "\n" then out "\n"
else out "\n\n"
def formatMetadata : Formatter := do
visitArgs do
pushLine
visitAtom .anonymous
pushLine
Parser.metadataContents.formatter
pushLine
visitAtom .anonymous
def versoSyntaxToString (stx : Syntax) : String :=
versoSyntaxToString' stx |>.run 0 |>.run "" |>.2
public def document.formatter : Formatter := concat do
let stx ← getCur
let i := stx.getArgs.size
visitArgs do
for _ in [0:i] do
let blk ← getCur
if blk.getKind == ``Doc.Syntax.metadata_block then
formatMetadata
else
push (versoSyntaxToString blk)
goLeft

View File

@@ -85,11 +85,20 @@ Generates Markdown, rendering the result from the final state, without producing
public def MarkdownM.run' (act : MarkdownM Unit) (context : Context := {}) (state : State := {}) : String :=
act.run context state |>.2
private def MarkdownM.push (txt : String) : MarkdownM Unit := modify (·.push txt)
/--
Adds a string to the current Markdown output.
-/
public def MarkdownM.push (txt : String) : MarkdownM Unit := modify (·.push txt)
private def MarkdownM.endBlock : MarkdownM Unit := modify (·.endBlock)
/--
Terminates the current block.
-/
public def MarkdownM.endBlock : MarkdownM Unit := modify (·.endBlock)
private def MarkdownM.indent : MarkdownM α → MarkdownM α :=
/--
Increases the indentation level by one.
-/
public def MarkdownM.indent : MarkdownM α → MarkdownM α :=
withReader fun st => { st with linePrefix := st.linePrefix ++ " " }
/--
@@ -159,6 +168,41 @@ private def quoteCode (str : String) : String := Id.run do
let str := if str.startsWith "`" || str.endsWith "`" then " " ++ str ++ " " else str
backticks ++ str ++ backticks
private partial def trimLeft (inline : Inline i) : (String × Inline i) := go [inline]
where
go : List (Inline i) → String × Inline i
| [] => ("", .empty)
| .text s :: more =>
if s.all (·.isWhitespace) then
let (pre, post) := go more
(s ++ pre, post)
else
let s1 := s.takeWhile (·.isWhitespace)
let s2 := s.drop s1.length
(s1, .text s2 ++ .concat more.toArray)
| .concat xs :: more => go (xs.toList ++ more)
| here :: more => ("", here ++ .concat more.toArray)
private partial def trimRight (inline : Inline i) : (Inline i × String) := go [inline]
where
go : List (Inline i) → Inline i × String
| [] => (.empty, "")
| .text s :: more =>
if s.all (·.isWhitespace) then
let (pre, post) := go more
(pre, post ++ s)
else
let s1 := s.takeRightWhile (·.isWhitespace)
let s2 := s.dropRight s1.length
(.concat more.toArray.reverse ++ .text s2, s1)
| .concat xs :: more => go (xs.reverse.toList ++ more)
| here :: more => (.concat more.toArray.reverse ++ here, "")
private def trim (inline : Inline i) : (String × Inline i × String) :=
let (pre, more) := trimLeft inline
let (mid, post) := trimRight more
(pre, mid, post)
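The trim helpers exist because Markdown emphasis delimiters may not abut whitespace: `* word *` is not emphasis, while ` *word* ` is. A sketch of the intended behavior (the exact `Inline` shape of the middle component may differ from the plain `.text` shown):

```lean
-- Leading/trailing whitespace is split out so the renderer can push it
-- outside the `*`/`**` markers:
--
--   trim (.text " word ")  -- ≈ (" ", .text "word", " ")
--
-- so `.emph #[.text " word "]` renders as ` *word* `, not `* word *`.
```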
open MarkdownM in
private partial def inlineMarkdown [MarkdownInline i] : Inline i → MarkdownM Unit
| .text s =>
@@ -166,19 +210,25 @@ private partial def inlineMarkdown [MarkdownInline i] : Inline i → MarkdownM U
| .linebreak s => do
push <| s.replace "\n" ("\n" ++ (← read).linePrefix)
| .emph xs => do
let (pre, mid, post) := trim (.concat xs)
push pre
unless (← read).inEmph do
push "*"
withReader (fun ρ => { ρ with inEmph := true }) do
for i in xs do inlineMarkdown i
inlineMarkdown mid
unless (← read).inEmph do
push "*"
push post
| .bold xs => do
let (pre, mid, post) := trim (.concat xs)
push pre
unless (← read).inBold do
push "**"
withReader (fun ρ => { ρ with inBold := true }) do
for i in xs do inlineMarkdown i
inlineMarkdown mid
unless (← read).inBold do
push "**"
push post
| .concat xs =>
for i in xs do inlineMarkdown i
| .link content url => do

View File

@@ -7,6 +7,8 @@ module
prelude
public import Lean.Parser.Types
public import Lean.DocString.Syntax
import Lean.PrettyPrinter.Formatter
import Lean.Parser.Term.Basic
set_option linter.missingDocs true
@@ -231,6 +233,16 @@ public def fakeAtom (str : String) (info : SourceInfo := SourceInfo.none) : Pars
let atom := .atom info str
s.pushSyntax atom
/--
Construct a “fake” atom with the given string content, with zero-width source information at the
current position.
Normally, atoms are always substrings of the original input; however, Verso's concrete syntax is
different enough from Lean's that this isn't always a good match.
-/
private def fakeAtomHere (str : String) : ParserFn :=
withInfoSyntaxFn skip.fn (fun info => fakeAtom str (info := info))
private def pushMissing : ParserFn := fun _c s =>
s.pushSyntax .missing
@@ -586,7 +598,7 @@ A linebreak that isn't a block break (that is, there's non-space content on the
def linebreak (ctxt : InlineCtxt) : ParserFn :=
if ctxt.allowNewlines then
nodeFn ``linebreak <|
andthenFn (withInfoSyntaxFn skip.fn (fun info => fakeAtom "line!" info)) <|
andthenFn (fakeAtomHere "line!") <|
nodeFn strLitKind <|
asStringFn (quoted := true) <|
atomicFn (chFn '\n' >> lookaheadFn (manyFn (chFn ' ') >> notFollowedByFn (chFn '\n' <|> blockOpener) "newline"))
@@ -792,9 +804,14 @@ Parses a line of text (that is, one or more inline elements).
def textLine (allowNewlines := true) : ParserFn := many1Fn (inline { allowNewlines })
open Lean.Parser.Term in
def metadataContents : Parser :=
/-- A non-`meta` copy of `Lean.Doc.Syntax.metadataContents`. -/
@[run_builtin_parser_attribute_hooks]
public def metadataContents : Parser :=
structInstFields (sepByIndent structInstField ", " (allowTrailingSep := true))
def withPercents : ParserFn → ParserFn := fun p =>
adaptUncacheableContextFn (fun c => {c with tokens := c.tokens.insert "%%%" "%%%"}) p
open Lean.Parser.Term in
/--
Parses a metadata block, which contains the contents of a Lean structure initialization but is
@@ -803,8 +820,7 @@ surrounded by `%%%` on each side.
public def metadataBlock : ParserFn :=
nodeFn ``metadata_block <|
opener >>
metadataContents.fn >>
takeWhileFn (·.isWhitespace) >>
withPercents metadataContents.fn >>
closer
where
opener := atomicFn (bolThen (eatSpaces >> strFn "%%%") "%%% (at line beginning)") >> eatSpaces >> ignoreFn (chFn '\n')
@@ -956,35 +972,35 @@ mutual
nodeFn ``ul <|
lookaheadUnorderedListIndicator ctxt fun type =>
withCurrentColumn fun c =>
fakeAtom "ul{" >>
fakeAtomHere "ul{" >>
many1Fn (listItem {ctxt with minIndent := c + 1, inLists := ⟨c, .inr type⟩ :: ctxt.inLists}) >>
fakeAtom "}"
fakeAtomHere "}"
/-- Parses an ordered list. -/
public partial def orderedList (ctxt : BlockCtxt) : ParserFn :=
nodeFn ``ol <|
fakeAtom "ol(" >>
fakeAtomHere "ol(" >>
lookaheadOrderedListIndicator ctxt fun type _start => -- TODO? Validate list numbering?
withCurrentColumn fun c =>
fakeAtom ")" >> fakeAtom "{" >>
fakeAtomHere ")" >> fakeAtomHere "{" >>
many1Fn (listItem {ctxt with minIndent := c + 1, inLists := ⟨c, .inl type⟩ :: ctxt.inLists}) >>
fakeAtom "}"
fakeAtomHere "}"
/-- Parses a definition list. -/
public partial def definitionList (ctxt : BlockCtxt) : ParserFn :=
nodeFn ``dl <|
atomicFn (onlyBlockOpeners >> takeWhileFn (· == ' ') >> ignoreFn (lookaheadFn (chFn ':' >> chFn ' ')) >> guardMinColumn ctxt.minIndent) >>
withInfoSyntaxFn skip.fn (fun info => fakeAtom "dl{" info) >>
fakeAtomHere "dl{" >>
withCurrentColumn (fun c => many1Fn (descItem {ctxt with minIndent := c})) >>
withInfoSyntaxFn skip.fn (fun info => fakeAtom "}" info)
fakeAtomHere "}"
/-- Parses a paragraph (that is, a sequence of otherwise-undecorated inlines). -/
public partial def para (ctxt : BlockCtxt) : ParserFn :=
nodeFn ``para <|
atomicFn (takeWhileFn (· == ' ') >> notFollowedByFn blockOpener "block opener" >> guardMinColumn ctxt.minIndent) >>
withInfoSyntaxFn skip.fn (fun info => fakeAtom "para{" (info := info)) >>
fakeAtomHere "para{" >>
textLine >>
withInfoSyntaxFn skip.fn (fun info => fakeAtom "}" (info := info))
fakeAtomHere "}"
/-- Parses a header. -/
public partial def header (ctxt : BlockCtxt) : ParserFn :=
@@ -999,7 +1015,7 @@ mutual
fakeAtom ")") >>
fakeAtom "{" >>
textLine (allowNewlines := false) >>
fakeAtom "}"
fakeAtomHere "}"
/--
Parses a code block. The resulting string literal has already had the fences' leading indentation
@@ -1170,3 +1186,56 @@ mutual
-/
public partial def document (blockContext : BlockCtxt := {}) : ParserFn := ignoreFn (manyFn blankLine) >> blocks blockContext
end
section
open Lean.PrettyPrinter
/--
Parses as `ifVerso` if the option `doc.verso` is `true`, or as `ifNotVerso` otherwise.
-/
public def ifVersoFn (ifVerso ifNotVerso : ParserFn) : ParserFn := fun c s =>
if c.options.getBool `doc.verso then ifVerso c s
else ifNotVerso c s
@[inherit_doc ifVersoFn]
public def ifVerso (ifVerso ifNotVerso : Parser) : Parser where
fn :=
ifVersoFn ifVerso.fn ifNotVerso.fn
/--
Formatter for `ifVerso`—formats according to the underlying formatters.
-/
@[combinator_formatter ifVerso, expose]
public def ifVerso.formatter (f1 f2 : Formatter) : Formatter := f1 <|> f2
/--
Parenthesizer for `ifVerso`—parenthesizes according to the underlying parenthesizers.
-/
@[combinator_parenthesizer ifVerso, expose]
public def ifVerso.parenthesizer (p1 p2 : Parenthesizer) : Parenthesizer := p1 <|> p2
/--
Disables the option `doc.verso` while running a parser.
-/
public def withoutVersoSyntax (p : Parser) : Parser where
fn :=
adaptUncacheableContextFn
(fun c => { c with options := c.options.setBool `doc.verso false })
p.fn
info := p.info
/--
Formatter for `withoutVersoSyntax`—formats according to the underlying formatter.
-/
@[combinator_formatter withoutVersoSyntax, expose]
public def withoutVersoSyntax.formatter (p : Formatter) : Formatter := p
/--
Parenthesizer for `withoutVersoSyntax`—parenthesizes according to the underlying parenthesizer.
-/
@[combinator_parenthesizer withoutVersoSyntax, expose]
public def withoutVersoSyntax.parenthesizer (p : Parenthesizer) : Parenthesizer := p
end
builtin_initialize
register_parser_alias withoutVersoSyntax
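As a sketch of how these combinators compose (the parser name `versoOnlyExample` and the keyword choices are hypothetical, not part of this change), a parser that accepts one keyword when `doc.verso` is set and another otherwise might be written:

```
-- Hypothetical sketch: parse `verso!` when `doc.verso` is set, else `plain!`.
def versoOnlyExample : Parser :=
  ifVerso (symbol "verso!") (symbol "plain!")
```

The `combinator_formatter` and `combinator_parenthesizer` attributes above ensure such a parser also round-trips through the pretty printer.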


@@ -7,14 +7,8 @@ Author: David Thrane Christiansen
module
prelude
import Init.Prelude
import Init.Notation
public import Lean.Parser.Types
import Lean.Syntax
import Lean.Parser.Extra
public import Lean.Parser.Term
meta import Lean.Parser.Term
public import Lean.Parser.Term.Basic
meta import Lean.Parser.Term.Basic
/-!
@@ -63,22 +57,28 @@ scoped syntax (name:=arg_num) num : arg_val
/-- Arguments -/
declare_syntax_cat doc_arg
/-- Anonymous positional arguments -/
/-- Anonymous positional argument -/
@[builtin_doc]
scoped syntax (name:=anon) arg_val : doc_arg
/-- Named arguments -/
/-- Named argument -/
@[builtin_doc]
scoped syntax (name:=named) "(" ident " := " arg_val ")": doc_arg
/-- Named arguments, without parentheses. -/
@[inherit_doc named, builtin_doc]
scoped syntax (name:=named_no_paren) ident " := " arg_val : doc_arg
/-- Boolean flags, turned on -/
/-- Boolean flag, turned on -/
@[builtin_doc]
scoped syntax (name:=flag_on) "+" ident : doc_arg
/-- Boolean flags, turned off -/
/-- Boolean flag, turned off -/
@[builtin_doc]
scoped syntax (name:=flag_off) "-" ident : doc_arg
/-- Link targets, which may be URLs or named references -/
declare_syntax_cat link_target
/-- A reference to a URL -/
/-- A URL target, written explicitly. Use square brackets for a named target. -/
@[builtin_doc]
scoped syntax (name:=url) "(" str ")" : link_target
/-- A named reference -/
/-- A named reference to a URL defined elsewhere. Use parentheses to write the URL here. -/
@[builtin_doc]
scoped syntax (name:=ref) "[" str "]" : link_target
/--
@@ -91,26 +91,87 @@ This syntax uses the following conventions:
-/
declare_syntax_cat inline
scoped syntax (name:=text) str : inline
/-- Emphasis (often rendered as italics) -/
/--
Emphasis, often rendered as italics.
Emphasis may be nested by using longer sequences of `_` for the outer delimiters. For example:
```
Remember: __always butter the _rugbrød_ before adding toppings!__
```
Here, the outer `__` is used to emphasize the instructions, while the inner `_` indicates the use of
a non-English word.
-/
@[builtin_doc]
scoped syntax (name:=emph) "_[" inline* "]" : inline
/-- Bold emphasis -/
/--
Bold emphasis.
A single `*` suffices to make text bold; use `_` for emphasis instead.
Bold text may be nested by using longer sequences of `*` for the outer delimiters.
-/
@[builtin_doc]
scoped syntax (name:=bold) "*[" inline* "]" : inline
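A plausible example of the surface syntax (the internal `*[ ... ]` form above is produced by the parser, not written by hand), with a longer outer delimiter enclosing a nested bold span:

```
**Always double-check *every* claim!**
```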
/-- Link -/
/--
A link. The link's target may either be a concrete URL (written in parentheses) or a named URL
(written in square brackets).
-/
@[builtin_doc]
scoped syntax (name:=link) "link[" inline* "]" link_target : inline
/-- Image -/
/--
An image, with alternate text and a URL.
The alternate text is a plain string, rather than Verso markup.
The image URL may either be a concrete URL (written in parentheses) or a named URL (written in
square brackets).
-/
@[builtin_doc]
scoped syntax (name:=image) "image(" str ")" link_target : inline
/-- A footnote use -/
/--
A footnote use site.
Footnotes must be defined elsewhere using the `[^NAME]: TEXT` syntax.
-/
@[builtin_doc]
scoped syntax (name:=footnote) "footnote(" str ")" : inline
/-- Line break -/
scoped syntax (name:=linebreak) "line!" str : inline
/-- Literal code. If the first and last characters are space, and it contains at least one non-space
character, then the resulting string has a single space stripped from each end.-/
/--
Literal code.
Code may begin with any non-zero number of backticks. It must be terminated with the same number,
and it may not contain a sequence of backticks that is at least as long as its starting or ending
delimiters.
If the first and last characters are space, and it contains at least one non-space character, then
the resulting string has a single space stripped from each end. Thus, ``` `` `x `` ``` represents
``"`x"``, not ``" `x "``.
-/
@[builtin_doc]
scoped syntax (name:=code) "code(" str ")" : inline
/-- A _role_: an extension to the Verso document language in an inline position -/
/--
A _role_: an extension to the Verso document language in an inline position.
Text is given a role using the following syntax: `{NAME ARGS*}[CONTENT]`. The `NAME` is an
identifier that determines which role is being used, akin to a function name. Each of the `ARGS` may
have the following forms:
* A value, which is a string literal, natural number, or identifier
* A named argument, of the form `(NAME := VALUE)`
* A flag, of the form `+NAME` or `-NAME`
The `CONTENT` is a sequence of inline content. If there is only one piece of content and it has
beginning and ending delimiters (e.g. code literals, links, or images, but not ordinary text), then
the `[` and `]` may be omitted. In particular, `` {NAME ARGS*}`x` `` is equivalent to
``{NAME ARGS*}[`x`]``.
-/
@[builtin_doc]
scoped syntax (name:=role) "role{" ident doc_arg* "}" "[" inline* "]" : inline
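For instance, assuming a role named `ref` that takes a named argument (both the role and its argument are hypothetical here), a use site could look like:

```
{ref (target := "intro")}[the introduction]
```

Because code literals are delimited, the bracket-free shorthand also applies: `` {ref}`x` `` is equivalent to ``{ref}[`x`]``.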
/-- Inline mathematical notation (equivalent to LaTeX's `$` notation) -/
@[builtin_doc]
scoped syntax (name:=inline_math) "\\math" code : inline
/-- Display-mode mathematical notation -/
@[builtin_doc]
scoped syntax (name:=display_math) "\\displaymath" code : inline
/--
@@ -132,40 +193,130 @@ declare_syntax_cat block
/-- Items from both ordered and unordered lists -/
declare_syntax_cat list_item
/-- List item -/
/-- A list item -/
@[builtin_doc]
syntax (name:=li) "*" block* : list_item
/-- A description of an item -/
declare_syntax_cat desc_item
/-- A description of an item -/
@[builtin_doc]
scoped syntax (name:=desc) ":" inline* "=>" block* : desc_item
/-- Paragraph -/
@[builtin_doc]
scoped syntax (name:=para) "para[" inline+ "]" : block
/-- Unordered List -/
@[builtin_doc]
scoped syntax (name:=ul) "ul{" list_item* "}" : block
/-- Definition list -/
/-- Description list -/
@[builtin_doc]
scoped syntax (name:=dl) "dl{" desc_item* "}" : block
/-- Ordered list -/
@[builtin_doc]
scoped syntax (name:=ol) "ol(" num ")" "{" list_item* "}" : block
/-- Literal code -/
/--
A code block that contains literal code.
Code blocks have the following syntax:
````
```(NAME ARGS*)?
CONTENT
```
````
`CONTENT` is a literal string. If the `CONTENT` contains a sequence of three or more backticks, then
the opening and closing ` ``` ` (called _fences_) should have more backticks than the longest
sequence in `CONTENT`. Additionally, the opening and closing fences should have the same number of
backticks.
If `NAME` and `ARGS` are not provided, then the code block represents literal text. If provided, the
`NAME` is an identifier that selects an interpretation of the block. Unlike Markdown, this name is
not necessarily the language in which the code is written, though many custom code blocks are, in
practice, named after the language that they contain. `NAME` is more akin to a function name. Each
of the `ARGS` may have the following forms:
* A value, which is a string literal, natural number, or identifier
* A named argument, of the form `(NAME := VALUE)`
* A flag, of the form `+NAME` or `-NAME`
The `CONTENT` is interpreted according to the indentation of the fences. If the fences are indented
`n` spaces, then `n` spaces are removed from the start of each line of `CONTENT`.
-/
@[builtin_doc]
scoped syntax (name:=codeblock) "```" (ident doc_arg*)? "|" str "```" : block
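For example, a code block selecting a hypothetical `python` interpretation and passing a flag (both names are illustrative, not defined by this change):

````
```python +checked
print("hello")
```
````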
/-- Quotation -/
/--
A quotation, which contains a sequence of blocks that are at least as indented as the `>`.
-/
@[builtin_doc]
scoped syntax (name:=blockquote) ">" block* : block
/-- A link reference definition -/
/--
A named URL that can be used in links and images.
-/
@[builtin_doc]
scoped syntax (name:=link_ref) "[" str "]:" str : block
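An illustrative definition of a named URL together with a link that uses it (the name and address are made up for the example):

```
[lean]: https://lean-lang.org/

See the [Lean website][lean] for details.
```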
/-- A footnote definition -/
/--
A footnote definition.
-/
@[builtin_doc]
scoped syntax (name:=footnote_ref) "[^" str "]:" inline* : block
/-- Custom directive -/
/--
A _directive_, which is an extension to the Verso language in block position.
Directives have the following syntax:
```
:::NAME ARGS*
CONTENT*
:::
```
The `NAME` is an identifier that determines which directive is being used, akin to a function name.
Each of the `ARGS` may have the following forms:
* A value, which is a string literal, natural number, or identifier
* A named argument, of the form `(NAME := VALUE)`
* A flag, of the form `+NAME` or `-NAME`
The `CONTENT` is a sequence of block content. Directives may be nested by using more colons in
the outer directive. For example:
```
::::outer +flag (arg := 5)
A paragraph.
:::inner "label"
* 1
* 2
:::
::::
```
-/
@[builtin_doc]
scoped syntax (name:=directive) ":::" rawIdent doc_arg* "{" block:max* "}" : block
/-- A header -/
/--
A header.
Headers must be correctly nested to form a tree structure. The first header in a document must
start with `#`, and subsequent headers must have at most one more `#` than the preceding header.
-/
@[builtin_doc]
scoped syntax (name:=header) "header(" num ")" "{" inline+ "}" : block
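For example, the nesting rule permits the following sequence of headers, where each header has at most one more `#` than the one before it:

```
# Title
## Section
### Subsection
## Another section
```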
open Lean.Parser Term in
meta def metadataContents : Parser :=
meta def metadataContents : Lean.Parser.Parser :=
structInstFields (sepByIndent structInstField ", " (allowTrailingSep := true))
/-- Metadata for this section, defined by the current genre -/
/--
Metadata for the preceding header.
-/
@[builtin_doc]
scoped syntax (name:=metadata_block) "%%%" metadataContents "%%%" : block
/-- A block-level command -/
/--
A block-level command, which invokes an extension during documentation processing.
The `NAME` is an identifier that determines which command is being used, akin to a function name.
Each of the `ARGS` may have the following forms:
* A value, which is a string literal, natural number, or identifier
* A named argument, of the form `(NAME := VALUE)`
* A flag, of the form `+NAME` or `-NAME`
-/
@[builtin_doc]
scoped syntax (name:=command) "command{" rawIdent doc_arg* "}" : block


@@ -10,6 +10,7 @@ prelude
public import Init.Data.Repr
public import Init.Data.Ord
import Init.Data.Nat.Compare
set_option linter.missingDocs true


@@ -23,16 +23,23 @@ public section
namespace Lean.Elab.Command
@[builtin_command_elab moduleDoc] def elabModuleDoc : CommandElab := fun stx => do
let some range ← Elab.getDeclarationRange? stx
| return -- must be from partial syntax, ignore
match stx[1] with
| Syntax.atom _ val =>
let doc := val.extract 0 (val.endPos - 2)
let some range ← Elab.getDeclarationRange? stx
| return -- must be from partial syntax, ignore
modifyEnv fun env => addMainModuleDoc env ⟨doc, range⟩
| _ => throwErrorAt stx "unexpected module doc string{indentD stx[1]}"
if getVersoModuleDocs (← getEnv) |>.isEmpty then
let doc := val.extract 0 (val.endPos - 2)
modifyEnv fun env => addMainModuleDoc env ⟨doc, range⟩
else
throwError m!"Can't add Markdown-format module docs because there is already Verso-format content present."
| Syntax.node _ ``Lean.Parser.Command.versoCommentBody args =>
runTermElabM fun _ => do
addVersoModDocString range args.getD 0 .missing
| _ => throwErrorAt stx "unexpected module doc string{indentD <| stx}"
private def addScope (isNewNamespace : Bool) (header : String) (newNamespace : Name)
(isNoncomputable isPublic : Bool := false) (attrs : List (TSyntax ``Parser.Term.attrInstance) := []) :
(isNoncomputable isPublic isMeta : Bool := false) (attrs : List (TSyntax ``Parser.Term.attrInstance) := []) :
CommandElabM Unit := do
modify fun s => { s with
env := s.env.registerNamespace newNamespace,
@@ -40,6 +47,7 @@ private def addScope (isNewNamespace : Bool) (header : String) (newNamespace : N
header := header, currNamespace := newNamespace
isNoncomputable := s.scopes.head!.isNoncomputable || isNoncomputable
isPublic := s.scopes.head!.isPublic || isPublic
isMeta := s.scopes.head!.isMeta || isMeta
attrs := s.scopes.head!.attrs ++ attrs
} :: s.scopes
}
@@ -47,7 +55,7 @@ private def addScope (isNewNamespace : Bool) (header : String) (newNamespace : N
if isNewNamespace then
activateScoped newNamespace
private def addScopes (header : Name) (isNewNamespace : Bool) (isNoncomputable isPublic : Bool := false)
private def addScopes (header : Name) (isNewNamespace : Bool) (isNoncomputable isPublic isMeta : Bool := false)
(attrs : List (TSyntax ``Parser.Term.attrInstance) := []) : CommandElabM Unit :=
go header
where go
@@ -55,7 +63,7 @@ where go
| .str p header => do
go p
let currNamespace ← getCurrNamespace
addScope isNewNamespace header (if isNewNamespace then Name.mkStr currNamespace header else currNamespace) isNoncomputable isPublic attrs
addScope isNewNamespace header (if isNewNamespace then Name.mkStr currNamespace header else currNamespace) isNoncomputable isPublic isMeta attrs
| _ => throwError "invalid scope"
private def addNamespace (header : Name) : CommandElabM Unit :=
@@ -92,16 +100,16 @@ private def checkEndHeader : Name → List Scope → Option Name
@[builtin_command_elab «section»] def elabSection : CommandElab := fun stx => do
match stx with
| `(Parser.Command.section| $[@[expose%$expTk]]? $[public%$publicTk]? $[noncomputable%$ncTk]? section $(header?)?) =>
| `(Parser.Command.section| $[@[expose%$expTk]]? $[public%$publicTk]? $[noncomputable%$ncTk]? $[meta%$metaTk]? section $(header?)?) =>
-- TODO: allow more attributes?
let attrs ← if expTk.isSome then
pure [← `(Parser.Term.attrInstance| expose)]
else
pure []
if let some header := header? then
addScopes (isNewNamespace := false) (isNoncomputable := ncTk.isSome) (isPublic := publicTk.isSome) (attrs := attrs) header.getId
addScopes (isNewNamespace := false) (isNoncomputable := ncTk.isSome) (isPublic := publicTk.isSome) (isMeta := metaTk.isSome) (attrs := attrs) header.getId
else
addScope (isNewNamespace := false) (isNoncomputable := ncTk.isSome) (isPublic := publicTk.isSome) (attrs := attrs) "" (← getCurrNamespace)
addScope (isNewNamespace := false) (isNoncomputable := ncTk.isSome) (isPublic := publicTk.isSome) (isMeta := metaTk.isSome) (attrs := attrs) "" (← getCurrNamespace)
| _ => throwUnsupportedSyntax
@[builtin_command_elab InternalSyntax.end_local_scope] def elabEndLocalScope : CommandElab := fun _ => do

Some files were not shown because too many files have changed in this diff.