Compare commits

...

58 Commits

Author SHA1 Message Date
Sebastian Ullrich
39ab6c3343 chore: CI: fix msys2 2024-07-29 10:19:14 +02:00
Kim Morrison
83ad82162f feat: upstream more List lemmas (#4856) 2024-07-28 23:23:59 +00:00
arthur-adjedj
93ac635a89 chore: fix parenthesizing in test 2024-07-29 08:58:49 +10:00
Kim Morrison
642c28cdbb chore: update stage0 2024-07-29 08:58:49 +10:00
arthur-adjedj
a04f3cab5a fix: reduction behaviour of derived BEq instances
fix: forgot an assignment
2024-07-29 08:58:49 +10:00
arthur-adjedj
86af04cc08 fix: handle dependent fields when deriving BEq 2024-07-29 08:58:49 +10:00
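The kind of declaration this concerns can be sketched as follows (hypothetical example, not taken from the PR; `Sized` is an invented name): a structure where a later field's type depends on an earlier field.

```lean
-- Hypothetical: `val`'s type depends on the earlier field `n`, so a
-- derived `BEq` must compare `n` first and only then compare `val`
-- at the now-known-equal type.
structure Sized where
  n : Nat
  val : Fin n
deriving BEq
```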
arthur-adjedj
7253ef8751 chore: inverse rhs construction order in BEq's handler 2024-07-29 08:58:49 +10:00
Joachim Breitner
f830fc9f4d refactor: IndPredBelow: use apply_assumption (#4841)
When transforming the `match` statements in `IndPredBelow`, given a
local variable `x : T`, we need to search for `hx : T.below x`.
Previously this was done using the custom `backwardsChaining` method,
but my hypothesis is that we don't need to chain anything here and can
use `apply_assumption` instead.
2024-07-28 17:22:54 +00:00
Joachim Breitner
671ce7afd3 fix: IndPred: track function's motive in a let binding, use withoutProofIrrelevance, no chaining (#4839)
This improves support for structural recursion over inductive
*predicates* when there are reflexive arguments.

Consider
```lean
inductive F: Prop where
  | base
  | step (fn: Nat → F)

-- set_option trace.Meta.IndPredBelow.search true
set_option pp.proofs true

def F.asdf1 : (f : F) → True
  | base => trivial
  | step f => F.asdf1 (f 0)
termination_by structural f => f
```

Previously the search for the right induction hypothesis would fail with
```
could not solve using backwards chaining x✝¹ : F
x✝ : x✝¹.below
f : Nat → F
a✝¹ : ∀ (a : Nat), (f a).below
a✝ : Nat → True
⊢ True
```

The backchaining process will try to use `a✝ : Nat → True`, but then has
no idea what to use for `Nat`.

There are three steps here to fix this.

1. We let-bind the function's type before the whole process. Now the
   goal is

   ```
   funType : F → Prop := fun x => True
   x✝ : x✝¹.below
   f : Nat → F
   a✝¹ : ∀ (a : Nat), (f a).below
   a✝ : ∀ (a : Nat), funType (f a)
   ⊢ funType (f 0)
   ```
2. Instead of using the general-purpose backchaining proof search, which
   is more powerful than we need here (we need no recursive search and
   no backtracking), we have a custom search that looks for local
   assumptions that provide evidence of `funType`, and extracts the
   arguments from that “type” application to construct the recursive call.

   Above, it will thus unify `f a =?= f 0`.

3. In order to make progress here, we also turn on
   `withoutProofIrrelevance`, because otherwise `isDefEq` is happy to say
   “they are equal” without actually looking at the terms and thus
   without assigning `?a := 0`.

This idea of let-binding the function's motive may also be useful for
the other recursion compilers, as it may simplify the FunInd
construction. This is to be investigated.

fixes #4751
2024-07-28 17:22:27 +00:00
Kim Morrison
87c92a3f87 feat: upstream more List operations (#4855)
Upstreams `tail`, `findIdx`, `indexOf`, `countP`, `count`, `range'` from
Batteries.
2024-07-28 04:52:21 +00:00
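As a rough illustration of two of the upstreamed operations (semantics as in Batteries; this snippet is not from the PR):

```lean
-- `count` counts occurrences of an element; `countP` counts elements
-- satisfying a Boolean predicate.
#eval [1, 2, 1, 3].count 1                           -- 2
#eval [1, 2, 1, 3].countP (fun x => decide (x > 1))  -- 2
```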
Ikko Eltociear Ashimine
15bf41cd67 chore: update Topological.lean (#4853)
minor fix
2024-07-27 18:14:42 +00:00
Kyle Miller
906bc583c5 fix: handle unimported builtin names for location links (#4780)
The function `locationLinksFromDecl` could throw an error if the name it
is provided doesn't exist in the environment, which is possible if for
example an elaborator is a builtin.

Closes #3789
2024-07-27 17:39:39 +00:00
Kim Morrison
ea43ebd54a chore: cleanups for Mathlib.Init (#4852)
It is convenient to adjust some attributes here, to allow easier cleanup
in `Mathlib.Init`.
2024-07-27 07:37:17 +00:00
Siddharth
bb9c9bd99f feat: bitVec shiftLeft recurrences for bitblasting (#4571)
```lean 
@[simp]
theorem shiftLeftRec_zero (x : BitVec w₁) (y : BitVec w₂) :
    shiftLeftRec x y 0 = x <<< (y &&& twoPow w₂ 0)  := by
  simp [shiftLeftRec]

@[simp]
theorem shiftLeftRec_succ (x : BitVec w₁) (y : BitVec w₂) :
    shiftLeftRec x y (n + 1) =
      (shiftLeftRec x y n) <<< (y &&& twoPow w₂ (n + 1)) := by
  simp [shiftLeftRec]

theorem shiftLeftRec_eq (x : BitVec w₁) (y : BitVec w₂) (n : Nat) (hn : n + 1 ≤ w₂) :
  shiftLeftRec x y n = x <<< (y.truncate (n + 1)).zeroExtend w₂ := by
```

These theorems are used for bitblasting shiftLeft in LeanSAT.

---------

Co-authored-by: Alex Keizer <alex@keizer.dev>
Co-authored-by: Kim Morrison <scott@tqft.net>
Co-authored-by: Tobias Grosser <github@grosser.es>
Co-authored-by: Tobias Grosser <tobias@grosser.es>
2024-07-27 06:36:52 +00:00
Mac Malone
fe5894f2f6 feat: lake: CLI options to control output & failure log levels (#4847)
Adds the `--log-level=<lv>` CLI option for controlling the minimum log
level Lake should output. For instance, `--log-level=error` will only
print errors (not warnings or info).

Also, adds the parallel `--fail-level` CLI option to control the
minimum log level of build failures. The existing `--iofail` and
`--wfail` options are equivalent to `--fail-level=info` and
`--fail-level=warning`, respectively.

Closes #4805.
2024-07-27 06:16:59 +00:00
Mac Malone
3ecbf4ae2d doc: lake: require @ git in README (#4849)
Demonstrates `require @ git` in Lake's README and tweaks related
documentation.
2024-07-27 02:44:50 +00:00
Sebastian Ullrich
9b342efb84 fix: calling programs with spaces on Windows (#4515)
This entire API has been carefully optimized for maximum pain output

---------

Co-authored-by: Mac Malone <tydeu@hatpress.net>
2024-07-26 17:35:05 +00:00
Sebastian Ullrich
c02aa98c6a doc: triage 2024-07-26 18:24:06 +02:00
Kim Morrison
18ba5f24e1 chore: correct List.Subset lemma names (#4843) 2024-07-26 11:36:28 +00:00
Kim Morrison
cbe39dc4bb chore: fix List deprecations (#4842) 2024-07-26 11:32:18 +00:00
Joachim Breitner
a5b8d5b486 chore: upon nightly release, trigger nightly_bump_toolchain on mathlib4 (#4838)
as discussed at
https://leanprover.zulipchat.com/#narrow/stream/428973-nightly-testing/topic/Bumping.20more.20often/near/453976634
2024-07-26 06:51:19 +00:00
Kim Morrison
895391b73f feat: List.IsPrefix/IsSuffix is decidable (#4837) 2024-07-26 05:17:41 +00:00
Kim Morrison
e280de00b6 feat: gaps/cleanup in List lemmas (#4835) 2024-07-26 05:00:50 +00:00
Kim Morrison
8c87a90cea chore: upstream IsPrefix/IsSuffix/IsInfix (#4836)
Further lemmas to follow; this is the basic material from Batteries.
2024-07-26 04:35:36 +00:00
Joachim Breitner
54c22efca1 fix: structural recursion: do not check for brecOn too early (#4831)
Due to nested recursion, we do two passes of `getRecArgInfo`: one on
each argument in isolation, to see which inductive types are around
(e.g. `Tree` and `List`), and then we later refine/replace this result
with the data for the nested type former (the implicit `ListTree`).

If we have nested recursion through a non-recursive data type like
`Array` or `Prod`, then arguments of these types should survive the
first phase, so that we can still use them when looking for, say,
`Array Tree`.

This was helpfully reported by @arthur-adjedj.
2024-07-25 15:25:34 +00:00
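The shapes in question can be sketched like this (hypothetical types, not from the PR): the recursive occurrence is nested inside a container, so container-typed arguments must still be available when searching for the nested type former.

```lean
-- Nested recursion through the recursive container `List` ...
inductive Tree where
  | node : List Tree → Tree

-- ... and through the non-recursive containers `Array` and `Prod`,
-- where arguments of type `Array (Tree' × Nat)` must survive the
-- first `getRecArgInfo` pass.
inductive Tree' where
  | node : Array (Tree' × Nat) → Tree'
```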
Joachim Breitner
d4f2db9559 chore: report github actions failure on zulip (#4830)
Only for the `master` branch.
2024-07-25 11:15:33 +00:00
Joachim Breitner
39e0b41fe1 test: make #1697 test case Linux-Debug safe (#4829) 2024-07-25 10:26:01 +00:00
Marc Huisinga
84f8871c3f fix: filter duplicate subexpressions (#4786)
For every parenthesized expression `(foo)`, the InfoView produces an
interactive component both for `(foo)` itself and its subexpression
`foo` because the corresponding `TaggedText` in the language server is
duplicated as well. Both of these subexpressions have the same
subexpression position and so they are identical w.r.t. interactive
features.

Removing this duplication would help reduce the size of the DOM of the
InfoView and ensure that the UI for InfoView features is consistent for
`(foo)` and `foo` (e.g. hovers would always highlight `(foo)`, not
either `(foo)` or `foo` depending on whether the mouse cursor is on the
bracket or not). It would also help resolve a bug where selecting a
subexpression will yield selection highlighting both for `(foo)` and
`foo`, as we use the subexpression position to identify which terms to
highlight.

This PR adjusts the parenthesizer to move the corresponding info instead
of duplicating it.
2024-07-25 08:58:49 +00:00
Austin Letson
93fa9c8837 feat: create ci workflow on lake new/init (#4608)
Draft of adding ci workflow using lean-action on `lake new/init`

This PR is currently missing lake options for the user to control this
feature.

Closes #4606

---------

Co-authored-by: Mac Malone <tydeu@hatpress.net>
2024-07-25 01:23:54 +00:00
Sebastian Ullrich
0768ad4eb9 chore: CI: Jira sync 2024-07-24 19:52:55 +02:00
Kyle Miller
c545e7b0c9 fix: make sure anonymous dot notation works with pi-type-valued type synonyms (#4818)
When resolving anonymous dot notation (`.ident x y z`), it would reduce
the expected type to whnf. Now, it unfolds definitions step by step,
even when the type synonym is for a pi type, as in:
```lean
def Foo : Prop := ∀ a : Nat, a = a
protected theorem Foo.intro : Foo := sorry
example : Foo := .intro
```

Closes #4761
2024-07-24 17:09:42 +00:00
Sebastian Ullrich
7b3c64fc85 feat: trailing whitespace changes should not invalidate imports (#4580)
Thus, starting to type the first declaration after the imports should
not make them reload.
2024-07-24 13:08:01 +00:00
Sebastian Ullrich
af0b563099 feat: respond to info view requests as soon as relevant tactic has finished execution (#4727)
After each tactic step, we save the info tree created by it together
with an appropriate info tree context that makes it stand-alone (which
we already did before to some degree, see `Info.updateContext?`). Then,
in the adjusted request handlers, we first search for a snapshot task
containing the required position, if so wait on it, and if it yielded an
info tree, use it to answer the request, or else continue searching and
waiting, falling back to the full info tree, which should be unchanged
by this PR.

The definition header does *not* report info trees early as in general
it is not stand-alone in the tactic sense but may contain e.g.
metavariables solved by the body in which case we do want to show the
ultimate state as before. This could be refined in the future in case
there are no unsolved mvars.

The adjusted request handlers are exactly the ones waited on together by
the info view, so they all have to be adjusted to have any effect on the
UX. Further request handlers may be adjusted in the future.

No new tests as "replies early" is not something we can test with our
current framework but the existing test suite did help in uncovering
functional regressions.
2024-07-24 13:02:13 +00:00
Sebastian Ullrich
af40e61811 chore: typo 2024-07-24 15:11:54 +02:00
Kim Morrison
1758b37a71 chore: List.filterMapM runs and returns left-to-right (#4820)
Closes #4676. Previously `List.filterMapM` was returning results
left-to-right, but evaluating right-to-left.
2024-07-24 09:00:10 +00:00
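A sketch of the behavior being fixed (hypothetical helper, using `IO` to observe effect order):

```lean
-- After this change, `filterMapM` both runs its effects and returns
-- its results left-to-right.
def keepEven (n : Nat) : IO (Option Nat) := do
  IO.println s!"visiting {n}"
  pure (if n % 2 == 0 then some n else none)

#eval [1, 2, 3, 4].filterMapM keepEven
-- visits 1, 2, 3, 4 in that order and returns [2, 4]
```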
Joachim Breitner
3701bee777 test: test case for #4751 (#4819)
and tracing for `IndPredBelow.backwardsChaining`.
2024-07-24 08:14:25 +00:00
Sebastian Ullrich
6d971827e2 chore: CI: add back dropped check-stage3 2024-07-24 09:03:11 +02:00
Joachim Breitner
871c9b4164 test: update test output following stage0 update (#4815)
This is a consequence of #4807 that only shows up once that change made
it to stage0, it seems.
2024-07-23 21:43:38 +00:00
Lean stage0 autoupdater
ee6737ab4d chore: update stage0 2024-07-23 16:14:57 +00:00
Joachim Breitner
7d60d8b563 feat: safer #eval, and #eval! (#4810)
Previously, `#eval` would happily evaluate expressions that contain
`sorry`, either explicitly or because of failing tactics. In conjunction
with operations like array access this can lead to the Lean process
crashing, which isn't particularly great.

So now `#eval` will refuse to run code that (transitively) depends on
the `sorry` axiom (using the same code as `#print axioms`).

If the user really wants to run it, they can use `#eval!`.

Closes #1697
2024-07-23 15:26:56 +00:00
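A minimal sketch of the new behavior (kept commented out, since the point is that the first command is now rejected):

```lean
def risky : Nat := sorry

-- #eval risky    -- now refused: `risky` depends on the `sorry` axiom
-- #eval! risky   -- runs anyway, at the user's own risk
```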
Joachim Breitner
a4673e20a5 chore: release notes for mutual structural induction (#4808) 2024-07-23 07:40:29 +00:00
Marc Huisinga
b2ee8c240d doc: update quickstart guide (#4806)
This PR updates the screenshots and instructions in the quickstart guide
for the most recent Lean 4 VS Code extension version and makes a small
stylistic change suggested by @semorrison.
2024-07-23 07:31:21 +00:00
Markus Himmel
5d632a97b8 feat: more hash map lemmas (#4803) 2024-07-23 06:57:44 +00:00
Kyle Miller
5938dbbd14 fix: make elab_as_elim eagerly elaborate arguments for parameters appearing in the types of targets (#4800)
The `elab_as_elim` elaborator eagerly elaborates arguments that can help
with elaborating the motive, however it does not include the transitive
closure of parameters appearing in types of parameters appearing in ...
types of targets.

This leads to counter-intuitive behavior where arguments supplied to the
eliminator may unexpectedly have postponed elaboration, causing motives
to be type incorrect for under-applied eliminators such as the
following:

```lean
class IsEmpty (α : Sort u) : Prop where
  protected false : α → False

@[elab_as_elim]
def isEmptyElim [IsEmpty α] {p : α → Sort _} (a : α) : p a :=
  (IsEmpty.false a).elim

example {α : Type _} [IsEmpty α] :
  id (α → False) := isEmptyElim (α := α)
```

The issue is that when `isEmptyElim (α := α)` is computing its motive,
the value of the postponed argument `α` is still an unassignable
metavariable. With this PR, this argument is now among those that are
eagerly elaborated since it appears as the type of the target `a`.

This PR also contains some other fixes:
* When underapplied, does unification when instantiating foralls in the
expected type.
* When overapplied, type checks the generalized-and-reverted expected
type.
* When collecting targets, collects them in the correct order.

Adds trace class `trace.Elab.app.elab_as_elim`.

This is a followup to #4722, which added motive type checking but
exposed the eagerness issue.
2024-07-22 23:23:28 +00:00
grunweg
852add3e55 doc: document Command.Scope (#4748)
Also extends existing definition for `getScope`/`getScopes` and
clarifies that the `end` command is optional at the end of a file.

---------

Co-authored-by: Kyle Miller <kmill31415@gmail.com>
2024-07-22 21:55:37 +00:00
Joachim Breitner
20c857147c feat: unnecessary termination_by clauses cause warnings, not errors (#4809)
fixes #4804
2024-07-22 20:52:14 +00:00
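A sketch of the kind of clause affected (hypothetical example, in the spirit of the linked issue; the exact trigger conditions are as described in #4804):

```lean
-- If Lean can compile `ident` without the measure given below, the
-- clause is unnecessary; after this change that is reported as a
-- warning rather than an error.
def ident : Nat → Nat
  | 0 => 0
  | n + 1 => ident n + 1
termination_by n => n
```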
Joachim Breitner
9f1eb479b0 feat: functional induction for mutual structural recursion (#4772) 2024-07-22 15:10:11 +00:00
Markus Himmel
92cca5ed1b chore: remove bif from hash map lemmas (#4791)
The original idea was to use `bif` in computation contexts and `if` in
propositional contexts, but this turned out to be really inconvenient in
practice.
2024-07-22 14:39:00 +00:00
Joachim Breitner
3a4d2cded3 refactor: Introduce PProdN module (#4807)
code to create nested `PProd`s, and project out, and related functions
were scattered in variuos places. This unifies them in
`Lean.Meta.PProdN`.

It also consistently avoids the terminal `True` or `PUnit`, for slightly
easier to read constructions.
2024-07-22 11:56:50 +00:00
Joachim Breitner
22ae04f3e7 refactor: FunInd overhaul (#4789)
This refactoring PR changes the structure of the `FunInd` module, with
the main purpose to make it easier to support mutual structural
recursion.

In particular, recursive calls are no longer recognized by their terms
(simple for well-founded recursion, `.app oldIH [arg, proof]`, but
tedious for structural recursion and even more so for mutual structural
recursion), but by their type after replacing `oldIH` with `newIH`,
which will be simply and plainly `mkAppN motive args`.

We also no longer try to guess whether we deal with well-founded or
structural recursion but instead rely on the `EqnInfo` environment
extensions. The previous code tried to handle both variants, but they
differ too much, so having separate top-level functions is easier.

This also fuses the `foldCalls` and `collectIHs` traversals and
introduces a suitable monad for collecting the inductive hypotheses.
2024-07-21 14:46:52 +00:00
François G. Dorais
99f362979b feat: use usize for array types (#4802)
This is part 2 of 2 of #4801 (which closes #4654). That PR was split in
two to allow a stage0 update between declaring the `usize` functions and
using them where they are needed.
2024-07-21 12:26:04 +00:00
Lean stage0 autoupdater
3a309f7691 chore: update stage0 2024-07-21 11:05:36 +00:00
François G. Dorais
8f0631ab1f feat: usize for array types (#4801)
Add efficient `usize` functions for `Array`, `ByteArray`, `FloatArray`.

This is part 1 of 2 since there is a need to update stage0 between the
two parts. (See discussion below.)

Closes #4654
2024-07-21 10:23:49 +00:00
Markus Himmel
08acf5a136 fix: remove typeclass assumptions for Nodup.eraseP (#4790)
The `α` in the typeclass assumptions wasn't even the element type of the
list.
2024-07-21 07:51:42 +00:00
Markus Himmel
bdfaa00b1e doc: remove reference to HashMap.find? from Option docstring (#4782) 2024-07-21 07:51:06 +00:00
Leonardo de Moura
8ceb24a5e6 perf: Expr.replace (#4799)
use the kernel implementation.
2024-07-20 04:53:43 +00:00
Lean stage0 autoupdater
201749ccac chore: update stage0 2024-07-20 04:31:11 +00:00
Leonardo de Moura
d0bc4e4245 fix: replace_fn.cpp (#4798) 2024-07-19 21:20:43 -07:00
523 changed files with 5744 additions and 2051 deletions


@@ -298,8 +298,8 @@ jobs:
uses: msys2/setup-msys2@v2
with:
msystem: clang64
# `:p` means prefix with appropriate msystem prefix
pacboy: "make python cmake:p clang:p ccache:p gmp:p git zip unzip diffutils binutils tree zstd:p tar"
# `:` means do not prefix with msystem
pacboy: "make: python: cmake clang ccache gmp git: zip: unzip: diffutils: binutils: tree: zstd tar:"
if: runner.os == 'Windows'
- name: Install Brew Packages
run: |
@@ -426,7 +426,7 @@ jobs:
if: matrix.test-speedcenter
- name: Check Stage 3
run: |
make -C build -j$NPROC stage3
make -C build -j$NPROC check-stage3
if: matrix.test-speedcenter
- name: Test Speedcenter Benchmarks
run: |
@@ -455,12 +455,24 @@ jobs:
# mark as merely cancelled not failed if builds are cancelled
if: ${{ !cancelled() }}
steps:
- if: ${{ contains(needs.*.result, 'failure') && github.repository == 'leanprover/lean4' && github.ref_name == 'master' }}
uses: zulip/github-actions-zulip/send-message@v1
with:
api-key: ${{ secrets.ZULIP_BOT_KEY }}
email: "github-actions-bot@lean-fro.zulipchat.com"
organization-url: "https://lean-fro.zulipchat.com"
to: "infrastructure"
topic: "Github actions"
type: "stream"
content: |
A build of `${{ github.ref_name }}`, triggered by event `${{ github.event_name }}`, [failed](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}).
- if: contains(needs.*.result, 'failure')
uses: actions/github-script@v7
with:
script: |
core.setFailed('Some jobs failed')
# This job creates releases from tags
# (whether they are "unofficial" releases for experiments, or official releases when the tag is "v" followed by a semver string.)
# We do not attempt to automatically construct a changelog here:
@@ -533,3 +545,8 @@ jobs:
gh workflow -R leanprover/release-index run update-index.yml
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_INDEX_TOKEN }}
- name: Update toolchain on mathlib4's nightly-testing branch
run: |
gh workflow -R leanprover-community/mathlib4 run nightly_bump_toolchain.yml
env:
GITHUB_TOKEN: ${{ secrets.MATHLIB4_BOT }}

.github/workflows/jira.yml (new file)

@@ -0,0 +1,34 @@
name: Jira sync
on:
issues:
types: [closed]
jobs:
jira-sync:
runs-on: ubuntu-latest
steps:
- name: Move Jira issue to Done
env:
JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
JIRA_USERNAME: ${{ secrets.JIRA_USERNAME }}
JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
run: |
issue_number=${{ github.event.issue.number }}
jira_issue_key=$(curl -s -u "${JIRA_USERNAME}:${JIRA_API_TOKEN}" \
-X GET -H "Content-Type: application/json" \
"${JIRA_BASE_URL}/rest/api/2/search?jql=summary~\"${issue_number}\"" | \
jq -r '.issues[0].key')
if [ -z "$jira_issue_key" ]; then
exit
fi
curl -s -u "${JIRA_USERNAME}:${JIRA_API_TOKEN}" \
-X POST -H "Content-Type: application/json" \
--data "{\"transition\": {\"id\": \"41\"}}" \
"${JIRA_BASE_URL}/rest/api/2/issue/${jira_issue_key}/transitions"
echo "Moved Jira issue ${jira_issue_key} to Done"


@@ -63,6 +63,20 @@ Because the change will be squashed, there is no need to polish the commit messa
Reviews and Feedback:
----
The lean4 repo is managed by the Lean FRO's *triage team* that aims to provide initial feedback on new bug reports, PRs, and RFCs weekly.
This feedback generally consists of prioritizing the ticket using one of the following categories:
* label `P-high`: We will work on this issue
* label `P-medium`: We may work on this issue if we find the time
* label `P-low`: We are not planning to work on this issue
* *closed*: This issue is already fixed, it is not an issue, or is not sufficiently compatible with our roadmap for the project and we will not work on it nor accept external contributions on it
For *bug reports*, the listed priority reflects our commitment to fixing the issue.
It is generally indicative but not necessarily identical to the priority an external contribution addressing this bug would receive.
For *PRs* and *RFCs*, the priority reflects our commitment to reviewing them and getting them to an acceptable state.
Accepted RFCs are marked with the label `RFC accepted` and afterwards assigned a new "implementation" priority as with bug reports.
General guidelines for interacting with reviews and feedback:
**Be Patient**: Given the limited number of full-time maintainers and the volume of PRs, reviews may take some time.
**Engage Constructively**: Always approach feedback positively and constructively. Remember, reviews are about ensuring the best quality for the project, not personal criticism.


@@ -149,4 +149,4 @@ def fact : Expr ctx (Ty.fn Ty.int Ty.int) :=
(op (·*·) (delay fun _ => app fact (op (·-·) (var stop) (val 1))) (var stop)))
decreasing_by sorry
#eval fact.interp Env.nil 10
#eval! fact.interp Env.nil 10

(Three binary screenshot files changed, not shown: 12 KiB → 19 KiB, 57 KiB → 65 KiB, 23 KiB → 33 KiB.)


@@ -7,12 +7,17 @@ See [Setup](./setup.md) for supported platforms and other ways to set up Lean 4.
1. Launch VS Code and install the `lean4` extension by clicking on the "Extensions" sidebar entry and searching for "lean4".
![installing the vscode-lean4 extension](images/code-ext.png)
![installing the vscode-lean4 extension](images/code-ext.png)
1. Open the Lean 4 setup guide by creating a new text file using "File > New Text File" (`Ctrl+N`), clicking on the ∀-symbol in the top right and selecting "Documentation… > Setup: Show Setup Guide".
1. Open the Lean 4 setup guide by creating a new text file using "File > New Text File" (`Ctrl+N` / `Cmd+N`), clicking on the ∀-symbol in the top right and selecting "Documentation… > Docs: Show Setup Guide".
![show setup guide](images/show-setup-guide.png)
![show setup guide](images/show-setup-guide.png)
1. Follow the Lean 4 setup guide. It will walk you through learning resources for Lean 4, teach you how to set up Lean's dependencies on your platform, install Lean 4 for you at the click of a button and help you set up your first project.
1. Follow the Lean 4 setup guide. It will:
![setup guide](images/setup_guide.png)
- walk you through learning resources for Lean,
- teach you how to set up Lean's dependencies on your platform,
- install Lean 4 for you at the click of a button,
- help you set up your first project.
![setup guide](images/setup_guide.png)


@@ -0,0 +1,65 @@
* Structural recursion can now be explicitly requested using
```
termination_by structural x
```
in analogy to the existing `termination_by x` syntax that causes well-founded recursion to be used.
(#4542)
* The `termination_by?` syntax no longer forces the use of well-founded recursion, and when structural
recursion is inferred, will print the result using the `termination_by` syntax.
* Mutual structural recursion is supported now. This supports both mutual recursion over a non-mutual
data type, as well as recursion over mutual or nested data types:
```lean
mutual
def Even : Nat → Prop
| 0 => True
| n+1 => Odd n
def Odd : Nat → Prop
| 0 => False
| n+1 => Even n
end
mutual
inductive A
| other : B → A
| empty
inductive B
| other : A → B
| empty
end
mutual
def A.size : A → Nat
| .other b => b.size + 1
| .empty => 0
def B.size : B → Nat
| .other a => a.size + 1
| .empty => 0
end
inductive Tree where | node : List Tree → Tree
mutual
def Tree.size : Tree → Nat
| node ts => Tree.list_size ts
def Tree.list_size : List Tree → Nat
| [] => 0
| t::ts => Tree.size t + Tree.list_size ts
end
```
Functional induction principles are generated for these functions as well (`A.size.induct`, `A.size.mutual_induct`).
Nested structural recursion is still not supported.
PRs #4639, #4715, #4642, #4656, #4684, #4715, #4728, #4575, #4731, #4658, #4734, #4738, #4718,
#4733, #4787, #4788, #4789, #4807, #4772
* A bugfix in the structural recursion code may in some cases break existing code, when a parameter
of the type of the recursive argument is bound behind indices of that type. This can usually be
fixed by reordering the parameters of the function (PR #4672)


@@ -1,5 +1,6 @@
cmake_minimum_required(VERSION 3.10)
cmake_policy(SET CMP0054 NEW)
cmake_policy(SET CMP0110 NEW)
if(NOT (${CMAKE_GENERATOR} MATCHES "Unix Makefiles"))
message(FATAL_ERROR "The only supported CMake generator at the moment is 'Unix Makefiles'")
endif()


@@ -474,6 +474,8 @@ class LawfulSingleton (α : Type u) (β : Type v) [EmptyCollection β] [Insert
insert_emptyc_eq (x : α) : (insert x : β) = singleton x
export LawfulSingleton (insert_emptyc_eq)
attribute [simp] insert_emptyc_eq
/-- Type class used to implement the notation `{ a ∈ c | p a }` -/
class Sep (α : outParam <| Type u) (γ : Type v) where
/-- Computes `{ a ∈ c | p a }`. -/
@@ -701,7 +703,7 @@ theorem Ne.elim (h : a ≠ b) : a = b → False := h
theorem Ne.irrefl (h : a ≠ a) : False := h rfl
theorem Ne.symm (h : a ≠ b) : b ≠ a := fun h₁ => h (h₁.symm)
@[symm] theorem Ne.symm (h : a ≠ b) : b ≠ a := fun h₁ => h (h₁.symm)
theorem ne_comm {α} {a b : α} : a ≠ b ↔ b ≠ a := ⟨Ne.symm, Ne.symm⟩
@@ -754,7 +756,7 @@ noncomputable def HEq.elim {α : Sort u} {a : α} {p : α → Sort v} {b : α} (
theorem HEq.subst {p : (T : Sort u) → T → Prop} (h₁ : HEq a b) (h₂ : p α a) : p β b :=
HEq.ndrecOn h₁ h₂
theorem HEq.symm (h : HEq a b) : HEq b a :=
@[symm] theorem HEq.symm (h : HEq a b) : HEq b a :=
h.rec (HEq.refl a)
theorem heq_of_eq (h : a = a') : HEq a a' :=
@@ -810,15 +812,15 @@ instance : Trans Iff Iff Iff where
theorem Eq.comm {a b : α} : a = b ↔ b = a := Iff.intro Eq.symm Eq.symm
theorem eq_comm {a b : α} : a = b ↔ b = a := Eq.comm
theorem Iff.symm (h : a ↔ b) : b ↔ a := Iff.intro h.mpr h.mp
@[symm] theorem Iff.symm (h : a ↔ b) : b ↔ a := Iff.intro h.mpr h.mp
theorem Iff.comm : (a ↔ b) ↔ (b ↔ a) := Iff.intro Iff.symm Iff.symm
theorem iff_comm : (a ↔ b) ↔ (b ↔ a) := Iff.comm
theorem And.symm : a ∧ b → b ∧ a := fun ⟨ha, hb⟩ => ⟨hb, ha⟩
@[symm] theorem And.symm : a ∧ b → b ∧ a := fun ⟨ha, hb⟩ => ⟨hb, ha⟩
theorem And.comm : a ∧ b ↔ b ∧ a := Iff.intro And.symm And.symm
theorem and_comm : a ∧ b ↔ b ∧ a := And.comm
theorem Or.symm : a ∨ b → b ∨ a := .rec .inr .inl
@[symm] theorem Or.symm : a ∨ b → b ∨ a := .rec .inr .inl
theorem Or.comm : a ∨ b ↔ b ∨ a := Iff.intro Or.symm Or.symm
theorem or_comm : a ∨ b ↔ b ∨ a := Or.comm
@@ -1105,6 +1107,1 @@ inductive Relation.TransGen {α : Sort u} (r : α → α → Prop) : α → α
/-! # Subtype -/
namespace Subtype
theorem existsOfSubtype {α : Type u} {p : α → Prop} : { x // p x } → Exists (fun x => p x)
  | ⟨a, h⟩ => ⟨a, h⟩
@@ -1201,9 +1204,13 @@ def Prod.map {α₁ : Type u₁} {α₂ : Type u₂} {β₁ : Type v₁} {β₂
/-! # Dependent products -/
theorem ex_of_PSigma {α : Type u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x)
theorem PSigma.exists {α : Type u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x)
  | ⟨x, hx⟩ => ⟨x, hx⟩
@[deprecated PSigma.exists (since := "2024-07-27")]
theorem ex_of_PSigma {α : Type u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x) :=
  PSigma.exists
protected theorem PSigma.eta {α : Sort u} {β : α → Sort v} {a₁ a₂ : α} {b₁ : β a₁} {b₂ : β a₂}
(h₁ : a₁ = a₂) (h₂ : Eq.ndrec b₁ h₁ = b₂) : PSigma.mk a₁ b₁ = PSigma.mk a₂ b₂ := by
subst h₁
@@ -1545,7 +1552,7 @@ protected abbrev rec
(q : Quot r) : motive q :=
Eq.ndrecOn (Quot.liftIndepPr1 f h q) ((lift (Quot.indep f) (Quot.indepCoherent f h) q).2)
@[inherit_doc Quot.rec] protected abbrev recOn
@[inherit_doc Quot.rec, elab_as_elim] protected abbrev recOn
(q : Quot r)
(f : (a : α) → motive (Quot.mk r a))
(h : (a b : α) → (p : r a b) → Eq.ndrec (f a) (sound p) = f b)
@@ -1556,7 +1563,7 @@ protected abbrev rec
Dependent induction principle for a quotient, when the target type is a `Subsingleton`.
In this case the quotient's side condition is trivial so any function can be lifted.
-/
protected abbrev recOnSubsingleton
@[elab_as_elim] protected abbrev recOnSubsingleton
[h : (a : α) → Subsingleton (motive (Quot.mk r a))]
(q : Quot r)
(f : (a : α) → motive (Quot.mk r a))


@@ -36,3 +36,4 @@ import Init.Data.Channel
import Init.Data.Cast
import Init.Data.Sum
import Init.Data.BEq
import Init.Data.Subtype


@@ -50,6 +50,13 @@ instance : Inhabited (Array α) where
def singleton (v : α) : Array α :=
mkArray 1 v
/-- Low-level version of `size` that directly queries the C array object cached size.
While this is not provable, `usize` always returns the exact size of the array since
the implementation only supports arrays of size less than `USize.size`.
-/
@[extern "lean_array_size", simp]
def usize (a : @& Array α) : USize := a.size.toUSize
/-- Low-level version of `fget` which is as fast as a C array read.
`Fin` values are represented as tag pointers in the Lean runtime. Thus,
`fget` may be slightly slower than `uget`. -/
@@ -174,7 +181,7 @@ def modifyOp (self : Array α) (idx : Nat) (f : α → α) : Array α :=
This kind of low level trick can be removed with a little bit of compiler support. For example, if the compiler simplifies `as.size < usizeSz` to true. -/
@[inline] unsafe def forInUnsafe {α : Type u} {β : Type v} {m : Type v → Type w} [Monad m] (as : Array α) (b : β) (f : α → β → m (ForInStep β)) : m β :=
let sz := USize.ofNat as.size
let sz := as.usize
let rec @[specialize] loop (i : USize) (b : β) : m β := do
if i < sz then
let a := as.uget i lcProof
@@ -280,7 +287,7 @@ def foldrM {α : Type u} {β : Type v} {m : Type v → Type w} [Monad m] (f : α
/-- See comment at `forInUnsafe` -/
@[inline]
unsafe def mapMUnsafe {α : Type u} {β : Type v} {m : Type v → Type w} [Monad m] (f : α → m β) (as : Array α) : m (Array β) :=
let sz := USize.ofNat as.size
let sz := as.usize
let rec @[specialize] map (i : USize) (r : Array NonScalar) : m (Array PNonScalar.{v}) := do
if i < sz then
let v := r.uget i lcProof


@@ -98,6 +98,37 @@ theorem carry_succ (i : Nat) (x y : BitVec w) (c : Bool) :
exact mod_two_pow_add_mod_two_pow_add_bool_lt_two_pow_succ ..
cases x.toNat.testBit i <;> cases y.toNat.testBit i <;> (simp; omega)
/--
If `x &&& y = 0`, then the carry bit `(x + y + 0)` is always `false` for any index `i`.
Intuitively, this is because a carry is only produced when at least two of `x`, `y`, and the
previous carry are true. However, since `x &&& y = 0`, at most one of `x, y` can be true,
and thus we never have a previous carry, which means that the sum cannot produce a carry.
-/
theorem carry_of_and_eq_zero {x y : BitVec w} (h : x &&& y = 0#w) : carry i x y false = false := by
induction i with
| zero => simp
| succ i ih =>
replace h := congrArg (·.getLsb i) h
simp_all [carry_succ]
/-- The final carry bit when computing `x + y + c` is `true` iff `x.toNat + y.toNat + c.toNat ≥ 2^w`. -/
theorem carry_width {x y : BitVec w} :
carry w x y c = decide (x.toNat + y.toNat + c.toNat ≥ 2^w) := by
simp [carry]
/--
If `x &&& y = 0`, then addition does not overflow, and thus `(x + y).toNat = x.toNat + y.toNat`.
-/
theorem toNat_add_of_and_eq_zero {x y : BitVec w} (h : x &&& y = 0#w) :
(x + y).toNat = x.toNat + y.toNat := by
rw [toNat_add]
apply Nat.mod_eq_of_lt
suffices ¬ decide (x.toNat + y.toNat + false.toNat ≥ 2^w) by
simp only [decide_eq_true_eq] at this
omega
rw [← carry_width]
simp [not_eq_true, carry_of_and_eq_zero h]
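As a concrete illustration of this lemma (an editorial example, not part of the commit), two bitwise-disjoint bitvectors add without overflow, so `toNat` distributes over `+`:

```lean
-- 0b0101 &&& 0b1010 = 0, so by `toNat_add_of_and_eq_zero` the sum's
-- `toNat` is the sum of the `toNat`s (5 + 10 = 15 < 2^4, no wrap-around).
example : (0b0101#4 &&& 0b1010#4) = 0#4 := by decide
example : (0b0101#4 + 0b1010#4).toNat = (0b0101#4).toNat + (0b1010#4).toNat := by decide
```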
/-- Carry function for bitwise addition. -/
def adcb (x y c : Bool) : Bool × Bool := (atLeastTwo x y c, Bool.xor x (Bool.xor y c))
@@ -290,7 +321,7 @@ theorem zeroExtend_truncate_succ_eq_zeroExtend_truncate_add_twoPow (x : BitVec w
simp [hik', hik'']
· ext k
simp
omega
by_cases hi : x.getLsb i <;> simp [hi] <;> omega
/--
Recurrence lemma: multiplying `l` with the first `s` bits of `r` is the
@@ -314,7 +345,7 @@ theorem mulRec_eq_mul_signExtend_truncate (l r : BitVec w) (s : Nat) :
have heq :
(if r.getLsb (s' + 1) = true then l <<< (s' + 1) else 0) =
(l * (r &&& (BitVec.twoPow w (s' + 1)))) := by
simp only [ofNat_eq_ofNat, and_twoPow_eq]
simp only [ofNat_eq_ofNat, and_twoPow]
by_cases hr : r.getLsb (s' + 1) <;> simp [hr]
rw [heq, BitVec.mul_add, zeroExtend_truncate_succ_eq_zeroExtend_truncate_add_twoPow]
@@ -326,4 +357,78 @@ theorem getLsb_mul (x y : BitVec w) (i : Nat) :
· simp
· omega
/-! ## shiftLeft recurrence for bitblasting -/
/--
`shiftLeftRec x y n` shifts `x` to the left by the first `n` bits of `y`.
The theorem `shiftLeft_eq_shiftLeftRec` proves the equivalence of `(x <<< y)` and `shiftLeftRec`.
Together with equations `shiftLeftRec_zero`, `shiftLeftRec_succ`,
this allows us to unfold `shiftLeft` into a circuit for bitblasting.
-/
def shiftLeftRec (x : BitVec w₁) (y : BitVec w₂) (n : Nat) : BitVec w₁ :=
let shiftAmt := (y &&& (twoPow w₂ n))
match n with
| 0 => x <<< shiftAmt
| n + 1 => (shiftLeftRec x y n) <<< shiftAmt
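A small concrete instance of the recurrence (an editorial sketch, assuming the definition above is in scope):

```lean
-- With `y = 3#4` (bits 0 and 1 set), the recurrence shifts by
-- `y &&& 1 = 1`, then by `y &&& 2 = 2`, and by `0` for the unset bits,
-- so the net effect is a shift by 3: 1 <<< 3 = 8.
example : BitVec.shiftLeftRec (1#8) (3#4) 3 = 8#8 := by decide
```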
@[simp]
theorem shiftLeftRec_zero {x : BitVec w₁} {y : BitVec w₂} :
shiftLeftRec x y 0 = x <<< (y &&& twoPow w₂ 0) := by
simp [shiftLeftRec]
@[simp]
theorem shiftLeftRec_succ {x : BitVec w₁} {y : BitVec w₂} :
shiftLeftRec x y (n + 1) = (shiftLeftRec x y n) <<< (y &&& twoPow w₂ (n + 1)) := by
simp [shiftLeftRec]
/--
If `y &&& z = 0`, `x <<< (y ||| z) = x <<< y <<< z`.
This follows as `y &&& z = 0` implies `y ||| z = y + z`,
and thus `x <<< (y ||| z) = x <<< (y + z) = x <<< y <<< z`.
-/
theorem shiftLeft_or_of_and_eq_zero {x : BitVec w₁} {y z : BitVec w₂}
(h : y &&& z = 0#w₂) :
x <<< (y ||| z) = x <<< y <<< z := by
rw [← add_eq_or_of_and_eq_zero _ _ h,
shiftLeft_eq', toNat_add_of_and_eq_zero h]
simp [shiftLeft_add]
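A numeric instance of this lemma (editorial, not part of the diff): `y = 2` and `z = 1` are disjoint, so shifting by `y ||| z` factors into the two individual shifts.

```lean
-- 2 &&& 1 = 0, and 1 <<< 3 = (1 <<< 2) <<< 1 = 8.
example : (2#4 &&& 1#4) = 0#4 := by decide
example : (1#8) <<< (2#4 ||| 1#4) = (1#8) <<< (2#4) <<< (1#4) := by decide
```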
/--
`shiftLeftRec x y n` shifts `x` to the left by the first `n` bits of `y`.
-/
theorem shiftLeftRec_eq {x : BitVec w₁} {y : BitVec w₂} {n : Nat} :
shiftLeftRec x y n = x <<< (y.truncate (n + 1)).zeroExtend w₂ := by
induction n generalizing x y
case zero =>
ext i
simp only [shiftLeftRec_zero, twoPow_zero, Nat.reduceAdd, truncate_one]
suffices (y &&& 1#w₂) = zeroExtend w₂ (ofBool (y.getLsb 0)) by simp [this]
ext i
by_cases h : (i : Nat) = 0
· simp [h, Bool.and_comm]
· simp [h]; omega
case succ n ih =>
simp only [shiftLeftRec_succ, and_twoPow]
rw [ih]
by_cases h : y.getLsb (n + 1)
· simp only [h, reduceIte]
rw [zeroExtend_truncate_succ_eq_zeroExtend_truncate_or_twoPow_of_getLsb_true h,
shiftLeft_or_of_and_eq_zero]
simp
· simp only [h, false_eq_true, reduceIte, shiftLeft_zero']
rw [zeroExtend_truncate_succ_eq_zeroExtend_truncate_of_getLsb_false (i := n + 1)]
simp [h]
/--
Show that `x <<< y` can be written in terms of `shiftLeftRec`.
This can be unfolded in terms of `shiftLeftRec_zero`, `shiftLeftRec_succ` for bitblasting.
-/
theorem shiftLeft_eq_shiftLeftRec (x : BitVec w₁) (y : BitVec w₂) :
x <<< y = shiftLeftRec x y (w₂ - 1) := by
rcases w₂ with rfl | w₂
· simp [of_length_zero]
· simp [shiftLeftRec_eq]
end BitVec


@@ -436,6 +436,12 @@ theorem zeroExtend_ofNat_one_eq_ofNat_one_of_lt {v w : Nat} (hv : 0 < v) :
have hv := Nat.testBit_one_eq_true_iff_self_eq_zero.mp hi₁
omega
/-- Truncating to width 1 produces a bitvector equal to the least significant bit. -/
theorem truncate_one {x : BitVec w} :
x.truncate 1 = ofBool (x.getLsb 0) := by
ext i
simp [show i = 0 by omega]
/-! ## extractLsb -/
@[simp]
@@ -531,6 +537,11 @@ theorem and_assoc (x y z : BitVec w) :
ext i
simp [Bool.and_assoc]
theorem and_comm (x y : BitVec w) :
x &&& y = y &&& x := by
ext i
simp [Bool.and_comm]
/-! ### xor -/
@[simp] theorem toNat_xor (x y : BitVec v) :
@@ -626,6 +637,10 @@ theorem shiftLeft_zero_eq (x : BitVec w) : x <<< 0 = x := by
apply eq_of_toNat_eq
simp
@[simp]
theorem zero_shiftLeft (n : Nat) : 0#w <<< n = 0#w := by
simp [bv_toNat]
@[simp] theorem getLsb_shiftLeft (x : BitVec m) (n) :
getLsb (x <<< n) i = (decide (i < m) && !decide (i < n) && getLsb x (i - n)) := by
rw [← testBit_toNat, getLsb]
@@ -691,6 +706,22 @@ theorem shiftLeft_shiftLeft {w : Nat} (x : BitVec w) (n m : Nat) :
(x <<< n) <<< m = x <<< (n + m) := by
rw [shiftLeft_add]
/-! ### shiftLeft reductions from BitVec to Nat -/
@[simp]
theorem shiftLeft_eq' {x : BitVec w₁} {y : BitVec w₂} : x <<< y = x <<< y.toNat := by rfl
@[simp]
theorem shiftLeft_zero' {x : BitVec w₁} : x <<< 0#w₂ = x := by simp
theorem shiftLeft_shiftLeft' {x : BitVec w₁} {y : BitVec w₂} {z : BitVec w₃} :
x <<< y <<< z = x <<< (y.toNat + z.toNat) := by
simp [shiftLeft_add]
theorem getLsb_shiftLeft' {x : BitVec w₁} {y : BitVec w₂} {i : Nat} :
(x <<< y).getLsb i = (decide (i < w₁) && !decide (i < y.toNat) && x.getLsb (i - y.toNat)) := by
simp [shiftLeft_eq', getLsb_shiftLeft]
/-! ### ushiftRight -/
@[simp, bv_toNat] theorem toNat_ushiftRight (x : BitVec n) (i : Nat) :
@@ -1452,12 +1483,18 @@ theorem getLsb_twoPow (i j : Nat) : (twoPow w i).getLsb j = ((i < w) && (i = j))
simp at hi
simp_all
theorem and_twoPow_eq (x : BitVec w) (i : Nat) :
@[simp]
theorem and_twoPow (x : BitVec w) (i : Nat) :
x &&& (twoPow w i) = if x.getLsb i then twoPow w i else 0#w := by
ext j
simp only [getLsb_and, getLsb_twoPow]
by_cases hj : i = j <;> by_cases hx : x.getLsb i <;> simp_all
@[simp]
theorem twoPow_and (x : BitVec w) (i : Nat) :
(twoPow w i) &&& x = if x.getLsb i then twoPow w i else 0#w := by
rw [BitVec.and_comm, and_twoPow]
@[simp]
theorem mul_twoPow_eq_shiftLeft (x : BitVec w) (i : Nat) :
x * (twoPow w i) = x <<< i := by
@@ -1471,6 +1508,14 @@ theorem mul_twoPow_eq_shiftLeft (x : BitVec w) (i : Nat) :
apply Nat.pow_dvd_pow 2 (by omega)
simp [Nat.mul_mod, hpow]
theorem twoPow_zero {w : Nat} : twoPow w 0 = 1#w := by
apply eq_of_toNat_eq
simp
@[simp]
theorem getLsb_one {w i : Nat} : (1#w).getLsb i = (decide (0 < w) && decide (0 = i)) := by
rw [← twoPow_zero, getLsb_twoPow]
/- ### zeroExtend, truncate, and bitwise operations -/
/--


@@ -37,6 +37,10 @@ def push : ByteArray → UInt8 → ByteArray
def size : (@& ByteArray) → Nat
| bs => bs.size
@[extern "lean_sarray_size", simp]
def usize (a : @& ByteArray) : USize :=
a.size.toUSize
@[extern "lean_byte_array_uget"]
def uget : (a : @& ByteArray) → (i : USize) → i.toNat < a.size → UInt8
| bs, i, h => bs[i]
@@ -119,7 +123,7 @@ def toList (bs : ByteArray) : List UInt8 :=
TODO: avoid code duplication in the future after we improve the compiler.
-/
@[inline] unsafe def forInUnsafe {β : Type v} {m : Type v → Type w} [Monad m] (as : ByteArray) (b : β) (f : UInt8 → β → m (ForInStep β)) : m β :=
let sz := USize.ofNat as.size
let sz := as.usize
let rec @[specialize] loop (i : USize) (b : β) : m β := do
if i < sz then
let a := as.uget i lcProof


@@ -37,6 +37,10 @@ def push : FloatArray → Float → FloatArray
def size : (@& FloatArray) → Nat
| ds => ds.size
@[extern "lean_sarray_size", simp]
def usize (a : @& FloatArray) : USize :=
a.size.toUSize
@[extern "lean_float_array_uget"]
def uget : (a : @& FloatArray) → (i : USize) → i.toNat < a.size → Float
| ds, i, h => ds[i]
@@ -90,7 +94,7 @@ partial def toList (ds : FloatArray) : List Float :=
-/
-- TODO: avoid code duplication in the future after we improve the compiler.
@[inline] unsafe def forInUnsafe {β : Type v} {m : Type v → Type w} [Monad m] (as : FloatArray) (b : β) (f : Float → β → m (ForInStep β)) : m β :=
let sz := USize.ofNat as.size
let sz := as.usize
let rec @[specialize] loop (i : USize) (b : β) : m β := do
if i < sz then
let a := as.uget i lcProof


@@ -12,3 +12,5 @@ import Init.Data.List.Attach
import Init.Data.List.Impl
import Init.Data.List.TakeDrop
import Init.Data.List.Notation
import Init.Data.List.Range
import Init.Data.List.Nat


@@ -5,6 +5,7 @@ Authors: Mario Carneiro
-/
prelude
import Init.Data.List.Lemmas
import Init.Data.Subtype
namespace List
@@ -44,3 +45,155 @@ Unsafe implementation of `attachWith`, taking advantage of the fact that the rep
| nil, hL' => rfl
| cons _ L', hL' => congrArg _ <| go L' fun _ hx => hL' (.tail _ hx)
exact go L h'
@[simp] theorem attach_nil : ([] : List α).attach = [] := rfl
@[simp]
theorem pmap_eq_map (p : α → Prop) (f : α → β) (l : List α) (H) :
@pmap _ _ p (fun a _ => f a) l H = map f l := by
induction l
· rfl
· simp only [*, pmap, map]
theorem pmap_congr {p q : α → Prop} {f : ∀ a, p a → β} {g : ∀ a, q a → β} (l : List α) {H₁ H₂}
(h : ∀ a ∈ l, ∀ (h₁ h₂), f a h₁ = g a h₂) : pmap f l H₁ = pmap g l H₂ := by
induction l with
| nil => rfl
| cons x l ih => rw [pmap, pmap, h _ (mem_cons_self _ _), ih fun a ha => h a (mem_cons_of_mem _ ha)]
theorem map_pmap {p : α → Prop} (g : β → γ) (f : ∀ a, p a → β) (l H) :
map g (pmap f l H) = pmap (fun a h => g (f a h)) l H := by
induction l
· rfl
· simp only [*, pmap, map]
theorem pmap_map {p : β → Prop} (g : ∀ b, p b → γ) (f : α → β) (l H) :
pmap g (map f l) H = pmap (fun a h => g (f a) h) l fun a h => H _ (mem_map_of_mem _ h) := by
induction l
· rfl
· simp only [*, pmap, map]
theorem pmap_eq_map_attach {p : α → Prop} (f : ∀ a, p a → β) (l H) :
pmap f l H = l.attach.map fun x => f x.1 (H _ x.2) := by
rw [attach, attachWith, map_pmap]; exact pmap_congr l fun _ _ _ _ => rfl
theorem attach_map_coe (l : List α) (f : α → β) :
(l.attach.map fun (i : {i // i ∈ l}) => f i) = l.map f := by
rw [attach, attachWith, map_pmap]; exact pmap_eq_map _ _ _ _
theorem attach_map_val (l : List α) (f : α → β) : (l.attach.map fun i => f i.val) = l.map f :=
attach_map_coe _ _
@[simp]
theorem attach_map_subtype_val (l : List α) : l.attach.map Subtype.val = l :=
(attach_map_coe _ _).trans l.map_id
theorem countP_attach (l : List α) (p : α → Bool) : l.attach.countP (fun a : {x // x ∈ l} => p a) = l.countP p := by
simp only [← Function.comp_apply (g := Subtype.val), ← countP_map, attach_map_subtype_val]
@[simp]
theorem count_attach [DecidableEq α] (l : List α) (a : {x // x ∈ l}) : l.attach.count a = l.count ↑a :=
Eq.trans (countP_congr fun _ _ => by simp [Subtype.ext_iff]) <| countP_attach _ _
@[simp]
theorem mem_attach (l : List α) : ∀ x, x ∈ l.attach
| ⟨a, h⟩ => by
have := mem_map.1 (by rw [attach_map_subtype_val] <;> exact h)
rcases this with ⟨⟨_, _⟩, m, rfl⟩
exact m
@[simp]
theorem mem_pmap {p : α → Prop} {f : ∀ a, p a → β} {l H b} :
b ∈ pmap f l H ↔ ∃ (a : _) (h : a ∈ l), f a (H a h) = b := by
simp only [pmap_eq_map_attach, mem_map, mem_attach, true_and, Subtype.exists, eq_comm]
@[simp]
theorem length_pmap {p : α → Prop} {f : ∀ a, p a → β} {l H} : length (pmap f l H) = length l := by
induction l
· rfl
· simp only [*, pmap, length]
@[simp]
theorem length_attach (L : List α) : L.attach.length = L.length :=
length_pmap
@[simp]
theorem pmap_eq_nil {p : α → Prop} {f : ∀ a, p a → β} {l H} : pmap f l H = [] ↔ l = [] := by
rw [← length_eq_zero, length_pmap, length_eq_zero]
@[simp]
theorem attach_eq_nil (l : List α) : l.attach = [] ↔ l = [] :=
pmap_eq_nil
theorem getLast_pmap (p : α → Prop) (f : ∀ a, p a → β) (l : List α)
(hl₁ : ∀ a ∈ l, p a) (hl₂ : l ≠ []) :
(l.pmap f hl₁).getLast (mt List.pmap_eq_nil.1 hl₂) =
f (l.getLast hl₂) (hl₁ _ (List.getLast_mem hl₂)) := by
induction l with
| nil => apply (hl₂ rfl).elim
| cons l_hd l_tl l_ih =>
by_cases hl_tl : l_tl = []
· simp [hl_tl]
· simp only [pmap]
rw [getLast_cons, l_ih _ hl_tl]
simp only [getLast_cons hl_tl]
theorem getElem?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) (n : Nat) :
(pmap f l h)[n]? = Option.pmap f l[n]? fun x H => h x (getElem?_mem H) := by
induction l generalizing n with
| nil => simp
| cons hd tl hl =>
rcases n with _ | n
· simp only [Option.pmap]
split <;> simp_all
· simp only [hl, pmap, Option.pmap, getElem?_cons_succ]
split <;> rename_i h₁ _ <;> split <;> rename_i h₂ _
· simp_all
· simp at h₂
simp_all
· simp_all
· simp_all
theorem get?_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) (n : Nat) :
get? (pmap f l h) n = Option.pmap f (get? l n) fun x H => h x (get?_mem H) := by
simp only [get?_eq_getElem?]
simp [getElem?_pmap, h]
theorem getElem_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) {n : Nat}
(hn : n < (pmap f l h).length) :
(pmap f l h)[n] =
f (l[n]'(@length_pmap _ _ p f l h ▸ hn))
(h _ (getElem_mem l n (@length_pmap _ _ p f l h ▸ hn))) := by
induction l generalizing n with
| nil =>
simp only [length, pmap] at hn
exact absurd hn (Nat.not_lt_of_le n.zero_le)
| cons hd tl hl =>
cases n
· simp
· simp [hl]
theorem get_pmap {p : α → Prop} (f : ∀ a, p a → β) {l : List α} (h : ∀ a ∈ l, p a) {n : Nat}
(hn : n < (pmap f l h).length) :
get (pmap f l h) ⟨n, hn⟩ =
f (get l ⟨n, @length_pmap _ _ p f l h ▸ hn⟩)
(h _ (get_mem l n (@length_pmap _ _ p f l h ▸ hn))) := by
simp only [get_eq_getElem]
simp [getElem_pmap]
theorem pmap_append {p : ι → Prop} (f : ∀ a : ι, p a → α) (l₁ l₂ : List ι)
(h : ∀ a ∈ l₁ ++ l₂, p a) :
(l₁ ++ l₂).pmap f h =
(l₁.pmap f fun a ha => h a (mem_append_left l₂ ha)) ++
l₂.pmap f fun a ha => h a (mem_append_right l₁ ha) := by
induction l₁ with
| nil => rfl
| cons _ _ ih =>
dsimp only [pmap, cons_append]
rw [ih]
theorem pmap_append' {p : α → Prop} (f : ∀ a : α, p a → β) (l₁ l₂ : List α)
(h₁ : ∀ a ∈ l₁, p a) (h₂ : ∀ a ∈ l₂, p a) :
((l₁ ++ l₂).pmap f fun a ha => (List.mem_append.1 ha).elim (h₁ a) (h₂ a)) =
l₁.pmap f h₁ ++ l₂.pmap f h₂ :=
pmap_append f l₁ l₂ _


@@ -27,24 +27,32 @@ Recall that `length`, `get`, `set`, `foldl`, and `concat` have already been defi
The operations are organized as follows:
* Equality: `beq`, `isEqv`.
* Lexicographic ordering: `lt`, `le`, and instances.
* Head and tail operators: `head`, `head?`, `headD`, `tail`, `tail?`, `tailD`.
* Basic operations:
`map`, `filter`, `filterMap`, `foldr`, `append`, `join`, `pure`, `bind`, `replicate`, and `reverse`.
`map`, `filter`, `filterMap`, `foldr`, `append`, `join`, `pure`, `bind`, `replicate`, and
`reverse`.
* Additional functions defined in terms of these: `leftpad`, `rightpad`, and `reduceOption`.
* List membership: `isEmpty`, `elem`, `contains`, `mem` (and the `∈` notation),
and decidability for predicates quantifying over membership in a `List`.
* Sublists: `take`, `drop`, `takeWhile`, `dropWhile`, `partition`, `dropLast`,
`isPrefixOf`, `isPrefixOf?`, `isSuffixOf`, `isSuffixOf?`, `Subset`, `Sublist`, `rotateLeft` and `rotateRight`.
* Manipulating elements: `replace`, `insert`, `erase`, `eraseP`, `eraseIdx`, `find?`, `findSome?`, and `lookup`.
`isPrefixOf`, `isPrefixOf?`, `isSuffixOf`, `isSuffixOf?`, `Subset`, `Sublist`,
`rotateLeft` and `rotateRight`.
* Manipulating elements: `replace`, `insert`, `erase`, `eraseP`, `eraseIdx`.
* Finding elements: `find?`, `findSome?`, `findIdx`, `indexOf`, `findIdx?`, `indexOf?`,
`countP`, `count`, and `lookup`.
* Logic: `any`, `all`, `or`, and `and`.
* Zippers: `zipWith`, `zip`, `zipWithAll`, and `unzip`.
* Ranges and enumeration: `range`, `iota`, `enumFrom`, and `enum`.
* Minima and maxima: `minimum?` and `maximum?`.
* Other functions: `intersperse`, `intercalate`, `eraseDups`, `eraseReps`, `span`, `groupBy`, `removeAll`
* Other functions: `intersperse`, `intercalate`, `eraseDups`, `eraseReps`, `span`, `groupBy`,
`removeAll`
(currently these functions are mostly only used in meta code,
and do not have API suitable for verification).
Further operations are defined in `Init.Data.List.BasicAux` (because they use `Array` in their implementations), namely:
Further operations are defined in `Init.Data.List.BasicAux`
(because they use `Array` in their implementations), namely:
* Variant getters: `get!`, `get?`, `getD`, `getLast`, `getLast!`, `getLast?`, and `getLastD`.
* Head and tail: `head`, `head!`, `head?`, `headD`, `tail!`, `tail?`, and `tailD`.
* Head and tail: `head!`, `tail!`.
* Other operations on sublists: `partitionMap`, `rotateLeft`, and `rotateRight`.
-/
@@ -315,6 +323,16 @@ def headD : (as : List α) → (fallback : α) → α
@[simp 1100] theorem headD_nil : @headD α [] d = d := rfl
@[simp 1100] theorem headD_cons : @headD α (a::l) d = a := rfl
/-! ### tail -/
/-- Get the tail of a nonempty list, or return `[]` for `[]`. -/
def tail : List α → List α
| [] => []
| _::as => as
@[simp] theorem tail_nil : @tail α [] = [] := rfl
@[simp] theorem tail_cons : @tail α (a::as) = as := rfl
/-! ### tail? -/
/--
@@ -577,6 +595,28 @@ theorem replicate_succ (a : α) (n) : replicate (n+1) a = a :: replicate n a :=
| zero => simp
| succ n ih => simp only [ih, replicate_succ, length_cons, Nat.succ_eq_add_one]
/-! ## Additional functions -/
/-! ### leftpad and rightpad -/
/--
Pads `l : List α` on the left with repeated occurrences of `a : α` until it is of length `n`.
If `l` is initially larger than `n`, just return `l`.
-/
def leftpad (n : Nat) (a : α) (l : List α) : List α := replicate (n - length l) a ++ l
/--
Pads `l : List α` on the right with repeated occurrences of `a : α` until it is of length `n`.
If `l` is initially larger than `n`, just return `l`.
-/
def rightpad (n : Nat) (a : α) (l : List α) : List α := l ++ replicate (n - length l) a
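A few concrete evaluations of the new padding functions (editorial examples, not part of the commit):

```lean
-- Pad `[1, 2, 3]` with zeros up to length 5, on the left and on the right.
example : List.leftpad 5 0 [1, 2, 3] = [0, 0, 1, 2, 3] := by decide
example : List.rightpad 5 0 [1, 2, 3] = [1, 2, 3, 0, 0] := by decide
-- A list already at least `n` long is returned unchanged.
example : List.leftpad 2 0 [1, 2, 3] = [1, 2, 3] := by decide
```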
/-! ### reduceOption -/
/-- Drop `none`s from a list, and replace each remaining `some a` with `a`. -/
@[inline] def reduceOption {α} : List (Option α) → List α :=
List.filterMap id
/-! ## List membership
* `L.contains a : Bool` determines, using a `[BEq α]` instance, whether `L` contains an element `· == a`.
@@ -719,7 +759,7 @@ def take : Nat → List α → List α
@[simp] theorem take_nil : ([] : List α).take i = [] := by cases i <;> rfl
@[simp] theorem take_zero (l : List α) : l.take 0 = [] := rfl
@[simp] theorem take_cons_succ : (a::as).take (i+1) = a :: as.take i := rfl
@[simp] theorem take_succ_cons : (a::as).take (i+1) = a :: as.take i := rfl
/-! ### drop -/
@@ -826,46 +866,6 @@ def dropLast {α} : List α → List α
have ih := length_dropLast_cons b bs
simp [dropLast, ih]
/-! ### isPrefixOf -/
/-- `isPrefixOf l₁ l₂` returns `true` iff `l₁` is a prefix of `l₂`.
That is, there exists a `t` such that `l₂ == l₁ ++ t`. -/
def isPrefixOf [BEq α] : List α → List α → Bool
| [], _ => true
| _, [] => false
| a::as, b::bs => a == b && isPrefixOf as bs
@[simp] theorem isPrefixOf_nil_left [BEq α] : isPrefixOf ([] : List α) l = true := by
simp [isPrefixOf]
@[simp] theorem isPrefixOf_cons_nil [BEq α] : isPrefixOf (a::as) ([] : List α) = false := rfl
theorem isPrefixOf_cons₂ [BEq α] {a : α} :
isPrefixOf (a::as) (b::bs) = (a == b && isPrefixOf as bs) := rfl
/-! ### isPrefixOf? -/
/-- `isPrefixOf? l₁ l₂` returns `some t` when `l₂ == l₁ ++ t`. -/
def isPrefixOf? [BEq α] : List α → List α → Option (List α)
| [], l₂ => some l₂
| _, [] => none
| (x₁ :: l₁), (x₂ :: l₂) =>
if x₁ == x₂ then isPrefixOf? l₁ l₂ else none
/-! ### isSuffixOf -/
/-- `isSuffixOf l₁ l₂` returns `true` iff `l₁` is a suffix of `l₂`.
That is, there exists a `t` such that `l₂ == t ++ l₁`. -/
def isSuffixOf [BEq α] (l₁ l₂ : List α) : Bool :=
isPrefixOf l₁.reverse l₂.reverse
@[simp] theorem isSuffixOf_nil_left [BEq α] : isSuffixOf ([] : List α) l = true := by
simp [isSuffixOf]
/-! ### isSuffixOf? -/
/-- `isSuffixOf? l₁ l₂` returns `some t` when `l₂ == t ++ l₁`.-/
def isSuffixOf? [BEq α] (l₁ l₂ : List α) : Option (List α) :=
Option.map List.reverse <| isPrefixOf? l₁.reverse l₂.reverse
/-! ### Subset -/
/--
@@ -900,6 +900,68 @@ def isSublist [BEq α] : List α → List α → Bool
then tl₁.isSublist tl₂
else l₁.isSublist tl₂
/-! ### IsPrefix / isPrefixOf / isPrefixOf? -/
/--
`IsPrefix l₁ l₂`, or `l₁ <+: l₂`, means that `l₁` is a prefix of `l₂`,
that is, `l₂` has the form `l₁ ++ t` for some `t`.
-/
def IsPrefix (l₁ : List α) (l₂ : List α) : Prop := Exists fun t => l₁ ++ t = l₂
@[inherit_doc] infixl:50 " <+: " => IsPrefix
/-- `isPrefixOf l₁ l₂` returns `true` iff `l₁` is a prefix of `l₂`.
That is, there exists a `t` such that `l₂ == l₁ ++ t`. -/
def isPrefixOf [BEq α] : List α → List α → Bool
| [], _ => true
| _, [] => false
| a::as, b::bs => a == b && isPrefixOf as bs
@[simp] theorem isPrefixOf_nil_left [BEq α] : isPrefixOf ([] : List α) l = true := by
simp [isPrefixOf]
@[simp] theorem isPrefixOf_cons_nil [BEq α] : isPrefixOf (a::as) ([] : List α) = false := rfl
theorem isPrefixOf_cons₂ [BEq α] {a : α} :
isPrefixOf (a::as) (b::bs) = (a == b && isPrefixOf as bs) := rfl
/-- `isPrefixOf? l₁ l₂` returns `some t` when `l₂ == l₁ ++ t`. -/
def isPrefixOf? [BEq α] : List α → List α → Option (List α)
| [], l₂ => some l₂
| _, [] => none
| (x₁ :: l₁), (x₂ :: l₂) =>
if x₁ == x₂ then isPrefixOf? l₁ l₂ else none
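The success and failure cases can be checked on small inputs (editorial examples, not part of the commit):

```lean
-- On success, `isPrefixOf?` returns the leftover suffix `t` with `l₂ == l₁ ++ t`.
example : [1, 2].isPrefixOf? [1, 2, 3] = some [3] := by decide
example : [2].isPrefixOf? [1, 2, 3] = none := by decide
```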
/-! ### IsSuffix / isSuffixOf / isSuffixOf? -/
/-- `isSuffixOf l₁ l₂` returns `true` iff `l₁` is a suffix of `l₂`.
That is, there exists a `t` such that `l₂ == t ++ l₁`. -/
def isSuffixOf [BEq α] (l₁ l₂ : List α) : Bool :=
isPrefixOf l₁.reverse l₂.reverse
@[simp] theorem isSuffixOf_nil_left [BEq α] : isSuffixOf ([] : List α) l = true := by
simp [isSuffixOf]
/-- `isSuffixOf? l₁ l₂` returns `some t` when `l₂ == t ++ l₁`.-/
def isSuffixOf? [BEq α] (l₁ l₂ : List α) : Option (List α) :=
Option.map List.reverse <| isPrefixOf? l₁.reverse l₂.reverse
/--
`IsSuffix l₁ l₂`, or `l₁ <:+ l₂`, means that `l₁` is a suffix of `l₂`,
that is, `l₂` has the form `t ++ l₁` for some `t`.
-/
def IsSuffix (l₁ : List α) (l₂ : List α) : Prop := Exists fun t => t ++ l₁ = l₂
@[inherit_doc] infixl:50 " <:+ " => IsSuffix
/-! ### IsInfix -/
/--
`IsInfix l₁ l₂`, or `l₁ <:+: l₂`, means that `l₁` is a contiguous
substring of `l₂`, that is, `l₂` has the form `s ++ l₁ ++ t` for some `s, t`.
-/
def IsInfix (l₁ : List α) (l₂ : List α) : Prop := Exists fun s => Exists fun t => s ++ l₁ ++ t = l₂
@[inherit_doc] infixl:50 " <:+: " => IsInfix
/-! ### rotateLeft -/
/--
@@ -1058,6 +1120,8 @@ def eraseIdx : List α → Nat → List α
@[simp] theorem eraseIdx_cons_zero : (a::as).eraseIdx 0 = as := rfl
@[simp] theorem eraseIdx_cons_succ : (a::as).eraseIdx (i+1) = a :: as.eraseIdx i := rfl
/-! Finding elements -/
/-! ### find? -/
/--
@@ -1095,6 +1159,50 @@ theorem findSome?_cons {f : α → Option β} :
(a::as).findSome? f = match f a with | some b => some b | none => as.findSome? f :=
rfl
/-! ### findIdx -/
/-- Returns the index of the first element satisfying `p`, or the length of the list otherwise. -/
@[inline] def findIdx (p : α → Bool) (l : List α) : Nat := go l 0 where
/-- Auxiliary for `findIdx`: `findIdx.go p l n = findIdx p l + n` -/
@[specialize] go : List α → Nat → Nat
| [], n => n
| a :: l, n => bif p a then n else go l (n + 1)
@[simp] theorem findIdx_nil {α : Type _} (p : α → Bool) : [].findIdx p = 0 := rfl
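Concrete evaluations of `findIdx` (editorial examples, not part of the commit):

```lean
example : [7, 3, 8, 1].findIdx (· > 5) = 0 := by decide
example : [7, 3, 8, 1].findIdx (· < 3) = 3 := by decide
-- When no element satisfies the predicate, the length of the list is returned.
example : [7, 3].findIdx (· == 0) = 2 := by decide
```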
/-! ### indexOf -/
/-- Returns the index of the first element equal to `a`, or the length of the list otherwise. -/
def indexOf [BEq α] (a : α) : List α → Nat := findIdx (· == a)
@[simp] theorem indexOf_nil [BEq α] : ([] : List α).indexOf x = 0 := rfl
/-! ### findIdx? -/
/-- Return the index of the first occurrence of an element satisfying `p`. -/
def findIdx? (p : α → Bool) : List α → (start : Nat := 0) → Option Nat
| [], _ => none
| a :: l, i => if p a then some i else findIdx? p l (i + 1)
/-! ### indexOf? -/
/-- Return the index of the first occurrence of `a` in the list. -/
@[inline] def indexOf? [BEq α] (a : α) : List α → Option Nat := findIdx? (· == a)
/-! ### countP -/
/-- `countP p l` is the number of elements of `l` that satisfy `p`. -/
@[inline] def countP (p : α → Bool) (l : List α) : Nat := go l 0 where
/-- Auxiliary for `countP`: `countP.go p l acc = countP p l + acc`. -/
@[specialize] go : List α → Nat → Nat
| [], acc => acc
| x :: xs, acc => bif p x then go xs (acc + 1) else go xs acc
/-! ### count -/
/-- `count a l` is the number of occurrences of `a` in `l`. -/
@[inline] def count [BEq α] (a : α) : List α → Nat := countP (· == a)
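Small checks of the counting functions (editorial examples, not part of the commit):

```lean
example : [1, 2, 1, 3].count 1 = 2 := by decide
example : [1, 2, 1, 3].countP (· ≥ 2) = 2 := by decide
```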
/-! ### lookup -/
/--
@@ -1236,6 +1344,14 @@ def unzip : List (α × β) → List α × List β
/-! ## Ranges and enumeration -/
/-- Sum of a list of natural numbers. -/
-- This is not in the `List` namespace as later `List.sum` will be defined polymorphically.
protected def _root_.Nat.sum (l : List Nat) : Nat := l.foldr (·+·) 0
@[simp] theorem _root_.Nat.sum_nil : Nat.sum ([] : List Nat) = 0 := rfl
@[simp] theorem _root_.Nat.sum_cons (a : Nat) (l : List Nat) :
Nat.sum (a::l) = a + Nat.sum l := rfl
/-! ### range -/
/--
@@ -1251,6 +1367,14 @@ where
@[simp] theorem range_zero : range 0 = [] := rfl
/-! ### range' -/
/-- `range' start len step` is the list of numbers `[start, start+step, ..., start+(len-1)*step]`.
It is intended mainly for proving properties of `range` and `iota`. -/
def range' : (start len : Nat) → (step : Nat := 1) → List Nat
| _, 0, _ => []
| s, n+1, step => s :: range' (s+step) n step
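For instance (editorial examples, not part of the commit):

```lean
-- Default step of 1, and an explicit step of 2.
example : List.range' 2 3 = [2, 3, 4] := by decide
example : List.range' 2 3 2 = [2, 4, 6] := by decide
```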
/-! ### iota -/
/--


@@ -127,12 +127,12 @@ results `y` for which `f x` returns `some y`.
@[inline]
def filterMapM {m : Type u → Type v} [Monad m] {α β : Type u} (f : α → m (Option β)) (as : List α) : m (List β) :=
let rec @[specialize] loop
| [], bs => pure bs
| [], bs => pure bs.reverse
| a :: as, bs => do
match (← f a) with
| none => loop as bs
| some b => loop as (b::bs)
loop as.reverse []
loop as []
/--
Folds a monadic function over a list from left to right:
@@ -227,6 +227,8 @@ def findSomeM? {m : Type u → Type v} [Monad m] {α : Type w} {β : Type u} (f
instance : ForIn m (List α) α where
forIn := List.forIn
@[simp] theorem forIn_eq_forIn [Monad m] : @List.forIn α β m _ = forIn := rfl
@[simp] theorem forIn_nil [Monad m] (f : α → β → m (ForInStep β)) (b : β) : forIn [] b f = pure b :=
rfl


@@ -193,6 +193,17 @@ theorem replicateTR_loop_eq : ∀ n, replicateTR.loop a n acc = replicate n a ++
apply funext; intro α; apply funext; intro n; apply funext; intro a
exact (replicateTR_loop_replicate_eq _ 0 n).symm
/-! ## Additional functions -/
/-! ### leftpad -/
/-- Optimized version of `leftpad`. -/
@[inline] def leftpadTR (n : Nat) (a : α) (l : List α) : List α :=
replicateTR.loop a (n - length l) l
@[csimp] theorem leftpad_eq_leftpadTR : @leftpad = @leftpadTR := by
funext α n a l; simp [leftpad, leftpadTR, replicateTR_loop_eq]
/-! ## Sublists -/
/-! ### take -/
@@ -366,6 +377,26 @@ def unzipTR (l : List (α × β)) : List α × List β :=
/-! ## Ranges and enumeration -/
/-! ### range' -/
/-- Optimized version of `range'`. -/
@[inline] def range'TR (s n : Nat) (step : Nat := 1) : List Nat := go n (s + step * n) [] where
/-- Auxiliary for `range'TR`: `range'TR.go n e = [e-n, ..., e-1] ++ acc`. -/
go : Nat → Nat → List Nat → List Nat
| 0, _, acc => acc
| n+1, e, acc => go n (e-step) ((e-step) :: acc)
@[csimp] theorem range'_eq_range'TR : @range' = @range'TR := by
funext s n step
let rec go (s) : ∀ n m,
range'TR.go step n (s + step * n) (range' (s + step * n) m step) = range' s (n + m) step
| 0, m => by simp [range'TR.go]
| n+1, m => by
simp [range'TR.go]
rw [Nat.mul_succ, Nat.add_assoc, Nat.add_sub_cancel, Nat.add_right_comm n]
exact go s n (m + 1)
exact (go s n 0).symm
/-! ### iota -/
/-- Tail-recursive version of `List.iota`. -/

File diff suppressed because it is too large


@@ -0,0 +1,93 @@
/-
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro
-/
prelude
import Init.Data.List.Lemmas
import Init.Data.Nat.Lemmas
/-!
# Miscellaneous `List` lemmas that require more `Nat` lemmas than are available in `Init.Data.List.Lemmas`.
In particular, `omega` is available here.
-/
open Nat
namespace List
/-! ### filter -/
theorem length_filter_lt_length_iff_exists (l) :
length (filter p l) < length l ↔ ∃ x ∈ l, ¬p x := by
simpa [length_eq_countP_add_countP p l, countP_eq_length_filter] using
countP_pos (fun x => ¬p x) (l := l)
/-! ### minimum? -/
-- A specialization of `minimum?_eq_some_iff` to Nat.
theorem minimum?_eq_some_iff' {xs : List Nat} :
xs.minimum? = some a ↔ (a ∈ xs ∧ ∀ b ∈ xs, a ≤ b) :=
minimum?_eq_some_iff
(le_refl := Nat.le_refl)
(min_eq_or := fun _ _ => by omega)
(le_min_iff := fun _ _ _ => by omega)
-- This could be generalized,
-- but will first require further work on order typeclasses in the core repository.
theorem minimum?_cons' {a : Nat} {l : List Nat} :
(a :: l).minimum? = some (match l.minimum? with
| none => a
| some m => min a m) := by
rw [minimum?_eq_some_iff']
split <;> rename_i h m
· simp_all
· rw [minimum?_eq_some_iff'] at m
obtain ⟨m, le⟩ := m
rw [Nat.min_def]
constructor
· split
· exact mem_cons_self a l
· exact mem_cons_of_mem a m
· intro b m
cases List.mem_cons.1 m with
| inl => split <;> omega
| inr h =>
specialize le b h
split <;> omega
/-! ### maximum? -/
-- A specialization of `maximum?_eq_some_iff` to Nat.
theorem maximum?_eq_some_iff' {xs : List Nat} :
xs.maximum? = some a ↔ (a ∈ xs ∧ ∀ b ∈ xs, b ≤ a) :=
maximum?_eq_some_iff
(le_refl := Nat.le_refl)
(max_eq_or := fun _ _ => by omega)
(max_le_iff := fun _ _ _ => by omega)
-- This could be generalized,
-- but will first require further work on order typeclasses in the core repository.
theorem maximum?_cons' {a : Nat} {l : List Nat} :
(a :: l).maximum? = some (match l.maximum? with
| none => a
| some m => max a m) := by
rw [maximum?_eq_some_iff']
split <;> rename_i h m
· simp_all
· rw [maximum?_eq_some_iff'] at m
obtain ⟨m, le⟩ := m
rw [Nat.max_def]
constructor
· split
· exact mem_cons_of_mem a m
· exact mem_cons_self a l
· intro b m
cases List.mem_cons.1 m with
| inl => split <;> omega
| inr h =>
specialize le b h
split <;> omega
end List


@@ -0,0 +1,387 @@
/-
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro
-/
prelude
import Init.Data.List.TakeDrop
import Init.Data.Nat.Lemmas
/-!
# Lemmas about `List.range` and `List.enum`
-/
namespace List
open Nat
/-! ## Ranges and enumeration -/
/-! ### range' -/
theorem range'_succ (s n step) : range' s (n + 1) step = s :: range' (s + step) n step := by
simp [range', Nat.add_succ, Nat.mul_succ]
@[simp] theorem range'_one {s step : Nat} : range' s 1 step = [s] := rfl
@[simp] theorem length_range' (s step) : ∀ n : Nat, length (range' s n step) = n
| 0 => rfl
| _ + 1 => congrArg succ (length_range' _ _ _)
@[simp] theorem range'_eq_nil : range' s n step = [] ↔ n = 0 := by
rw [← length_eq_zero, length_range']
theorem mem_range' : ∀ {n}, m ∈ range' s n step ↔ ∃ i < n, m = s + step * i
| 0 => by simp [range', Nat.not_lt_zero]
| n + 1 => by
have h (i) : i ≤ n ↔ i = 0 ∨ ∃ j, i = succ j ∧ j < n := by
cases i <;> simp [Nat.succ_le, Nat.succ_inj']
simp [range', mem_range', Nat.lt_succ, h]; simp only [← exists_and_right, and_assoc]
rw [exists_comm]; simp [Nat.mul_succ, Nat.add_assoc, Nat.add_comm]
@[simp] theorem mem_range'_1 : m ∈ range' s n ↔ s ≤ m ∧ m < s + n := by
simp [mem_range']; exact
⟨fun ⟨i, h, e⟩ => e ▸ ⟨Nat.le_add_right .., Nat.add_lt_add_left h _⟩,
fun ⟨h₁, h₂⟩ => ⟨m - s, Nat.sub_lt_left_of_lt_add h₁ h₂, (Nat.add_sub_cancel' h₁).symm⟩⟩
theorem pairwise_lt_range' s n (step := 1) (pos : 0 < step := by simp) :
Pairwise (· < ·) (range' s n step) :=
match s, n, step, pos with
| _, 0, _, _ => Pairwise.nil
| s, n + 1, step, pos => by
simp only [range'_succ, pairwise_cons]
constructor
· intros n m
rw [mem_range'] at m
omega
· exact pairwise_lt_range' (s + step) n step pos
theorem pairwise_le_range' s n (step := 1) :
Pairwise (· ·) (range' s n step) :=
match s, n, step with
| _, 0, _ => Pairwise.nil
| s, n + 1, step => by
simp only [range'_succ, pairwise_cons]
constructor
· intros n m
rw [mem_range'] at m
omega
· exact pairwise_le_range' (s + step) n step
theorem nodup_range' (s n : Nat) (step := 1) (h : 0 < step := by simp) : Nodup (range' s n step) :=
(pairwise_lt_range' s n step h).imp Nat.ne_of_lt
@[simp]
theorem map_add_range' (a) : ∀ s n step, map (a + ·) (range' s n step) = range' (a + s) n step
| _, 0, _ => rfl
| s, n + 1, step => by simp [range', map_add_range' _ (s + step) n step, Nat.add_assoc]
theorem map_sub_range' (a s n : Nat) (h : a ≤ s) :
map (· - a) (range' s n step) = range' (s - a) n step := by
conv => lhs; rw [← Nat.add_sub_cancel' h]
rw [← map_add_range', map_map, (?_ : _ ∘ _ = _), map_id]
funext x; apply Nat.add_sub_cancel_left
theorem range'_append : ∀ s m n step : Nat,
range' s m step ++ range' (s + step * m) n step = range' s (n + m) step
| s, 0, n, step => rfl
| s, m + 1, n, step => by
simpa [range', Nat.mul_succ, Nat.add_assoc, Nat.add_comm]
using range'_append (s + step) m n step
@[simp] theorem range'_append_1 (s m n : Nat) :
range' s m ++ range' (s + m) n = range' s (n + m) := by simpa using range'_append s m n 1
theorem range'_sublist_right {s m n : Nat} : range' s m step <+ range' s n step ↔ m ≤ n :=
⟨fun h => by simpa only [length_range'] using h.length_le,
fun h => by rw [← Nat.sub_add_cancel h, ← range'_append]; apply sublist_append_left⟩
theorem range'_subset_right {s m n : Nat} (step0 : 0 < step) :
range' s m step ⊆ range' s n step ↔ m ≤ n := by
refine ⟨fun h => Nat.le_of_not_lt fun hn => ?_, fun h => (range'_sublist_right.2 h).subset⟩
have ⟨i, h', e⟩ := mem_range'.1 <| h <| mem_range'.2 ⟨_, hn, rfl⟩
exact Nat.ne_of_gt h' (Nat.eq_of_mul_eq_mul_left step0 (Nat.add_left_cancel e))
theorem range'_subset_right_1 {s m n : Nat} : range' s m ⊆ range' s n ↔ m ≤ n :=
range'_subset_right (by decide)
theorem getElem?_range' (s step) :
∀ {m n : Nat}, m < n → (range' s n step)[m]? = some (s + step * m)
| 0, n + 1, _ => by simp [range'_succ]
| m + 1, n + 1, h => by
simp only [range'_succ, getElem?_cons_succ]
exact (getElem?_range' (s + step) step (Nat.lt_of_add_lt_add_right h)).trans <| by
simp [Nat.mul_succ, Nat.add_assoc, Nat.add_comm]
@[simp] theorem getElem_range' {n m step} (i) (H : i < (range' n m step).length) :
(range' n m step)[i] = n + step * i :=
(getElem?_eq_some.1 <| getElem?_range' n step (by simpa using H)).2
theorem range'_concat (s n : Nat) : range' s (n + 1) step = range' s n step ++ [s + step * n] := by
rw [Nat.add_comm n 1]; exact (range'_append s n 1 step).symm
theorem range'_1_concat (s n : Nat) : range' s (n + 1) = range' s n ++ [s + n] := by
simp [range'_concat]
/-! ### range -/
theorem range_loop_range' : ∀ s n : Nat, range.loop s (range' s n) = range' 0 (n + s)
| 0, n => rfl
| s + 1, n => by rw [← Nat.add_assoc, Nat.add_right_comm n s 1]; exact range_loop_range' s (n + 1)
theorem range_eq_range' (n : Nat) : range n = range' 0 n :=
(range_loop_range' n 0).trans <| by rw [Nat.zero_add]
theorem range_succ_eq_map (n : Nat) : range (n + 1) = 0 :: map succ (range n) := by
rw [range_eq_range', range_eq_range', range', Nat.add_comm, map_add_range']
congr; exact funext (Nat.add_comm 1)
theorem reverse_range' : ∀ s n : Nat, reverse (range' s n) = map (s + n - 1 - ·) (range n)
| s, 0 => rfl
| s, n + 1 => by
rw [range'_1_concat, reverse_append, range_succ_eq_map,
show s + (n + 1) - 1 = s + n from rfl, map, map_map]
simp [reverse_range', Nat.sub_right_comm, Nat.sub_sub]
theorem range'_eq_map_range (s n : Nat) : range' s n = map (s + ·) (range n) := by
rw [range_eq_range', map_add_range']; rfl
@[simp] theorem length_range (n : Nat) : length (range n) = n := by
simp only [range_eq_range', length_range']
@[simp] theorem range_eq_nil {n : Nat} : range n = [] ↔ n = 0 := by
rw [← length_eq_zero, length_range]
@[simp]
theorem range_sublist {m n : Nat} : range m <+ range n ↔ m ≤ n := by
simp only [range_eq_range', range'_sublist_right]
@[simp]
theorem range_subset {m n : Nat} : range m ⊆ range n ↔ m ≤ n := by
simp only [range_eq_range', range'_subset_right, lt_succ_self]
@[simp]
theorem mem_range {m n : Nat} : m ∈ range n ↔ m < n := by
simp only [range_eq_range', mem_range'_1, Nat.zero_le, true_and, Nat.zero_add]
theorem not_mem_range_self {n : Nat} : n ∉ range n := by simp
theorem self_mem_range_succ (n : Nat) : n ∈ range (n + 1) := by simp
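The membership lemmas above can be sanity-checked on small inputs (an editorial illustration, not part of the patch): `range n` is `[0, 1, ..., n - 1]`.

```lean
#guard List.range 4 == [0, 1, 2, 3]
-- `not_mem_range_self`: `n ∉ range n`; `self_mem_range_succ`: `n ∈ range (n + 1)`
#guard !(List.range 4).contains 4
#guard (List.range 5).contains 4
```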
theorem pairwise_lt_range (n : Nat) : Pairwise (· < ·) (range n) := by
simp (config := {decide := true}) only [range_eq_range', pairwise_lt_range']
theorem pairwise_le_range (n : Nat) : Pairwise (· ≤ ·) (range n) :=
Pairwise.imp Nat.le_of_lt (pairwise_lt_range _)
theorem getElem?_range {m n : Nat} (h : m < n) : (range n)[m]? = some m := by
simp [range_eq_range', getElem?_range' _ _ h]
@[simp] theorem getElem_range {n : Nat} (m) (h : m < (range n).length) : (range n)[m] = m := by
simp [range_eq_range']
theorem range_succ (n : Nat) : range (succ n) = range n ++ [n] := by
simp only [range_eq_range', range'_1_concat, Nat.zero_add]
theorem range_add (a b : Nat) : range (a + b) = range a ++ (range b).map (a + ·) := by
rw [← range'_eq_map_range]
simpa [range_eq_range', Nat.add_comm] using (range'_append_1 0 a b).symm
theorem take_range (m n : Nat) : take m (range n) = range (min m n) := by
apply List.ext_getElem
· simp
· simp (config := { contextual := true }) [← getElem_take, Nat.lt_min]
theorem nodup_range (n : Nat) : Nodup (range n) := by
simp (config := {decide := true}) only [range_eq_range', nodup_range']
/-! ### iota -/
theorem iota_eq_reverse_range' : ∀ n : Nat, iota n = reverse (range' 1 n)
| 0 => rfl
| n + 1 => by simp [iota, range'_concat, iota_eq_reverse_range' n, reverse_append, Nat.add_comm]
@[simp] theorem length_iota (n : Nat) : length (iota n) = n := by simp [iota_eq_reverse_range']
@[simp]
theorem mem_iota {m n : Nat} : m ∈ iota n ↔ 1 ≤ m ∧ m ≤ n := by
simp [iota_eq_reverse_range', Nat.add_comm, Nat.lt_succ]
theorem pairwise_gt_iota (n : Nat) : Pairwise (· > ·) (iota n) := by
simpa only [iota_eq_reverse_range', pairwise_reverse] using pairwise_lt_range' 1 n
theorem nodup_iota (n : Nat) : Nodup (iota n) :=
(pairwise_gt_iota n).imp Nat.ne_of_gt
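The `iota` lemmas above describe the descending count-down list; a concrete check (illustration only, not part of the patch):

```lean
#guard List.iota 4 == [4, 3, 2, 1]
-- consistent with `iota_eq_reverse_range'`: `iota n` is the reverse of `range' 1 n`
#guard List.iota 4 == (List.range' 1 4).reverse
```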
/-! ### enumFrom -/
@[simp]
theorem enumFrom_singleton (x : α) (n : Nat) : enumFrom n [x] = [(n, x)] :=
rfl
@[simp]
theorem enumFrom_eq_nil {n : Nat} {l : List α} : List.enumFrom n l = [] ↔ l = [] := by
cases l <;> simp
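For orientation in the `enumFrom` section (an editorial illustration, not part of the patch): `enumFrom n l` pairs each element of `l` with its index, counting from `n`.

```lean
#guard List.enumFrom 5 [10, 20, 30] == [(5, 10), (6, 20), (7, 30)]
```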
@[simp] theorem enumFrom_length : ∀ {n} {l : List α}, (enumFrom n l).length = l.length
| _, [] => rfl
| _, _ :: _ => congrArg Nat.succ enumFrom_length
@[simp]
theorem getElem?_enumFrom :
∀ n (l : List α) m, (enumFrom n l)[m]? = l[m]?.map fun a => (n + m, a)
| n, [], m => rfl
| n, a :: l, 0 => by simp
| n, a :: l, m + 1 => by
simp only [enumFrom_cons, getElem?_cons_succ]
exact (getElem?_enumFrom (n + 1) l m).trans <| by rw [Nat.add_right_comm]; rfl
@[simp]
theorem getElem_enumFrom (l : List α) (n) (i : Nat) (h : i < (l.enumFrom n).length) :
(l.enumFrom n)[i] = (n + i, l[i]'(by simpa [enumFrom_length] using h)) := by
simp only [enumFrom_length] at h
rw [getElem_eq_getElem?]
simp only [getElem?_enumFrom, getElem?_eq_getElem h]
simp
theorem mk_add_mem_enumFrom_iff_getElem? {n i : Nat} {x : α} {l : List α} :
(n + i, x) ∈ enumFrom n l ↔ l[i]? = some x := by
simp [mem_iff_get?]
theorem mk_mem_enumFrom_iff_le_and_getElem?_sub {n i : Nat} {x : α} {l : List α} :
(i, x) ∈ enumFrom n l ↔ n ≤ i ∧ l[i - n]? = x := by
if h : n ≤ i then
rcases Nat.exists_eq_add_of_le h with ⟨i, rfl⟩
simp [mk_add_mem_enumFrom_iff_getElem?, Nat.add_sub_cancel_left]
else
have : ∀ k, n + k ≠ i := by rintro k rfl; simp at h
simp [h, mem_iff_get?, this]
theorem le_fst_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) :
n ≤ x.1 :=
(mk_mem_enumFrom_iff_le_and_getElem?_sub.1 h).1
theorem fst_lt_add_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) :
x.1 < n + length l := by
rcases mem_iff_get.1 h with ⟨i, rfl⟩
simpa using i.isLt
theorem map_enumFrom (f : α → β) (n : Nat) (l : List α) :
map (Prod.map id f) (enumFrom n l) = enumFrom n (map f l) := by
induction l generalizing n <;> simp_all
@[simp]
theorem enumFrom_map_fst (n) :
∀ (l : List α), map Prod.fst (enumFrom n l) = range' n l.length
| [] => rfl
| _ :: _ => congrArg (cons _) (enumFrom_map_fst _ _)
@[simp]
theorem enumFrom_map_snd : ∀ (n) (l : List α), map Prod.snd (enumFrom n l) = l
| _, [] => rfl
| _, _ :: _ => congrArg (cons _) (enumFrom_map_snd _ _)
theorem snd_mem_of_mem_enumFrom {x : Nat × α} {n : Nat} {l : List α} (h : x ∈ enumFrom n l) : x.2 ∈ l :=
enumFrom_map_snd n l ▸ mem_map_of_mem _ h
theorem mem_enumFrom {x : α} {i j : Nat} (xs : List α) (h : (i, x) ∈ xs.enumFrom j) :
j ≤ i ∧ i < j + xs.length ∧ x ∈ xs :=
⟨le_fst_of_mem_enumFrom h, fst_lt_add_of_mem_enumFrom h, snd_mem_of_mem_enumFrom h⟩
theorem map_fst_add_enumFrom_eq_enumFrom (l : List α) (n k : Nat) :
map (Prod.map (· + n) id) (enumFrom k l) = enumFrom (n + k) l :=
ext_getElem? fun i => by simp [(· ∘ ·), Nat.add_comm, Nat.add_left_comm]; rfl
theorem map_fst_add_enum_eq_enumFrom (l : List α) (n : Nat) :
map (Prod.map (· + n) id) (enum l) = enumFrom n l :=
map_fst_add_enumFrom_eq_enumFrom l _ _
theorem enumFrom_cons' (n : Nat) (x : α) (xs : List α) :
enumFrom n (x :: xs) = (n, x) :: (enumFrom n xs).map (Prod.map (· + 1) id) := by
rw [enumFrom_cons, Nat.add_comm, map_fst_add_enumFrom_eq_enumFrom]
theorem enumFrom_map (n : Nat) (l : List α) (f : α → β) :
enumFrom n (l.map f) = (enumFrom n l).map (Prod.map id f) := by
induction l with
| nil => rfl
| cons hd tl IH =>
rw [map_cons, enumFrom_cons', enumFrom_cons', map_cons, map_map, IH, map_map]
rfl
theorem enumFrom_append (xs ys : List α) (n : Nat) :
enumFrom n (xs ++ ys) = enumFrom n xs ++ enumFrom (n + xs.length) ys := by
induction xs generalizing ys n with
| nil => simp
| cons x xs IH =>
rw [cons_append, enumFrom_cons, IH, cons_append, enumFrom_cons, length, Nat.add_right_comm,
Nat.add_assoc]
theorem enumFrom_eq_zip_range' (l : List α) {n : Nat} : l.enumFrom n = (range' n l.length).zip l :=
zip_of_prod (enumFrom_map_fst _ _) (enumFrom_map_snd _ _)
@[simp]
theorem unzip_enumFrom_eq_prod (l : List α) {n : Nat} :
(l.enumFrom n).unzip = (range' n l.length, l) := by
simp only [enumFrom_eq_zip_range', unzip_zip, length_range']
/-! ### enum -/
theorem enum_cons : (a::as).enum = (0, a) :: as.enumFrom 1 := rfl
theorem enum_cons' (x : α) (xs : List α) :
enum (x :: xs) = (0, x) :: (enum xs).map (Prod.map (· + 1) id) :=
enumFrom_cons' _ _ _
@[simp]
theorem enum_eq_nil {l : List α} : List.enum l = [] ↔ l = [] := enumFrom_eq_nil
@[simp] theorem enum_singleton (x : α) : enum [x] = [(0, x)] := rfl
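A concrete instance of the `enum` lemmas (illustration only, not part of the patch): `enum` pairs each element with its zero-based index.

```lean
#guard ["a", "b", "c"].enum == [(0, "a"), (1, "b"), (2, "c")]
-- consistent with `enum_cons`: the head gets index 0, the tail is enumerated from 1
#guard ("a" :: ["b"]).enum == (0, "a") :: ["b"].enumFrom 1
```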
@[simp] theorem enum_length : (enum l).length = l.length :=
enumFrom_length
@[simp]
theorem getElem?_enum (l : List α) (n : Nat) : (enum l)[n]? = l[n]?.map fun a => (n, a) := by
rw [enum, getElem?_enumFrom, Nat.zero_add]
@[simp]
theorem getElem_enum (l : List α) (i : Nat) (h : i < l.enum.length) :
l.enum[i] = (i, l[i]'(by simpa [enum_length] using h)) := by
simp [enum]
theorem mk_mem_enum_iff_getElem? {i : Nat} {x : α} {l : List α} : (i, x) ∈ enum l ↔ l[i]? = x := by
simp [enum, mk_mem_enumFrom_iff_le_and_getElem?_sub]
theorem mem_enum_iff_getElem? {x : Nat × α} {l : List α} : x ∈ enum l ↔ l[x.1]? = some x.2 :=
mk_mem_enum_iff_getElem?
theorem fst_lt_of_mem_enum {x : Nat × α} {l : List α} (h : x ∈ enum l) : x.1 < length l := by
simpa using fst_lt_add_of_mem_enumFrom h
theorem snd_mem_of_mem_enum {x : Nat × α} {l : List α} (h : x ∈ enum l) : x.2 ∈ l :=
snd_mem_of_mem_enumFrom h
theorem map_enum (f : α → β) (l : List α) : map (Prod.map id f) (enum l) = enum (map f l) :=
map_enumFrom f 0 l
@[simp] theorem enum_map_fst (l : List α) : map Prod.fst (enum l) = range l.length := by
simp only [enum, enumFrom_map_fst, range_eq_range']
@[simp]
theorem enum_map_snd (l : List α) : map Prod.snd (enum l) = l :=
enumFrom_map_snd _ _
theorem enum_map (l : List α) (f : α → β) : (l.map f).enum = l.enum.map (Prod.map id f) :=
enumFrom_map _ _ _
theorem enum_append (xs ys : List α) : enum (xs ++ ys) = enum xs ++ enumFrom xs.length ys := by
simp [enum, enumFrom_append]
theorem enum_eq_zip_range (l : List α) : l.enum = (range l.length).zip l :=
zip_of_prod (enum_map_fst _) (enum_map_snd _)
@[simp]
theorem unzip_enum_eq_prod (l : List α) : l.enum.unzip = (range l.length, l) := by
simp only [enum_eq_zip_range, unzip_zip, length_range]
end List


@@ -32,46 +32,6 @@ theorem length_take_le' (n) (l : List α) : length (take n l) ≤ l.length :=
theorem length_take_of_le (h : n ≤ length l) : length (take n l) = n := by simp [Nat.min_eq_left h]
theorem take_take : ∀ (n m) (l : List α), take n (take m l) = take (min n m) l
| n, 0, l => by rw [Nat.min_zero, take_zero, take_nil]
| 0, m, l => by rw [Nat.zero_min, take_zero, take_zero]
| succ n, succ m, nil => by simp only [take_nil]
| succ n, succ m, a :: l => by
simp only [take, succ_min_succ, take_take n m l]
@[simp] theorem take_replicate (a : α) : ∀ n m : Nat, take n (replicate m a) = replicate (min n m) a
| n, 0 => by simp [Nat.min_zero]
| 0, m => by simp [Nat.zero_min]
| succ n, succ m => by simp [replicate_succ, succ_min_succ, take_replicate]
@[simp] theorem drop_replicate (a : α) : ∀ n m : Nat, drop n (replicate m a) = replicate (m - n) a
| n, 0 => by simp
| 0, m => by simp
| succ n, succ m => by simp [replicate_succ, succ_sub_succ, drop_replicate]
/-- Taking the first `n` elements in `l₁ ++ l₂` is the same as appending the first `n` elements
of `l₁` to the first `n - l₁.length` elements of `l₂`. -/
theorem take_append_eq_append_take {l₁ l₂ : List α} {n : Nat} :
take n (l₁ ++ l₂) = take n l₁ ++ take (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases n
· simp [*]
· simp only [cons_append, take_cons_succ, length_cons, succ_eq_add_one, cons.injEq,
append_cancel_left_eq, true_and, *]
congr 1
omega
theorem take_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).take n = l₁.take n := by
simp [take_append_eq_append_take, Nat.sub_eq_zero_of_le h]
/-- Taking the first `l₁.length + i` elements in `l₁ ++ l₂` is the same as appending the first
`i` elements of `l₂` to `l₁`. -/
theorem take_append {l₁ l₂ : List α} (i : Nat) :
take (l₁.length + i) (l₁ ++ l₂) = l₁ ++ take i l₂ := by
rw [take_append_eq_append_take, take_all_of_le (Nat.le_add_right _ _), Nat.add_sub_cancel_left]
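The two `take`/append lemmas above can be checked on a small example (illustration only, not part of the patch):

```lean
-- `take_append_eq_append_take`: take 4 of [1,2] ++ [3,4,5] splits as take 4 / take (4 - 2)
#guard ([1, 2] ++ [3, 4, 5]).take 4 == [1, 2].take 4 ++ [3, 4, 5].take 2
-- `take_append`: taking `l₁.length + i` keeps all of `l₁` plus `i` elements of `l₂`
#guard ([1, 2] ++ [3, 4, 5]).take (2 + 1) == [1, 2] ++ [3, 4, 5].take 1
```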
/-- The `i`-th element of a list coincides with the `i`-th element of any of its prefixes of
length `> i`. Version designed to rewrite from the big list to the small list. -/
theorem getElem_take (L : List α) {i j : Nat} (hi : i < L.length) (hj : i < j) :
@@ -157,6 +117,54 @@ theorem getLast_take {l : List α} (h : l.take n ≠ []) :
· rw [getElem?_eq_none (by omega), getLast_eq_getElem]
simp
theorem take_take : ∀ (n m) (l : List α), take n (take m l) = take (min n m) l
| n, 0, l => by rw [Nat.min_zero, take_zero, take_nil]
| 0, m, l => by rw [Nat.zero_min, take_zero, take_zero]
| succ n, succ m, nil => by simp only [take_nil]
| succ n, succ m, a :: l => by
simp only [take, succ_min_succ, take_take n m l]
theorem take_set_of_lt (a : α) {n m : Nat} (l : List α) (h : m < n) :
(l.set n a).take m = l.take m :=
List.ext_getElem? fun i => by
rw [getElem?_take_eq_if, getElem?_take_eq_if]
split
· next h' => rw [getElem?_set_ne (by omega)]
· rfl
@[simp] theorem take_replicate (a : α) : ∀ n m : Nat, take n (replicate m a) = replicate (min n m) a
| n, 0 => by simp [Nat.min_zero]
| 0, m => by simp [Nat.zero_min]
| succ n, succ m => by simp [replicate_succ, succ_min_succ, take_replicate]
@[simp] theorem drop_replicate (a : α) : ∀ n m : Nat, drop n (replicate m a) = replicate (m - n) a
| n, 0 => by simp
| 0, m => by simp
| succ n, succ m => by simp [replicate_succ, succ_sub_succ, drop_replicate]
/-- Taking the first `n` elements in `l₁ ++ l₂` is the same as appending the first `n` elements
of `l₁` to the first `n - l₁.length` elements of `l₂`. -/
theorem take_append_eq_append_take {l₁ l₂ : List α} {n : Nat} :
take n (l₁ ++ l₂) = take n l₁ ++ take (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases n
· simp [*]
· simp only [cons_append, take_succ_cons, length_cons, succ_eq_add_one, cons.injEq,
append_cancel_left_eq, true_and, *]
congr 1
omega
theorem take_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).take n = l₁.take n := by
simp [take_append_eq_append_take, Nat.sub_eq_zero_of_le h]
/-- Taking the first `l₁.length + i` elements in `l₁ ++ l₂` is the same as appending the first
`i` elements of `l₂` to `l₁`. -/
theorem take_append {l₁ l₂ : List α} (i : Nat) :
take (l₁.length + i) (l₁ ++ l₂) = l₁ ++ take i l₂ := by
rw [take_append_eq_append_take, take_of_length_le (Nat.le_add_right _ _), Nat.add_sub_cancel_left]
@[simp]
theorem take_eq_take :
∀ {l : List α} {m n : Nat}, l.take m = l.take n ↔ min m l.length = min n l.length
@@ -164,13 +172,13 @@ theorem take_eq_take :
| _ :: xs, 0, 0 => by simp
| x :: xs, m + 1, 0 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, 0, n + 1 => by simp [Nat.zero_min, succ_min_succ]
| x :: xs, m + 1, n + 1 => by simp [succ_min_succ, take_eq_take]; omega
| x :: xs, m + 1, n + 1 => by simp [succ_min_succ, take_eq_take]
theorem take_add (l : List α) (m n : Nat) : l.take (m + n) = l.take m ++ (l.drop m).take n := by
suffices take (m + n) (take m l ++ drop m l) = take m l ++ take n (drop m l) by
rw [take_append_drop] at this
assumption
rw [take_append_eq_append_take, take_all_of_le, append_right_inj]
rw [take_append_eq_append_take, take_of_length_le, append_right_inj]
· simp only [take_eq_take, length_take, length_drop]
omega
apply Nat.le_trans (m := m)
@@ -178,8 +186,8 @@ theorem take_add (l : List α) (m n : Nat) : l.take (m + n) = l.take m ++ (l.dro
· apply Nat.le_add_right
theorem dropLast_take {n : Nat} {l : List α} (h : n < l.length) :
(l.take n).dropLast = l.take n.pred := by
simp only [dropLast_eq_take, length_take, Nat.le_of_lt h, take_take, pred_le, Nat.min_eq_left]
(l.take n).dropLast = l.take (n - 1) := by
simp only [dropLast_eq_take, length_take, Nat.le_of_lt h, Nat.min_eq_left, take_take, sub_le]
theorem map_eq_append_split {f : α → β} {l : List α} {s₁ s₂ : List β}
(h : map f l = s₁ ++ s₂) : ∃ l₁ l₂, l = l₁ ++ l₂ ∧ map f l₁ = s₁ ∧ map f l₂ = s₂ := by
@@ -193,42 +201,6 @@ theorem map_eq_append_split {f : α → β} {l : List α} {s₁ s₂ : List β}
/-! ### drop -/
theorem drop_length_cons {l : List α} (h : l ≠ []) (a : α) :
(a :: l).drop l.length = [l.getLast h] := by
induction l generalizing a with
| nil =>
cases h rfl
| cons y l ih =>
simp only [drop, length]
by_cases h₁ : l = []
· simp [h₁]
rw [getLast_cons' _ h₁]
exact ih h₁ y
/-- Dropping the elements up to `n` in `l₁ ++ l₂` is the same as dropping the elements up to `n`
in `l₁`, dropping the elements up to `n - l₁.length` in `l₂`, and appending them. -/
theorem drop_append_eq_append_drop {l₁ l₂ : List α} {n : Nat} :
drop n (l₁ ++ l₂) = drop n l₁ ++ drop (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases n
· simp [*]
· simp only [cons_append, drop_succ_cons, length_cons, succ_eq_add_one, append_cancel_left_eq, *]
congr 1
omega
theorem drop_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).drop n = l₁.drop n ++ l₂ := by
simp [drop_append_eq_append_drop, Nat.sub_eq_zero_of_le h]
/-- Dropping the elements up to `l₁.length + i` in `l₁ + l₂` is the same as dropping the elements
up to `i` in `l₂`. -/
@[simp]
theorem drop_append {l₁ l₂ : List α} (i : Nat) : drop (l₁.length + i) (l₁ ++ l₂) = drop i l₂ := by
rw [drop_append_eq_append_drop, drop_eq_nil_of_le] <;>
simp [Nat.add_sub_cancel_left, Nat.le_add_right]
theorem lt_length_drop (L : List α) {i j : Nat} (h : i + j < L.length) : j < (L.drop i).length := by
have A : i < L.length := Nat.lt_of_le_of_lt (Nat.le.intro rfl) h
rw [(take_append_drop i L).symm] at h
@@ -307,6 +279,41 @@ theorem getLast_drop {l : List α} (h : l.drop n ≠ []) :
simp only [← getLast?_eq_getLast, getLast?_drop, ite_eq_right_iff]
omega
theorem drop_length_cons {l : List α} (h : l ≠ []) (a : α) :
(a :: l).drop l.length = [l.getLast h] := by
induction l generalizing a with
| nil =>
cases h rfl
| cons y l ih =>
simp only [drop, length]
by_cases h₁ : l = []
· simp [h₁]
rw [getLast_cons h₁]
exact ih h₁ y
/-- Dropping the elements up to `n` in `l₁ ++ l₂` is the same as dropping the elements up to `n`
in `l₁`, dropping the elements up to `n - l₁.length` in `l₂`, and appending them. -/
theorem drop_append_eq_append_drop {l₁ l₂ : List α} {n : Nat} :
drop n (l₁ ++ l₂) = drop n l₁ ++ drop (n - l₁.length) l₂ := by
induction l₁ generalizing n
· simp
· cases n
· simp [*]
· simp only [cons_append, drop_succ_cons, length_cons, succ_eq_add_one, append_cancel_left_eq, *]
congr 1
omega
theorem drop_append_of_le_length {l₁ l₂ : List α} {n : Nat} (h : n ≤ l₁.length) :
(l₁ ++ l₂).drop n = l₁.drop n ++ l₂ := by
simp [drop_append_eq_append_drop, Nat.sub_eq_zero_of_le h]
/-- Dropping the elements up to `l₁.length + i` in `l₁ + l₂` is the same as dropping the elements
up to `i` in `l₂`. -/
@[simp]
theorem drop_append {l₁ l₂ : List α} (i : Nat) : drop (l₁.length + i) (l₁ ++ l₂) = drop i l₂ := by
rw [drop_append_eq_append_drop, drop_eq_nil_of_le] <;>
simp [Nat.add_sub_cancel_left, Nat.le_add_right]
theorem set_eq_take_append_cons_drop {l : List α} {n : Nat} {a : α} :
l.set n a = if n < l.length then l.take n ++ a :: l.drop (n + 1) else l := by
split <;> rename_i h
@@ -316,7 +323,7 @@ theorem set_eq_take_append_cons_drop {l : List α} {n : Nat} {a : α} :
getElem?_take h']
· by_cases h'' : m = n
· subst h''
rw [getElem?_set_eq (by simp; omega), getElem?_append_right, length_take,
rw [getElem?_set_eq _, getElem?_append_right, length_take,
Nat.min_eq_left (by omega), Nat.sub_self, getElem?_cons_zero]
rw [length_take]
exact Nat.min_le_left m l.length
@@ -352,7 +359,7 @@ theorem drop_take : ∀ (m n : Nat) (l : List α), drop n (take m l) = take (m -
congr 1
omega
theorem take_reverse {α} {xs : List α} (n : Nat) (h : n ≤ xs.length) :
theorem take_reverse {α} {xs : List α} {n : Nat} (h : n ≤ xs.length) :
xs.reverse.take n = (xs.drop (xs.length - n)).reverse := by
induction xs generalizing n <;>
simp only [reverse_cons, drop, reverse_nil, Nat.zero_sub, length, take_nil]
@@ -360,7 +367,7 @@ theorem take_reverse {α} {xs : List α} (n : Nat) (h : n ≤ xs.length) :
cases Nat.lt_or_eq_of_le h with
| inl h' =>
have h' := Nat.le_of_succ_le_succ h'
rw [take_append_of_le_length, xs_ih _ h']
rw [take_append_of_le_length, xs_ih h']
rw [show xs_tl.length + 1 - n = succ (xs_tl.length - n) from _, drop]
· rwa [succ_eq_add_one, Nat.sub_add_comm]
· rwa [length_reverse]
@@ -374,6 +381,19 @@ theorem take_reverse {α} {xs : List α} (n : Nat) (h : n ≤ xs.length) :
@[deprecated (since := "2024-06-15")] abbrev reverse_take := @take_reverse
theorem drop_reverse {α} {xs : List α} {n : Nat} (h : n ≤ xs.length) :
xs.reverse.drop n = (xs.take (xs.length - n)).reverse := by
conv =>
rhs
rw [← reverse_reverse xs]
rw [← reverse_reverse xs] at h
generalize xs.reverse = xs' at h
rw [take_reverse]
· simp only [length_reverse, reverse_reverse] at *
congr
omega
· simp only [length_reverse, sub_le]
/-! ### rotateLeft -/
@[simp] theorem rotateLeft_replicate (n) (a : α) : rotateLeft (replicate m a) n = replicate m a := by
@@ -407,12 +427,43 @@ theorem take_reverse {α} {xs : List α} (n : Nat) (h : n ≤ xs.length) :
induction l₁ generalizing l₂ <;> cases l₂ <;>
simp_all [succ_min_succ, Nat.zero_min, Nat.min_zero]
theorem lt_length_left_of_zipWith {f : α → β → γ} {i : Nat} {l : List α} {l' : List β}
(h : i < (zipWith f l l').length) : i < l.length := by rw [length_zipWith] at h; omega
theorem lt_length_right_of_zipWith {f : α → β → γ} {i : Nat} {l : List α} {l' : List β}
(h : i < (zipWith f l l').length) : i < l'.length := by rw [length_zipWith] at h; omega
@[simp]
theorem getElem_zipWith {f : α → β → γ} {l : List α} {l' : List β}
{i : Nat} {h : i < (zipWith f l l').length} :
(zipWith f l l')[i] =
f (l[i]'(lt_length_left_of_zipWith h))
(l'[i]'(lt_length_right_of_zipWith h)) := by
rw [← Option.some_inj, ← getElem?_eq_getElem, getElem?_zipWith_eq_some]
exact
⟨l[i]'(lt_length_left_of_zipWith h), l'[i]'(lt_length_right_of_zipWith h),
by rw [getElem?_eq_getElem], by rw [getElem?_eq_getElem]; exact rfl, rfl⟩
theorem zipWith_eq_zipWith_take_min : ∀ (l₁ : List α) (l₂ : List β),
zipWith f l₁ l₂ = zipWith f (l₁.take (min l₁.length l₂.length)) (l₂.take (min l₁.length l₂.length))
| [], _ => by simp
| _, [] => by simp
| a :: l₁, b :: l₂ => by simp [succ_min_succ, zipWith_eq_zipWith_take_min l₁ l₂]
theorem reverse_zipWith (h : l.length = l'.length) :
(zipWith f l l').reverse = zipWith f l.reverse l'.reverse := by
induction l generalizing l' with
| nil => simp
| cons hd tl hl =>
cases l' with
| nil => simp
| cons hd' tl' =>
simp only [Nat.add_right_cancel_iff, length] at h
have : tl.reverse.length = tl'.reverse.length := by simp [h]
simp [hl h, zipWith_append _ _ _ _ _ this]
@[deprecated reverse_zipWith (since := "2024-07-28")] abbrev zipWith_distrib_reverse := @reverse_zipWith
@[simp] theorem zipWith_replicate {a : α} {b : β} {m n : Nat} :
zipWith f (replicate m a) (replicate n b) = replicate (min m n) (f a b) := by
rw [zipWith_eq_zipWith_take_min]
@@ -424,6 +475,20 @@ theorem zipWith_eq_zipWith_take_min : ∀ (l₁ : List α) (l₂ : List β),
length (zip l₁ l₂) = min (length l₁) (length l₂) := by
simp [zip]
theorem lt_length_left_of_zip {i : Nat} {l : List α} {l' : List β} (h : i < (zip l l').length) :
i < l.length :=
lt_length_left_of_zipWith h
theorem lt_length_right_of_zip {i : Nat} {l : List α} {l' : List β} (h : i < (zip l l').length) :
i < l'.length :=
lt_length_right_of_zipWith h
@[simp]
theorem getElem_zip {l : List α} {l' : List β} {i : Nat} {h : i < (zip l l').length} :
(zip l l')[i] =
(l[i]'(lt_length_left_of_zip h), l'[i]'(lt_length_right_of_zip h)) :=
getElem_zipWith (h := h)
theorem zip_eq_zip_take_min : ∀ (l₁ : List α) (l₂ : List β),
zip l₁ l₂ = zip (l₁.take (min l₁.length l₂.length)) (l₂.take (min l₁.length l₂.length))
| [], _ => by simp
@@ -435,70 +500,4 @@ theorem zip_eq_zip_take_min : ∀ (l₁ : List α) (l₂ : List β),
rw [zip_eq_zip_take_min]
simp
/-! ### minimum? -/
-- A specialization of `minimum?_eq_some_iff` to Nat.
theorem minimum?_eq_some_iff' {xs : List Nat} :
xs.minimum? = some a ↔ (a ∈ xs ∧ ∀ b ∈ xs, a ≤ b) :=
minimum?_eq_some_iff
(le_refl := Nat.le_refl)
(min_eq_or := fun _ _ => by omega)
(le_min_iff := fun _ _ _ => by omega)
-- This could be generalized,
-- but will first require further work on order typeclasses in the core repository.
theorem minimum?_cons' {a : Nat} {l : List Nat} :
(a :: l).minimum? = some (match l.minimum? with
| none => a
| some m => min a m) := by
rw [minimum?_eq_some_iff']
split <;> rename_i h m
· simp_all
· rw [minimum?_eq_some_iff'] at m
obtain ⟨m, le⟩ := m
rw [Nat.min_def]
constructor
· split
· exact mem_cons_self a l
· exact mem_cons_of_mem a m
· intro b m
cases List.mem_cons.1 m with
| inl => split <;> omega
| inr h =>
specialize le b h
split <;> omega
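The `minimum?` lemmas above can be checked concretely (illustration only, not part of the patch):

```lean
#guard [3, 1, 2].minimum? == some 1
#guard ([] : List Nat).minimum? == none
-- `minimum?_cons'` folds the head into the tail's minimum with `min`
#guard (2 :: [3, 1]).minimum? == some (min 2 1)
```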
/-! ### maximum? -/
-- A specialization of `maximum?_eq_some_iff` to Nat.
theorem maximum?_eq_some_iff' {xs : List Nat} :
xs.maximum? = some a ↔ (a ∈ xs ∧ ∀ b ∈ xs, b ≤ a) :=
maximum?_eq_some_iff
(le_refl := Nat.le_refl)
(max_eq_or := fun _ _ => by omega)
(max_le_iff := fun _ _ _ => by omega)
-- This could be generalized,
-- but will first require further work on order typeclasses in the core repository.
theorem maximum?_cons' {a : Nat} {l : List Nat} :
(a :: l).maximum? = some (match l.maximum? with
| none => a
| some m => max a m) := by
rw [maximum?_eq_some_iff']
split <;> rename_i h m
· simp_all
· rw [maximum?_eq_some_iff'] at m
obtain ⟨m, le⟩ := m
rw [Nat.max_def]
constructor
· split
· exact mem_cons_of_mem a m
· exact mem_cons_self a l
· intro b m
cases List.mem_cons.1 m with
| inl => split <;> omega
| inr h =>
specialize le b h
split <;> omega
end List


@@ -388,11 +388,11 @@ theorem le_or_eq_of_le_succ {m n : Nat} (h : m ≤ succ n) : m ≤ n ∨ m = suc
theorem le_or_eq_of_le_add_one {m n : Nat} (h : m ≤ n + 1) : m ≤ n ∨ m = n + 1 :=
le_or_eq_of_le_succ h
theorem le_add_right : ∀ (n k : Nat), n ≤ n + k
@[simp] theorem le_add_right : ∀ (n k : Nat), n ≤ n + k
| n, 0 => Nat.le_refl n
| n, k+1 => le_succ_of_le (le_add_right n k)
theorem le_add_left (n m : Nat): n ≤ m + n :=
@[simp] theorem le_add_left (n m : Nat): n ≤ m + n :=
Nat.add_comm n m ▸ le_add_right n m
theorem le_of_add_right_le {n m k : Nat} (h : n + k ≤ m) : n ≤ m :=
@@ -528,7 +528,7 @@ protected theorem le_of_add_le_add_right {a b c : Nat} : a + b ≤ c + b → a
rw [Nat.add_comm _ b, Nat.add_comm _ b]
apply Nat.le_of_add_le_add_left
protected theorem add_le_add_iff_right {n : Nat} : m + n ≤ k + n ↔ m ≤ k :=
@[simp] protected theorem add_le_add_iff_right {n : Nat} : m + n ≤ k + n ↔ m ≤ k :=
⟨Nat.le_of_add_le_add_right, fun h => Nat.add_le_add_right h _⟩
/-! ### le/lt -/


@@ -265,8 +265,8 @@ theorem testBit_two_pow_add_gt {i j : Nat} (j_lt_i : j < i) (x : Nat) :
have x_eq : x = y + 2^j := Nat.eq_add_of_sub_eq x_ge_j y_eq
simp only [Nat.two_pow_pos, x_eq, Nat.le_add_left, true_and, ite_true]
have y_lt_x : y < x := by
simp [x_eq]
exact Nat.lt_add_of_pos_right (Nat.two_pow_pos j)
simp only [x_eq, Nat.lt_add_right_iff_pos]
exact Nat.two_pow_pos j
simp only [hyp y y_lt_x]
if i_lt_j : i < j then
rw [Nat.add_comm _ (2^_), testBit_two_pow_add_gt i_lt_j]


@@ -46,6 +46,9 @@ theorem gcd_succ (x y : Nat) : gcd (succ x) y = gcd (y % succ x) (succ x) := by
theorem gcd_add_one (x y : Nat) : gcd (x + 1) y = gcd (y % (x + 1)) (x + 1) := by
rw [gcd]; rfl
theorem gcd_def (x y : Nat) : gcd x y = if x = 0 then y else gcd (y % x) x := by
cases x <;> simp [Nat.gcd_add_one]
@[simp] theorem gcd_one_left (n : Nat) : gcd 1 n = 1 := by
rw [gcd_succ, mod_one]
rfl
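The `gcd` unfolding lemmas above behave as expected on small inputs (illustration only, not part of the patch):

```lean
#guard Nat.gcd 12 18 == 6
#guard Nat.gcd 0 7 == 7   -- `gcd_def`: if x = 0 then y
#guard Nat.gcd 1 99 == 1  -- `gcd_one_left`
```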


@@ -19,6 +19,9 @@ and later these lemmas should be organised into other files more systematically.
-/
namespace Nat
attribute [simp] not_lt_zero
/-! ## add -/
protected theorem add_add_add_comm (a b c d : Nat) : (a + b) + (c + d) = (a + c) + (b + d) := by
@@ -36,13 +39,13 @@ protected theorem eq_zero_of_add_eq_zero_right (h : n + m = 0) : n = 0 :=
protected theorem add_eq_zero_iff : n + m = 0 ↔ n = 0 ∧ m = 0 :=
⟨Nat.eq_zero_of_add_eq_zero, fun ⟨h₁, h₂⟩ => h₂.symm ▸ h₁⟩
protected theorem add_left_cancel_iff {n : Nat} : n + m = n + k ↔ m = k :=
@[simp] protected theorem add_left_cancel_iff {n : Nat} : n + m = n + k ↔ m = k :=
⟨Nat.add_left_cancel, fun | rfl => rfl⟩
protected theorem add_right_cancel_iff {n : Nat} : m + n = k + n ↔ m = k :=
@[simp] protected theorem add_right_cancel_iff {n : Nat} : m + n = k + n ↔ m = k :=
⟨Nat.add_right_cancel, fun | rfl => rfl⟩
protected theorem add_le_add_iff_left {n : Nat} : n + m ≤ n + k ↔ m ≤ k :=
@[simp] protected theorem add_le_add_iff_left {n : Nat} : n + m ≤ n + k ↔ m ≤ k :=
⟨Nat.le_of_add_le_add_left, fun h => Nat.add_le_add_left h _⟩
protected theorem lt_of_add_lt_add_right : ∀ {n : Nat}, k + n < m + n → k < m
@@ -52,10 +55,10 @@ protected theorem lt_of_add_lt_add_right : ∀ {n : Nat}, k + n < m + n → k <
protected theorem lt_of_add_lt_add_left {n : Nat} : n + k < n + m → k < m := by
rw [Nat.add_comm n, Nat.add_comm n]; exact Nat.lt_of_add_lt_add_right
protected theorem add_lt_add_iff_left {k n m : Nat} : k + n < k + m ↔ n < m :=
@[simp] protected theorem add_lt_add_iff_left {k n m : Nat} : k + n < k + m ↔ n < m :=
⟨Nat.lt_of_add_lt_add_left, fun h => Nat.add_lt_add_left h _⟩
protected theorem add_lt_add_iff_right {k n m : Nat} : n + k < m + k ↔ n < m :=
@[simp] protected theorem add_lt_add_iff_right {k n m : Nat} : n + k < m + k ↔ n < m :=
⟨Nat.lt_of_add_lt_add_right, fun h => Nat.add_lt_add_right h _⟩
protected theorem add_lt_add_of_le_of_lt {a b c d : Nat} (hle : a ≤ b) (hlt : c < d) :
@@ -75,10 +78,10 @@ protected theorem pos_of_lt_add_right (h : n < n + k) : 0 < k :=
protected theorem pos_of_lt_add_left : n < k + n → 0 < k := by
rw [Nat.add_comm]; exact Nat.pos_of_lt_add_right
protected theorem lt_add_right_iff_pos : n < n + k ↔ 0 < k :=
@[simp] protected theorem lt_add_right_iff_pos : n < n + k ↔ 0 < k :=
⟨Nat.pos_of_lt_add_right, Nat.lt_add_of_pos_right⟩
protected theorem lt_add_left_iff_pos : n < k + n ↔ 0 < k :=
@[simp] protected theorem lt_add_left_iff_pos : n < k + n ↔ 0 < k :=
⟨Nat.pos_of_lt_add_left, Nat.lt_add_of_pos_left⟩
protected theorem add_pos_left (h : 0 < m) (n) : 0 < m + n :=


@@ -173,13 +173,13 @@ instance : LawfulBEq PolyCnstr where
eq_of_beq {a b} h := by
cases a; rename_i eq₁ lhs₁ rhs₁
cases b; rename_i eq₂ lhs₂ rhs₂
have h : eq₁ == eq₂ && lhs₁ == lhs₂ && rhs₁ == rhs₂ := h
have h : eq₁ == eq₂ && (lhs₁ == lhs₂ && rhs₁ == rhs₂) := h
simp at h
have ⟨⟨h₁, h₂⟩, h₃⟩ := h
have ⟨h₁, h₂, h₃⟩ := h
rw [h₁, h₂, h₃]
rfl {a} := by
cases a; rename_i eq lhs rhs
show (eq == eq && lhs == lhs && rhs == rhs) = true
show (eq == eq && (lhs == lhs && rhs == rhs)) = true
simp [LawfulBEq.rfl]
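The reassociated `&&` matches the right-nested comparisons that the updated `BEq` deriving handler now generates, which is what lets the anonymous-constructor pattern take the hypothesis apart without extra nesting. A small sketch of the pattern this relies on (generic Booleans, not from the diff):

```lean
-- With right-nested `&&`, `simp` produces a right-nested conjunction, which
-- a flat pattern `⟨h₁, h₂, h₃⟩` destructures directly.
example (a b c : Bool) (h : (a && (b && c)) = true) :
    a = true ∧ b = true ∧ c = true := by
  simp at h
  exact h
```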
def PolyCnstr.mul (k : Nat) (c : PolyCnstr) : PolyCnstr :=


@@ -212,6 +212,9 @@ instance (α) [BEq α] [LawfulBEq α] : LawfulBEq (Option α) where
@[simp] theorem all_none : Option.all p none = true := rfl
@[simp] theorem all_some : Option.all p (some x) = p x := rfl
@[simp] theorem any_none : Option.any p none = false := rfl
@[simp] theorem any_some : Option.any p (some x) = p x := rfl
/-- The minimum of two optional values. -/
protected def min [Min α] : Option α → Option α → Option α
| some x, some y => some (Min.min x y)
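Only the `some`/`some` case is visible in the hunk. Assuming the elided cases treat `none` as an identity on either side (the conventional choice), the function behaves like:

```lean
-- Sketch under the stated assumption about the cases elided above:
#eval Option.min (some 3) (some 5)             -- some 3
#eval Option.min (some 3) (none : Option Nat)  -- some 3 (assumed behavior)
```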


@@ -193,6 +193,16 @@ theorem mem_map_of_mem (g : α → β) (h : a ∈ x) : g a ∈ Option.map g x :=
@[simp] theorem filter_none (p : α → Bool) : none.filter p = none := rfl
theorem filter_some : Option.filter p (some a) = if p a then some a else none := rfl
@[simp] theorem all_guard (p : α → Prop) [DecidablePred p] (a : α) :
Option.all q (guard p a) = (!p a || q a) := by
simp only [guard]
split <;> simp_all
@[simp] theorem any_guard (p : α → Prop) [DecidablePred p] (a : α) :
Option.any q (guard p a) = (p a && q a) := by
simp only [guard]
split <;> simp_all
theorem bind_map_comm {α β} {x : Option (Option α)} {f : α → β} :
x.bind (Option.map f) = (x.map (Option.map f)).bind id := by cases x <;> simp
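The two guard lemmas say that `Option.guard p a` produces `some a` exactly when `p a` holds, so `all`/`any` reduce to Boolean combinations of the two predicates. A small concrete instance:

```lean
-- `Option.guard (· > 0) 5 = some 5`, so `any` just tests the payload:
example : (Option.guard (· > 0) 5).any (· % 2 == 1) = true := by decide
```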


@@ -0,0 +1,27 @@
/-
Copyright (c) 2017 Johannes Hölzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes Hölzl
-/
prelude
import Init.Ext
namespace Subtype
universe u
variable {α : Sort u} {p q : α → Prop}
@[ext]
protected theorem ext : ∀ {a1 a2 : { x // p x }}, (a1 : α) = (a2 : α) → a1 = a2
| ⟨_, _⟩, ⟨_, _⟩, rfl => rfl
@[simp]
protected theorem «forall» {q : { a // p a } → Prop} : (∀ x, q x) ↔ ∀ a b, q ⟨a, b⟩ :=
⟨fun h a b => h ⟨a, b⟩, fun h ⟨a, b⟩ => h a b⟩
@[simp]
protected theorem «exists» {q : { a // p a } → Prop} :
(Exists fun x => q x) ↔ Exists fun a => Exists fun b => q ⟨a, b⟩ :=
⟨fun ⟨⟨a, b⟩, h⟩ => ⟨a, b, h⟩, fun ⟨a, b, h⟩ => ⟨⟨a, b⟩, h⟩⟩
end Subtype
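The new `@[ext]` lemma makes subtype equality reducible to equality of the underlying values, usable both via the `ext` tactic and by direct application:

```lean
example (a b : { n : Nat // 0 < n }) (h : (a : Nat) = (b : Nat)) : a = b :=
  Subtype.ext h
```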


@@ -399,9 +399,16 @@ def setTailInfo (stx : Syntax) (info : SourceInfo) : Syntax :=
| some stx => stx
| none => stx
/--
Replaces the trailing whitespace in `stx`, if any, with an empty substring.
The trailing substring's `startPos` and `str` are preserved in order to ensure that the result could
have been produced by the parser, in case any syntax consumers rely on such an assumption.
-/
def unsetTrailing (stx : Syntax) : Syntax :=
match stx.getTailInfo with
| SourceInfo.original lead pos _ endPos => stx.setTailInfo (SourceInfo.original lead pos "".toSubstring endPos)
| SourceInfo.original lead pos trail endPos =>
stx.setTailInfo (SourceInfo.original lead pos { trail with stopPos := trail.startPos } endPos)
| _ => stx
@[specialize] private partial def updateFirst {α} [Inhabited α] (a : Array α) (f : α → Option α) (i : Nat) : Option (Array α) :=


@@ -320,7 +320,7 @@ Because this is in the `Eq` namespace, if you have a variable `h : a = b`,
For more information: [Equality](https://lean-lang.org/theorem_proving_in_lean4/quantifiers_and_equality.html#equality)
-/
theorem Eq.symm {α : Sort u} {a b : α} (h : Eq a b) : Eq b a :=
@[symm] theorem Eq.symm {α : Sort u} {a b : α} (h : Eq a b) : Eq b a :=
h ▸ rfl
/--
@@ -2214,12 +2214,17 @@ def Char.utf8Size (c : Char) : Nat :=
or `none`. In functional programming languages, this type is used to represent
the possibility of failure, or sometimes nullability.
For example, the function `HashMap.find? : HashMap α β → α → Option β` looks up
For example, the function `HashMap.get? : HashMap α β → α → Option β` looks up
a specified key `a : α` inside the map. Because we do not know in advance
whether the key is actually in the map, the return type is `Option β`, where
`none` means the value was not in the map, and `some b` means that the value
was found and `b` is the value retrieved.
The `xs[i]` syntax, which is used to index into collections, has a variant
`xs[i]?` that returns an optional value depending on whether the given index
is valid. For example, if `m : HashMap α β` and `a : α`, then `m[a]?` is
equivalent to `HashMap.get? m a`.
To extract a value from an `Option α`, we use pattern matching:
```
def map (f : α → β) (x : Option α) : Option β :=


@@ -5,6 +5,7 @@ Authors: Leonardo de Moura
-/
prelude
import Lean.Util.FindMVar
import Lean.Util.CollectFVars
import Lean.Parser.Term
import Lean.Meta.KAbstract
import Lean.Meta.Tactic.ElimInfo
@@ -711,6 +712,12 @@ structure Context where
```
theorem Eq.subst' {α} {motive : α → Prop} {a b : α} (h : a = b) : motive a → motive b
```
For another example, the term `isEmptyElim (α := α)` is an underapplied eliminator, and it needs
argument `α` to be elaborated eagerly to create a type-correct motive.
```
def isEmptyElim [IsEmpty α] {p : α → Sort _} (a : α) : p a := ...
example {α : Type _} [IsEmpty α] : id (α → False) := isEmptyElim (α := α)
```
-/
extraArgsPos : Array Nat
@@ -724,8 +731,8 @@ structure State where
namedArgs : List NamedArg
/-- User-provided arguments that still have to be processed. -/
args : List Arg
/-- Discriminants processed so far. -/
discrs : Array Expr := #[]
/-- Discriminants (targets) processed so far. -/
discrs : Array (Option Expr)
/-- Instance implicit arguments collected so far. -/
instMVars : Array MVarId := #[]
/-- Position of the next argument to be processed. We use it to decide whether the argument is the motive or a discriminant. -/
@@ -742,10 +749,7 @@ def mkMotive (discrs : Array Expr) (expectedType : Expr): MetaM Expr := do
let motiveBody ← kabstract motive discr
/- We use `transform (usedLetOnly := true)` to eliminate unnecessary let-expressions. -/
let discrType ← transform (usedLetOnly := true) (← instantiateMVars (← inferType discr))
let motive := Lean.mkLambda (← mkFreshBinderName) BinderInfo.default discrType motiveBody
unless (← isTypeCorrect motive) do
throwError "failed to elaborate eliminator, motive is not type correct:{indentD motive}"
return motive
return Lean.mkLambda (← mkFreshBinderName) BinderInfo.default discrType motiveBody
/-- If the eliminator is over-applied, we "revert" the extra arguments. -/
def revertArgs (args : List Arg) (f : Expr) (expectedType : Expr) : TermElabM (Expr × Expr) :=
@@ -761,7 +765,7 @@ def revertArgs (args : List Arg) (f : Expr) (expectedType : Expr) : TermElabM (E
return (mkApp f val, mkForall (← mkFreshBinderName) BinderInfo.default valType expectedTypeBody)
/--
Construct the resulting application after all discriminants have bee elaborated, and we have
Construct the resulting application after all discriminants have been elaborated, and we have
consumed as many given arguments as possible.
-/
def finalize : M Expr := do
@@ -769,29 +773,50 @@ def finalize : M Expr := do
throwError "failed to elaborate eliminator, unused named arguments: {(← get).namedArgs.map (·.name)}"
let some motive := (← get).motive?
| throwError "failed to elaborate eliminator, insufficient number of arguments"
trace[Elab.app.elab_as_elim] "motive: {motive}"
forallTelescope (← get).fType fun xs _ => do
trace[Elab.app.elab_as_elim] "xs: {xs}"
let mut expectedType := (← read).expectedType
trace[Elab.app.elab_as_elim] "expectedType:{indentD expectedType}"
let throwInsufficient := do
throwError "failed to elaborate eliminator, insufficient number of arguments, expected type:{indentExpr expectedType}"
let mut f := (← get).f
if xs.size > 0 then
-- under-application, specialize the expected type using `xs`
assert! (← get).args.isEmpty
try
expectedType ← instantiateForall expectedType xs
catch _ =>
throwError "failed to elaborate eliminator, insufficient number of arguments, expected type:{indentExpr expectedType}"
for x in xs do
let .forallE _ t b _ ← whnf expectedType | throwInsufficient
unless ← fullApproxDefEq <| isDefEq t (← inferType x) do
-- We can't assume that these binding domains were supposed to line up, so report insufficient arguments
throwInsufficient
expectedType := b.instantiate1 x
trace[Elab.app.elab_as_elim] "xs after specialization of expected type: {xs}"
else
-- over-application, simulate `revert`
-- over-application, simulate `revert` while generalizing the values of these arguments in the expected type
(f, expectedType) ← revertArgs (← get).args f expectedType
unless ← isTypeCorrect expectedType do
throwError "failed to elaborate eliminator, after generalizing over-applied arguments, expected type is type incorrect:{indentExpr expectedType}"
trace[Elab.app.elab_as_elim] "expectedType after processing:{indentD expectedType}"
let result := mkAppN f xs
trace[Elab.app.elab_as_elim] "result:{indentD result}"
let mut discrs := (← get).discrs
let idx := (← get).idx
if (← get).discrs.size < (← read).elimInfo.targetsPos.size then
if discrs.any Option.isNone then
for i in [idx:idx + xs.size], x in xs do
if (← read).elimInfo.targetsPos.contains i then
discrs := discrs.push x
let motiveVal ← mkMotive discrs expectedType
if let some tidx := (← read).elimInfo.targetsPos.indexOf? i then
discrs := discrs.set! tidx x
if let some idx := discrs.findIdx? Option.isNone then
-- This should not happen.
trace[Elab.app.elab_as_elim] "Internal error, missing target with index {idx}"
throwError "failed to elaborate eliminator, insufficient number of arguments"
trace[Elab.app.elab_as_elim] "discrs: {discrs.map Option.get!}"
let motiveVal ← mkMotive (discrs.map Option.get!) expectedType
unless (← isTypeCorrect motiveVal) do
throwError "failed to elaborate eliminator, motive is not type correct:{indentD motiveVal}"
unless (← isDefEq motive motiveVal) do
throwError "failed to elaborate eliminator, invalid motive{indentExpr motiveVal}"
synthesizeAppInstMVars (← get).instMVars result
trace[Elab.app.elab_as_elim] "completed motive:{indentD motive}"
let result ← mkLambdaFVars xs (← instantiateMVars result)
return result
@@ -819,9 +844,9 @@ def getNextArg? (binderName : Name) (binderInfo : BinderInfo) : M (LOption Arg)
def setMotive (motive : Expr) : M Unit :=
modify fun s => { s with motive? := motive }
/-- Push the given expression into the `discrs` field in the state. -/
def addDiscr (discr : Expr) : M Unit :=
modify fun s => { s with discrs := s.discrs.push discr }
/-- Push the given expression into the `discrs` field in the state, where `i` is which target it is for. -/
def addDiscr (i : Nat) (discr : Expr) : M Unit :=
modify fun s => { s with discrs := s.discrs.set! i discr }
/-- Elaborate the given argument with the given expected type. -/
private def elabArg (arg : Arg) (argExpectedType : Expr) : M Expr := do
@@ -856,11 +881,11 @@ partial def main : M Expr := do
let motive ← mkImplicitArg binderType binderInfo
setMotive motive
addArgAndContinue motive
else if (← read).elimInfo.targetsPos.contains idx then
else if let some tidx := (← read).elimInfo.targetsPos.indexOf? idx then
match (← getNextArg? binderName binderInfo) with
| .some arg => let discr ← elabArg arg binderType; addDiscr discr; addArgAndContinue discr
| .some arg => let discr ← elabArg arg binderType; addDiscr tidx discr; addArgAndContinue discr
| .undef => finalize
| .none => let discr ← mkImplicitArg binderType binderInfo; addDiscr discr; addArgAndContinue discr
| .none => let discr ← mkImplicitArg binderType binderInfo; addDiscr tidx discr; addArgAndContinue discr
else match (← getNextArg? binderName binderInfo) with
| .some (.stx stx) =>
if (← read).extraArgsPos.contains idx then
@@ -922,10 +947,12 @@ def elabAppArgs (f : Expr) (namedArgs : Array NamedArg) (args : Array Arg)
let expectedType ← instantiateMVars expectedType
if expectedType.getAppFn.isMVar then throwError "failed to elaborate eliminator, expected type is not available"
let extraArgsPos ← getElabAsElimExtraArgsPos elimInfo
trace[Elab.app.elab_as_elim] "extraArgsPos: {extraArgsPos}"
ElabElim.main.run { elimInfo, expectedType, extraArgsPos } |>.run' {
f, fType
args := args.toList
namedArgs := namedArgs.toList
discrs := mkArray elimInfo.targetsPos.size none
}
else
ElabAppArgs.main.run { explicit, ellipsis, resultIsOutParamSupport } |>.run' {
@@ -955,19 +982,29 @@ where
/--
Collect extra argument positions that must be elaborated eagerly when using `elab_as_elim`.
The idea is that the contribute to motive inference. See comment at `ElamElim.Context.extraArgsPos`.
The idea is that they contribute to motive inference. See comment at `ElamElim.Context.extraArgsPos`.
-/
getElabAsElimExtraArgsPos (elimInfo : ElimInfo) : MetaM (Array Nat) := do
forallTelescope elimInfo.elimType fun xs type => do
let resultArgs := type.getAppArgs
let targets := type.getAppArgs
/- Compute transitive closure of fvars appearing in the motive and the targets. -/
let initMotiveFVars : CollectFVars.State := targets.foldl (init := {}) collectFVars
let motiveFVars ← xs.size.foldRevM (init := initMotiveFVars) fun i s => do
let x := xs[i]!
if elimInfo.motivePos == i || elimInfo.targetsPos.contains i || s.fvarSet.contains x.fvarId! then
return collectFVars s (← inferType x)
else
return s
/- Collect the extra argument positions -/
let mut extraArgsPos := #[]
for i in [:xs.size] do
let x := xs[i]!
unless elimInfo.targetsPos.contains i do
let xType ← inferType x
unless elimInfo.motivePos == i || elimInfo.targetsPos.contains i do
let xType ← x.fvarId!.getType
/- We only consider "first-order" types because we can reliably "extract" information from them. -/
if isFirstOrder xType
&& Option.isSome (xType.find? fun e => e.isFVar && resultArgs.contains e) then
if motiveFVars.fvarSet.contains x.fvarId!
|| (isFirstOrder xType
&& Option.isSome (xType.find? fun e => e.isFVar && motiveFVars.fvarSet.contains e.fvarId!)) then
extraArgsPos := extraArgsPos.push i
return extraArgsPos
@@ -1317,9 +1354,17 @@ private partial def resolveDotName (id : Syntax) (expectedType? : Option Expr) :
tryPostponeIfNoneOrMVar expectedType?
let some expectedType := expectedType?
| throwError "invalid dotted identifier notation, expected type must be known"
forallTelescopeReducing expectedType fun _ resultType => do
withForallBody expectedType fun resultType => do
go resultType expectedType #[]
where
/-- A weak version of forallTelescopeReducing that only uses whnfCore, to avoid unfolding definitions except by `unfoldDefinition?` below. -/
withForallBody {α} (type : Expr) (k : Expr → TermElabM α) : TermElabM α :=
forallTelescope type fun _ body => do
let body ← whnfCore body
if body.isForall then
withForallBody body k
else
k body
go (resultType : Expr) (expectedType : Expr) (previousExceptions : Array Exception) : TermElabM Name := do
let resultType ← instantiateMVars resultType
let resultTypeFn := resultType.cleanupAnnotations.getAppFn
@@ -1335,7 +1380,8 @@ where
| ex@(.error ..) =>
match (← unfoldDefinition? resultType) with
| some resultType =>
go (← whnfCore resultType) expectedType (previousExceptions.push ex)
withForallBody resultType fun resultType => do
go resultType expectedType (previousExceptions.push ex)
| none =>
previousExceptions.forM fun ex => logException ex
throw ex
@@ -1528,5 +1574,6 @@ builtin_initialize
registerTraceClass `Elab.app.args (inherited := true)
registerTraceClass `Elab.app.propagateExpectedType (inherited := true)
registerTraceClass `Elab.app.finalize (inherited := true)
registerTraceClass `Elab.app.elab_as_elim (inherited := true)
end Lean.Elab.Term


@@ -5,6 +5,7 @@ Authors: Leonardo de Moura
-/
prelude
import Lean.Util.CollectLevelParams
import Lean.Util.CollectAxioms
import Lean.Meta.Reduce
import Lean.Elab.DeclarationRange
import Lean.Elab.Eval
@@ -340,8 +341,7 @@ private def mkRunEval (e : Expr) : MetaM Expr := do
let instVal ← mkEvalInstCore ``Lean.Eval e
instantiateMVars (mkAppN (mkConst ``Lean.runEval [u]) #[α, instVal, mkSimpleThunk e])
unsafe def elabEvalUnsafe : CommandElab
| `(#eval%$tk $term) => do
unsafe def elabEvalCoreUnsafe (bang : Bool) (tk term : Syntax): CommandElabM Unit := do
let declName := `_eval
let addAndCompile (value : Expr) : TermElabM Unit := do
let value ← Term.levelMVarToParam (← instantiateMVars value)
@@ -358,6 +358,13 @@ unsafe def elabEvalUnsafe : CommandElab
}
Term.ensureNoUnassignedMVars decl
addAndCompile decl
-- Check for sorry axioms
let checkSorry (declName : Name) : MetaM Unit := do
unless bang do
let axioms ← collectAxioms declName
if axioms.contains ``sorryAx then
throwError ("cannot evaluate expression that depends on the `sorry` axiom.\nUse `#eval!` to " ++
"evaluate nevertheless (which may cause lean to crash).")
-- Elaborate `term`
let elabEvalTerm : TermElabM Expr := do
let e ← Term.elabTerm term none
@@ -386,6 +393,7 @@ unsafe def elabEvalUnsafe : CommandElab
else
let e ← mkRunMetaEval e
addAndCompile e
checkSorry declName
let act ← evalConst (Environment → Options → IO (String × Except IO.Error Environment)) declName
pure <| Sum.inr act
match act with
@@ -402,6 +410,7 @@ unsafe def elabEvalUnsafe : CommandElab
-- modify e to `runEval e`
let e ← mkRunEval (← elabEvalTerm)
addAndCompile e
checkSorry declName
let act ← evalConst (IO (String × Except IO.Error Unit)) declName
let (out, res) ← liftM (m := IO) act
logInfoAt tk out
@@ -412,10 +421,19 @@ unsafe def elabEvalUnsafe : CommandElab
elabMetaEval
else
elabEval
@[implemented_by elabEvalCoreUnsafe]
opaque elabEvalCore (bang : Bool) (tk term : Syntax): CommandElabM Unit
@[builtin_command_elab «eval»]
def elabEval : CommandElab
| `(#eval%$tk $term) => elabEvalCore false tk term
| _ => throwUnsupportedSyntax
@[builtin_command_elab «eval», implemented_by elabEvalUnsafe]
opaque elabEval : CommandElab
@[builtin_command_elab evalBang]
def elabEvalBang : CommandElab
| `(Parser.Command.evalBang|#eval!%$tk $term) => elabEvalCore true tk term
| _ => throwUnsupportedSyntax
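Taken together, the hunks above make plain `#eval` reject any term whose compiled definition depends on `sorryAx`, while `#eval!` (the `bang := true` path) skips the check. A sketch of the intended user-facing behavior:

```lean
def risky : Nat := sorry

-- #eval risky   -- now an error: cannot evaluate expression that depends on
--               -- the `sorry` axiom; use `#eval!` to evaluate nevertheless
-- #eval! risky  -- evaluates anyway (and may crash at runtime)
```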
private def checkImportsForRunCmds : CommandElabM Unit := do
unless (← getEnv).contains ``CommandElabM do


@@ -12,17 +12,62 @@ import Lean.Language.Basic
namespace Lean.Elab.Command
/--
A `Scope` records the part of the `CommandElabM` state that respects scoping,
such as the data for `universe`, `open`, and `variable` declarations, the current namespace,
and currently enabled options.
The `CommandElabM` state contains a stack of scopes, and only the top `Scope`
on the stack is read from or modified. There is always at least one `Scope` on the stack,
even outside any `section` or `namespace`, and each new pushed `Scope`
starts as a modified copy of the previous top scope.
-/
structure Scope where
/--
The component of the `namespace` or `section` that this scope is associated to.
For example, `section a.b.c` and `namespace a.b.c` each create three scopes with headers
named `a`, `b`, and `c`.
This is used for checking the `end` command. The "base scope" has `""` as its header.
-/
header : String
/--
The current state of all set options at this point in the scope. Note that this is the
full current set of options and does *not* simply contain the options set
while this scope has been active.
-/
opts : Options := {}
/-- The current namespace. The top-level namespace is represented by `Name.anonymous`. -/
currNamespace : Name := Name.anonymous
/-- All currently `open`ed namespaces and names. -/
openDecls : List OpenDecl := []
/-- The current list of names for universe level variables to use for new declarations. This is managed by the `universe` command. -/
levelNames : List Name := []
/-- section variables -/
/--
The current list of binders to use for new declarations.
This is managed by the `variable` command.
Each binder is represented in `Syntax` form, and it is re-elaborated
within each command that uses this information.
This is also used by commands, such as `#check`, to create an initial local context,
even if they do not work with binders per se.
-/
varDecls : Array (TSyntax ``Parser.Term.bracketedBinder) := #[]
/-- Globally unique internal identifiers for the `varDecls` -/
/--
Globally unique internal identifiers for the `varDecls`.
There is one identifier per variable introduced by the binders
(recall that a binder such as `(a b c : Ty)` can produce more than one variable),
and each identifier is the user-provided variable name with a macro scope.
This is used by `TermElabM` in `Lean.Elab.Term.Context` to help with processing macros
that capture these variables.
-/
varUIds : Array Name := #[]
/-- noncomputable sections automatically add the `noncomputable` modifier to any declaration we cannot generate code for. -/
/--
If true (default: false), all declarations that fail to compile
automatically receive the `noncomputable` modifier.
A scope with this flag set is created by `noncomputable section`.
Recall that a new scope inherits all values from its parent scope,
so all sections and namespaces nested within a `noncomputable` section also have this flag set.
-/
isNoncomputable : Bool := false
deriving Inhabited
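The documented scope-stack behavior can be seen in a short session sketch: each `section`/`namespace` header pushes a copy of the current top scope, and `end` pops it, restoring the previous options, `open`s, universes, and variables.

```lean
namespace A.B   -- pushes two scopes, headers "A" and "B"
open List
universe u
section S       -- pushes a scope with header "S", inheriting `open List` and `u`
variable (n : Nat)
end S           -- pops "S": the `variable` binder is gone again
end A.B         -- pops "B" then "A"; `open List` and `universe u` no longer apply
```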
@@ -230,6 +275,7 @@ private def ioErrorToMessage (ctx : Context) (ref : Syntax) (err : IO.Error) : M
instance : MonadLiftT IO CommandElabM where
monadLift := liftIO
/-- Return the current scope. -/
def getScope : CommandElabM Scope := do pure (← get).scopes.head!
instance : MonadResolveName CommandElabM where
@@ -479,7 +525,7 @@ def elabCommandTopLevel (stx : Syntax) : CommandElabM Unit := withRef stx do pro
-- should be true iff the command supports incrementality
if (← IO.hasFinished snap.new.result) then
trace[Elab.snapshotTree]
Language.ToSnapshotTree.toSnapshotTree snap.new.result.get |>.format
(Language.ToSnapshotTree.toSnapshotTree snap.new.result.get |>.format)
modify fun st => { st with
messages := initMsgs ++ msgs
infoState := { st.infoState with trees := initInfoTrees ++ st.infoState.trees }
@@ -612,6 +658,11 @@ Interrupt and abort exceptions are caught but not logged.
private def liftAttrM {α} (x : AttrM α) : CommandElabM α := do
liftCoreM x
/--
Return the stack of all currently active scopes:
the base scope always comes last; new scopes are prepended in the front.
In particular, the current scope is always the first element.
-/
def getScopes : CommandElabM (List Scope) := do
pure (← get).scopes


@@ -43,12 +43,10 @@ where
let mut ctorArgs1 := #[]
let mut ctorArgs2 := #[]
let mut rhs ← `(true)
-- add `_` for inductive parameters, they are inaccessible
for _ in [:indVal.numParams] do
ctorArgs1 := ctorArgs1.push (← `(_))
ctorArgs2 := ctorArgs2.push (← `(_))
let mut rhs_empty := true
for i in [:ctorInfo.numFields] do
let x := xs[indVal.numParams + i]!
let pos := indVal.numParams + ctorInfo.numFields - i - 1
let x := xs[pos]!
if type.containsFVar x.fvarId! then
-- If resulting type depends on this field, we don't need to compare
ctorArgs1 := ctorArgs1.push (← `(_))
@@ -62,11 +60,32 @@ where
if (← isProp xType) then
continue
if xType.isAppOf indVal.name then
rhs ← `($rhs && $(mkIdent auxFunName):ident $a:ident $b:ident)
if rhs_empty then
rhs ← `($(mkIdent auxFunName):ident $a:ident $b:ident)
rhs_empty := false
else
rhs ← `($(mkIdent auxFunName):ident $a:ident $b:ident && $rhs)
/- If `x` appears in the type of another field, use `eq_of_beq` to
unify the types of the subsequent variables -/
else if ← xs[pos+1:].anyM
(fun fvar => (Expr.containsFVar · x.fvarId!) <$> (inferType fvar)) then
rhs ← `(if h : $a:ident == $b:ident then by
cases (eq_of_beq h)
exact $rhs
else false)
rhs_empty := false
else
rhs ← `($rhs && $a:ident == $b:ident)
patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs1:term*))
patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs2:term*))
if rhs_empty then
rhs ← `($a:ident == $b:ident)
rhs_empty := false
else
rhs ← `($a:ident == $b:ident && $rhs)
-- add `_` for inductive parameters, they are inaccessible
for _ in [:indVal.numParams] do
ctorArgs1 := ctorArgs1.push (← `(_))
ctorArgs2 := ctorArgs2.push (← `(_))
patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs1.reverse:term*))
patterns := patterns.push (← `(@$(mkIdent ctorName):ident $ctorArgs2.reverse:term*))
let alt ← `(matchAltExpr| | $[$patterns:term],* => $rhs:term)
alts := alts.push alt
alts := alts.push (← mkElseAlt)


@@ -114,7 +114,7 @@ def checkTerminationByHints (preDefs : Array PreDefinition) : CoreM Unit := do
if !structural && !preDefsWithout.isEmpty then
let m := MessageData.andList (preDefsWithout.toList.map (m!"{·.declName}"))
let doOrDoes := if preDefsWithout.size = 1 then "does" else "do"
logErrorAt termBy.ref (m!"Incomplete set of `termination_by` annotations:\n"++
logErrorAt termBy.ref (m!"incomplete set of `termination_by` annotations:\n"++
m!"This function is mutually recursive with {m}, which {doOrDoes} not have " ++
m!"a `termination_by` clause.\n" ++
m!"The present clause is ignored.")


@@ -5,11 +5,11 @@ Authors: Leonardo de Moura, Joachim Breitner
-/
prelude
import Lean.Util.HasConstCache
import Lean.Meta.PProdN
import Lean.Meta.Match.MatcherApp.Transform
import Lean.Elab.RecAppSyntax
import Lean.Elab.PreDefinition.Basic
import Lean.Elab.PreDefinition.Structural.Basic
import Lean.Elab.PreDefinition.Structural.FunPacker
import Lean.Elab.PreDefinition.Structural.RecArgInfo
namespace Lean.Elab.Structural
@@ -21,11 +21,11 @@ private def throwToBelowFailed : MetaM α :=
partial def searchPProd (e : Expr) (F : Expr) (k : Expr → Expr → MetaM α) : MetaM α := do
match (← whnf e) with
| .app (.app (.const ``PProd _) d1) d2 =>
(do searchPProd d1 (← mkAppM ``PProd.fst #[F]) k)
<|> (do searchPProd d2 (← mkAppM ``PProd.snd #[F]) k)
(do searchPProd d1 (.proj ``PProd 0 F) k)
<|> (do searchPProd d2 (.proj ``PProd 1 F) k)
| .app (.app (.const ``And _) d1) d2 =>
(do searchPProd d1 (← mkAppM ``And.left #[F]) k)
<|> (do searchPProd d2 (← mkAppM ``And.right #[F]) k)
(do searchPProd d1 (.proj ``And 0 F) k)
<|> (do searchPProd d2 (.proj ``And 1 F) k)
| .const ``PUnit _
| .const ``True _ => throwToBelowFailed
| _ => k e F
@@ -85,7 +85,7 @@ private def withBelowDict [Inhabited α] (below : Expr) (numIndParams : Nat)
return ((← mkFreshUserName `C), fun _ => pure t)
withLocalDeclsD CDecls fun Cs => do
-- We have to pack these canary motives like we packed the real motives
let packedCs ← positions.mapMwith packMotives motiveTypes Cs
let packedCs ← positions.mapMwith PProdN.packLambdas motiveTypes Cs
let belowDict := mkAppN pre packedCs
let belowDict := mkAppN belowDict finalArgs
trace[Elab.definition.structural] "initial belowDict for {Cs}:{indentExpr belowDict}"
@@ -250,7 +250,7 @@ def mkBRecOnConst (recArgInfos : Array RecArgInfo) (positions : Positions)
let brecOnAux := brecOnCons 0
-- Infer the type of the packed motive arguments
let packedMotiveTypes ← inferArgumentTypesN indGroup.numMotives brecOnAux
let packedMotives ← positions.mapMwith packMotives packedMotiveTypes motives
let packedMotives ← positions.mapMwith PProdN.packLambdas packedMotiveTypes motives
return fun n => mkAppN (brecOnCons n) packedMotives
@@ -289,12 +289,11 @@ def mkBrecOnApp (positions : Positions) (fnIdx : Nat) (brecOnConst : Nat → Exp
let brecOn := brecOnConst recArgInfo.indIdx
let brecOn := mkAppN brecOn indexMajorArgs
let packedFTypes ← inferArgumentTypesN positions.size brecOn
let packedFArgs ← positions.mapMwith packFArgs packedFTypes FArgs
let packedFArgs ← positions.mapMwith PProdN.mkLambdas packedFTypes FArgs
let brecOn := mkAppN brecOn packedFArgs
let some poss := positions.find? (·.contains fnIdx)
| throwError "mkBrecOnApp: Could not find {fnIdx} in {positions}"
let brecOn ← if poss.size = 1 then pure brecOn else
mkPProdProjN (poss.getIdx? fnIdx).get! brecOn
let brecOn ← PProdN.proj poss.size (poss.getIdx? fnIdx).get! brecOn
mkLambdaFVars ys (mkAppN brecOn otherArgs)
end Lean.Elab.Structural


@@ -68,9 +68,7 @@ def getRecArgInfo (fnName : Name) (numFixed : Nat) (xs : Array Expr) (i : Nat) :
throwError "it is a let-binding"
let xType ← whnfD localDecl.type
matchConstInduct xType.getAppFn (fun _ => throwError "its type is not an inductive") fun indInfo us => do
if !(← hasConst (mkBRecOnName indInfo.name)) then
throwError "its type {indInfo.name} does not have a recursor"
else if indInfo.isReflexive && !(← hasConst (mkBInductionOnName indInfo.name)) && !(← isInductivePredicate indInfo.name) then
if indInfo.isReflexive && !(← hasConst (mkBInductionOnName indInfo.name)) && !(← isInductivePredicate indInfo.name) then
throwError "its type {indInfo.name} is a reflexive inductive, but {mkBInductionOnName indInfo.name} does not exist and it is not an inductive predicate"
else
let indArgs : Array Expr := xType.getAppArgs
@@ -263,6 +261,11 @@ def tryAllArgs (fnNames : Array Name) (xs : Array Expr) (values : Array Expr)
if let some combs := allCombinations recArgInfoss' then
for comb in combs do
try
-- Check that the group actually has a brecOn (we used to check this in getRecArgInfo,
-- but in the first phase we do not want to rule-out non-recursive types like `Array`, which
-- are ok in a nested group. This logic can maybe be simplified)
unless (← hasConst (group.brecOnName false 0)) do
throwError "the type {group} does not have a `.brecOn` recursor"
-- TODO: Here we used to save and restore the state. But should the `try`-`catch`
-- not suffice?
let r ← k comb


@@ -1,126 +0,0 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.InferType
/-!
This module contains the logic that packs the motives and FArgs of multiple functions into one,
to allow structural mutual recursion where the number of functions is not exactly the same
as the number of inductive data types in the mutual inductive group.
The private helper functions related to `PProd` here should at some point be moved to their own
module, so that they can be used elsewhere (e.g. `FunInd`), and possibly unified with the similar
constructions for well-founded recursion (see `ArgsPacker` module).
-/
namespace Lean.Elab.Structural
open Meta
private def mkPUnit : Level → Expr
| .zero => .const ``True []
| lvl => .const ``PUnit [lvl]
private def mkPProd (e1 e2 : Expr) : MetaM Expr := do
let lvl1 ← getLevel e1
let lvl2 ← getLevel e2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp2 (.const ``And []) e1 e2
else
return mkApp2 (.const ``PProd [lvl1, lvl2]) e1 e2
private def mkNProd (lvl : Level) (es : Array Expr) : MetaM Expr :=
es.foldrM (init := mkPUnit lvl) mkPProd
private def mkPUnitMk : Level → Expr
| .zero => .const ``True.intro []
| lvl => .const ``PUnit.unit [lvl]
private def mkPProdMk (e1 e2 : Expr) : MetaM Expr := do
let t1 ← inferType e1
let t2 ← inferType e2
let lvl1 ← getLevel t1
let lvl2 ← getLevel t2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp4 (.const ``And.intro []) t1 t2 e1 e2
else
return mkApp4 (.const ``PProd.mk [lvl1, lvl2]) t1 t2 e1 e2
private def mkNProdMk (lvl : Level) (es : Array Expr) : MetaM Expr :=
es.foldrM (init := mkPUnitMk lvl) mkPProdMk
/-- `PProd.fst` or `And.left` (as projections) -/
private def mkPProdFst (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 0 e
| And _ _ => return .proj ``And 0 e
| _ => throwError "Cannot project .1 out of{indentExpr e}\nof type{indentExpr t}"
/-- `PProd.snd` or `And.right` (as projections) -/
private def mkPProdSnd (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 1 e
| And _ _ => return .proj ``And 1 e
| _ => throwError "Cannot project .2 out of{indentExpr e}\nof type{indentExpr t}"
/-- Given a proof of `P₁ ∧ … ∧ Pᵢ ∧ … ∧ Pₙ ∧ True`, return the proof of `Pᵢ` -/
def mkPProdProjN (i : Nat) (e : Expr) : MetaM Expr := do
let mut value := e
for _ in [:i] do
value ← mkPProdSnd value
value ← mkPProdFst value
return value
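Written as a plain term, `mkPProdProjN i` applies `i` second-projections followed by one first-projection. For the `And` case (the `Prop`-level analogue of `PProd`):

```lean
-- i = 1: one `.2` then `.1` extracts P₂ from the right-nested chain.
example {P₁ P₂ P₃ : Prop} (h : P₁ ∧ P₂ ∧ P₃ ∧ True) : P₂ := h.2.1
```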
/--
Combines motives from different functions that recurse on the same parameter type into a single
function returning a `PProd` type.
For example
```
packMotives (Nat → Sort u) #[(fun (n : Nat) => Nat), (fun (n : Nat) => Fin n -> Fin n )]
```
will return
```
fun (n : Nat) => PProd Nat (Fin n → Fin n)
```
It is the identity if `motives.size = 1`.
It returns a dummy motive `(xs : …) → PUnit` or `(xs : …) → True` if no motive is given.
(this is the reason we need the expected type in the `motiveType` parameter).
-/
def packMotives (motiveType : Expr) (motives : Array Expr) : MetaM Expr := do
if motives.size = 1 then
return motives[0]!
trace[Elab.definition.structural] "packing Motives\nexpected: {motiveType}\nmotives: {motives}"
forallTelescope motiveType fun xs sort => do
unless sort.isSort do
throwError "packMotives: Unexpected motiveType {motiveType}"
-- NB: Use beta, not instantiateLambda; when constructing the belowDict below
-- we pass `C`, a plain FVar, here
let motives := motives.map (·.beta xs)
let packedMotives ← mkNProd sort.sortLevel! motives
mkLambdaFVars xs packedMotives
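Spelled out, the packed motive from the docstring example above is a term like the following (a sketch; universe bookkeeping elided):

```lean
-- The two motives `fun n => Nat` and `fun n => Fin n → Fin n`
-- packed into one `PProd`-valued motive:
example : Nat → Type := fun (n : Nat) => PProd Nat (Fin n → Fin n)
```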
/--
Combines the F-args from different functions that recurse on the same parameter type into a single
function returning a `PProd` value. See `packMotives`.
It is the identity if `FArgs.size = 1`.
-/
def packFArgs (FArgType : Expr) (FArgs : Array Expr) : MetaM Expr := do
if FArgs.size = 1 then
return FArgs[0]!
forallTelescope FArgType fun xs body => do
let lvl ← getLevel body
let FArgs := FArgs.map (·.beta xs)
let packedFArgs ← mkNProdMk lvl FArgs
mkLambdaFVars xs packedFArgs
end Lean.Elab.Structural

View File

@@ -36,6 +36,16 @@ def IndGroupInfo.ofInductiveVal (indInfo : InductiveVal) : IndGroupInfo where
def IndGroupInfo.numMotives (group : IndGroupInfo) : Nat :=
group.all.size + group.numNested
/-- Instantiates the right `.brecOn` or `.bInductionOn` for the given type former index,
including universe parameters and fixed prefix. -/
partial def IndGroupInfo.brecOnName (info : IndGroupInfo) (ind : Bool) (idx : Nat) : Name :=
if let .some n := info.all[idx]? then
if ind then mkBInductionOnName n
else mkBRecOnName n
else
let j := idx - info.all.size + 1
info.brecOnName ind 0 |>.appendIndexAfter j
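The index-to-name scheme can be illustrated as follows (names hypothetical, for a group with `all = #[Foo, Bar]` and one nested motive):

```lean
-- idx 0 ↦ Foo.brecOn     (first inductive of the group)
-- idx 1 ↦ Bar.brecOn     (second inductive)
-- idx 2 ↦ Foo.brecOn_1   (nested motive: j = idx - all.size + 1 = 1,
--                         appended as an index to the name for idx 0)
```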
/--
An instance of a mutually inductive group of inductives, identified by the `all` array
and the universe level and expression parameters.
@@ -65,15 +75,9 @@ def IndGroupInst.isDefEq (igi1 igi2 : IndGroupInst) : MetaM Bool := do
/-- Instantiates the right `.brecOn` or `.bInductionOn` for the given type former index,
including universe parameters and fixed prefix. -/
def IndGroupInst.brecOn (group : IndGroupInst) (ind : Bool) (lvl : Level) (idx : Nat) : Expr :=
let e := if let .some n := group.all[idx]? then
if ind then .const (mkBInductionOnName n) group.levels
else .const (mkBRecOnName n) (lvl :: group.levels)
else
let n := group.all[0]!
let j := idx - group.all.size + 1
if ind then .const (mkBInductionOnName n |>.appendIndexAfter j) group.levels
else .const (mkBRecOnName n |>.appendIndexAfter j) (lvl :: group.levels)
mkAppN e group.params
let n := group.brecOnName ind idx
let us := if ind then group.levels else lvl :: group.levels
mkAppN (.const n us) group.params
/--
Figures out the nested type formers of an inductive group, with parameters instantiated

View File

@@ -12,8 +12,35 @@ import Lean.Elab.PreDefinition.Structural.RecArgInfo
namespace Lean.Elab.Structural
open Meta
private partial def replaceIndPredRecApps (recArgInfo : RecArgInfo) (motive : Expr) (e : Expr) : M Expr := do
let maxDepth := IndPredBelow.maxBackwardChainingDepth.get (← getOptions)
private def replaceIndPredRecApp (numFixed : Nat) (funType : Expr) (e : Expr) : M Expr := do
withoutProofIrrelevance do
withTraceNode `Elab.definition.structural (fun _ => pure m!"eliminating recursive call {e}") do
-- We want to replace `e` with an expression of the same type
let main ← mkFreshExprSyntheticOpaqueMVar (← inferType e)
let args : Array Expr := e.getAppArgs[numFixed:]
let lctx ← getLCtx
let r ← lctx.anyM fun localDecl => do
if localDecl.isAuxDecl then return false
let (mvars, _, t) ← forallMetaTelescope localDecl.type -- NB: do not reduce, we want to see the `funType`
unless t.getAppFn == funType do return false
withTraceNodeBefore `Elab.definition.structural (do pure m!"trying {mkFVar localDecl.fvarId} : {localDecl.type}") do
if args.size < t.getAppNumArgs then
trace[Elab.definition.structural] "too few arguments. Underapplied recursive call?"
return false
if (← (t.getAppArgs.zip args).allM (fun (t,s) => isDefEq t s)) then
main.mvarId!.assign (mkAppN (mkAppN localDecl.toExpr mvars) args[t.getAppNumArgs:])
return ← mvars.allM fun v => do
unless (← v.mvarId!.isAssigned) do
trace[Elab.definition.structural] "Cannot use {mkFVar localDecl.fvarId}: parameter {v} remains unassigned"
return false
return true
trace[Elab.definition.structural] "Arguments do not match"
return false
unless r do
throwError "Could not eliminate recursive call {e}"
instantiateMVars main
private partial def replaceIndPredRecApps (recArgInfo : RecArgInfo) (funType : Expr) (motive : Expr) (e : Expr) : M Expr := do
let rec loop (e : Expr) : M Expr := do
match e with
| Expr.lam n d b c =>
@@ -35,12 +62,7 @@ private partial def replaceIndPredRecApps (recArgInfo : RecArgInfo) (motive : Ex
let processApp (e : Expr) : M Expr := do
e.withApp fun f args => do
if f.isConstOf recArgInfo.fnName then
let ty ← inferType e
let main ← mkFreshExprSyntheticOpaqueMVar ty
if (← IndPredBelow.backwardsChaining main.mvarId! maxDepth) then
pure main
else
throwError "could not solve using backwards chaining {MessageData.ofGoal main.mvarId!}"
replaceIndPredRecApp recArgInfo.numFixed funType e
else
return mkAppN (← loop f) (← args.mapM loop)
match (← matchMatcherApp? e) with
@@ -79,33 +101,36 @@ def mkIndPredBRecOn (recArgInfo : RecArgInfo) (value : Expr) : M Expr := do
let type := (← inferType value).headBeta
let (indexMajorArgs, otherArgs) := recArgInfo.pickIndicesMajor ys
trace[Elab.definition.structural] "numFixed: {recArgInfo.numFixed}, indexMajorArgs: {indexMajorArgs}, otherArgs: {otherArgs}"
let motive ← mkForallFVars otherArgs type
let motive ← mkLambdaFVars indexMajorArgs motive
trace[Elab.definition.structural] "brecOn motive: {motive}"
let brecOn := Lean.mkConst (mkBRecOnName recArgInfo.indName!) recArgInfo.indGroupInst.levels
let brecOn := mkAppN brecOn recArgInfo.indGroupInst.params
let brecOn := mkApp brecOn motive
let brecOn := mkAppN brecOn indexMajorArgs
check brecOn
let brecOnType ← inferType brecOn
trace[Elab.definition.structural] "brecOn {brecOn}"
trace[Elab.definition.structural] "brecOnType {brecOnType}"
-- we need to close the telescope here, because the local context is used:
-- The root cause was that this copied code puts an ih : FType into the
-- local context and later, when we use the local context to build the recursive
-- call, it uses this ih. But that ih doesn't exist in the actual brecOn call.
-- That's why it must go.
let FType ← forallBoundedTelescope brecOnType (some 1) fun F _ => do
let F := F[0]!
let FType ← inferType F
trace[Elab.definition.structural] "FType: {FType}"
instantiateForall FType indexMajorArgs
forallBoundedTelescope FType (some 1) fun below _ => do
let below := below[0]!
let valueNew ← replaceIndPredRecApps recArgInfo motive value
let Farg ← mkLambdaFVars (indexMajorArgs ++ #[below] ++ otherArgs) valueNew
let brecOn := mkApp brecOn Farg
let brecOn := mkAppN brecOn otherArgs
mkLambdaFVars ys brecOn
let funType ← mkLambdaFVars ys type
withLetDecl `funType (← inferType funType) funType fun funType => do
let motive ← mkForallFVars otherArgs (mkAppN funType ys)
let motive ← mkLambdaFVars indexMajorArgs motive
trace[Elab.definition.structural] "brecOn motive: {motive}"
let brecOn := Lean.mkConst (mkBRecOnName recArgInfo.indName!) recArgInfo.indGroupInst.levels
let brecOn := mkAppN brecOn recArgInfo.indGroupInst.params
let brecOn := mkApp brecOn motive
let brecOn := mkAppN brecOn indexMajorArgs
check brecOn
let brecOnType ← inferType brecOn
trace[Elab.definition.structural] "brecOn {brecOn}"
trace[Elab.definition.structural] "brecOnType {brecOnType}"
-- we need to close the telescope here, because the local context is used:
-- The root cause was that this copied code puts an ih : FType into the
-- local context and later, when we use the local context to build the recursive
-- call, it uses this ih. But that ih doesn't exist in the actual brecOn call.
-- That's why it must go.
let FType ← forallBoundedTelescope brecOnType (some 1) fun F _ => do
let F := F[0]!
let FType ← inferType F
trace[Elab.definition.structural] "FType: {FType}"
instantiateForall FType indexMajorArgs
forallBoundedTelescope FType (some 1) fun below _ => do
let below := below[0]!
let valueNew ← replaceIndPredRecApps recArgInfo funType motive value
let Farg ← mkLambdaFVars (indexMajorArgs ++ #[below] ++ otherArgs) valueNew
let brecOn := mkApp brecOn Farg
let brecOn := mkAppN brecOn otherArgs
let brecOn ← mkLetFVars #[funType] brecOn
mkLambdaFVars ys brecOn
end Lean.Elab.Structural

View File

@@ -57,18 +57,18 @@ structure TerminationHints where
def TerminationHints.none : TerminationHints := ⟨.missing, .none, .none, .none, 0⟩
/-- Logs warnings when the `TerminationHints` are present. -/
/-- Logs warnings when the `TerminationHints` are unexpectedly present. -/
def TerminationHints.ensureNone (hints : TerminationHints) (reason : String) : CoreM Unit := do
match hints.terminationBy??, hints.terminationBy?, hints.decreasingBy? with
| .none, .none, .none => pure ()
| .none, .none, .some dec_by =>
logErrorAt dec_by.ref m!"unused `decreasing_by`, function is {reason}"
logWarningAt dec_by.ref m!"unused `decreasing_by`, function is {reason}"
| .some term_by?, .none, .none =>
logErrorAt term_by? m!"unused `termination_by?`, function is {reason}"
logWarningAt term_by? m!"unused `termination_by?`, function is {reason}"
| .none, .some term_by, .none =>
logErrorAt term_by.ref m!"unused `termination_by`, function is {reason}"
logWarningAt term_by.ref m!"unused `termination_by`, function is {reason}"
| _, _, _ =>
logErrorAt hints.ref m!"unused termination hints, function is {reason}"
logWarningAt hints.ref m!"unused termination hints, function is {reason}"
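The effect of this change is that stray hints on functions that do not need them no longer abort elaboration. A minimal sketch (hypothetical example illustrating the new behavior):

```lean
-- `double` is not recursive, so the hint is unused.
-- Before this change this was an error; now it is only a warning.
def double (n : Nat) : Nat := n + n
decreasing_by simp
```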
/-- True if any form of termination hint is present. -/
def TerminationHints.isNotNone (hints : TerminationHints) : Bool :=

View File

@@ -4,8 +4,8 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Util.FoldConsts
import Lean.Meta.Eqns
import Lean.Util.CollectAxioms
import Lean.Elab.Command
namespace Lean.Elab.Command
@@ -120,40 +120,12 @@ private def printId (id : Syntax) : CommandElabM Unit := do
| `(#print%$tk $s:str) => logInfoAt tk s.getString
| _ => throwError "invalid #print command"
namespace CollectAxioms
structure State where
visited : NameSet := {}
axioms : Array Name := #[]
abbrev M := ReaderT Environment $ StateM State
partial def collect (c : Name) : M Unit := do
let collectExpr (e : Expr) : M Unit := e.getUsedConstants.forM collect
let s ← get
unless s.visited.contains c do
modify fun s => { s with visited := s.visited.insert c }
let env ← read
match env.find? c with
| some (ConstantInfo.axiomInfo _) => modify fun s => { s with axioms := s.axioms.push c }
| some (ConstantInfo.defnInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.thmInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.opaqueInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.quotInfo _) => pure ()
| some (ConstantInfo.ctorInfo v) => collectExpr v.type
| some (ConstantInfo.recInfo v) => collectExpr v.type
| some (ConstantInfo.inductInfo v) => collectExpr v.type *> v.ctors.forM collect
| none => pure ()
end CollectAxioms
private def printAxiomsOf (constName : Name) : CommandElabM Unit := do
let env ← getEnv
let (_, s) := ((CollectAxioms.collect constName).run env).run {}
if s.axioms.isEmpty then
let axioms ← collectAxioms constName
if axioms.isEmpty then
logInfo m!"'{constName}' does not depend on any axioms"
else
logInfo m!"'{constName}' depends on axioms: {s.axioms.qsort Name.lt |>.toList}"
logInfo m!"'{constName}' depends on axioms: {axioms.qsort Name.lt |>.toList}"
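For reference, the command served by the refactored code behaves like this (output shown for the standard classical axioms):

```lean
theorem em' (p : Prop) : p ∨ ¬p := Classical.em p

#print axioms em'
-- 'em'' depends on axioms: [Classical.choice, propext, Quot.sound]
```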
@[builtin_command_elab «printAxioms»] def elabPrintAxioms : CommandElab
| `(#print%$tk axioms $id) => withRef tk do

View File

@@ -156,7 +156,9 @@ partial def evalTactic (stx : Syntax) : TacticM Unit := do
-- Macro writers create a sequence of tactics `t₁ ... tₙ` using `mkNullNode #[t₁, ..., tₙ]`
-- We could support incrementality here by allocating `n` new snapshot bundles but the
-- practical value is not clear
Term.withoutTacticIncrementality true do
-- NOTE: `withTacticInfoContext` is used to preserve the invariant of `elabTactic` producing
-- exactly one info tree, which is necessary for using `getInfoTreeWithContext`.
Term.withoutTacticIncrementality true <| withTacticInfoContext stx do
stx.getArgs.forM evalTactic
else withTraceNode `Elab.step (fun _ => return stx) (tag := stx.getKind.toString) do
let evalFns := tacticElabAttribute.getEntries (← getEnv) stx.getKind
@@ -223,14 +225,18 @@ where
snap.new.resolve <| .mk {
stx := stx'
diagnostics := .empty
finished := .pure { state? := (← Tactic.saveState) }
} #[{ range? := stx'.getRange?, task := promise.result }]
finished := .pure {
diagnostics := .empty
state? := (← Tactic.saveState)
}
next := #[{ range? := stx'.getRange?, task := promise.result }]
}
-- Update `tacSnap?` to old unfolding
withTheReader Term.Context ({ · with tacSnap? := some {
new := promise
old? := do
let old ← old?
return ⟨old.data.stx, (← old.next.get? 0)⟩
return ⟨old.data.stx, (← old.data.next.get? 0)⟩
} }) do
evalTactic stx'
return

View File

@@ -60,7 +60,7 @@ where
if let some snap := (← readThe Term.Context).tacSnap? then
if let some old := snap.old? then
let oldParsed := old.val.get
oldInner? := oldParsed.next.get? 0 |>.map (oldParsed.data.stx, ·)
oldInner? := oldParsed.data.inner? |>.map (oldParsed.data.stx, ·)
-- compare `stx[0]` for `finished`/`next` reuse, focus on remainder of script
Term.withNarrowedTacticReuse (stx := stx) (fun stx => (stx[0], mkNullNode stx.getArgs[1:])) fun stxs => do
let some snap := (← readThe Term.Context).tacSnap?
@@ -73,29 +73,47 @@ where
if let some state := oldParsed.data.finished.get.state? then
reusableResult? := some ((), state)
-- only allow `next` reuse in this case
oldNext? := oldParsed.next.get? 1 |>.map (old.stx, ·)
oldNext? := oldParsed.data.next.get? 0 |>.map (old.stx, ·)
-- For `tac`'s snapshot task range, disregard synthetic info as otherwise
-- `SnapshotTree.findInfoTreeAtPos` might choose the wrong snapshot: for example, when
-- hovering over a `show` tactic, we should choose the info tree in `finished` over that in
-- `inner`, which points to execution of the synthesized `refine` step and does not contain
-- the full info. In most other places, siblings in the snapshot tree have disjoint ranges and
-- so this issue does not occur.
let mut range? := tac.getRange? (canonicalOnly := true)
-- Include trailing whitespace in the range so that `goalsAs?` does not have to wait for more
-- snapshots than necessary.
if let some range := range? then
range? := some { range with stop := range.stop.byteIdx + tac.getTrailingSize }
withAlwaysResolvedPromise fun next => do
withAlwaysResolvedPromise fun finished => do
withAlwaysResolvedPromise fun inner => do
snap.new.resolve <| .mk {
diagnostics := .empty
stx := tac
diagnostics := (← Language.Snapshot.Diagnostics.ofMessageLog
(← Core.getAndEmptyMessageLog))
finished := finished.result
} #[
{
range? := tac.getRange?
task := inner.result },
{
range? := stxs |>.getRange?
task := next.result }]
let (_, state) ← withRestoreOrSaveFull reusableResult?
-- set up nested reuse; `evalTactic` will check for `isIncrementalElab`
(tacSnap? := some { old? := oldInner?, new := inner }) do
Term.withReuseContext tac do
evalTactic tac
finished.resolve { state? := state }
inner? := some { range?, task := inner.result }
finished := { range?, task := finished.result }
next := #[{ range? := stxs.getRange?, task := next.result }]
}
-- Run `tac` in a fresh info tree state and store resulting state in snapshot for
-- incremental reporting, then add back saved trees. Here we rely on `evalTactic`
-- producing at most one info tree as otherwise `getInfoTreeWithContext?` would panic.
let trees ← getResetInfoTrees
try
let (_, state) ← withRestoreOrSaveFull reusableResult?
-- set up nested reuse; `evalTactic` will check for `isIncrementalElab`
(tacSnap? := some { old? := oldInner?, new := inner }) do
Term.withReuseContext tac do
evalTactic tac
finished.resolve {
diagnostics := (← Language.Snapshot.Diagnostics.ofMessageLog
(← Core.getAndEmptyMessageLog))
infoTree? := (← Term.getInfoTreeWithContext?)
state? := state
}
finally
modifyInfoState fun s => { s with trees := trees ++ s.trees }
withTheReader Term.Context ({ · with tacSnap? := some {
new := next

View File

@@ -263,9 +263,10 @@ where
-- save all relevant syntax here for comparison with next document version
stx := mkNullNode altStxs
diagnostics := .empty
finished := finished.result
} (altStxs.zipWith altPromises fun stx prom =>
{ range? := stx.getRange?, task := prom.result })
finished := { range? := none, task := finished.result }
next := altStxs.zipWith altPromises fun stx prom =>
{ range? := stx.getRange?, task := prom.result }
}
goWithIncremental <| altPromises.mapIdx fun i prom => {
old? := do
let old ← tacSnap.old?
@@ -274,10 +275,10 @@ where
let old := old.val.get
-- use old version of `mkNullNode altsSyntax` as guard, will be compared with new
-- version and picked apart in `applyAltStx`
return ⟨old.data.stx, (← old.next[i]?)⟩
return ⟨old.data.stx, (← old.data.next[i]?)⟩
new := prom
}
finished.resolve { state? := (← saveState) }
finished.resolve { diagnostics := .empty, state? := (← saveState) }
return
goWithIncremental #[]

View File

@@ -190,33 +190,38 @@ structure SavedState where
term : Term.SavedState
tactic : State
/-- State after finishing execution of a tactic. -/
structure TacticFinished where
/-- Reusable state, if no fatal exception occurred. -/
/-- Snapshot after finishing execution of a tactic. -/
structure TacticFinishedSnapshot extends Language.Snapshot where
/-- State saved for reuse, if no fatal exception occurred. -/
state? : Option SavedState
deriving Inhabited
instance : ToSnapshotTree TacticFinishedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot, #[]⟩
/-- Snapshot just before execution of a tactic. -/
structure TacticParsedSnapshotData extends Language.Snapshot where
structure TacticParsedSnapshotData (TacticParsedSnapshot : Type) extends Language.Snapshot where
/-- Syntax tree of the tactic, stored and compared for incremental reuse. -/
stx : Syntax
/-- Task for nested incrementality, if enabled for tactic. -/
inner? : Option (SnapshotTask TacticParsedSnapshot) := none
/-- Task for state after tactic execution. -/
finished : Task TacticFinished
finished : SnapshotTask TacticFinishedSnapshot
/-- Tasks for subsequent, potentially parallel, tactic steps. -/
next : Array (SnapshotTask TacticParsedSnapshot) := #[]
deriving Inhabited
/-- State after execution of a single synchronous tactic step. -/
inductive TacticParsedSnapshot where
| mk (data : TacticParsedSnapshotData) (next : Array (SnapshotTask TacticParsedSnapshot))
| mk (data : TacticParsedSnapshotData TacticParsedSnapshot)
deriving Inhabited
abbrev TacticParsedSnapshot.data : TacticParsedSnapshot → TacticParsedSnapshotData
| .mk data _ => data
/-- Potential, potentially parallel, follow-up tactic executions. -/
-- In the first, non-parallel version, each task will depend on its predecessor
abbrev TacticParsedSnapshot.next : TacticParsedSnapshot → Array (SnapshotTask TacticParsedSnapshot)
| .mk _ next => next
abbrev TacticParsedSnapshot.data : TacticParsedSnapshot → TacticParsedSnapshotData TacticParsedSnapshot
| .mk data => data
partial instance : ToSnapshotTree TacticParsedSnapshot where
toSnapshotTree := go where
go := fun ⟨s, next⟩ => ⟨s.toSnapshot, next.map (·.map (sync := true) go)⟩
go := fun ⟨s⟩ => ⟨s.toSnapshot,
s.inner?.toArray.map (·.map (sync := true) go) ++
#[s.finished.map (sync := true) toSnapshotTree] ++
s.next.map (·.map (sync := true) go)⟩
end Snapshot
end Tactic
@@ -630,6 +635,32 @@ private def withoutModifyingStateWithInfoAndMessagesImpl (x : TermElabM α) : Te
let saved := { saved with meta.core.infoState := (← getInfoState), meta.core.messages := (← getThe Core.State).messages }
restoreState saved
/--
Wraps the trees returned from `getInfoTrees`, if any, in an `InfoTree.context` node based on the
current monadic context and state. This is mainly used to report info trees early via
`Snapshot.infoTree?`. The trees are not removed from the `getInfoTrees` state as the final info tree
of the elaborated command should be complete and not depend on whether parts have been reported
early.
As `InfoTree.context` can have only one child, this function panics if `trees` contains more than 1
tree. Also, `PartialContextInfo.parentDeclCtx` is not currently generated as that information is not
available in the monadic context and only needed for the final info tree.
-/
def getInfoTreeWithContext? : TermElabM (Option InfoTree) := do
let st ← getInfoState
if st.trees.size > 1 then
return panic! "getInfoTreeWithContext: overfull tree"
let some t := st.trees[0]? |
return none
let t := t.substitute st.assignment
let ctx ← readThe Core.Context
let s ← getThe Core.State
let ctx := PartialContextInfo.commandCtx {
env := s.env, fileMap := ctx.fileMap, mctx := {}, currNamespace := ctx.currNamespace,
openDecls := ctx.openDecls, options := ctx.options, ngen := s.ngen
}
return InfoTree.context ctx t
/-- For testing `TermElabM` methods. The `#eval` command will signal the error. -/
def throwErrorIfErrors : TermElabM Unit := do
if (← MonadLog.hasErrors) then

View File

@@ -12,6 +12,7 @@ prelude
import Init.System.Promise
import Lean.Message
import Lean.Parser.Types
import Lean.Elab.InfoTree
set_option linter.missingDocs true
@@ -46,6 +47,8 @@ def Snapshot.Diagnostics.empty : Snapshot.Diagnostics where
The base class of all snapshots: all the generic information the language server needs about a
snapshot. -/
structure Snapshot where
/-- Debug description shown by `trace.Elab.snapshotTree`, defaults to the caller's decl name. -/
desc : String := by exact decl_name%.toString
/--
The messages produced by this step. The union of message logs of all finished snapshots is
reported to the user. -/
@@ -71,7 +74,7 @@ structure SnapshotTask (α : Type) where
range? : Option String.Range
/-- Underlying task producing the snapshot. -/
task : Task α
deriving Nonempty
deriving Nonempty, Inhabited
/-- Creates a snapshot task from a reporting range and a `BaseIO` action. -/
def SnapshotTask.ofIO (range? : Option String.Range) (act : BaseIO α) : BaseIO (SnapshotTask α) := do
@@ -203,15 +206,19 @@ abbrev SnapshotTree.children : SnapshotTree → Array (SnapshotTask SnapshotTree
| mk _ children => children
/-- Produces debug tree format of given snapshot tree, synchronously waiting on all children. -/
partial def SnapshotTree.format : SnapshotTree → Format := go none
where go range? s :=
let range := match range? with
| some range => f!"{range.start}..{range.stop} "
| none => ""
let element := f!"{s.element.diagnostics.msgLog.unreported.size} diagnostics"
let children := Std.Format.prefixJoin .line <|
s.children.toList.map fun c => go c.range? c.get
.nestD f!"• {range}{element}{children}"
partial def SnapshotTree.format [Monad m] [MonadFileMap m] [MonadLiftT IO m] :
SnapshotTree → m Format :=
go none
where go range? s := do
let file ← getFileMap
let mut desc := f!"{s.element.desc}"
if let some range := range? then
desc := desc ++ f!"{file.toPosition range.start}-{file.toPosition range.stop} "
desc := desc ++ .prefixJoin "\n" (← s.element.diagnostics.msgLog.toList.mapM (·.toString))
if let some t := s.element.infoTree? then
desc := desc ++ f!"\n{← t.format}"
desc := desc ++ .prefixJoin "\n" (← s.children.toList.mapM fun c => go c.range? c.get)
return .nestD desc
/--
Helper class for projecting a heterogeneous hierarchy of snapshot classes to a homogeneous

View File

@@ -362,16 +362,16 @@ where
parseHeader (old? : Option HeaderParsedSnapshot) : LeanProcessingM HeaderParsedSnapshot := do
let ctx ← read
let ictx := ctx.toInputContext
let unchanged old newParserState :=
let unchanged old newStx newParserState :=
-- when header syntax is unchanged, reuse import processing task as is and continue with
-- parsing the first command, synchronously if possible
-- NOTE: even if the syntax tree is functionally unchanged, the new parser state may still
-- have changed because of trailing whitespace and comments etc., so it is passed separately
-- from `old`
-- NOTE: even if the syntax tree is functionally unchanged, its concrete structure and the new
-- parser state may still have changed because of trailing whitespace and comments etc., so
-- they are passed separately from `old`
if let some oldSuccess := old.result? then
return {
ictx
stx := old.stx
stx := newStx
diagnostics := old.diagnostics
cancelTk? := ctx.newCancelTk
result? := some { oldSuccess with
@@ -394,7 +394,7 @@ where
if let some nextCom ← processed.firstCmdSnap.get? then
if (← isBeforeEditPos nextCom.data.parserState.pos) then
-- ...go immediately to next snapshot
return (← unchanged old oldSuccess.parserState)
return (← unchanged old old.stx oldSuccess.parserState)
withHeaderExceptions ({ · with
ictx, stx := .missing, result? := none, cancelTk? := none }) do
@@ -408,16 +408,19 @@ where
cancelTk? := none
}
-- semi-fast path: go to next snapshot if syntax tree is unchanged AND we're still in front
-- of the edit location
-- TODO: dropping the second condition would require adjusting positions in the state
-- NOTE: as `parserState.pos` includes trailing whitespace, this forces reprocessing even if
-- only that whitespace changes, which is wasteful but still necessary because it may
-- influence the range of error messages such as from a trailing `exact`
let trimmedStx := stx.unsetTrailing
-- semi-fast path: go to next snapshot if syntax tree is unchanged
-- NOTE: We compare modulo `unsetTrailing` in order to ensure that changes in trailing
-- whitespace do not invalidate the header. This is safe because we only pass the trimmed
-- syntax tree to `processHeader` below, so there cannot be any references to the trailing
-- whitespace in its result. We still store the untrimmed syntax tree in the snapshot in order
-- to uphold the invariant that concatenating all top-level snapshots' syntax trees results in
-- the original file.
if let some old := old? then
if (← isBeforeEditPos parserState.pos) && old.stx == stx then
-- Here we must make sure to pass the *new* parser state; see NOTE in `unchanged`
return (← unchanged old parserState)
if trimmedStx.eqWithInfo old.stx.unsetTrailing then
-- Here we must make sure to pass the *new* syntax and parser state; see NOTE in
-- `unchanged`
return (← unchanged old stx parserState)
-- on first change, make sure to cancel old invocation
if let some tk := ctx.oldCancelTk? then
tk.set
@@ -426,7 +429,7 @@ where
diagnostics := (← Snapshot.Diagnostics.ofMessageLog msgLog)
result? := some {
parserState
processedSnap := (← processHeader stx parserState)
processedSnap := (← processHeader trimmedStx parserState)
}
cancelTk? := ctx.newCancelTk
}
@@ -523,7 +526,10 @@ where
-- semi-fast path
if let some old := old? then
if (← isBeforeEditPos parserState.pos ctx) && old.data.stx == stx then
-- NOTE: as `parserState.pos` includes trailing whitespace, this forces reprocessing even if
-- only that whitespace changes, which is wasteful but still necessary because it may
-- influence the range of error messages such as from a trailing `exact`
if stx.eqWithInfo old.data.stx then
-- Here we must make sure to pass the *new* parser state; see NOTE in `unchanged`
return (← unchanged old parserState)
-- on first change, make sure to cancel old invocation

View File

@@ -665,27 +665,6 @@ def mkIffOfEq (h : Expr) : MetaM Expr := do
else
mkAppM ``Iff.of_eq #[h]
/--
Given proofs of `P₁`, …, `Pₙ`, returns a proof of `P₁ ∧ … ∧ Pₙ`.
If `n = 0` returns a proof of `True`.
If `n = 1` returns the proof of `P₁`.
-/
def mkAndIntroN : Array Expr → MetaM Expr
| #[] => return mkConst ``True.intro []
| #[e] => return e
| es => es.foldrM (start := es.size - 1) (fun a b => mkAppM ``And.intro #[a,b]) es.back
/-- Given a proof of `P₁ ∧ … ∧ Pᵢ ∧ … ∧ Pₙ`, return the proof of `Pᵢ` -/
def mkProjAndN (n i : Nat) (e : Expr) : Expr := Id.run do
let mut value := e
for _ in [:i] do
value := mkProj ``And 1 value
if i + 1 < n then
value := mkProj ``And 0 value
return value
builtin_initialize do
registerTraceClass `Meta.appBuilder
registerTraceClass `Meta.appBuilder.result (inherited := true)

View File

@@ -6,6 +6,7 @@ Authors: Joachim Breitner
prelude
import Lean.Meta.AppBuilder
import Lean.Meta.PProdN
import Lean.Meta.ArgsPacker.Basic
/-!
@@ -518,7 +519,7 @@ def curry (argsPacker : ArgsPacker) (e : Expr) : MetaM Expr := do
let mut es := #[]
for i in [:argsPacker.numFuncs] do
es := es.push (← argsPacker.curryProj e i)
mkAndIntroN es
PProdN.mk 0 es
/--
Given type `(a ⊗' b ⊕' c ⊗' d) → e`, brings `a → b → e` and `c → d → e`
@@ -533,7 +534,7 @@ where
| [], acc => k acc
| t::ts, acc => do
let name := if argsPacker.numFuncs = 1 then name else .mkSimple s!"{name}{acc.size+1}"
withLocalDecl name .default t fun x => do
withLocalDeclD name t fun x => do
go ts (acc.push x)
/--

View File

@@ -8,67 +8,18 @@ import Lean.Meta.InferType
import Lean.AuxRecursor
import Lean.AddDecl
import Lean.Meta.CompletionName
import Lean.Meta.PProdN
namespace Lean
open Meta
section PProd
/-!
Helpers to construct types and values of `PProd` and project out of them, set up to use `And`
instead of `PProd` if the universes allow. May be extracted into a Utils module when useful
elsewhere.
-/
private def mkPUnit : Level → Expr
| .zero => .const ``True []
| lvl => .const ``PUnit [lvl]
private def mkPProd (e1 e2 : Expr) : MetaM Expr := do
let lvl1 ← getLevel e1
let lvl2 ← getLevel e2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp2 (.const ``And []) e1 e2
else
return mkApp2 (.const ``PProd [lvl1, lvl2]) e1 e2
private def mkNProd (lvl : Level) (es : Array Expr) : MetaM Expr :=
es.foldrM (init := mkPUnit lvl) mkPProd
private def mkPUnitMk : Level → Expr
| .zero => .const ``True.intro []
| lvl => .const ``PUnit.unit [lvl]
private def mkPProdMk (e1 e2 : Expr) : MetaM Expr := do
let t1 ← inferType e1
let t2 ← inferType e2
let lvl1 ← getLevel t1
let lvl2 ← getLevel t2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp4 (.const ``And.intro []) t1 t2 e1 e2
else
return mkApp4 (.const ``PProd.mk [lvl1, lvl2]) t1 t2 e1 e2
private def mkNProdMk (lvl : Level) (es : Array Expr) : MetaM Expr :=
es.foldrM (init := mkPUnitMk lvl) mkPProdMk
/-- `PProd.fst` or `And.left` (as projections) -/
private def mkPProdFst (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 0 e
| And _ _ => return .proj ``And 0 e
| _ => throwError "Cannot project .1 out of{indentExpr e}\nof type{indentExpr t}"
/-- `PProd.snd` or `And.right` (as projections) -/
private def mkPProdSnd (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 1 e
| And _ _ => return .proj ``And 1 e
| _ => throwError "Cannot project .2 out of{indentExpr e}\nof type{indentExpr t}"
end PProd
/-- Transforms `e : xᵢ → (t₁ ×' t₂)` into `(xᵢ → t₁) ×' (xᵢ → t₂)` -/
private def etaPProd (xs : Array Expr) (e : Expr) : MetaM Expr := do
if xs.isEmpty then return e
let r := mkAppN e xs
let r₁ ← mkLambdaFVars xs (← mkPProdFst r)
let r₂ ← mkLambdaFVars xs (← mkPProdSnd r)
mkPProdMk r₁ r₂
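The transformation performed by `etaPProd` can be stated as a plain term (a sketch in the `Type`-valued, non-dependent case):

```lean
-- From `f : ∀ x, t₁ ×' t₂` build the pair of component functions:
example (f : Nat → PProd Nat Bool) : PProd (Nat → Nat) (Nat → Bool) :=
  ⟨fun n => (f n).1, fun n => (f n).2⟩
```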
/--
If `minorType` is the type of a minor premise of a recursor, such as
@@ -91,7 +42,7 @@ private def buildBelowMinorPremise (rlvl : Level) (motives : Array Expr) (minorT
where
ibelow := rlvl matches .zero
go (prods : Array Expr) : List Expr → MetaM Expr
| [] => mkNProd rlvl prods
| [] => PProdN.pack rlvl prods
| arg::args => do
let argType ← inferType arg
forallTelescope argType fun arg_args arg_type => do
@@ -243,7 +194,7 @@ private def buildBRecOnMinorPremise (rlvl : Level) (motives : Array Expr)
forallTelescope minorType fun minor_args minor_type => do
let rec go (prods : Array Expr) : List Expr → MetaM Expr
| [] => minor_type.withApp fun minor_type_fn minor_type_args => do
let b ← mkNProdMk rlvl prods
let b ← PProdN.mk rlvl prods
let .some ⟨idx, _⟩ := motives.indexOf? minor_type_fn
| throwError m!"Did not find {minor_type} in {motives}"
mkPProdMk (mkAppN fs[idx]! (minor_type_args.push b)) b
@@ -256,14 +207,8 @@ private def buildBRecOnMinorPremise (rlvl : Level) (motives : Array Expr)
let type' ← mkForallFVars arg_args
(← mkPProd arg_type (mkAppN belows[idx]! arg_type_args))
withLocalDeclD name type' fun arg' => do
if arg_args.isEmpty then
mkLambdaFVars #[arg'] (← go (prods.push arg') args)
else
let r := mkAppN arg' arg_args
let r₁ ← mkLambdaFVars arg_args (← mkPProdFst r)
let r₂ ← mkLambdaFVars arg_args (← mkPProdSnd r)
let r ← mkPProdMk r₁ r₂
mkLambdaFVars #[arg'] (← go (prods.push r) args)
let r ← etaPProd arg_args arg'
mkLambdaFVars #[arg'] (← go (prods.push r) args)
else
mkLambdaFVars #[arg] (← go prods args)
go #[] minor_args.toList

View File

@@ -6,6 +6,7 @@ Authors: Dany Fabian
prelude
import Lean.Meta.Constructions.CasesOn
import Lean.Meta.Match.Match
import Lean.Meta.Tactic.SolveByElim
namespace Lean.Meta.IndPredBelow
open Match
@@ -230,22 +231,28 @@ def mkBelowDecl (ctx : Context) : MetaM Declaration := do
ctx.typeInfos[0]!.isUnsafe
partial def backwardsChaining (m : MVarId) (depth : Nat) : MetaM Bool := do
if depth = 0 then return false
else
m.withContext do
let lctx ← getLCtx
m.withContext do
let mTy ← m.getType
lctx.anyM fun localDecl =>
if localDecl.isAuxDecl then
return false
else
commitWhen do
let (mvars, _, t) ← forallMetaTelescope localDecl.type
if ← isDefEq mTy t then
m.assign (mkAppN localDecl.toExpr mvars)
mvars.allM fun v =>
v.mvarId!.isAssigned <||> backwardsChaining v.mvarId! (depth - 1)
else return false
if depth = 0 then
trace[Meta.IndPredBelow.search] "searching for {mTy}: ran out of max depth"
return false
else
let lctx ← getLCtx
let r ← lctx.anyM fun localDecl =>
if localDecl.isAuxDecl then
return false
else
commitWhen do
let (mvars, _, t) ← forallMetaTelescope localDecl.type
if (← isDefEq mTy t) then
trace[Meta.IndPredBelow.search] "searching for {mTy}: trying {mkFVar localDecl.fvarId} : {localDecl.type}"
m.assign (mkAppN localDecl.toExpr mvars)
mvars.allM fun v =>
v.mvarId!.isAssigned <||> backwardsChaining v.mvarId! (depth - 1)
else return false
unless r do
trace[Meta.IndPredBelow.search] "searching for {mTy} failed"
return r
partial def proveBrecOn (ctx : Context) (indVal : InductiveVal) (type : Expr) : MetaM Expr := do
let main ← mkFreshExprSyntheticOpaqueMVar type
@@ -563,7 +570,7 @@ def findBelowIdx (xs : Array Expr) (motive : Expr) : MetaM $ Option (Expr × Nat
let below ← mkFreshExprSyntheticOpaqueMVar belowTy
try
trace[Meta.IndPredBelow.match] "{←Meta.ppGoal below.mvarId!}"
if (← backwardsChaining below.mvarId! 10) then
if (← below.mvarId!.applyRules { backtracking := false, maxDepth := 1 } []).isEmpty then
trace[Meta.IndPredBelow.match] "Found below term in the local context: {below}"
if (← xs.anyM (isDefEq below)) then pure none else pure (below, idx.val)
else
@@ -596,5 +603,6 @@ def mkBelow (declName : Name) : MetaM Unit := do
builtin_initialize
registerTraceClass `Meta.IndPredBelow
registerTraceClass `Meta.IndPredBelow.match
registerTraceClass `Meta.IndPredBelow.search
end Lean.Meta.IndPredBelow
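The motivating case from the commit description can be replayed against this module; `F` and `F.asdf1` are the illustrative names from the commit message, and the new trace class registered above shows the search:

```lean
set_option trace.Meta.IndPredBelow.search true

inductive F : Prop where
  | base
  | step (fn : Nat → F)

-- Structural recursion over a reflexive inductive predicate;
-- this is the case that previously failed with
-- "could not solve using backwards chaining".
def F.asdf1 : (f : F) → True
  | base => trivial
  | step f => F.asdf1 (f 0)
termination_by structural f => f
```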

145
src/Lean/Meta/PProdN.lean Normal file
View File

@@ -0,0 +1,145 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
prelude
import Lean.Meta.InferType
/-!
This module provides functions to pack and unpack values using nested `PProd` or `And`,
as used in the `.below` construction, in the `.brecOn` construction for mutual recursion, and
in the `mutual_induct` construction.
It uses `And` (equivalent to `PProd` in `Prop`) when possible.
The nesting is `t₁ ×' (t₂ ×' t₃)`, not `t₁ ×' (t₂ ×' (t₃ ×' PUnit))`. This is more readable,
slightly shorter, and means that the packing is the identity if `n = 1`, which we rely on in some
places. It comes at the expense that the projection needs to know `n`.
Packing an empty list uses `True` or `PUnit` depending on the given `lvl`.
Also see `Lean.Meta.ArgsPacker` for a similar module for `PSigma` and `PSum`, used by well-founded recursion.
-/
namespace Lean.Meta
/-- Given types `t₁` and `t₂`, produces `t₁ ×' t₂` (or `t₁ ∧ t₂` if the universes allow) -/
def mkPProd (e1 e2 : Expr) : MetaM Expr := do
let lvl1 ← getLevel e1
let lvl2 ← getLevel e2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp2 (.const ``And []) e1 e2
else
return mkApp2 (.const ``PProd [lvl1, lvl2]) e1 e2
/-- Given values of types `t₁` and `t₂`, produces a value of type `t₁ ×' t₂` (or `t₁ ∧ t₂` if the universes allow) -/
def mkPProdMk (e1 e2 : Expr) : MetaM Expr := do
let t1 ← inferType e1
let t2 ← inferType e2
let lvl1 ← getLevel t1
let lvl2 ← getLevel t2
if lvl1 matches .zero && lvl2 matches .zero then
return mkApp4 (.const ``And.intro []) t1 t2 e1 e2
else
return mkApp4 (.const ``PProd.mk [lvl1, lvl2]) t1 t2 e1 e2
/-- `PProd.fst` or `And.left` (using `.proj`) -/
def mkPProdFst (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 0 e
| And _ _ => return .proj ``And 0 e
| _ => panic! "mkPProdFst: cannot handle{indentExpr e}\nof type{indentExpr t}"
/-- `PProd.snd` or `And.right` (using `.proj`) -/
def mkPProdSnd (e : Expr) : MetaM Expr := do
let t ← whnf (← inferType e)
match_expr t with
| PProd _ _ => return .proj ``PProd 1 e
| And _ _ => return .proj ``And 1 e
| _ => panic! "mkPProdSnd: cannot handle{indentExpr e}\nof type{indentExpr t}"
namespace PProdN
/-- Given types `tᵢ`, produces `t₁ ×' t₂ ×' t₃` -/
def pack (lvl : Level) (xs : Array Expr) : MetaM Expr := do
if xs.size = 0 then
if lvl matches .zero then return .const ``True []
else return .const ``PUnit [lvl]
let xBack := xs.back
xs.pop.foldrM mkPProd xBack
/-- Given values `xᵢ` of type `tᵢ`, produces value of type `t₁ ×' t₂ ×' t₃` -/
def mk (lvl : Level) (xs : Array Expr) : MetaM Expr := do
if xs.size = 0 then
if lvl matches .zero then return .const ``True.intro []
else return .const ``PUnit.unit [lvl]
let xBack := xs.back
xs.pop.foldrM mkPProdMk xBack
/-- Given a value of type `t₁ ×' … ×' tᵢ ×' … ×' tₙ`, return a value of type `tᵢ` -/
def proj (n i : Nat) (e : Expr) : MetaM Expr := do
let mut value := e
for _ in [:i] do
value ← mkPProdSnd value
if i+1 < n then
mkPProdFst value
else
pure value
/--
Packs multiple type-forming lambda expressions taking the same parameters using `PProd`.
The parameter `type` is the common type of these expressions.
For example
```
packLambdas (Nat → Sort u) #[(fun (n : Nat) => Nat), (fun (n : Nat) => Fin n -> Fin n )]
```
will return
```
fun (n : Nat) => (Nat ×' (Fin n → Fin n))
```
It is the identity if `es.size = 1`.
It returns a dummy motive `(xs : …) → PUnit` or `(xs : …) → True` if no expressions are given.
(this is the reason we need the expected type in the `type` parameter).
-/
def packLambdas (type : Expr) (es : Array Expr) : MetaM Expr := do
if es.size = 1 then
return es[0]!
forallTelescope type fun xs sort => do
assert! sort.isSort
-- NB: Use beta, not instantiateLambda; when constructing the belowDict below
-- we pass `C`, a plain FVar, here
let es' := es.map (·.beta xs)
let packed ← PProdN.pack sort.sortLevel! es'
mkLambdaFVars xs packed
/--
The value analogue to `PProdN.packLambdas`.
It is the identity if `es.size = 1`.
-/
def mkLambdas (type : Expr) (es : Array Expr) : MetaM Expr := do
if es.size = 1 then
return es[0]!
forallTelescope type fun xs body => do
let lvl ← getLevel body
let es' := es.map (·.beta xs)
let packed ← PProdN.mk lvl es'
mkLambdaFVars xs packed
end PProdN
end Lean.Meta
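The right-nested, no-trailing-`PUnit` encoding described in the module docstring can be illustrated directly (a sketch in plain terms, not using the metaprograms themselves):

```lean
-- `×'` (PProd) is right-associative, so packing three types yields
-- `Nat ×' (Bool ×' String)` with no trailing `PUnit`, and the anonymous
-- constructor flattens accordingly.
example (a : Nat) (b : Bool) (c : String) : Nat ×' Bool ×' String :=
  ⟨a, b, c⟩

-- `PProdN.proj n i` emits `i` applications of `.snd` followed by `.fst`
-- (omitting the final `.fst` when `i` is the last index), so the component
-- at index 1 of a 3-element pack is `.snd.fst`:
example (x : Nat ×' Bool ×' String) : Bool := x.snd.fst
```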

File diff suppressed because it is too large Load Diff

View File

@@ -269,7 +269,7 @@ corresponding `end <id>` or the end of the file.
"namespace " >> checkColGt >> ident
/--
`end` closes a `section` or `namespace` scope. If the scope is named `<id>`, it has to be closed
with `end <id>`.
with `end <id>`. The `end` command is optional at the end of a file.
-/
@[builtin_command_parser] def «end» := leading_parser
"end" >> optional (ppSpace >> checkColGt >> ident)
@@ -437,6 +437,8 @@ structure Pair (α : Type u) (β : Type v) : Type (max u v) where
"#check_failure " >> termParser -- Like `#check`, but succeeds only if term does not type check
@[builtin_command_parser] def eval := leading_parser
"#eval " >> termParser
@[builtin_command_parser] def evalBang := leading_parser
"#eval! " >> termParser
@[builtin_command_parser] def synth := leading_parser
"#synth " >> termParser
@[builtin_command_parser] def exit := leading_parser

View File

@@ -350,15 +350,17 @@ def term.parenthesizer : CategoryParenthesizer | prec => do
maybeParenthesize `term true wrapParens prec $
parenthesizeCategoryCore `term prec
where
/-- Wraps the term `stx` in parentheses and then copies its `SourceInfo` to the result.
The purpose of this is to copy synthetic delaborator positions from the `stx` node to the parentheses node,
which causes the info view to view both of these nodes as referring to the same expression.
If we did not copy info, the info view would consider the parentheses to belong to the outer term.
/-- Wraps the term `stx` in parentheses and then moves its `SourceInfo` to the result.
The purpose of this is to move synthetic delaborator positions from the `stx` node to the parentheses node,
which causes the info view to view the node with parentheses as referring to the parenthesized expression.
If we did not move info, the info view would consider the parentheses to belong to the outer term.
Note: we do not do `withRef stx` because that causes the "(" and ")" tokens to have source info as well,
causing the info view to highlight each parenthesis as an independent expression. -/
wrapParens (stx : Syntax) : Syntax := Unhygienic.run do
let stxInfo := SourceInfo.fromRef stx
let stx := stx.setInfo .none
let pstx ← `(($(stx)))
return pstx.raw.setInfo (SourceInfo.fromRef stx)
return pstx.raw.setInfo stxInfo
@[builtin_category_parenthesizer tactic]
def tactic.parenthesizer : CategoryParenthesizer | prec => do

View File

@@ -234,31 +234,27 @@ def getInteractiveGoals (p : Lsp.PlainGoalParams) : RequestM (RequestTask (Optio
let doc ← readDoc
let text := doc.meta.text
let hoverPos := text.lspPosToUtf8Pos p.position
-- NOTE: use `>=` since the cursor can be *after* the input
withWaitFindSnap doc (fun s => s.endPos >= hoverPos)
(notFoundX := return none) fun snap => do
if let rs@(_ :: _) := snap.infoTree.goalsAt? doc.meta.text hoverPos then
let goals : List Widget.InteractiveGoals ← rs.mapM fun { ctxInfo := ci, tacticInfo := ti, useAfter := useAfter, .. } => do
let ciAfter := { ci with mctx := ti.mctxAfter }
let ci := if useAfter then ciAfter else { ci with mctx := ti.mctxBefore }
-- compute the interactive goals
let goals ← ci.runMetaM {} (do
let goals := List.toArray <| if useAfter then ti.goalsAfter else ti.goalsBefore
let goals ← goals.mapM Widget.goalToInteractive
return {goals}
)
-- compute the goal diff
let goals ← ciAfter.runMetaM {} (do
try
Widget.diffInteractiveGoals useAfter ti goals
catch _ =>
-- fail silently, since this is just a bonus feature
return goals
)
return goals
return some <| goals.foldl (· ++ ·)
else
return none
mapTask (findInfoTreeAtPos doc hoverPos) <| Option.bindM fun infoTree => do
let rs@(_ :: _) := infoTree.goalsAt? doc.meta.text hoverPos
| return none
let goals : List Widget.InteractiveGoals ← rs.mapM fun { ctxInfo := ci, tacticInfo := ti, useAfter := useAfter, .. } => do
let ciAfter := { ci with mctx := ti.mctxAfter }
let ci := if useAfter then ciAfter else { ci with mctx := ti.mctxBefore }
-- compute the interactive goals
let goals ← ci.runMetaM {} (do
let goals := List.toArray <| if useAfter then ti.goalsAfter else ti.goalsBefore
let goals ← goals.mapM Widget.goalToInteractive
return {goals}
)
-- compute the goal diff
ciAfter.runMetaM {} (do
try
Widget.diffInteractiveGoals useAfter ti goals
catch _ =>
-- fail silently, since this is just a bonus feature
return goals
)
return some <| goals.foldl (· ++ ·)
open Elab in
def handlePlainGoal (p : PlainGoalParams)
@@ -280,19 +276,17 @@ def getInteractiveTermGoal (p : Lsp.PlainTermGoalParams)
let doc ← readDoc
let text := doc.meta.text
let hoverPos := text.lspPosToUtf8Pos p.position
withWaitFindSnap doc (fun s => s.endPos > hoverPos)
(notFoundX := pure none) fun snap => do
if let some {ctx := ci, info := i@(Elab.Info.ofTermInfo ti), ..} := snap.infoTree.termGoalAt? hoverPos then
let ty ← ci.runMetaM i.lctx do
instantiateMVars <| ti.expectedType?.getD (← Meta.inferType ti.expr)
-- for binders, hide the last hypothesis (the binder itself)
let lctx' := if ti.isBinder then i.lctx.pop else i.lctx
let goal ← ci.runMetaM lctx' do
Widget.goalToInteractive (← Meta.mkFreshExprMVar ty).mvarId!
let range := if let some r := i.range? then r.toLspRange text else ⟨p.position, p.position⟩
return some { goal with range, term := ti }
else
return none
mapTask (findInfoTreeAtPos doc hoverPos) <| Option.bindM fun infoTree => do
let some {ctx := ci, info := i@(Elab.Info.ofTermInfo ti), ..} := infoTree.termGoalAt? hoverPos
| return none
let ty ← ci.runMetaM i.lctx do
instantiateMVars <| ti.expectedType?.getD (← Meta.inferType ti.expr)
-- for binders, hide the last hypothesis (the binder itself)
let lctx' := if ti.isBinder then i.lctx.pop else i.lctx
let goal ← ci.runMetaM lctx' do
Widget.goalToInteractive (← Meta.mkFreshExprMVar ty).mvarId!
let range := if let some r := i.range? then r.toLspRange text else ⟨p.position, p.position⟩
return some { goal with range, term := ti }
def handlePlainTermGoal (p : PlainTermGoalParams)
: RequestM (RequestTask (Option PlainTermGoal)) := do

View File

@@ -37,6 +37,8 @@ def moduleFromDocumentUri (srcSearchPath : SearchPath) (uri : DocumentUri)
open Elab in
def locationLinksFromDecl (srcSearchPath : SearchPath) (uri : DocumentUri) (n : Name)
(originRange? : Option Range) : MetaM (Array LocationLink) := do
-- Potentially this name is a builtin that has not been imported yet:
unless (← getEnv).contains n do return #[]
let mod? ← findModuleOf? n
let modUri? ← match mod? with
| some modName => documentUriFromModule srcSearchPath modName

View File

@@ -16,6 +16,32 @@ import Lean.Server.FileWorker.Utils
import Lean.Server.Rpc.Basic
namespace Lean.Language
/--
Finds the first (in pre-order) snapshot task in `tree` whose `range?` contains `pos` and which
contains an info tree, and then returns that info tree, waiting for any snapshot tasks on the way.
Subtrees that do not contain the position are skipped without forcing their tasks.
-/
partial def SnapshotTree.findInfoTreeAtPos (tree : SnapshotTree) (pos : String.Pos) :
Task (Option Elab.InfoTree) :=
goSeq tree.children.toList
where
goSeq
| [] => .pure none
| t::ts =>
if t.range?.any (·.contains pos) then
t.task.bind (sync := true) fun tree => Id.run do
if let some infoTree := tree.element.infoTree? then
return .pure infoTree
tree.findInfoTreeAtPos pos |>.bind (sync := true) fun
| some infoTree => .pure (some infoTree)
| none => goSeq ts
else
goSeq ts
end Lean.Language
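The traversal in `SnapshotTree.findInfoTreeAtPos` chains `Task.bind` with `sync := true` so snapshot tasks are forced one at a time, in order, and subtrees whose `range?` does not contain the position are skipped without ever being forced. The same pattern in isolation (a hypothetical `firstMatching` helper, not part of the API, which omits the range pre-check):

```lean
-- Lazily walk a list of tasks, forcing each synchronously and stopping at
-- the first result satisfying `p`, mirroring the shape of `goSeq` above.
def firstMatching (p : α → Bool) : List (Task α) → Task (Option α)
  | [] => .pure none
  | t :: ts => t.bind (sync := true) fun a =>
      if p a then .pure (some a) else firstMatching p ts
```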
namespace Lean.Server
structure RequestError where
@@ -144,6 +170,45 @@ def withWaitFindSnapAtPos
(notFoundX := throw ⟨.invalidParams, s!"no snapshot found at {lspPos}"⟩)
(x := f)
open Language.Lean in
/-- Finds the first `CommandParsedSnapshot` fulfilling `p`, asynchronously. -/
partial def findCmdParsedSnap (doc : EditableDocument) (p : CommandParsedSnapshot → Bool) :
Task (Option CommandParsedSnapshot) := Id.run do
let some headerParsed := doc.initSnap.result?
| .pure none
headerParsed.processedSnap.task.bind (sync := true) fun headerProcessed => Id.run do
let some headerSuccess := headerProcessed.result?
| return .pure none
headerSuccess.firstCmdSnap.task.bind (sync := true) go
where
go cmdParsed :=
if p cmdParsed then
.pure (some cmdParsed)
else
match cmdParsed.nextCmdSnap? with
| some next => next.task.bind (sync := true) go
| none => .pure none
open Language in
/--
Finds the info tree of the first snapshot task containing `pos`, asynchronously. The info tree may
be from a nested snapshot, such as a single tactic.
See `SnapshotTree.findInfoTreeAtPos` for details on how the search is done.
-/
partial def findInfoTreeAtPos (doc : EditableDocument) (pos : String.Pos) :
Task (Option Elab.InfoTree) :=
-- NOTE: use `>=` since the cursor can be *after* the input (and there is no interesting info on
-- the first character of the subsequent command if any)
findCmdParsedSnap doc (·.data.parserState.pos ≥ pos) |>.bind (sync := true) fun
| some cmdParsed => toSnapshotTree cmdParsed |>.findInfoTreeAtPos pos |>.bind (sync := true) fun
| some infoTree => .pure <| some infoTree
| none => cmdParsed.data.finishedSnap.task.map (sync := true) fun s =>
-- the parser returns exactly one command per snapshot, and the elaborator creates exactly one node per command
assert! s.cmdState.infoState.trees.size == 1
some s.cmdState.infoState.trees[0]!
| none => .pure none
open Elab.Command in
def runCommandElabM (snap : Snapshot) (c : RequestT CommandElabM α) : RequestM α := do
let rc ← readThe RequestContext

View File

@@ -0,0 +1,45 @@
/-
Copyright (c) 2020 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.MonadEnv
import Lean.Util.FoldConsts
namespace Lean
namespace CollectAxioms
structure State where
visited : NameSet := {}
axioms : Array Name := #[]
abbrev M := ReaderT Environment $ StateM State
partial def collect (c : Name) : M Unit := do
let collectExpr (e : Expr) : M Unit := e.getUsedConstants.forM collect
let s ← get
unless s.visited.contains c do
modify fun s => { s with visited := s.visited.insert c }
let env ← read
match env.find? c with
| some (ConstantInfo.axiomInfo _) => modify fun s => { s with axioms := s.axioms.push c }
| some (ConstantInfo.defnInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.thmInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.opaqueInfo v) => collectExpr v.type *> collectExpr v.value
| some (ConstantInfo.quotInfo _) => pure ()
| some (ConstantInfo.ctorInfo v) => collectExpr v.type
| some (ConstantInfo.recInfo v) => collectExpr v.type
| some (ConstantInfo.inductInfo v) => collectExpr v.type *> v.ctors.forM collect
| none => pure ()
end CollectAxioms
def collectAxioms [Monad m] [MonadEnv m] (constName : Name) : m (Array Name) := do
let env ← getEnv
let (_, s) := ((CollectAxioms.collect constName).run env).run {}
pure s.axioms
end Lean
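`collectAxioms` computes the transitive axiom dependencies of a declaration, the same information the `#print axioms` command surfaces; for example:

```lean
-- Excluded middle is derived from choice, so it reports the three
-- core axioms:
#print axioms Classical.em
-- 'Classical.em' depends on axioms: [propext, Classical.choice, Quot.sound]
```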

View File

@@ -113,7 +113,7 @@ partial def findOLean (mod : Name) : IO FilePath := do
let pkg := FilePath.mk <| mod.getRoot.toString (escape := false)
let mut msg := s!"unknown module prefix '{pkg}'
No directory '{pkg}' or file '{pkg}.lean' in the search path entries:
No directory '{pkg}' or file '{pkg}.olean' in the search path entries:
{"\n".intercalate <| sp.map (·.toString)}"
throw <| IO.userError msg

View File

@@ -7,62 +7,13 @@ prelude
import Lean.Expr
import Lean.Util.PtrSet
namespace Lean
namespace Expr
namespace Lean.Expr
namespace ReplaceImpl
@[extern "lean_replace_expr"]
opaque replaceImpl (f? : @& (Expr → Option Expr)) (e : @& Expr) : Expr
unsafe abbrev ReplaceM := StateM (PtrMap Expr Expr)
unsafe def cache (key : Expr) (exclusive : Bool) (result : Expr) : ReplaceM Expr := do
unless exclusive do
modify (·.insert key result)
pure result
@[specialize]
unsafe def replaceUnsafeM (f? : Expr → Option Expr) (e : Expr) : ReplaceM Expr := do
let rec @[specialize] visit (e : Expr) := do
/-
TODO: We need better control over RC operations to ensure
the following (unsafe) optimization is correctly applied.
Optimization goal: only cache results for shared objects.
The main problem is that the current code generator ignores borrow annotations
for code written in Lean. These annotations are only taken into account for extern functions.
Moveover, the borrow inference heuristic currently tags `e` as "owned" since it may be stored
in the cache and is used in "update" functions.
Thus, when visiting `e` sub-expressions the code generator increases their RC
because we are recursively invoking `visit` :(
Thus, to fix this issue, we must
1- Take borrow annotations into account for code written in Lean.
2- Mark `e` is borrowed (i.e., `(e : @& Expr)`)
-/
let excl := isExclusiveUnsafe e
unless excl do
if let some result := (← get).find? e then
return result
match f? e with
| some eNew => cache e excl eNew
| none => match e with
| .forallE _ d b _ => cache e excl <| e.updateForallE! (← visit d) (← visit b)
| .lam _ d b _ => cache e excl <| e.updateLambdaE! (← visit d) (← visit b)
| .mdata _ b => cache e excl <| e.updateMData! (← visit b)
| .letE _ t v b _ => cache e excl <| e.updateLet! (← visit t) (← visit v) (← visit b)
| .app f a => cache e excl <| e.updateApp! (← visit f) (← visit a)
| .proj _ _ b => cache e excl <| e.updateProj! (← visit b)
| e => return e
visit e
@[inline]
unsafe def replaceUnsafe (f? : Expr → Option Expr) (e : Expr) : Expr :=
(replaceUnsafeM f? e).run' mkPtrMap
end ReplaceImpl
/- TODO: use withPtrAddr, withPtrEq to avoid unsafe tricks above.
We also need an invariant at `State` and proofs for the `uget` operations. -/
@[inline] def replace (f? : Expr → Option Expr) (e : Expr) : Expr :=
replaceImpl f? e
@[specialize]
def replaceNoCache (f? : Expr → Option Expr) (e : Expr) : Expr :=
@@ -77,10 +28,4 @@ def replaceNoCache (f? : Expr → Option Expr) (e : Expr) : Expr :=
| .proj _ _ b => let b := replaceNoCache f? b; e.updateProj! b
| e => e
@[extern "lean_replace_expr"]
opaque replaceImpl (f? : @& (Expr → Option Expr)) (e : @& Expr) : Expr
@[implemented_by ReplaceImpl.replaceUnsafe]
def replace (f? : Expr → Option Expr) (e : Expr) : Expr :=
e.replaceNoCache f?
end Lean.Expr
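`replace` applies `f?` top-down over the expression, substituting any subterm for which `f?` returns `some` and not descending into the replacement. A small usage sketch (`f`, `a`, `b` are made-up constant names for illustration):

```lean
open Lean in
-- Rewrite every occurrence of the constant `a` to `b` inside `f a`:
example : Expr :=
  (mkApp (mkConst `f) (mkConst `a)).replace fun e =>
    if e.isConstOf `a then some (mkConst `b) else none
```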

View File

@@ -416,12 +416,9 @@ open Lean Server RequestM in
def getWidgets (pos : Lean.Lsp.Position) : RequestM (RequestTask (GetWidgetsResponse)) := do
let doc ← readDoc
let filemap := doc.meta.text
let nextLine := { line := pos.line + 1, character := 0 }
let t := doc.cmdSnaps.waitUntil fun snap => filemap.lspPosToUtf8Pos nextLine ≤ snap.endPos
mapTask t fun (snaps, _) => do
let some snap := snaps.getLast?
| return
runTermElabM snap do
mapTask (findInfoTreeAtPos doc <| filemap.lspPosToUtf8Pos pos) fun
| some infoTree@(.context (.commandCtx cc) _) =>
ContextInfo.runMetaM { cc with } {} do
let env ← getEnv
/- Panels from the environment. -/
let ws' ← evalPanelWidgets
@@ -436,7 +433,7 @@ def getWidgets (pos : Lean.Lsp.Position) : RequestM (RequestTask (GetWidgetsResp
return uwd.name
return { wi with name? }
/- Panels from the infotree. -/
let ws := widgetInfosAt? filemap snap.infoTree pos.line
let ws := widgetInfosAt? filemap infoTree pos.line
let ws : Array PanelWidgetInstance ← ws.toArray.mapM fun (wi : UserWidgetInfo) => do
let name? env.find? wi.id
|>.filter (·.type.isConstOf ``UserWidgetDefinition)
@@ -445,6 +442,7 @@ def getWidgets (pos : Lean.Lsp.Position) : RequestM (RequestTask (GetWidgetsResp
return uwd.name
return { wi with range? := String.Range.toLspRange filemap <$> Syntax.getRange? wi.stx, name? }
return { widgets := ws' ++ ws }
| _ => return
builtin_initialize
Server.registerBuiltinRpcProcedure ``getWidgets _ _ getWidgets

View File

@@ -82,6 +82,11 @@ theorem isEmpty_eq_false_iff_exists_isSome_getEntry? [BEq α] [ReflBEq α] :
| [] => by simp
| ⟨k, v⟩ :: l => by simpa using ⟨k, by simp⟩
theorem isEmpty_iff_forall_isSome_getEntry? [BEq α] [ReflBEq α] :
{l : List ((a : α) × β a)} → (l.isEmpty ↔ ∀ a, (getEntry? a l).isSome = false)
| [] => by simp
| ⟨k, v⟩ :: l => ⟨by simp, fun h => have := h k; by simp at this⟩
section
variable {β : Type v}
@@ -255,6 +260,10 @@ theorem isEmpty_eq_false_iff_exists_containsKey [BEq α] [ReflBEq α] {l : List
l.isEmpty = false ↔ ∃ a, containsKey a l := by
simp [isEmpty_eq_false_iff_exists_isSome_getEntry?, containsKey_eq_isSome_getEntry?]
theorem isEmpty_iff_forall_containsKey [BEq α] [ReflBEq α] {l : List ((a : α) × β a)} :
l.isEmpty ↔ ∀ a, containsKey a l = false := by
simp only [isEmpty_iff_forall_isSome_getEntry?, containsKey_eq_isSome_getEntry?]
@[simp]
theorem getEntry?_eq_none [BEq α] {l : List ((a : α) × β a)} {a : α} :
getEntry? a l = none ↔ containsKey a l = false := by
@@ -579,7 +588,7 @@ theorem getEntry?_replaceEntry_of_true [BEq α] [PartialEquivBEq α] {l : List (
theorem getEntry?_replaceEntry [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {a k : α}
{v : β k} :
getEntry? a (replaceEntry k v l) = bif containsKey k l && k == a then some ⟨k, v⟩ else
getEntry? a (replaceEntry k v l) = if containsKey k l ∧ k == a then some ⟨k, v⟩ else
getEntry? a l := by
cases hl : containsKey k l
· simp [getEntry?_replaceEntry_of_containsKey_eq_false hl]
@@ -632,13 +641,11 @@ theorem getValueCast?_replaceEntry [BEq α] [LawfulBEq α] {l : List ((a : α)
@[simp]
theorem containsKey_replaceEntry [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {a k : α}
{v : β k} : containsKey a (replaceEntry k v l) = containsKey a l := by
cases h : containsKey k l && k == a
· rw [containsKey_eq_isSome_getEntry?, getEntry?_replaceEntry, h, cond_false,
containsKey_eq_isSome_getEntry?]
· rw [containsKey_eq_isSome_getEntry?, getEntry?_replaceEntry, h, cond_true, Option.isSome_some,
Eq.comm]
rw [Bool.and_eq_true] at h
exact containsKey_of_beq h.1 h.2
by_cases h : (getEntry? k l).isSome ∧ k == a
· simp only [containsKey_eq_isSome_getEntry?, getEntry?_replaceEntry, h, and_self, reduceIte,
Option.isSome_some, Bool.true_eq]
rw [← getEntry?_congr h.2, h.1]
· simp [containsKey_eq_isSome_getEntry?, getEntry?_replaceEntry, h]
/-- Internal implementation detail of the hash map -/
def eraseKey [BEq α] (k : α) : List ((a : α) × β a) → List ((a : α) × β a)
@@ -681,7 +688,7 @@ theorem sublist_eraseKey [BEq α] {l : List ((a : α) × β a)} {k : α} :
· simpa using Sublist.cons_right Sublist.refl
theorem length_eraseKey [BEq α] {l : List ((a : α) × β a)} {k : α} :
(eraseKey k l).length = bif containsKey k l then l.length - 1 else l.length := by
(eraseKey k l).length = if containsKey k l then l.length - 1 else l.length := by
induction l using assoc_induction
· simp
· next k' v' t ih =>
@@ -690,7 +697,7 @@ theorem length_eraseKey [BEq α] {l : List ((a : α) × β a)} {k : α} :
· rw [cond_false, Bool.false_or, List.length_cons, ih]
cases h : containsKey k t
· simp
· simp only [cond_true, Nat.succ_eq_add_one, List.length_cons, Nat.add_sub_cancel]
· simp only [Nat.succ_eq_add_one, List.length_cons, Nat.add_sub_cancel, if_true]
rw [Nat.sub_add_cancel]
cases t
· simp at h
@@ -701,6 +708,11 @@ theorem length_eraseKey_le [BEq α] {l : List ((a : α) × β a)} {k : α} :
(eraseKey k l).length ≤ l.length :=
sublist_eraseKey.length_le
theorem length_le_length_eraseKey [BEq α] {l : List ((a : α) × β a)} {k : α} :
l.length ≤ (eraseKey k l).length + 1 := by
rw [length_eraseKey]
split <;> omega
theorem isEmpty_eraseKey [BEq α] {l : List ((a : α) × β a)} {k : α} :
(eraseKey k l).isEmpty = (l.isEmpty || (l.length == 1 && containsKey k l)) := by
rw [Bool.eq_iff_iff]
@@ -855,15 +867,18 @@ theorem isEmpty_insertEntry [BEq α] {l : List ((a : α) × β a)} {k : α} {v :
· rw [insertEntry_of_containsKey h, isEmpty_replaceEntry, isEmpty_eq_false_of_containsKey h]
theorem length_insertEntry [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
(insertEntry k v l).length = bif containsKey k l then l.length else l.length + 1 := by
(insertEntry k v l).length = if containsKey k l then l.length else l.length + 1 := by
simp [insertEntry, Bool.apply_cond List.length]
theorem length_le_length_insertEntry [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
l.length ≤ (insertEntry k v l).length := by
rw [length_insertEntry]
cases containsKey k l
· simpa using Nat.le_add_right ..
· simp
split <;> omega
theorem length_insertEntry_le [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
(insertEntry k v l).length ≤ l.length + 1 := by
rw [length_insertEntry]
split <;> omega
section
@@ -886,23 +901,23 @@ theorem getValue?_insertEntry_of_false [BEq α] [PartialEquivBEq α] {l : List (
· rw [insertEntry_of_containsKey h', getValue?_replaceEntry_of_false h]
theorem getValue?_insertEntry [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)} {k a : α}
{v : β} : getValue? a (insertEntry k v l) = bif k == a then some v else getValue? a l := by
{v : β} : getValue? a (insertEntry k v l) = if k == a then some v else getValue? a l := by
cases h : k == a
· simp [getValue?_insertEntry_of_false h, h]
· simp [getValue?_insertEntry_of_beq h, h]
theorem getValue?_insertEntry_self [BEq α] [EquivBEq α] {l : List ((_ : α) × β)} {k : α} {v : β} :
getValue? k (insertEntry k v l) = some v := by
rw [getValue?_insertEntry, Bool.cond_pos BEq.refl]
simp [getValue?_insertEntry]
end
theorem getEntry?_insertEntry [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k a : α}
{v : β k} :
getEntry? a (insertEntry k v l) = bif k == a then some ⟨k, v⟩ else getEntry? a l := by
getEntry? a (insertEntry k v l) = if k == a then some ⟨k, v⟩ else getEntry? a l := by
cases hl : containsKey k l
· rw [insertEntry_of_containsKey_eq_false hl, getEntry?_cons]
· rw [insertEntry_of_containsKey hl, getEntry?_replaceEntry, hl, Bool.true_and, BEq.comm]
· rw [insertEntry_of_containsKey_eq_false hl, getEntry?_cons, cond_eq_if]
· simp [insertEntry_of_containsKey hl, getEntry?_replaceEntry, hl]
theorem getValueCast?_insertEntry [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
{v : β k} : getValueCast? a (insertEntry k v l) =
@@ -938,21 +953,21 @@ theorem getValueCastD_insertEntry_self [BEq α] [LawfulBEq α] {l : List ((a :
theorem getValue!_insertEntry {β : Type v} [BEq α] [PartialEquivBEq α] [Inhabited β]
{l : List ((_ : α) × β)} {k a : α} {v : β} :
getValue! a (insertEntry k v l) = bif k == a then v else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_insertEntry, Bool.apply_cond Option.get!]
getValue! a (insertEntry k v l) = if k == a then v else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_insertEntry, apply_ite Option.get!]
theorem getValue!_insertEntry_self {β : Type v} [BEq α] [EquivBEq α] [Inhabited β]
{l : List ((_ : α) × β)} {k : α} {v : β} : getValue! k (insertEntry k v l) = v := by
rw [getValue!_insertEntry, BEq.refl, cond_true]
simp [getValue!_insertEntry, BEq.refl]
theorem getValueD_insertEntry {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k a : α} {fallback v : β} : getValueD a (insertEntry k v l) fallback =
bif k == a then v else getValueD a l fallback := by
simp [getValueD_eq_getValue?, getValue?_insertEntry, Bool.apply_cond (fun x => Option.getD x fallback)]
if k == a then v else getValueD a l fallback := by
simp [getValueD_eq_getValue?, getValue?_insertEntry, apply_ite (fun x => Option.getD x fallback)]
theorem getValueD_insertEntry_self {β : Type v} [BEq α] [EquivBEq α] {l : List ((_ : α) × β)}
{k : α} {fallback v : β} : getValueD k (insertEntry k v l) fallback = v := by
rw [getValueD_insertEntry, BEq.refl, cond_true]
simp [getValueD_insertEntry, BEq.refl]
@[simp]
theorem containsKey_insertEntry [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k a : α}
@@ -991,7 +1006,7 @@ theorem getValue_insertEntry {β : Type v} [BEq α] [PartialEquivBEq α] {l : Li
if h' : k == a then v
else getValue a l (containsKey_of_containsKey_insertEntry h (Bool.eq_false_iff.2 h')) := by
rw [← Option.some_inj, getValue?_eq_some_getValue, apply_dite Option.some,
getValue?_insertEntry, cond_eq_if, dite_eq_ite]
getValue?_insertEntry, dite_eq_ite]
simp only [← getValue?_eq_some_getValue]
theorem getValue_insertEntry_self {β : Type v} [BEq α] [EquivBEq α] {l : List ((_ : α) × β)} {k : α}
@@ -1020,7 +1035,7 @@ theorem isEmpty_insertEntryIfNew [BEq α] {l : List ((a : α) × β a)} {k : α}
theorem getEntry?_insertEntryIfNew [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k a : α}
{v : β k} : getEntry? a (insertEntryIfNew k v l) =
bif k == a && !containsKey k l then some ⟨k, v⟩ else getEntry? a l := by
if k == a && !containsKey k l then some ⟨k, v⟩ else getEntry? a l := by
cases h : containsKey k l
· simp [insertEntryIfNew_of_containsKey_eq_false h, getEntry?_cons]
· simp [insertEntryIfNew_of_containsKey h]
@@ -1036,18 +1051,22 @@ theorem getValueCast?_insertEntryIfNew [BEq α] [LawfulBEq α] {l : List ((a :
theorem getValue?_insertEntryIfNew {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k a : α} {v : β} : getValue? a (insertEntryIfNew k v l) =
bif k == a && !containsKey k l then some v else getValue? a l := by
if k == a ∧ containsKey k l = false then some v else getValue? a l := by
simp [getValue?_eq_getEntry?, getEntry?_insertEntryIfNew,
Bool.apply_cond (Option.map (fun (y : ((_ : α) × β)) => y.2))]
apply_ite (Option.map (fun (y : ((_ : α) × β)) => y.2))]
theorem containsKey_insertEntryIfNew [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)}
{k a : α} {v : β k} :
containsKey a (insertEntryIfNew k v l) = ((k == a) || containsKey a l) := by
simp only [containsKey_eq_isSome_getEntry?, getEntry?_insertEntryIfNew, Bool.apply_cond Option.isSome,
Option.isSome_some, Bool.cond_true_left]
simp only [containsKey_eq_isSome_getEntry?, getEntry?_insertEntryIfNew, apply_ite Option.isSome,
Option.isSome_some, if_true_left]
simp only [Bool.and_eq_true, Bool.not_eq_true', Option.not_isSome, Option.isNone_iff_eq_none,
getEntry?_eq_none, Bool.if_true_left, Bool.decide_and, Bool.decide_eq_true,
Bool.decide_eq_false]
cases h : k == a
· simp
· rw [Bool.true_and, Bool.true_or, getEntry?_congr h, Bool.not_or_self]
· rw [containsKey_eq_isSome_getEntry?, getEntry?_congr h]
simp
theorem containsKey_insertEntryIfNew_self [BEq α] [EquivBEq α] {l : List ((a : α) × β a)} {k : α}
{v : β k} : containsKey k (insertEntryIfNew k v l) := by
@@ -1085,7 +1104,7 @@ theorem getValue_insertEntryIfNew {β : Type v} [BEq α] [PartialEquivBEq α] {l
if h' : k == a ∧ containsKey k l = false then v
else getValue a l (containsKey_of_containsKey_insertEntryIfNew' h h') := by
rw [← Option.some_inj, getValue?_eq_some_getValue, apply_dite Option.some,
getValue?_insertEntryIfNew, cond_eq_if, dite_eq_ite]
getValue?_insertEntryIfNew, dite_eq_ite]
simp [← getValue?_eq_some_getValue]
theorem getValueCast!_insertEntryIfNew [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
@@ -1096,8 +1115,8 @@ theorem getValueCast!_insertEntryIfNew [BEq α] [LawfulBEq α] {l : List ((a :
theorem getValue!_insertEntryIfNew {β : Type v} [BEq α] [PartialEquivBEq α] [Inhabited β]
{l : List ((_ : α) × β)} {k a : α} {v : β} : getValue! a (insertEntryIfNew k v l) =
bif k == a && !containsKey k l then v else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_insertEntryIfNew, Bool.apply_cond Option.get!]
if k == a ∧ containsKey k l = false then v else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_insertEntryIfNew, apply_ite Option.get!]
theorem getValueCastD_insertEntryIfNew [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
{v : β k} {fallback : β a} : getValueCastD a (insertEntryIfNew k v l) fallback =
@@ -1108,20 +1127,23 @@ theorem getValueCastD_insertEntryIfNew [BEq α] [LawfulBEq α] {l : List ((a :
theorem getValueD_insertEntryIfNew {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k a : α} {fallback v : β} : getValueD a (insertEntryIfNew k v l) fallback =
bif k == a && !containsKey k l then v else getValueD a l fallback := by
if k == a ∧ containsKey k l = false then v else getValueD a l fallback := by
simp [getValueD_eq_getValue?, getValue?_insertEntryIfNew,
Bool.apply_cond (fun x => Option.getD x fallback)]
apply_ite (fun x => Option.getD x fallback)]
theorem length_insertEntryIfNew [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
(insertEntryIfNew k v l).length = bif containsKey k l then l.length else l.length + 1 := by
(insertEntryIfNew k v l).length = if containsKey k l then l.length else l.length + 1 := by
simp [insertEntryIfNew, Bool.apply_cond List.length]
theorem length_le_length_insertEntryIfNew [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
l.length ≤ (insertEntryIfNew k v l).length := by
rw [length_insertEntryIfNew]
cases containsKey k l
· simpa using Nat.le_add_right ..
· simp
split <;> omega
theorem length_insertEntryIfNew_le [BEq α] {l : List ((a : α) × β a)} {k : α} {v : β k} :
(insertEntryIfNew k v l).length ≤ l.length + 1 := by
rw [length_insertEntryIfNew]
split <;> omega
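The two bounds above share one proof shape: rewrite with `length_insertEntryIfNew`, then `split` on the `if` and close both branches with `omega`. The shape in isolation (hypothetical example over plain `Nat`):

```lean
-- lower bound: both branches of the `if` are ≥ n
example (p : Prop) [Decidable p] (n : Nat) : n ≤ if p then n else n + 1 := by
  split <;> omega

-- upper bound: both branches are ≤ n + 1
example (p : Prop) [Decidable p] (n : Nat) : (if p then n else n + 1) ≤ n + 1 := by
  split <;> omega
```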
@[simp]
theorem keys_eraseKey [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k : α} :
@@ -1169,7 +1191,7 @@ theorem getEntry?_eraseKey_of_false [BEq α] [PartialEquivBEq α] {l : List ((a
theorem getEntry?_eraseKey [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k a : α}
(hl : DistinctKeys l) :
getEntry? a (eraseKey k l) = bif k == a then none else getEntry? a l := by
getEntry? a (eraseKey k l) = if k == a then none else getEntry? a l := by
cases h : k == a
· simp [getEntry?_eraseKey_of_false h, h]
· simp [getEntry?_eraseKey_of_beq hl h, h]
@@ -1222,8 +1244,8 @@ theorem getValue?_eraseKey_of_false [BEq α] [PartialEquivBEq α] {l : List ((_
theorem getValue?_eraseKey [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)} {k a : α}
(hl : DistinctKeys l) :
getValue? a (eraseKey k l) = bif k == a then none else getValue? a l := by
simp [getValue?_eq_getEntry?, getEntry?_eraseKey hl, Bool.apply_cond (Option.map _)]
getValue? a (eraseKey k l) = if k == a then none else getValue? a l := by
simp [getValue?_eq_getEntry?, getEntry?_eraseKey hl, apply_ite (Option.map _)]
end
@@ -1241,25 +1263,25 @@ theorem containsKey_eraseKey_of_false [BEq α] [PartialEquivBEq α] {l : List ((
theorem containsKey_eraseKey [BEq α] [PartialEquivBEq α] {l : List ((a : α) × β a)} {k a : α}
(hl : DistinctKeys l) : containsKey a (eraseKey k l) = (!(k == a) && containsKey a l) := by
simp [containsKey_eq_isSome_getEntry?, getEntry?_eraseKey hl, Bool.apply_cond]
simp [containsKey_eq_isSome_getEntry?, getEntry?_eraseKey hl, apply_ite]
theorem getValueCast?_eraseKey [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
(hl : DistinctKeys l) :
getValueCast? a (eraseKey k l) = bif k == a then none else getValueCast? a l := by
getValueCast? a (eraseKey k l) = if k == a then none else getValueCast? a l := by
rw [getValueCast?_eq_getEntry?, Option.dmap_congr (getEntry?_eraseKey hl)]
rcases Bool.eq_false_or_eq_true (k == a) with h|h
· rw [Option.dmap_congr (Bool.cond_pos h), Option.dmap_none, Bool.cond_pos h]
· rw [Option.dmap_congr (Bool.cond_neg h), getValueCast?_eq_getEntry?]
exact (Bool.cond_neg h).symm
by_cases h : k == a
· rw [Option.dmap_congr (if_pos h), Option.dmap_none, if_pos h]
· rw [Option.dmap_congr (if_neg h), getValueCast?_eq_getEntry?]
exact (if_neg h).symm
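The rewritten proof above replaces the `Bool`-flavored case split (`Bool.eq_false_or_eq_true` with `Bool.cond_pos`/`Bool.cond_neg`) by the `by_cases h` / `if_pos h` / `if_neg h` idiom. That idiom in isolation (hypothetical example):

```lean
example (p : Prop) [Decidable p] (f : Nat → Nat) (x y : Nat) :
    f (if p then x else y) = if p then f x else f y := by
  by_cases h : p
  -- `if_pos h` collapses each `if` to its `then` branch, `if_neg h` to its `else` branch
  · rw [if_pos h, if_pos h]
  · rw [if_neg h, if_neg h]
```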
theorem getValueCast?_eraseKey_self [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k : α}
(hl : DistinctKeys l) : getValueCast? k (eraseKey k l) = none := by
rw [getValueCast?_eraseKey hl, Bool.cond_pos BEq.refl]
rw [getValueCast?_eraseKey hl, if_pos BEq.refl]
theorem getValueCast!_eraseKey [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
[Inhabited (β a)] (hl : DistinctKeys l) :
getValueCast! a (eraseKey k l) = bif k == a then default else getValueCast! a l := by
simp [getValueCast!_eq_getValueCast?, getValueCast?_eraseKey hl, Bool.apply_cond Option.get!]
getValueCast! a (eraseKey k l) = if k == a then default else getValueCast! a l := by
simp [getValueCast!_eq_getValueCast?, getValueCast?_eraseKey hl, apply_ite Option.get!]
theorem getValueCast!_eraseKey_self [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k : α}
[Inhabited (β k)] (hl : DistinctKeys l) : getValueCast! k (eraseKey k l) = default := by
@@ -1267,9 +1289,9 @@ theorem getValueCast!_eraseKey_self [BEq α] [LawfulBEq α] {l : List ((a : α)
theorem getValueCastD_eraseKey [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k a : α}
{fallback : β a} (hl : DistinctKeys l) : getValueCastD a (eraseKey k l) fallback =
bif k == a then fallback else getValueCastD a l fallback := by
if k == a then fallback else getValueCastD a l fallback := by
simp [getValueCastD_eq_getValueCast?, getValueCast?_eraseKey hl,
Bool.apply_cond (fun x => Option.getD x fallback)]
apply_ite (fun x => Option.getD x fallback)]
theorem getValueCastD_eraseKey_self [BEq α] [LawfulBEq α] {l : List ((a : α) × β a)} {k : α}
{fallback : β k} (hl : DistinctKeys l) :
@@ -1278,8 +1300,8 @@ theorem getValueCastD_eraseKey_self [BEq α] [LawfulBEq α] {l : List ((a : α)
theorem getValue!_eraseKey {β : Type v} [BEq α] [PartialEquivBEq α] [Inhabited β]
{l : List ((_ : α) × β)} {k a : α} (hl : DistinctKeys l) :
getValue! a (eraseKey k l) = bif k == a then default else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_eraseKey hl, Bool.apply_cond Option.get!]
getValue! a (eraseKey k l) = if k == a then default else getValue! a l := by
simp [getValue!_eq_getValue?, getValue?_eraseKey hl, apply_ite Option.get!]
theorem getValue!_eraseKey_self {β : Type v} [BEq α] [PartialEquivBEq α] [Inhabited β]
{l : List ((_ : α) × β)} {k : α} (hl : DistinctKeys l) :
@@ -1288,8 +1310,8 @@ theorem getValue!_eraseKey_self {β : Type v} [BEq α] [PartialEquivBEq α] [Inh
theorem getValueD_eraseKey {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k a : α} {fallback : β} (hl : DistinctKeys l) : getValueD a (eraseKey k l) fallback =
bif k == a then fallback else getValueD a l fallback := by
simp [getValueD_eq_getValue?, getValue?_eraseKey hl, Bool.apply_cond (fun x => Option.getD x fallback)]
if k == a then fallback else getValueD a l fallback := by
simp [getValueD_eq_getValue?, getValue?_eraseKey hl, apply_ite (fun x => Option.getD x fallback)]
theorem getValueD_eraseKey_self {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k : α} {fallback : β} (hl : DistinctKeys l) :
@@ -1304,15 +1326,15 @@ theorem getValueCast_eraseKey [BEq α] [LawfulBEq α] {l : List ((a : α) × β
(hl : DistinctKeys l) : getValueCast a (eraseKey k l) h =
getValueCast a l (containsKey_of_containsKey_eraseKey hl h) := by
rw [containsKey_eraseKey hl, Bool.and_eq_true, Bool.not_eq_true'] at h
rw [← Option.some_inj, getValueCast?_eq_some_getValueCast, getValueCast?_eraseKey hl, h.1,
cond_false, getValueCast?_eq_some_getValueCast]
rw [← Option.some_inj, getValueCast?_eq_some_getValueCast, getValueCast?_eraseKey hl, h.1]
simp [← getValueCast?_eq_some_getValueCast]
theorem getValue_eraseKey {β : Type v} [BEq α] [PartialEquivBEq α] {l : List ((_ : α) × β)}
{k a : α} {h} (hl : DistinctKeys l) :
getValue a (eraseKey k l) h = getValue a l (containsKey_of_containsKey_eraseKey hl h) := by
rw [containsKey_eraseKey hl, Bool.and_eq_true, Bool.not_eq_true'] at h
rw [← Option.some_inj, getValue?_eq_some_getValue, getValue?_eraseKey hl, h.1, cond_false,
getValue?_eq_some_getValue]
rw [← Option.some_inj, getValue?_eq_some_getValue, getValue?_eraseKey hl, h.1]
simp [← getValue?_eq_some_getValue]
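Several proofs above begin with `rw [← Option.some_inj, ...]`, lifting an equality of values into an equality of `Option`s so that the `getValue?`-level lemmas become applicable. The trick in isolation (hypothetical example):

```lean
-- `Option.some_inj : some a = some b ↔ a = b`; rewriting right-to-left turns the
-- goal `x = y` into `some x = some y`, where `Option`-level lemmas apply.
example (x y : Nat) (h : some x = some y) : x = y := by
  rw [← Option.some_inj]
  exact h
```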
theorem getEntry?_of_perm [BEq α] [PartialEquivBEq α] {l l' : List ((a : α) × β a)} {a : α}
(hl : DistinctKeys l) (h : Perm l l') : getEntry? a l = getEntry? a l' := by

View File

@@ -123,9 +123,12 @@ theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.1.isEmpty = false ↔ ∃ a, m.contains a = true := by
simp only [contains_eq_containsKey (Raw.WF.out h)]
simp_to_model using List.isEmpty_eq_false_iff_exists_containsKey
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.1.isEmpty ↔ ∀ a, m.contains a = false := by
simp_to_model using List.isEmpty_iff_forall_containsKey
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β k} :
(m.insert k v).contains a = ((k == a) || m.contains a) := by
simp_to_model using List.containsKey_insertEntry
@@ -145,13 +148,17 @@ theorem isEmpty_eq_size_eq_zero : m.1.isEmpty = (m.1.size == 0) := by
simp [Raw.isEmpty]
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).1.size = bif m.contains k then m.1.size else m.1.size + 1 := by
(m.insert k v).1.size = if m.contains k then m.1.size else m.1.size + 1 := by
simp_to_model using List.length_insertEntry
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.1.size ≤ (m.insert k v).1.size := by
simp_to_model using List.length_le_length_insertEntry
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).1.size ≤ m.1.size + 1 := by
simp_to_model using List.length_insertEntry_le
@[simp]
theorem erase_empty {k : α} {c : Nat} : (empty c : Raw₀ α β).erase k = empty c := by
simp [erase, empty]
@@ -169,13 +176,17 @@ theorem contains_of_contains_erase [EquivBEq α] [LawfulHashable α] {k a : α}
simp_to_model using List.containsKey_of_containsKey_eraseKey
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).1.size = bif m.contains k then m.1.size - 1 else m.1.size := by
(m.erase k).1.size = if m.contains k then m.1.size - 1 else m.1.size := by
simp_to_model using List.length_eraseKey
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).1.size ≤ m.1.size := by
simp_to_model using List.length_eraseKey_le
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.1.size ≤ (m.erase k).1.size + 1 := by
simp_to_model using List.length_le_length_eraseKey
@[simp]
theorem containsThenInsert_fst {k : α} {v : β k} : (m.containsThenInsert k v).1 = m.contains k := by
rw [containsThenInsert_eq_containsₘ, contains_eq_containsₘ]
@@ -215,7 +226,7 @@ theorem get?_eq_none [LawfulBEq α] {a : α} : m.contains a = false → m.get? a
simp_to_model using List.getValueCast?_eq_none
theorem get?_erase [LawfulBEq α] {k a : α} :
(m.erase k).get? a = bif k == a then none else m.get? a := by
(m.erase k).get? a = if k == a then none else m.get? a := by
simp_to_model using List.getValueCast?_eraseKey
theorem get?_erase_self [LawfulBEq α] {k : α} : (m.erase k).get? k = none := by
@@ -234,7 +245,7 @@ theorem get?_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
simp_to_model; empty
theorem get?_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insert k v) a = bif k == a then some v else get? m a := by
get? (m.insert k v) a = if k == a then some v else get? m a := by
simp_to_model using List.getValue?_insertEntry
theorem get?_insert_self [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
@@ -250,7 +261,7 @@ theorem get?_eq_none [EquivBEq α] [LawfulHashable α] {a : α} :
simp_to_model using List.getValue?_eq_none.2
theorem get?_erase [EquivBEq α] [LawfulHashable α] {k a : α} :
Const.get? (m.erase k) a = bif k == a then none else get? m a := by
Const.get? (m.erase k) a = if k == a then none else get? m a := by
simp_to_model using List.getValue?_eraseKey
theorem get?_erase_self [EquivBEq α] [LawfulHashable α] {k : α} :
@@ -340,7 +351,7 @@ theorem get!_eq_default [LawfulBEq α] {a : α} [Inhabited (β a)] :
simp_to_model using List.getValueCast!_eq_default
theorem get!_erase [LawfulBEq α] {k a : α} [Inhabited (β a)] :
(m.erase k).get! a = bif k == a then default else m.get! a := by
(m.erase k).get! a = if k == a then default else m.get! a := by
simp_to_model using List.getValueCast!_eraseKey
theorem get!_erase_self [LawfulBEq α] {k : α} [Inhabited (β k)] :
@@ -372,7 +383,7 @@ theorem get!_of_isEmpty [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
simp_to_model; empty
theorem get!_insert [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insert k v) a = bif k == a then v else get! m a := by
get! (m.insert k v) a = if k == a then v else get! m a := by
simp_to_model using List.getValue!_insertEntry
theorem get!_insert_self [EquivBEq α] [LawfulHashable α] [Inhabited β] {k : α} {v : β} :
@@ -384,7 +395,7 @@ theorem get!_eq_default [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
simp_to_model using List.getValue!_eq_default
theorem get!_erase [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} :
get! (m.erase k) a = bif k == a then default else get! m a := by
get! (m.erase k) a = if k == a then default else get! m a := by
simp_to_model using List.getValue!_eraseKey
theorem get!_erase_self [EquivBEq α] [LawfulHashable α] [Inhabited β] {k : α} :
@@ -435,7 +446,7 @@ theorem getD_eq_fallback [LawfulBEq α] {a : α} {fallback : β a} :
simp_to_model using List.getValueCastD_eq_fallback
theorem getD_erase [LawfulBEq α] {k a : α} {fallback : β a} :
(m.erase k).getD a fallback = bif k == a then fallback else m.getD a fallback := by
(m.erase k).getD a fallback = if k == a then fallback else m.getD a fallback := by
simp_to_model using List.getValueCastD_eraseKey
theorem getD_erase_self [LawfulBEq α] {k : α} {fallback : β k} :
@@ -471,7 +482,7 @@ theorem getD_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
simp_to_model; empty
theorem getD_insert [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insert k v) a fallback = bif k == a then v else getD m a fallback := by
getD (m.insert k v) a fallback = if k == a then v else getD m a fallback := by
simp_to_model using List.getValueD_insertEntry
theorem getD_insert_self [EquivBEq α] [LawfulHashable α] {k : α} {fallback v : β} :
@@ -483,7 +494,7 @@ theorem getD_eq_fallback [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
simp_to_model using List.getValueD_eq_fallback
theorem getD_erase [EquivBEq α] [LawfulHashable α] {k a : α} {fallback : β} :
getD (m.erase k) a fallback = bif k == a then fallback else getD m a fallback := by
getD (m.erase k) a fallback = if k == a then fallback else getD m a fallback := by
simp_to_model using List.getValueD_eraseKey
theorem getD_erase_self [EquivBEq α] [LawfulHashable α] {k : α} {fallback : β} :
@@ -539,13 +550,17 @@ theorem contains_of_contains_insertIfNew' [EquivBEq α] [LawfulHashable α] {k a
simp_to_model using List.containsKey_of_containsKey_insertEntryIfNew'
theorem size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).1.size = bif m.contains k then m.1.size else m.1.size + 1 := by
(m.insertIfNew k v).1.size = if m.contains k then m.1.size else m.1.size + 1 := by
simp_to_model using List.length_insertEntryIfNew
theorem size_le_size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.1.size ≤ (m.insertIfNew k v).1.size := by
simp_to_model using List.length_le_length_insertEntryIfNew
theorem size_insertIfNew_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).1.size ≤ m.1.size + 1 := by
simp_to_model using List.length_insertEntryIfNew_le
theorem get?_insertIfNew [LawfulBEq α] {k a : α} {v : β k} :
(m.insertIfNew k v).get? a =
if h : k == a ∧ m.contains k = false then some (cast (congrArg β (eq_of_beq h.1)) v)
@@ -575,7 +590,7 @@ namespace Const
variable {β : Type v} (m : Raw₀ α (fun _ => β)) (h : m.1.WF)
theorem get?_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insertIfNew k v) a = bif k == a && !m.contains k then some v else get? m a := by
get? (m.insertIfNew k v) a = if k == a ∧ m.contains k = false then some v else get? m a := by
simp_to_model using List.getValue?_insertEntryIfNew
theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h₁} :
@@ -585,12 +600,12 @@ theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h
simp_to_model using List.getValue_insertEntryIfNew
theorem get!_insertIfNew [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insertIfNew k v) a = bif k == a && !m.contains k then v else get! m a := by
get! (m.insertIfNew k v) a = if k == a ∧ m.contains k = false then v else get! m a := by
simp_to_model using List.getValue!_insertEntryIfNew
theorem getD_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insertIfNew k v) a fallback =
bif k == a && !m.contains k then v else getD m a fallback := by
if k == a ∧ m.contains k = false then v else getD m a fallback := by
simp_to_model using List.getValueD_insertEntryIfNew
end Const

View File

@@ -472,7 +472,8 @@ theorem wfImp_eraseₘaux [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable
buckets_hash_self := isHashSelf_eraseₘaux m a h
size_eq := by
rw [(toListModel_eraseₘaux m a h).length_eq, eraseₘaux, length_eraseKey,
containsₘ_eq_containsKey h, h', cond_true, h.size_eq]
containsₘ_eq_containsKey h, h']
simp [h.size_eq]
distinct := h.distinct.eraseKey.perm (toListModel_eraseₘaux m a h)
theorem toListModel_perm_eraseKey_of_containsₘ_eq_false [BEq α] [Hashable α] [EquivBEq α]

View File

@@ -63,6 +63,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : DHashMap α β) :=
not_mem_empty
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false :=
Raw₀.contains_of_isEmpty ⟨m.1, _⟩ m.2
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m := by
simpa [mem_iff_contains] using contains_of_isEmpty
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true :=
Raw₀.isEmpty_eq_false_iff_exists_contains_eq_true ⟨m.1, _⟩ m.2
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m := by
simpa [mem_iff_contains] using isEmpty_eq_false_iff_exists_contains_eq_true
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false :=
Raw₀.isEmpty_iff_forall_contains ⟨m.1, _⟩ m.2
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m := by
simpa [mem_iff_contains] using isEmpty_iff_forall_contains
simpa [mem_iff_contains] using isEmpty_iff_forall_contains
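Each `*_mem` lemma above is derived from its `*_contains` counterpart by unfolding membership with `simpa [mem_iff_contains]`. A sketch of that derivation pattern over toy definitions (hypothetical names, not the `DHashMap` API):

```lean
-- toy Bool-valued "contains" and a Prop-valued membership defined from it,
-- mirroring the `mem_iff_contains` bridge used above
def toyContains (n : Nat) : Bool := n % 2 == 0
def ToyMem (n : Nat) : Prop := toyContains n = true

theorem toy_mem_iff_contains {n : Nat} : ToyMem n ↔ toyContains n = true := Iff.rfl

theorem toyContains_of_even {n : Nat} (h : n % 2 = 0) : toyContains n = true := by
  simp [toyContains, h]

-- the Prop-level version follows from the Bool-level one via the bridge lemma
theorem toyMem_of_even {n : Nat} (h : n % 2 = 0) : ToyMem n := by
  simpa [toy_mem_iff_contains] using toyContains_of_even h
```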
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β k} :
(m.insert k v).contains a = (k == a || m.contains a) :=
@@ -102,13 +126,17 @@ theorem size_emptyc : (∅ : DHashMap α β).size = 0 :=
theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) := rfl
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insert k v).size = if k ∈ m then m.size else m.size + 1 :=
Raw₀.size_insert ⟨m.1, _⟩ m.2
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.size ≤ (m.insert k v).size :=
Raw₀.size_le_size_insert ⟨m.1, _⟩ m.2
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).size ≤ m.size + 1 :=
Raw₀.size_insert_le ⟨m.1, _⟩ m.2
@[simp]
theorem erase_empty {k : α} {c : Nat} : (empty c : DHashMap α β).erase k = empty c :=
Subtype.eq (congrArg Subtype.val (Raw₀.erase_empty (k := k)) :) -- Lean code is happy
@@ -140,12 +168,16 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
simp
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size :=
(m.erase k).size = if k ∈ m then m.size - 1 else m.size :=
Raw₀.size_erase _ m.2
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size :=
Raw₀.size_erase_le _ m.2
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 :=
Raw₀.size_le_size_erase ⟨m.1, _⟩ m.2
@[simp]
theorem containsThenInsert_fst {k : α} {v : β k} : (m.containsThenInsert k v).1 = m.contains k :=
Raw₀.containsThenInsert_fst _
@@ -194,7 +226,7 @@ theorem get?_eq_none [LawfulBEq α] {a : α} : ¬a ∈ m → m.get? a = none :=
simpa [mem_iff_contains] using get?_eq_none_of_contains_eq_false
theorem get?_erase [LawfulBEq α] {k a : α} :
(m.erase k).get? a = bif k == a then none else m.get? a :=
(m.erase k).get? a = if k == a then none else m.get? a :=
Raw₀.get?_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -218,7 +250,7 @@ theorem get?_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
Raw₀.Const.get?_of_isEmpty ⟨m.1, _⟩ m.2
theorem get?_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insert k v) a = bif k == a then some v else get? m a :=
get? (m.insert k v) a = if k == a then some v else get? m a :=
Raw₀.Const.get?_insert ⟨m.1, _⟩ m.2
@[simp]
@@ -238,7 +270,7 @@ theorem get?_eq_none [EquivBEq α] [LawfulHashable α] {a : α } : ¬a ∈ m →
simpa [mem_iff_contains] using get?_eq_none_of_contains_eq_false
theorem get?_erase [EquivBEq α] [LawfulHashable α] {k a : α} :
Const.get? (m.erase k) a = bif k == a then none else get? m a :=
Const.get? (m.erase k) a = if k == a then none else get? m a :=
Raw₀.Const.get?_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -339,7 +371,7 @@ theorem get!_eq_default [LawfulBEq α] {a : α} [Inhabited (β a)] :
simpa [mem_iff_contains] using get!_eq_default_of_contains_eq_false
theorem get!_erase [LawfulBEq α] {k a : α} [Inhabited (β a)] :
(m.erase k).get! a = bif k == a then default else m.get! a :=
(m.erase k).get! a = if k == a then default else m.get! a :=
Raw₀.get!_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -381,7 +413,7 @@ theorem get!_of_isEmpty [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
Raw₀.Const.get!_of_isEmpty ⟨m.1, _⟩ m.2
theorem get!_insert [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insert k v) a = bif k == a then v else get! m a :=
get! (m.insert k v) a = if k == a then v else get! m a :=
Raw₀.Const.get!_insert ⟨m.1, _⟩ m.2
@[simp]
@@ -398,7 +430,7 @@ theorem get!_eq_default [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
simpa [mem_iff_contains] using get!_eq_default_of_contains_eq_false
theorem get!_erase [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} :
get! (m.erase k) a = bif k == a then default else get! m a :=
get! (m.erase k) a = if k == a then default else get! m a :=
Raw₀.Const.get!_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -465,7 +497,7 @@ theorem getD_eq_fallback [LawfulBEq α] {a : α} {fallback : β a} :
simpa [mem_iff_contains] using getD_eq_fallback_of_contains_eq_false
theorem getD_erase [LawfulBEq α] {k a : α} {fallback : β a} :
(m.erase k).getD a fallback = bif k == a then fallback else m.getD a fallback :=
(m.erase k).getD a fallback = if k == a then fallback else m.getD a fallback :=
Raw₀.getD_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -512,7 +544,7 @@ theorem getD_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
Raw₀.Const.getD_of_isEmpty ⟨m.1, _⟩ m.2
theorem getD_insert [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insert k v) a fallback = bif k == a then v else getD m a fallback :=
getD (m.insert k v) a fallback = if k == a then v else getD m a fallback :=
Raw₀.Const.getD_insert ⟨m.1, _⟩ m.2
@[simp]
@@ -529,7 +561,7 @@ theorem getD_eq_fallback [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
simpa [mem_iff_contains] using getD_eq_fallback_of_contains_eq_false
theorem getD_erase [EquivBEq α] [LawfulHashable α] {k a : α} {fallback : β} :
getD (m.erase k) a fallback = bif k == a then fallback else getD m a fallback :=
getD (m.erase k) a fallback = if k == a then fallback else getD m a fallback :=
Raw₀.Const.getD_erase ⟨m.1, _⟩ m.2
@[simp]
@@ -611,13 +643,17 @@ theorem mem_of_mem_insertIfNew' [EquivBEq α] [LawfulHashable α] {k a : α} {v
simpa [mem_iff_contains, -contains_insertIfNew] using contains_of_contains_insertIfNew'
theorem size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insertIfNew k v).size = if k ∈ m then m.size else m.size + 1 :=
Raw₀.size_insertIfNew ⟨m.1, _⟩ m.2
theorem size_le_size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.size ≤ (m.insertIfNew k v).size :=
Raw₀.size_le_size_insertIfNew ⟨m.1, _⟩ m.2
theorem size_insertIfNew_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).size ≤ m.size + 1 :=
Raw₀.size_insertIfNew_le _ m.2
theorem get?_insertIfNew [LawfulBEq α] {k a : α} {v : β k} : (m.insertIfNew k v).get? a =
if h : k == a ∧ ¬k ∈ m then some (cast (congrArg β (eq_of_beq h.1)) v) else m.get? a := by
simp only [mem_iff_contains, Bool.not_eq_true]
@@ -647,8 +683,9 @@ namespace Const
variable {β : Type v} {m : DHashMap α (fun _ => β)}
theorem get?_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insertIfNew k v) a = bif k == a && !m.contains k then some v else get? m a :=
Raw₀.Const.get?_insertIfNew ⟨m.1, _⟩ m.2
get? (m.insertIfNew k v) a = if k == a ∧ ¬k ∈ m then some v else get? m a := by
simp only [mem_iff_contains, Bool.not_eq_true]
exact Raw₀.Const.get?_insertIfNew ⟨m.1, _⟩ m.2
theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h₁} :
get (m.insertIfNew k v) a h₁ =
@@ -657,13 +694,15 @@ theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h
exact Raw₀.Const.get_insertIfNew ⟨m.1, _⟩ m.2
theorem get!_insertIfNew [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insertIfNew k v) a = bif k == a && !m.contains k then v else get! m a :=
Raw₀.Const.get!_insertIfNew ⟨m.1, _⟩ m.2
get! (m.insertIfNew k v) a = if k == a ∧ ¬k ∈ m then v else get! m a := by
simp only [mem_iff_contains, Bool.not_eq_true]
exact Raw₀.Const.get!_insertIfNew ⟨m.1, _⟩ m.2
theorem getD_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insertIfNew k v) a fallback =
bif k == a && !m.contains k then v else getD m a fallback :=
Raw₀.Const.getD_insertIfNew ⟨m.1, _⟩ m.2
if k == a ∧ ¬k ∈ m then v else getD m a fallback := by
simp only [mem_iff_contains, Bool.not_eq_true]
exact Raw₀.Const.getD_insertIfNew ⟨m.1, _⟩ m.2
end Const

View File

@@ -111,6 +111,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : Raw α β) :=
not_mem_empty
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false := by
simp_to_raw using Raw₀.contains_of_isEmpty ⟨m, _⟩
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m := by
simpa [mem_iff_contains] using contains_of_isEmpty h
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true := by
simp_to_raw using Raw₀.isEmpty_eq_false_iff_exists_contains_eq_true ⟨m, _⟩
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m := by
simpa [mem_iff_contains] using isEmpty_eq_false_iff_exists_contains_eq_true h
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false := by
simp_to_raw using Raw₀.isEmpty_iff_forall_contains ⟨m, _⟩
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m := by
simpa [mem_iff_contains] using isEmpty_iff_forall_contains h
simpa [mem_iff_contains] using isEmpty_iff_forall_contains h
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {a k : α} {v : β k} :
(m.insert k v).contains a = (k == a || m.contains a) := by
@@ -150,13 +174,18 @@ theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) := by
simp [isEmpty]
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).size = bif m.contains k then m.size else m.size + 1 := by
(m.insert k v).size = if k ∈ m then m.size else m.size + 1 := by
simp only [mem_iff_contains]
simp_to_raw using Raw₀.size_insert
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.size ≤ (m.insert k v).size := by
simp_to_raw using Raw₀.size_le_size_insert ⟨m, _⟩ h
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insert k v).size ≤ m.size + 1 := by
simp_to_raw using Raw₀.size_insert_le ⟨m, _⟩ h
@[simp]
theorem erase_empty {k : α} {c : Nat} : (empty c : Raw α β).erase k = empty c := by
rw [erase_eq (by wf_trivial)]
@@ -189,12 +218,17 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
simpa [mem_iff_contains] using contains_of_contains_erase h
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size := by
(m.erase k).size = if k ∈ m then m.size - 1 else m.size := by
simp only [mem_iff_contains]
simp_to_raw using Raw₀.size_erase
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size := by
simp_to_raw using Raw₀.size_erase_le
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 := by
simp_to_raw using Raw₀.size_le_size_erase ⟨m, _⟩
@[simp]
theorem containsThenInsert_fst {k : α} {v : β k} : (m.containsThenInsert k v).1 = m.contains k := by
simp_to_raw using Raw₀.containsThenInsert_fst
@@ -243,7 +277,7 @@ theorem get?_eq_none [LawfulBEq α] {a : α} : ¬a ∈ m → m.get? a = none :=
simpa [mem_iff_contains] using get?_eq_none_of_contains_eq_false h
theorem get?_erase [LawfulBEq α] {k a : α} :
(m.erase k).get? a = bif k == a then none else m.get? a := by
(m.erase k).get? a = if k == a then none else m.get? a := by
simp_to_raw using Raw₀.get?_erase
@[simp]
@@ -267,7 +301,7 @@ theorem get?_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
simp_to_raw using Raw₀.Const.get?_of_isEmpty ⟨m, _⟩
theorem get?_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insert k v) a = bif k == a then some v else get? m a := by
get? (m.insert k v) a = if k == a then some v else get? m a := by
simp_to_raw using Raw₀.Const.get?_insert
@[simp]
@@ -287,7 +321,7 @@ theorem get?_eq_none [EquivBEq α] [LawfulHashable α] {a : α} : ¬a ∈ m →
simpa [mem_iff_contains] using get?_eq_none_of_contains_eq_false h
theorem get?_erase [EquivBEq α] [LawfulHashable α] {k a : α} :
Const.get? (m.erase k) a = bif k == a then none else get? m a := by
Const.get? (m.erase k) a = if k == a then none else get? m a := by
simp_to_raw using Raw₀.Const.get?_erase
@[simp]
@@ -388,7 +422,7 @@ theorem get!_eq_default [LawfulBEq α] {a : α} [Inhabited (β a)] :
simpa [mem_iff_contains] using get!_eq_default_of_contains_eq_false h
theorem get!_erase [LawfulBEq α] {k a : α} [Inhabited (β a)] :
(m.erase k).get! a = bif k == a then default else m.get! a := by
(m.erase k).get! a = if k == a then default else m.get! a := by
simp_to_raw using Raw₀.get!_erase
@[simp]
@@ -429,7 +463,7 @@ theorem get!_of_isEmpty [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
simp_to_raw using Raw₀.Const.get!_of_isEmpty ⟨m, _⟩
theorem get!_insert [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insert k v) a = bif k == a then v else get! m a := by
get! (m.insert k v) a = if k == a then v else get! m a := by
simp_to_raw using Raw₀.Const.get!_insert
@[simp]
@@ -446,7 +480,7 @@ theorem get!_eq_default [EquivBEq α] [LawfulHashable α] [Inhabited β] {a : α
simpa [mem_iff_contains] using get!_eq_default_of_contains_eq_false h
theorem get!_erase [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} :
get! (m.erase k) a = bif k == a then default else get! m a := by
get! (m.erase k) a = if k == a then default else get! m a := by
simp_to_raw using Raw₀.Const.get!_erase
@[simp]
@@ -513,7 +547,7 @@ theorem getD_eq_fallback [LawfulBEq α] {a : α} {fallback : β a} :
simpa [mem_iff_contains] using getD_eq_fallback_of_contains_eq_false h
theorem getD_erase [LawfulBEq α] {k a : α} {fallback : β a} :
(m.erase k).getD a fallback = bif k == a then fallback else m.getD a fallback := by
(m.erase k).getD a fallback = if k == a then fallback else m.getD a fallback := by
simp_to_raw using Raw₀.getD_erase
@[simp]
@@ -559,7 +593,7 @@ theorem getD_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
simp_to_raw using Raw₀.Const.getD_of_isEmpty ⟨m, _⟩
theorem getD_insert [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insert k v) a fallback = bif k == a then v else getD m a fallback := by
getD (m.insert k v) a fallback = if k == a then v else getD m a fallback := by
simp_to_raw using Raw₀.Const.getD_insert
@[simp]
@@ -576,7 +610,7 @@ theorem getD_eq_fallback [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
simpa [mem_iff_contains] using getD_eq_fallback_of_contains_eq_false h
theorem getD_erase [EquivBEq α] [LawfulHashable α] {k a : α} {fallback : β} :
getD (m.erase k) a fallback = bif k == a then fallback else getD m a fallback := by
getD (m.erase k) a fallback = if k == a then fallback else getD m a fallback := by
simp_to_raw using Raw₀.Const.getD_erase
@[simp]
@@ -658,13 +692,18 @@ theorem mem_of_mem_insertIfNew' [EquivBEq α] [LawfulHashable α] {k a : α} {v
simpa [mem_iff_contains] using contains_of_contains_insertIfNew' h
theorem size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).size = bif m.contains k then m.size else m.size + 1 := by
(m.insertIfNew k v).size = if k ∈ m then m.size else m.size + 1 := by
simp only [mem_iff_contains]
simp_to_raw using Raw₀.size_insertIfNew
theorem size_le_size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
m.size ≤ (m.insertIfNew k v).size := by
simp_to_raw using Raw₀.size_le_size_insertIfNew ⟨m, _⟩
theorem size_insertIfNew_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β k} :
(m.insertIfNew k v).size ≤ m.size + 1 := by
simp_to_raw using Raw₀.size_insertIfNew_le
theorem get?_insertIfNew [LawfulBEq α] {k a : α} {v : β k} :
(m.insertIfNew k v).get? a =
if h : k == a ∧ ¬k ∈ m then some (cast (congrArg β (eq_of_beq h.1)) v)
@@ -697,7 +736,8 @@ namespace Const
variable {β : Type v} {m : DHashMap.Raw α (fun _ => β)} (h : m.WF)
theorem get?_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
get? (m.insertIfNew k v) a = bif k == a && !m.contains k then some v else get? m a := by
get? (m.insertIfNew k v) a = if k == a ∧ ¬k ∈ m then some v else get? m a := by
simp only [mem_iff_contains, Bool.not_eq_true]
simp_to_raw using Raw₀.Const.get?_insertIfNew
theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h₁} :
@@ -708,12 +748,14 @@ theorem get_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h
simp_to_raw using Raw₀.Const.get_insertIfNew ⟨m, _⟩
theorem get!_insertIfNew [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
get! (m.insertIfNew k v) a = bif k == a && !m.contains k then v else get! m a := by
get! (m.insertIfNew k v) a = if k == a ∧ ¬k ∈ m then v else get! m a := by
simp only [mem_iff_contains, Bool.not_eq_true]
simp_to_raw using Raw₀.Const.get!_insertIfNew
theorem getD_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
getD (m.insertIfNew k v) a fallback =
bif k == a && !m.contains k then v else getD m a fallback := by
if k == a ∧ ¬k ∈ m then v else getD m a fallback := by
simp only [mem_iff_contains, Bool.not_eq_true]
simp_to_raw using Raw₀.Const.getD_insertIfNew
end Const
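
For readers skimming this hunk: the statements replace `bif m.contains k` with an `if` over the decidable proposition `k ∈ m`, so downstream proofs can discharge the branch from a membership hypothesis. A minimal sketch of the payoff (assuming the non-`Raw` `Std.HashMap` variants of these lemmas, which appear later in this diff):

```lean
import Std.Data.HashMap

open Std

-- If `k` is not already present, inserting it grows the map by one.
-- `size_insert` now states the size with `if k ∈ m`, so a membership
-- hypothesis resolves the `if` directly via `simp`.
example [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable α]
    {m : HashMap α β} {k : α} {v : β} (hk : ¬k ∈ m) :
    (m.insert k v).size = m.size + 1 := by
  simp [HashMap.size_insert, hk]
```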

View File

@@ -71,6 +71,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : HashMap α β) :=
DHashMap.not_mem_emptyc
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false :=
DHashMap.contains_of_isEmpty
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m :=
DHashMap.not_mem_of_isEmpty
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true :=
DHashMap.isEmpty_eq_false_iff_exists_contains_eq_true
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m :=
DHashMap.isEmpty_eq_false_iff_exists_mem
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false :=
DHashMap.isEmpty_iff_forall_contains
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m :=
DHashMap.isEmpty_iff_forall_not_mem
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insert k v).contains a = (k == a || m.contains a) :=
@@ -110,13 +134,17 @@ theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) :=
DHashMap.isEmpty_eq_size_eq_zero
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insert k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insert k v).size = if k ∈ m then m.size else m.size + 1 :=
DHashMap.size_insert
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
m.size ≤ (m.insert k v).size :=
DHashMap.size_le_size_insert
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insert k v).size ≤ m.size + 1 :=
DHashMap.size_insert_le
@[simp]
theorem erase_empty {a : α} {c : Nat} : (empty c : HashMap α β).erase a = empty c :=
ext DHashMap.erase_empty
@@ -148,12 +176,16 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
DHashMap.mem_of_mem_erase
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size :=
(m.erase k).size = if k ∈ m then m.size - 1 else m.size :=
DHashMap.size_erase
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size :=
DHashMap.size_erase_le
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 :=
DHashMap.size_le_size_erase
@[simp]
theorem containsThenInsert_fst {k : α} {v : β} : (m.containsThenInsert k v).1 = m.contains k :=
DHashMap.containsThenInsert_fst
@@ -185,7 +217,7 @@ theorem getElem?_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
DHashMap.Const.get?_of_isEmpty
theorem getElem?_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insert k v)[a]? = bif k == a then some v else m[a]? :=
(m.insert k v)[a]? = if k == a then some v else m[a]? :=
DHashMap.Const.get?_insert
@[simp]
@@ -205,7 +237,7 @@ theorem getElem?_eq_none [EquivBEq α] [LawfulHashable α] {a : α} : ¬a ∈ m
DHashMap.Const.get?_eq_none
theorem getElem?_erase [EquivBEq α] [LawfulHashable α] {k a : α} :
(m.erase k)[a]? = bif k == a then none else m[a]? :=
(m.erase k)[a]? = if k == a then none else m[a]? :=
DHashMap.Const.get?_erase
@[simp]
@@ -251,7 +283,7 @@ theorem getElem!_of_isEmpty [EquivBEq α] [LawfulHashable α] [Inhabited β] {a
DHashMap.Const.get!_of_isEmpty
theorem getElem!_insert [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
(m.insert k v)[a]! = bif k == a then v else m[a]! :=
(m.insert k v)[a]! = if k == a then v else m[a]! :=
DHashMap.Const.get!_insert
@[simp]
@@ -268,7 +300,7 @@ theorem getElem!_eq_default [EquivBEq α] [LawfulHashable α] [Inhabited β] {a
DHashMap.Const.get!_eq_default
theorem getElem!_erase [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} :
(m.erase k)[a]! = bif k == a then default else m[a]! :=
(m.erase k)[a]! = if k == a then default else m[a]! :=
DHashMap.Const.get!_erase
@[simp]
@@ -310,7 +342,7 @@ theorem getD_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
DHashMap.Const.getD_of_isEmpty
theorem getD_insert [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
(m.insert k v).getD a fallback = bif k == a then v else m.getD a fallback :=
(m.insert k v).getD a fallback = if k == a then v else m.getD a fallback :=
DHashMap.Const.getD_insert
@[simp]
@@ -327,7 +359,7 @@ theorem getD_eq_fallback [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
DHashMap.Const.getD_eq_fallback
theorem getD_erase [EquivBEq α] [LawfulHashable α] {k a : α} {fallback : β} :
(m.erase k).getD a fallback = bif k == a then fallback else m.getD a fallback :=
(m.erase k).getD a fallback = if k == a then fallback else m.getD a fallback :=
DHashMap.Const.getD_erase
@[simp]
@@ -403,15 +435,19 @@ theorem mem_of_mem_insertIfNew' [EquivBEq α] [LawfulHashable α] {k a : α} {v
DHashMap.mem_of_mem_insertIfNew'
theorem size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insertIfNew k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insertIfNew k v).size = if k ∈ m then m.size else m.size + 1 :=
DHashMap.size_insertIfNew
theorem size_le_size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
m.size ≤ (m.insertIfNew k v).size :=
DHashMap.size_le_size_insertIfNew
theorem size_insertIfNew_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insertIfNew k v).size ≤ m.size + 1 :=
DHashMap.size_insertIfNew_le
theorem getElem?_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insertIfNew k v)[a]? = bif k == a && !m.contains k then some v else m[a]? :=
(m.insertIfNew k v)[a]? = if k == a ∧ ¬k ∈ m then some v else m[a]? :=
DHashMap.Const.get?_insertIfNew
theorem getElem_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h₁} :
@@ -420,12 +456,12 @@ theorem getElem_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β
DHashMap.Const.get_insertIfNew (h₁ := h₁)
theorem getElem!_insertIfNew [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
(m.insertIfNew k v)[a]! = bif k == a && !m.contains k then v else m[a]! :=
(m.insertIfNew k v)[a]! = if k == a ∧ ¬k ∈ m then v else m[a]! :=
DHashMap.Const.get!_insertIfNew
theorem getD_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
(m.insertIfNew k v).getD a fallback =
bif k == a && !m.contains k then v else m.getD a fallback :=
if k == a ∧ ¬k ∈ m then v else m.getD a fallback :=
DHashMap.Const.getD_insertIfNew
@[simp]

View File

@@ -70,6 +70,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : Raw α β) :=
DHashMap.Raw.not_mem_emptyc
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false :=
DHashMap.Raw.contains_of_isEmpty h.out
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m :=
DHashMap.Raw.not_mem_of_isEmpty h.out
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true :=
DHashMap.Raw.isEmpty_eq_false_iff_exists_contains_eq_true h.out
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m :=
DHashMap.Raw.isEmpty_eq_false_iff_exists_mem h.out
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false :=
DHashMap.Raw.isEmpty_iff_forall_contains h.out
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m :=
DHashMap.Raw.isEmpty_iff_forall_not_mem h.out
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insert k v).contains a = (k == a || m.contains a) :=
@@ -109,13 +133,17 @@ theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) :=
DHashMap.Raw.isEmpty_eq_size_eq_zero
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insert k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insert k v).size = if k ∈ m then m.size else m.size + 1 :=
DHashMap.Raw.size_insert h.out
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
m.size ≤ (m.insert k v).size :=
DHashMap.Raw.size_le_size_insert h.out
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insert k v).size ≤ m.size + 1 :=
DHashMap.Raw.size_insert_le h.out
@[simp]
theorem erase_empty {k : α} {c : Nat} : (empty c : Raw α β).erase k = empty c :=
ext DHashMap.Raw.erase_empty
@@ -147,12 +175,16 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
DHashMap.Raw.mem_of_mem_erase h.out
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size :=
(m.erase k).size = if k ∈ m then m.size - 1 else m.size :=
DHashMap.Raw.size_erase h.out
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size :=
DHashMap.Raw.size_erase_le h.out
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 :=
DHashMap.Raw.size_le_size_erase h.out
@[simp]
theorem containsThenInsert_fst {k : α} {v : β} : (m.containsThenInsert k v).1 = m.contains k :=
DHashMap.Raw.containsThenInsert_fst h.out
@@ -184,7 +216,7 @@ theorem getElem?_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
DHashMap.Raw.Const.get?_of_isEmpty h.out
theorem getElem?_insert [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insert k v)[a]? = bif k == a then some v else m[a]? :=
(m.insert k v)[a]? = if k == a then some v else m[a]? :=
DHashMap.Raw.Const.get?_insert h.out
@[simp]
@@ -204,7 +236,7 @@ theorem getElem?_eq_none [EquivBEq α] [LawfulHashable α] {a : α} : ¬a ∈ m
DHashMap.Raw.Const.get?_eq_none h.out
theorem getElem?_erase [EquivBEq α] [LawfulHashable α] {k a : α} :
(m.erase k)[a]? = bif k == a then none else m[a]? :=
(m.erase k)[a]? = if k == a then none else m[a]? :=
DHashMap.Raw.Const.get?_erase h.out
@[simp]
@@ -250,7 +282,7 @@ theorem getElem!_of_isEmpty [EquivBEq α] [LawfulHashable α] [Inhabited β] {a
DHashMap.Raw.Const.get!_of_isEmpty h.out
theorem getElem!_insert [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
(m.insert k v)[a]! = bif k == a then v else m[a]! :=
(m.insert k v)[a]! = if k == a then v else m[a]! :=
DHashMap.Raw.Const.get!_insert h.out
@[simp]
@@ -267,7 +299,7 @@ theorem getElem!_eq_default [EquivBEq α] [LawfulHashable α] [Inhabited β] {a
DHashMap.Raw.Const.get!_eq_default h.out
theorem getElem!_erase [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} :
(m.erase k)[a]! = bif k == a then default else m[a]! :=
(m.erase k)[a]! = if k == a then default else m[a]! :=
DHashMap.Raw.Const.get!_erase h.out
@[simp]
@@ -308,7 +340,7 @@ theorem getD_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
DHashMap.Raw.Const.getD_of_isEmpty h.out
theorem getD_insert [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
(m.insert k v).getD a fallback = bif k == a then v else m.getD a fallback :=
(m.insert k v).getD a fallback = if k == a then v else m.getD a fallback :=
DHashMap.Raw.Const.getD_insert h.out
@[simp]
@@ -325,7 +357,7 @@ theorem getD_eq_fallback [EquivBEq α] [LawfulHashable α] {a : α} {fallback :
DHashMap.Raw.Const.getD_eq_fallback h.out
theorem getD_erase [EquivBEq α] [LawfulHashable α] {k a : α} {fallback : β} :
(m.erase k).getD a fallback = bif k == a then fallback else m.getD a fallback :=
(m.erase k).getD a fallback = if k == a then fallback else m.getD a fallback :=
DHashMap.Raw.Const.getD_erase h.out
@[simp]
@@ -401,15 +433,19 @@ theorem mem_of_mem_insertIfNew' [EquivBEq α] [LawfulHashable α] {k a : α} {v
DHashMap.Raw.mem_of_mem_insertIfNew' h.out
theorem size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insertIfNew k v).size = bif m.contains k then m.size else m.size + 1 :=
(m.insertIfNew k v).size = if k ∈ m then m.size else m.size + 1 :=
DHashMap.Raw.size_insertIfNew h.out
theorem size_le_size_insertIfNew [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
m.size ≤ (m.insertIfNew k v).size :=
DHashMap.Raw.size_le_size_insertIfNew h.out
theorem size_insertIfNew_le [EquivBEq α] [LawfulHashable α] {k : α} {v : β} :
(m.insertIfNew k v).size ≤ m.size + 1 :=
DHashMap.Raw.size_insertIfNew_le h.out
theorem getElem?_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} :
(m.insertIfNew k v)[a]? = bif k == a && !m.contains k then some v else m[a]? :=
(m.insertIfNew k v)[a]? = if k == a ∧ ¬k ∈ m then some v else m[a]? :=
DHashMap.Raw.Const.get?_insertIfNew h.out
theorem getElem_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β} {h₁} :
@@ -418,12 +454,12 @@ theorem getElem_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {v : β
DHashMap.Raw.Const.get_insertIfNew h.out (h₁ := h₁)
theorem getElem!_insertIfNew [EquivBEq α] [LawfulHashable α] [Inhabited β] {k a : α} {v : β} :
(m.insertIfNew k v)[a]! = bif k == a && !m.contains k then v else m[a]! :=
(m.insertIfNew k v)[a]! = if k == a ∧ ¬k ∈ m then v else m[a]! :=
DHashMap.Raw.Const.get!_insertIfNew h.out
theorem getD_insertIfNew [EquivBEq α] [LawfulHashable α] {k a : α} {fallback v : β} :
(m.insertIfNew k v).getD a fallback =
bif k == a && !m.contains k then v else m.getD a fallback :=
if k == a ∧ ¬k ∈ m then v else m.getD a fallback :=
DHashMap.Raw.Const.getD_insertIfNew h.out
@[simp]

View File

@@ -65,6 +65,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : HashSet α) :=
HashMap.not_mem_emptyc
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false :=
HashMap.contains_of_isEmpty
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m :=
HashMap.not_mem_of_isEmpty
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true :=
HashMap.isEmpty_eq_false_iff_exists_contains_eq_true
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m :=
HashMap.isEmpty_eq_false_iff_exists_mem
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false :=
HashMap.isEmpty_iff_forall_contains
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m :=
HashMap.isEmpty_iff_forall_not_mem
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} :
(m.insert k).contains a = (k == a || m.contains a) :=
@@ -102,12 +126,16 @@ theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) :=
HashMap.isEmpty_eq_size_eq_zero
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} :
(m.insert k).size = bif m.contains k then m.size else m.size + 1 :=
(m.insert k).size = if k ∈ m then m.size else m.size + 1 :=
HashMap.size_insertIfNew
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} : m.size ≤ (m.insert k).size :=
HashMap.size_le_size_insertIfNew
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} :
(m.insert k).size ≤ m.size + 1 :=
HashMap.size_insertIfNew_le
@[simp]
theorem erase_empty {a : α} {c : Nat} : (empty c : HashSet α).erase a = empty c :=
ext HashMap.erase_empty
@@ -139,12 +167,16 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
HashMap.mem_of_mem_erase
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size :=
(m.erase k).size = if k ∈ m then m.size - 1 else m.size :=
HashMap.size_erase
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size :=
HashMap.size_erase_le
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 :=
HashMap.size_le_size_erase
@[simp]
theorem containsThenInsert_fst {k : α} : (m.containsThenInsert k).1 = m.contains k :=
HashMap.containsThenInsertIfNew_fst

View File

@@ -65,6 +65,30 @@ theorem mem_congr [EquivBEq α] [LawfulHashable α] {a b : α} (hab : a == b) :
@[simp] theorem not_mem_emptyc {a : α} : ¬a ∈ (∅ : Raw α) :=
HashMap.Raw.not_mem_emptyc
theorem contains_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → m.contains a = false :=
HashMap.Raw.contains_of_isEmpty h.out
theorem not_mem_of_isEmpty [EquivBEq α] [LawfulHashable α] {a : α} :
m.isEmpty → ¬a ∈ m :=
HashMap.Raw.not_mem_of_isEmpty h.out
theorem isEmpty_eq_false_iff_exists_contains_eq_true [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, m.contains a = true :=
HashMap.Raw.isEmpty_eq_false_iff_exists_contains_eq_true h.out
theorem isEmpty_eq_false_iff_exists_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = false ↔ ∃ a, a ∈ m :=
HashMap.Raw.isEmpty_eq_false_iff_exists_mem h.out
theorem isEmpty_iff_forall_contains [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, m.contains a = false :=
HashMap.Raw.isEmpty_iff_forall_contains h.out
theorem isEmpty_iff_forall_not_mem [EquivBEq α] [LawfulHashable α] :
m.isEmpty = true ↔ ∀ a, ¬a ∈ m :=
HashMap.Raw.isEmpty_iff_forall_not_mem h.out
@[simp]
theorem contains_insert [EquivBEq α] [LawfulHashable α] {k a : α} :
(m.insert k).contains a = (k == a || m.contains a) :=
@@ -102,12 +126,15 @@ theorem isEmpty_eq_size_eq_zero : m.isEmpty = (m.size == 0) :=
HashMap.Raw.isEmpty_eq_size_eq_zero
theorem size_insert [EquivBEq α] [LawfulHashable α] {k : α} :
(m.insert k).size = bif m.contains k then m.size else m.size + 1 :=
(m.insert k).size = if k ∈ m then m.size else m.size + 1 :=
HashMap.Raw.size_insertIfNew h.out
theorem size_le_size_insert [EquivBEq α] [LawfulHashable α] {k : α} : m.size ≤ (m.insert k).size :=
HashMap.Raw.size_le_size_insertIfNew h.out
theorem size_insert_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.insert k).size ≤ m.size + 1 :=
HashMap.Raw.size_insertIfNew_le h.out
@[simp]
theorem erase_empty {k : α} {c : Nat} : (empty c : Raw α).erase k = empty c :=
ext HashMap.Raw.erase_empty
@@ -139,12 +166,16 @@ theorem mem_of_mem_erase [EquivBEq α] [LawfulHashable α] {k a : α} : a ∈ m.
HashMap.Raw.mem_of_mem_erase h.out
theorem size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
(m.erase k).size = bif m.contains k then m.size - 1 else m.size :=
(m.erase k).size = if k ∈ m then m.size - 1 else m.size :=
HashMap.Raw.size_erase h.out
theorem size_erase_le [EquivBEq α] [LawfulHashable α] {k : α} : (m.erase k).size ≤ m.size :=
HashMap.Raw.size_erase_le h.out
theorem size_le_size_erase [EquivBEq α] [LawfulHashable α] {k : α} :
m.size ≤ (m.erase k).size + 1 :=
HashMap.Raw.size_le_size_erase h.out
@[simp]
theorem containsThenInsert_fst {k : α} : (m.containsThenInsert k).1 = m.contains k :=
HashMap.Raw.containsThenInsertIfNew_fst h.out

View File

@@ -105,7 +105,7 @@ class replace_fn {
lean_inc_ref(m_f);
lean_object * r = lean_apply_1(m_f, e.raw());
if (!lean_is_scalar(r)) {
expr e_new(lean_ctor_get(r, 0));
expr e_new(lean_ctor_get(r, 0), true);
lean_dec_ref(r);
return save_result(e, e_new, shared);
}

View File

@@ -30,6 +30,8 @@ structure BuildConfig where
dependent jobs will still continue unimpeded).
-/
failLv : LogLevel := .error
/-- The minimum log level for an log entry to be reported. -/
outLv : LogLevel := verbosity.minLogLv
/--
The stream to which Lake reports build progress.
By default, Lake uses `stderr`.
@@ -38,10 +40,6 @@ structure BuildConfig where
/-- Whether to use ANSI escape codes in build output. -/
ansiMode : AnsiMode := .auto
/-- The minimum log level for an log entry to be reported. -/
@[inline] def BuildConfig.outLv (cfg : BuildConfig) : LogLevel :=
cfg.verbosity.minLogLv
/--
Whether the build should show progress information.

View File

@@ -29,7 +29,7 @@ In this section, we define the primitives that make up a builder.
A dependently typed monadic *fetch* function.
That is, a function within the monad `m` and takes an input `a : α`
describing what to fetch and and produces some output `b : β a` (dependently
describing what to fetch and produces some output `b : β a` (dependently
typed) or `b : B` (not) describing what was fetched. All build functions are
fetch functions, but not all fetch functions need build something.
-/

View File

@@ -13,6 +13,7 @@ inductive CliError
| unknownCommand (cmd : String)
| missingArg (arg : String)
| missingOptArg (opt arg : String)
| invalidOptArg (opt arg : String)
| unknownShortOption (opt : Char)
| unknownLongOption (opt : String)
| unexpectedArguments (args : List String)
@@ -51,7 +52,8 @@ def toString : CliError → String
| missingCommand => "missing command"
| unknownCommand cmd => s!"unknown command '{cmd}'"
| missingArg arg => s!"missing {arg}"
| missingOptArg opt arg => s!"missing {arg} after {opt}"
| missingOptArg opt arg => s!"missing {arg} for {opt}"
| invalidOptArg opt arg => s!"invalid argument for {opt}; expected {arg}"
| unknownShortOption opt => s!"unknown short option '-{opt}'"
| unknownLongOption opt => s!"unknown long option '{opt}'"
| unexpectedArguments as => s!"unexpected arguments: {" ".intercalate as}"

View File

@@ -35,24 +35,32 @@ COMMANDS:
translate-config change language of the package configuration
serve start the Lean language server
OPTIONS:
BASIC OPTIONS:
--version print version and exit
--help, -h print help of the program or a command and exit
--dir, -d=file use the package configuration in a specific directory
--file, -f=file use a specific file for the package configuration
--quiet, -q hide progress messages
--verbose, -v show verbose information (command invocations)
--lean=cmd specify the `lean` command used by Lake
-K key[=value] set the configuration file option named key
--old only rebuild modified modules (ignore transitive deps)
--rehash, -H hash all files for traces (do not trust `.hash` files)
--update, -U update manifest before building
--reconfigure, -R elaborate configuration files instead of using OLeans
--wfail fail build if warnings are logged
--iofail fail build if any I/O or other info is logged
--ansi, --no-ansi toggle the use of ANSI escape codes to prettify output
--no-build exit immediately if a build target is not up-to-date
OUTPUT OPTIONS:
--quiet, -q hide informational logs and the progress indicator
--verbose, -v show trace logs (command invocations) and built targets
--ansi, --no-ansi toggle the use of ANSI escape codes to prettify output
--log-level=lv minimum log level to output on success
(levels: trace, info, warning, error)
--fail-level=lv minimum log level to fail a build (default: error)
--iofail fail build if any I/O or other info is logged
(same as --fail-level=info)
--wfail fail build if warnings are logged
(same as --fail-level=warning)
See `lake help <command>` for more information on a specific command."
def templateHelp :=

View File

@@ -156,6 +156,22 @@ def mathToolchainBlobUrl : String :=
def mathToolchainUrl : String :=
"https://github.com/leanprover-community/mathlib4/blob/master/lean-toolchain"
def leanActionWorkflowContents :=
"name: Lean Action CI
on:
push:
pull_request:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: leanprover/lean-action@v1
"
/-- Lake package template identifier. -/
inductive InitTemplate
@@ -195,12 +211,24 @@ def InitTemplate.configFileContents (tmp : InitTemplate) (lang : ConfigLang) (p
| .math, .lean => mathLeanConfigFileContents pkgNameStr (escapeName! root)
| .math, .toml => mathTomlConfigFileContents pkgNameStr root.toString
def createLeanActionWorkflow (dir : FilePath) : LogIO PUnit := do
logVerbose "creating lean-action CI workflow"
let workflowDir := dir / ".github" / "workflows"
let workflowFile := workflowDir / "lean_action_ci.yml"
if (← workflowFile.pathExists) then
logVerbose "lean-action CI workflow already exists"
return
IO.FS.createDirAll workflowDir
IO.FS.writeFile workflowFile leanActionWorkflowContents
logVerbose s!"created lean-action CI workflow at '{workflowFile}'"
/-- Initialize a new Lake package in the given directory with the given name. -/
def initPkg (dir : FilePath) (name : Name) (tmp : InitTemplate) (lang : ConfigLang) (env : Lake.Env) : LogIO PUnit := do
let configFile := dir / defaultConfigFile.addExtension lang.fileExtension
if (← configFile.pathExists) then
error "package already initialized"
createLeanActionWorkflow dir
-- determine the name to use for the root
-- use upper camel case unless the specific module name already exists
let (root, rootFile?) ← id do

View File

@@ -41,8 +41,12 @@ structure LakeOptions where
trustHash : Bool := true
noBuild : Bool := false
failLv : LogLevel := .error
outLv? : Option LogLevel := .none
ansiMode : AnsiMode := .auto
def LakeOptions.outLv (opts : LakeOptions) : LogLevel :=
opts.outLv?.getD opts.verbosity.minLogLv
/-- Get the Lean installation. Error if missing. -/
def LakeOptions.getLeanInstall (opts : LakeOptions) : Except CliError LeanInstall :=
match opts.leanInstall? with
@@ -82,6 +86,7 @@ def LakeOptions.mkBuildConfig (opts : LakeOptions) (out := OutStream.stderr) : B
noBuild := opts.noBuild
verbosity := opts.verbosity
failLv := opts.failLv
outLv := opts.outLv
ansiMode := opts.ansiMode
out := out
@@ -101,7 +106,7 @@ def CliM.run (self : CliM α) (args : List String) : BaseIO ExitCode := do
@[inline] def CliStateM.runLogIO (x : LogIO α) : CliStateM α := do
let opts ← get
MainM.runLogIO x opts.verbosity.minLogLv opts.ansiMode
MainM.runLogIO x opts.outLv opts.ansiMode
instance (priority := low) : MonadLift LogIO CliStateM := CliStateM.runLogIO
@@ -117,6 +122,10 @@ def takeOptArg (opt arg : String) : CliM String := do
| none => throw <| CliError.missingOptArg opt arg
| some arg => pure arg
@[inline] def takeOptArg' (opt arg : String) (f : String → Option α) : CliM α := do
if let some a := f (← takeOptArg opt arg) then return a
throw <| CliError.invalidOptArg opt arg
/--
Verify that there are no CLI arguments remaining
before running the given action.
@@ -167,13 +176,25 @@ def lakeLongOption : (opt : String) → CliM PUnit
| "--rehash" => modifyThe LakeOptions ({· with trustHash := false})
| "--wfail" => modifyThe LakeOptions ({· with failLv := .warning})
| "--iofail" => modifyThe LakeOptions ({· with failLv := .info})
| "--log-level" => do
let outLv ← takeOptArg' "--log-level" "log level" LogLevel.ofString?
modifyThe LakeOptions ({· with outLv? := outLv})
| "--fail-level" => do
let failLv ← takeOptArg' "--fail-level" "log level" LogLevel.ofString?
modifyThe LakeOptions ({· with failLv})
| "--ansi" => modifyThe LakeOptions ({· with ansiMode := .ansi})
| "--no-ansi" => modifyThe LakeOptions ({· with ansiMode := .noAnsi})
| "--dir" => do let rootDir ← takeOptArg "--dir" "path"; modifyThe LakeOptions ({· with rootDir})
| "--file" => do let configFile ← takeOptArg "--file" "path"; modifyThe LakeOptions ({· with configFile})
| "--dir" => do
let rootDir ← takeOptArg "--dir" "path"
modifyThe LakeOptions ({· with rootDir})
| "--file" => do
let configFile ← takeOptArg "--file" "path"
modifyThe LakeOptions ({· with configFile})
| "--lean" => do setLean <| ← takeOptArg "--lean" "path or command"
| "--help" => modifyThe LakeOptions ({· with wantsHelp := true})
| "--" => do let subArgs ← takeArgs; modifyThe LakeOptions ({· with subArgs})
| "--" => do
let subArgs ← takeArgs
modifyThe LakeOptions ({· with subArgs})
| opt => throw <| CliError.unknownLongOption opt
def lakeOption :=
@@ -320,6 +341,7 @@ protected def resolveDeps : CliM PUnit := do
processOptions lakeOption
let opts ← getThe LakeOptions
let config ← mkLoadConfig opts
noArgsRem do
discard <| loadWorkspace config opts.updateDeps
protected def update : CliM PUnit := do


@@ -67,8 +67,8 @@ syntax verSpec :=
&"git "? term:max
/--
The version of the package to lookup in Lake's package index.
A Git revision can be specified via `@ git "<rev>"`.
The version of the package to require.
To specify a Git revision, use the syntax `@ git <rev>`.
-/
syntax verClause :=
" @ " verSpec
@@ -131,8 +131,8 @@ the different forms this clause can take.
Without a `from` clause, Lake will look up the package in the default
registry (i.e., Reservoir) and use the information there to download the
package at the specified `version`. The optional `scope` is used to
disambiguate which package with `pkg-name` to lookup. In Reservoir, this scope
package at the requested `version`. The `scope` is used to disambiguate between
packages in the registry with the same `pkg-name`. In Reservoir, this scope
is the package owner (e.g., `leanprover` of `@leanprover/doc-gen4`).
The `with` clause specifies a `NameMap String` of Lake options


@@ -89,18 +89,31 @@ def LogLevel.ansiColor : LogLevel → String
| .warning => "33"
| .error => "31"
protected def LogLevel.ofString? (s : String) : Option LogLevel :=
match s.toLower with
| "trace" => some .trace
| "info" | "information" => some .info
| "warn" | "warning" => some .warning
| "error" => some .error
| _ => none
protected def LogLevel.toString : LogLevel → String
| .trace => "trace"
| .info => "info"
| .warning => "warning"
| .error => "error"
instance : ToString LogLevel := LogLevel.toString
protected def LogLevel.ofMessageSeverity : MessageSeverity → LogLevel
| .information => .info
| .warning => .warning
| .error => .error
instance : ToString LogLevel := LogLevel.toString
protected def LogLevel.toMessageSeverity : LogLevel → MessageSeverity
| .info | .trace => .information
| .warning => .warning
| .error => .error
def Verbosity.minLogLv : Verbosity → LogLevel
| .quiet => .warning


@@ -337,7 +337,7 @@ For theorem proving packages which depend on `mathlib`, you can also run `lake n
**NOTE:** For mathlib in particular, you should run `lake exe cache get` prior to a `lake build` after adding or updating a mathlib dependency. Otherwise, it will be rebuilt from scratch (which can take hours). For more information, see mathlib's [wiki page](https://github.com/leanprover-community/mathlib4/wiki/Using-mathlib4-as-a-dependency) on using it as a dependency.
## Lean `require`
### Lean `require`
The `require` command in Lean Lake configuration follows the general syntax:
@@ -347,15 +347,18 @@ require ["<scope>" /] <pkg-name> [@ <version>]
```
The `from` clause tells Lake where to locate the dependency.
Without a `from` clause, Lake will lookup the package in the default registry (i.e., [Reservoir](https://reservoir.lean-lang.org/@lean-dojo/LeanCopilot)) and use the information there to download the package at the specified `version`. The optional `scope` is used to disambiguate which package with `pkg-name` to lookup. In Reservoir, this scope is the package owner (e.g., `leanprover` of [@leanprover/doc-gen4](https://reservoir.lean-lang.org/@leanprover/doc-gen4)).
Without a `from` clause, Lake will look up the package in the default registry (i.e., [Reservoir](https://reservoir.lean-lang.org)) and use the information there to download the package at the requested `version`. To specify a Git revision, use the syntax `@ git <rev>`.
The `scope` is used to disambiguate between packages in the registry with the same `pkg-name`. In Reservoir, this scope is the package owner (e.g., `leanprover` of [@leanprover/doc-gen4](https://reservoir.lean-lang.org/@leanprover/doc-gen4)).
The `with` clause specifies a `NameMap String` of Lake options used to configure the dependency. This is equivalent to passing `-K` options to the dependency on the command line.
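As a hedged, self-contained sketch (the scope, package name, version tag, and option key here are all hypothetical), such a `require` might read:

```lean
-- Hypothetical example: require `widget` from the `acme` scope on the
-- default registry at a tagged version, passing one Lake option via
-- the `with` clause (equivalent to a `-K` flag on the command line).
require "acme" / "widget" @ "v1.2.0"
  with Lean.NameMap.empty.insert `werror "true"
```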
## Supported Sources
### Supported Sources
Lake supports the following types of dependencies as sources in a `from` clause.
### Path Dependencies
#### Path Dependencies
```
from <path>
@@ -363,7 +366,7 @@ from <path>
Lake loads the package located at a fixed `path` relative to the requiring package's directory.
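For instance (the directory name is hypothetical), a sibling checkout could be required as:

```lean
-- Hypothetical example: load a dependency from a directory next to
-- the requiring package.
require widget from "../widget"
```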
### Git Dependencies
#### Git Dependencies
```
from git <url> [@ <rev>] [/ <subDir>]
@@ -371,7 +374,7 @@ from git <url> [@ <rev>] [/ <subDir>]
Lake clones the Git repository available at the specified fixed Git `url`, and checks out the specified revision `rev`. The revision can be a commit hash, branch, or tag. If none is provided, Lake defaults to `master`. After checkout, Lake loads the package located in `subDir` (or the repository root if no subdirectory is specified).
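A hedged sketch (URL, branch, and subdirectory are hypothetical) combining all three pieces of the clause:

```lean
-- Hypothetical example: clone the repository, check out the `main`
-- branch, and load the package from the `widget-pkg` subdirectory.
require widget from git "https://github.com/acme/widget" @ "main" / "widget-pkg"
```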
## TOML `require`
### TOML `require`
To `require` a package in a TOML configuration, the parallel syntax for the above examples is:
@@ -383,6 +386,12 @@ scope = "<scope>"
version = "<version>"
options = {<options>}
# A Reservoir Git dependency
[[require]]
name = "<pkg-name>"
scope = "<scope>"
rev = "<rev>"
# A path dependency
[[require]]
name = "<pkg-name>"


@@ -0,0 +1,2 @@
import Lean.Elab.Command
run_cmd Lean.logError "foo"


@@ -0,0 +1,2 @@
import Lean.Elab.Command
run_cmd Lean.logInfo "foo"


@@ -0,0 +1,2 @@
import Lean.Elab.Command
run_cmd Lean.logWarning "foo"


@@ -0,0 +1,49 @@
import Lake
open Lake DSL
package test
/-
Test logging in Lake CLI
-/
def cfgLogLv? := (get_config? log).bind LogLevel.ofString?
meta if cfgLogLv?.isSome then
run_cmd Lean.log "bar" cfgLogLv?.get!.toMessageSeverity
/-
Test logging in Lean
-/
lean_lib Log
/-
Test logging in job
-/
def top (level : LogLevel) : FetchM (BuildJob Unit) := Job.async do
logEntry {level, message := "foo"}
return ((), .nil)
target topTrace : Unit := top .trace
target topInfo : Unit := top .info
target topWarning : Unit := top .warning
target topError : Unit := top .error
/--
Test logging in build helper
-/
def art (pkg : Package) (level : LogLevel) : FetchM (BuildJob Unit) := Job.async do
let artFile := pkg.buildDir / s!"art{level.toString.capitalize}"
(((), ·)) <$> buildFileUnlessUpToDate artFile .nil do
logEntry {level, message := "foo"}
createParentDirs artFile
IO.FS.writeFile artFile ""
target artTrace pkg : Unit := art pkg .trace
target artInfo pkg : Unit := art pkg .info
target artWarning pkg : Unit := art pkg .warning
target artError pkg : Unit := art pkg .error

src/lake/tests/logLevel/test.sh (new executable file, 54 lines)

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
set -euxo pipefail
LAKE=${LAKE:-../../.lake/build/bin/lake}
./clean.sh
# Test failure log level
log_fail_target() {
($LAKE build "$@" && exit 1 || true) | grep --color foo
($LAKE build "$@" && exit 1 || true) | grep --color foo # test replay
}
log_fail_target topTrace --fail-level=trace
log_fail_target artTrace --fail-level=trace
log_fail() {
lv=$1; shift
log_fail_target top$lv "$@"
log_fail_target art$lv "$@"
log_fail_target Log.$lv "$@"
}
log_fail Info --iofail
log_fail Warning --wfail
log_fail Error
# Test output log level
log_empty() {
lv=$1; shift
rm -f .lake/build/art$lv
$LAKE build art$lv "$@" | grep --color foo && exit 1 || true
$LAKE build art$lv -v # test whole log was saved
$LAKE build art$lv "$@" | grep --color foo && exit 1 || true # test replay
}
log_empty Info -q
log_empty Info --log-level=warning
log_empty Warning --log-level=error
log_empty Trace -q
log_empty Trace --log-level=info
log_empty Trace
# Test configuration-time output log level
$LAKE resolve-deps -R -Klog=info 2>&1 | grep --color "info: bar"
$LAKE resolve-deps -R -Klog=info -q 2>&1 |
grep --color "info: bar" && exit 1 || true
$LAKE resolve-deps -R -Klog=warning 2>&1 | grep --color "warning: bar"
$LAKE resolve-deps -R -Klog=warning --log-level=error 2>&1 |
grep --color "warning: bar" && exit 1 || true


@@ -1,3 +0,0 @@
import Lean.Elab.Command
run_cmd Lean.logWarning "bar"


@@ -1,17 +0,0 @@
import Lake
open Lake DSL
package test
lean_lib Warn
target warn : PUnit := Job.async do
logWarning "foo"
return ((), .nil)
target warnArt pkg : PUnit := Job.async do
let warnArtFile := pkg.buildDir / "warn_art"
(((), ·)) <$> buildFileUnlessUpToDate warnArtFile .nil do
logWarning "foo-file"
createParentDirs warnArtFile
IO.FS.writeFile warnArtFile ""


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euxo pipefail
LAKE=${LAKE:-../../.lake/build/bin/lake}
./clean.sh
# Test Lake warnings produce build failures with `--wfail`
$LAKE build warn | grep --color foo
$LAKE build warn | grep --color foo # test idempotent
$LAKE build warn --wfail && exit 1 || true
$LAKE build warnArt | grep --color foo-file
$LAKE build warnArt | grep --color foo-file # test `buildFileUpToDate` cache
$LAKE build warnArt --wfail && exit 1 || true
# Test Lean warnings produce build failures with `--wfail`
$LAKE build Warn | grep --color bar
$LAKE build Warn | grep --color bar # test Lean module build log cache
$LAKE build Warn --wfail && exit 1 || true


@@ -175,7 +175,12 @@ static obj_res spawn(string_ref const & proc_name, array_ref<string_ref> const &
object * parent_stdout = box(0); setup_stdio(&saAttr, &child_stdout, &parent_stdout, false, stdout_mode);
object * parent_stderr = box(0); setup_stdio(&saAttr, &child_stderr, &parent_stderr, false, stderr_mode);
std::string command = proc_name.to_std_string();
std::string program = proc_name.to_std_string();
// Always escape program in cmdline, in case it contains spaces
std::string command = "\"";
command += program;
command += "\"";
// This needs some thought, on Windows we must pass a command string
// which is a valid command, that is a fully assembled command to be executed.
@@ -247,6 +252,8 @@ static obj_res spawn(string_ref const & proc_name, array_ref<string_ref> const &
// Create the child process.
bool bSuccess = CreateProcess(
// Passing `program` here should be more robust, but would require adding a `.exe` extension
// and searching through `PATH` where necessary
NULL,
const_cast<char *>(command.c_str()), // command line
NULL, // process security attributes


@@ -115,18 +115,14 @@ ENDFOREACH(T)
# LEAN BENCHMARK TESTS
# do not test all .lean files in bench/
if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
message(STATUS "Skipping compiler tests on Windows because of shared library limit on number of exported symbols")
else()
file(GLOB LEANBENCHTESTS "${LEAN_SOURCE_DIR}/../tests/bench/*.lean.expected.out")
FOREACH(T_OUT ${LEANBENCHTESTS})
string(REPLACE ".expected.out" "" T ${T_OUT})
GET_FILENAME_COMPONENT(T_NAME ${T} NAME)
add_test(NAME "leanbenchtest_${T_NAME}"
WORKING_DIRECTORY "${LEAN_SOURCE_DIR}/../tests/bench"
COMMAND bash -c "${TEST_VARS} ./test_single.sh ${T_NAME}")
ENDFOREACH(T_OUT)
endif()
file(GLOB LEANBENCHTESTS "${LEAN_SOURCE_DIR}/../tests/bench/*.lean.expected.out")
FOREACH(T_OUT ${LEANBENCHTESTS})
string(REPLACE ".expected.out" "" T ${T_OUT})
GET_FILENAME_COMPONENT(T_NAME ${T} NAME)
add_test(NAME "leanbenchtest_${T_NAME}"
WORKING_DIRECTORY "${LEAN_SOURCE_DIR}/../tests/bench"
COMMAND bash -c "${TEST_VARS} ./test_single.sh ${T_NAME}")
ENDFOREACH(T_OUT)
file(GLOB LEANINTERPTESTS "${LEAN_SOURCE_DIR}/../tests/plugin/*.lean")
FOREACH(T ${LEANINTERPTESTS})
@@ -146,19 +142,15 @@ FOREACH(T ${LEANT0TESTS})
ENDFOREACH(T)
# LEAN PACKAGE TESTS
if(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
message(STATUS "Skipping compiler tests on Windows because of shared library limit on number of exported symbols")
else()
file(GLOB LEANPKGTESTS "${LEAN_SOURCE_DIR}/../tests/pkg/*")
FOREACH(T ${LEANPKGTESTS})
if(IS_DIRECTORY ${T})
GET_FILENAME_COMPONENT(T_NAME ${T} NAME)
add_test(NAME "leanpkgtest_${T_NAME}"
WORKING_DIRECTORY "${T}"
COMMAND bash -c "${TEST_VARS} ./test.sh")
endif()
ENDFOREACH(T)
endif()
file(GLOB LEANPKGTESTS "${LEAN_SOURCE_DIR}/../tests/pkg/*")
FOREACH(T ${LEANPKGTESTS})
if(IS_DIRECTORY ${T})
GET_FILENAME_COMPONENT(T_NAME ${T} NAME)
add_test(NAME "leanpkgtest_${T_NAME}"
WORKING_DIRECTORY "${T}"
COMMAND bash -c "${TEST_VARS} ./test.sh")
endif()
ENDFOREACH(T)
# LEAN SERVER TESTS
file(GLOB LEANTESTS "${LEAN_SOURCE_DIR}/../tests/lean/server/*.lean")

Binary file not shown.

Some files were not shown because too many files have changed in this diff.