Compare commits

..

45 Commits

Author SHA1 Message Date
Leonardo de Moura
89e1b4d304 chore: reduce benchmark stack usage
We are running out of stack space in debug mode
2024-08-06 10:23:25 -07:00
Leonardo de Moura
f8816b8c15 chore: recursion => while at instantiate_mvars_fn 2024-08-06 10:23:25 -07:00
Leonardo de Moura
4a2fb6e922 chore: fix typo (#4929) 2024-08-06 15:27:20 +00:00
Henrik Böving
b7db82894b feat: generalized Parsec (#4774)
For experimentation by @the-sofi-uwu.

I also have an efficient number parser in LeanSAT that I am planning to
upstream after we have sufficiently bikeshed this change.
2024-08-06 15:17:23 +00:00
Sebastian Ullrich
35e1554ef7 chore: ignore stale leanpkg tests (#4925) 2024-08-06 08:19:33 +00:00
Leonardo de Moura
14d59b3599 feat: theorem diagnostics (#4924)
When `set_option diagnostics true`, for each theorem with size >
`diagnostics.threshold.proofSize`, display proof size, and the number of
applications for each constant symbol.
2024-08-06 01:01:28 +00:00
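A minimal sketch of how the new diagnostic might be enabled (option names are taken from the commit message; the threshold value is illustrative):

```lean
-- Sketch: with `diagnostics` on, any theorem whose proof term exceeds
-- `diagnostics.threshold.proofSize` is reported along with its proof size
-- and the number of applications per constant symbol.
set_option diagnostics true in
set_option diagnostics.threshold.proofSize 32 in
theorem demo (n : Nat) : n + 0 = n := rfl
```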
Leonardo de Moura
a8e480cd52 chore: profile instantiateMVars at MutualDef.lean (#4923)
`instantiateMVars` can be a performance bottleneck when assembling the
final proof term.

For example, it takes approx. 1 second at

https://github.com/leanprover/LNSym/blob/proof_size_expt/Proofs/SHA512/Experiments/Sym30.lean
2024-08-05 19:15:56 +00:00
Leonardo de Moura
d07239d1bd perf: use lean_instantiate_expr_mvars at instantiateExprMVars (#4922)
This PR completes #4915
2024-08-05 19:02:54 +00:00
Leonardo de Moura
590de785cc chore: cleanup betaRev (#4921) 2024-08-05 18:02:20 +00:00
Lean stage0 autoupdater
d671d0d61a chore: update stage0 2024-08-05 17:53:40 +00:00
Leonardo de Moura
8e476e9d22 perf: instantiateExprMVars (#4915)
TODO: 
- Support for `zeta := true` at `apply_beta`.
- Investigate test failure. 
- Break PR in pieces because of bootstrapping issues. The current PR
updates a stage0 file to workaround the issue.

Motivation: significant performance improvement at
https://github.com/leanprover/LNSym/blob/proof_size_expt/Proofs/SHA512/Experiments/Sym30.lean

With M1 Pro:
- Before: 4.56 secs
- After: 3.16 secs

Successfully built stage2 using this PR
2024-08-05 17:15:22 +00:00
Sebastian Ullrich
a3d144a362 feat: new snapshot architecture on the cmdline (#3106)
This is #3014 with cad5cce reverted for testing.
2024-08-05 15:57:42 +00:00
Sebastian Ullrich
87d41e6326 chore: missing include on Windows (#4919) 2024-08-05 15:50:43 +00:00
Kim Morrison
d6cb2432c6 chore: remove unnecessary steps from release checklist (#4914) 2024-08-05 01:57:30 +00:00
Kim Morrison
c0ffc85d75 chore: require docs in BitVec (#4913) 2024-08-05 01:12:04 +00:00
Leonardo de Moura
f62359acc7 perf: use lean_instantiate_level_mvars (#4912)
implemented in C/C++.
Next step: same for `instantiateExprMVars`
2024-08-04 23:21:57 +00:00
Shuhao Song
2d09c96caf chore: cli help text: comma-separate alternative option forms (#4911)
The help message of Lean command line contains
```
--o=oname -o create olean file
```
This may lead to the misunderstanding that the command needs both
`--o=oname` and `-o`, i.e. `lean --o=test.o -o test.lean`. The help
messages of GNU coreutils tools such as `ls` use `-a, --all ...`, which
might be better.
Some discussion is on Zulip thread
[https://leanprover.zulipchat.com/#narrow/stream/270676-lean4/topic/The.20help.20message.20of.20Lean.20command.20line](https://leanprover.zulipchat.com/#narrow/stream/270676-lean4/topic/The.20help.20message.20of.20Lean.20command.20line).
2024-08-04 22:54:34 +00:00
Lean stage0 autoupdater
21b4377d36 chore: update stage0 2024-08-04 19:11:50 +00:00
Leonardo de Moura
1e9d96be22 perf: add lean_instantiate_level_mvars (#4910)
The new code is not active yet because of bootstrapping issues.
It requires an `update_stage0`.
2024-08-04 18:31:44 +00:00
Leonardo de Moura
647a5e9492 perf: use NatPow Int instead of HPow Int Nat Int (#4903)
This modification improves the performance of the example in issue
#4861. It no longer times out but is still expensive.

Here is the analysis of the performance issue: Given `(x : Int)`, to
elaborate `x ^ 1`, a few default instances have to be tried.

First, the homogeneous instance is tried and fails since `Int` does not
implement `Pow Int`. Then, the `NatPow` instance is tried, and it also
fails. The same process is performed for each term of the form `p ^ 1`.
There are several of them in #4861. After all of these fail, the lower
priority default instance for numerals is tried, and `x ^ 1` becomes `x
^ (1 : Nat)`. Then, `HPow Int Nat Int` can be applied, and the
elaboration succeeds. However, this process has to be repeated for every
single term of the form `p ^ 1`. The elaborator tries all homogeneous
`HPow` and `NatPow` instances for all `p ^ 1` terms before trying the
lower priority default instance `OfNat`.

This commit ensures `Int` has a `NatPow` instance instead of `HPow Int
Nat Int`. This change shortcuts the process, but it still first tries
the homogeneous `HPow` instance, fails, and then tries `NatPow`. The
elaboration can be made much more efficient by writing `p ^ (1 : Nat)`.
2024-08-03 00:35:04 +00:00
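The elaboration behavior described above can be illustrated with a minimal sketch (not taken from the PR itself):

```lean
-- Each `p ^ 1` first tries the homogeneous `HPow` and `NatPow` default
-- instances before the lower-priority numeral default turns `1` into `(1 : Nat)`.
example (x : Int) : Int := x ^ 1
-- Annotating the exponent's type skips the failed instance attempts entirely:
example (x : Int) : Int := x ^ (1 : Nat)
```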
Clement Courbet
9c4028aab4 perf: avoid expr copies in replace_rec_fn::apply (#4702)
Those represent ~13% of the time spent in `save_result`,
even though `r` is a temporary in all cases but one.

See #4698 for details.

---------

Co-authored-by: Leonardo de Moura <leomoura@amazon.com>
2024-08-02 18:32:36 +00:00
Clement Courbet
2c002718e0 perf: fix implementation of move constructors and move assignment operators (#4700)

Right now those constructors result in a copy instead of the desired
move. We've measured that expr copying and assignment by itself uses
around 10% of total runtime on our workloads.

See #4698 for details.
2024-08-02 17:55:03 +00:00
Sebastian Ullrich
b07384acbb feat: accept user-defined options on the cmdline (#4741)
Initial options are now re-parsed and validated after importing. Cmdline
option assignments prefixed with `weak.` are silently discarded if the
option name without the prefix does not exist.

Fixes #3403
2024-08-02 12:24:56 +00:00
Sebastian Ullrich
efc99b982e chore: deprecate Nix-based build, remove interactive components (#4895)
Users who prefer the flake build should maintain it externally
2024-08-02 09:57:34 +00:00
Siddharth
ee430b6c80 feat: getLsb_replicate (#4873)
This allows bitblasting `BitVec.replicate`.

I changed the definition of `BitVec.replicate` to use `BitVec.cast` in
order to make the proof smoother, since it's an easier time simplifying
away terms with `BitVec.cast`.

---------

Co-authored-by: Tobias Grosser <tobias@grosser.es>
2024-08-01 23:07:19 +00:00
Alok Singh
a8740f5ed9 doc: add docstring to IO.FS.realpath (#4648)
Based on `IO.FS.rename` template.
2024-08-01 14:00:54 +00:00
Sebastian Ullrich
5e6a3cf5f9 chore: revert "fix: make import resolution case-sensitive on all platforms" (#4896)
Reverts leanprover/lean4#4538 because of unexpected overhead
2024-08-01 13:55:37 +00:00
Sebastian Ullrich
0ed1cf7244 fix: LEAN_EXPORT in sharecommon (#4893) 2024-08-01 13:03:25 +02:00
Lean stage0 autoupdater
e83f78d5af chore: update stage0 2024-08-01 06:54:29 +00:00
David Thrane Christiansen
32b9de8c77 fix: export symbols needed by Verso (#4884)
Verso needed a symbol that was unexported - this exposes it again.
2024-08-01 04:56:27 +00:00
Leonardo de Moura
a856016b9d perf: precise cache for expr_eq_fn (#4890)
This performance issue was exposed by the benchmarks at
https://github.com/leanprover/LNSym/tree/proof_size_expt/Proofs/SHA512/Experiments
2024-08-01 02:56:41 +00:00
Siddharth
c517688f1d feat: ushiftRight bitblasting (#4872)
This adds theorems `ushiftRight_rec_zero`, `ushiftRight_rec_succ`,
`ushiftRight_rec_eq`, and `shiftRight_eq_shiftRight_rec`.

---------

Co-authored-by: Tobias Grosser <github@grosser.es>
2024-07-31 21:44:06 +00:00
Leonardo de Moura
db594425bf refactor: sharecommon (#4887)
This PR also fixes a missing borrow annotation.
2024-07-31 19:13:12 +00:00
Kim Morrison
dcea47db02 chore: shorten suggestion about diagnostics (#4882)
This message is often incorporated into source files via `#guard_msgs`.
This change ensures it won't go over the 100 character ruler, and I
think is equally grammatical. :-)
2024-07-31 17:56:43 +00:00
Siddharth
f869902a4b feat: Nat simprocs for simplifying bit expressions (#4874)
This came up in the context of simplifying proof states for
https://github.com/leanprover/LNSym.
2024-07-31 17:26:05 +00:00
Sebastian Ullrich
d5a8c9647f fix: make import resolution case-sensitive on all platforms (#4538)
Co-authored-by: Mac Malone <tydeu@hatpress.net>
2024-07-31 16:48:14 +00:00
Sebastian Ullrich
d19bab0c27 feat: include command (#4883)
To be implemented in #4814
2024-07-31 13:25:54 +00:00
Sebastian Ullrich
6a4159c4a7 refactor: split out Lean.Language.Lean.Types (#4881) 2024-07-31 09:50:12 +00:00
Kim Morrison
8acdafd5b3 chore: correct doc-string for Array.swap! (#4869) 2024-07-31 04:02:30 +00:00
Kim Morrison
688da9d8a7 chore: updates to release_checklist.md (#4876)
Co-authored-by: Kyle Miller <kmill31415@gmail.com>
2024-07-31 04:02:19 +00:00
Kyle Miller
d5e7dbad80 fix: make "use `set_option diagnostics true" message conditional on current setting (#4781)
It is confusing that the message suggesting to use the `diagnostics`
option is given even when the option is already set. This PR makes use
of lazy message data to make the message contingent on the option being
false.

It also tones down the promise that there is any diagnostic information
available, since sometimes there is nothing to report.

Suggested by Johan Commelin.
2024-07-31 03:53:09 +00:00
Kyle Miller
82f48740dc chore: copy release notes from releases/v4.10.0 (#4864)
This also updates the 4.9.0 release notes with backported changes.
2024-07-31 03:30:13 +00:00
Kyle Miller
a827759f1d fix: mistake in statement of List.take_takeWhile (#4875)
This theorem is meant to say that `List.take` and `List.takeWhile`
commute.
2024-07-31 03:29:34 +00:00
Kim Morrison
a4015ca36c chore: rename PSigma.exists (#4878) 2024-07-31 03:26:17 +00:00
Kim Morrison
81719f94c9 chore: fix binder explicitness in List.map_subset (#4877) 2024-07-31 03:03:52 +00:00
324 changed files with 2649 additions and 1649 deletions


@@ -103,7 +103,7 @@ jobs:
continue-on-error: true
- name: Build manual
run: |
nix build $NIX_BUILD_ARGS --update-input lean --no-write-lock-file ./doc#{lean-mdbook,leanInk,alectryon,test,inked} -o push-doc
nix build $NIX_BUILD_ARGS --update-input lean --no-write-lock-file ./doc#{lean-mdbook,leanInk,alectryon,inked} -o push-doc
nix build $NIX_BUILD_ARGS --update-input lean --no-write-lock-file ./doc
# https://github.com/netlify/cli/issues/1809
cp -r --dereference ./result ./dist
@@ -146,5 +146,3 @@ jobs:
- name: Fixup CCache Cache
run: |
sudo chown -R $USER /nix/var/cache
- name: CCache stats
run: CCACHE_DIR=/nix/var/cache/ccache nix run .#nixpkgs.ccache -- -s


@@ -14,10 +14,289 @@ Development in progress.
v4.10.0
----------
Release candidate, release notes will be copied from branch `releases/v4.10.0` once completed.
### Language features, tactics, and metaprograms
* `split` tactic:
* [#4401](https://github.com/leanprover/lean4/pull/4401) improves the strategy `split` uses to generalize discriminants of matches and adds `trace.split.failure` trace class for diagnosing issues.
* `rw` tactic:
* [#4385](https://github.com/leanprover/lean4/pull/4385) prevents the tactic from claiming pre-existing goals are new subgoals.
* [dac1da](https://github.com/leanprover/lean4/commit/dac1dacc5b39911827af68247d575569d9c399b5) adds configuration for ordering new goals, like for `apply`.
* `simp` tactic:
* [#4430](https://github.com/leanprover/lean4/pull/4430) adds `dsimproc`s for `if` expressions (`ite` and `dite`).
* [#4434](https://github.com/leanprover/lean4/pull/4434) improves heuristics for unfolding. Equational lemmas now have priorities, so more specific equational lemmas are tried before a possible catch-all.
* [#4481](https://github.com/leanprover/lean4/pull/4481) fixes an issue where function-valued `OfNat` numeric literals would become denormalized.
* [#4467](https://github.com/leanprover/lean4/pull/4467) fixes an issue where dsimp theorems might not apply to literals.
* [#4484](https://github.com/leanprover/lean4/pull/4484) fixes the source position for the warning for deprecated simp arguments.
* [#4258](https://github.com/leanprover/lean4/pull/4258) adds docstrings for `dsimp` configuration.
* [#4567](https://github.com/leanprover/lean4/pull/4567) improves the accuracy of used simp lemmas reported by `simp?`.
* [fb9727](https://github.com/leanprover/lean4/commit/fb97275dcbb683efe6da87ed10a3f0cd064b88fd) adds (but does not implement) the simp configuration option `implicitDefEqProofs`, which will enable including `rfl`-theorems in proof terms.
* `omega` tactic:
* [#4360](https://github.com/leanprover/lean4/pull/4360) makes the tactic generate error messages lazily, improving its performance when used in tactic combinators.
* `bv_omega` tactic:
* [#4579](https://github.com/leanprover/lean4/pull/4579) works around changes to the definition of `Fin.sub` in this release.
* [#4490](https://github.com/leanprover/lean4/pull/4490) sets up groundwork for a tactic index in generated documentation, as there was in Lean 3. See PR description for details.
* **Commands**
* [#4370](https://github.com/leanprover/lean4/pull/4370) makes the `variable` command fully elaborate binders during validation, fixing an issue where some errors would be reported only at the next declaration.
* [#4408](https://github.com/leanprover/lean4/pull/4408) fixes a discrepancy in universe parameter order between `theorem` and `def` declarations.
* [#4493](https://github.com/leanprover/lean4/pull/4493) and
[#4482](https://github.com/leanprover/lean4/pull/4482) fix a discrepancy in the elaborators for `theorem`, `def`, and `example`,
making `Prop`-valued `example`s and other definition commands elaborate like `theorem`s.
* [8f023b](https://github.com/leanprover/lean4/commit/8f023b85c554186ae562774b8122322d856c674e), [3c4d6b](https://github.com/leanprover/lean4/commit/3c4d6ba8648eb04d90371eb3fdbd114d16949501) and [0783d0](https://github.com/leanprover/lean4/commit/0783d0fcbe31b626fbd3ed2f29d838e717f09101) change the `#reduce` command to be able to control what gets reduced.
For example, `#reduce (proofs := true) (types := false) e` reduces both proofs and types in the expression `e`.
By default, neither proofs nor types are reduced.
* [#4489](https://github.com/leanprover/lean4/pull/4489) fixes an elaboration bug in `#check_tactic`.
* [#4505](https://github.com/leanprover/lean4/pull/4505) adds support for `open _root_.<namespace>`.
* **Options**
* [#4576](https://github.com/leanprover/lean4/pull/4576) adds the `debug.byAsSorry` option. Setting `set_option debug.byAsSorry true` causes all `by ...` terms to elaborate as `sorry`.
* [7b56eb](https://github.com/leanprover/lean4/commit/7b56eb20a03250472f4b145118ae885274d1f8f7) and [d8e719](https://github.com/leanprover/lean4/commit/d8e719f9ab7d049e423473dfc7a32867d32c856f) add the `debug.skipKernelTC` option. Setting `set_option debug.skipKernelTC true` turns off kernel typechecking. This is meant for temporarily working around kernel performance issues, and it compromises soundness since buggy tactics may produce invalid proofs, which will not be caught if this option is set to true.
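A sketch of how these options might be used locally (the theorems are illustrative):

```lean
-- Sketch: elaborate all `by ...` blocks as `sorry`, e.g. to profile statement
-- elaboration without paying for proofs (the declaration will warn about `sorry`).
set_option debug.byAsSorry true in
theorem fast : 1 + 1 = 2 := by decide

-- Sketch: skip kernel typechecking; unsound, only for diagnosing kernel
-- performance issues.
set_option debug.skipKernelTC true in
theorem unchecked : 2 + 2 = 4 := by decide
```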
* [#4301](https://github.com/leanprover/lean4/pull/4301)
adds a linter to flag situations where a local variable's name is one of
the argumentless constructors of its type. This can arise when a user either
doesn't open a namespace or doesn't add a dot or leading qualifier, as
in the following:
```lean
inductive Tree (α : Type) where
| leaf
| branch (left : Tree α) (val : α) (right : Tree α)
def depth : Tree α → Nat
| leaf => 0
```
With this linter, the `leaf` pattern is highlighted as a local
variable whose name overlaps with the constructor `Tree.leaf`.
The linter can be disabled with `set_option linter.constructorNameAsVariable false`.
Additionally, when a name applied to arguments in a pattern is not a valid constructor or constant, the error message now suggests similar names that would be valid. This means that the following definition:
```lean
def length (list : List α) : Nat :=
match list with
| nil => 0
| cons x xs => length xs + 1
```
now results in the following warning:
```
warning: Local variable 'nil' resembles constructor 'List.nil' - write '.nil' (with a dot) or 'List.nil' to use the constructor.
note: this linter can be disabled with `set_option linter.constructorNameAsVariable false`
```
and error:
```
invalid pattern, constructor or constant marked with '[match_pattern]' expected
Suggestion: 'List.cons' is similar
```
* **Metaprogramming**
* [#4454](https://github.com/leanprover/lean4/pull/4454) adds public `Name.isInternalDetail` function for filtering declarations using naming conventions for internal names.
* **Other fixes or improvements**
* [#4416](https://github.com/leanprover/lean4/pull/4416) sorts the output of `#print axioms` for determinism.
* [#4528](https://github.com/leanprover/lean4/pull/4528) fixes error message range for the cdot focusing tactic.
### Language server, widgets, and IDE extensions
* [#4443](https://github.com/leanprover/lean4/pull/4443) makes the watchdog more resilient against badly behaving clients.
### Pretty printing
* [#4433](https://github.com/leanprover/lean4/pull/4433) restores fallback pretty printers when context is not available, and documents `addMessageContext`.
* [#4556](https://github.com/leanprover/lean4/pull/4556) introduces the `pp.maxSteps` option and sets the default value of `pp.deepTerms` to `false`. Together, these keep excessively large or deep terms from overwhelming the Infoview.
### Library
* [#4560](https://github.com/leanprover/lean4/pull/4560) splits `GetElem` class into `GetElem` and `GetElem?`.
This enables removing `Decidable` instance arguments from `GetElem.getElem?` and `GetElem.getElem!`, improving their rewritability.
See the docstrings for these classes for more information.
* `Array`
* [#4389](https://github.com/leanprover/lean4/pull/4389) makes `Array.toArrayAux_eq` be a `simp` lemma.
* [#4399](https://github.com/leanprover/lean4/pull/4399) improves robustness of the proof for `Array.reverse_data`.
* `List`
* [#4469](https://github.com/leanprover/lean4/pull/4469) and [#4475](https://github.com/leanprover/lean4/pull/4475) improve the organization of the `List` API.
* [#4470](https://github.com/leanprover/lean4/pull/4470) improves the `List.set` and `List.concat` API.
* [#4472](https://github.com/leanprover/lean4/pull/4472) upstreams lemmas about `List.filter` from Batteries.
* [#4473](https://github.com/leanprover/lean4/pull/4473) adjusts `@[simp]` attributes.
* [#4488](https://github.com/leanprover/lean4/pull/4488) makes `List.getElem?_eq_getElem` be a simp lemma.
* [#4487](https://github.com/leanprover/lean4/pull/4487) adds missing `List.replicate` API.
* [#4521](https://github.com/leanprover/lean4/pull/4521) adds lemmas about `List.map`.
* [#4500](https://github.com/leanprover/lean4/pull/4500) changes `List.length_cons` to use `as.length + 1` instead of `as.length.succ`.
* [#4524](https://github.com/leanprover/lean4/pull/4524) fixes the statement of `List.filter_congr`.
* [#4525](https://github.com/leanprover/lean4/pull/4525) changes binder explicitness in `List.bind_map`.
* [#4550](https://github.com/leanprover/lean4/pull/4550) adds `maximum?_eq_some_iff'` and `minimum?_eq_some_iff?`.
* [#4400](https://github.com/leanprover/lean4/pull/4400) switches the normal forms for indexing `List` and `Array` to `xs[n]` and `xs[n]?`.
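The new normal forms can be sketched as follows (illustrative examples, not from the PRs):

```lean
-- `xs[n]` requires an in-bounds proof, discharged here from the hypothesis `h`;
-- `xs[n]?` returns an `Option` instead.
example (xs : List Nat) (h : 2 < xs.length) : Nat := xs[2]
example (xs : Array Nat) : Option Nat := xs[2]?
#eval ([1, 2, 3] : List Nat)[1]?  -- some 2
```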
* `HashMap`
* [#4372](https://github.com/leanprover/lean4/pull/4372) fixes linearity in `HashMap.insert` and `HashMap.erase`, leading to a 40% speedup in a replace-heavy workload.
* `Option`
* [#4403](https://github.com/leanprover/lean4/pull/4403) generalizes type of `Option.forM` from `Unit` to `PUnit`.
* [#4504](https://github.com/leanprover/lean4/pull/4504) removes the `simp` attribute from `Option.elim` and instead adds it to individual reduction lemmas, making unfolding less aggressive.
* `Nat`
* [#4242](https://github.com/leanprover/lean4/pull/4242) adds missing theorems for `n + 1` and `n - 1` normal forms.
* [#4486](https://github.com/leanprover/lean4/pull/4486) makes `Nat.min_assoc` be a simp lemma.
* [#4522](https://github.com/leanprover/lean4/pull/4522) moves `@[simp]` from `Nat.pred_le` to `Nat.sub_one_le`.
* [#4532](https://github.com/leanprover/lean4/pull/4532) changes various `Nat.succ n` to `n + 1`.
* `Int`
* [#3850](https://github.com/leanprover/lean4/pull/3850) adds complete div/mod simprocs for `Int`.
* `String`/`Char`
* [#4357](https://github.com/leanprover/lean4/pull/4357) makes the byte size interface `Nat`-valued, with functions `Char.utf8Size` and `String.utf8ByteSize`.
* [#4438](https://github.com/leanprover/lean4/pull/4438) upstreams `Char.ext` from Batteries and adds some `Char` documentation to the manual.
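A small sketch of the `Nat`-valued byte-size API (illustrative):

```lean
#eval 'a'.utf8Size       -- 1 (ASCII characters occupy one byte)
#eval "λx".utf8ByteSize  -- 3 ('λ' encodes to two UTF-8 bytes, 'x' to one)
```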
* `Fin`
* [#4421](https://github.com/leanprover/lean4/pull/4421) adjusts `Fin.sub` to be more performant in definitional equality checks.
* `Prod`
* [#4526](https://github.com/leanprover/lean4/pull/4526) adds missing `Prod.map` lemmas.
* [#4533](https://github.com/leanprover/lean4/pull/4533) fixes binder explicitness in lemmas.
* `BitVec`
* [#4428](https://github.com/leanprover/lean4/pull/4428) adds missing `simproc` for `BitVec` equality.
* [#4417](https://github.com/leanprover/lean4/pull/4417) adds `BitVec.twoPow` and lemmas, toward bitblasting multiplication for LeanSAT.
* `Std` library
* [#4499](https://github.com/leanprover/lean4/pull/4499) introduces `Std`, a library situated between `Init` and `Lean`, providing functionality not in the prelude both to Lean's implementation and to external users.
* **Other fixes or improvements**
* [#3056](https://github.com/leanprover/lean4/pull/3056) standardizes on using `(· == a)` over `(a == ·)`.
* [#4502](https://github.com/leanprover/lean4/pull/4502) fixes errors reported by running the library through the Batteries linters.
### Lean internals
* [#4391](https://github.com/leanprover/lean4/pull/4391) makes `getBitVecValue?` recognize `BitVec.ofNatLt`.
* [#4410](https://github.com/leanprover/lean4/pull/4410) adjusts `instantiateMVars` algorithm to zeta reduce `let` expressions while beta reducing instantiated metavariables.
* [#4420](https://github.com/leanprover/lean4/pull/4420) fixes occurs check for metavariable assignments to also take metavariable types into account.
* [#4425](https://github.com/leanprover/lean4/pull/4425) fixes `forEachModuleInDir` to iterate over each Lean file exactly once.
* [#3886](https://github.com/leanprover/lean4/pull/3886) adds support to build Lean core oleans using Lake.
* **Defeq and WHNF algorithms**
* [#4387](https://github.com/leanprover/lean4/pull/4387) improves performance of `isDefEq` by eta reducing lambda-abstracted terms during metavariable assignments, since these are beta reduced during metavariable instantiation anyway.
* [#4388](https://github.com/leanprover/lean4/pull/4388) removes redundant code in `isDefEqQuickOther`.
* **Typeclass inference**
* [#4530](https://github.com/leanprover/lean4/pull/4530) fixes handling of metavariables when caching results at `synthInstance?`.
* **Elaboration**
* [#4426](https://github.com/leanprover/lean4/pull/4426) makes the "don't know how to synthesize implicit argument" error report the name of the argument more reliably.
* [#4497](https://github.com/leanprover/lean4/pull/4497) fixes a name resolution bug for generalized field notation (dot notation).
* [#4536](https://github.com/leanprover/lean4/pull/4536) blocks the implicit lambda feature for `(e :)` notation.
* [#4562](https://github.com/leanprover/lean4/pull/4562) makes it an error to have two functions with the same name in a `where`/`let rec` block.
* Recursion principles
* [#4549](https://github.com/leanprover/lean4/pull/4549) refactors `findRecArg`, extracting `withRecArgInfo`.
Errors are now reported in parameter order rather than the order they are tried (non-indices are tried first).
For every argument that is not tried, the error now explains why, even when the reason is obvious (e.g. it is part of the fixed prefix or is `Prop`-typed).
* Porting core C++ to Lean
* [#4474](https://github.com/leanprover/lean4/pull/4474) takes a step to refactor `constructions` toward a future port to Lean.
* [#4498](https://github.com/leanprover/lean4/pull/4498) ports `mk_definition_inferring_unsafe` to Lean.
* [#4516](https://github.com/leanprover/lean4/pull/4516) ports `recOn` construction to Lean.
* [#4517](https://github.com/leanprover/lean4/pull/4517), [#4653](https://github.com/leanprover/lean4/pull/4653), and [#4651](https://github.com/leanprover/lean4/pull/4651) port `below` and `brecOn` construction to Lean.
* Documentation
* [#4501](https://github.com/leanprover/lean4/pull/4501) adds a more-detailed docstring for `PersistentEnvExtension`.
* **Other fixes or improvements**
* [#4382](https://github.com/leanprover/lean4/pull/4382) removes `@[inline]` attribute from `NameMap.find?`, which caused respecialization at each call site.
* [5f9ded](https://github.com/leanprover/lean4/commit/5f9dedfe5ee9972acdebd669f228f487844a6156) improves output of `trace.Elab.snapshotTree`.
* [#4424](https://github.com/leanprover/lean4/pull/4424) removes "you might need to open '{dir}' in your editor" message that is now handled by Lake and the VS Code extension.
* [#4451](https://github.com/leanprover/lean4/pull/4451) improves the performance of `CollectMVars` and `FindMVar`.
* [#4479](https://github.com/leanprover/lean4/pull/4479) adds missing `DecidableEq` and `Repr` instances for intermediate structures used by the `BitVec` and `Fin` simprocs.
* [#4492](https://github.com/leanprover/lean4/pull/4492) adds tests for a previous `isDefEq` issue.
* [9096d6](https://github.com/leanprover/lean4/commit/9096d6fc7180fe533c504f662bcb61550e4a2492) removes `PersistentHashMap.size`.
* [#4508](https://github.com/leanprover/lean4/pull/4508) fixes `@[implemented_by]` for functions defined by well-founded recursion.
* [#4509](https://github.com/leanprover/lean4/pull/4509) adds additional tests for `apply?` tactic.
* [d6eab3](https://github.com/leanprover/lean4/commit/d6eab393f4df9d473b5736d636b178eb26d197e6) fixes a benchmark.
* [#4563](https://github.com/leanprover/lean4/pull/4563) adds a workaround for a bug in `IndPredBelow.mkBelowMatcher`.
* **Cleanup:** [#4380](https://github.com/leanprover/lean4/pull/4380), [#4431](https://github.com/leanprover/lean4/pull/4431), [#4494](https://github.com/leanprover/lean4/pull/4494), [e8f768](https://github.com/leanprover/lean4/commit/e8f768f9fd8cefc758533bc76e3a12b398ed4a39), [de2690](https://github.com/leanprover/lean4/commit/de269060d17a581ed87f40378dbec74032633b27), [d3a756](https://github.com/leanprover/lean4/commit/d3a7569c97123d022828106468d54e9224ed8207), [#4404](https://github.com/leanprover/lean4/pull/4404), [#4537](https://github.com/leanprover/lean4/pull/4537).
### Compiler, runtime, and FFI
* [d85d3d](https://github.com/leanprover/lean4/commit/d85d3d5f3a09ff95b2ee47c6f89ef50b7e339126) fixes criterion for tail-calls in ownership calculation.
* [#3963](https://github.com/leanprover/lean4/pull/3963) adds validation of UTF-8 at the C++-to-Lean boundary in the runtime.
* [#4512](https://github.com/leanprover/lean4/pull/4512) fixes missing unboxing in interpreter when loading initialized value.
* [#4477](https://github.com/leanprover/lean4/pull/4477) exposes the compiler flags for the bundled C compiler (clang).
### Lake
* [#4384](https://github.com/leanprover/lean4/pull/4384) deprecates `inputFile` and replaces it with `inputBinFile` and `inputTextFile`. Unlike `inputBinFile` (and `inputFile`), `inputTextFile` normalizes line endings, which helps ensure text file traces are platform-independent.
* [#4371](https://github.com/leanprover/lean4/pull/4371) simplifies dependency resolution code.
* [#4439](https://github.com/leanprover/lean4/pull/4439) touches up the Lake configuration DSL and makes other improvements:
string literals can now be used instead of identifiers for names;
French quotes are no longer used in `lake new` and `lake init` templates;
the `exe` template now uses `Main` for the main module;
the `math` template error when `lean-toolchain` fails to download is improved;
and unknown configuration fields are downgraded from an error to a warning to improve cross-version compatibility.
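A sketch of a lakefile using the relaxed name syntax (the package and target names here are hypothetical; the exact accepted forms follow the PR description):

```lean
import Lake
open Lake DSL

-- String literals can now stand in for identifiers, so hyphenated names
-- like "doc-gen4" no longer need French quotes.
package "my-pkg"

lean_lib "MyLib"

@[default_target]
lean_exe "my-exe" where
  root := `Main
```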
* [#4496](https://github.com/leanprover/lean4/pull/4496) tweaks `require` syntax and updates docs. Now `require` in TOML for a package name such as `doc-gen4` does not need French quotes.
* [#4485](https://github.com/leanprover/lean4/pull/4485) fixes a bug where package versions in indirect dependencies would take precedence over direct dependencies.
* [#4478](https://github.com/leanprover/lean4/pull/4478) fixes a bug where Lake incorrectly included the module dynamic library in a platform-independent trace.
* [#4529](https://github.com/leanprover/lean4/pull/4529) fixes some issues with bad import errors.
A bad import in an executable no longer prevents the executable's root
module from being built. This also fixes a problem where the location
of a transitive bad import would not be shown.
The root module of the executable now respects `nativeFacets`.
* [#4564](https://github.com/leanprover/lean4/pull/4564) fixes a bug where non-identifier script names could not be entered on the CLI without French quotes.
* [#4566](https://github.com/leanprover/lean4/pull/4566) addresses a few issues with precompiled libraries.
* Fixes a bug where Lake would always precompile the package of a module.
* If a module is precompiled, it now precompiles its imports. Previously, it would only do this if imported.
* [#4495](https://github.com/leanprover/lean4/pull/4495), [#4692](https://github.com/leanprover/lean4/pull/4692), [#4849](https://github.com/leanprover/lean4/pull/4849)
add a new type of `require` that fetches package metadata from a
registry API endpoint (e.g. Reservoir) and then clones a Git package
using the information provided. To require such a dependency, the new
syntax is:
```lean
require <scope> / <pkg-name> [@ git <rev>]
-- Examples:
require "leanprover" / "doc-gen4"
require "leanprover-community" / "proofwidgets" @ git "v0.0.39"
```
Or in TOML:
```toml
[[require]]
name = "<pkg-name>"
scope = "<scope>"
rev = "<rev>"
```
Unlike with Git dependencies, Lake can make use of the richer
information provided by the registry to determine the default branch of
the package. This means for repositories of packages like `doc-gen4`
which have a default branch that is not `master`, Lake will now use said
default branch (e.g., in `doc-gen4`'s case, `main`).
Lake also supports configuring the registry endpoint via an environment
variable: `RESERVIOR_API_URL`. Thus, any server providing a similar
interface to Reservoir can be used as the registry. Further
configuration options paralleling those of Cargo's [Alternative Registries](https://doc.rust-lang.org/cargo/reference/registries.html)
and [Source Replacement](https://doc.rust-lang.org/cargo/reference/source-replacement.html)
will come in the future.
### DevOps/CI
* [#4427](https://github.com/leanprover/lean4/pull/4427) uses Namespace runners for CI for `leanprover/lean4`.
* [#4440](https://github.com/leanprover/lean4/pull/4440) fixes speedcenter tests in CI.
* [#4441](https://github.com/leanprover/lean4/pull/4441) fixes an issue where a workflow change would break CI for unrebased PRs.
* [#4442](https://github.com/leanprover/lean4/pull/4442) fixes Wasm release-ci.
* [6d265b](https://github.com/leanprover/lean4/commit/6d265b42b117eef78089f479790587a399da7690) works around `github.event.pull_request.merge_commit_sha` sometimes not being available.
* [16cad2](https://github.com/leanprover/lean4/commit/16cad2b45c6a77efe4dce850dcdbaafaa7c91fc3) adds optimization for CI to not fetch complete history.
* [#4544](https://github.com/leanprover/lean4/pull/4544) causes releases to be marked as prerelease on GitHub.
* [#4446](https://github.com/leanprover/lean4/pull/4446) switches Lake to using `src/lake/lakefile.toml` to avoid needing to load a version of Lake to build Lake.
* Nix
* [5eb5fa](https://github.com/leanprover/lean4/commit/5eb5fa49cf9862e99a5bccff8d4ca1a062f81900) fixes `update-stage0-commit` for Nix.
* [#4476](https://github.com/leanprover/lean4/pull/4476) adds gdb to Nix shell.
* [e665a0](https://github.com/leanprover/lean4/commit/e665a0d716dc42ba79b339b95e01eb99fe932cb3) fixes `update-stage0` for Nix.
* [4808eb](https://github.com/leanprover/lean4/commit/4808eb7c4bfb98f212b865f06a97d46c44978a61) fixes `cacheRoots` for Nix.
* [#3811](https://github.com/leanprover/lean4/pull/3811) adds platform-dependent flag to lib target.
* [#4587](https://github.com/leanprover/lean4/pull/4587) adds linking of `-lStd` back into nix build flags on darwin.
### Breaking changes
* `Char.csize` is replaced by `Char.utf8Size` ([#4357](https://github.com/leanprover/lean4/pull/4357)).
* Library lemmas now are in terms of `(· == a)` over `(a == ·)` ([#3056](https://github.com/leanprover/lean4/pull/3056)).
* The normal forms for indexing into `List` and `Array` are now `xs[n]` and `xs[n]?` rather than functions like `List.get` ([#4400](https://github.com/leanprover/lean4/pull/4400)).
* Sometimes terms created via a sequence of unifications will be more eta reduced than before and proofs will require adaptation ([#4387](https://github.com/leanprover/lean4/pull/4387)).
* The `GetElem` class has been split into two; see the docstrings for `GetElem` and `GetElem?` for more information ([#4560](https://github.com/leanprover/lean4/pull/4560)).
v4.9.0
----------
----------
### Language features, tactics, and metaprograms
@@ -40,6 +319,8 @@ v4.9.0
* [#4395](https://github.com/leanprover/lean4/pull/4395) adds conservative fix for whitespace handling to avoid incremental reuse leading to goals in front of the text cursor being shown.
* [#4407](https://github.com/leanprover/lean4/pull/4407) fixes non-incremental commands in macros blocking further incremental reporting.
* [#4436](https://github.com/leanprover/lean4/pull/4436) fixes incremental reporting when there are nested tactics in terms.
* [#4459](https://github.com/leanprover/lean4/pull/4459) adds incrementality support for `next` and `if` tactics.
* [#4554](https://github.com/leanprover/lean4/pull/4554) disables incrementality for tactics in terms in tactics.
* **Functional induction**
* [#4135](https://github.com/leanprover/lean4/pull/4135) ensures that the names used for functional induction are reserved.
* [#4327](https://github.com/leanprover/lean4/pull/4327) adds support for structural recursion on reflexive types.
@@ -85,7 +366,7 @@ v4.9.0
When `index := false`, only the head function is taken into account, like in Lean 3.
This feature can help users diagnose tricky simp failures or issues in code from libraries
developed using Lean 3 and then ported to Lean 4.
In the following example, it will report that `foo` is a problematic theorem.
```lean
opaque f : Nat → Nat → Nat
@@ -105,7 +386,7 @@ v4.9.0
opaque f : Nat → Nat → Nat
@[simp] theorem foo : f x (no_index (x, y).2) = y := by sorry
example : f a b ≤ b := by
simp -- `foo` is still applied with `index := true`
```
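The Lean 3-style lookup described above can also be requested for a single invocation through the `simp` configuration; the following is a minimal sketch, assuming the `index` configuration field introduced in this release:
```lean
-- Disable discrimination-tree indexing for one `simp` call:
-- lemma lookup then considers only the head symbol, as in Lean 3.
example (a : Nat) : a + 0 = a := by
  simp (config := { index := false })
```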
@@ -123,6 +404,8 @@ v4.9.0
* [#4267](https://github.com/leanprover/lean4/pull/4267) causes signature elaboration errors to show even if there are parse errors in the body.
* [#4368](https://github.com/leanprover/lean4/pull/4368) improves error messages when numeric literals fail to synthesize an `OfNat` instance,
including special messages warning when the expected type of the numeral can be a proposition.
* [#4643](https://github.com/leanprover/lean4/pull/4643) fixes issue leading to nested error messages and info trees vanishing, where snapshot subtrees were not restored on reuse.
* [#4657](https://github.com/leanprover/lean4/pull/4657) calculates error suppression per snapshot, letting elaboration errors appear even when there are later parse errors ([RFC #3556](https://github.com/leanprover/lean4/issues/3556)).
* **Metaprogramming**
* [#4167](https://github.com/leanprover/lean4/pull/4167) adds `Lean.MVarId.revertAll` to revert all free variables.
* [#4169](https://github.com/leanprover/lean4/pull/4169) adds `Lean.MVarId.ensureNoMVar` to ensure the goal's target contains no expression metavariables.
@@ -239,6 +522,8 @@ v4.9.0
* [#4192](https://github.com/leanprover/lean4/pull/4192) fixes restoration of infotrees when auto-bound implicit feature is activated,
fixing a pretty printing error in hovers and strengthening the unused variable linter.
* [dfb496](https://github.com/leanprover/lean4/commit/dfb496a27123c3864571aec72f6278e2dad1cecf) fixes `declareBuiltin` to allow it to be called multiple times per declaration.
* [#4569](https://github.com/leanprover/lean4/pull/4569) fixes an issue introduced in a merge conflict, where the interrupt exception was swallowed by some `tryCatchRuntimeEx` uses.
* [b056a0](https://github.com/leanprover/lean4/commit/b056a0b395bb728512a3f3e83bf9a093059d4301) adapts kernel interruption to the new cancellation system.
* Cleanup: [#4112](https://github.com/leanprover/lean4/pull/4112), [#4126](https://github.com/leanprover/lean4/pull/4126), [#4091](https://github.com/leanprover/lean4/pull/4091), [#4139](https://github.com/leanprover/lean4/pull/4139), [#4153](https://github.com/leanprover/lean4/pull/4153).
* Tests: [030406](https://github.com/leanprover/lean4/commit/03040618b8f9b35b7b757858483e57340900cdc4), [#4133](https://github.com/leanprover/lean4/pull/4133).
@@ -249,6 +534,7 @@ v4.9.0
* [#3915](https://github.com/leanprover/lean4/pull/3915) documents the runtime memory layout for inductive types.
### Lake
* [#4518](https://github.com/leanprover/lean4/pull/4518) makes trace reading more robust. Lake now rebuilds if trace files are invalid or unreadable and is backwards compatible with previous pure numeric traces.
* [#4057](https://github.com/leanprover/lean4/pull/4057) adds support for docstrings on `require` commands.
* [#4088](https://github.com/leanprover/lean4/pull/4088) improves hovers for `family_def` and `library_data` commands.
* [#4147](https://github.com/leanprover/lean4/pull/4147) adds a default `README.md` to package templates.
@@ -298,12 +584,14 @@ v4.9.0
* [#4333](https://github.com/leanprover/lean4/pull/4333) adjusts workflow to update Batteries in manifest when creating `lean-pr-testing-NNNN` Mathlib branches.
* [#4355](https://github.com/leanprover/lean4/pull/4355) simplifies `lean4checker` step of release checklist.
* [#4361](https://github.com/leanprover/lean4/pull/4361) adds installing elan to `pr-release` CI step.
* [#4628](https://github.com/leanprover/lean4/pull/4628) fixes the Windows build, which was missing an exported symbol.
### Breaking changes
While many changes could be considered breaking, this section makes special note of API changes.
* `Nat.zero_or` and `Nat.or_zero` have been swapped ([#4094](https://github.com/leanprover/lean4/pull/4094)).
* `IsLawfulSingleton` is now `LawfulSingleton` ([#4350](https://github.com/leanprover/lean4/pull/4350)).
* The `BitVec` literal notation is now `<num>#<term>` rather than `<term>#<term>`, and it is global rather than scoped. Use `BitVec.ofNat w x` rather than `x#w` when `x` is not a numeric literal ([0d3051](https://github.com/leanprover/lean4/commit/0d30517dca094a07bcb462252f718e713b93ffba)).
* `BitVec.rotateLeft` and `BitVec.rotateRight` now take the shift modulo the bitwidth ([#4229](https://github.com/leanprover/lean4/pull/4229)).
* These are no longer simp lemmas:
`List.length_pos` ([#4172](https://github.com/leanprover/lean4/pull/4172)),

View File

@@ -5,8 +5,11 @@ See below for the checklist for release candidates.
We'll use `v4.6.0` as the intended release version as a running example.
- One week before the planned release, ensure that (1) someone has written the release notes and (2) someone has written the first draft of the release blog post.
If there is any material in `./releases_drafts/`, then the release notes are not done. (See the section "Writing the release notes".)
- One week before the planned release, ensure that
(1) someone has written the release notes and
(2) someone has written the first draft of the release blog post.
If there is any material in `./releases_drafts/` on the `releases/v4.6.0` branch, then the release notes are not done.
(See the section "Writing the release notes".)
- `git checkout releases/v4.6.0`
(This branch should already exist, from the release candidates.)
- `git pull`
@@ -144,16 +147,10 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.
- You can monitor this at `https://github.com/leanprover/lean4/actions/workflows/ci.yml`, looking for the `v4.7.0-rc1` tag.
- This step can take up to an hour.
- (GitHub release notes) Once the release appears at https://github.com/leanprover/lean4/releases/
- Edit the release notes on Github to select the "Set as a pre-release box".
- If release notes have been written already, copy the section of `RELEASES.md` for this version into the Github release notes
and use the title "Changes since v4.6.0 (from RELEASES.md)".
- Otherwise, in the "previous tag" dropdown, select `v4.6.0`, and click "Generate release notes".
- Verify that the release is marked as a prerelease (this should have been done automatically by the CI release job).
- In the "previous tag" dropdown, select `v4.6.0`, and click "Generate release notes".
This will add a list of all the commits since the last stable version.
- Delete anything already mentioned in the hand-written release notes above.
- Delete "update stage0" commits, and anything with a completely inscrutable commit message.
- Briefly rearrange the remaining items by category (e.g. `simp`, `lake`, `bug fixes`),
but for minor items don't put any work in expanding on commit messages.
- (How we want the release notes to look is evolving: please update this section if it looks wrong!)
- Next, we will move a curated list of downstream repos to the release candidate.
- This assumes that there is already a *reviewed* branch `bump/v4.7.0` on each repository
containing the required adaptations (or no adaptations are required).
@@ -187,6 +184,8 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.
Please also make sure that whoever is handling social media knows the release is out.
- Begin the next development cycle (i.e. for `v4.8.0`) on the Lean repository, by making a PR that:
- Updates `src/CMakeLists.txt` to say `set(LEAN_VERSION_MINOR 8)`
- Replaces the "release notes will be copied" text in the `v4.6.0` section of `RELEASES.md` with the
finalized release notes from the `releases/v4.6.0` branch.
- Replaces the "development in progress" in the `v4.7.0` section of `RELEASES.md` with
```
Release candidate, release notes will be copied from `branch releases/v4.7.0` once completed.
@@ -222,12 +221,15 @@ Please read https://leanprover-community.github.io/contribute/tags_and_branches.
* This can either be done by the person managing this process directly,
or by soliciting assistance from authors of files, or generally helpful people on Zulip!
* Each repo has a `bump/v4.7.0` which accumulates reviewed changes adapting to new versions.
* Once `nightly-testing` is working on a given nightly, say `nightly-2024-02-15`, we:
* Once `nightly-testing` is working on a given nightly, say `nightly-2024-02-15`, we will create a PR to `bump/v4.7.0`.
* For Mathlib, there is a script in `scripts/create-adaptation-pr.sh` that automates this process.
* For Batteries and Aesop it is currently manual.
* For all of these repositories, the process is the same:
* Make sure `bump/v4.7.0` is up to date with `master` (by merging `master`, no PR necessary)
* Create from `bump/v4.7.0` a `bump/nightly-2024-02-15` branch.
* In that branch, `git merge --squash nightly-testing` to bring across changes from `nightly-testing`.
* In that branch, `git merge nightly-testing` to bring across changes from `nightly-testing`.
* Sanity check changes, commit, and make a PR to `bump/v4.7.0` from the `bump/nightly-2024-02-15` branch.
* Solicit review, merge the PR into `bump/v4,7,0`.
* Solicit review, merge the PR into `bump/v4.7.0`.
* It is always okay to merge in the following directions:
`master` -> `bump/v4.7.0` -> `bump/nightly-2024-02-15` -> `nightly-testing`.
Please remember to push any merges you make to intermediate steps!
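The branch workflow above can be sketched end to end in a throwaway repository (the repository contents and file names below are illustrative assumptions; the branch names mirror the checklist):
```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "ci@example.com"
git config user.name "CI"
echo "-- base" > Adaptation.lean
git add Adaptation.lean
git commit -qm "initial commit"
git branch bump/v4.7.0
# nightly-testing accumulates fixes against the latest nightly toolchain
git checkout -qb nightly-testing
echo "-- adapted to nightly-2024-02-15" >> Adaptation.lean
git commit -qam "adapt to nightly-2024-02-15"
# Step 1: make sure bump/v4.7.0 is up to date with master
git checkout -q bump/v4.7.0
git merge -q master
# Step 2: create the dated branch from bump/v4.7.0
git checkout -qb bump/nightly-2024-02-15
# Step 3: merge nightly-testing to bring across the adaptations
git merge -q nightly-testing -m "merge nightly-testing"
# The dated branch now carries the adaptation, ready for a PR to bump/v4.7.0
git log --oneline -1
```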
@@ -239,7 +241,7 @@ The exact steps are a work in progress.
Here is the general idea:
* The work is done right on the `releases/v4.6.0` branch sometime after it is created but before the stable release is made.
The release notes for `v4.6.0` will be copied to `master`.
The release notes for `v4.6.0` will later be copied to `master` when we begin a new development cycle.
* There can be material for release notes entries in commit messages.
* There can also be pre-written entries in `./releases_drafts`, which should be all incorporated in the release notes and then deleted from the branch.
See `./releases_drafts/README.md` for more information.

doc/flake.lock generated
View File

@@ -18,12 +18,15 @@
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1656928814,
"narHash": "sha256-RIFfgBuKz6Hp89yRr7+NR5tzIAbn52h8vT6vXkYjZoM=",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "7e2a3b3dfd9af950a856d66b0a7d01e3c18aa249",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
@@ -35,13 +38,12 @@
"lean": {
"inputs": {
"flake-utils": "flake-utils",
"lean4-mode": "lean4-mode",
"nix": "nix",
"nixpkgs": "nixpkgs_2"
"nixpkgs": "nixpkgs",
"nixpkgs-old": "nixpkgs-old"
},
"locked": {
"lastModified": 0,
"narHash": "sha256-YnYbmG0oou1Q/GE4JbMNb8/yqUVXBPIvcdQQJHBqtPk=",
"narHash": "sha256-saRAtQ6VautVXKDw1XH35qwP0KEBKTKZbg/TRa4N9Vw=",
"path": "../.",
"type": "path"
},
@@ -50,22 +52,6 @@
"type": "path"
}
},
"lean4-mode": {
"flake": false,
"locked": {
"lastModified": 1659020985,
"narHash": "sha256-+dRaXB7uvN/weSZiKcfSKWhcdJVNg9Vg8k0pJkDNjpc=",
"owner": "leanprover",
"repo": "lean4-mode",
"rev": "37d5c99b7b29c80ab78321edd6773200deb0bca6",
"type": "github"
},
"original": {
"owner": "leanprover",
"repo": "lean4-mode",
"type": "github"
}
},
"leanInk": {
"flake": false,
"locked": {
@@ -83,22 +69,6 @@
"type": "github"
}
},
"lowdown-src": {
"flake": false,
"locked": {
"lastModified": 1633514407,
"narHash": "sha256-Dw32tiMjdK9t3ETl5fzGrutQTzh2rufgZV4A/BbxuD4=",
"owner": "kristapsdz",
"repo": "lowdown",
"rev": "d2c2b44ff6c27b936ec27358a2653caaef8f73b8",
"type": "github"
},
"original": {
"owner": "kristapsdz",
"repo": "lowdown",
"type": "github"
}
},
"mdBook": {
"flake": false,
"locked": {
@@ -115,65 +85,13 @@
"type": "github"
}
},
"nix": {
"inputs": {
"lowdown-src": "lowdown-src",
"nixpkgs": "nixpkgs",
"nixpkgs-regression": "nixpkgs-regression"
},
"locked": {
"lastModified": 1657097207,
"narHash": "sha256-SmeGmjWM3fEed3kQjqIAO8VpGmkC2sL1aPE7kKpK650=",
"owner": "NixOS",
"repo": "nix",
"rev": "f6316b49a0c37172bca87ede6ea8144d7d89832f",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1653988320,
"narHash": "sha256-ZaqFFsSDipZ6KVqriwM34T739+KLYJvNmCWzErjAg7c=",
"lastModified": 1710889954,
"narHash": "sha256-Pr6F5Pmd7JnNEMHHmspZ0qVqIBVxyZ13ik1pJtm2QXk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2fa57ed190fd6c7c746319444f34b5917666e5c1",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-22.05-small",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-regression": {
"locked": {
"lastModified": 1643052045,
"narHash": "sha256-uGJ0VXIhWKGXxkeNnq4TvV3CIOkUJ3PAoLZ3HMzNVMw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1657208011,
"narHash": "sha256-BlIFwopAykvdy1DYayEkj6ZZdkn+cVgPNX98QVLc0jM=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2770cc0b1e8faa0e20eb2c6aea64c256a706d4f2",
"rev": "7872526e9c5332274ea5932a0c3270d6e4724f3b",
"type": "github"
},
"original": {
@@ -183,6 +101,23 @@
"type": "github"
}
},
"nixpkgs-old": {
"flake": false,
"locked": {
"lastModified": 1581379743,
"narHash": "sha256-i1XCn9rKuLjvCdu2UeXKzGLF6IuQePQKFt4hEKRU5oc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "34c7eb7545d155cc5b6f499b23a7cb1c96ab4d59",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-19.03",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"alectryon": "alectryon",
@@ -194,6 +129,21 @@
"leanInk": "leanInk",
"mdBook": "mdBook"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

View File

@@ -17,7 +17,7 @@
};
outputs = inputs@{ self, ... }: inputs.flake-utils.lib.eachDefaultSystem (system:
with inputs.lean.packages.${system}; with nixpkgs;
with inputs.lean.packages.${system}.deprecated; with nixpkgs;
let
doc-src = lib.sourceByRegex ../. ["doc.*" "tests(/lean(/beginEndAsMacro.lean)?)?"];
in {
@@ -44,21 +44,6 @@
mdbook build -d $out
'';
};
# We use a separate derivation instead of `checkPhase` so we can push it but not `doc` to the binary cache
test = stdenv.mkDerivation {
name ="lean-doc-test";
src = doc-src;
buildInputs = [ lean-mdbook stage1.Lean.lean-package strace ];
patchPhase = ''
cd doc
patchShebangs test
'';
buildPhase = ''
mdbook test
touch $out
'';
dontInstall = true;
};
leanInk = (buildLeanPackage {
name = "Main";
src = inputs.leanInk;

flake.lock generated
View File

@@ -1,21 +1,5 @@
{
"nodes": {
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1673956053,
"narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
@@ -34,71 +18,18 @@
"type": "github"
}
},
"lean4-mode": {
"flake": false,
"locked": {
"lastModified": 1709737301,
"narHash": "sha256-uT9JN2kLNKJK9c/S/WxLjiHmwijq49EgLb+gJUSDpz0=",
"owner": "leanprover",
"repo": "lean4-mode",
"rev": "f1f24c15134dee3754b82c9d9924866fe6bc6b9f",
"type": "github"
},
"original": {
"owner": "leanprover",
"repo": "lean4-mode",
"type": "github"
}
},
"libgit2": {
"flake": false,
"locked": {
"lastModified": 1697646580,
"narHash": "sha256-oX4Z3S9WtJlwvj0uH9HlYcWv+x1hqp8mhXl7HsLu2f0=",
"owner": "libgit2",
"repo": "libgit2",
"rev": "45fd9ed7ae1a9b74b957ef4f337bc3c8b3df01b5",
"type": "github"
},
"original": {
"owner": "libgit2",
"repo": "libgit2",
"type": "github"
}
},
"nix": {
"inputs": {
"flake-compat": "flake-compat",
"libgit2": "libgit2",
"nixpkgs": "nixpkgs",
"nixpkgs-regression": "nixpkgs-regression"
},
"locked": {
"lastModified": 1711102798,
"narHash": "sha256-CXOIJr8byjolqG7eqCLa+Wfi7rah62VmLoqSXENaZnw=",
"owner": "NixOS",
"repo": "nix",
"rev": "a22328066416650471c3545b0b138669ea212ab4",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1709083642,
"narHash": "sha256-7kkJQd4rZ+vFrzWu8sTRtta5D1kBG0LSRYAfhtmMlSo=",
"lastModified": 1710889954,
"narHash": "sha256-Pr6F5Pmd7JnNEMHHmspZ0qVqIBVxyZ13ik1pJtm2QXk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b550fe4b4776908ac2a861124307045f8e717c8e",
"rev": "7872526e9c5332274ea5932a0c3270d6e4724f3b",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "release-23.11",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
@@ -120,44 +51,10 @@
"type": "github"
}
},
"nixpkgs-regression": {
"locked": {
"lastModified": 1643052045,
"narHash": "sha256-uGJ0VXIhWKGXxkeNnq4TvV3CIOkUJ3PAoLZ3HMzNVMw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1710889954,
"narHash": "sha256-Pr6F5Pmd7JnNEMHHmspZ0qVqIBVxyZ13ik1pJtm2QXk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7872526e9c5332274ea5932a0c3270d6e4724f3b",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"lean4-mode": "lean4-mode",
"nix": "nix",
"nixpkgs": "nixpkgs_2",
"nixpkgs": "nixpkgs",
"nixpkgs-old": "nixpkgs-old"
}
},

View File

@@ -1,38 +1,21 @@
{
description = "Lean interactive theorem prover";
description = "Lean development flake. Not intended for end users.";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
# old nixpkgs used for portable release with older glibc (2.27)
inputs.nixpkgs-old.url = "github:NixOS/nixpkgs/nixos-19.03";
inputs.nixpkgs-old.flake = false;
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nix.url = "github:NixOS/nix";
inputs.lean4-mode = {
url = "github:leanprover/lean4-mode";
flake = false;
};
# used *only* by `stage0-from-input` below
#inputs.lean-stage0 = {
# url = github:leanprover/lean4;
# inputs.nixpkgs.follows = "nixpkgs";
# inputs.flake-utils.follows = "flake-utils";
# inputs.nix.follows = "nix";
# inputs.lean4-mode.follows = "lean4-mode";
#};
outputs = { self, nixpkgs, nixpkgs-old, flake-utils, nix, lean4-mode, ... }@inputs: flake-utils.lib.eachDefaultSystem (system:
outputs = { self, nixpkgs, nixpkgs-old, flake-utils, ... }@inputs: flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs {
inherit system;
# for `vscode-with-extensions`
config.allowUnfree = true;
};
pkgs = import nixpkgs { inherit system; };
# An old nixpkgs for creating releases with an old glibc
pkgsDist-old = import nixpkgs-old { inherit system; };
# An old nixpkgs for creating releases with an old glibc
pkgsDist-old-aarch = import nixpkgs-old { localSystem.config = "aarch64-unknown-linux-gnu"; };
lean-packages = pkgs.callPackage (./nix/packages.nix) { src = ./.; inherit nix lean4-mode; };
lean-packages = pkgs.callPackage (./nix/packages.nix) { src = ./.; };
devShellWithDist = pkgsDist: pkgs.mkShell.override {
stdenv = pkgs.overrideCC pkgs.stdenv lean-packages.llvmPackages.clang;
@@ -58,41 +41,15 @@
GDB = pkgsDist.gdb;
});
in {
packages = lean-packages // rec {
debug = lean-packages.override { debug = true; };
stage0debug = lean-packages.override { stage0debug = true; };
asan = lean-packages.override { extraCMakeFlags = [ "-DLEAN_EXTRA_CXX_FLAGS=-fsanitize=address" "-DLEANC_EXTRA_FLAGS=-fsanitize=address" "-DSMALL_ALLOCATOR=OFF" "-DSYMBOLIC=OFF" ]; };
asandebug = asan.override { debug = true; };
tsan = lean-packages.override {
extraCMakeFlags = [ "-DLEAN_EXTRA_CXX_FLAGS=-fsanitize=thread" "-DLEANC_EXTRA_FLAGS=-fsanitize=thread" "-DCOMPRESSED_OBJECT_HEADER=OFF" ];
stage0 = (lean-packages.override {
# Compressed headers currently trigger data race reports in tsan.
# Turn them off for stage 0 as well so stage 1 can read its own stdlib.
extraCMakeFlags = [ "-DCOMPRESSED_OBJECT_HEADER=OFF" ];
}).stage1;
};
tsandebug = tsan.override { debug = true; };
stage0-from-input = lean-packages.override {
stage0 = pkgs.writeShellScriptBin "lean" ''
exec ${inputs.lean-stage0.packages.${system}.lean}/bin/lean -Dinterpreter.prefer_native=false "$@"
'';
};
inherit self;
packages = {
# to be removed when Nix CI is not needed anymore
inherit (lean-packages) cacheRoots test update-stage0-commit ciShell;
deprecated = lean-packages;
};
defaultPackage = lean-packages.lean-all;
# The default development shell for working on lean itself
devShells.default = devShellWithDist pkgs;
devShells.oldGlibc = devShellWithDist pkgsDist-old;
devShells.oldGlibcAArch = devShellWithDist pkgsDist-old-aarch;
checks.lean = lean-packages.test;
}) // rec {
templates.pkg = {
path = ./nix/templates/pkg;
description = "A custom Lean package";
};
defaultTemplate = templates.pkg;
};
});
}

View File

@@ -2,7 +2,7 @@
stdenv, lib, cmake, gmp, git, gnumake, bash, buildLeanPackage, writeShellScriptBin, runCommand, symlinkJoin, lndir, perl, gnused, darwin, llvmPackages, linkFarmFromDrvs,
... } @ args:
with builtins;
rec {
lib.warn "The Nix-based build is deprecated" rec {
inherit stdenv;
sourceByRegex = p: rs: lib.sourceByRegex p (map (r: "(/src/)?${r}") rs);
buildCMake = args: stdenv.mkDerivation ({

View File

@@ -1,5 +1,5 @@
{ lean, lean-leanDeps ? lean, lean-final ? lean, leanc,
stdenv, lib, coreutils, gnused, writeShellScriptBin, bash, lean-emacs, lean-vscode, nix, substituteAll, symlinkJoin, linkFarmFromDrvs,
stdenv, lib, coreutils, gnused, writeShellScriptBin, bash, substituteAll, symlinkJoin, linkFarmFromDrvs,
runCommand, darwin, mkShell, ... }:
let lean-final' = lean-final; in
lib.makeOverridable (
@@ -197,19 +197,6 @@ with builtins; let
then map (m: m.module) header.imports
else abort "errors while parsing imports of ${mod}:\n${lib.concatStringsSep "\n" header.errors}";
in mkMod mod (map (dep: if modDepsMap ? ${dep} then modCandidates.${dep} else externalModMap.${dep}) deps)) modDepsMap;
makeEmacsWrapper = name: emacs: lean: writeShellScriptBin name ''
${emacs} --eval "(progn (setq lean4-rootdir \"${lean}\"))" "$@"
'';
makeVSCodeWrapper = name: lean: writeShellScriptBin name ''
PATH=${lean}/bin:$PATH ${lean-vscode}/bin/code "$@"
'';
printPaths = deps: writeShellScriptBin "print-paths" ''
echo '${toJSON {
oleanPath = [(depRoot "print-paths" deps)];
srcPath = ["."] ++ map (dep: dep.src) allExternalDeps;
loadDynlibPaths = map pathOfSharedLib (loadDynlibsOfDeps deps);
}}'
'';
expandGlob = g:
if typeOf g == "string" then [g]
else if g.glob == "one" then [g.mod]
@@ -257,48 +244,4 @@ in rec {
-o $out/bin/${executableName} \
${lib.concatStringsSep " " allLinkFlags}
'') {};
lean-package = writeShellScriptBin "lean" ''
LEAN_PATH=${modRoot}:$LEAN_PATH LEAN_SRC_PATH=$LEAN_SRC_PATH:${src} exec ${lean-final}/bin/lean "$@"
'';
emacs-package = makeEmacsWrapper "emacs-package" lean-package;
vscode-package = makeVSCodeWrapper "vscode-package" lean-package;
link-ilean = writeShellScriptBin "link-ilean" ''
dest=''${1:-.}
mkdir -p $dest/build/lib
ln -sf ${iTree}/* $dest/build/lib
'';
makePrintPathsFor = deps: mods: printPaths deps // mapAttrs (_: mod: makePrintPathsFor (deps ++ [mod]) mods) mods;
print-paths = makePrintPathsFor [] (mods' // externalModMap);
# `lean` wrapper that dynamically runs Nix for the actual `lean` executable so the same editor can be
# used for multiple projects/after upgrading the `lean` input/for editing both stage 1 and the tests
lean-bin-dev = substituteAll {
name = "lean";
dir = "bin";
src = ./lean-dev.in;
isExecutable = true;
srcRoot = fullSrc; # use root flake.nix in case of Lean repo
inherit bash nix srcTarget srcArgs;
};
lake-dev = substituteAll {
name = "lake";
dir = "bin";
src = ./lake-dev.in;
isExecutable = true;
srcRoot = fullSrc; # use root flake.nix in case of Lean repo
inherit bash nix srcTarget srcArgs;
};
lean-dev = symlinkJoin { name = "lean-dev"; paths = [ lean-bin-dev lake-dev ]; };
emacs-dev = makeEmacsWrapper "emacs-dev" "${lean-emacs}/bin/emacs" lean-dev;
emacs-path-dev = makeEmacsWrapper "emacs-path-dev" "emacs" lean-dev;
vscode-dev = makeVSCodeWrapper "vscode-dev" lean-dev;
devShell = mkShell {
buildInputs = [ nix ];
shellHook = ''
export LEAN_SRC_PATH="${srcPath}"
'';
};
})

View File

@@ -1,9 +1,6 @@
{ src, pkgs, nix, ... } @ args:
{ src, pkgs, ... } @ args:
with pkgs;
let
nix-pinned = writeShellScriptBin "nix" ''
${nix.packages.${system}.default}/bin/nix --experimental-features 'nix-command flakes' --extra-substituters https://lean4.cachix.org/ --option warn-dirty false "$@"
'';
# https://github.com/NixOS/nixpkgs/issues/130963
llvmPackages = if stdenv.isDarwin then llvmPackages_11 else llvmPackages_15;
cc = (ccacheWrapper.override rec {
@@ -42,40 +39,9 @@ let
inherit (lean) stdenv;
lean = lean.stage1;
inherit (lean.stage1) leanc;
inherit lean-emacs lean-vscode;
nix = nix-pinned;
}));
lean4-mode = emacsPackages.melpaBuild {
pname = "lean4-mode";
version = "1";
commit = "1";
src = args.lean4-mode;
packageRequires = with pkgs.emacsPackages.melpaPackages; [ dash f flycheck magit-section lsp-mode s ];
recipe = pkgs.writeText "recipe" ''
(lean4-mode
:repo "leanprover/lean4-mode"
:fetcher github
:files ("*.el" "data"))
'';
};
lean-emacs = emacsWithPackages [ lean4-mode ];
# updating might be nicer by building from source from a flake input, but this is good enough for now
vscode-lean4 = vscode-utils.extensionFromVscodeMarketplace {
name = "lean4";
publisher = "leanprover";
version = "0.0.63";
sha256 = "sha256-kjEex7L0F2P4pMdXi4NIZ1y59ywJVubqDqsoYagZNkI=";
};
lean-vscode = vscode-with-extensions.override {
vscodeExtensions = [ vscode-lean4 ];
};
in {
inherit cc lean4-mode buildLeanPackage llvmPackages vscode-lean4;
lean = lean.stage1;
stage0print-paths = lean.stage1.Lean.print-paths;
HEAD-as-stage0 = (lean.stage1.Lean.overrideArgs { srcTarget = "..#stage0-from-input.stage0"; srcArgs = "(--override-input lean-stage0 ..\?rev=$(git rev-parse HEAD) -- -Dinterpreter.prefer_native=false \"$@\")"; });
HEAD-as-stage1 = (lean.stage1.Lean.overrideArgs { srcTarget = "..\?rev=$(git rev-parse HEAD)#stage0"; });
nix = nix-pinned;
inherit cc buildLeanPackage llvmPackages;
nixpkgs = pkgs;
ciShell = writeShellScriptBin "ciShell" ''
set -o pipefail
@@ -83,5 +49,4 @@ in {
# prefix lines with cumulative and individual execution time
"$@" |& ts -i "(%.S)]" | ts -s "[%M:%S"
'';
vscode = lean-vscode;
} // lean.stage1.Lean // lean.stage1 // lean
} // lean.stage1

View File

@@ -1,45 +0,0 @@
A new linter flags situations where a local variable's name is one of
the argumentless constructors of its type. This can arise when a user either
doesn't open a namespace or doesn't add a dot or leading qualifier, as
in the following:
````
inductive Tree (α : Type) where
| leaf
| branch (left : Tree α) (val : α) (right : Tree α)
def depth : Tree α → Nat
| leaf => 0
````
With this linter, the `leaf` pattern is highlighted as a local
variable whose name overlaps with the constructor `Tree.leaf`.
The linter can be disabled with `set_option linter.constructorNameAsVariable false`.
Additionally, the error message that occurs when a name in a pattern that takes arguments isn't valid now suggests similar names that would be valid. This means that the following definition:
```
def length (list : List α) : Nat :=
match list with
| nil => 0
| cons x xs => length xs + 1
```
now results in the following warning:
```
warning: Local variable 'nil' resembles constructor 'List.nil' - write '.nil' (with a dot) or 'List.nil' to use the constructor.
note: this linter can be disabled with `set_option linter.constructorNameAsVariable false`
```
and error:
```
invalid pattern, constructor or constant marked with '[match_pattern]' expected
Suggestion: 'List.cons' is similar
```
#4301

View File

@@ -1204,12 +1204,12 @@ def Prod.map {α₁ : Type u₁} {α₂ : Type u₂} {β₁ : Type v₁} {β₂
/-! # Dependent products -/
theorem PSigma.exists {α : Sort u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x)
theorem Exists.of_psigma_prop {α : Sort u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x)
| ⟨x, hx⟩ => ⟨x, hx⟩
@[deprecated PSigma.exists (since := "2024-07-27")]
@[deprecated Exists.of_psigma_prop (since := "2024-07-27")]
theorem ex_of_PSigma {α : Type u} {p : α → Prop} : (PSigma (fun x => p x)) → Exists (fun x => p x) :=
PSigma.exists
Exists.of_psigma_prop
protected theorem PSigma.eta {α : Sort u} {β : α Sort v} {a₁ a₂ : α} {b₁ : β a₁} {b₂ : β a₂}
(h₁ : a₁ = a₂) (h₂ : Eq.ndrec b₁ h₁ = b₂) : PSigma.mk a₁ b₁ = PSigma.mk a₂ b₂ := by

View File

@@ -20,6 +20,8 @@ We define many of the bitvector operations from the
of SMT-LIBv2.
-/
set_option linter.missingDocs true
/--
A bitvector of the specified width.
@@ -34,9 +36,9 @@ structure BitVec (w : Nat) where
O(1), because we use `Fin` as the internal representation of a bitvector. -/
toFin : Fin (2^w)
@[deprecated (since := "2024-04-12")]
protected abbrev Std.BitVec := _root_.BitVec
/--
Bitvectors have decidable equality. This should be used via the instance `DecidableEq (BitVec n)`.
-/
-- We manually derive the `DecidableEq` instances for `BitVec` because
-- we want to have builtin support for bit-vector literals, and we
-- need a name for this function to implement `canUnfoldAtMatcher` at `WHNF.lean`.
@@ -583,11 +585,9 @@ instance : HAppend (BitVec w) (BitVec v) (BitVec (w + v)) := ⟨.append⟩
-- TODO: write this using multiplication
/-- `replicate i x` concatenates `i` copies of `x` into a new vector of length `w*i`. -/
def replicate : (i : Nat) → BitVec w → BitVec (w*i)
| 0, _ => 0
| 0, _ => 0#0
| n+1, x =>
have hEq : w + w*n = w*(n + 1) := by
rw [Nat.mul_add, Nat.add_comm, Nat.mul_one]
hEq ▸ (x ++ replicate n x)
(x ++ replicate n x).cast (by rw [Nat.mul_succ]; omega)
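A quick sanity check of `replicate` at a fixed width (a sketch, not part of the diff): three copies of the two-bit pattern `10` give `101010`, and `decide` is one way to close such fixed-width goals:
```
example : (0b10#2).replicate 3 = 0b101010#6 := by decide
```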
/-!
### Cons and Concat

View File

@@ -28,6 +28,8 @@ https://github.com/mhk119/lean-smt/blob/bitvec/Smt/Data/Bitwise.lean.
-/
set_option linter.missingDocs true
open Nat Bool
namespace Bool
@@ -403,12 +405,8 @@ theorem shiftLeftRec_eq {x : BitVec w₁} {y : BitVec w₂} {n : Nat} :
induction n generalizing x y
case zero =>
ext i
simp only [shiftLeftRec_zero, twoPow_zero, Nat.reduceAdd, truncate_one]
suffices (y &&& 1#w₂) = zeroExtend w₂ (ofBool (y.getLsb 0)) by simp [this]
ext i
by_cases h : (i : Nat) = 0
· simp [h, Bool.and_comm]
· simp [h]; omega
simp only [shiftLeftRec_zero, twoPow_zero, Nat.reduceAdd, truncate_one,
and_one_eq_zeroExtend_ofBool_getLsb]
case succ n ih =>
simp only [shiftLeftRec_succ, and_twoPow]
rw [ih]
@@ -431,4 +429,67 @@ theorem shiftLeft_eq_shiftLeftRec (x : BitVec w₁) (y : BitVec w₂) :
· simp [of_length_zero]
· simp [shiftLeftRec_eq]
/- ### Logical shift right (ushiftRight) recurrence for bitblasting -/
/--
`ushiftRightRec x y n` shifts `x` logically to the right by the first `n` bits of `y`.
The theorem `shiftRight_eq_ushiftRightRec` proves the equivalence
of `(x >>> y)` and `ushiftRightRec`.
Together with equations `ushiftRightRec_zero`, `ushiftRightRec_succ`,
this allows us to unfold `ushiftRight` into a circuit for bitblasting.
-/
def ushiftRightRec (x : BitVec w₁) (y : BitVec w₂) (n : Nat) : BitVec w₁ :=
let shiftAmt := (y &&& (twoPow w₂ n))
match n with
| 0 => x >>> shiftAmt
| n + 1 => (ushiftRightRec x y n) >>> shiftAmt
@[simp]
theorem ushiftRightRec_zero (x : BitVec w₁) (y : BitVec w₂) :
ushiftRightRec x y 0 = x >>> (y &&& twoPow w₂ 0) := by
simp [ushiftRightRec]
@[simp]
theorem ushiftRightRec_succ (x : BitVec w₁) (y : BitVec w₂) :
ushiftRightRec x y (n + 1) = (ushiftRightRec x y n) >>> (y &&& twoPow w₂ (n + 1)) := by
simp [ushiftRightRec]
/--
If `y &&& z = 0`, `x >>> (y ||| z) = x >>> y >>> z`.
This follows as `y &&& z = 0` implies `y ||| z = y + z`,
and thus `x >>> (y ||| z) = x >>> (y + z) = x >>> y >>> z`.
-/
theorem ushiftRight'_or_of_and_eq_zero {x : BitVec w₁} {y z : BitVec w₂}
(h : y &&& z = 0#w₂) :
x >>> (y ||| z) = x >>> y >>> z := by
simp [← add_eq_or_of_and_eq_zero _ _ h, toNat_add_of_and_eq_zero h, shiftRight_add]
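For instance, `4#3 &&& 1#3 = 0#3`, so the lemma lets a shift by `4#3 ||| 1#3` be split into two shifts (a sketch):
```
example (x : BitVec 8) : x >>> (4#3 ||| 1#3) = x >>> 4#3 >>> 1#3 :=
  ushiftRight'_or_of_and_eq_zero (by decide)
```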
theorem ushiftRightRec_eq (x : BitVec w₁) (y : BitVec w₂) (n : Nat) :
ushiftRightRec x y n = x >>> (y.truncate (n + 1)).zeroExtend w₂ := by
induction n generalizing x y
case zero =>
ext i
simp only [ushiftRightRec_zero, twoPow_zero, Nat.reduceAdd,
and_one_eq_zeroExtend_ofBool_getLsb, truncate_one]
case succ n ih =>
simp only [ushiftRightRec_succ, and_twoPow]
rw [ih]
by_cases h : y.getLsb (n + 1) <;> simp only [h, reduceIte]
· rw [zeroExtend_truncate_succ_eq_zeroExtend_truncate_or_twoPow_of_getLsb_true h,
ushiftRight'_or_of_and_eq_zero]
simp
· simp [zeroExtend_truncate_succ_eq_zeroExtend_truncate_of_getLsb_false, h]
/--
Show that `x >>> y` can be written in terms of `ushiftRightRec`.
This can be unfolded in terms of `ushiftRightRec_zero`, `ushiftRightRec_succ` for bitblasting.
-/
theorem shiftRight_eq_ushiftRightRec (x : BitVec w₁) (y : BitVec w₂) :
x >>> y = ushiftRightRec x y (w₂ - 1) := by
rcases w₂ with rfl | w₂
· simp [of_length_zero]
· simp [ushiftRightRec_eq]
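Instantiated at width `w₂ = 3`, the recurrence unfolds to depth `w₂ - 1 = 2` (a sketch):
```
example (x : BitVec 8) (y : BitVec 3) : x >>> y = ushiftRightRec x y 2 :=
  shiftRight_eq_ushiftRightRec x y
```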
end BitVec

View File

@@ -8,6 +8,8 @@ import Init.Data.BitVec.Lemmas
import Init.Data.Nat.Lemmas
import Init.Data.Fin.Iterate
set_option linter.missingDocs true
namespace BitVec
/--

View File

@@ -12,6 +12,8 @@ import Init.Data.Nat.Lemmas
import Init.Data.Nat.Mod
import Init.Data.Int.Bitwise.Lemmas
set_option linter.missingDocs true
namespace BitVec
/--
@@ -731,6 +733,16 @@ theorem getLsb_shiftLeft' {x : BitVec w₁} {y : BitVec w₂} {i : Nat} :
getLsb (x >>> i) j = getLsb x (i+j) := by
unfold getLsb ; simp
@[simp]
theorem ushiftRight_zero_eq (x : BitVec w) : x >>> 0 = x := by
simp [bv_toNat]
/-! ### ushiftRight reductions from BitVec to Nat -/
@[simp]
theorem ushiftRight_eq' (x : BitVec w₁) (y : BitVec w₂) :
x >>> y = x >>> y.toNat := by rfl
/-! ### sshiftRight -/
theorem sshiftRight_eq {x : BitVec n} {i : Nat} :
@@ -1549,4 +1561,54 @@ theorem zeroExtend_truncate_succ_eq_zeroExtend_truncate_or_twoPow_of_getLsb_true
simp [hx]
· by_cases hik' : k < i + 1 <;> simp [hik, hik'] <;> omega
/-- Bitwise and of `(x : BitVec w)` with `1#w` equals zero extending `x.lsb` to `w`. -/
theorem and_one_eq_zeroExtend_ofBool_getLsb {x : BitVec w} :
(x &&& 1#w) = zeroExtend w (ofBool (x.getLsb 0)) := by
ext i
simp only [getLsb_and, getLsb_one, getLsb_zeroExtend, Fin.is_lt, decide_True, getLsb_ofBool,
Bool.true_and]
by_cases h : (0 = (i : Nat)) <;> simp [h] <;> omega
@[simp]
theorem replicate_zero_eq {x : BitVec w} : x.replicate 0 = 0#0 := by
simp [replicate]
@[simp]
theorem replicate_succ_eq {x : BitVec w} :
x.replicate (n + 1) =
(x ++ replicate n x).cast (by rw [Nat.mul_succ]; omega) := by
simp [replicate]
/--
If `w * n ≤ i < w * (n + 1)`, then `i - w * n` equals `i % w`.
This is true by subtracting `w * n` from the inequality, giving
`0 ≤ i - w * n < w`, which uniquely identifies `i % w`.
-/
private theorem Nat.sub_mul_eq_mod_of_lt_of_le (hlo : w * n ≤ i) (hhi : i < w * (n + 1)) :
i - w * n = i % w := by
rw [Nat.mod_def]
congr
symm
apply Nat.div_eq_of_lt_le
(by rw [Nat.mul_comm]; omega)
(by rw [Nat.mul_comm]; omega)
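A concrete instance (a sketch): with `w = 4` and `n = 2` we have `8 ≤ 10 < 12`, and indeed:
```
example : 10 - 4 * 2 = 10 % 4 := by decide
```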
@[simp]
theorem getLsb_replicate {n w : Nat} (x : BitVec w) :
(x.replicate n).getLsb i =
(decide (i < w * n) && x.getLsb (i % w)) := by
induction n generalizing x
case zero => simp
case succ n ih =>
simp only [replicate_succ_eq, getLsb_cast, getLsb_append]
by_cases hi : i < w * (n + 1)
· simp only [hi, decide_True, Bool.true_and]
by_cases hi' : i < w * n
· simp [hi', ih]
· simp only [hi', decide_False, cond_false]
rw [Nat.sub_mul_eq_mod_of_lt_of_le] <;> omega
· rw [Nat.mul_succ] at hi
simp only [show ¬i < w * n by omega, decide_False, cond_false, hi, Bool.false_and]
apply BitVec.getLsb_ge (x := x) (i := i - w * n) (ge := by omega)
end BitVec

View File

@@ -191,6 +191,121 @@ def foldlM {β : Type v} {m : Type v → Type w} [Monad m] (f : β → UInt8 →
def foldl {β : Type v} (f : β → UInt8 → β) (init : β) (as : ByteArray) (start := 0) (stop := as.size) : β :=
Id.run <| as.foldlM f init start stop
/-- Iterator over the bytes (`UInt8`) of a `ByteArray`.
Typically created by `arr.iter`, where `arr` is a `ByteArray`.
An iterator is *valid* if the position `i` is *valid* for the array `arr`, meaning `0 ≤ i ≤ arr.size`.
Most operations on iterators return arbitrary values if the iterator is not valid. The functions in
the `ByteArray.Iterator` API should rule out the creation of invalid iterators, with two exceptions:
- `Iterator.next iter` is invalid if `iter` is already at the end of the array (`iter.atEnd` is
`true`)
- `Iterator.forward iter n`/`Iterator.nextn iter n` is invalid if `n` is strictly greater than the
number of remaining bytes.
-/
structure Iterator where
/-- The array the iterator is for. -/
array : ByteArray
/-- The current position.
This position is not necessarily valid for the array, for instance if one keeps calling
`Iterator.next` when `Iterator.atEnd` is true. If the position is not valid, then the
current byte is `(default : UInt8)`. -/
idx : Nat
deriving Inhabited
/-- Creates an iterator at the beginning of an array. -/
def mkIterator (arr : ByteArray) : Iterator :=
⟨arr, 0⟩
@[inherit_doc mkIterator]
abbrev iter := mkIterator
/-- The size of an array iterator is the number of bytes remaining. -/
instance : SizeOf Iterator where
sizeOf i := i.array.size - i.idx
theorem Iterator.sizeOf_eq (i : Iterator) : sizeOf i = i.array.size - i.idx :=
rfl
namespace Iterator
/-- Number of bytes remaining in the iterator. -/
def remainingBytes : Iterator → Nat
| ⟨arr, i⟩ => arr.size - i
@[inherit_doc Iterator.idx]
def pos := Iterator.idx
/-- The byte at the current position.
On an invalid position, returns `(default : UInt8)`. -/
@[inline]
def curr : Iterator → UInt8
| ⟨arr, i⟩ =>
if h : i < arr.size then
arr[i]'h
else
default
/-- Moves the iterator's position forward by one byte, unconditionally.
It is only valid to call this function if the iterator is not at the end of the array, *i.e.*
`Iterator.atEnd` is `false`; otherwise, the resulting iterator will be invalid. -/
@[inline]
def next : Iterator → Iterator
| ⟨arr, i⟩ => ⟨arr, i + 1⟩
/-- Decreases the iterator's position.
If the position is zero, this function is the identity. -/
@[inline]
def prev : Iterator → Iterator
| ⟨arr, i⟩ => ⟨arr, i - 1⟩
/-- True if the iterator is past the array's last byte. -/
@[inline]
def atEnd : Iterator → Bool
| ⟨arr, i⟩ => i ≥ arr.size
/-- True if the iterator is not past the array's last byte. -/
@[inline]
def hasNext : Iterator → Bool
| ⟨arr, i⟩ => i < arr.size
/-- True if the position is not zero. -/
@[inline]
def hasPrev : Iterator → Bool
| ⟨_, i⟩ => i > 0
/-- Moves the iterator's position to the end of the array.
Note that `i.toEnd.atEnd` is always `true`. -/
@[inline]
def toEnd : Iterator → Iterator
| ⟨arr, _⟩ => ⟨arr, arr.size⟩
/-- Moves the iterator's position several bytes forward.
The resulting iterator is only valid if the number of bytes to skip is less than or equal to
the number of bytes left in the iterator. -/
@[inline]
def forward : Iterator → Nat → Iterator
| ⟨arr, i⟩, f => ⟨arr, i + f⟩
@[inherit_doc forward, inline]
def nextn : Iterator → Nat → Iterator := forward
/-- Moves the iterator's position several bytes back.
If asked to go back more bytes than available, stops at the beginning of the array. -/
@[inline]
def prevn : Iterator → Nat → Iterator
| ⟨arr, i⟩, f => ⟨arr, i - f⟩
end Iterator
end ByteArray
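As a usage sketch (not part of the diff), the API above suffices to fold over an array's bytes imperatively; `sumBytes` is a hypothetical name:
```
def sumBytes (arr : ByteArray) : Nat := Id.run do
  let mut it := arr.iter
  let mut acc := 0
  while !it.atEnd do
    acc := acc + it.curr.toNat
    it := it.next
  return acc
```
The same result is already available via `foldl`; this is purely illustrative of the iterator operations.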
def List.toByteArray (bs : List UInt8) : ByteArray :=

View File

@@ -322,8 +322,8 @@ protected def pow (m : Int) : Nat → Int
| 0 => 1
| succ n => Int.pow m n * m
instance : HPow Int Nat Int where
hPow := Int.pow
instance : NatPow Int where
pow := Int.pow
instance : LawfulBEq Int where
eq_of_beq h := by simp [BEq.beq] at h; assumption

View File

@@ -84,7 +84,7 @@ theorem cons_subset_cons {l₁ l₂ : List α} (a : α) (s : l₁ ⊆ l₂) : a
@[simp] theorem subset_nil {l : List α} : l ⊆ [] ↔ l = [] :=
⟨fun h => match l with | [] => rfl | _::_ => (nomatch h (.head ..)), fun | rfl => Subset.refl _⟩
theorem map_subset {l₁ l₂ : List α} {f : α → β} (h : l₁ ⊆ l₂) : map f l₁ ⊆ map f l₂ :=
theorem map_subset {l₁ l₂ : List α} (f : α → β) (h : l₁ ⊆ l₂) : map f l₁ ⊆ map f l₂ :=
fun x => by simp only [mem_map]; exact .imp fun a => .imp_left (@h _)
theorem filter_subset {l₁ l₂ : List α} (p : α → Bool) (H : l₁ ⊆ l₂) : filter p l₁ ⊆ filter p l₂ :=

View File

@@ -411,11 +411,11 @@ theorem dropWhile_replicate (p : α → Bool) :
split <;> simp_all
theorem take_takeWhile {l : List α} (p : α → Bool) n :
(l.takeWhile p).take n = (l.takeWhile p).take n := by
induction l with
| nil => rfl
(l.takeWhile p).take n = (l.take n).takeWhile p := by
induction l generalizing n with
| nil => simp
| cons x xs ih =>
by_cases h : p x <;> simp [takeWhile_cons, h, ih]
by_cases h : p x <;> cases n <;> simp [takeWhile_cons, h, ih, take_succ_cons]
@[simp] theorem all_takeWhile {l : List α} : (l.takeWhile p).all p = true := by
induction l with

View File

@@ -203,6 +203,10 @@ theorem mod_add_div (m k : Nat) : m % k + k * (m / k) = m := by
| base x y h => simp [h]
| ind x y h IH => simp [h]; rw [Nat.mul_succ, Nat.add_assoc, IH, Nat.sub_add_cancel h.2]
theorem mod_def (m k : Nat) : m % k = m - k * (m / k) := by
rw [Nat.sub_eq_of_eq_add]
apply (Nat.mod_add_div _ _).symm
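A concrete instance of `mod_def` (a sketch): for `m = 17`, `k = 5`, both sides evaluate to `2`:
```
example : (17 : Nat) % 5 = 17 - 5 * (17 / 5) := by decide
```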
@[simp] protected theorem div_one (n : Nat) : n / 1 = n := by
have := mod_add_div n 1
rwa [mod_one, Nat.zero_add, Nat.one_mul] at this

View File

@@ -109,4 +109,4 @@ A more restrictive but efficient max sharing primitive.
Remark: it optimizes the number of RC operations, and the strategy for caching results.
-/
@[extern "lean_sharecommon_quick"]
def ShareCommon.shareCommon' (a : α) : α := a
def ShareCommon.shareCommon' (a : @& α) : α := a

View File

@@ -435,6 +435,12 @@ Note that EOF does not actually close a handle, so further reads may block and r
end Handle
/--
Resolves a pathname to an absolute pathname with no '.', '..', or symbolic links.
This function coincides with the [POSIX `realpath` function](https://pubs.opengroup.org/onlinepubs/9699919799/functions/realpath.html),
see there for more information.
-/
@[extern "lean_io_realpath"] opaque realPath (fname : FilePath) : IO FilePath
@[extern "lean_io_remove_file"] opaque removeFile (fname : @& FilePath) : IO Unit
/-- Remove given directory. Fails if not empty; see also `IO.FS.removeDirAll`. -/

View File

@@ -94,7 +94,10 @@ def emitCInitName (n : Name) : M Unit :=
def shouldExport (n : Name) : Bool :=
-- HACK: exclude symbols very unlikely to be used by the interpreter or other consumers of
-- libleanshared to avoid Windows symbol limit
!(`Lean.Compiler.LCNF).isPrefixOf n && !(`Lean.IR).isPrefixOf n && !(`Lean.Server).isPrefixOf n
!(`Lean.Compiler.LCNF).isPrefixOf n &&
!(`Lean.IR).isPrefixOf n &&
-- Lean.Server.findModuleRefs is used in Verso
(!(`Lean.Server).isPrefixOf n || n == `Lean.Server.findModuleRefs)
def emitFnDeclAux (decl : Decl) (cppBaseName : String) (isExternal : Bool) : M Unit := do
let ps := decl.params

View File

@@ -31,7 +31,16 @@ register_builtin_option maxHeartbeats : Nat := {
descr := "maximum amount of heartbeats per command. A heartbeat is number of (small) memory allocations (in thousands), 0 means no limit"
}
def useDiagnosticMsg := s!"use `set_option {diagnostics.name} true` to get diagnostic information"
/--
If the `diagnostics` option is not already set, gives a message explaining this option.
Begins with a `\n`, so an error message can look like `m!"some error occurred{useDiagnosticMsg}"`.
-/
def useDiagnosticMsg : MessageData :=
MessageData.lazy fun ctx =>
if diagnostics.get ctx.opts then
pure ""
else
pure s!"\nAdditional diagnostic information may be available using the `set_option {diagnostics.name} true` command."
namespace Core
@@ -300,8 +309,10 @@ register_builtin_option debug.moduleNameAtTimeout : Bool := {
def throwMaxHeartbeat (moduleName : Name) (optionName : Name) (max : Nat) : CoreM Unit := do
let includeModuleName := debug.moduleNameAtTimeout.get (← getOptions)
let atModuleName := if includeModuleName then s!" at `{moduleName}`" else ""
let msg := s!"(deterministic) timeout{atModuleName}, maximum number of heartbeats ({max/1000}) has been reached\nuse `set_option {optionName} <num>` to set the limit\n{useDiagnosticMsg}"
throw <| Exception.error (← getRef) (MessageData.ofFormat (Std.Format.text msg))
throw <| Exception.error (← getRef) m!"\
(deterministic) timeout{atModuleName}, maximum number of heartbeats ({max/1000}) has been reached\n\
Use `set_option {optionName} <num>` to set the limit.\
{useDiagnosticMsg}"
def checkMaxHeartbeatsCore (moduleName : String) (optionName : Name) (max : Nat) : CoreM Unit := do
unless max == 0 do

View File

@@ -12,10 +12,11 @@ import Lean.Data.RBMap
namespace Lean.Json.Parser
open Lean.Parsec
open Lean.Parsec.String
@[inline]
def hexChar : Parsec Nat := do
let c ← anyChar
def hexChar : Parser Nat := do
let c ← any
if '0' ≤ c ∧ c ≤ '9' then
pure $ c.val.toNat - '0'.val.toNat
else if 'a' ≤ c ∧ c ≤ 'f' then
@@ -25,8 +26,8 @@ def hexChar : Parsec Nat := do
else
fail "invalid hex character"
def escapedChar : Parsec Char := do
let c ← anyChar
def escapedChar : Parser Char := do
let c ← any
match c with
| '\\' => return '\\'
| '"' => return '"'
@@ -41,13 +42,13 @@ def escapedChar : Parsec Char := do
return Char.ofNat $ 4096*u1 + 256*u2 + 16*u3 + u4
| _ => fail "illegal \\u escape"
partial def strCore (acc : String) : Parsec String := do
partial def strCore (acc : String) : Parser String := do
let c ← peek!
if c = '"' then -- "
skip
return acc
else
let c ← anyChar
let c ← any
if c = '\\' then
strCore (acc.push (← escapedChar))
-- as to whether c.val > 0xffff should be split up and encoded with multiple \u,
@@ -58,9 +59,9 @@ partial def strCore (acc : String) : Parsec String := do
else
fail "unexpected character in string"
def str : Parsec String := strCore ""
def str : Parser String := strCore ""
partial def natCore (acc digits : Nat) : Parsec (Nat × Nat) := do
partial def natCore (acc digits : Nat) : Parser (Nat × Nat) := do
let some c ← peek? | return (acc, digits)
if '0' ≤ c ∧ c ≤ '9' then
skip
@@ -70,7 +71,7 @@ partial def natCore (acc digits : Nat) : Parsec (Nat × Nat) := do
return (acc, digits)
@[inline]
def lookahead (p : Char → Prop) (desc : String) [DecidablePred p] : Parsec Unit := do
def lookahead (p : Char → Prop) (desc : String) [DecidablePred p] : Parser Unit := do
let c ← peek!
if p c then
return ()
@@ -78,22 +79,22 @@ def lookahead (p : Char → Prop) (desc : String) [DecidablePred p] : Parsec Uni
fail <| "expected " ++ desc
@[inline]
def natNonZero : Parsec Nat := do
def natNonZero : Parser Nat := do
lookahead (fun c => '1' ≤ c ∧ c ≤ '9') "1-9"
let (n, _) ← natCore 0 0
return n
@[inline]
def natNumDigits : Parsec (Nat × Nat) := do
def natNumDigits : Parser (Nat × Nat) := do
lookahead (fun c => '0' ≤ c ∧ c ≤ '9') "digit"
natCore 0 0
@[inline]
def natMaybeZero : Parsec Nat := do
def natMaybeZero : Parser Nat := do
let (n, _) ← natNumDigits
return n
def num : Parsec JsonNumber := do
def num : Parser JsonNumber := do
let c ← peek!
let sign ← if c = '-' then
skip
@@ -132,10 +133,10 @@ def num : Parsec JsonNumber := do
else
return res
partial def arrayCore (anyCore : Parsec Json) (acc : Array Json) : Parsec (Array Json) := do
partial def arrayCore (anyCore : Parser Json) (acc : Array Json) : Parser (Array Json) := do
let hd ← anyCore
let acc' := acc.push hd
let c ← anyChar
let c ← any
if c = ']' then
ws
return acc'
@@ -145,12 +146,12 @@ partial def arrayCore (anyCore : Parsec Json) (acc : Array Json) : Parsec (Array
else
fail "unexpected character in array"
partial def objectCore (anyCore : Parsec Json) : Parsec (RBNode String (fun _ => Json)) := do
partial def objectCore (anyCore : Parser Json) : Parser (RBNode String (fun _ => Json)) := do
lookahead (fun c => c = '"') "\""; skip; -- "
let k ← strCore ""; ws
lookahead (fun c => c = ':') ":"; skip; ws
let v ← anyCore
let c ← anyChar
let c ← any
if c = '}' then
ws
return RBNode.singleton k v
@@ -161,7 +162,7 @@ partial def objectCore (anyCore : Parsec Json) : Parsec (RBNode String (fun _ =>
else
fail "unexpected character in object"
partial def anyCore : Parsec Json := do
partial def anyCore : Parser Json := do
let c ← peek!
if c = '[' then
skip; ws
@@ -203,7 +204,7 @@ partial def anyCore : Parsec Json := do
fail "unexpected input"
def any : Parsec Json := do
def any : Parser Json := do
ws
let res ← anyCore
eof

View File

@@ -4,181 +4,6 @@ Released under Apache 2.0 license as described in the file LICENSE.
Author: Dany Fabian
-/
prelude
import Init.NotationExtra
import Init.Data.ToString.Macro
namespace Lean
namespace Parsec
inductive ParseResult (α : Type) where
| success (pos : String.Iterator) (res : α)
| error (pos : String.Iterator) (err : String)
deriving Repr
end Parsec
def Parsec (α : Type) : Type := String.Iterator → Lean.Parsec.ParseResult α
namespace Parsec
open ParseResult
instance (α : Type) : Inhabited (Parsec α) :=
λ it => error it ""
@[inline]
protected def pure (a : α) : Parsec α := λ it =>
success it a
@[inline]
def bind {α β : Type} (f : Parsec α) (g : α → Parsec β) : Parsec β := λ it =>
match f it with
| success rem a => g a rem
| error pos msg => error pos msg
instance : Monad Parsec :=
{ pure := Parsec.pure, bind }
@[inline]
def fail (msg : String) : Parsec α := fun it =>
error it msg
@[inline]
def tryCatch (p : Parsec α)
(csuccess : α → Parsec β)
(cerror : Unit → Parsec β)
: Parsec β := fun it =>
match p it with
| .success rem a => csuccess a rem
| .error rem err =>
-- We assume that it.s never changes as the `Parsec` monad only modifies `it.pos`.
if it.pos = rem.pos then cerror () rem else .error rem err
@[inline]
def orElse (p : Parsec α) (q : Unit → Parsec α) : Parsec α :=
tryCatch p pure q
@[inline]
def attempt (p : Parsec α) : Parsec α := λ it =>
match p it with
| success rem res => success rem res
| error _ err => error it err
instance : Alternative Parsec :=
{ failure := fail "", orElse }
protected def run (p : Parsec α) (s : String) : Except String α :=
match p s.mkIterator with
| Parsec.ParseResult.success _ res => Except.ok res
| Parsec.ParseResult.error it err => Except.error s!"offset {repr it.i.byteIdx}: {err}"
def expectedEndOfInput := "expected end of input"
@[inline]
def eof : Parsec Unit := fun it =>
if it.hasNext then
error it expectedEndOfInput
else
success it ()
@[specialize]
partial def manyCore (p : Parsec α) (acc : Array α) : Parsec $ Array α :=
tryCatch p (manyCore p <| acc.push ·) (fun _ => pure acc)
@[inline]
def many (p : Parsec α) : Parsec $ Array α := manyCore p #[]
@[inline]
def many1 (p : Parsec α) : Parsec $ Array α := do manyCore p #[← p]
@[specialize]
partial def manyCharsCore (p : Parsec Char) (acc : String) : Parsec String :=
tryCatch p (manyCharsCore p <| acc.push ·) (fun _ => pure acc)
@[inline]
def manyChars (p : Parsec Char) : Parsec String := manyCharsCore p ""
@[inline]
def many1Chars (p : Parsec Char) : Parsec String := do manyCharsCore p (← p).toString
/-- Parses the given string. -/
def pstring (s : String) : Parsec String := λ it =>
let substr := it.extract (it.forward s.length)
if substr = s then
success (it.forward s.length) substr
else
error it s!"expected: {s}"
@[inline]
def skipString (s : String) : Parsec Unit := pstring s *> pure ()
def unexpectedEndOfInput := "unexpected end of input"
@[inline]
def anyChar : Parsec Char := λ it =>
if it.hasNext then success it.next it.curr else error it unexpectedEndOfInput
@[inline]
def pchar (c : Char) : Parsec Char := attempt do
if (← anyChar) = c then pure c else fail s!"expected: '{c}'"
@[inline]
def skipChar (c : Char) : Parsec Unit := pchar c *> pure ()
@[inline]
def digit : Parsec Char := attempt do
let c ← anyChar
if '0' ≤ c ∧ c ≤ '9' then return c else fail s!"digit expected"
@[inline]
def hexDigit : Parsec Char := attempt do
let c ← anyChar
if ('0' ≤ c ∧ c ≤ '9')
∨ ('a' ≤ c ∧ c ≤ 'f')
∨ ('A' ≤ c ∧ c ≤ 'F') then return c else fail s!"hex digit expected"
@[inline]
def asciiLetter : Parsec Char := attempt do
let c ← anyChar
if ('A' ≤ c ∧ c ≤ 'Z') ∨ ('a' ≤ c ∧ c ≤ 'z') then return c else fail s!"ASCII letter expected"
@[inline]
def satisfy (p : Char → Bool) : Parsec Char := attempt do
let c ← anyChar
if p c then return c else fail "condition not satisfied"
@[inline]
def notFollowedBy (p : Parsec α) : Parsec Unit := λ it =>
match p it with
| success _ _ => error it ""
| error _ _ => success it ()
partial def skipWs (it : String.Iterator) : String.Iterator :=
if it.hasNext then
let c := it.curr
if c = '\u0009' ∨ c = '\u000a' ∨ c = '\u000d' ∨ c = '\u0020' then
skipWs it.next
else
it
else
it
@[inline]
def peek? : Parsec (Option Char) := fun it =>
if it.hasNext then
success it it.curr
else
success it none
@[inline]
def peek! : Parsec Char := do
let some c ← peek? | fail unexpectedEndOfInput
return c
@[inline]
def skip : Parsec Unit := fun it =>
success it.next ()
@[inline]
def ws : Parsec Unit := fun it =>
success (skipWs it) ()
end Parsec
import Lean.Data.Parsec.Basic
import Lean.Data.Parsec.String
import Lean.Data.Parsec.ByteArray

View File

@@ -0,0 +1,144 @@
/-
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Dany Fabian, Henrik Böving
-/
prelude
import Init.NotationExtra
import Init.Data.ToString.Macro
namespace Lean
namespace Parsec
inductive ParseResult (α : Type) (ι : Type) where
| success (pos : ι) (res : α)
| error (pos : ι) (err : String)
deriving Repr
end Parsec
def Parsec (ι : Type) (α : Type) : Type := ι → Lean.Parsec.ParseResult α ι
namespace Parsec
class Input (ι : Type) (elem : outParam Type) (idx : outParam Type) [DecidableEq idx] [DecidableEq elem] where
pos : ι → idx
next : ι → ι
curr : ι → elem
hasNext : ι → Bool
variable {α : Type} {ι : Type} {elem : Type} {idx : Type}
variable [DecidableEq idx] [DecidableEq elem] [Input ι elem idx]
instance : Inhabited (Parsec ι α) where
default := fun it => .error it ""
@[inline]
protected def pure (a : α) : Parsec ι α := fun it =>
.success it a
@[inline]
def bind {α β : Type} (f : Parsec ι α) (g : α → Parsec ι β) : Parsec ι β := fun it =>
match f it with
| .success rem a => g a rem
| .error pos msg => .error pos msg
instance : Monad (Parsec ι) where
pure := Parsec.pure
bind := Parsec.bind
@[inline]
def fail (msg : String) : Parsec ι α := fun it =>
.error it msg
@[inline]
def tryCatch (p : Parsec ι α) (csuccess : α → Parsec ι β) (cerror : Unit → Parsec ι β)
: Parsec ι β := fun it =>
match p it with
| .success rem a => csuccess a rem
| .error rem err =>
-- We assume that it.s never changes as the `Parsec` monad only modifies `it.pos`.
if Input.pos it = Input.pos rem then cerror () rem else .error rem err
@[inline]
def orElse (p : Parsec ι α) (q : Unit → Parsec ι α) : Parsec ι α :=
tryCatch p pure q
@[inline]
def attempt (p : Parsec ι α) : Parsec ι α := fun it =>
match p it with
| .success rem res => .success rem res
| .error _ err => .error it err
instance : Alternative (Parsec ι) where
failure := fail ""
orElse := orElse
def expectedEndOfInput := "expected end of input"
@[inline]
def eof : Parsec ι Unit := fun it =>
if Input.hasNext it then
.error it expectedEndOfInput
else
.success it ()
@[specialize]
partial def manyCore (p : Parsec ι α) (acc : Array α) : Parsec ι <| Array α :=
tryCatch p (manyCore p <| acc.push ·) (fun _ => pure acc)
@[inline]
def many (p : Parsec ι α) : Parsec ι <| Array α := manyCore p #[]
@[inline]
def many1 (p : Parsec ι α) : Parsec ι <| Array α := do manyCore p #[← p]
def unexpectedEndOfInput := "unexpected end of input"
@[inline]
def any : Parsec ι elem := fun it =>
if Input.hasNext it then
.success (Input.next it) (Input.curr it)
else
.error it unexpectedEndOfInput
@[inline]
def satisfy (p : elem → Bool) : Parsec ι elem := attempt do
let c ← any
if p c then return c else fail "condition not satisfied"
@[inline]
def notFollowedBy (p : Parsec ι α) : Parsec ι Unit := fun it =>
match p it with
| .success _ _ => .error it ""
| .error _ _ => .success it ()
@[inline]
def peek? : Parsec ι (Option elem) := fun it =>
if Input.hasNext it then
.success it (Input.curr it)
else
.success it none
@[inline]
def peek! : Parsec ι elem := do
let some c ← peek? | fail unexpectedEndOfInput
return c
@[inline]
def skip : Parsec ι Unit := fun it =>
.success (Input.next it) ()
@[specialize]
partial def manyCharsCore (p : Parsec ι Char) (acc : String) : Parsec ι String :=
tryCatch p (manyCharsCore p <| acc.push ·) (fun _ => pure acc)
@[inline]
def manyChars (p : Parsec ι Char) : Parsec ι String := manyCharsCore p ""
@[inline]
def many1Chars (p : Parsec ι Char) : Parsec ι String := do manyCharsCore p (← p).toString
end Parsec

View File

@@ -0,0 +1,103 @@
/-
Copyright (c) 2024 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Henrik Böving
-/
prelude
import Lean.Data.Parsec.Basic
import Init.Data.ByteArray.Basic
import Init.Data.String.Extra
namespace Lean
namespace Parsec
namespace ByteArray
instance : Input ByteArray.Iterator UInt8 Nat where
pos it := it.pos
next it := it.next
curr it := it.curr
hasNext it := it.hasNext
abbrev Parser (α : Type) : Type := Parsec ByteArray.Iterator α
protected def Parser.run (p : Parser α) (arr : ByteArray) : Except String α :=
match p arr.iter with
| .success _ res => Except.ok res
| .error it err => Except.error s!"offset {repr it.pos}: {err}"
@[inline]
def pbyte (b : UInt8) : Parser UInt8 := attempt do
if (← any) = b then pure b else fail s!"expected: '{b}'"
@[inline]
def skipByte (b : UInt8) : Parser Unit := pbyte b *> pure ()
def skipBytes (arr : ByteArray) : Parser Unit := do
for b in arr do
skipByte b
@[inline]
def pstring (s : String) : Parser String := do
skipBytes s.toUTF8
return s
@[inline]
def skipString (s : String) : Parser Unit := pstring s *> pure ()
/--
Parse a `Char` that can be represented in 1 byte. If `c` uses more than 1 byte it is truncated.
-/
@[inline]
def pByteChar (c : Char) : Parser Char := attempt do
if (← any) = c.toUInt8 then pure c else fail s!"expected: '{c}'"
/--
Skip a `Char` that can be represented in 1 byte. If `c` uses more than 1 byte it is truncated.
-/
@[inline]
def skipByteChar (c : Char) : Parser Unit := skipByte c.toUInt8
@[inline]
def digit : Parser Char := attempt do
let b ← any
if '0'.toUInt8 ≤ b ∧ b ≤ '9'.toUInt8 then return Char.ofUInt8 b else fail s!"digit expected"
@[inline]
def hexDigit : Parser Char := attempt do
let b ← any
if ('0'.toUInt8 ≤ b ∧ b ≤ '9'.toUInt8)
∨ ('a'.toUInt8 ≤ b ∧ b ≤ 'f'.toUInt8)
∨ ('A'.toUInt8 ≤ b ∧ b ≤ 'F'.toUInt8) then return Char.ofUInt8 b else fail s!"hex digit expected"
@[inline]
def asciiLetter : Parser Char := attempt do
let b ← any
if ('A'.toUInt8 ≤ b ∧ b ≤ 'Z'.toUInt8) ∨ ('a'.toUInt8 ≤ b ∧ b ≤ 'z'.toUInt8) then
return Char.ofUInt8 b
else
fail s!"ASCII letter expected"
private partial def skipWs (it : ByteArray.Iterator) : ByteArray.Iterator :=
if it.hasNext then
let b := it.curr
if b = '\u0009'.toUInt8 ∨ b = '\u000a'.toUInt8 ∨ b = '\u000d'.toUInt8 ∨ b = '\u0020'.toUInt8 then
skipWs it.next
else
it
else
it
@[inline]
def ws : Parser Unit := fun it =>
.success (skipWs it) ()
def take (n : Nat) : Parser ByteArray := fun it =>
let subarr := it.array.extract it.idx (it.idx + n)
if subarr.size != n then
.error it s!"expected: {n} bytes"
else
.success (it.forward n) subarr
end ByteArray
end Parsec
end Lean
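A small usage sketch of the byte-level combinators (the parser `twoHexDigits` is hypothetical, built only from `hexDigit` and `Parser.run` defined above):
```
open Lean.Parsec.ByteArray in
def twoHexDigits : Parser (Char × Char) := do
  let a ← hexDigit
  let b ← hexDigit
  return (a, b)
```
Running `twoHexDigits.run "2a".toUTF8` should then succeed with `('2', 'a')`, since each ASCII hex character occupies one byte.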

View File

@@ -0,0 +1,84 @@
/-
Copyright (c) 2021 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Dany Fabian, Henrik Böving
-/
prelude
import Lean.Data.Parsec.Basic
namespace Lean
namespace Parsec
namespace String
instance : Input String.Iterator Char String.Pos where
pos it := it.pos
next it := it.next
curr it := it.curr
hasNext it := it.hasNext
abbrev Parser (α : Type) : Type := Parsec String.Iterator α
protected def Parser.run (p : Parser α) (s : String) : Except String α :=
match p s.mkIterator with
| .success _ res => Except.ok res
| .error it err => Except.error s!"offset {repr it.i.byteIdx}: {err}"
/-- Parses the given string. -/
def pstring (s : String) : Parser String := fun it =>
let substr := it.extract (it.forward s.length)
if substr = s then
.success (it.forward s.length) substr
else
.error it s!"expected: {s}"
@[inline]
def skipString (s : String) : Parser Unit := pstring s *> pure ()
@[inline]
def pchar (c : Char) : Parser Char := attempt do
if (← any) = c then pure c else fail s!"expected: '{c}'"
@[inline]
def skipChar (c : Char) : Parser Unit := pchar c *> pure ()
@[inline]
def digit : Parser Char := attempt do
let c ← any
if '0' ≤ c ∧ c ≤ '9' then return c else fail s!"digit expected"
@[inline]
def hexDigit : Parser Char := attempt do
let c ← any
if ('0' ≤ c ∧ c ≤ '9')
∨ ('a' ≤ c ∧ c ≤ 'f')
∨ ('A' ≤ c ∧ c ≤ 'F') then return c else fail s!"hex digit expected"
@[inline]
def asciiLetter : Parser Char := attempt do
let c ← any
if ('A' ≤ c ∧ c ≤ 'Z') ∨ ('a' ≤ c ∧ c ≤ 'z') then return c else fail s!"ASCII letter expected"
private partial def skipWs (it : String.Iterator) : String.Iterator :=
if it.hasNext then
let c := it.curr
if c = '\u0009' ∨ c = '\u000a' ∨ c = '\u000d' ∨ c = '\u0020' then
skipWs it.next
else
it
else
it
@[inline]
def ws : Parser Unit := fun it =>
.success (skipWs it) ()
def take (n : Nat) : Parser String := fun it =>
let substr := it.extract (it.forward n)
if substr.length != n then
.error it s!"expected: {n} codepoints"
else
.success (it.forward n) substr
end String
end Parsec
end Lean
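A small illustration (not part of the diff) of composing the `String` combinators above; the module name `Lean.Data.Parsec.String` and the availability of `many1Chars` from the generalized `Parsec.Basic` are assumptions:

```lean
import Lean.Data.Parsec.String

open Lean.Parsec Lean.Parsec.String

-- Sketch: a `major.minor` version-number parser.
def version : Parser (Nat × Nat) := do
  let major ← many1Chars digit
  skipChar '.'
  let minor ← many1Chars digit
  pure (major.toNat!, minor.toNat!)

#eval version.run "4.10"  -- `.ok (4, 10)` on success
```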

View File

@@ -13,23 +13,24 @@ namespace Lean
namespace Xml
namespace Parser
open Lean.Parsec
open Parsec.ParseResult
open Lean.Parsec.String
abbrev LeanChar := Char
/-- Consume a newline character sequence, pretending that we read '\n'. As per spec:
https://www.w3.org/TR/xml/#sec-line-ends -/
def endl : Parsec LeanChar := (skipString "\r\n" <|> skipChar '\r' <|> skipChar '\n') *> pure '\n'
def endl : Parser LeanChar := (skipString "\r\n" <|> skipChar '\r' <|> skipChar '\n') *> pure '\n'
def quote (p : Parsec α) : Parsec α :=
def quote (p : Parser α) : Parser α :=
skipChar '\'' *> p <* skipChar '\''
<|> skipChar '"' *> p <* skipChar '"'
/-- https://www.w3.org/TR/xml/#NT-Char -/
def Char : Parsec LeanChar :=
def Char : Parser LeanChar :=
(attempt do
let c ← anyChar
let c ← any
let cNat := c.toNat
if (0x20 ≤ cNat ∧ cNat ≤ 0xD7FF)
∨ (0xE000 ≤ cNat ∧ cNat ≤ 0xFFFD)
@@ -37,11 +38,11 @@ def Char : Parsec LeanChar :=
<|> pchar '\t' <|> endl
/-- https://www.w3.org/TR/xml/#NT-S -/
def S : Parsec String :=
def S : Parser String :=
many1Chars (pchar ' ' <|> endl <|> pchar '\t')
/-- https://www.w3.org/TR/xml/#NT-Eq -/
def Eq : Parsec Unit :=
def Eq : Parser Unit :=
optional S *> skipChar '=' <* optional S
private def nameStartCharRanges : Array (Nat × Nat) :=
@@ -59,8 +60,8 @@ private def nameStartCharRanges : Array (Nat × Nat) :=
(0x10000, 0xEFFFF)]
/-- https://www.w3.org/TR/xml/#NT-NameStartChar -/
def NameStartChar : Parsec LeanChar := attempt do
let c ← anyChar
def NameStartChar : Parser LeanChar := attempt do
let c ← any
if ('A' ≤ c ∧ c ≤ 'Z') ∨ ('a' ≤ c ∧ c ≤ 'z') then pure c
else if c = ':' ∨ c = '_' then pure c
else
@@ -69,44 +70,44 @@ def NameStartChar : Parsec LeanChar := attempt do
else fail "expected a name character"
/-- https://www.w3.org/TR/xml/#NT-NameChar -/
def NameChar : Parsec LeanChar :=
def NameChar : Parser LeanChar :=
NameStartChar <|> digit <|> pchar '-' <|> pchar '.' <|> pchar '\xB7'
<|> satisfy (λ c => ('\u0300' ≤ c ∧ c ≤ '\u036F') ∨ ('\u203F' ≤ c ∧ c ≤ '\u2040'))
/-- https://www.w3.org/TR/xml/#NT-Name -/
def Name : Parsec String := do
def Name : Parser String := do
let x ← NameStartChar
manyCharsCore NameChar x.toString
/-- https://www.w3.org/TR/xml/#NT-VersionNum -/
def VersionNum : Parsec Unit :=
def VersionNum : Parser Unit :=
skipString "1." <* (many1 digit)
/-- https://www.w3.org/TR/xml/#NT-VersionInfo -/
def VersionInfo : Parsec Unit := do
def VersionInfo : Parser Unit := do
S *>
skipString "version"
Eq
quote VersionNum
/-- https://www.w3.org/TR/xml/#NT-EncName -/
def EncName : Parsec String := do
def EncName : Parser String := do
let x ← asciiLetter
manyCharsCore (asciiLetter <|> digit <|> pchar '-' <|> pchar '_' <|> pchar '.') x.toString
/-- https://www.w3.org/TR/xml/#NT-EncodingDecl -/
def EncodingDecl : Parsec String := do
def EncodingDecl : Parser String := do
S *>
skipString "encoding"
Eq
quote EncName
/-- https://www.w3.org/TR/xml/#NT-SDDecl -/
def SDDecl : Parsec String := do
def SDDecl : Parser String := do
S *> skipString "standalone" *> Eq *> quote (pstring "yes" <|> pstring "no")
/-- https://www.w3.org/TR/xml/#NT-XMLDecl -/
def XMLdecl : Parsec Unit := do
def XMLdecl : Parser Unit := do
skipString "<?xml"
VersionInfo
optional EncodingDecl *>
@@ -115,7 +116,7 @@ def XMLdecl : Parsec Unit := do
skipString "?>"
/-- https://www.w3.org/TR/xml/#NT-Comment -/
def Comment : Parsec String :=
def Comment : Parser String :=
let notDash := Char.toString <$> satisfy (λ c => c ≠ '-')
skipString "<!--" *>
Array.foldl String.append "" <$> many (attempt <| notDash <|> (do
@@ -125,45 +126,45 @@ def Comment : Parsec String :=
<* skipString "-->"
/-- https://www.w3.org/TR/xml/#NT-PITarget -/
def PITarget : Parsec String :=
def PITarget : Parser String :=
Name <* (skipChar 'X' <|> skipChar 'x') <* (skipChar 'M' <|> skipChar 'm') <* (skipChar 'L' <|> skipChar 'l')
/-- https://www.w3.org/TR/xml/#NT-PI -/
def PI : Parsec Unit := do
def PI : Parser Unit := do
skipString "<?"
<* PITarget <*
optional (S *> manyChars (notFollowedBy (skipString "?>") *> Char))
skipString "?>"
/-- https://www.w3.org/TR/xml/#NT-Misc -/
def Misc : Parsec Unit :=
def Misc : Parser Unit :=
Comment *> pure () <|> PI <|> S *> pure ()
/-- https://www.w3.org/TR/xml/#NT-SystemLiteral -/
def SystemLiteral : Parsec String :=
def SystemLiteral : Parser String :=
pchar '"' *> manyChars (satisfy λ c => c ≠ '"') <* pchar '"'
<|> pchar '\'' *> manyChars (satisfy λ c => c ≠ '\'') <* pure '\''
/-- https://www.w3.org/TR/xml/#NT-PubidChar -/
def PubidChar : Parsec LeanChar :=
def PubidChar : Parser LeanChar :=
asciiLetter <|> digit <|> endl <|> attempt do
let c ← anyChar
let c ← any
if "-'()+,./:=?;!*#@$_%".contains c then pure c else fail "PublidChar expected"
/-- https://www.w3.org/TR/xml/#NT-PubidLiteral -/
def PubidLiteral : Parsec String :=
def PubidLiteral : Parser String :=
pchar '"' *> manyChars PubidChar <* pchar '"'
<|> pchar '\'' *> manyChars (attempt do
let c ← PubidChar
if c = '\'' then fail "'\\'' not expected" else pure c) <* pchar '\''
/-- https://www.w3.org/TR/xml/#NT-ExternalID -/
def ExternalID : Parsec Unit :=
def ExternalID : Parser Unit :=
skipString "SYSTEM" *> S *> SystemLiteral *> pure ()
<|> skipString "PUBLIC" *> S *> PubidLiteral *> S *> SystemLiteral *> pure ()
/-- https://www.w3.org/TR/xml/#NT-Mixed -/
def Mixed : Parsec Unit :=
def Mixed : Parser Unit :=
(do
skipChar '('
optional S *>
@@ -175,11 +176,11 @@ def Mixed : Parsec Unit :=
mutual
/-- https://www.w3.org/TR/xml/#NT-cp -/
partial def cp : Parsec Unit :=
partial def cp : Parser Unit :=
(Name *> pure () <|> choice <|> seq) <* optional (skipChar '?' <|> skipChar '*' <|> skipChar '+')
/-- https://www.w3.org/TR/xml/#NT-choice -/
partial def choice : Parsec Unit := do
partial def choice : Parser Unit := do
skipChar '('
optional S *>
cp
@@ -188,7 +189,7 @@ mutual
skipChar ')'
/-- https://www.w3.org/TR/xml/#NT-seq -/
partial def seq : Parsec Unit := do
partial def seq : Parser Unit := do
skipChar '('
optional S *>
cp
@@ -198,15 +199,15 @@ mutual
end
/-- https://www.w3.org/TR/xml/#NT-children -/
def children : Parsec Unit :=
def children : Parser Unit :=
(choice <|> seq) <* optional (skipChar '?' <|> skipChar '*' <|> skipChar '+')
/-- https://www.w3.org/TR/xml/#NT-contentspec -/
def contentspec : Parsec Unit := do
def contentspec : Parser Unit := do
skipString "EMPTY" <|> skipString "ANY" <|> Mixed <|> children
/-- https://www.w3.org/TR/xml/#NT-elementdecl -/
def elementDecl : Parsec Unit := do
def elementDecl : Parser Unit := do
skipString "<!ELEMENT"
S *>
Name *>
@@ -215,11 +216,11 @@ def elementDecl : Parsec Unit := do
skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-StringType -/
def StringType : Parsec Unit :=
def StringType : Parser Unit :=
skipString "CDATA"
/-- https://www.w3.org/TR/xml/#NT-TokenizedType -/
def TokenizedType : Parsec Unit :=
def TokenizedType : Parser Unit :=
skipString "ID"
<|> skipString "IDREF"
<|> skipString "IDREFS"
@@ -229,7 +230,7 @@ def TokenizedType : Parsec Unit :=
<|> skipString "NMTOKENS"
/-- https://www.w3.org/TR/xml/#NT-NotationType -/
def NotationType : Parsec Unit := do
def NotationType : Parser Unit := do
skipString "NOTATION"
S *>
skipChar '(' <*
@@ -239,11 +240,11 @@ def NotationType : Parsec Unit := do
skipChar ')'
/-- https://www.w3.org/TR/xml/#NT-Nmtoken -/
def Nmtoken : Parsec String := do
def Nmtoken : Parser String := do
many1Chars NameChar
/-- https://www.w3.org/TR/xml/#NT-Enumeration -/
def Enumeration : Parsec Unit := do
def Enumeration : Parser Unit := do
skipChar '('
optional S *>
Nmtoken *> many (optional S *> skipChar '|' *> optional S *> Nmtoken) *>
@@ -251,11 +252,11 @@ def Enumeration : Parsec Unit := do
skipChar ')'
/-- https://www.w3.org/TR/xml/#NT-EnumeratedType -/
def EnumeratedType : Parsec Unit :=
def EnumeratedType : Parser Unit :=
NotationType <|> Enumeration
/-- https://www.w3.org/TR/xml/#NT-AttType -/
def AttType : Parsec Unit :=
def AttType : Parser Unit :=
StringType <|> TokenizedType <|> EnumeratedType
def predefinedEntityToChar : String → Option LeanChar
@@ -267,7 +268,7 @@ def predefinedEntityToChar : String → Option LeanChar
| _ => none
/-- https://www.w3.org/TR/xml/#NT-EntityRef -/
def EntityRef : Parsec $ Option LeanChar := attempt $
def EntityRef : Parser $ Option LeanChar := attempt $
skipChar '&' *> predefinedEntityToChar <$> Name <* skipChar ';'
@[inline]
@@ -280,7 +281,7 @@ def digitsToNat (base : Nat) (digits : Array Nat) : Nat :=
digits.foldl (λ r d => r * base + d) 0
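`digitsToNat` folds the digit array most-significant-first, so `#[1, 2, 3]` in base 10 becomes `((0·10 + 1)·10 + 2)·10 + 3 = 123`. The same fold on a list, as an illustration (not part of the diff):

```lean
-- Fold order check for the accumulator used by digitsToNat.
example : [1, 2, 3].foldl (fun r d => r * 10 + d) 0 = 123 := rfl
```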
/-- https://www.w3.org/TR/xml/#NT-CharRef -/
def CharRef : Parsec LeanChar := do
def CharRef : Parser LeanChar := do
skipString "&#"
let charCode ←
digitsToNat 10 <$> many1 (hexDigitToNat <$> digit)
@@ -289,11 +290,11 @@ def CharRef : Parsec LeanChar := do
return Char.ofNat charCode
/-- https://www.w3.org/TR/xml/#NT-Reference -/
def Reference : Parsec $ Option LeanChar :=
def Reference : Parser $ Option LeanChar :=
EntityRef <|> some <$> CharRef
/-- https://www.w3.org/TR/xml/#NT-AttValue -/
def AttValue : Parsec String := do
def AttValue : Parser String := do
let chars ←
(do
skipChar '"'
@@ -306,25 +307,25 @@ def AttValue : Parsec String := do
return chars.foldl (λ s c => if let some c := c then s.push c else s) ""
/-- https://www.w3.org/TR/xml/#NT-DefaultDecl -/
def DefaultDecl : Parsec Unit :=
def DefaultDecl : Parser Unit :=
skipString "#REQUIRED"
<|> skipString "#IMPLIED"
<|> optional (skipString "#FIXED" <* S) *> AttValue *> pure ()
/-- https://www.w3.org/TR/xml/#NT-AttDef -/
def AttDef : Parsec Unit :=
def AttDef : Parser Unit :=
S *> Name *> S *> AttType *> S *> DefaultDecl
/-- https://www.w3.org/TR/xml/#NT-AttlistDecl -/
def AttlistDecl : Parsec Unit :=
def AttlistDecl : Parser Unit :=
skipString "<!ATTLIST" *> S *> Name *> many AttDef *> optional S *> skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-PEReference -/
def PEReference : Parsec Unit :=
def PEReference : Parser Unit :=
skipChar '%' *> Name *> skipChar ';'
/-- https://www.w3.org/TR/xml/#NT-EntityValue -/
def EntityValue : Parsec String := do
def EntityValue : Parser String := do
let chars ←
(do
skipChar '"'
@@ -338,51 +339,51 @@ def EntityValue : Parsec String := do
/-- https://www.w3.org/TR/xml/#NT-NDataDecl -/
def NDataDecl : Parsec Unit :=
def NDataDecl : Parser Unit :=
S *> skipString "NDATA" <* S <* Name
/-- https://www.w3.org/TR/xml/#NT-EntityDef -/
def EntityDef : Parsec Unit :=
def EntityDef : Parser Unit :=
EntityValue *> pure () <|> (ExternalID <* optional NDataDecl)
/-- https://www.w3.org/TR/xml/#NT-GEDecl -/
def GEDecl : Parsec Unit :=
def GEDecl : Parser Unit :=
skipString "<!ENTITY" *> S *> Name *> S *> EntityDef *> optional S *> skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-PEDef -/
def PEDef : Parsec Unit :=
def PEDef : Parser Unit :=
EntityValue *> pure () <|> ExternalID
/-- https://www.w3.org/TR/xml/#NT-PEDecl -/
def PEDecl : Parsec Unit :=
def PEDecl : Parser Unit :=
skipString "<!ENTITY" *> S *> skipChar '%' *> S *> Name *> PEDef *> optional S *> skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-EntityDecl -/
def EntityDecl : Parsec Unit :=
def EntityDecl : Parser Unit :=
GEDecl <|> PEDecl
/-- https://www.w3.org/TR/xml/#NT-PublicID -/
def PublicID : Parsec Unit :=
def PublicID : Parser Unit :=
skipString "PUBLIC" <* S <* PubidLiteral
/-- https://www.w3.org/TR/xml/#NT-NotationDecl -/
def NotationDecl : Parsec Unit :=
def NotationDecl : Parser Unit :=
skipString "<!NOTATION" *> S *> Name *> (ExternalID <|> PublicID) *> optional S *> skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-markupdecl -/
def markupDecl : Parsec Unit :=
def markupDecl : Parser Unit :=
elementDecl <|> AttlistDecl <|> EntityDecl <|> NotationDecl <|> PI <|> (Comment *> pure ())
/-- https://www.w3.org/TR/xml/#NT-DeclSep -/
def DeclSep : Parsec Unit :=
def DeclSep : Parser Unit :=
PEReference <|> S *> pure ()
/-- https://www.w3.org/TR/xml/#NT-intSubset -/
def intSubset : Parsec Unit :=
def intSubset : Parser Unit :=
many (markupDecl <|> DeclSep) *> pure ()
/-- https://www.w3.org/TR/xml/#NT-doctypedecl -/
def doctypedecl : Parsec Unit := do
def doctypedecl : Parser Unit := do
skipString "<!DOCTYPE"
S *>
Name *>
@@ -392,19 +393,19 @@ def doctypedecl : Parsec Unit := do
skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-prolog -/
def prolog : Parsec Unit :=
def prolog : Parser Unit :=
optional XMLdecl *>
many Misc *>
optional (doctypedecl <* many Misc) *> pure ()
/-- https://www.w3.org/TR/xml/#NT-Attribute -/
def Attribute : Parsec (String × String) := do
def Attribute : Parser (String × String) := do
let name ← Name
Eq
let value ← AttValue
return (name, value)
protected def elementPrefix : Parsec (Array Content → Element) := do
protected def elementPrefix : Parser (Array Content → Element) := do
skipChar '<'
let name ← Name
let attributes ← many (attempt <| S *> Attribute)
@@ -412,40 +413,40 @@ protected def elementPrefix : Parsec (Array Content → Element) := do
return Element.Element name (RBMap.fromList attributes.toList compare)
/-- https://www.w3.org/TR/xml/#NT-EmptyElemTag -/
def EmptyElemTag (elem : Array Content → Element) : Parsec Element := do
def EmptyElemTag (elem : Array Content → Element) : Parser Element := do
skipString "/>" *> pure (elem #[])
/-- https://www.w3.org/TR/xml/#NT-STag -/
def STag (elem : Array Content → Element) : Parsec (Array Content → Element) := do
def STag (elem : Array Content → Element) : Parser (Array Content → Element) := do
skipChar '>' *> pure elem
/-- https://www.w3.org/TR/xml/#NT-ETag -/
def ETag : Parsec Unit :=
def ETag : Parser Unit :=
skipString "</" *> Name *> optional S *> skipChar '>'
/-- https://www.w3.org/TR/xml/#NT-CDStart -/
def CDStart : Parsec Unit :=
def CDStart : Parser Unit :=
skipString "<![CDATA["
/-- https://www.w3.org/TR/xml/#NT-CDEnd -/
def CDEnd : Parsec Unit :=
def CDEnd : Parser Unit :=
skipString "]]>"
/-- https://www.w3.org/TR/xml/#NT-CData -/
def CData : Parsec String :=
manyChars (notFollowedBy (skipString "]]>") *> anyChar)
def CData : Parser String :=
manyChars (notFollowedBy (skipString "]]>") *> any)
/-- https://www.w3.org/TR/xml/#NT-CDSect -/
def CDSect : Parsec String :=
def CDSect : Parser String :=
CDStart *> CData <* CDEnd
/-- https://www.w3.org/TR/xml/#NT-CharData -/
def CharData : Parsec String :=
def CharData : Parser String :=
notFollowedBy (skipString "]]>") *> manyChars (satisfy λ c => c ≠ '<' ∧ c ≠ '&')
mutual
/-- https://www.w3.org/TR/xml/#NT-content -/
partial def content : Parsec (Array Content) := do
partial def content : Parser (Array Content) := do
let x ← optional (Content.Character <$> CharData)
let xs ← many do
let y ←
@@ -468,20 +469,20 @@ mutual
return res
/-- https://www.w3.org/TR/xml/#NT-element -/
partial def element : Parsec Element := do
partial def element : Parser Element := do
let elem ← Parser.elementPrefix
EmptyElemTag elem <|> STag elem <*> content <* ETag
end
/-- https://www.w3.org/TR/xml/#NT-document -/
def document : Parsec Element := prolog *> element <* many Misc <* eof
def document : Parser Element := prolog *> element <* many Misc <* eof
end Parser
def parse (s : String) : Except String Element :=
match Xml.Parser.document s.mkIterator with
| Parsec.ParseResult.success _ res => Except.ok res
| Parsec.ParseResult.error it err => Except.error s!"offset {it.i.byteIdx.repr}: {err}\n{(it.prevn 10).extract it}"
| .success _ res => Except.ok res
| .error it err => Except.error s!"offset {it.i.byteIdx.repr}: {err}\n{(it.prevn 10).extract it}"
end Xml
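A quick smoke test of the migrated parser (illustrative only; the element and attribute names are arbitrary):

```lean
import Lean.Data.Xml.Parser

-- Illustrative only: parse a small document with an XML declaration,
-- one attribute, and a nested empty element.
#eval Lean.Xml.parse "<?xml version=\"1.0\"?><a x=\"1\"><b/></a>"
-- `.ok` with the root `a` element if the input is well-formed
```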

View File

@@ -42,7 +42,7 @@ def mkCalcTrans (result resultType step stepType : Expr) : MetaM (Expr × Expr)
unless (← getCalcRelation? resultType).isSome do
throwError "invalid 'calc' step, step result is not a relation{indentExpr resultType}"
return (result, resultType)
| _ => throwError "invalid 'calc' step, failed to synthesize `Trans` instance{indentExpr selfType}\n{useDiagnosticMsg}"
| _ => throwError "invalid 'calc' step, failed to synthesize `Trans` instance{indentExpr selfType}{useDiagnosticMsg}"
/--
Adds a type annotation to a hole that occurs immediately at the beginning of the term.

View File

@@ -81,10 +81,6 @@ end Frontend
open Frontend
def IO.processCommands (inputCtx : Parser.InputContext) (parserState : Parser.ModuleParserState) (commandState : Command.State) : IO State := do
let (_, s) ← (Frontend.processCommands.run { inputCtx := inputCtx }).run { commandState := commandState, parserState := parserState, cmdPos := parserState.pos }
pure s
structure IncrementalState extends State where
inputCtx : Parser.InputContext
initialSnap : Language.Lean.CommandParsedSnapshot
@@ -92,12 +88,10 @@ deriving Nonempty
open Language in
/--
Variant of `IO.processCommands` that uses the new Lean language processor implementation for
potential incremental reuse. Pass in result of a previous invocation done with the same state
(but usually different input context) to allow for reuse.
Variant of `IO.processCommands` that allows for potential incremental reuse. Pass in the result of a
previous invocation done with the same state (but usually different input context) to allow for
reuse.
-/
-- `IO.processCommands` can be reimplemented on top of this as soon as the additional tasks speed up
-- things instead of slowing them down
partial def IO.processCommandsIncrementally (inputCtx : Parser.InputContext)
(parserState : Parser.ModuleParserState) (commandState : Command.State)
(old? : Option IncrementalState) :
@@ -110,7 +104,7 @@ where
let snap := t.get
let commands := commands.push snap.data.stx
if let some next := snap.nextCmdSnap? then
go initialSnap next commands
go initialSnap next.task commands
else
-- Opting into reuse also enables incremental reporting, so make sure to collect messages from
-- all snapshots
@@ -126,6 +120,11 @@ where
inputCtx, initialSnap, commands
}
def IO.processCommands (inputCtx : Parser.InputContext) (parserState : Parser.ModuleParserState)
(commandState : Command.State) : IO State := do
let st ← IO.processCommandsIncrementally inputCtx parserState commandState none
return st.toState
def process (input : String) (env : Environment) (opts : Options) (fileName : Option String := none) : IO (Environment × MessageLog) := do
let fileName := fileName.getD "<input>"
let inputCtx := Parser.mkInputContext input fileName
@@ -144,62 +143,31 @@ def runFrontend
: IO (Environment × Bool) := do
let startTime := (← IO.monoNanosNow).toFloat / 1000000000
let inputCtx := Parser.mkInputContext input fileName
if true then
-- Temporarily keep alive old cmdline driver for the Lean language so that we don't pay the
-- overhead of passing the environment between snapshots until we actually make good use of it
-- outside the server
let (header, parserState, messages) ← Parser.parseHeader inputCtx
-- allow `env` to be leaked, which would live until the end of the process anyway
let (env, messages) ← processHeader (leakEnv := true) header opts messages inputCtx trustLevel
let env := env.setMainModule mainModuleName
let mut commandState := Command.mkState env messages opts
let elabStartTime := (← IO.monoNanosNow).toFloat / 1000000000
if ileanFileName?.isSome then
-- Collect InfoTrees so we can later extract and export their info to the ilean file
commandState := { commandState with infoState.enabled := true }
let s ← IO.processCommands inputCtx parserState commandState
Language.reportMessages s.commandState.messages opts jsonOutput
if let some ileanFileName := ileanFileName? then
let trees := s.commandState.infoState.trees.toArray
let references ←
Lean.Server.findModuleRefs inputCtx.fileMap trees (localVars := false) |>.toLspModuleRefs
let ilean := { module := mainModuleName, references : Lean.Server.Ilean }
IO.FS.writeFile ileanFileName $ Json.compress $ toJson ilean
if let some out := trace.profiler.output.get? opts then
let traceState := s.commandState.traceState
-- importing does not happen in an elaboration monad, add now
let traceState := { traceState with
traces := #[{
ref := .missing,
msg := .trace { cls := `Import, startTime, stopTime := elabStartTime }
(.ofFormat "importing") #[]
}].toPArray' ++ traceState.traces
}
let profile ← Firefox.Profile.export mainModuleName.toString startTime traceState opts
IO.FS.writeFile out <| Json.compress <| toJson profile
return (s.commandState.env, !s.commandState.messages.hasErrors)
let opts := Language.Lean.internal.minimalSnapshots.set opts true
let ctx := { inputCtx with }
let processor := Language.Lean.process
let snap ← processor (fun _ => pure <| .ok { mainModuleName, opts, trustLevel }) none ctx
let snaps := Language.toSnapshotTree snap
snaps.runAndReport opts jsonOutput
if let some ileanFileName := ileanFileName? then
let trees := snaps.getAll.concatMap (match ·.infoTree? with | some t => #[t] | _ => #[])
let references := Lean.Server.findModuleRefs inputCtx.fileMap trees (localVars := false)
let ilean := { module := mainModuleName, references := references.toLspModuleRefs : Lean.Server.Ilean }
IO.FS.writeFile ileanFileName $ Json.compress $ toJson ilean
let hasErrors := snaps.getAll.any (·.diagnostics.msgLog.hasErrors)
-- TODO: remove default when reworking cmdline interface in Lean; currently the only case
-- where we use the environment despite errors in the file is `--stats`
let env := Language.Lean.waitForFinalEnv? snap |>.getD (← mkEmptyEnvironment)
pure (env, !hasErrors)
let some cmdState := Language.Lean.waitForFinalCmdState? snap
| return (← mkEmptyEnvironment, false)
if let some out := trace.profiler.output.get? opts then
let traceState := cmdState.traceState
let profile ← Firefox.Profile.export mainModuleName.toString startTime traceState opts
IO.FS.writeFile out <| Json.compress <| toJson profile
let hasErrors := snaps.getAll.any (·.diagnostics.msgLog.hasErrors)
pure (cmdState.env, !hasErrors)
end Lean.Elab

View File

@@ -323,6 +323,10 @@ private def declValToTerminationHint (declVal : Syntax) : TermElabM TerminationH
else
return .none
def instantiateMVarsProfiling (e : Expr) : MetaM Expr := do
profileitM Exception s!"instantiate metavars" (← getOptions) do
instantiateMVars e
private def elabFunValues (headers : Array DefViewElabHeader) : TermElabM (Array Expr) :=
headers.mapM fun header => do
let mut reusableResult? := none
@@ -348,7 +352,7 @@ private def elabFunValues (headers : Array DefViewElabHeader) : TermElabM (Array
elabTermEnsuringType valStx type <* synthesizeSyntheticMVarsNoPostponing
-- NOTE: without this `instantiatedMVars`, `mkLambdaFVars` may leave around a redex that
-- leads to more section variables being included than necessary
let val ← instantiateMVars val
let val ← instantiateMVarsProfiling val
mkLambdaFVars xs val
if let some snap := header.bodySnap? then
snap.new.resolve <| some {
@@ -389,7 +393,7 @@ private def instantiateMVarsAtHeader (header : DefViewElabHeader) : TermElabM De
private def instantiateMVarsAtLetRecToLift (toLift : LetRecToLift) : TermElabM LetRecToLift := do
let type ← instantiateMVars toLift.type
let val ← instantiateMVars toLift.val
let val ← instantiateMVarsProfiling toLift.val
pure { toLift with type, val }
private def typeHasRecFun (type : Expr) (funFVars : Array Expr) (letRecsToLift : List LetRecToLift) : Option FVarId :=
@@ -597,7 +601,7 @@ private def pickMaxFVar? (lctx : LocalContext) (fvarIds : Array FVarId) : Option
fvarIds.getMax? fun fvarId₁ fvarId₂ => (lctx.get! fvarId₁).index < (lctx.get! fvarId₂).index
private def preprocess (e : Expr) : TermElabM Expr := do
let e ← instantiateMVars e
let e ← instantiateMVarsProfiling e
-- which let-decls are dependent. We say a let-decl is dependent if its lambda abstraction is type incorrect.
Meta.check e
pure e
@@ -708,7 +712,7 @@ private def mkLetRecClosures (sectionVars : Array Expr) (mainFVarIds : Array FVa
-- This can happen when this particular let-rec has nested let-rec that have been resolved in previous iterations.
-- This code relies on the fact that nested let-recs occur before the outer most let-recs at `letRecsToLift`.
-- Unresolved nested let-recs appear as metavariables before they are resolved. See `assignExprMVar` at `mkLetRecClosureFor`
let valNew ← instantiateMVars letRecsToLift[i]!.val
let valNew ← instantiateMVarsProfiling letRecsToLift[i]!.val
letRecsToLift := letRecsToLift.modify i fun t => { t with val := valNew }
-- We have to recompute the `freeVarMap` in this case. This overhead should not be an issue in practice.
freeVarMap ← mkFreeVarMap sectionVars mainFVarIds recFVarIds letRecsToLift
@@ -821,10 +825,10 @@ def main (sectionVars : Array Expr) (mainHeaders : Array DefViewElabHeader) (mai
let letRecsToLift ← letRecsToLift.mapM fun toLift => withLCtx toLift.lctx toLift.localInstances do
Meta.check toLift.type
Meta.check toLift.val
return { toLift with val := (← instantiateMVars toLift.val), type := (← instantiateMVars toLift.type) }
return { toLift with val := (← instantiateMVarsProfiling toLift.val), type := (← instantiateMVars toLift.type) }
let letRecClosures ← mkLetRecClosures sectionVars mainFVarIds recFVarIds letRecsToLift
-- mkLetRecClosures assign metavariables that were placeholders for the lifted declarations.
let mainVals ← mainVals.mapM (instantiateMVars ·)
let mainVals ← mainVals.mapM (instantiateMVarsProfiling ·)
let mainHeaders ← mainHeaders.mapM instantiateMVarsAtHeader
let letRecClosures ← letRecClosures.mapM fun closure => do pure { closure with toLift := (← instantiateMVarsAtLetRecToLift closure.toLift) }
-- Replace fvarIds for functions being defined with closed terms
@@ -923,7 +927,7 @@ where
try
let values ← elabFunValues headers
Term.synthesizeSyntheticMVarsNoPostponing
values.mapM (instantiateMVars ·)
values.mapM (instantiateMVarsProfiling ·)
catch ex =>
logException ex
headers.mapM fun header => mkSorry header.type (synthetic := true)

View File

@@ -7,6 +7,9 @@ prelude
import Init.ShareCommon
import Lean.Compiler.NoncomputableAttr
import Lean.Util.CollectLevelParams
import Lean.Util.NumObjs
import Lean.Util.NumApps
import Lean.PrettyPrinter
import Lean.Meta.AbstractNestedProofs
import Lean.Meta.ForEachExpr
import Lean.Elab.RecAppSyntax
@@ -17,7 +20,6 @@ namespace Lean.Elab
open Meta
open Term
/--
A (potentially recursive) definition.
The elaborator converts it into Kernel definitions using many different strategies.
@@ -98,15 +100,33 @@ private def compileDecl (decl : Declaration) : TermElabM Bool := do
throw ex
return true
register_builtin_option diagnostics.threshold.proofSize : Nat := {
defValue := 16384
group := "diagnostics"
descr := "only display proof statistics when proof has at least this number of terms"
}
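How the new diagnostic is meant to be used, as a sketch (not part of the diff; `demo` is a placeholder theorem, and the threshold is lowered so even a tiny proof triggers the report):

```lean
-- Sketch: with `diagnostics` on, any theorem whose proof term exceeds
-- `diagnostics.threshold.proofSize` gets a size/occurrence report.
set_option diagnostics true in
set_option diagnostics.threshold.proofSize 0 in
theorem demo : 1 + 1 = 2 := rfl
```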
private def reportTheoremDiag (d : TheoremVal) : TermElabM Unit := do
if (← isDiagnosticsEnabled) then
let proofSize ← d.value.numObjs
if proofSize > diagnostics.threshold.proofSize.get (← getOptions) then
let sizeMsg := MessageData.trace { cls := `size } m!"{proofSize}" #[]
let constOccs ← d.value.numApps (threshold := diagnostics.threshold.get (← getOptions))
let constOccsMsg ← constOccs.mapM fun (declName, numOccs) => return MessageData.trace { cls := `occs } m!"{MessageData.ofConst (← mkConstWithLevelParams declName)} ↦ {numOccs}" #[]
-- let info
logInfo <| MessageData.trace { cls := `theorem } m!"{d.name}" (#[sizeMsg] ++ constOccsMsg)
private def addNonRecAux (preDef : PreDefinition) (compile : Bool) (all : List Name) (applyAttrAfterCompilation := true) : TermElabM Unit :=
withRef preDef.ref do
let preDef abstractNestedProofs preDef
let decl ←
match preDef.kind with
| DefKind.«theorem» =>
pure <| Declaration.thmDecl {
let d := {
name := preDef.declName, levelParams := preDef.levelParams, type := preDef.type, value := preDef.value, all
}
reportTheoremDiag d
pure <| Declaration.thmDecl d
| DefKind.«opaque» =>
pure <| Declaration.opaqueDecl {
name := preDef.declName, levelParams := preDef.levelParams, type := preDef.type, value := preDef.value

View File

@@ -996,7 +996,7 @@ def synthesizeInstMVarCore (instMVar : MVarId) (maxResultSize? : Option Nat := n
if (← read).ignoreTCFailures then
return false
else
throwError "failed to synthesize{indentExpr type}{extraErrorMsg}\n{useDiagnosticMsg}"
throwError "failed to synthesize{indentExpr type}{extraErrorMsg}{useDiagnosticMsg}"
def mkCoe (expectedType : Expr) (e : Expr) (f? : Option Expr := none) (errorMsgHeader? : Option String := none) : TermElabM Expr := do
withTraceNode `Elab.coe (fun _ => return m!"adding coercion for {e} : {← inferType e} =?= {expectedType}") do

View File

@@ -1452,28 +1452,26 @@ partial def betaRev (f : Expr) (revArgs : Array Expr) (useZeta := false) (preser
else
let sz := revArgs.size
let rec go (e : Expr) (i : Nat) : Expr :=
let done (_ : Unit) : Expr :=
let n := sz - i
mkAppRevRange (e.instantiateRange n sz revArgs) 0 n revArgs
match e with
| Expr.lam _ _ b _ =>
| .lam _ _ b _ =>
if i + 1 < sz then
go b (i+1)
else
let n := sz - (i + 1)
mkAppRevRange (b.instantiateRange n sz revArgs) 0 n revArgs
| Expr.letE _ _ v b _ =>
b.instantiate revArgs
| .letE _ _ v b _ =>
if useZeta && i < sz then
go (b.instantiate1 v) i
else
let n := sz - i
mkAppRevRange (e.instantiateRange n sz revArgs) 0 n revArgs
| Expr.mdata k b =>
done ()
| .mdata _ b =>
if preserveMData then
let n := sz - i
mkMData k (mkAppRevRange (b.instantiateRange n sz revArgs) 0 n revArgs)
done ()
else
go b i
| b =>
let n := sz - i
mkAppRevRange (b.instantiateRange n sz revArgs) 0 n revArgs
| _ => done ()
go f 0
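The cleanup factors the repeated "apply the remaining arguments" logic into the local `done` helper without changing behavior. A small behavioral check (illustrative, not part of the diff; recall that `betaRev` takes the arguments in reverse order):

```lean
import Lean

open Lean

-- (fun x => x) applied to the constant `n` beta-reduces to `n`.
#eval (Expr.lam `x (.const `Nat []) (.bvar 0) .default).betaRev #[.const `n []] == Expr.const `n []
-- expected: true
```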
/--

View File

@@ -10,8 +10,8 @@ Authors: Sebastian Ullrich
prelude
import Lean.Language.Basic
import Lean.Language.Lean.Types
import Lean.Parser.Module
import Lean.Elab.Command
import Lean.Elab.Import
/-!
@@ -166,11 +166,6 @@ namespace Lean.Language.Lean
open Lean.Elab Command
open Lean.Parser
private def pushOpt (a? : Option α) (as : Array α) : Array α :=
match a? with
| some a => as.push a
| none => as
/-- Option for capturing output to stderr during elaboration. -/
register_builtin_option stderrAsMessages : Bool := {
defValue := true
@@ -178,101 +173,6 @@ register_builtin_option stderrAsMessages : Bool := {
descr := "(server) capture output to the Lean stderr channel (such as from `dbg_trace`) during elaboration of a command as a diagnostic message"
}
/-! The hierarchy of Lean snapshot types -/
/-- Snapshot after elaboration of the entire command. -/
structure CommandFinishedSnapshot extends Language.Snapshot where
/-- Resulting elaboration state. -/
cmdState : Command.State
deriving Nonempty
instance : ToSnapshotTree CommandFinishedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot, #[]⟩
/-- State after a command has been parsed. -/
structure CommandParsedSnapshotData extends Snapshot where
/-- Syntax tree of the command. -/
stx : Syntax
/-- Resulting parser state. -/
parserState : Parser.ModuleParserState
/--
Snapshot for incremental reporting and reuse during elaboration, type dependent on specific
elaborator.
-/
elabSnap : SnapshotTask DynamicSnapshot
/-- State after processing is finished. -/
finishedSnap : SnapshotTask CommandFinishedSnapshot
/-- Cache for `save`; to be replaced with incrementality. -/
tacticCache : IO.Ref Tactic.Cache
deriving Nonempty
/-- State after a command has been parsed. -/
-- workaround for lack of recursive structures
inductive CommandParsedSnapshot where
/-- Creates a command parsed snapshot. -/
| mk (data : CommandParsedSnapshotData)
(nextCmdSnap? : Option (SnapshotTask CommandParsedSnapshot))
deriving Nonempty
/-- The snapshot data. -/
abbrev CommandParsedSnapshot.data : CommandParsedSnapshot → CommandParsedSnapshotData
| mk data _ => data
/-- Next command, unless this is a terminal command. -/
abbrev CommandParsedSnapshot.nextCmdSnap? : CommandParsedSnapshot →
Option (SnapshotTask CommandParsedSnapshot)
| mk _ next? => next?
partial instance : ToSnapshotTree CommandParsedSnapshot where
toSnapshotTree := go where
go s := ⟨s.data.toSnapshot,
#[s.data.elabSnap.map (sync := true) toSnapshotTree,
s.data.finishedSnap.map (sync := true) toSnapshotTree] |>
pushOpt (s.nextCmdSnap?.map (·.map (sync := true) go))⟩
/-- State after successful importing. -/
structure HeaderProcessedState where
/-- The resulting initial elaboration state. -/
cmdState : Command.State
/-- First command task (there is always at least a terminal command). -/
firstCmdSnap : SnapshotTask CommandParsedSnapshot
/-- State after the module header has been processed including imports. -/
structure HeaderProcessedSnapshot extends Snapshot where
/-- State after successful importing. -/
result? : Option HeaderProcessedState
isFatal := result?.isNone
instance : ToSnapshotTree HeaderProcessedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot, #[] |>
pushOpt (s.result?.map (·.firstCmdSnap.map (sync := true) toSnapshotTree))⟩
/-- State after successfully parsing the module header. -/
structure HeaderParsedState where
/-- Resulting parser state. -/
parserState : Parser.ModuleParserState
/-- Header processing task. -/
processedSnap : SnapshotTask HeaderProcessedSnapshot
/-- State after the module header has been parsed. -/
structure HeaderParsedSnapshot extends Snapshot where
/-- Parser input context supplied by the driver, stored here for incremental parsing. -/
ictx : Parser.InputContext
/-- Resulting syntax tree. -/
stx : Syntax
/-- State after successful parsing. -/
result? : Option HeaderParsedState
isFatal := result?.isNone
/-- Cancellation token for interrupting processing of this run. -/
cancelTk? : Option IO.CancelToken
instance : ToSnapshotTree HeaderParsedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot,
#[] |> pushOpt (s.result?.map (·.processedSnap.map (sync := true) toSnapshotTree))⟩
/-- Shortcut accessor to the final header state, if successful. -/
def HeaderParsedSnapshot.processedResult (snap : HeaderParsedSnapshot) :
SnapshotTask (Option HeaderProcessedState) :=
snap.result?.bind (·.processedSnap.map (sync := true) (·.result?)) |>.getD (.pure none)
/-- Initial snapshot of the Lean language processor: a "header parsed" snapshot. -/
abbrev InitialSnapshot := HeaderParsedSnapshot
/-- Lean-specific processing context. -/
structure LeanProcessingContext extends ProcessingContext where
/-- Position of the first file difference if there was a previous invocation. -/
@@ -334,6 +234,54 @@ structure SetupImportsResult where
/-- Kernel trust level. -/
trustLevel : UInt32 := 0
/-- Performance option used by cmdline driver. -/
register_builtin_option internal.minimalSnapshots : Bool := {
defValue := false
descr := "reduce information stored in snapshots to the minimum necessary for the cmdline \
driver: diagnostics per command and final full snapshot"
}
/--
Parses values of options registered during import and left by the C++ frontend as strings, fails if
any option names remain unknown.
-/
def reparseOptions (opts : Options) : IO Options := do
let mut opts := opts
let decls ← getOptionDecls
for (name, val) in opts do
let .ofString val := val
| continue -- Already parsed by C++
-- Options can be prefixed with `weak` in order to turn off the error when the option is not
-- defined
let weak := name.getRoot == `weak
if weak then
opts := opts.erase name
let name := name.replacePrefix `weak Name.anonymous
let some decl := decls.find? name
| unless weak do
throw <| .userError s!"invalid -D parameter, unknown configuration option '{name}'\n\n\
If the option is defined in this library, use '-D{`weak ++ name}' to set it conditionally"
continue
match decl.defValue with
| .ofBool _ =>
match val with
| "true" => opts := opts.insert name true
| "false" => opts := opts.insert name false
| _ =>
throw <| .userError s!"invalid -D parameter, invalid configuration option '{val}' value, \
it must be true/false"
| .ofNat _ =>
let some val := val.toNat?
| throw <| .userError s!"invalid -D parameter, invalid configuration option '{val}' value, \
it must be a natural number"
opts := opts.insert name val
| .ofString _ => opts := opts.insert name val
| _ => throw <| .userError s!"invalid -D parameter, configuration option '{name}' \
cannot be set in the command line, use set_option command"
return opts
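As an illustration of the `.ofBool` branch above, here is a minimal standalone sketch; `parseBoolValue` is a hypothetical helper name, not part of this patch:

```lean
/-- Hypothetical standalone mirror of the `.ofBool` branch of `reparseOptions`. -/
def parseBoolValue (val : String) : Except String Bool :=
  match val with
  | "true"  => .ok true
  | "false" => .ok false
  | _ => .error s!"invalid -D parameter, invalid configuration option '{val}' value, it must be true/false"

#eval parseBoolValue "true"   -- Except.ok true
#eval parseBoolValue "maybe"  -- Except.error "invalid -D parameter, ..."
```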
/--
Entry point of the Lean language processor.
@@ -379,9 +327,11 @@ where
if let some oldProcSuccess := oldProcessed.result? then
-- also wait on old command parse snapshot as parsing is cheap and may allow for
-- elaboration reuse
oldProcSuccess.firstCmdSnap.bindIO (sync := true) fun oldCmd =>
oldProcSuccess.firstCmdSnap.bindIO (sync := true) fun oldCmd => do
let prom ← IO.Promise.new
let _ ← IO.asTask (parseCmd oldCmd newParserState oldProcSuccess.cmdState oldProcSuccess.cmdState.env prom ctx)
return .pure { oldProcessed with result? := some { oldProcSuccess with
firstCmdSnap := (← parseCmd oldCmd newParserState oldProcSuccess.cmdState ctx) } }
firstCmdSnap := { range? := none, task := prom.result } } }
else
return .pure oldProcessed) } }
else return old
@@ -443,42 +393,68 @@ where
let setup ← match (← setupImports stx) with
| .ok setup => pure setup
| .error snap => return snap
let startTime := (← IO.monoNanosNow).toFloat / 1000000000
-- allows `headerEnv` to be leaked, which would live until the end of the process anyway
let (headerEnv, msgLog) ← Elab.processHeader (leakEnv := true) stx setup.opts .empty
ctx.toInputContext setup.trustLevel
let stopTime := (← IO.monoNanosNow).toFloat / 1000000000
let diagnostics := (← Snapshot.Diagnostics.ofMessageLog msgLog)
if msgLog.hasErrors then
return { diagnostics, result? := none }
let headerEnv := headerEnv.setMainModule setup.mainModuleName
let cmdState := Elab.Command.mkState headerEnv msgLog setup.opts
let cmdState := { cmdState with infoState := {
enabled := true
trees := #[Elab.InfoTree.context (.commandCtx {
env := headerEnv
fileMap := ctx.fileMap
ngen := { namePrefix := `_import }
}) (Elab.InfoTree.node
(Elab.Info.ofCommandInfo { elaborator := `header, stx })
(stx[1].getArgs.toList.map (fun importStx =>
Elab.InfoTree.node (Elab.Info.ofCommandInfo {
elaborator := `import
stx := importStx
}) #[].toPArray'
)).toPArray'
)].toPArray'
}}
let mut traceState := default
if trace.profiler.output.get? setup.opts |>.isSome then
traceState := {
traces := #[{
ref := .missing,
msg := .trace { cls := `Import, startTime, stopTime }
(.ofFormat "importing") #[]
: TraceElem
}].toPArray'
}
-- now that imports have been loaded, check options again
let opts ← reparseOptions setup.opts
let cmdState := Elab.Command.mkState headerEnv msgLog opts
let cmdState := { cmdState with
infoState := {
enabled := true
trees := #[Elab.InfoTree.context (.commandCtx {
env := headerEnv
fileMap := ctx.fileMap
ngen := { namePrefix := `_import }
}) (Elab.InfoTree.node
(Elab.Info.ofCommandInfo { elaborator := `header, stx })
(stx[1].getArgs.toList.map (fun importStx =>
Elab.InfoTree.node (Elab.Info.ofCommandInfo {
elaborator := `import
stx := importStx
}) #[].toPArray'
)).toPArray'
)].toPArray'
}
traceState
}
let prom ← IO.Promise.new
-- The speedup of these `markPersistent`s is negligible but they help in making unexpected
-- `inc_ref_cold`s more visible
let parserState := Runtime.markPersistent parserState
let cmdState := Runtime.markPersistent cmdState
let ctx := Runtime.markPersistent ctx
let _ ← IO.asTask (parseCmd none parserState cmdState cmdState.env prom ctx)
return {
diagnostics
infoTree? := cmdState.infoState.trees[0]!
result? := some {
cmdState
firstCmdSnap := (← parseCmd none parserState cmdState)
firstCmdSnap := { range? := none, task := prom.result }
}
}
parseCmd (old? : Option CommandParsedSnapshot) (parserState : Parser.ModuleParserState)
(cmdState : Command.State) : LeanProcessingM (SnapshotTask CommandParsedSnapshot) := do
(cmdState : Command.State) (initEnv : Environment) (prom : IO.Promise CommandParsedSnapshot) :
LeanProcessingM Unit := do
let ctx ← read
-- check for cancellation, most likely during elaboration of previous command, before starting
@@ -487,82 +463,100 @@ where
-- this is a bit ugly as we don't want to adjust our API with `Option`s just for cancellation
-- (as no-one should look at this result in that case) but anything containing `Environment`
-- is not `Inhabited`
return .pure <| .mk (nextCmdSnap? := none) {
prom.resolve <| .mk (nextCmdSnap? := none) {
diagnostics := .empty, stx := .missing, parserState
elabSnap := .pure <| .ofTyped { diagnostics := .empty : SnapshotLeaf }
finishedSnap := .pure { diagnostics := .empty, cmdState }
tacticCache := (← IO.mkRef {})
}
return
let unchanged old newParserState : BaseIO CommandParsedSnapshot :=
let unchanged old newParserState : BaseIO Unit :=
-- when syntax is unchanged, reuse command processing task as is
-- NOTE: even if the syntax tree is functionally unchanged, the new parser state may still
-- have changed because of trailing whitespace and comments etc., so it is passed separately
-- from `old`
if let some oldNext := old.nextCmdSnap? then
return .mk (data := old.data)
(nextCmdSnap? := (← old.data.finishedSnap.bindIO (sync := true) fun oldFinished =>
-- also wait on old command parse snapshot as parsing is cheap and may allow for
-- elaboration reuse
oldNext.bindIO (sync := true) fun oldNext => do
parseCmd oldNext newParserState oldFinished.cmdState ctx))
else return old -- terminal command, we're done!
if let some oldNext := old.nextCmdSnap? then do
let newProm ← IO.Promise.new
let _ ← old.data.finishedSnap.bindIO fun oldFinished =>
-- also wait on old command parse snapshot as parsing is cheap and may allow for
-- elaboration reuse
oldNext.bindIO (sync := true) fun oldNext => do
parseCmd oldNext newParserState oldFinished.cmdState initEnv newProm ctx
return .pure ()
prom.resolve <| .mk (data := old.data) (nextCmdSnap? := some { range? := none, task := newProm.result })
else prom.resolve old -- terminal command, we're done!
-- fast path, do not even start new task for this snapshot
if let some old := old? then
if let some nextCom ← old.nextCmdSnap?.bindM (·.get?) then
if (← isBeforeEditPos nextCom.data.parserState.pos) then
return .pure (← unchanged old old.data.parserState)
return (← unchanged old old.data.parserState)
SnapshotTask.ofIO (some ⟨parserState.pos, ctx.input.endPos⟩) do
let beginPos := parserState.pos
let scope := cmdState.scopes.head!
let pmctx := {
env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace
openDecls := scope.openDecls
}
let (stx, parserState, msgLog) := Parser.parseCommand ctx.toInputContext pmctx parserState
.empty
let beginPos := parserState.pos
let scope := cmdState.scopes.head!
let pmctx := {
env := cmdState.env, options := scope.opts, currNamespace := scope.currNamespace
openDecls := scope.openDecls
}
let (stx, parserState, msgLog) :=
profileit "parsing" scope.opts fun _ =>
Parser.parseCommand ctx.toInputContext pmctx parserState .empty
-- semi-fast path
if let some old := old? then
-- NOTE: as `parserState.pos` includes trailing whitespace, this forces reprocessing even if
-- only that whitespace changes, which is wasteful but still necessary because it may
-- influence the range of error messages such as from a trailing `exact`
if stx.eqWithInfo old.data.stx then
-- Here we must make sure to pass the *new* parser state; see NOTE in `unchanged`
return (← unchanged old parserState)
-- on first change, make sure to cancel old invocation
-- TODO: pass token into incrementality-aware elaborators to improve reuse of still-valid,
-- still-running elaboration steps?
if let some tk := ctx.oldCancelTk? then
tk.set
-- semi-fast path
if let some old := old? then
-- NOTE: as `parserState.pos` includes trailing whitespace, this forces reprocessing even if
-- only that whitespace changes, which is wasteful but still necessary because it may
-- influence the range of error messages such as from a trailing `exact`
if stx.eqWithInfo old.data.stx then
-- Here we must make sure to pass the *new* parser state; see NOTE in `unchanged`
return (← unchanged old parserState)
-- on first change, make sure to cancel old invocation
-- TODO: pass token into incrementality-aware elaborators to improve reuse of still-valid,
-- still-running elaboration steps?
if let some tk := ctx.oldCancelTk? then
tk.set
-- definitely resolved in `doElab` task
let elabPromise ← IO.Promise.new
let tacticCache ← old?.map (·.data.tacticCache) |>.getDM (IO.mkRef {})
let finishedSnap ←
doElab stx cmdState beginPos
{ old? := old?.map fun old => ⟨old.data.stx, old.data.elabSnap⟩, new := elabPromise }
tacticCache
ctx
let next? ← if Parser.isTerminalCommand stx then pure none
-- for now, wait on "command finished" snapshot before parsing next command
else some <$> finishedSnap.bindIO fun finished =>
parseCmd none parserState finished.cmdState ctx
return .mk (nextCmdSnap? := next?) {
diagnostics := (← Snapshot.Diagnostics.ofMessageLog msgLog)
stx
parserState
elabSnap := { range? := stx.getRange?, task := elabPromise.result }
finishedSnap
-- definitely resolved in `doElab` task
let elabPromise ← IO.Promise.new
let tacticCache ← old?.map (·.data.tacticCache) |>.getDM (IO.mkRef {})
let finishedSnap ←
doElab stx cmdState beginPos
{ old? := old?.map fun old => ⟨old.data.stx, old.data.elabSnap⟩, new := elabPromise }
tacticCache
ctx
let minimalSnapshots := internal.minimalSnapshots.get cmdState.scopes.head!.opts
let next? ← if Parser.isTerminalCommand stx then pure none
-- for now, wait on "command finished" snapshot before parsing next command
else some <$> IO.Promise.new
let diagnostics ← Snapshot.Diagnostics.ofMessageLog msgLog
let data := if minimalSnapshots && !Parser.isTerminalCommand stx then {
diagnostics
stx := .missing
parserState := {}
elabSnap := { range? := stx.getRange?, task := elabPromise.result }
finishedSnap := .pure {
diagnostics := finishedSnap.diagnostics
infoTree? := none
cmdState := {
env := initEnv
maxRecDepth := 0
}
}
tacticCache
} else {
diagnostics, stx, parserState, tacticCache
elabSnap := { range? := stx.getRange?, task := elabPromise.result }
finishedSnap := .pure finishedSnap
}
prom.resolve <| .mk (nextCmdSnap? := next?.map ({ range? := some ⟨parserState.pos, ctx.input.endPos⟩, task := ·.result })) data
if let some next := next? then
parseCmd none parserState finishedSnap.cmdState initEnv next ctx
doElab (stx : Syntax) (cmdState : Command.State) (beginPos : String.Pos)
(snap : SnapshotBundle DynamicSnapshot) (tacticCache : IO.Ref Tactic.Cache) :
LeanProcessingM (SnapshotTask CommandFinishedSnapshot) := do
LeanProcessingM CommandFinishedSnapshot := do
let ctx ← read
-- (Try to) use last line of command as range for final snapshot task. This ensures we do not
-- retract the progress bar to a previous position in case the command support incremental
@@ -571,48 +565,46 @@ where
-- `parseCmd` and containing the entire range of the command will determine the reported
-- progress and be resolved effectively at the same time as this snapshot task, so `tailPos` is
-- irrelevant in this case.
let tailPos := stx.getTailPos? |>.getD beginPos
SnapshotTask.ofIO (some ⟨tailPos, tailPos⟩) do
let scope := cmdState.scopes.head!
let cmdStateRef ← IO.mkRef { cmdState with messages := .empty }
/-
The same snapshot may be executed by different tasks. So, to make sure `elabCommandTopLevel`
has exclusive access to the cache, we create a fresh reference here. Before this change, the
following `tacticCache.modify` would reset the tactic post cache while another snapshot was
still using it.
-/
let tacticCacheNew ← IO.mkRef (← tacticCache.get)
let cmdCtx : Elab.Command.Context := { ctx with
cmdPos := beginPos
tacticCache? := some tacticCacheNew
snap? := some snap
cancelTk? := some ctx.newCancelTk
}
let (output, _) ←
IO.FS.withIsolatedStreams (isolateStderr := stderrAsMessages.get scope.opts) do
liftM (m := BaseIO) do
withLoggingExceptions
(getResetInfoTrees *> Elab.Command.elabCommandTopLevel stx)
cmdCtx cmdStateRef
let postNew := (← tacticCacheNew.get).post
tacticCache.modify fun _ => { pre := postNew, post := {} }
let cmdState ← cmdStateRef.get
let mut messages := cmdState.messages
if !output.isEmpty then
messages := messages.add {
fileName := ctx.fileName
severity := MessageSeverity.information
pos := ctx.fileMap.toPosition beginPos
data := output
}
let cmdState := { cmdState with messages }
-- definitely resolve eventually
snap.new.resolve <| .ofTyped { diagnostics := .empty : SnapshotLeaf }
return {
diagnostics := (← Snapshot.Diagnostics.ofMessageLog cmdState.messages)
infoTree? := some cmdState.infoState.trees[0]!
cmdState
let scope := cmdState.scopes.head!
let cmdStateRef ← IO.mkRef { cmdState with messages := .empty }
/-
The same snapshot may be executed by different tasks. So, to make sure `elabCommandTopLevel`
has exclusive access to the cache, we create a fresh reference here. Before this change, the
following `tacticCache.modify` would reset the tactic post cache while another snapshot was
still using it.
-/
let tacticCacheNew ← IO.mkRef (← tacticCache.get)
let cmdCtx : Elab.Command.Context := { ctx with
cmdPos := beginPos
tacticCache? := some tacticCacheNew
snap? := if internal.minimalSnapshots.get scope.opts then none else some snap
cancelTk? := some ctx.newCancelTk
}
let (output, _) ←
IO.FS.withIsolatedStreams (isolateStderr := stderrAsMessages.get scope.opts) do
liftM (m := BaseIO) do
withLoggingExceptions
(getResetInfoTrees *> Elab.Command.elabCommandTopLevel stx)
cmdCtx cmdStateRef
let postNew := (← tacticCacheNew.get).post
tacticCache.modify fun _ => { pre := postNew, post := {} }
let cmdState ← cmdStateRef.get
let mut messages := cmdState.messages
if !output.isEmpty then
messages := messages.add {
fileName := ctx.fileName
severity := MessageSeverity.information
pos := ctx.fileMap.toPosition beginPos
data := output
}
let cmdState := { cmdState with messages }
-- definitely resolve eventually
snap.new.resolve <| .ofTyped { diagnostics := .empty : SnapshotLeaf }
return {
diagnostics := (← Snapshot.Diagnostics.ofMessageLog cmdState.messages)
infoTree? := some cmdState.infoState.trees[0]!
cmdState
}
/--
Convenience function for tool uses of the language processor that skips header handling.
@@ -620,14 +612,15 @@ Convenience function for tool uses of the language processor that skips header h
def processCommands (inputCtx : Parser.InputContext) (parserState : Parser.ModuleParserState)
(commandState : Command.State)
(old? : Option (Parser.InputContext × CommandParsedSnapshot) := none) :
BaseIO (SnapshotTask CommandParsedSnapshot) := do
process.parseCmd (old?.map (·.2)) parserState commandState
BaseIO (Task CommandParsedSnapshot) := do
let prom ← IO.Promise.new
process.parseCmd (old?.map (·.2)) parserState commandState commandState.env prom
|>.run (old?.map (·.1))
|>.run { inputCtx with }
return prom.result
/-- Waits for and returns final environment, if importing was successful. -/
partial def waitForFinalEnv? (snap : InitialSnapshot) : Option Environment := do
/-- Waits for and returns final command state, if importing was successful. -/
partial def waitForFinalCmdState? (snap : InitialSnapshot) : Option Command.State := do
let snap ← snap.result?
let snap ← snap.processedSnap.get.result?
goCmd snap.firstCmdSnap.get
@@ -635,6 +628,6 @@ where goCmd snap :=
if let some next := snap.nextCmdSnap? then
goCmd next.get
else
snap.data.finishedSnap.get.cmdState.env
snap.data.finishedSnap.get.cmdState
end Lean

@@ -0,0 +1,119 @@
/-
Copyright (c) 2023 Lean FRO. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Sebastian Ullrich

Implementation of the Lean language: parsing and processing of header and commands, incremental
recompilation
-/
prelude
import Lean.Language.Basic
import Lean.Elab.Command
set_option linter.missingDocs true
namespace Lean.Language.Lean
open Lean.Elab Command
open Lean.Parser
private def pushOpt (a? : Option α) (as : Array α) : Array α :=
match a? with
| some a => as.push a
| none => as
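For intuition, `pushOpt` appends the payload of a `some` and leaves the array unchanged on `none`. A standalone sketch (renamed `pushOpt'` here, since the original is `private`):

```lean
def pushOpt' (a? : Option α) (as : Array α) : Array α :=
  match a? with
  | some a => as.push a
  | none   => as

#eval pushOpt' (some 3) #[1, 2]            -- #[1, 2, 3]
#eval pushOpt' (none : Option Nat) #[1, 2] -- #[1, 2]
```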
/-! The hierarchy of Lean snapshot types -/
/-- Snapshot after elaboration of the entire command. -/
structure CommandFinishedSnapshot extends Language.Snapshot where
/-- Resulting elaboration state. -/
cmdState : Command.State
deriving Nonempty
instance : ToSnapshotTree CommandFinishedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot, #[]⟩
/-- State after a command has been parsed. -/
structure CommandParsedSnapshotData extends Snapshot where
/-- Syntax tree of the command. -/
stx : Syntax
/-- Resulting parser state. -/
parserState : Parser.ModuleParserState
/--
Snapshot for incremental reporting and reuse during elaboration, type dependent on specific
elaborator.
-/
elabSnap : SnapshotTask DynamicSnapshot
/-- State after processing is finished. -/
finishedSnap : SnapshotTask CommandFinishedSnapshot
/-- Cache for `save`; to be replaced with incrementality. -/
tacticCache : IO.Ref Tactic.Cache
deriving Nonempty
/-- State after a command has been parsed. -/
-- workaround for lack of recursive structures
inductive CommandParsedSnapshot where
/-- Creates a command parsed snapshot. -/
| mk (data : CommandParsedSnapshotData)
(nextCmdSnap? : Option (SnapshotTask CommandParsedSnapshot))
deriving Nonempty
/-- The snapshot data. -/
abbrev CommandParsedSnapshot.data : CommandParsedSnapshot → CommandParsedSnapshotData
| mk data _ => data
/-- Next command, unless this is a terminal command. -/
abbrev CommandParsedSnapshot.nextCmdSnap? : CommandParsedSnapshot →
Option (SnapshotTask CommandParsedSnapshot)
| mk _ next? => next?
partial instance : ToSnapshotTree CommandParsedSnapshot where
toSnapshotTree := go where
go s := ⟨s.data.toSnapshot,
#[s.data.elabSnap.map (sync := true) toSnapshotTree,
s.data.finishedSnap.map (sync := true) toSnapshotTree] |>
pushOpt (s.nextCmdSnap?.map (·.map (sync := true) go))⟩
/-- State after successful importing. -/
structure HeaderProcessedState where
/-- The resulting initial elaboration state. -/
cmdState : Command.State
/-- First command task (there is always at least a terminal command). -/
firstCmdSnap : SnapshotTask CommandParsedSnapshot
/-- State after the module header has been processed including imports. -/
structure HeaderProcessedSnapshot extends Snapshot where
/-- State after successful importing. -/
result? : Option HeaderProcessedState
isFatal := result?.isNone
instance : ToSnapshotTree HeaderProcessedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot, #[] |>
pushOpt (s.result?.map (·.firstCmdSnap.map (sync := true) toSnapshotTree))⟩
/-- State after successfully parsing the module header. -/
structure HeaderParsedState where
/-- Resulting parser state. -/
parserState : Parser.ModuleParserState
/-- Header processing task. -/
processedSnap : SnapshotTask HeaderProcessedSnapshot
/-- State after the module header has been parsed. -/
structure HeaderParsedSnapshot extends Snapshot where
/-- Parser input context supplied by the driver, stored here for incremental parsing. -/
ictx : Parser.InputContext
/-- Resulting syntax tree. -/
stx : Syntax
/-- State after successful parsing. -/
result? : Option HeaderParsedState
isFatal := result?.isNone
/-- Cancellation token for interrupting processing of this run. -/
cancelTk? : Option IO.CancelToken
instance : ToSnapshotTree HeaderParsedSnapshot where
toSnapshotTree s := ⟨s.toSnapshot,
#[] |> pushOpt (s.result?.map (·.processedSnap.map (sync := true) toSnapshotTree))⟩
/-- Shortcut accessor to the final header state, if successful. -/
def HeaderParsedSnapshot.processedResult (snap : HeaderParsedSnapshot) :
SnapshotTask (Option HeaderProcessedState) :=
snap.result?.bind (·.processedSnap.map (sync := true) (·.result?)) |>.getD (.pure none)
/-- Initial snapshot of the Lean language processor: a "header parsed" snapshot. -/
abbrev InitialSnapshot := HeaderParsedSnapshot

@@ -636,7 +636,7 @@ def main (type : Expr) (maxResultSize : Nat) : MetaM (Option AbstractMVarsResult
(action.run { maxResultSize := maxResultSize, maxHeartbeats := getMaxHeartbeats (← getOptions) } |>.run' {})
fun ex =>
if ex.isRuntime then
throwError "failed to synthesize{indentExpr type}\n{ex.toMessageData}\n{useDiagnosticMsg}"
throwError "failed to synthesize{indentExpr type}\n{ex.toMessageData}{useDiagnosticMsg}"
else
throw ex
@@ -810,7 +810,7 @@ def trySynthInstance (type : Expr) (maxResultSize? : Option Nat := none) : MetaM
(fun _ => pure LOption.undef)
def throwFailedToSynthesize (type : Expr) : MetaM Expr :=
throwError "failed to synthesize{indentExpr type}\n{useDiagnosticMsg}"
throwError "failed to synthesize{indentExpr type}{useDiagnosticMsg}"
def synthInstance (type : Expr) (maxResultSize? : Option Nat := none) : MetaM Expr :=
catchInternalId isDefEqStuckExceptionId

@@ -60,6 +60,16 @@ builtin_dsimproc [simp, seval] reducePow ((_ ^ _ : Nat)) := fun e => do
unless (← checkExponent m) do return .continue
return .done <| toExpr (n ^ m)
builtin_dsimproc [simp, seval] reduceAnd ((_ &&& _ : Nat)) := reduceBin ``HAnd.hAnd 6 (· &&& ·)
builtin_dsimproc [simp, seval] reduceXor ((_ ^^^ _ : Nat)) := reduceBin ``HXor.hXor 6 (· ^^^ ·)
builtin_dsimproc [simp, seval] reduceOr ((_ ||| _ : Nat)) := reduceBin ``HOr.hOr 6 (· ||| ·)
builtin_dsimproc [simp, seval] reduceShiftLeft ((_ <<< _ : Nat)) :=
reduceBin ``HShiftLeft.hShiftLeft 6 (· <<< ·)
builtin_dsimproc [simp, seval] reduceShiftRight ((_ >>> _ : Nat)) :=
reduceBin ``HShiftRight.hShiftRight 6 (· >>> ·)
builtin_dsimproc [simp, seval] reduceGcd (gcd _ _) := reduceBin ``gcd 2 gcd
builtin_simproc [simp, seval] reduceLT (( _ : Nat) < _) := reduceBinPred ``LT.lt 4 (. < .)
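Since the new shift simprocs are tagged `[simp]`, `simp` should now close shift goals on `Nat` literals by computation. A small sketch, assuming a toolchain containing this change:

```lean
example : (1 <<< 4 : Nat) = 16 := by simp  -- via reduceShiftLeft
example : (20 >>> 2 : Nat) = 5 := by simp  -- via reduceShiftRight
```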

@@ -336,6 +336,8 @@ structure MetavarContext where
For more information about delayed abstraction, see the docstring for `DelayedMetavarAssignment`. -/
dAssignment : PersistentHashMap MVarId DelayedMetavarAssignment := {}
instance : Inhabited MetavarContext := {}
/-- A monad with a stateful metavariable context, defining `getMCtx` and `modifyMCtx`. -/
class MonadMCtx (m : Type → Type) where
getMCtx : m MetavarContext
@@ -358,15 +360,27 @@ abbrev setMCtx [MonadMCtx m] (mctx : MetavarContext) : m Unit :=
abbrev getLevelMVarAssignment? [Monad m] [MonadMCtx m] (mvarId : LMVarId) : m (Option Level) :=
return (← getMCtx).lAssignment.find? mvarId
@[export lean_get_lmvar_assignment]
def getLevelMVarAssignmentExp (m : MetavarContext) (mvarId : LMVarId) : Option Level :=
m.lAssignment.find? mvarId
def MetavarContext.getExprAssignmentCore? (m : MetavarContext) (mvarId : MVarId) : Option Expr :=
m.eAssignment.find? mvarId
@[export lean_get_mvar_assignment]
def MetavarContext.getExprAssignmentExp (m : MetavarContext) (mvarId : MVarId) : Option Expr :=
m.eAssignment.find? mvarId
def getExprMVarAssignment? [Monad m] [MonadMCtx m] (mvarId : MVarId) : m (Option Expr) :=
return (← getMCtx).getExprAssignmentCore? mvarId
def MetavarContext.getDelayedMVarAssignmentCore? (mctx : MetavarContext) (mvarId : MVarId) : Option DelayedMetavarAssignment :=
mctx.dAssignment.find? mvarId
@[export lean_get_delayed_mvar_assignment]
def MetavarContext.getDelayedMVarAssignmentExp (mctx : MetavarContext) (mvarId : MVarId) : Option DelayedMetavarAssignment :=
mctx.dAssignment.find? mvarId
def getDelayedMVarAssignment? [Monad m] [MonadMCtx m] (mvarId : MVarId) : m (Option DelayedMetavarAssignment) :=
return (← getMCtx).getDelayedMVarAssignmentCore? mvarId
@@ -478,6 +492,10 @@ def hasAssignableMVar [Monad m] [MonadMCtx m] : Expr → m Bool
def assignLevelMVar [MonadMCtx m] (mvarId : LMVarId) (val : Level) : m Unit :=
modifyMCtx fun m => { m with lAssignment := m.lAssignment.insert mvarId val }
@[export lean_assign_lmvar]
def assignLevelMVarExp (m : MetavarContext) (mvarId : LMVarId) (val : Level) : MetavarContext :=
{ m with lAssignment := m.lAssignment.insert mvarId val }
/--
Add `mvarId := x` to the metavariable assignment.
This method does not check whether `mvarId` is already assigned, nor does it check whether
@@ -487,6 +505,10 @@ This is a low-level API, and it is safer to use `isDefEq (mkMVar mvarId) x`.
def _root_.Lean.MVarId.assign [MonadMCtx m] (mvarId : MVarId) (val : Expr) : m Unit :=
modifyMCtx fun m => { m with eAssignment := m.eAssignment.insert mvarId val }
@[export lean_assign_mvar]
def assignExp (m : MetavarContext) (mvarId : MVarId) (val : Expr) : MetavarContext :=
{ m with eAssignment := m.eAssignment.insert mvarId val }
/--
Add a delayed assignment for the given metavariable. You must make sure that
the metavariable is not already assigned or delayed-assigned.
@@ -516,95 +538,22 @@ To avoid this term eta-expanded term, we apply beta-reduction when instantiating
This operation is performed at `instantiateExprMVars`, `elimMVarDeps`, and `levelMVarToParam`.
-/
partial def instantiateLevelMVars [Monad m] [MonadMCtx m] : Level → m Level
| lvl@(Level.succ lvl₁) => return Level.updateSucc! lvl (← instantiateLevelMVars lvl₁)
| lvl@(Level.max lvl₁ lvl₂) => return Level.updateMax! lvl (← instantiateLevelMVars lvl₁) (← instantiateLevelMVars lvl₂)
| lvl@(Level.imax lvl₁ lvl₂) => return Level.updateIMax! lvl (← instantiateLevelMVars lvl₁) (← instantiateLevelMVars lvl₂)
| lvl@(Level.mvar mvarId) => do
match (← getLevelMVarAssignment? mvarId) with
| some newLvl =>
if !newLvl.hasMVar then pure newLvl
else do
let newLvl' ← instantiateLevelMVars newLvl
assignLevelMVar mvarId newLvl'
pure newLvl'
| none => pure lvl
| lvl => pure lvl
@[extern "lean_instantiate_level_mvars"]
opaque instantiateLevelMVarsImp (mctx : MetavarContext) (l : Level) : MetavarContext × Level
partial def instantiateLevelMVars [Monad m] [MonadMCtx m] (l : Level) : m Level := do
let (mctx, lNew) := instantiateLevelMVarsImp (← getMCtx) l
setMCtx mctx
return lNew
@[extern "lean_instantiate_expr_mvars"]
opaque instantiateExprMVarsImp (mctx : MetavarContext) (e : Expr) : MetavarContext × Expr
/-- instantiateExprMVars main function -/
partial def instantiateExprMVars [Monad m] [MonadMCtx m] [STWorld ω m] [MonadLiftT (ST ω) m] (e : Expr) : MonadCacheT ExprStructEq Expr m Expr :=
if !e.hasMVar then
pure e
else checkCache { val := e : ExprStructEq } fun _ => do match e with
| .proj _ _ s => return e.updateProj! (← instantiateExprMVars s)
| .forallE _ d b _ => return e.updateForallE! (← instantiateExprMVars d) (← instantiateExprMVars b)
| .lam _ d b _ => return e.updateLambdaE! (← instantiateExprMVars d) (← instantiateExprMVars b)
| .letE _ t v b _ => return e.updateLet! (← instantiateExprMVars t) (← instantiateExprMVars v) (← instantiateExprMVars b)
| .const _ lvls => return e.updateConst! (← lvls.mapM instantiateLevelMVars)
| .sort lvl => return e.updateSort! (← instantiateLevelMVars lvl)
| .mdata _ b => return e.updateMData! (← instantiateExprMVars b)
| .app .. => e.withApp fun f args => do
let instArgs (f : Expr) : MonadCacheT ExprStructEq Expr m Expr := do
let args ← args.mapM instantiateExprMVars
pure (mkAppN f args)
let instApp : MonadCacheT ExprStructEq Expr m Expr := do
let wasMVar := f.isMVar
let f ← instantiateExprMVars f
if wasMVar && f.isLambda then
/- Some of the arguments in `args` are irrelevant after we beta
reduce. Also, it may be a bug to not instantiate them, since they
may depend on free variables that are not in the context (see
issue #4375). So we pass `useZeta := true` to ensure that they are
instantiated. -/
instantiateExprMVars (f.betaRev args.reverse (useZeta := true))
else
instArgs f
match f with
| .mvar mvarId =>
match (← getDelayedMVarAssignment? mvarId) with
| none => instApp
| some { fvars, mvarIdPending } =>
/-
Apply "delayed substitution" (i.e., delayed assignment + application).
That is, `f` is some metavariable `?m`, that is delayed assigned to `val`.
If after instantiating `val`, we obtain `newVal`, and `newVal` does not contain
metavariables, we replace the free variables `fvars` in `newVal` with the first
`fvars.size` elements of `args`. -/
if fvars.size > args.size then
/- We don't have sufficient arguments for instantiating the free variables `fvars`.
This can only happen if a tactic or elaboration function is not implemented correctly.
We decided to not use `panic!` here and report it as an error in the frontend
when we are checking for unassigned metavariables in an elaborated term. -/
instArgs f
else
let newVal ← instantiateExprMVars (mkMVar mvarIdPending)
if newVal.hasExprMVar then
instArgs f
else do
let args ← args.mapM instantiateExprMVars
/-
Example: suppose we have
`?m t1 t2 t3`
That is, `f := ?m` and `args := #[t1, t2, t3]`
Moreover, `?m` is delayed assigned
`?m #[x, y] := f x y`
where, `fvars := #[x, y]` and `newVal := f x y`.
After abstracting `newVal`, we have `f (Expr.bvar 0) (Expr.bvar 1)`.
After `instantiateRevRange 0 2 args`, we have `f t1 t2`.
After `mkAppRange 2 3`, we have `f t1 t2 t3` -/
let newVal := newVal.abstract fvars
let result := newVal.instantiateRevRange 0 fvars.size args
let result := mkAppRange result fvars.size args.size args
pure result
| _ => instApp
| e@(.mvar mvarId) => checkCache { val := e : ExprStructEq } fun _ => do
match (← getExprMVarAssignment? mvarId) with
| some newE => do
let newE' ← instantiateExprMVars newE
mvarId.assign newE'
pure newE'
| none => pure e
| e => pure e
def instantiateExprMVars [Monad m] [MonadMCtx m] (e : Expr) : m Expr := do
let (mctx, eNew) := instantiateExprMVarsImp (← getMCtx) e
setMCtx mctx
return eNew
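The beta-reduction behavior of `instantiateExprMVars` (the `wasMVar && f.isLambda` branch above) can be exercised from `MetaM`; a minimal sketch, with names chosen for illustration:

```lean
open Lean Meta in
/-- Sketch: assign `?m := fun x => x` and instantiate `?m 5`.
    Because the head metavariable resolves to a lambda,
    `instantiateMVars` beta-reduces the result to the literal `5`. -/
def betaOnInstantiate : MetaM Expr := do
  let natTy := mkConst ``Nat
  let m ← mkFreshExprMVar (some (← mkArrow natTy natTy))  -- ?m : Nat → Nat
  m.mvarId!.assign (mkLambda `x .default natTy (.bvar 0)) -- ?m := fun x => x
  instantiateMVars (mkApp m (mkNatLit 5))                 -- beta-reduces to `5`
```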
instance : MonadMCtx (StateRefT MetavarContext (ST ω)) where
getMCtx := get
@@ -792,8 +741,6 @@ def localDeclDependsOnPred [Monad m] [MonadMCtx m] (localDecl : LocalDecl) (pf :
namespace MetavarContext
instance : Inhabited MetavarContext := {}
@[export lean_mk_metavar_ctx]
def mkMetavarContext : Unit → MetavarContext := fun _ => {}


@@ -703,6 +703,9 @@ list, so it should be brief.
@[builtin_command_parser] def genInjectiveTheorems := leading_parser
"gen_injective_theorems% " >> ident
/-- To be implemented. -/
@[builtin_command_parser] def «include» := leading_parser "include " >> many1 (checkColGt >> ident)
/-- No-op parser used as syntax kind for attaching remaining whitespace at the end of the input. -/
@[run_builtin_parser_attribute_hooks] def eoi : Parser := leading_parser ""


@@ -16,7 +16,7 @@ import Lean.Data.Json.FromToJson
import Lean.Util.FileSetupInfo
import Lean.LoadDynlib
import Lean.Language.Basic
import Lean.Language.Lean
import Lean.Server.Utils
import Lean.Server.AsyncList


@@ -5,7 +5,7 @@ Released under Apache 2.0 license as described in the file LICENSE.
Authors: Wojciech Nawrocki, Marc Huisinga
-/
prelude
import Lean.Language.Lean
import Lean.Language.Lean.Types
import Lean.Server.Utils
import Lean.Server.Snapshots
import Lean.Server.AsyncList


@@ -9,7 +9,6 @@ import Init.System.IO
import Lean.Elab.Import
import Lean.Elab.Command
import Lean.Language.Lean
import Lean.Widget.InteractiveDiagnostic


@@ -32,3 +32,4 @@ import Lean.Util.Heartbeats
import Lean.Util.SearchPath
import Lean.Util.SafeExponentiation
import Lean.Util.NumObjs
import Lean.Util.NumApps


@@ -0,0 +1,57 @@
/-
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
prelude
import Lean.Expr
import Lean.Util.PtrSet
namespace Lean.Expr
namespace NumApps
unsafe structure State where
visited : PtrSet Expr := mkPtrSet
counters : NameMap Nat := {}
unsafe abbrev M := StateM State
unsafe def visit (e : Expr) : M Unit :=
unless (← get).visited.contains e do
modify fun s => { s with visited := s.visited.insert e }
match e with
| .forallE _ d b _ => visit d; visit b
| .lam _ d b _ => visit d; visit b
| .mdata _ b => visit b
| .letE _ t v b _ => visit t; visit v; visit b
| .app .. => e.withApp fun f args => do
if let .const declName _ := f then
let c := (← get).counters.find? declName |>.getD 0
modify fun s => { s with counters := s.counters.insert declName (c+1) }
visit f
args.forM visit
| .proj _ _ b => visit b
| _ => return ()
unsafe def main (e : Expr) : NameMap Nat :=
let (_, s) := NumApps.visit e |>.run {}
s.counters
end NumApps
/--
Returns the number of applications for each declaration used in `e`.
This operation is performed in `IO` because the result depends on the memory representation of the object.
Note: Use this function primarily for diagnosing performance issues.
-/
def numApps (e : Expr) (threshold : Nat := 0) : IO (Array (Name × Nat)) := do
let counters := unsafe NumApps.main e
let mut result := #[]
for (declName, num) in counters do
if num > threshold then
result := result.push (declName, num)
return result.qsort fun a b => a.2 > b.2
end Lean.Expr
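A possible use of `numApps` from a `CoreM` context; a sketch in which the declaration name and threshold are placeholders:

```lean
open Lean in
/-- Sketch: report per-constant application counts for the value of
    `declName`, skipping constants applied ten times or fewer. -/
def reportApps (declName : Name) : CoreM Unit := do
  let some info := (← getEnv).find? declName | return ()
  let some val := info.value? | return ()
  for (constName, n) in (← val.numApps (threshold := 10)) do
    IO.println s!"{constName}: {n} applications"
```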


@@ -2,4 +2,4 @@ add_library(kernel OBJECT level.cpp expr.cpp expr_eq_fn.cpp
for_each_fn.cpp replace_fn.cpp abstract.cpp instantiate.cpp
local_ctx.cpp declaration.cpp environment.cpp type_checker.cpp
init_module.cpp expr_cache.cpp equiv_manager.cpp quot.cpp
inductive.cpp trace.cpp)
inductive.cpp trace.cpp instantiate_mvars.cpp)


@@ -63,9 +63,9 @@ class constant_val : public object_ref {
public:
constant_val(name const & n, names const & lparams, expr const & type);
constant_val(constant_val const & other):object_ref(other) {}
constant_val(constant_val && other):object_ref(other) {}
constant_val(constant_val && other):object_ref(std::move(other)) {}
constant_val & operator=(constant_val const & other) { object_ref::operator=(other); return *this; }
constant_val & operator=(constant_val && other) { object_ref::operator=(other); return *this; }
constant_val & operator=(constant_val && other) { object_ref::operator=(std::move(other)); return *this; }
name const & get_name() const { return static_cast<name const &>(cnstr_get_ref(*this, 0)); }
names const & get_lparams() const { return static_cast<names const &>(cnstr_get_ref(*this, 1)); }
expr const & get_type() const { return static_cast<expr const &>(cnstr_get_ref(*this, 2)); }
@@ -79,9 +79,9 @@ class axiom_val : public object_ref {
public:
axiom_val(name const & n, names const & lparams, expr const & type, bool is_unsafe);
axiom_val(axiom_val const & other):object_ref(other) {}
axiom_val(axiom_val && other):object_ref(other) {}
axiom_val(axiom_val && other):object_ref(std::move(other)) {}
axiom_val & operator=(axiom_val const & other) { object_ref::operator=(other); return *this; }
axiom_val & operator=(axiom_val && other) { object_ref::operator=(other); return *this; }
axiom_val & operator=(axiom_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
names const & get_lparams() const { return to_constant_val().get_lparams(); }
@@ -105,9 +105,9 @@ class definition_val : public object_ref {
public:
definition_val(name const & n, names const & lparams, expr const & type, expr const & val, reducibility_hints const & hints, definition_safety safety, names const & all);
definition_val(definition_val const & other):object_ref(other) {}
definition_val(definition_val && other):object_ref(other) {}
definition_val(definition_val && other):object_ref(std::move(other)) {}
definition_val & operator=(definition_val const & other) { object_ref::operator=(other); return *this; }
definition_val & operator=(definition_val && other) { object_ref::operator=(other); return *this; }
definition_val & operator=(definition_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
names const & get_lparams() const { return to_constant_val().get_lparams(); }
@@ -128,9 +128,9 @@ class theorem_val : public object_ref {
public:
theorem_val(name const & n, names const & lparams, expr const & type, expr const & val, names const & all);
theorem_val(theorem_val const & other):object_ref(other) {}
theorem_val(theorem_val && other):object_ref(other) {}
theorem_val(theorem_val && other):object_ref(std::move(other)) {}
theorem_val & operator=(theorem_val const & other) { object_ref::operator=(other); return *this; }
theorem_val & operator=(theorem_val && other) { object_ref::operator=(other); return *this; }
theorem_val & operator=(theorem_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
names const & get_lparams() const { return to_constant_val().get_lparams(); }
@@ -148,9 +148,9 @@ class opaque_val : public object_ref {
public:
opaque_val(name const & n, names const & lparams, expr const & type, expr const & val, bool is_unsafe, names const & all);
opaque_val(opaque_val const & other):object_ref(other) {}
opaque_val(opaque_val && other):object_ref(other) {}
opaque_val(opaque_val && other):object_ref(std::move(other)) {}
opaque_val & operator=(opaque_val const & other) { object_ref::operator=(other); return *this; }
opaque_val & operator=(opaque_val && other) { object_ref::operator=(other); return *this; }
opaque_val & operator=(opaque_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
names const & get_lparams() const { return to_constant_val().get_lparams(); }
@@ -179,9 +179,9 @@ class inductive_type : public object_ref {
public:
inductive_type(name const & id, expr const & type, constructors const & cnstrs);
inductive_type(inductive_type const & other):object_ref(other) {}
inductive_type(inductive_type && other):object_ref(other) {}
inductive_type(inductive_type && other):object_ref(std::move(other)) {}
inductive_type & operator=(inductive_type const & other) { object_ref::operator=(other); return *this; }
inductive_type & operator=(inductive_type && other) { object_ref::operator=(other); return *this; }
inductive_type & operator=(inductive_type && other) { object_ref::operator=(std::move(other)); return *this; }
name const & get_name() const { return static_cast<name const &>(cnstr_get_ref(*this, 0)); }
expr const & get_type() const { return static_cast<expr const &>(cnstr_get_ref(*this, 1)); }
constructors const & get_cnstrs() const { return static_cast<constructors const &>(cnstr_get_ref(*this, 2)); }
@@ -205,7 +205,7 @@ class declaration : public object_ref {
public:
declaration();
declaration(declaration const & other):object_ref(other) {}
declaration(declaration && other):object_ref(other) {}
declaration(declaration && other):object_ref(std::move(other)) {}
/* low-level constructors */
explicit declaration(object * o):object_ref(o) {}
explicit declaration(b_obj_arg o, bool b):object_ref(o, b) {}
@@ -213,7 +213,7 @@ public:
declaration_kind kind() const { return static_cast<declaration_kind>(obj_tag(raw())); }
declaration & operator=(declaration const & other) { object_ref::operator=(other); return *this; }
declaration & operator=(declaration && other) { object_ref::operator=(other); return *this; }
declaration & operator=(declaration && other) { object_ref::operator=(std::move(other)); return *this; }
friend bool is_eqp(declaration const & d1, declaration const & d2) { return d1.raw() == d2.raw(); }
@@ -263,10 +263,10 @@ declaration mk_axiom_inferring_unsafe(environment const & env, name const & n,
class inductive_decl : public object_ref {
public:
inductive_decl(inductive_decl const & other):object_ref(other) {}
inductive_decl(inductive_decl && other):object_ref(other) {}
inductive_decl(inductive_decl && other):object_ref(std::move(other)) {}
inductive_decl(declaration const & d):object_ref(d) { lean_assert(d.is_inductive()); }
inductive_decl & operator=(inductive_decl const & other) { object_ref::operator=(other); return *this; }
inductive_decl & operator=(inductive_decl && other) { object_ref::operator=(other); return *this; }
inductive_decl & operator=(inductive_decl && other) { object_ref::operator=(std::move(other)); return *this; }
names const & get_lparams() const { return static_cast<names const &>(cnstr_get_ref(raw(), 0)); }
nat const & get_nparams() const { return static_cast<nat const &>(cnstr_get_ref(raw(), 1)); }
inductive_types const & get_types() const { return static_cast<inductive_types const &>(cnstr_get_ref(raw(), 2)); }
@@ -290,9 +290,9 @@ public:
inductive_val(name const & n, names const & lparams, expr const & type, unsigned nparams,
unsigned nindices, names const & all, names const & cnstrs, unsigned nnested, bool is_rec, bool is_unsafe, bool is_reflexive);
inductive_val(inductive_val const & other):object_ref(other) {}
inductive_val(inductive_val && other):object_ref(other) {}
inductive_val(inductive_val && other):object_ref(std::move(other)) {}
inductive_val & operator=(inductive_val const & other) { object_ref::operator=(other); return *this; }
inductive_val & operator=(inductive_val && other) { object_ref::operator=(other); return *this; }
inductive_val & operator=(inductive_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
unsigned get_nparams() const { return static_cast<nat const &>(cnstr_get_ref(*this, 1)).get_small_value(); }
unsigned get_nindices() const { return static_cast<nat const &>(cnstr_get_ref(*this, 2)).get_small_value(); }
@@ -317,9 +317,9 @@ class constructor_val : public object_ref {
public:
constructor_val(name const & n, names const & lparams, expr const & type, name const & induct, unsigned cidx, unsigned nparams, unsigned nfields, bool is_unsafe);
constructor_val(constructor_val const & other):object_ref(other) {}
constructor_val(constructor_val && other):object_ref(other) {}
constructor_val(constructor_val && other):object_ref(std::move(other)) {}
constructor_val & operator=(constructor_val const & other) { object_ref::operator=(other); return *this; }
constructor_val & operator=(constructor_val && other) { object_ref::operator=(other); return *this; }
constructor_val & operator=(constructor_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_induct() const { return static_cast<name const &>(cnstr_get_ref(*this, 1)); }
unsigned get_cidx() const { return static_cast<nat const &>(cnstr_get_ref(*this, 2)).get_small_value(); }
@@ -338,9 +338,9 @@ class recursor_rule : public object_ref {
public:
recursor_rule(name const & cnstr, unsigned nfields, expr const & rhs);
recursor_rule(recursor_rule const & other):object_ref(other) {}
recursor_rule(recursor_rule && other):object_ref(other) {}
recursor_rule(recursor_rule && other):object_ref(std::move(other)) {}
recursor_rule & operator=(recursor_rule const & other) { object_ref::operator=(other); return *this; }
recursor_rule & operator=(recursor_rule && other) { object_ref::operator=(other); return *this; }
recursor_rule & operator=(recursor_rule && other) { object_ref::operator=(std::move(other)); return *this; }
name const & get_cnstr() const { return static_cast<name const &>(cnstr_get_ref(*this, 0)); }
unsigned get_nfields() const { return static_cast<nat const &>(cnstr_get_ref(*this, 1)).get_small_value(); }
expr const & get_rhs() const { return static_cast<expr const &>(cnstr_get_ref(*this, 2)); }
@@ -365,9 +365,9 @@ public:
names const & all, unsigned nparams, unsigned nindices, unsigned nmotives,
unsigned nminors, recursor_rules const & rules, bool k, bool is_unsafe);
recursor_val(recursor_val const & other):object_ref(other) {}
recursor_val(recursor_val && other):object_ref(other) {}
recursor_val(recursor_val && other):object_ref(std::move(other)) {}
recursor_val & operator=(recursor_val const & other) { object_ref::operator=(other); return *this; }
recursor_val & operator=(recursor_val && other) { object_ref::operator=(other); return *this; }
recursor_val & operator=(recursor_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
name const & get_induct() const { return get_name().get_prefix(); }
@@ -398,9 +398,9 @@ class quot_val : public object_ref {
public:
quot_val(name const & n, names const & lparams, expr const & type, quot_kind k);
quot_val(quot_val const & other):object_ref(other) {}
quot_val(quot_val && other):object_ref(other) {}
quot_val(quot_val && other):object_ref(std::move(other)) {}
quot_val & operator=(quot_val const & other) { object_ref::operator=(other); return *this; }
quot_val & operator=(quot_val && other) { object_ref::operator=(other); return *this; }
quot_val & operator=(quot_val && other) { object_ref::operator=(std::move(other)); return *this; }
constant_val const & to_constant_val() const { return static_cast<constant_val const &>(cnstr_get_ref(*this, 0)); }
name const & get_name() const { return to_constant_val().get_name(); }
names const & get_lparams() const { return to_constant_val().get_lparams(); }
@@ -434,14 +434,14 @@ public:
constant_info(constructor_val const & v);
constant_info(recursor_val const & v);
constant_info(constant_info const & other):object_ref(other) {}
constant_info(constant_info && other):object_ref(other) {}
constant_info(constant_info && other):object_ref(std::move(other)) {}
explicit constant_info(b_obj_arg o, bool b):object_ref(o, b) {}
explicit constant_info(obj_arg o):object_ref(o) {}
constant_info_kind kind() const { return static_cast<constant_info_kind>(cnstr_tag(raw())); }
constant_info & operator=(constant_info const & other) { object_ref::operator=(other); return *this; }
constant_info & operator=(constant_info && other) { object_ref::operator=(other); return *this; }
constant_info & operator=(constant_info && other) { object_ref::operator=(std::move(other)); return *this; }
friend bool is_eqp(constant_info const & d1, constant_info const & d2) { return d1.raw() == d2.raw(); }


@@ -9,6 +9,7 @@ Author: Leonardo de Moura
#include <limits>
#include "runtime/sstream.h"
#include "runtime/thread.h"
#include "runtime/sharecommon.h"
#include "util/map_foreach.h"
#include "util/io.h"
#include "kernel/environment.h"
@@ -220,12 +221,15 @@ environment environment::add_theorem(declaration const & d, bool check) const {
theorem_val const & v = d.to_theorem_val();
if (check) {
type_checker checker(*this, diag.get());
if (!checker.is_prop(v.get_type()))
throw theorem_type_is_not_prop(*this, v.get_name(), v.get_type());
sharecommon_persistent_fn share;
expr val(share(v.get_value().raw()));
expr type(share(v.get_type().raw()));
if (!checker.is_prop(type))
throw theorem_type_is_not_prop(*this, v.get_name(), type);
check_constant_val(*this, v.to_constant_val(), checker);
check_no_metavar_no_fvar(*this, v.get_name(), v.get_value());
expr val_type = checker.check(v.get_value(), v.get_lparams());
if (!checker.is_def_eq(val_type, v.get_type()))
check_no_metavar_no_fvar(*this, v.get_name(), val);
expr val_type = checker.check(val, v.get_lparams());
if (!checker.is_def_eq(val_type, type))
throw definition_type_mismatch_exception(*this, d, val_type);
}
return diag.update(add(constant_info(d)));


@@ -50,9 +50,9 @@ public:
explicit literal(nat const & v);
literal():literal(0u) {}
literal(literal const & other):object_ref(other) {}
literal(literal && other):object_ref(other) {}
literal(literal && other):object_ref(std::move(other)) {}
literal & operator=(literal const & other) { object_ref::operator=(other); return *this; }
literal & operator=(literal && other) { object_ref::operator=(other); return *this; }
literal & operator=(literal && other) { object_ref::operator=(std::move(other)); return *this; }
static literal_kind kind(object * o) { return static_cast<literal_kind>(cnstr_tag(o)); }
literal_kind kind() const { return kind(raw()); }
@@ -100,14 +100,14 @@ class expr : public object_ref {
public:
expr();
expr(expr const & other):object_ref(other) {}
expr(expr && other):object_ref(other) {}
expr(expr && other):object_ref(std::move(other)) {}
explicit expr(b_obj_arg o, bool b):object_ref(o, b) {}
explicit expr(obj_arg o):object_ref(o) {}
static expr_kind kind(object * o) { return static_cast<expr_kind>(cnstr_tag(o)); }
expr_kind kind() const { return kind(raw()); }
expr & operator=(expr const & other) { object_ref::operator=(other); return *this; }
expr & operator=(expr && other) { object_ref::operator=(other); return *this; }
expr & operator=(expr && other) { object_ref::operator=(std::move(other)); return *this; }
friend bool is_eqp(expr const & e1, expr const & e2) { return e1.raw() == e2.raw(); }
};


@@ -11,65 +11,49 @@ Author: Leonardo de Moura
#include "kernel/expr.h"
#include "kernel/expr_sets.h"
#ifndef LEAN_EQ_CACHE_CAPACITY
#define LEAN_EQ_CACHE_CAPACITY 1024*8
#endif
namespace lean {
struct eq_cache {
struct entry {
object * m_a;
object * m_b;
entry():m_a(nullptr), m_b(nullptr) {}
};
unsigned m_capacity;
std::vector<entry> m_cache;
std::vector<unsigned> m_used;
eq_cache():m_capacity(LEAN_EQ_CACHE_CAPACITY), m_cache(LEAN_EQ_CACHE_CAPACITY) {}
/**
\brief Functional object for comparing expressions.
bool check(expr const & a, expr const & b) {
if (!is_shared(a) || !is_shared(b))
return false;
unsigned i = hash(hash(a), hash(b)) % m_capacity;
if (m_cache[i].m_a == a.raw() && m_cache[i].m_b == b.raw()) {
return true;
} else {
if (m_cache[i].m_a == nullptr)
m_used.push_back(i);
m_cache[i].m_a = a.raw();
m_cache[i].m_b = b.raw();
return false;
}
}
void clear() {
for (unsigned i : m_used)
m_cache[i].m_a = nullptr;
m_used.clear();
}
};
/* CACHE_RESET: No */
MK_THREAD_LOCAL_GET_DEF(eq_cache, get_eq_cache);
/** \brief Functional object for comparing expressions.
Remark if CompareBinderInfo is true, then functional object will also compare
binder information attached to lambda and Pi expressions */
Remark if CompareBinderInfo is true, then functional object will also compare
binder information attached to lambda and Pi expressions
*/
template<bool CompareBinderInfo>
class expr_eq_fn {
eq_cache & m_cache;
struct key_hasher {
std::size_t operator()(std::pair<lean_object *, lean_object *> const & p) const {
return hash((size_t)p.first >> 3, (size_t)p.second >> 3);
}
};
typedef std::unordered_set<std::pair<lean_object *, lean_object *>, key_hasher> cache;
cache * m_cache = nullptr;
bool check_cache(expr const & a, expr const & b) {
if (!is_shared(a) || !is_shared(b))
return false;
if (!m_cache)
m_cache = new cache();
std::pair<lean_object *, lean_object *> key(a.raw(), b.raw());
if (m_cache->find(key) != m_cache->end())
return true;
m_cache->insert(key);
return false;
}
static void check_system() {
::lean::check_system("expression equality test");
}
bool apply(expr const & a, expr const & b) {
bool apply(expr const & a, expr const & b, bool root = false) {
if (is_eqp(a, b)) return true;
if (hash(a) != hash(b)) return false;
if (a.kind() != b.kind()) return false;
if (is_bvar(a)) return bvar_idx(a) == bvar_idx(b);
if (m_cache.check(a, b))
switch (a.kind()) {
case expr_kind::BVar: return bvar_idx(a) == bvar_idx(b);
case expr_kind::Lit: return lit_value(a) == lit_value(b);
case expr_kind::MVar: return mvar_name(a) == mvar_name(b);
case expr_kind::FVar: return fvar_name(a) == fvar_name(b);
case expr_kind::Sort: return sort_level(a) == sort_level(b);
default: break;
}
if (!root && check_cache(a, b))
return true;
/*
We increase the number of heartbeats here because some code (e.g., `simp`) may spend a lot of time comparing
@@ -78,6 +62,10 @@ class expr_eq_fn {
lean_inc_heartbeat();
switch (a.kind()) {
case expr_kind::BVar:
case expr_kind::Lit:
case expr_kind::MVar:
case expr_kind::FVar:
case expr_kind::Sort:
lean_unreachable(); // LCOV_EXCL_LINE
case expr_kind::MData:
return
@@ -88,16 +76,10 @@ class expr_eq_fn {
apply(proj_expr(a), proj_expr(b)) &&
proj_sname(a) == proj_sname(b) &&
proj_idx(a) == proj_idx(b);
case expr_kind::Lit:
return lit_value(a) == lit_value(b);
case expr_kind::Const:
return
const_name(a) == const_name(b) &&
compare(const_levels(a), const_levels(b), [](level const & l1, level const & l2) { return l1 == l2; });
case expr_kind::MVar:
return mvar_name(a) == mvar_name(b);
case expr_kind::FVar:
return fvar_name(a) == fvar_name(b);
case expr_kind::App:
check_system();
return
@@ -117,15 +99,13 @@ class expr_eq_fn {
apply(let_value(a), let_value(b)) &&
apply(let_body(a), let_body(b)) &&
(!CompareBinderInfo || let_name(a) == let_name(b));
case expr_kind::Sort:
return sort_level(a) == sort_level(b);
}
lean_unreachable(); // LCOV_EXCL_LINE
}
public:
expr_eq_fn():m_cache(get_eq_cache()) {}
~expr_eq_fn() { m_cache.clear(); }
bool operator()(expr const & a, expr const & b) { return apply(a, b); }
expr_eq_fn() {}
~expr_eq_fn() { if (m_cache) delete m_cache; }
bool operator()(expr const & a, expr const & b) { return apply(a, b, true); }
};
bool is_equal(expr const & a, expr const & b) {


@@ -165,22 +165,38 @@ bool is_head_beta(expr const & t) {
return is_app(t) && is_lambda(get_app_fn(t));
}
expr apply_beta(expr f, unsigned num_args, expr const * args) {
if (num_args == 0) {
return f;
} else if (!is_lambda(f)) {
return mk_rev_app(f, num_args, args);
} else {
unsigned m = 1;
while (is_lambda(binding_body(f)) && m < num_args) {
f = binding_body(f);
m++;
static expr apply_beta_rec(expr e, unsigned i, unsigned num_rev_args, expr const * rev_args, bool preserve_data, bool zeta) {
if (is_lambda(e)) {
if (i + 1 < num_rev_args) {
return apply_beta_rec(binding_body(e), i+1, num_rev_args, rev_args, preserve_data, zeta);
} else {
return instantiate(binding_body(e), num_rev_args, rev_args);
}
lean_assert(m <= num_args);
return mk_rev_app(instantiate(binding_body(f), m, args + (num_args - m)), num_args - m, args);
} else if (is_let(e)) {
if (zeta && i < num_rev_args) {
return apply_beta_rec(instantiate(let_body(e), let_value(e)), i, num_rev_args, rev_args, preserve_data, zeta);
} else {
unsigned n = num_rev_args - i;
return mk_rev_app(instantiate(e, i, rev_args + n), n, rev_args);
}
} else if (is_mdata(e)) {
if (preserve_data) {
unsigned n = num_rev_args - i;
return mk_rev_app(instantiate(e, i, rev_args + n), n, rev_args);
} else {
return apply_beta_rec(mdata_expr(e), i, num_rev_args, rev_args, preserve_data, zeta);
}
} else {
unsigned n = num_rev_args - i;
return mk_rev_app(instantiate(e, i, rev_args + n), n, rev_args);
}
}
expr apply_beta(expr f, unsigned num_rev_args, expr const * rev_args, bool preserve_data, bool zeta) {
if (num_rev_args == 0) return f;
return apply_beta_rec(f, 0, num_rev_args, rev_args, preserve_data, zeta);
}
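The reversed-argument convention of `apply_beta` matches `Expr.betaRev` on the Lean side; a small sketch, where the constant `g` is a placeholder:

```lean
open Lean in
/-- Sketch: `betaRev` receives the argument array reversed, so the
    last element is the first argument. -/
def betaRevDemo : Expr :=
  -- fun x y => g x y, with `g` a placeholder constant
  let f := mkLambda `x .default (mkConst ``Nat) <|
           mkLambda `y .default (mkConst ``Nat) <|
           mkApp2 (mkConst `g) (.bvar 1) (.bvar 0)
  -- reversed args: `x := 1`, `y := 2`
  f.betaRev #[mkNatLit 2, mkNatLit 1]  -- yields `g 1 2`
```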
expr head_beta_reduce(expr const & t) {
if (!is_head_beta(t)) {
return t;


@@ -24,7 +24,7 @@ inline expr instantiate_rev(expr const & e, buffer<expr> const & s) {
return instantiate_rev(e, s.size(), s.data());
}
expr apply_beta(expr f, unsigned num_rev_args, expr const * rev_args);
expr apply_beta(expr f, unsigned num_rev_args, expr const * rev_args, bool preserve_data = true, bool zeta = false);
bool is_head_beta(expr const & t);
expr head_beta_reduce(expr const & t);
/* If `e` is of the form `(fun x, t) a` return `head_beta_const_fn(t)` if `t` does not depend on `x`,


@@ -0,0 +1,369 @@
/*
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
*/
#include <vector>
#include <unordered_map>
#include "util/name_set.h"
#include "runtime/option_ref.h"
#include "runtime/array_ref.h"
#include "kernel/instantiate.h"
#include "kernel/replace_fn.h"
/*
This module is not used by the kernel. It just provides an efficient implementation of
`instantiateExprMVars`
*/
namespace lean {
extern "C" object * lean_get_lmvar_assignment(obj_arg mctx, obj_arg mid);
extern "C" object * lean_assign_lmvar(obj_arg mctx, obj_arg mid, obj_arg val);
typedef object_ref metavar_ctx;
void assign_lmvar(metavar_ctx & mctx, name const & mid, level const & l) {
object * r = lean_assign_lmvar(mctx.steal(), mid.to_obj_arg(), l.to_obj_arg());
mctx.set_box(r);
}
option_ref<level> get_lmvar_assignment(metavar_ctx & mctx, name const & mid) {
return option_ref<level>(lean_get_lmvar_assignment(mctx.to_obj_arg(), mid.to_obj_arg()));
}
class instantiate_lmvars_fn {
metavar_ctx & m_mctx;
std::unordered_map<lean_object *, level> m_cache;
std::vector<level> m_saved; // Helper vector to prevent values from being garbage collected
inline level cache(level const & l, level r, bool shared) {
if (shared) {
m_cache.insert(mk_pair(l.raw(), r));
}
return r;
}
public:
instantiate_lmvars_fn(metavar_ctx & mctx):m_mctx(mctx) {}
level visit(level const & l) {
if (!has_mvar(l))
return l;
bool shared = false;
if (is_shared(l)) {
auto it = m_cache.find(l.raw());
if (it != m_cache.end()) {
return it->second;
}
shared = true;
}
switch (l.kind()) {
case level_kind::Succ:
return cache(l, update_succ(l, visit(succ_of(l))), shared);
case level_kind::Max: case level_kind::IMax:
return cache(l, update_max(l, visit(level_lhs(l)), visit(level_rhs(l))), shared);
case level_kind::Zero: case level_kind::Param:
lean_unreachable();
case level_kind::MVar: {
option_ref<level> r = get_lmvar_assignment(m_mctx, mvar_id(l));
if (!r) {
return l;
} else {
level a(r.get_val());
if (!has_mvar(a)) {
return a;
} else {
level a_new = visit(a);
if (!is_eqp(a, a_new)) {
/*
We save `a` to ensure it will not be garbage collected
after we update `mctx`. This is necessary because `m_cache`
may contain references to its subterms.
*/
m_saved.push_back(a);
assign_lmvar(m_mctx, mvar_id(l), a_new);
}
return a_new;
}
}
}}
}
level operator()(level const & l) { return visit(l); }
};
extern "C" LEAN_EXPORT object * lean_instantiate_level_mvars(object * m, object * l) {
metavar_ctx mctx(m);
level l_new = instantiate_lmvars_fn(mctx)(level(l));
object * r = alloc_cnstr(0, 2, 0);
cnstr_set(r, 0, mctx.steal());
cnstr_set(r, 1, l_new.steal());
return r;
}
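The exported `lean_instantiate_level_mvars` backs level-metavariable instantiation on the Lean side; a sketch of the observable behavior from `MetaM` (assuming a fresh context):

```lean
open Lean Meta in
/-- Sketch: assign `?u := 1`, then instantiate `max ?u 1`;
    the occurrence of `?u` is replaced by its assignment. -/
def levelSketch : MetaM Level := do
  let u ← mkFreshLevelMVar
  assignLevelMVar u.mvarId! (mkLevelSucc levelZero)  -- ?u := 1
  instantiateLevelMVars (mkLevelMax u levelOne)
```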
extern "C" object * lean_get_mvar_assignment(obj_arg mctx, obj_arg mid);
extern "C" object * lean_get_delayed_mvar_assignment(obj_arg mctx, obj_arg mid);
extern "C" object * lean_assign_mvar(obj_arg mctx, obj_arg mid, obj_arg val);
typedef object_ref delayed_assignment;
void assign_mvar(metavar_ctx & mctx, name const & mid, expr const & e) {
object * r = lean_assign_mvar(mctx.steal(), mid.to_obj_arg(), e.to_obj_arg());
mctx.set_box(r);
}
option_ref<expr> get_mvar_assignment(metavar_ctx & mctx, name const & mid) {
return option_ref<expr>(lean_get_mvar_assignment(mctx.to_obj_arg(), mid.to_obj_arg()));
}
option_ref<delayed_assignment> get_delayed_mvar_assignment(metavar_ctx & mctx, name const & mid) {
return option_ref<delayed_assignment>(lean_get_delayed_mvar_assignment(mctx.to_obj_arg(), mid.to_obj_arg()));
}
expr replace_fvars(expr const & e, array_ref<expr> const & fvars, expr const * rev_args) {
size_t sz = fvars.size();
if (sz == 0)
return e;
return replace(e, [=](expr const & m, unsigned offset) -> optional<expr> {
if (!has_fvar(m))
return some_expr(m); // expression m does not contain free variables
if (is_fvar(m)) {
size_t i = sz;
name const & fid = fvar_name(m);
while (i > 0) {
--i;
if (fvar_name(fvars[i]) == fid) {
return some_expr(lift_loose_bvars(rev_args[sz - i - 1], offset));
}
}
}
return none_expr();
});
}
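On the Lean side, the substitution performed by `replace_fvars` is the usual abstract-then-instantiate combination; a sketch, assuming `xs` and `vs` have equal length:

```lean
open Lean in
/-- Sketch: replace the free variables `xs` in `e` by the values `vs`
    by abstracting them into bound variables and substituting. -/
def replaceFVarsSketch (e : Expr) (xs vs : Array Expr) : Expr :=
  (e.abstract xs).instantiateRev vs
```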
class instantiate_mvars_fn {
metavar_ctx & m_mctx;
instantiate_lmvars_fn m_level_fn;
name_set m_already_normalized; // Store metavariables whose assignment has already been normalized.
std::unordered_map<lean_object *, expr> m_cache;
std::vector<expr> m_saved; // Helper vector to prevent values from being garbage collected
level visit_level(level const & l) {
return m_level_fn(l);
}
levels visit_levels(levels const & ls) {
buffer<level> lsNew;
for (auto const & l : ls)
lsNew.push_back(visit_level(l));
return levels(lsNew);
}
inline expr cache(expr const & e, expr r, bool shared) {
if (shared) {
m_cache.insert(mk_pair(e.raw(), r));
}
return r;
}
optional<expr> get_assignment(name const & mid) {
option_ref<expr> r = get_mvar_assignment(m_mctx, mid);
if (!r) {
return optional<expr>();
} else {
expr a(r.get_val());
if (!has_mvar(a) || m_already_normalized.contains(mid)) {
return optional<expr>(a);
} else {
m_already_normalized.insert(mid);
expr a_new = visit(a);
if (!is_eqp(a, a_new)) {
/*
We save `a` to ensure it will not be garbage collected
after we update `mctx`. This is necessary because `m_cache`
may contain references to its subterms.
*/
m_saved.push_back(a);
assign_mvar(m_mctx, mid, a_new);
}
return optional<expr>(a_new);
}
}
}
/*
Given `e` of the form `f a_1 ... a_n` where `f` is not a metavariable,
instantiate metavariables.
*/
expr visit_app_default(expr const & e) {
buffer<expr> args;
expr const * curr = &e;
while (is_app(*curr)) {
args.push_back(visit(app_arg(*curr)));
curr = &app_fn(*curr);
}
lean_assert(!is_mvar(*curr));
expr f = visit(*curr);
return mk_rev_app(f, args.size(), args.data());
}
/*
Given `e` of the form `?m a_1 ... a_n`, return new application where
the metavariables in the arguments `a_i` have been instantiated.
*/
expr visit_mvar_app_args(expr const & e) {
buffer<expr> args;
expr const * curr = &e;
while (is_app(*curr)) {
args.push_back(visit(app_arg(*curr)));
curr = &app_fn(*curr);
}
lean_assert(is_mvar(*curr));
return mk_rev_app(*curr, args.size(), args.data());
}
/*
Given `e` of the form `f a_1 ... a_n`, return the beta reduction of
`f_new a_1' ... a_n'`, where `a_i'` is `visit(a_i)`. `args` is an
accumulator for the new arguments.
*/
expr visit_args_and_beta(expr const & f_new, expr const & e, buffer<expr> & args) {
expr const * curr = &e;
while (is_app(*curr)) {
args.push_back(visit(app_arg(*curr)));
curr = &app_fn(*curr);
}
/*
Some of the arguments in `args` become irrelevant after we beta
reduce. Even so, it may be a bug to leave them uninstantiated, since
they may depend on free variables that are not in the context (see
issue #4375). So we pass `useZeta := true` to ensure that they are
instantiated.
*/
bool preserve_data = false;
bool zeta = true;
return apply_beta(f_new, args.size(), args.data(), preserve_data, zeta);
}
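The beta step performed here consumes one binder of the (instantiated) metavariable value per collected argument. A minimal de Bruijn sketch of that idea is below; it is a hypothetical miniature, and it assumes the substituted arguments are closed terms so no index lifting is needed (the real `apply_beta` handles open terms and the `zeta`/`preserve_data` flags):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct E;
using EP = std::shared_ptr<E>;
struct E {
    enum Kind { BVar, Lam, App, Const } kind;
    int idx = 0;      // BVar: de Bruijn index
    std::string name; // Const
    EP a, b;          // Lam: a = body; App: a = fn, b = arg
};
EP bvar(int i)       { auto e = std::make_shared<E>(); e->kind = E::BVar; e->idx = i; return e; }
EP lam(EP body)      { auto e = std::make_shared<E>(); e->kind = E::Lam; e->a = body; return e; }
EP app(EP f, EP x)   { auto e = std::make_shared<E>(); e->kind = E::App; e->a = f; e->b = x; return e; }
EP cnst(std::string n) { auto e = std::make_shared<E>(); e->kind = E::Const; e->name = std::move(n); return e; }

// Substitute closed `v` for bound variable `i`.
EP subst(EP const & e, int i, EP const & v) {
    switch (e->kind) {
    case E::BVar: return e->idx == i ? v : e;
    case E::Lam:  return lam(subst(e->a, i + 1, v));
    case E::App:  return app(subst(e->a, i, v), subst(e->b, i, v));
    default:      return e;
    }
}

// rev_args[n-1] is the first argument (arguments were pushed in reverse).
EP apply_beta(EP f, int n, EP const * rev_args) {
    int i = n;
    while (i > 0 && f->kind == E::Lam) {
        --i;
        f = subst(f->a, 0, rev_args[i]);
    }
    while (i > 0) f = app(f, rev_args[--i]); // leftover args: plain application
    return f;
}
```

For example, applying `fun x y => x y` to `g` and `h` yields the application `g h`.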
/*
Helper function for delayed assignment case at `visit_app`.
`e` is a term of the form `?m t1 t2 t3`
Moreover, `?m` is delayed assigned
`?m #[x, y] := g x y`
where, `fvars := #[x, y]` and `val := g x y`.
`args` is an accumulator for `e`'s arguments.
We want to return `g t1' t2' t3'` where
`ti'`s are `visit(ti)`.
*/
expr visit_delayed(array_ref<expr> const & fvars, expr const & val, expr const & e, buffer<expr> & args) {
expr const * curr = &e;
while (is_app(*curr)) {
args.push_back(visit(app_arg(*curr)));
curr = &app_fn(*curr);
}
expr val_new = replace_fvars(val, fvars, args.data() + (args.size() - fvars.size()));
return mk_rev_app(val_new, args.size() - fvars.size(), args.data());
}
expr visit_app(expr const & e) {
expr const & f = get_app_fn(e);
if (!is_mvar(f)) {
return visit_app_default(e);
} else {
name const & mid = mvar_name(f);
option_ref<delayed_assignment> d = get_delayed_mvar_assignment(m_mctx, mid);
if (!d) {
// mvar is not delayed assigned
expr f_new = visit(f);
if (is_eqp(f, f_new)) {
return visit_mvar_app_args(e);
} else {
buffer<expr> args;
return visit_args_and_beta(f_new, e, args);
}
} else {
/*
Apply "delayed substitution" (i.e., delayed assignment + application).
That is, `f` is some metavariable `?m`, that is delayed assigned to `val`.
If after instantiating `val`, we obtain `newVal`, and `newVal` does not contain
metavariables, we replace the free variables `fvars` in `newVal` with the first
`fvars.size` elements of `args`.
*/
array_ref<expr> fvars(cnstr_get(d.get_val().raw(), 0), true);
name mid_pending(cnstr_get(d.get_val().raw(), 1), true);
if (fvars.size() > get_app_num_args(e)) {
/*
We don't have enough arguments to instantiate the free variables `fvars`.
This can only happen if a tactic or elaboration function is implemented incorrectly.
We decided not to use `panic!` here; the frontend reports the problem as an error
when checking for unassigned metavariables in an elaborated term. */
return visit_mvar_app_args(e);
}
optional<expr> val = get_assignment(mid_pending);
if (!val)
// mid_pending has not been assigned yet.
return visit_mvar_app_args(e);
if (has_expr_mvar(*val))
// mid_pending has been assigned, but assignment contains mvars.
return visit_mvar_app_args(e);
buffer<expr> args;
return visit_delayed(fvars, *val, e, args);
}
}
}
expr visit_mvar(expr const & e) {
name const & mid = mvar_name(e);
if (auto r = get_assignment(mid)) {
return *r;
} else {
return e;
}
}
public:
instantiate_mvars_fn(metavar_ctx & mctx):m_mctx(mctx), m_level_fn(mctx) {}
expr visit(expr const & e) {
if (!has_mvar(e))
return e;
bool shared = false;
if (is_shared(e)) {
auto it = m_cache.find(e.raw());
if (it != m_cache.end()) {
return it->second;
}
shared = true;
}
switch (e.kind()) {
case expr_kind::BVar:
case expr_kind::Lit: case expr_kind::FVar:
lean_unreachable();
case expr_kind::Sort:
return cache(e, update_sort(e, visit_level(sort_level(e))), shared);
case expr_kind::Const:
return cache(e, update_const(e, visit_levels(const_levels(e))), shared);
case expr_kind::MVar:
return visit_mvar(e);
case expr_kind::MData:
return cache(e, update_mdata(e, visit(mdata_expr(e))), shared);
case expr_kind::Proj:
return cache(e, update_proj(e, visit(proj_expr(e))), shared);
case expr_kind::App:
return cache(e, visit_app(e), shared);
case expr_kind::Pi: case expr_kind::Lambda:
return cache(e, update_binding(e, visit(binding_domain(e)), visit(binding_body(e))), shared);
case expr_kind::Let:
return cache(e, update_let(e, visit(let_type(e)), visit(let_value(e)), visit(let_body(e))), shared);
}
}
expr operator()(expr const & e) { return visit(e); }
};
extern "C" LEAN_EXPORT object * lean_instantiate_expr_mvars(object * m, object * e) {
metavar_ctx mctx(m);
expr e_new = instantiate_mvars_fn(mctx)(expr(e));
object * r = alloc_cnstr(0, 2, 0);
cnstr_set(r, 0, mctx.steal());
cnstr_set(r, 1, e_new.steal());
return r;
}
}
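Note how `visit` consults and fills `m_cache` only for nodes that are `is_shared` (reference count above one): exclusive nodes are reachable through a single parent, so caching them would only grow the table without ever producing a hit. A self-contained sketch of that policy, using `shared_ptr::use_count()` as a stand-in for the runtime's reference count:

```cpp
#include <cassert>
#include <memory>
#include <unordered_map>

// DAG node; children may alias, so a naive traversal re-visits shared subtrees.
struct Node {
    int value;
    std::shared_ptr<Node> left, right;
};

// Memoize results keyed by node address, but only for shared nodes,
// mirroring the `is_shared(e)` check in `instantiate_mvars_fn::visit`.
int sum(std::shared_ptr<Node> const & n,
        std::unordered_map<Node *, int> & cache, int & visits) {
    if (!n) return 0;
    bool shared = n.use_count() > 1; // stand-in for an RC > 1 check
    if (shared) {
        auto it = cache.find(n.get());
        if (it != cache.end()) return it->second;
    }
    ++visits;
    int r = n->value + sum(n->left, cache, visits) + sum(n->right, cache, visits);
    if (shared) cache.emplace(n.get(), r);
    return r;
}
```

On a diamond-shaped DAG where one leaf is referenced by two parents, each distinct node is processed exactly once; without the cache the traversal would expand the DAG into a tree.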

View File

@@ -43,14 +43,14 @@ public:
explicit level(obj_arg o):object_ref(o) {}
explicit level(b_obj_arg o, bool b):object_ref(o, b) {}
level(level const & other):object_ref(other) {}
level(level && other):object_ref(other) {}
level(level && other):object_ref(std::move(other)) {}
level_kind kind() const {
return lean_is_scalar(raw()) ? level_kind::Zero : static_cast<level_kind>(lean_ptr_tag(raw()));
}
unsigned hash() const;
level & operator=(level const & other) { object_ref::operator=(other); return *this; }
level & operator=(level && other) { object_ref::operator=(other); return *this; }
level & operator=(level && other) { object_ref::operator=(std::move(other)); return *this; }
friend bool is_eqp(level const & l1, level const & l2) { return l1.raw() == l2.raw(); }
@@ -82,6 +82,8 @@ inline bool operator!=(level const & l1, level const & l2) { return !operator==(
struct level_hash { unsigned operator()(level const & n) const { return n.hash(); } };
struct level_eq { bool operator()(level const & n1, level const & n2) const { return n1 == n2; } };
inline bool is_shared(level const & l) { return !is_exclusive(l.raw()); }
inline optional<level> none_level() { return optional<level>(); }
inline optional<level> some_level(level const & e) { return optional<level>(e); }
inline optional<level> some_level(level && e) { return optional<level>(std::forward<level>(e)); }
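The `std::move` changes repeated throughout these headers fix a subtle pitfall: inside a move constructor, the parameter `other` is a *named* rvalue reference and therefore an lvalue, so `object_ref(other)` selects the copy constructor (an extra reference-count increment) instead of the move. The hypothetical stand-in below makes the difference observable with copy/move counters:

```cpp
#include <cassert>
#include <utility>

// `Base` stands in for `object_ref`: its copy bumps a reference count,
// its move merely steals the pointer.
struct Base {
    static int copies, moves;
    Base() = default;
    Base(Base const &) { ++copies; }
    Base(Base &&) noexcept { ++moves; }
};
int Base::copies = 0;
int Base::moves = 0;

struct BuggyRef : Base {
    BuggyRef() = default;
    BuggyRef(BuggyRef && other) : Base(other) {}            // `other` is an lvalue: copy ctor
};
struct FixedRef : Base {
    FixedRef() = default;
    FixedRef(FixedRef && other) : Base(std::move(other)) {} // cast back to rvalue: move ctor
};
```

Moving a `BuggyRef` performs a copy of the base; moving a `FixedRef` performs the intended move, which is exactly the behavior change this commit applies to `level`, `name`, `array_ref`, and the other `object_ref` wrappers.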

View File

@@ -27,11 +27,11 @@ class local_decl : public object_ref {
public:
local_decl();
local_decl(local_decl const & other):object_ref(other) {}
local_decl(local_decl && other):object_ref(other) {}
local_decl(local_decl && other):object_ref(std::move(other)) {}
local_decl(obj_arg o):object_ref(o) {}
local_decl(b_obj_arg o, bool):object_ref(o, true) {}
local_decl & operator=(local_decl const & other) { object_ref::operator=(other); return *this; }
local_decl & operator=(local_decl && other) { object_ref::operator=(other); return *this; }
local_decl & operator=(local_decl && other) { object_ref::operator=(std::move(other)); return *this; }
friend bool is_eqp(local_decl const & d1, local_decl const & d2) { return d1.raw() == d2.raw(); }
unsigned get_idx() const { return static_cast<nat const &>(cnstr_get_ref(raw(), 0)).get_small_value(); }
name const & get_name() const { return static_cast<name const &>(cnstr_get_ref(raw(), 1)); }
@@ -54,9 +54,9 @@ public:
explicit local_ctx(obj_arg o):object_ref(o) {}
local_ctx(b_obj_arg o, bool):object_ref(o, true) {}
local_ctx(local_ctx const & other):object_ref(other) {}
local_ctx(local_ctx && other):object_ref(other) {}
local_ctx(local_ctx && other):object_ref(std::move(other)) {}
local_ctx & operator=(local_ctx const & other) { object_ref::operator=(other); return *this; }
local_ctx & operator=(local_ctx && other) { object_ref::operator=(other); return *this; }
local_ctx & operator=(local_ctx && other) { object_ref::operator=(std::move(other)); return *this; }
bool empty() const;

View File

@@ -6,6 +6,7 @@ Author: Leonardo de Moura
*/
#include <vector>
#include <memory>
#include <utility>
#include <unordered_map>
#include "kernel/replace_fn.h"
@@ -14,14 +15,14 @@ namespace lean {
class replace_rec_fn {
struct key_hasher {
std::size_t operator()(std::pair<lean_object *, unsigned> const & p) const {
return hash((size_t)p.first, p.second);
return hash((size_t)p.first >> 3, p.second);
}
};
std::unordered_map<std::pair<lean_object *, unsigned>, expr, key_hasher> m_cache;
std::function<optional<expr>(expr const &, unsigned)> m_f;
bool m_use_cache;
expr save_result(expr const & e, unsigned offset, expr const & r, bool shared) {
expr save_result(expr const & e, unsigned offset, expr r, bool shared) {
if (shared)
m_cache.insert(mk_pair(mk_pair(e.raw(), offset), r));
return r;
@@ -36,7 +37,7 @@ class replace_rec_fn {
shared = true;
}
if (optional<expr> r = m_f(e, offset)) {
return save_result(e, offset, *r, shared);
return save_result(e, offset, std::move(*r), shared);
} else {
switch (e.kind()) {
case expr_kind::Const: case expr_kind::Sort:
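The `>> 3` added to `key_hasher` exploits pointer alignment: heap objects are at least 8-byte aligned, so the low three bits of every pointer are zero, and a combiner that feeds the raw value into power-of-two bucket indices wastes 7/8 of the buckets. A sketch of the effect (the Lean runtime's actual `hash` combiner is not reproduced here; plain modulo stands in for bucket selection):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Count how many distinct buckets a set of pointer values occupies after
// an optional right shift, for a power-of-two bucket count.
int distinct_buckets(std::uintptr_t const * ptrs, int n, int nbuckets, int shift) {
    std::set<int> used;
    for (int i = 0; i < n; i++)
        used.insert(static_cast<int>((ptrs[i] >> shift) % nbuckets));
    return static_cast<int>(used.size());
}
```

Eight consecutive 8-byte-aligned addresses all collapse into a single bucket without the shift, but spread across all eight buckets once the dead low bits are shifted away.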

View File

@@ -92,10 +92,10 @@ public:
object_ref(mk_cnstr(0, ns, ks)) {}
spec_info():spec_info(names(), spec_arg_kinds()) {}
spec_info(spec_info const & other):object_ref(other) {}
spec_info(spec_info && other):object_ref(other) {}
spec_info(spec_info && other):object_ref(std::move(other)) {}
spec_info(b_obj_arg o, bool b):object_ref(o, b) {}
spec_info & operator=(spec_info const & other) { object_ref::operator=(other); return *this; }
spec_info & operator=(spec_info && other) { object_ref::operator=(other); return *this; }
spec_info & operator=(spec_info && other) { object_ref::operator=(std::move(other)); return *this; }
names const & get_mutual_decls() const { return static_cast<names const &>(cnstr_get_ref(*this, 0)); }
spec_arg_kinds const & get_arg_kinds() const { return static_cast<spec_arg_kinds const &>(cnstr_get_ref(*this, 1)); }
};

View File

@@ -24,13 +24,13 @@ public:
projection_info(name const & c, unsigned nparams, unsigned i, bool inst_implicit);
projection_info():projection_info(name(), 0, 0, false) {}
projection_info(projection_info const & other):object_ref(other) {}
projection_info(projection_info && other):object_ref(other) {}
projection_info(projection_info && other):object_ref(std::move(other)) {}
/* low-level constructors */
explicit projection_info(object * o):object_ref(o) {}
explicit projection_info(b_obj_arg o, bool b):object_ref(o, b) {}
explicit projection_info(object_ref const & o):object_ref(o) {}
projection_info & operator=(projection_info const & other) { object_ref::operator=(other); return *this; }
projection_info & operator=(projection_info && other) { object_ref::operator=(other); return *this; }
projection_info & operator=(projection_info && other) { object_ref::operator=(std::move(other)); return *this; }
name const & get_constructor() const { return static_cast<name const &>(cnstr_get_ref(*this, 0)); }
unsigned get_nparams() const { return static_cast<nat const &>(cnstr_get_ref(*this, 1)).get_small_value(); }
unsigned get_i() const { return static_cast<nat const &>(cnstr_get_ref(*this, 2)).get_small_value(); }

View File

@@ -28,12 +28,12 @@ public:
array_ref(b_obj_arg o, bool b):object_ref(o, b) {}
array_ref():object_ref(array_mk_empty()) {}
array_ref(array_ref const & other):object_ref(other) {}
array_ref(array_ref && other):object_ref(other) {}
array_ref(array_ref && other):object_ref(std::move(other)) {}
array_ref(std::initializer_list<T> const & elems):object_ref(to_array(elems)) {}
array_ref(buffer<T> const & elems):object_ref(to_array(elems)) {}
array_ref & operator=(array_ref const & other) { object_ref::operator=(other); return *this; }
array_ref & operator=(array_ref && other) { object_ref::operator=(other); return *this; }
array_ref & operator=(array_ref && other) { object_ref::operator=(std::move(other)); return *this; }
size_t size() const { return array_size(raw()); }

View File

@@ -23,7 +23,7 @@ public:
explicit list_ref(list_ref<T> const * l) { if (l) *this = *l; }
list_ref(T const & h, list_ref<T> const & t):object_ref(mk_cnstr(1, h.raw(), t.raw())) { inc(h.raw()); inc(t.raw()); }
list_ref(list_ref const & other):object_ref(other) {}
list_ref(list_ref && other):object_ref(other) {}
list_ref(list_ref && other):object_ref(std::move(other)) {}
template<typename It> list_ref(It const & begin, It const & end):list_ref() {
auto it = end;
while (it != begin) {
@@ -35,7 +35,7 @@ public:
list_ref(buffer<T> const & b):list_ref(b.begin(), b.end()) {}
list_ref & operator=(list_ref const & other) { object_ref::operator=(other); return *this; }
list_ref & operator=(list_ref && other) { object_ref::operator=(other); return *this; }
list_ref & operator=(list_ref && other) { object_ref::operator=(std::move(other)); return *this; }
explicit operator bool() const { return !is_scalar(raw()); }
friend bool is_nil(list_ref const & l) { return is_scalar(l.raw()); }

View File

@@ -35,6 +35,10 @@ public:
s.m_obj = box(0);
return *this;
}
void set_box(object * o) {
lean_assert(is_scalar(m_obj));
m_obj = o;
}
object * raw() const { return m_obj; }
object * steal() { object * r = m_obj; m_obj = box(0); return r; }
object * to_obj_arg() const { inc(m_obj); return m_obj; }

View File

@@ -16,18 +16,20 @@ public:
option_ref(b_obj_arg o, bool b):object_ref(o, b) {}
option_ref():object_ref(box(0)) {}
option_ref(option_ref const & other):object_ref(other) {}
option_ref(option_ref && other):object_ref(other) {}
option_ref(option_ref && other):object_ref(std::move(other)) {}
explicit option_ref(T const & a):object_ref(mk_cnstr(1, a.raw())) { inc(a.raw()); }
explicit option_ref(T const * a) { if (a) *this = option_ref(*a); }
explicit option_ref(option_ref<T> const * o) { if (o) *this = *o; }
explicit option_ref(optional<T> const & o):option_ref(o ? &*o : nullptr) {}
option_ref & operator=(option_ref const & other) { object_ref::operator=(other); return *this; }
option_ref & operator=(option_ref && other) { object_ref::operator=(other); return *this; }
option_ref & operator=(option_ref && other) { object_ref::operator=(std::move(other)); return *this; }
explicit operator bool() const { return !is_scalar(raw()); }
optional<T> get() const { return *this ? some(static_cast<T const &>(cnstr_get_ref(*this, 0))) : optional<T>(); }
T get_val() const { lean_assert(*this); return static_cast<T const &>(cnstr_get_ref(*this, 0)); }
/** \brief Structural equality. */
friend bool operator==(option_ref const & o1, option_ref const & o2) {
return o1.get() == o2.get();

View File

@@ -31,13 +31,13 @@ public:
}
optional(optional && other):m_some(other.m_some) {
if (other.m_some)
new (&m_value) T(std::forward<T>(other.m_value));
new (&m_value) T(std::move(other.m_value));
}
explicit optional(T const & v):m_some(true) {
new (&m_value) T(v);
}
explicit optional(T && v):m_some(true) {
new (&m_value) T(std::forward<T>(v));
new (&m_value) T(std::move(v));
}
template<typename... Args>
explicit optional(Args&&... args):m_some(true) {
@@ -84,7 +84,7 @@ public:
m_value.~T();
m_some = other.m_some;
if (m_some)
new (&m_value) T(std::forward<T>(other.m_value));
new (&m_value) T(std::move(other.m_value));
return *this;
}
optional& operator=(T const & other) {
@@ -98,7 +98,7 @@ public:
if (m_some)
m_value.~T();
m_some = true;
new (&m_value) T(std::forward<T>(other));
new (&m_value) T(std::move(other));
return *this;
}
@@ -130,9 +130,9 @@ template<> class optional<P> { \
public: \
optional():m_value(nullptr) {} \
optional(optional const & other):m_value(other.m_value) {} \
optional(optional && other):m_value(std::forward<P>(other.m_value)) {} \
optional(optional && other):m_value(std::move(other.m_value)) {} \
explicit optional(P const & v):m_value(v) {} \
explicit optional(P && v):m_value(std::forward<P>(v)) {} \
explicit optional(P && v):m_value(std::move(v)) {} \
\
explicit operator bool() const { return m_value.m_ptr != nullptr; } \
P const * operator->() const { lean_assert(m_value.m_ptr); return &m_value; } \
@@ -142,9 +142,9 @@ public: \
P const & value() const { lean_assert(m_value.m_ptr); return m_value; } \
P & value() { lean_assert(m_value.m_ptr); return m_value; } \
optional & operator=(optional const & other) { m_value = other.m_value; return *this; } \
optional& operator=(optional && other) { m_value = std::forward<P>(other.m_value); return *this; } \
optional& operator=(optional && other) { m_value = std::move(other.m_value); return *this; } \
optional& operator=(P const & other) { m_value = other; return *this; } \
optional& operator=(P && other) { m_value = std::forward<P>(other); return *this; } \
optional& operator=(P && other) { m_value = std::move(other); return *this; } \
friend bool operator==(optional const & o1, optional const & o2) { \
return static_cast<bool>(o1) == static_cast<bool>(o2) && (!o1 || o1.m_value == o2.m_value); \
} \
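The `optional` changes above replace `std::forward<T>(x)` with `std::move(x)`. `std::forward` is only meaningful on a forwarding reference `U&&` with a *deduced* `U`; with the class's fixed `T` it degenerates to a move, so spelling it `std::move` states the intent directly. A hypothetical mini-`optional` showing the placement-new storage pattern with the corrected move:

```cpp
#include <cassert>
#include <new>
#include <string>
#include <utility>

template <typename T>
class Opt {
    bool m_some = false;
    alignas(T) unsigned char m_buf[sizeof(T)]; // in-place storage, as in `optional` above
    T * ptr() { return reinterpret_cast<T *>(m_buf); }
public:
    Opt() = default;
    explicit Opt(T && v) : m_some(true) { new (m_buf) T(std::move(v)); }
    Opt(Opt && other) : m_some(other.m_some) {
        if (m_some) new (m_buf) T(std::move(*other.ptr())); // move, not forward
    }
    ~Opt() { if (m_some) ptr()->~T(); }
    explicit operator bool() const { return m_some; }
    T & value() { return *ptr(); }
};
```

The moved-from source stays engaged but holds a moved-from value, matching the behavior of the real `optional` in this file.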

View File

@@ -4,11 +4,8 @@ Released under Apache 2.0 license as described in the file LICENSE.
Author: Leonardo de Moura
*/
#include <vector>
#include <cstring>
#include <unordered_map>
#include <unordered_set>
#include "runtime/object.h"
#include "runtime/sharecommon.h"
#include "runtime/hash.h"
namespace lean {
@@ -271,176 +268,168 @@ extern "C" LEAN_EXPORT obj_res lean_state_sharecommon(b_obj_arg tc, obj_arg s, o
return sharecommon_fn(tc, s)(a);
}
/*
A faster version of `sharecommon_fn` that uses only local state.
It reduces the number of RC operations, improves the strategy for
caching results, and uses C++ hash maps.
We do not increment reference counters when inserting Lean objects at `m_cache` and `m_set`.
This is correct because
- The domain of `m_cache` contains only sub-objects of `lean_sharecommon_quick` parameter,
and we know the object referenced by this parameter will remain alive.
- The range of `m_cache` contains only new objects that have been maximally shared, and these
objects are sub-objects of the object returned by `lean_sharecommon_quick`.
- `m_set` is like the range of `m_cache`.
*/
class sharecommon_quick_fn {
struct set_hash {
std::size_t operator()(lean_object * o) const { return lean_sharecommon_hash(o); }
};
struct set_eq {
std::size_t operator()(lean_object * o1, lean_object * o2) const { return lean_sharecommon_eq(o1, o2); }
};
/*
We use `m_cache` to ensure we do **not** traverse a DAG as a tree.
We use pointer equality for this collection.
*/
std::unordered_map<lean_object *, lean_object *> m_cache;
/* Set of maximally shared terms. AKA hash-consing table. */
std::unordered_set<lean_object *, set_hash, set_eq> m_set;
/*
We do not increment reference counters when inserting Lean objects at `m_cache` and `m_set`.
This is correct because
- The domain of `m_cache` contains only sub-objects of `lean_sharecommon_quick` parameter,
and we know the object referenced by this parameter will remain alive.
- The range of `m_cache` contains only new objects that have been maximally shared, and these
objects are sub-objects of the object returned by `lean_sharecommon_quick`.
- `m_set` is like the range of `m_cache`.
*/
lean_object * check_cache(lean_object * a) {
if (!lean_is_exclusive(a)) {
// We only check the cache if `a` is a shared object
auto it = m_cache.find(a);
if (it != m_cache.end()) {
// All objects stored in the range of `m_cache` are single threaded.
lean_assert(lean_is_st(it->second));
// We increment the reference counter because this object
// will be returned by `lean_sharecommon_quick` or stored into a new object.
it->second->m_rc++;
return it->second;
lean_object * sharecommon_quick_fn::check_cache(lean_object * a) {
if (!lean_is_exclusive(a)) {
// We only check the cache if `a` is a shared object
auto it = m_cache.find(a);
if (it != m_cache.end()) {
// All objects stored in the range of `m_cache` are single threaded.
lean_assert(lean_is_st(it->second));
// We increment the reference counter because this object
// will be returned by `lean_sharecommon_quick` or stored into a new object.
it->second->m_rc++;
return it->second;
}
if (m_check_set) {
auto it = m_set.find(a);
if (it != m_set.end()) {
lean_object * result = *it;
lean_assert(lean_is_st(result));
result->m_rc++;
return result;
}
}
return nullptr;
}
return nullptr;
}
/*
`new_a` is a new object that is equal to `a`, but its subobjects are maximally shared.
*/
lean_object * save(lean_object * a, lean_object * new_a) {
lean_assert(lean_is_st(new_a));
lean_assert(new_a->m_rc == 1);
auto it = m_set.find(new_a);
lean_object * result;
if (it == m_set.end()) {
// `new_a` is a new object
m_set.insert(new_a);
result = new_a;
} else {
// We already have a maximally shared object that is equal to `new_a`
result = *it;
DEBUG_CODE({
if (lean_is_ctor(new_a)) {
lean_assert(lean_is_ctor(result));
unsigned num_objs = lean_ctor_num_objs(new_a);
lean_assert(lean_ctor_num_objs(result) == num_objs);
for (unsigned i = 0; i < num_objs; i++) {
lean_assert(lean_ctor_get(result, i) == lean_ctor_get(new_a, i));
}
/*
`new_a` is a new object that is equal to `a`, but its subobjects are maximally shared.
*/
lean_object * sharecommon_quick_fn::save(lean_object * a, lean_object * new_a) {
lean_assert(lean_is_st(new_a));
lean_assert(new_a->m_rc == 1);
auto it = m_set.find(new_a);
lean_object * result;
if (it == m_set.end()) {
// `new_a` is a new object
m_set.insert(new_a);
result = new_a;
} else {
// We already have a maximally shared object that is equal to `new_a`
result = *it;
DEBUG_CODE({
if (lean_is_ctor(new_a)) {
lean_assert(lean_is_ctor(result));
unsigned num_objs = lean_ctor_num_objs(new_a);
lean_assert(lean_ctor_num_objs(result) == num_objs);
for (unsigned i = 0; i < num_objs; i++) {
lean_assert(lean_ctor_get(result, i) == lean_ctor_get(new_a, i));
}
});
lean_dec_ref(new_a); // delete `new_a`
// All objects in `m_set` are single threaded.
lean_assert(lean_is_st(result));
result->m_rc++;
lean_assert(result->m_rc > 1);
}
if (!lean_is_exclusive(a)) {
// We only cache the result if `a` is a shared object.
m_cache.insert(std::make_pair(a, result));
}
lean_assert(result == new_a || result->m_rc > 1);
lean_assert(result != new_a || result->m_rc == 1);
return result;
}
});
lean_dec_ref(new_a); // delete `new_a`
// All objects in `m_set` are single threaded.
lean_assert(lean_is_st(result));
result->m_rc++;
lean_assert(result->m_rc > 1);
}
if (!lean_is_exclusive(a)) {
// We only cache the result if `a` is a shared object.
m_cache.insert(std::make_pair(a, result));
}
lean_assert(result == new_a || result->m_rc > 1);
lean_assert(result != new_a || result->m_rc == 1);
return result;
}
// `sarray` and `string`
lean_object * visit_terminal(lean_object * a) {
auto it = m_set.find(a);
if (it == m_set.end()) {
m_set.insert(a);
} else {
a = *it;
}
lean_inc_ref(a);
// `sarray` and `string`
lean_object * sharecommon_quick_fn::visit_terminal(lean_object * a) {
auto it = m_set.find(a);
if (it == m_set.end()) {
m_set.insert(a);
} else {
a = *it;
}
lean_inc_ref(a);
return a;
}
lean_object * sharecommon_quick_fn::visit_array(lean_object * a) {
lean_object * r = check_cache(a);
if (r != nullptr) { lean_assert(r->m_rc > 1); return r; }
size_t sz = array_size(a);
lean_array_object * new_a = (lean_array_object*)lean_alloc_array(sz, sz);
for (size_t i = 0; i < sz; i++) {
lean_array_set_core((lean_object*)new_a, i, visit(lean_array_get_core(a, i)));
}
return save(a, (lean_object*)new_a);
}
lean_object * sharecommon_quick_fn::visit_ctor(lean_object * a) {
lean_object * r = check_cache(a);
if (r != nullptr) { lean_assert(r->m_rc > 1); return r; }
unsigned num_objs = lean_ctor_num_objs(a);
unsigned tag = lean_ptr_tag(a);
unsigned sz = lean_object_byte_size(a);
unsigned scalar_offset = sizeof(lean_object) + num_objs*sizeof(void*);
unsigned scalar_sz = sz - scalar_offset;
lean_object * new_a = lean_alloc_ctor(tag, num_objs, scalar_sz);
for (unsigned i = 0; i < num_objs; i++) {
lean_ctor_set(new_a, i, visit(lean_ctor_get(a, i)));
}
if (scalar_sz > 0) {
memcpy(reinterpret_cast<char*>(new_a) + scalar_offset, reinterpret_cast<char*>(a) + scalar_offset, scalar_sz);
}
return save(a, new_a);
}
/*
**TODO:** We did not implement stack overflow detection.
We claim it is not needed in the current uses of `shareCommon'`.
If this becomes an issue, we can use the following approach to address the issue without
affecting the performance.
- Add an extra `depth` parameter.
- In `operator()`, estimate the maximum depth based on the remaining stack space. See `check_stack`.
- If the limit is reached, simply return `a`.
*/
lean_object * sharecommon_quick_fn::visit(lean_object * a) {
if (lean_is_scalar(a)) {
return a;
}
lean_object * visit_array(lean_object * a) {
lean_object * r = check_cache(a);
if (r != nullptr) { lean_assert(r->m_rc > 1); return r; }
size_t sz = array_size(a);
lean_array_object * new_a = (lean_array_object*)lean_alloc_array(sz, sz);
for (size_t i = 0; i < sz; i++) {
lean_array_set_core((lean_object*)new_a, i, visit(lean_array_get_core(a, i)));
}
return save(a, (lean_object*)new_a);
}
lean_object * visit_ctor(lean_object * a) {
lean_object * r = check_cache(a);
if (r != nullptr) { lean_assert(r->m_rc > 1); return r; }
unsigned num_objs = lean_ctor_num_objs(a);
unsigned tag = lean_ptr_tag(a);
unsigned sz = lean_object_byte_size(a);
unsigned scalar_offset = sizeof(lean_object) + num_objs*sizeof(void*);
unsigned scalar_sz = sz - scalar_offset;
lean_object * new_a = lean_alloc_ctor(tag, num_objs, scalar_sz);
for (unsigned i = 0; i < num_objs; i++) {
lean_ctor_set(new_a, i, visit(lean_ctor_get(a, i)));
}
if (scalar_sz > 0) {
memcpy(reinterpret_cast<char*>(new_a) + scalar_offset, reinterpret_cast<char*>(a) + scalar_offset, scalar_sz);
}
return save(a, new_a);
}
public:
switch (lean_ptr_tag(a)) {
/*
**TODO:** We did not implement stack overflow detection.
We claim it is not needed in the current uses of `shareCommon'`.
If this becomes an issue, we can use the following approach to address the issue without
affecting the performance.
- Add an extra `depth` parameter.
- In `operator()`, estimate the maximum depth based on the remaining stack space. See `check_stack`.
- If the limit is reached, simply return `a`.
Similarly to `sharecommon_fn`, we only maximally share arrays, scalar arrays, strings, and
constructor objects.
*/
lean_object * visit(lean_object * a) {
if (lean_is_scalar(a)) {
return a;
}
switch (lean_ptr_tag(a)) {
/*
Similarly to `sharecommon_fn`, we only maximally share arrays, scalar arrays, strings, and
constructor objects.
*/
case LeanMPZ: lean_inc_ref(a); return a;
case LeanClosure: lean_inc_ref(a); return a;
case LeanThunk: lean_inc_ref(a); return a;
case LeanTask: lean_inc_ref(a); return a;
case LeanRef: lean_inc_ref(a); return a;
case LeanExternal: lean_inc_ref(a); return a;
case LeanReserved: lean_inc_ref(a); return a;
case LeanScalarArray: return visit_terminal(a);
case LeanString: return visit_terminal(a);
case LeanArray: return visit_array(a);
default: return visit_ctor(a);
}
case LeanMPZ: lean_inc_ref(a); return a;
case LeanClosure: lean_inc_ref(a); return a;
case LeanThunk: lean_inc_ref(a); return a;
case LeanTask: lean_inc_ref(a); return a;
case LeanRef: lean_inc_ref(a); return a;
case LeanExternal: lean_inc_ref(a); return a;
case LeanReserved: lean_inc_ref(a); return a;
case LeanScalarArray: return visit_terminal(a);
case LeanString: return visit_terminal(a);
case LeanArray: return visit_array(a);
default: return visit_ctor(a);
}
lean_object * operator()(lean_object * a) {
return visit(a);
}
};
}
// def ShareCommon.shareCommon' (a : A) : A := a
extern "C" LEAN_EXPORT obj_res lean_sharecommon_quick(obj_arg a) {
return sharecommon_quick_fn()(a);
}
lean_object * sharecommon_persistent_fn::operator()(lean_object * e) {
lean_object * r = check_cache(e);
if (r != nullptr)
return r;
m_saved.push_back(object_ref(e, true));
r = visit(e);
m_saved.push_back(object_ref(r, true));
return r;
}
};
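The core of `sharecommon_quick_fn` is a hash-consing table: an `unordered_set` keyed by *structural* hash and equality, so structurally equal nodes collapse to one representative and later equality checks become pointer comparisons. A self-contained sketch of that pattern (toy node type and `HashCons` wrapper are hypothetical; the real code also manages Lean reference counts, which is omitted here):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_set>
#include <vector>

// Toy node: an int leaf (children null) or a pair of already-interned children.
struct Node {
    int leaf;
    Node const * l, * r;
};

struct node_hash {
    std::size_t operator()(Node const * n) const {
        if (!n->l) return std::hash<int>()(n->leaf);
        // Children are already maximally shared, so their addresses identify them.
        return reinterpret_cast<std::size_t>(n->l) * 31 ^ reinterpret_cast<std::size_t>(n->r);
    }
};
struct node_eq {
    bool operator()(Node const * a, Node const * b) const {
        return a->leaf == b->leaf && a->l == b->l && a->r == b->r;
    }
};

struct HashCons {
    std::unordered_set<Node const *, node_hash, node_eq> set; // hash-consing table
    std::vector<Node *> arena;                                // owns the interned nodes
    Node const * mk(int leaf, Node const * l, Node const * r) {
        Node key{leaf, l, r};
        auto it = set.find(&key);
        if (it != set.end()) return *it; // structurally equal node exists: reuse it
        arena.push_back(new Node(key));
        set.insert(arena.back());
        return arena.back();
    }
    ~HashCons() { for (Node * n : arena) delete n; }
};
```

Building the "same" tree twice allocates each distinct node only once, which is what lets `m_cache` in the real code hash shared subterms by address.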

src/runtime/sharecommon.h Normal file
View File

@@ -0,0 +1,71 @@
/*
Copyright (c) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Leonardo de Moura
*/
#pragma once
#include <vector>
#include <unordered_map>
#include <unordered_set>
#include "runtime/object_ref.h"
namespace lean {
extern "C" LEAN_EXPORT uint8 lean_sharecommon_eq(b_obj_arg o1, b_obj_arg o2);
extern "C" LEAN_EXPORT uint64_t lean_sharecommon_hash(b_obj_arg o);
/*
A faster version of `sharecommon_fn` that uses only local state.
It reduces the number of RC operations, improves the strategy for
caching results, and uses C++ hash maps.
*/
class LEAN_EXPORT sharecommon_quick_fn {
protected:
struct set_hash {
std::size_t operator()(lean_object * o) const { return lean_sharecommon_hash(o); }
};
struct set_eq {
std::size_t operator()(lean_object * o1, lean_object * o2) const { return lean_sharecommon_eq(o1, o2); }
};
/*
We use `m_cache` to ensure we do **not** traverse a DAG as a tree.
We use pointer equality for this collection.
*/
std::unordered_map<lean_object *, lean_object *> m_cache;
/* Set of maximally shared terms. AKA hash-consing table. */
std::unordered_set<lean_object *, set_hash, set_eq> m_set;
/*
If `true`, `check_cache` will also check `m_set`.
This is useful when the input term may contain terms that have already
been hash-consed.
*/
bool m_check_set;
lean_object * check_cache(lean_object * a);
lean_object * save(lean_object * a, lean_object * new_a);
lean_object * visit_terminal(lean_object * a);
lean_object * visit_array(lean_object * a);
lean_object * visit_ctor(lean_object * a);
lean_object * visit(lean_object * a);
public:
sharecommon_quick_fn(bool s = false):m_check_set(s) {}
void set_check_set(bool f) { m_check_set = f; }
lean_object * operator()(lean_object * a) {
return visit(a);
}
};
/*
Similar to `sharecommon_quick_fn`, but we save the entry points and result values to ensure
they are not deleted.
*/
class LEAN_EXPORT sharecommon_persistent_fn : private sharecommon_quick_fn {
std::vector<object_ref> m_saved;
public:
sharecommon_persistent_fn(bool s = false):sharecommon_quick_fn(s) {}
void set_check_set(bool f) { m_check_set = f; }
lean_object * operator()(lean_object * e);
};
};

View File

@@ -17,11 +17,11 @@ public:
explicit string_ref(std::string const & s):object_ref(mk_string(s)) {}
explicit string_ref(sstream const & strm):string_ref(strm.str()) {}
string_ref(string_ref const & other):object_ref(other) {}
string_ref(string_ref && other):object_ref(other) {}
string_ref(string_ref && other):object_ref(std::move(other)) {}
explicit string_ref(obj_arg o):object_ref(o) {}
string_ref(b_obj_arg o, bool b):object_ref(o, b) {}
string_ref & operator=(string_ref const & other) { object_ref::operator=(other); return *this; }
string_ref & operator=(string_ref && other) { object_ref::operator=(other); return *this; }
string_ref & operator=(string_ref && other) { object_ref::operator=(std::move(other)); return *this; }
/* Number of bytes used to store the string in UTF8.
Remark: it does not include the null character added in the end to make it efficient to
convert to C string. */

View File

@@ -9,6 +9,9 @@ Author: Leonardo de Moura
#include <iostream>
#ifdef LEAN_WINDOWS
#include <windows.h>
# ifdef LEAN_AUTO_THREAD_FINALIZATION
#include <pthread.h>
# endif
#else
#include <pthread.h>
#endif

View File

@@ -144,7 +144,7 @@ ENDFOREACH(T)
 # LEAN PACKAGE TESTS
 file(GLOB LEANPKGTESTS "${LEAN_SOURCE_DIR}/../tests/pkg/*")
 FOREACH(T ${LEANPKGTESTS})
-  if(IS_DIRECTORY ${T})
+  if(EXISTS ${T}/test.sh)
     GET_FILENAME_COMPONENT(T_NAME ${T} NAME)
     add_test(NAME "leanpkgtest_${T_NAME}"
              WORKING_DIRECTORY "${T}"


@@ -30,9 +30,9 @@ public:
     explicit data_value(name const & v):object_ref(mk_cnstr(static_cast<unsigned>(data_value_kind::Name), v.raw())) { inc(v.raw()); }
     data_value():data_value(false) {}
     data_value(data_value const & other):object_ref(other) {}
-    data_value(data_value && other):object_ref(other) {}
+    data_value(data_value && other):object_ref(std::move(other)) {}
     data_value & operator=(data_value const & other) { object_ref::operator=(other); return *this; }
-    data_value & operator=(data_value && other) { object_ref::operator=(other); return *this; }
+    data_value & operator=(data_value && other) { object_ref::operator=(std::move(other)); return *this; }
     data_value_kind kind() const { return static_cast<data_value_kind>(cnstr_tag(raw())); }
     string_ref const & get_string() const { lean_assert(kind() == data_value_kind::String); return static_cast<string_ref const &>(cnstr_get_ref(*this, 0)); }


@@ -74,7 +74,7 @@ public:
     name(std::string const & s):name(name(), string_ref(s)) {}
     name(string_ref const & s):name(name(), s) {}
     name(name const & other):object_ref(other) {}
-    name(name && other):object_ref(other) {}
+    name(name && other):object_ref(std::move(other)) {}
     /**
        \brief Create a hierarchical name using the given strings.
        Example: <code>name{"foo", "bla", "tst"}</code> creates the hierarchical
@@ -97,7 +97,7 @@ public:
     */
     static name mk_internal_unique_name();
     name & operator=(name const & other) { object_ref::operator=(other); return *this; }
-    name & operator=(name && other) { object_ref::operator=(other); return *this; }
+    name & operator=(name && other) { object_ref::operator=(std::move(other)); return *this; }
     static uint64_t hash(b_obj_arg n) {
         lean_assert(lean_name_hash(n) == lean_name_hash_exported_b(n));
         return lean_name_hash(n);


@@ -27,10 +27,10 @@ public:
     static nat of_size_t(size_t v) { return nat(lean_usize_to_nat(v)); }
     nat(nat const & other):object_ref(other) {}
-    nat(nat && other):object_ref(other) {}
+    nat(nat && other):object_ref(std::move(other)) {}
     nat & operator=(nat const & other) { object_ref::operator=(other); return *this; }
-    nat & operator=(nat && other) { object_ref::operator=(other); return *this; }
+    nat & operator=(nat && other) { object_ref::operator=(std::move(other)); return *this; }
     bool is_small() const { return is_scalar(raw()); }
     size_t get_small_value() const { lean_assert(is_small()); return unbox(raw()); }
     bool is_zero() const { return is_small() && get_small_value() == 0; }


@@ -191,43 +191,44 @@ static void display_features(std::ostream & out) {
 static void display_help(std::ostream & out) {
     display_header(out);
     std::cout << "Miscellaneous:\n";
-    std::cout << "  --help -h          display this message\n";
-    std::cout << "  --features -f      display features compiler provides (eg. LLVM support)\n";
-    std::cout << "  --version -v       display version number\n";
-    std::cout << "  --githash          display the git commit hash number used to build this binary\n";
-    std::cout << "  --run              call the 'main' definition in a file with the remaining arguments\n";
-    std::cout << "  --o=oname -o       create olean file\n";
-    std::cout << "  --i=iname -i       create ilean file\n";
-    std::cout << "  --c=fname -c       name of the C output file\n";
-    std::cout << "  --bc=fname -b      name of the LLVM bitcode file\n";
-    std::cout << "  --stdin            take input from stdin\n";
-    std::cout << "  --root=dir         set package root directory from which the module name of the input file is calculated\n"
-              << "                     (default: current working directory)\n";
-    std::cout << "  --trust=num -t     trust level (default: max) 0 means do not trust any macro,\n"
-              << "                     and type check all imported modules\n";
-    std::cout << "  --quiet -q         do not print verbose messages\n";
-    std::cout << "  --memory=num -M    maximum amount of memory that should be used by Lean\n";
-    std::cout << "                     (in megabytes)\n";
-    std::cout << "  --timeout=num -T   maximum number of memory allocations per task\n";
-    std::cout << "                     this is a deterministic way of interrupting long running tasks\n";
+    std::cout << "  -h, --help           display this message\n";
+    std::cout << "  -f, --features       display features compiler provides (eg. LLVM support)\n";
+    std::cout << "  -v, --version        display version number\n";
+    std::cout << "  --githash            display the git commit hash number used to build this binary\n";
+    std::cout << "  --run                call the 'main' definition in a file with the remaining arguments\n";
+    std::cout << "  -o, --o=oname        create olean file\n";
+    std::cout << "  -i, --i=iname        create ilean file\n";
+    std::cout << "  -c, --c=fname        name of the C output file\n";
+    std::cout << "  -b, --bc=fname       name of the LLVM bitcode file\n";
+    std::cout << "  --stdin              take input from stdin\n";
+    std::cout << "  --root=dir           set package root directory from which the module name\n"
+              << "                       of the input file is calculated\n"
+              << "                       (default: current working directory)\n";
+    std::cout << "  -t, --trust=num      trust level (default: max) 0 means do not trust any macro,\n"
+              << "                       and type check all imported modules\n";
+    std::cout << "  -q, --quiet          do not print verbose messages\n";
+    std::cout << "  -M, --memory=num     maximum amount of memory that should be used by Lean\n";
+    std::cout << "                       (in megabytes)\n";
+    std::cout << "  -T, --timeout=num    maximum number of memory allocations per task\n";
+    std::cout << "                       this is a deterministic way of interrupting long running tasks\n";
 #if defined(LEAN_MULTI_THREAD)
-    std::cout << "  --threads=num -j   number of threads used to process lean files\n";
-    std::cout << "  --tstack=num -s    thread stack size in Kb\n";
-    std::cout << "  --server           start lean in server mode\n";
-    std::cout << "  --worker           start lean in server-worker mode\n";
+    std::cout << "  -j, --threads=num    number of threads used to process lean files\n";
+    std::cout << "  -s, --tstack=num     thread stack size in Kb\n";
+    std::cout << "  --server             start lean in server mode\n";
+    std::cout << "  --worker             start lean in server-worker mode\n";
 #endif
-    std::cout << "  --plugin=file      load and initialize Lean shared library for registering linters etc.\n";
-    std::cout << "  --load-dynlib=file load shared library to make its symbols available to the interpreter\n";
-    std::cout << "  --json             report Lean output (e.g., messages) as JSON (one per line)\n";
-    std::cout << "  --deps             just print dependencies of a Lean input\n";
-    std::cout << "  --print-prefix     print the installation prefix for Lean and exit\n";
-    std::cout << "  --print-libdir     print the installation directory for Lean's built-in libraries and exit\n";
-    std::cout << "  --profile          display elaboration/type checking time for each definition/theorem\n";
-    std::cout << "  --stats            display environment statistics\n";
+    std::cout << "  --plugin=file        load and initialize Lean shared library for registering linters etc.\n";
+    std::cout << "  --load-dynlib=file   load shared library to make its symbols available to the interpreter\n";
+    std::cout << "  --json               report Lean output (e.g., messages) as JSON (one per line)\n";
+    std::cout << "  --deps               just print dependencies of a Lean input\n";
+    std::cout << "  --print-prefix       print the installation prefix for Lean and exit\n";
+    std::cout << "  --print-libdir       print the installation directory for Lean's built-in libraries and exit\n";
+    std::cout << "  --profile            display elaboration/type checking time for each definition/theorem\n";
+    std::cout << "  --stats              display environment statistics\n";
 DEBUG_CODE(
-    std::cout << "  --debug=tag        enable assertions with the given tag\n";
+    std::cout << "  --debug=tag          enable assertions with the given tag\n";
 )
-    std::cout << "  -D name=value      set a configuration option (see set_option command)\n";
+    std::cout << "  -D name=value        set a configuration option (see set_option command)\n";
 }
 static int print_prefix = 0;
@@ -310,7 +311,9 @@ options set_config_option(options const & opts, char const * in) {
                 << "' cannot be set in the command line, use set_option command");
         }
     } else {
-        throw lean::exception(lean::sstream() << "invalid -D parameter, unknown configuration option '" << opt << "'");
+        // More options may be registered by imports, so we leave validating them to the Lean side.
+        // This (minor) duplication will be resolved when this file is rewritten in Lean.
+        return opts.update(opt, val.c_str());
     }
 }

Binary files changed (contents not shown), including these generated stage0 files:

    stage0/src/kernel/expr.h
    stage0/src/kernel/instantiate_mvars.cpp (new file)
    stage0/src/runtime/sharecommon.h (new file)
    stage0/src/util/kvmap.h
    stage0/src/util/name.h
    stage0/src/util/nat.h

plus other stage0 binary files not listed here.

Some files were not shown because too many files have changed in this diff.