Compare commits

..

34 Commits

Author SHA1 Message Date
Kim Morrison
589ae168c4 test file 2025-09-10 02:06:22 +02:00
Kim Morrison
b93c653b9a chore: remove over-eager grind lemma 2025-09-10 02:06:22 +02:00
Cameron Zwarich
9923a8d9f8 chore: remove special case for extern constructors (#10257)
This is subsumed by the fix in #10256.
2025-09-05 06:08:45 +00:00
Cameron Zwarich
de38a16fa9 fix: use IR decls in toIR for applications without mono decls (#10256)
This PR corrects a mistake in `toIR` where it could over-apply a
function that has an IR decl but no mono decl.

Fixes #10181.
2025-09-05 05:32:19 +00:00
Cameron Zwarich
c0238e396c refactor: inline tryIrDecl? into its only caller (#10255)
This helper function was incorrectly named anyway.
2025-09-05 04:41:34 +00:00
Cameron Zwarich
c7cc398935 refactor: create a mkApplication helper for toIR (#10254) 2025-09-05 01:42:36 +00:00
Mac Malone
849bb770fd refactor: lake: split PackageConfig from Config.Package (#10253)
This PR moves the `PackageConfig` definition from `Lake.Config.Package`
into its own module. This enables a significant reduction in the `meta
import` tree of the `Lake.CLI.Translate` modules.
2025-09-04 23:15:37 +00:00
Leonardo de Moura
6cefbc4bb0 chore: fix typo (#10251) 2025-09-04 16:05:00 +00:00
Paul Reichert
9b6a4a7588 fix: solve two problems with LinearOrderPackage factories (#10250)
This PR fixes a bug in the `LinearOrderPackage.ofOrd` factory. If there
is a `LawfulEqOrd` instance available, it should automatically use it
instead of requiring the user to provide the `eq_of_compare` argument to
the factory. The PR also solves a hygiene-related problem making the
factories fail when `Std` is not open.
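
An illustrative sketch (the type `Weekday` and its instances are invented here; only `LinearOrderPackage.ofOrd` and the `eq_of_compare` field name are taken from the description above):
```lean
-- Hypothetical example: with `deriving Ord, DecidableEq`, a `LawfulEqOrd`
-- instance can typically be obtained, so after this fix the factory should
-- pick it up automatically instead of demanding `eq_of_compare`:
inductive Weekday where
  | mon | tue | wed
  deriving Ord, DecidableEq

-- Assuming a `Std.LawfulEqOrd Weekday` instance is available, something
-- like the following should now succeed without the extra field:
-- instance : Std.LinearOrderPackage Weekday := .ofOrd Weekday
```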
2025-09-04 15:27:09 +00:00
Sebastian Ullrich
47787dc1cb perf: rebuild leak on private match (#10246)
This PR prevents downstream rebuilds on changes to private `match`es
under the module system.
2025-09-04 12:51:42 +00:00
Lean stage0 autoupdater
25ab3dd93d chore: update stage0 2025-09-04 08:22:20 +00:00
Kim Morrison
bbd45b13f4 chore: move omega internals to a namespace (#10243)
This PR moves some internal implementation details of `omega` out of the
`List` namespace. See [#mathlib4 > Naming: ne_zero vs nonzero @
💬](https://leanprover.zulipchat.com/#narrow/channel/287929-mathlib4/topic/Naming.3A.20ne_zero.20vs.20nonzero/near/537424328).
2025-09-04 06:32:02 +00:00
Kim Morrison
85f168bbd0 chore: add test cases for grind on Fin lemmas (#10241)
This PR adds some test cases for `grind` working with `Fin`. There are
still many failing tests in `tests/lean/grind/grind_fin.lean`, which I'm
intending to triage and work on.
2025-09-04 04:28:29 +00:00
Marcus Rossel
89aed0931e feat: improve error message when passing local hypotheses to grind (#8891)
This PR improves the error message produced when passing (automatically
redundant) local hypotheses to `grind`.
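
For example (a hypothetical invocation; the exact wording of the new message is not quoted in the description):
```lean
-- `h` is already part of the local context, so passing it to `grind`
-- explicitly is redundant; this PR makes the resulting message clearer.
example (x : Nat) (h : x = 1) : x + 1 = 2 := by
  grind [h]
```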
2025-09-04 03:00:21 +00:00
Sebastian Ullrich
92d24e1c40 fix: Environment.realizeConst to replay realization map (#10238)
This PR fixes an issue with retrieving realized declarations after use
of Aesop, uncovered by #10229.
2025-09-03 22:16:40 +00:00
Leonardo de Moura
c15ee8a9f0 fix: universe polymorphic E-matching (#10239)
This PR fixes the E-matching procedure for theorems that contain
universe parameters not referenced by any regular parameter. This kind
of theorem seldom happens in practice, but we do have instances in the
standard library. Example:
```lean
@[simp, grind =] theorem Std.Do.SPred.down_pure {φ : Prop} : (⌜φ⌝ : SPred []).down = φ := rfl
```

closes #10233
2025-09-03 22:14:58 +00:00
Leonardo de Moura
320b02108b fix: grind canonicalizer (#10237)
This PR fixes a missing case in the `grind` canonicalizer. Some types
may include terms or propositions that are internalized later in the
`grind` state.

closes #10232
2025-09-03 18:08:48 +00:00
Rob23oba
80df86dfdd feat: add more MonoBind instances for monad transformers (#10230)
This PR adds `MonoBind` for more monad transformers. This allows using
`partial_fixpoint` for more complicated monads based on `Option` and
`EIO`. Example:
```lean
abbrev M := ReaderT String (StateT String.Pos Option)

def parseAll (x : M α) : M (List α) := do
  if (← read).atEnd (← get) then
    return []
  let val ← x
  let list ← parseAll x
  return val :: list
partial_fixpoint
```
2025-09-03 17:15:41 +00:00
Paul Reichert
fef390df08 perf: improve iterator/range benchmarks, use shortcut instances for Int ranges (#10197)
This PR is the result of analyzing the elaborator performance regression
introduced by #10005. It makes the `workspaceSymboldNewRanges` and
`iterators` benchmarks less noisy. It also replaces some range-related
instances for `Nat` with shortcuts to the general-purpose instances.
This is a trade-off between the ergonomics and the synthesis cost of
having general-purpose instances.
2025-09-03 15:47:52 +00:00
Sebastian Ullrich
37be918c50 perf: do not export EqnInfo for non-exposed defs (#10229) 2025-09-03 10:03:52 +00:00
Sebastian Ullrich
2efbe4ac36 feat: support visibility modifiers on syntax abbrevs (#10228)
Closes #10068
2025-09-03 07:53:29 +00:00
Eric Wieser
6d68aab56a feat: generalize universes in monadic operators for collections (#10224)
This PR generalizes the monadic operations for `HashMap`, `TreeMap`, and
`HashSet` to work for `m : Type u → Type v`.

This upstreams [a workaround from
Aesop](66a992130e/Aesop/Util/Basic.lean (L57-L66)),
and seems to continue a pattern already established in other files, such
as:
```lean
Array.forM.{u, v, w} {α : Type u} {m : Type v → Type w} [Monad m] (f : α → m PUnit) (as : Array α) (start : Nat := 0)
  (stop : Nat := as.size) : m PUnit
```
2025-09-03 07:24:14 +00:00
Joachim Breitner
ccb8568756 feat: linear-size DecidableEq instance (#10152)
This PR introduces an alternative construction for `DecidableEq`
instances that avoids the quadratic overhead of the default
construction.

The usual construction uses a `match` statement that looks at each pair
of constructors, and thus is necessarily quadratic in size. For
inductive data type with dozens of constructors or more, this quickly
becomes slow to process.

The new construction first compares the constructor tags (using the
`.ctorIdx` introduced in #9951), and handles the case of a differing
constructor tag quickly. If the constructor tags match, it uses the
per-constructor-eliminators (#9952) to create a linear-size instance. It
does so by creating a custom “matcher” for a parallel match on the data
types and the `h : x1.ctorIdx = x2.ctorIdx` assumption; this behaves
(and delaborates) like a normal `match` statement, but is implemented in
a bespoke way. This same-constructor-matcher will be useful for
implementing other instances as well.

The new construction produces less efficient code at the moment, so we
use it only for inductive types with 10 or more constructors by default.
The option `deriving.decEq.linear_construction_threshold` can be used to
adjust the threshold; set it to 0 to always use the new construction.
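
A minimal sketch of the option in use (the inductive type below is invented; the option name is quoted from the description above):
```lean
-- Setting the threshold to 0 forces the linear-size construction even for
-- small types; by default it only kicks in at 10 or more constructors.
set_option deriving.decEq.linear_construction_threshold 0 in
inductive Direction where
  | north | south | east | west
  deriving DecidableEq

-- The derived instance first compares constructor tags via `ctorIdx` and
-- only then, on matching tags, compares constructor arguments.
example : Direction.north ≠ Direction.south := by decide
```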
2025-09-03 06:31:49 +00:00
Leonardo de Moura
a4f6f391fe feat: equality propagation from AC module to grind core (#10223)
This PR implements equality propagation from the new AC module into the
`grind` core. Examples:

```lean
example {α β : Sort u} (f : α → β) (op : α → α → α) [Std.Associative op] [Std.Commutative op] 
    (a b c d : α) : op a (op b b) = op d c → f (op (op b a) (op b c)) = f (op c (op d c)) := by
  grind only

example (a b c : Nat) : min a (max b (max c 0)) = min (max c b) a := by
  grind -cutsat only

example {α β : Sort u} (bar : α → β) (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op]
    (a b c d e f x y w : α) :
    op d (op x c) = op a b →
    op e (op f (op y w)) = op (op d a) (op b c) →
    bar (op d (op x c)) = bar (op e (op f (op y w))) := by
  grind only
```
2025-09-02 23:02:25 +00:00
Leonardo de Moura
dac61c406f feat: extra critical pairs for associative + idempotent operators in grind ac (#10221)
This PR adds the extra critical pairs to ensure the `grind ac` procedure
is complete when the operator is associative and idempotent, but not
commutative. Example:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op] (a b c d e f x y w : α)
    : op d (op x c) = op a b →
      op e (op f (op y w)) = op a (op b c) →
      op d (op x c) = op e (op f (op y w)) := by
  grind only

example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op] (a b c d e f x y w : α)
    : op a (op d x) = op b c →
      op e (op f (op y w)) = op a (op b c) →
      op a (op d x) = op e (op f (op y w)) := by
  grind only
```
2025-09-02 15:52:56 +00:00
Henrik Böving
db35f98b26 fix: make csimp equivalence criteria more strict (#10214)
This PR fixes #10213.
2025-09-02 14:36:08 +00:00
Leonardo de Moura
e6f50b0181 perf: EqCnstr.superposeWith (#10218)
This PR adds a small optimization for `EqCnstr.superposeWith`.
It also adds a new test unrelated to the optimization.
2025-09-02 13:50:47 +00:00
Dax Fohl
2877196656 doc: fix broken "quickstart" and "supported editors" link (#8785)
The "supported editors" link in
https://github.com/leanprover/lean4/blob/master/doc/dev/index.md is
broken, as `setup.md` no longer exists in the repo. This PR changes the
link to point to the live Lean docs setup page at
https://docs.lean-lang.org/lean4/doc/setup.html#editing.

A similar fix for quickstart is included.

---------

Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
2025-09-02 12:45:04 +00:00
Aaron Liu
f748d1c4ef doc: fix typo in docstring for fieldIdxKind (#8814)
This PR fixes a typo in the docstring for `Lean.fieldIdxKind`, which was
missing a backtick.
2025-09-02 12:30:07 +00:00
Eric Wieser
848832dd61 chore: demote a panic to an exception in saveModuleData (#9127)
This PR makes `saveModuleData` throw an IO.Error instead of panicking,
if given something that cannot be serialized. This doesn't really matter
for saving modules, but is handy when writing tools to save auxiliary
data in olean files via Batteries' `pickle`.

The caller of this C++ function already is guarded in a `try`/`catch`
that promotes from a `lean::exception` to an `IO.userError`.

A simple test of this in the web editor is
```lean
import Batteries

#eval pickle "/tmp/foo.txt" fun x : Nat => x
```
which crashes before this change.

---------

Co-authored-by: Laurent Sartran <lsartran@google.com>
2025-09-02 12:25:45 +00:00
Henrik Böving
c5f2c192d6 fix: Selectable.one does not panic on empty array (#10216)
This PR fixes #10193.
2025-09-02 11:55:36 +00:00
Sebastian Ullrich
96c42b95fa chore: CI: reintroduce lost CTEST_OPTIONS (#10211) 2025-09-02 09:26:29 +00:00
Leonardo de Moura
d826474b14 feat: extra critical pairs for AC + idempotent operators in grind ac (#10208)
This PR adds the extra critical pairs to ensure the `grind ac` procedure
is complete when the operator is AC and idempotent. Example:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.Commutative op] [Std.IdempotentOp op] 
      (a b c d : α) : op a (op b b) = op d c → op (op b a) (op b c) = op c (op d c)  := by
  grind only
```
2025-09-02 04:24:22 +00:00
Kim Morrison
8d9d23b5bb feat: (approximate) inverses of dyadic rationals (#10194)
This PR adds the inverse of a dyadic rational, at a given precision, and
characterising lemmas. Also cleans up various parts of the `Int.DivMod`
and `Rat` APIs, and proves some characterising lemmas about
`Rat.toDyadic`.

---------

Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
2025-09-02 03:43:53 +00:00
473 changed files with 3820 additions and 745 deletions

@@ -229,7 +229,7 @@ jobs:
id: test
run: |
ulimit -c unlimited # coredumps
-time ctest --preset ${{ matrix.CMAKE_PRESET || 'release' }} --test-dir build/$TARGET_STAGE -j$NPROC --output-junit test-results.xml
+time ctest --preset ${{ matrix.CMAKE_PRESET || 'release' }} --test-dir build/$TARGET_STAGE -j$NPROC --output-junit test-results.xml ${{ matrix.CTEST_OPTIONS }}
if: (matrix.wasm || !matrix.cross) && (inputs.check-level >= 1 || matrix.test)
- name: Test Summary
uses: test-summary/action@v2

@@ -200,8 +200,6 @@ jobs:
"os": "ubuntu-latest",
"check-level": 2,
"CMAKE_PRESET": "reldebug",
// exclude seriously slow/stackoverflowing tests
"CTEST_OPTIONS": "-E 'interactivetest|leanpkgtest|laketest|benchtest|bv_bitblast_stress|3807'"
},
// TODO: suddenly started failing in CI
/*{
@@ -247,8 +245,6 @@ jobs:
"check-level": 2,
"shell": "msys2 {0}",
"CMAKE_OPTIONS": "-G \"Unix Makefiles\"",
// for reasons unknown, interactivetests are flaky on Windows
"CTEST_OPTIONS": "--repeat until-pass:2",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-x86_64-w64-windows-gnu.tar.zst",
"prepare-llvm": "../script/prepare-llvm-mingw.sh lean-llvm*",
"binary-check": "ldd",

@@ -8,7 +8,7 @@ You should not edit the `stage0` directory except using the commands described i
## Development Setup
-You can use any of the [supported editors](../setup.md) for editing the Lean source code.
+You can use any of the [supported editors](https://lean-lang.org/install/manual/) for editing the Lean source code.
Please see below for specific instructions for VS Code.
### Dev setup using elan

@@ -1,6 +1,6 @@
These are instructions to set up a working development environment for those who wish to make changes to Lean itself. It is part of the [Development Guide](../dev/index.md).
-We strongly suggest that new users instead follow the [Quickstart](../quickstart.md) to get started using Lean, since this sets up an environment that can automatically manage multiple Lean toolchain versions, which is necessary when working within the Lean ecosystem.
+We strongly suggest that new users instead follow the [Installation Instructions](https://lean-lang.org/install/) to get started using Lean, since this sets up an environment that can automatically manage multiple Lean toolchain versions, which is necessary when working within the Lean ecosystem.
Requirements
------------

@@ -728,7 +728,7 @@ theorem isNone_findFinIdx? {xs : Array α} {p : α → Bool} :
cases xs
simp only [List.findFinIdx?_toArray, hf, List.findFinIdx?_subtype]
rw [findFinIdx?_congr List.unattach_toArray]
-simp only [Option.map_map, Function.comp_def, Fin.cast_trans]
+simp only [Option.map_map, Function.comp_def, Fin.cast_cast]
simp [Array.size]
/-! ### idxOf

@@ -1951,7 +1951,6 @@ theorem append_left_inj {xs₁ xs₂ : Array α} (ys) : xs₁ ++ ys = xs₂ ++ y
@[simp] theorem append_eq_empty_iff {xs ys : Array α} : xs ++ ys = #[] ↔ xs = #[] ∧ ys = #[] := by
cases xs <;> simp
-@[grind →]
theorem eq_empty_of_append_eq_empty {xs ys : Array α} (h : xs ++ ys = #[]) : xs = #[] ∧ ys = #[] :=
append_eq_empty_iff.mp h

@@ -25,12 +25,12 @@ namespace Fin
@[deprecated ofNat_zero (since := "2025-05-28")] abbrev ofNat'_zero := @ofNat_zero
-theorem mod_def (a m : Fin n) : a % m = Fin.mk (a % m) (Nat.lt_of_le_of_lt (Nat.mod_le _ _) a.2) :=
+theorem mod_def (a m : Fin n) : a % m = Fin.mk (a.val % m.val) (Nat.lt_of_le_of_lt (Nat.mod_le _ _) a.2) :=
rfl
-theorem mul_def (a b : Fin n) : a * b = Fin.mk ((a * b) % n) (Nat.mod_lt _ a.pos) := rfl
+theorem mul_def (a b : Fin n) : a * b = Fin.mk ((a.val * b.val) % n) (Nat.mod_lt _ a.pos) := rfl
-theorem sub_def (a b : Fin n) : a - b = Fin.mk (((n - b) + a) % n) (Nat.mod_lt _ a.pos) := rfl
+theorem sub_def (a b : Fin n) : a - b = Fin.mk (((n - b.val) + a.val) % n) (Nat.mod_lt _ a.pos) := rfl
theorem pos' : ∀ [Nonempty (Fin n)], 0 < n | ⟨i⟩ => i.pos
@@ -81,7 +81,7 @@ theorem mk_val (i : Fin n) : (⟨i, i.isLt⟩ : Fin n) = i := Fin.eta ..
@[deprecated ofNat_self (since := "2025-05-28")] abbrev ofNat'_self := @ofNat_self
-@[simp] theorem ofNat_val_eq_self [NeZero n] (x : Fin n) : (Fin.ofNat n x) = x := by
+@[simp] theorem ofNat_val_eq_self [NeZero n] (x : Fin n) : (Fin.ofNat n x.val) = x := by
ext
rw [val_ofNat, Nat.mod_eq_of_lt]
exact x.2
@@ -121,8 +121,6 @@ Non-trivial loops lead to undesirable and counterintuitive elaboration behavior.
For example, for `x : Fin k` and `n : Nat`,
it causes `x < n` to be elaborated as `x < ↑n` rather than `↑x < n`,
silently introducing wraparound arithmetic.
-Note: as of 2025-06-03, Mathlib has such a coercion for `Fin n` anyway!
-/
@[expose]
def instNatCast (n : Nat) [NeZero n] : NatCast (Fin n) where
@@ -265,7 +263,7 @@ instance : LawfulOrderLT (Fin n) where
lt_iff := by
simp [← Fin.not_le, Decidable.imp_iff_not_or, Std.Total.total]
-@[simp, grind =] theorem val_rev (i : Fin n) : rev i = n - (i + 1) := rfl
+@[simp, grind =] theorem val_rev (i : Fin n) : (rev i).val = n - (i + 1) := rfl
@[simp] theorem rev_rev (i : Fin n) : rev (rev i) = i := Fin.ext <| by
rw [val_rev, val_rev, Nat.sub_sub, Nat.sub_sub_self (by exact i.2), Nat.add_sub_cancel]
@@ -500,9 +498,11 @@ theorem succ_succ_ne_one (a : Fin n) : Fin.succ (Fin.succ a) ≠ 1 :=
ext
simp
-@[simp] theorem cast_trans {k : Nat} (h : n = m) (h' : m = k) {i : Fin n} :
+@[simp, grind =] theorem cast_cast {k : Nat} (h : n = m) (h' : m = k) {i : Fin n} :
(i.cast h).cast h' = i.cast (Eq.trans h h') := rfl
+@[deprecated cast_cast (since := "2025-09-03")] abbrev cast_trans := @cast_cast
theorem castLE_of_eq {m n : Nat} (h : m = n) {h' : m ≤ n} : castLE h' = Fin.cast h := rfl
@[simp] theorem coe_castAdd (m : Nat) (i : Fin n) : (castAdd m i : Nat) = i := rfl
@@ -531,7 +531,7 @@ theorem cast_castAdd_left {n n' m : Nat} (i : Fin n') (h : n' + m = n + m) :
(i.castAdd m').cast h = i.castAdd m := rfl
theorem castAdd_castAdd {m n p : Nat} (i : Fin m) :
(i.castAdd n).castAdd p = (i.castAdd (n + p)).cast (Nat.add_assoc ..).symm := rfl
(i.castAdd n).castAdd p = (i.castAdd (n + p)).cast (Nat.add_assoc ..).symm := rfl
/-- The cast of the successor is the successor of the cast. See `Fin.succ_cast_eq` for rewriting in
the reverse direction. -/

@@ -1709,7 +1709,6 @@ theorem getLast_concat {a : α} : ∀ {l : List α}, getLast (l ++ [a]) (by simp
theorem nil_eq_append_iff : [] = a ++ b ↔ a = [] ∧ b = [] := by
simp
-@[grind →]
theorem eq_nil_of_append_eq_nil {l₁ l₂ : List α} (h : l₁ ++ l₂ = []) : l₁ = [] ∧ l₂ = [] :=
append_eq_nil_iff.mp h

@@ -67,20 +67,20 @@ public structure Packages.PreorderOfLEArgs (α : Type u) where
extract_lets
first
| infer_instance
-| exact Classical.Order.instLT
+| exact _root_.Classical.Order.instLT
beq :
let := le; let := decidableLE
BEq α := by
extract_lets
first
| infer_instance
-| exact FactoryInstances.beqOfDecidableLE
+| exact _root_.Std.FactoryInstances.beqOfDecidableLE
lt_iff :
let := le; let := lt
∀ a b : α, a < b ↔ a ≤ b ∧ ¬ b ≤ a := by
extract_lets
first
-| exact LawfulOrderLT.lt_iff
+| exact _root_.Std.LawfulOrderLT.lt_iff
| fail "Failed to automatically prove that the `LE` and `LT` instances are compatible. \
Please ensure that a `LawfulOrderLT` instance can be synthesized or \
manually provide the field `lt_iff`."
@@ -89,10 +89,10 @@ public structure Packages.PreorderOfLEArgs (α : Type u) where
have := lt_iff
DecidableLT α := by
extract_lets
-haveI := @LawfulOrderLT.mk (lt_iff := by assumption) ..
+haveI := @_root_.Std.LawfulOrderLT.mk (lt_iff := by assumption) ..
first
| infer_instance
-| exact FactoryInstances.decidableLTOfLE
+| exact _root_.Std.FactoryInstances.decidableLTOfLE
| fail "Failed to automatically derive that `LT` is decidable. \
Please ensure that a `DecidableLT` instance can be synthesized or \
manually provide the field `decidableLT`."
@@ -101,7 +101,7 @@ public structure Packages.PreorderOfLEArgs (α : Type u) where
∀ a b : α, a == b ↔ a ≤ b ∧ b ≤ a := by
extract_lets
first
-| exact LawfulOrderBEq.beq_iff_le_and_ge
+| exact _root_.Std.LawfulOrderBEq.beq_iff_le_and_ge
| fail "Failed to automatically prove that the `LE` and `BEq` instances are compatible. \
Please ensure that a `LawfulOrderBEq` instance can be synthesized or \
manually provide the field `beq_iff_le_and_ge`."
@@ -110,7 +110,7 @@ public structure Packages.PreorderOfLEArgs (α : Type u) where
∀ a : α, a ≤ a := by
extract_lets
first
-| exact Std.Refl.refl (r := (· ≤ ·))
+| exact _root_.Std.Refl.refl (r := (· ≤ ·))
| fail "Failed to automatically prove that the `LE` instance is reflexive. \
Please ensure that a `Refl` instance can be synthesized or \
manually provide the field `le_refl`."
@@ -119,7 +119,7 @@ public structure Packages.PreorderOfLEArgs (α : Type u) where
∀ a b c : α, a ≤ b → b ≤ c → a ≤ c := by
extract_lets
first
-| exact fun _ _ _ hab hbc => Trans.trans (r := (· ≤ ·)) (s := (· ≤ ·)) (t := (· ≤ ·)) hab hbc
+| exact fun _ _ _ hab hbc => _root_.Trans.trans (r := (· ≤ ·)) (s := (· ≤ ·)) (t := (· ≤ ·)) hab hbc
| fail "Failed to automatically prove that the `LE` instance is transitive. \
Please ensure that a `Trans` instance can be synthesized or \
manually provide the field `le_trans`."
@@ -202,7 +202,7 @@ public structure Packages.PartialOrderOfLEArgs (α : Type u) extends Packages.Pr
∀ a b : α, a ≤ b → b ≤ a → a = b := by
extract_lets
first
-| exact Antisymm.antisymm
+| exact _root_.Std.Antisymm.antisymm
| fail "Failed to automatically prove that the `LE` instance is antisymmetric. \
Please ensure that a `Antisymm` instance can be synthesized or \
manually provide the field `le_antisymm`."
@@ -310,11 +310,11 @@ public structure Packages.LinearPreorderOfLEArgs (α : Type u) extends
extract_lets
first
| infer_instance
-| exact FactoryInstances.instOrdOfDecidableLE
+| exact _root_.Std.FactoryInstances.instOrdOfDecidableLE
le_total :
∀ a b : α, a ≤ b ∨ b ≤ a := by
first
-| exact Total.total
+| exact _root_.Std.Total.total
| fail "Failed to automatically prove that the `LE` instance is total. \
Please ensure that a `Total` instance can be synthesized or \
manually provide the field `le_total`."
@@ -324,7 +324,7 @@ public structure Packages.LinearPreorderOfLEArgs (α : Type u) extends
∀ a b : α, (compare a b).isLE ↔ a ≤ b := by
extract_lets
first
-| exact LawfulOrderOrd.isLE_compare
+| exact _root_.Std.LawfulOrderOrd.isLE_compare
| fail "Failed to automatically prove that `(compare a b).isLE` is equivalent to `a ≤ b`. \
Please ensure that a `LawfulOrderOrd` instance can be synthesized or \
manually provide the field `isLE_compare`."
@@ -333,7 +333,7 @@ public structure Packages.LinearPreorderOfLEArgs (α : Type u) extends
∀ a b : α, (compare a b).isGE ↔ b ≤ a := by
extract_lets
first
-| exact LawfulOrderOrd.isGE_compare
+| exact _root_.Std.LawfulOrderOrd.isGE_compare
| fail "Failed to automatically prove that `(compare a b).isGE` is equivalent to `b ≤ a`. \
Please ensure that a `LawfulOrderOrd` instance can be synthesized or \
manually provide the field `isGE_compare`."
@@ -411,20 +411,20 @@ public structure Packages.LinearOrderOfLEArgs (α : Type u) extends
extract_lets
first
| infer_instance
-| exact Min.leftLeaningOfLE _
+| exact _root_.Min.leftLeaningOfLE _
max :
let := le; let := decidableLE
Max α := by
extract_lets
first
| infer_instance
-| exact Max.leftLeaningOfLE _
+| exact _root_.Max.leftLeaningOfLE _
min_eq :
let := le; let := decidableLE; let := min
∀ a b : α, Min.min a b = if a ≤ b then a else b := by
extract_lets
first
-| exact fun a b => Std.min_eq_if (a := a) (b := b)
+| exact fun a b => _root_.Std.min_eq_if (a := a) (b := b)
| fail "Failed to automatically prove that `min` is left-leaning. \
Please ensure that a `LawfulOrderLeftLeaningMin` instance can be synthesized or \
manually provide the field `min_eq`."
@@ -433,7 +433,7 @@ public structure Packages.LinearOrderOfLEArgs (α : Type u) extends
∀ a b : α, Max.max a b = if b ≤ a then a else b := by
extract_lets
first
-| exact fun a b => Std.max_eq_if (a := a) (b := b)
+| exact fun a b => _root_.Std.max_eq_if (a := a) (b := b)
| fail "Failed to automatically prove that `max` is left-leaning. \
Please ensure that a `LawfulOrderLeftLeaningMax` instance can be synthesized or \
manually provide the field `max_eq`."
@@ -538,7 +538,7 @@ public structure Packages.LinearPreorderOfOrdArgs (α : Type u) where
extract_lets
first
| infer_instance
-| exact LE.ofOrd _
+| exact _root_.LE.ofOrd _
lawfulOrderOrd :
let := ord; let := transOrd; let := le
LawfulOrderOrd α := by
@@ -554,7 +554,7 @@ public structure Packages.LinearPreorderOfOrdArgs (α : Type u) where
extract_lets
first
| infer_instance
-| exact DecidableLE.ofOrd _
+| exact _root_.DecidableLE.ofOrd _
| fail "Failed to automatically derive that `LE` is decidable.\
Please ensure that a `DecidableLE` instance can be synthesized or \
manually provide the field `decidableLE`."
@@ -570,7 +570,7 @@ public structure Packages.LinearPreorderOfOrdArgs (α : Type u) where
∀ a b : α, a < b ↔ compare a b = .lt := by
extract_lets
first
-| exact fun _ _ => Std.compare_eq_lt.symm
+| exact fun _ _ => _root_.Std.compare_eq_lt.symm
| fail "Failed to automatically derive that `LT` and `Ord` are compatible. \
Please ensure that a `LawfulOrderLT` instance can be synthesized or \
manually provide the field `lt_iff`."
@@ -580,7 +580,7 @@ public structure Packages.LinearPreorderOfOrdArgs (α : Type u) where
extract_lets
first
| infer_instance
-| exact DecidableLT.ofOrd _
+| exact _root_.DecidableLT.ofOrd _
| fail "Failed to automatically derive that `LT` is decidable. \
Please ensure that a `DecidableLT` instance can be synthesized or \
manually provide the field `decidableLT`."
@@ -589,7 +589,7 @@ public structure Packages.LinearPreorderOfOrdArgs (α : Type u) where
extract_lets
first
| infer_instance
-| exact BEq.ofOrd _
+| exact _root_.BEq.ofOrd _
beq_iff :
let := ord; let := le; have := lawfulOrderOrd; let := beq
∀ a b : α, a == b ↔ compare a b = .eq := by
@@ -708,7 +708,7 @@ public structure Packages.LinearOrderOfOrdArgs (α : Type u) extends
∀ a b : α, compare a b = .eq → a = b := by
extract_lets
first
-| exact LawfulEqOrd.eq_of_compare
+| exact fun _ _ => _root_.Std.LawfulEqOrd.eq_of_compare
| fail "Failed to derive a `LawfulEqOrd` instance. \
Please make sure that it can be synthesized or \
manually provide the field `eq_of_compare`."
@@ -718,20 +718,20 @@ public structure Packages.LinearOrderOfOrdArgs (α : Type u) extends
extract_lets
first
| infer_instance
-| exact FactoryInstances.instMinOfOrd
+| exact _root_.Std.FactoryInstances.instMinOfOrd
max :
let := ord
Max α := by
extract_lets
first
| infer_instance
-| exact FactoryInstances.instMaxOfOrd
+| exact _root_.Std.FactoryInstances.instMaxOfOrd
min_eq :
let := ord; let := le; let := min; have := lawfulOrderOrd
∀ a b : α, Min.min a b = if (compare a b).isLE then a else b := by
extract_lets
first
-| exact fun a b => Std.min_eq_if_isLE_compare (a := a) (b := b)
+| exact fun a b => _root_.Std.min_eq_if_isLE_compare (a := a) (b := b)
| fail "Failed to automatically prove that `min` is left-leaning. \
Please ensure that a `LawfulOrderLeftLeaningMin` instance can be synthesized or \
manually provide the field `min_eq`."
@@ -740,7 +740,7 @@ public structure Packages.LinearOrderOfOrdArgs (α : Type u) extends
∀ a b : α, Max.max a b = if (compare a b).isGE then a else b := by
extract_lets
first
-| exact fun a b => Std.max_eq_if_isGE_compare (a := a) (b := b)
+| exact fun a b => _root_.Std.max_eq_if_isGE_compare (a := a) (b := b)
| fail "Failed to automatically prove that `max` is left-leaning. \
Please ensure that a `LawfulOrderLeftLeaningMax` instance can be synthesized or \
manually provide the field `max_eq`."

@@ -6,8 +6,11 @@ Authors: Paul Reichert
module
prelude
-public import Init.Data.Nat.Lemmas
-public import Init.Data.Range.Polymorphic.Basic
+import Init.Data.Nat.Lemmas
+public import Init.Data.Nat.Order
+public import Init.Data.Range.Polymorphic.Instances
+public import Init.Data.Order.Classes
+import Init.Data.Order.Lemmas
public section
@@ -20,6 +23,10 @@ instance : UpwardEnumerable Nat where
instance : Least? Nat where
least? := some 0
+instance : LawfulUpwardEnumerableLeast? Nat where
+  eq_succMany?_least? a := by
+    simpa [Least?.least?] using ⟨a, by simp [UpwardEnumerable.succMany?]⟩
instance : LawfulUpwardEnumerableLE Nat where
le_iff a b := by
constructor
@@ -30,98 +37,29 @@ instance : LawfulUpwardEnumerableLE Nat where
rw [← hn]
exact Nat.le_add_right _ _
instance : LawfulUpwardEnumerableLT Nat where
lt_iff a b := by
constructor
· intro h
refine ⟨b - a - 1, ?_⟩
simp [UpwardEnumerable.succMany?]
rw [Nat.sub_add_cancel, Nat.add_sub_cancel']
· exact Nat.le_of_lt h
· rwa [Nat.lt_iff_add_one_le, Nat.le_sub_iff_add_le'] at h
exact Nat.le_trans (Nat.le_succ _) h
· rintro ⟨n, hn⟩
simp only [UpwardEnumerable.succMany?, Option.some.injEq] at hn
rw [← hn]
apply Nat.lt_add_of_pos_right
apply Nat.zero_lt_succ
instance : LawfulUpwardEnumerable Nat where
succMany?_zero := by simp [UpwardEnumerable.succMany?]
succMany?_succ := by simp [UpwardEnumerable.succMany?, UpwardEnumerable.succ?, Nat.add_assoc]
ne_of_lt a b hlt := by
rw [← LawfulUpwardEnumerableLT.lt_iff] at hlt
exact Nat.ne_of_lt hlt
have hn := hlt.choose_spec
simp only [UpwardEnumerable.succMany?, Option.some.injEq] at hn
omega
instance : LawfulUpwardEnumerableLowerBound .closed Nat where
isSatisfied_iff a l := by
simp [← LawfulUpwardEnumerableLE.le_iff, BoundedUpwardEnumerable.init?,
SupportsLowerBound.IsSatisfied]
instance : LawfulUpwardEnumerableUpperBound .closed Nat where
isSatisfied_of_le u a b hub hab := by
rw [← LawfulUpwardEnumerableLE.le_iff] at hab
exact Nat.le_trans hab hub
instance : LawfulUpwardEnumerableLowerBound .open Nat where
isSatisfied_iff a l := by
simp [← LawfulUpwardEnumerableLE.le_iff, BoundedUpwardEnumerable.init?,
SupportsLowerBound.IsSatisfied, UpwardEnumerable.succ?, Nat.lt_iff_add_one_le]
instance : LawfulUpwardEnumerableUpperBound .open Nat where
isSatisfied_of_le u a b hub hab := by
rw [← LawfulUpwardEnumerableLE.le_iff] at hab
exact Nat.lt_of_le_of_lt hab hub
instance : LawfulUpwardEnumerableLowerBound .unbounded Nat where
isSatisfied_iff a l := by
simp [← LawfulUpwardEnumerableLE.le_iff, BoundedUpwardEnumerable.init?,
SupportsLowerBound.IsSatisfied, Least?.least?]
instance : LawfulUpwardEnumerableUpperBound .unbounded Nat where
isSatisfied_of_le _ _ _ _ _ := .intro
instance : LinearlyUpwardEnumerable Nat where
eq_of_succ?_eq a b := by simp [UpwardEnumerable.succ?]
instance : LawfulUpwardEnumerableLT Nat := inferInstance
instance : LawfulUpwardEnumerableLowerBound .closed Nat := inferInstance
instance : LawfulUpwardEnumerableUpperBound .closed Nat := inferInstance
instance : LawfulUpwardEnumerableLowerBound .open Nat := inferInstance
instance : LawfulUpwardEnumerableUpperBound .open Nat := inferInstance
instance : LawfulUpwardEnumerableLowerBound .unbounded Nat := inferInstance
instance : LawfulUpwardEnumerableUpperBound .unbounded Nat := inferInstance
instance : InfinitelyUpwardEnumerable Nat where
isSome_succ? a := by simp [UpwardEnumerable.succ?]
private def rangeRev (k : Nat) :=
match k with
| 0 => []
| k + 1 => k :: rangeRev k
private theorem mem_rangeRev {k l : Nat} (h : l < k) : l ∈ rangeRev k := by
induction k
case zero => cases h
case succ k ih =>
rw [rangeRev]
by_cases hl : l = k
· simp [hl]
· apply List.mem_cons_of_mem
exact ih (Nat.lt_of_le_of_ne (Nat.le_of_lt_succ h) hl)
@[no_expose]
instance : HasFiniteRanges .closed Nat where
finite init u := by
refine ⟨u - init + 1, ?_⟩
simp only [UpwardEnumerable.succMany?, SupportsUpperBound.IsSatisfied, Nat.not_le,
Option.elim_some]
omega
@[no_expose]
instance : HasFiniteRanges .open Nat where
finite init u := by
refine ⟨u - init, ?_⟩
simp only [UpwardEnumerable.succMany?, SupportsUpperBound.IsSatisfied, Option.elim_some]
omega
instance : RangeSize .closed Nat where
size bound a := bound + 1 - a
-instance : RangeSize .open Nat where
-  size bound a := bound - a
+instance : RangeSize .open Nat := .openOfClosed
instance : LawfulRangeSize .closed Nat where
size_eq_zero_of_not_isSatisfied upperBound init hu := by
@@ -135,17 +73,16 @@ instance : LawfulRangeSize .closed Nat where
Option.some.injEq] at hu h
omega
instance : LawfulRangeSize .open Nat where
size_eq_zero_of_not_isSatisfied upperBound init hu := by
simp only [SupportsUpperBound.IsSatisfied, RangeSize.size] at hu
omega
size_eq_one_of_succ?_eq_none upperBound init hu h := by
simp only [UpwardEnumerable.succ?] at h
cases h
size_eq_succ_of_succ?_eq_some upperBound init hu h := by
simp only [SupportsUpperBound.IsSatisfied, RangeSize.size, UpwardEnumerable.succ?,
Option.some.injEq] at hu h
omega
instance : LawfulRangeSize .open Nat := inferInstance
instance : HasFiniteRanges .closed Nat := inferInstance
instance : HasFiniteRanges .open Nat := inferInstance
instance : LinearlyUpwardEnumerable Nat := by
exact instLinearlyUpwardEnumerableOfTotalLeOfLawfulUpwardEnumerableOfLawfulUpwardEnumerableLE
/-!
The following instances are used for the implementation of array slices a.k.a. `Subarray`.
See also `Init.Data.Slice.Array`.
-/
instance : ClosedOpenIntersection ⟨.open, .open⟩ Nat where
intersection r s := PRange.mk (max (r.lower + 1) s.lower) (min r.upper s.upper)


@@ -495,7 +495,7 @@ noncomputable def superpose_ac_cert (r₁ c r₂ lhs₁ rhs₁ lhs₂ rhs₂ lhs
Given `lhs₁ = rhs₁` and `lhs₂ = rhs₂` where `lhs₁ := union c r₁` and `lhs₂ := union c r₂`,
`lhs = rhs` where `lhs := union r₂ rhs₁` and `rhs := union r₁ rhs₂`
-/
theorem superpose_ac {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op} {inst₂ : Std.Commutative ctx.op} (r₁ c r₂ lhs₁ rhs₁ lhs₂ rhs₂ lhs rhs : Seq)
theorem superpose_ac {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op} {inst₂ : Std.Commutative ctx.op} (r₁ c r₂ lhs₁ rhs₁ lhs₂ rhs₂ lhs rhs : Seq)
: superpose_ac_cert r₁ c r₂ lhs₁ rhs₁ lhs₂ rhs₂ lhs rhs → lhs₁.denote ctx = rhs₁.denote ctx → lhs₂.denote ctx = rhs₂.denote ctx
→ lhs.denote ctx = rhs.denote ctx := by
simp [superpose_ac_cert]; intro _ _ _ _; subst lhs₁ lhs₂ lhs rhs; simp
@@ -506,6 +506,107 @@ theorem superpose_ac {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op}
apply congrArg (ctx.op (c.denote ctx))
rw [Std.Commutative.comm (self := inst₂) (r₂.denote ctx)]
noncomputable def Seq.contains_k (s : Seq) (x : Var) : Bool :=
Seq.rec (fun y => Nat.beq x y) (fun y _ ih => Bool.or' (Nat.beq x y) ih) s
theorem Seq.contains_k_var (y x : Var) : Seq.contains_k (.var y) x = (x == y) := by
simp [Seq.contains_k]; rw [Bool.eq_iff_iff]; simp
theorem Seq.contains_k_cons (y x : Var) (s : Seq) : Seq.contains_k (.cons y s) x = (x == y || s.contains_k x) := by
show (Nat.beq x y |>.or' (s.contains_k x)) = (x == y || s.contains_k x)
simp; rw [Bool.eq_iff_iff]; simp
attribute [local simp] Seq.contains_k_var Seq.contains_k_cons
theorem Seq.denote_insert_of_contains {α} (ctx : Context α) [inst₁ : Std.Associative ctx.op] [inst₂ : Std.Commutative ctx.op] [inst₃ : Std.IdempotentOp ctx.op]
(s : Seq) (x : Var) : s.contains_k x → (s.insert x).denote ctx = s.denote ctx := by
induction s
next => simp; intro; subst x; rw [Std.IdempotentOp.idempotent (self := inst₃)]
next y s ih =>
simp; intro h; cases h
next => subst x; rw [← Std.Associative.assoc (self := inst₁), Std.IdempotentOp.idempotent (self := inst₃)]
next h =>
replace ih := ih h
simp at ih
rw [← Std.Associative.assoc (self := inst₁), Std.Commutative.comm (self := inst₂) (x.denote ctx)]
rw [Std.Associative.assoc (self := inst₁), ih]
noncomputable def superpose_ac_idempotent_cert (x : Var) (lhs₁ rhs₁ rhs : Seq) : Bool :=
lhs₁.contains_k x |>.and' (rhs.beq' (rhs₁.insert x))
/-!
Remark: see Section 4.1 of the paper "MODULARITY, COMBINATION, AC CONGRUENCE CLOSURE" to understand why
`superpose_ac_idempotent` is needed.
-/
theorem superpose_ac_idempotent {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op} {inst₂ : Std.Commutative ctx.op} {inst₃ : Std.IdempotentOp ctx.op}
(x : Var) (lhs₁ rhs₁ rhs : Seq) : superpose_ac_idempotent_cert x lhs₁ rhs₁ rhs → lhs₁.denote ctx = rhs₁.denote ctx → lhs₁.denote ctx = rhs.denote ctx := by
simp [superpose_ac_idempotent_cert]; intro h₁ _ h₂; subst rhs
replace h₂ : Seq.denote ctx (lhs₁.insert x) = Seq.denote ctx (rhs₁.insert x) := by
simp [h₂]
rw [← h₂, Seq.denote_insert_of_contains ctx lhs₁ x h₁]
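The idempotent superposition step can be sketched informally (a hedged restatement of the lemma above, using the insert/denote notation of this file): if the variable $x$ already occurs in $\mathit{lhs}_1$, inserting $x$ changes nothing up to the idempotent AC operator, so the certified equation follows by congruence.

```latex
x \in \mathit{lhs}_1
  \;\Longrightarrow\;
\mathit{lhs}_1
  \,=\, \mathrm{insert}\;x\;\mathit{lhs}_1   % idempotence of the operator
  \,=\, \mathrm{insert}\;x\;\mathit{rhs}_1   % congruence from lhs1 = rhs1
```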
noncomputable def Seq.startsWithVar_k (s : Seq) (x : Var) : Bool :=
Seq.rec (fun y => Nat.beq x y) (fun y _ _ => Nat.beq x y) s
theorem Seq.startsWithVar_k_var (y x : Var) : Seq.startsWithVar_k (.var y) x = (x == y) := by
simp [startsWithVar_k]; rw [Bool.eq_iff_iff]; simp
theorem Seq.startsWithVar_k_cons (y x : Var) (s : Seq) : Seq.startsWithVar_k (.cons y s) x = (x == y) := by
simp [startsWithVar_k]; rw [Bool.eq_iff_iff]; simp
attribute [local simp] Seq.startsWithVar_k_var Seq.startsWithVar_k_cons
theorem Seq.denote_concat_of_startsWithVar {α} (ctx : Context α) [inst₁ : Std.Associative ctx.op] [inst₂ : Std.IdempotentOp ctx.op]
(s : Seq) (x : Var) : s.startsWithVar_k x → (concat_k (.var x) s).denote ctx = s.denote ctx := by
cases s <;> simp <;> intro <;> subst x
next => rw [Std.IdempotentOp.idempotent (self := inst₂)]
next => rw [← Std.Associative.assoc (self := inst₁), Std.IdempotentOp.idempotent (self := inst₂)]
noncomputable def superpose_head_idempotent_cert (x : Var) (lhs₁ rhs₁ rhs : Seq) : Bool :=
lhs₁.startsWithVar_k x |>.and' (rhs.beq' (Seq.concat (.var x) rhs₁))
/--
`superpose_ac_idempotent` for the non-commutative case. This is the "head"-case
-/
theorem superpose_head_idempotent {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op} {inst₂ : Std.IdempotentOp ctx.op}
(x : Var) (lhs₁ rhs₁ rhs : Seq) : superpose_head_idempotent_cert x lhs₁ rhs₁ rhs → lhs₁.denote ctx = rhs₁.denote ctx → lhs₁.denote ctx = rhs.denote ctx := by
simp [superpose_head_idempotent_cert]; intro h₁ _ h₂; subst rhs
replace h₂ : Seq.denote ctx (Seq.concat (.var x) lhs₁) = Seq.denote ctx (Seq.concat (.var x) rhs₁) := by
simp [h₂]
rw [← h₂, ← Seq.concat_k_eq_concat, Seq.denote_concat_of_startsWithVar ctx lhs₁ x h₁]
noncomputable def Seq.endsWithVar_k (s : Seq) (x : Var) : Bool :=
Seq.rec (fun y => Nat.beq x y) (fun _ _ ih => ih) s
theorem Seq.endsWithVar_k_var (y x : Var) : Seq.endsWithVar_k (.var y) x = (x == y) := by
simp [Seq.endsWithVar_k]; rw [Bool.eq_iff_iff]; simp
theorem Seq.endsWithVar_k_cons (y x : Var) (s : Seq) : Seq.endsWithVar_k (.cons y s) x = s.endsWithVar_k x := rfl
attribute [local simp] Seq.endsWithVar_k_var Seq.endsWithVar_k_cons
theorem Seq.denote_concat_of_endsWithVar {α} (ctx : Context α) [inst₁ : Std.Associative ctx.op] [inst₂ : Std.IdempotentOp ctx.op]
(s : Seq) (x : Var) : s.endsWithVar_k x → (s.concat_k (.var x)).denote ctx = s.denote ctx := by
induction s
next => simp; intro; subst x; rw [Std.IdempotentOp.idempotent (self := inst₂)]
next ih =>
simp; intro h; replace ih := ih h
simp at ih; rw [Std.Associative.assoc (self := inst₁), ih]
noncomputable def superpose_tail_idempotent_cert (x : Var) (lhs₁ rhs₁ rhs : Seq) : Bool :=
lhs₁.endsWithVar_k x |>.and' (rhs.beq' (Seq.concat rhs₁ (.var x)))
/--
`superpose_ac_idempotent` for the non-commutative case. It is similar to `superpose_head_idempotent` but for the "tail"-case
-/
theorem superpose_tail_idempotent {α} (ctx : Context α) {inst₁ : Std.Associative ctx.op} {inst₂ : Std.IdempotentOp ctx.op}
(x : Var) (lhs₁ rhs₁ rhs : Seq) : superpose_tail_idempotent_cert x lhs₁ rhs₁ rhs → lhs₁.denote ctx = rhs₁.denote ctx → lhs₁.denote ctx = rhs.denote ctx := by
simp [superpose_tail_idempotent_cert]; intro h₁ _ h₂; subst rhs
replace h₂ : Seq.denote ctx (Seq.concat lhs₁ (.var x)) = Seq.denote ctx (Seq.concat rhs₁ (.var x)) := by
simp [h₂]
rw [← h₂, ← Seq.concat_k_eq_concat, Seq.denote_concat_of_endsWithVar ctx lhs₁ x h₁]
noncomputable def eq_norm_a_cert (lhs rhs : Expr) (lhs' rhs' : Seq) : Bool :=
lhs.toSeq.beq' lhs' |>.and' (rhs.toSeq.beq' rhs')
@@ -578,4 +679,37 @@ noncomputable def diseq_unsat_cert (lhs rhs : Seq) : Bool :=
theorem diseq_unsat {α} (ctx : Context α) (lhs rhs : Seq) : diseq_unsat_cert lhs rhs → lhs.denote ctx ≠ rhs.denote ctx → False := by
simp [diseq_unsat_cert]; intro; subst lhs; simp
theorem norm_a {α} (ctx : Context α) {_ : Std.Associative ctx.op} (e : Expr) (s : Seq)
: e.toSeq.beq' s → e.denote ctx = s.denote ctx := by
simp; intro _; subst s; simp
theorem norm_ac {α} (ctx : Context α) {_ : Std.Associative ctx.op} {_ : Std.Commutative ctx.op} (e : Expr) (s : Seq)
: e.toSeq.sort.beq' s → e.denote ctx = s.denote ctx := by
simp; intro _; subst s; simp
theorem norm_ai {α} (ctx : Context α) {_ : Std.Associative ctx.op} {_ : Std.LawfulIdentity ctx.op (Var.denote ctx 0)}
(e : Expr) (s : Seq) : e.toSeq.erase0.beq' s → e.denote ctx = s.denote ctx := by
simp; intro _; subst s; simp
theorem norm_aci {α} (ctx : Context α) {_ : Std.Associative ctx.op} {_ : Std.Commutative ctx.op} {_ : Std.LawfulIdentity ctx.op (Var.denote ctx 0)}
(e : Expr) (s : Seq) : e.toSeq.erase0.sort.beq' s → e.denote ctx = s.denote ctx := by
simp; intro _; subst s; simp
theorem eq_erase0_rhs {α} (ctx : Context α) {_ : Std.Associative ctx.op} {_ : Std.LawfulIdentity ctx.op (Var.denote ctx 0)}
(lhs rhs rhs' : Seq) : rhs.erase0.beq' rhs' → lhs.denote ctx = rhs.denote ctx → lhs.denote ctx = rhs'.denote ctx := by
simp; intro _ _; subst rhs'; simp [*]
theorem eq_erase_dup_rhs {α} (ctx : Context α) {_ : Std.Associative ctx.op} {_ : Std.IdempotentOp ctx.op}
(lhs rhs rhs' : Seq) : rhs.eraseDup.beq' rhs' → lhs.denote ctx = rhs.denote ctx → lhs.denote ctx = rhs'.denote ctx := by
simp; intro _ _; subst rhs'; simp [*]
theorem eq_expr_seq_seq {α} (ctx : Context α) (e : Expr) (s₁ s₂ : Seq) : e.denote ctx = s₁.denote ctx → s₁.denote ctx = s₂.denote ctx → e.denote ctx = s₂.denote ctx := by
apply Eq.trans
theorem imp_eq {α} (ctx : Context α) (lhs rhs : Expr) (s : Seq)
: lhs.denote ctx = s.denote ctx → rhs.denote ctx = s.denote ctx → lhs.denote ctx = rhs.denote ctx := by
simp_all
theorem refl {α} (ctx : Context α) (s : Seq) : s.denote ctx = s.denote ctx := (rfl)
end Lean.Grind.AC


@@ -95,7 +95,7 @@ Ordinarily, the grind attribute does not consider the `=` symbol when generating
-/
syntax grindEqBwd := patternIgnore(atomic("←" "=") <|> atomic("<-" "="))
/--
The `←` modifier instructs `grind` to select a multi-pattern from the conclusion of theorem.
The `←` modifier instructs `grind` to select a multi-pattern from the conclusion of the theorem.
In other words, `grind` will use the theorem for backwards reasoning.
This may fail if not all of the arguments to the theorem appear in the conclusion.
-/
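A hedged usage sketch of the `←` modifier described above (the theorem name is made up for illustration): all arguments of the lemma occur in its conclusion, so `grind` can build a multi-pattern from it and apply it backwards.

```lean
-- Hypothetical example: every argument of `my_add_comm` appears in the
-- conclusion `a + b = b + a`, so the `←` modifier is applicable.
@[grind ←] theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```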


@@ -10,7 +10,13 @@ prelude
public import Init.ByCases
public import Init.RCases
public import Init.Control.Except -- for `MonoBind` instance
public import Init.Control.StateRef -- for `MonoBind` instance
public import Init.Control.Option -- for `MonoBind` instance
public import Init.System.IO -- for `MonoBind` instance
import all Init.Control.Except -- for `MonoBind` instance
import all Init.Control.StateRef -- for `MonoBind` instance
import all Init.Control.Option -- for `MonoBind` instance
import all Init.System.IO -- for `MonoBind` instance
public section
@@ -790,14 +796,14 @@ on `m` is monotone in both arguments with regard to that order.
This is intended to be used in the construction of `partial_fixpoint`, and not meant to be used otherwise.
-/
class MonoBind (m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] where
bind_mono_left {a₁ a₂ : m α} {f : α → m b} (h : a₁ ⊑ a₂) : a₁ >>= f ⊑ a₂ >>= f
bind_mono_right {a : m α} {f₁ f₂ : α → m b} (h : ∀ x, f₁ x ⊑ f₂ x) : a >>= f₁ ⊑ a >>= f₂
bind_mono_left {a₁ a₂ : m α} {f : α → m β} (h : a₁ ⊑ a₂) : a₁ >>= f ⊑ a₂ >>= f
bind_mono_right {a : m α} {f₁ f₂ : α → m β} (h : ∀ x, f₁ x ⊑ f₂ x) : a >>= f₁ ⊑ a >>= f₂
@[partial_fixpoint_monotone]
theorem monotone_bind
(m : Type u → Type v) [Bind m] [∀ α, PartialOrder (m α)] [MonoBind m]
{α β : Type u}
{γ : Type w} [PartialOrder γ]
{γ : Sort w} [PartialOrder γ]
(f : γ → m α) (g : γ → α → m β)
(hmono₁ : monotone f)
(hmono₂ : monotone g) :
@@ -823,9 +829,9 @@ theorem Option.admissible_eq_some (P : Prop) (y : α) :
admissible (fun (x : Option α) => x = some y → P) := by
apply admissible_flatOrder; simp
instance [Monad m] [inst : ∀ α, PartialOrder (m α)] : PartialOrder (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [inst : ∀ α, CCPO (m α)] : CCPO (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [∀ α, CCPO (m α)] [MonoBind m] : MonoBind (ExceptT ε m) where
instance [inst : ∀ α, PartialOrder (m α)] : PartialOrder (ExceptT ε m α) := inst _
instance [inst : ∀ α, CCPO (m α)] : CCPO (ExceptT ε m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] : MonoBind (ExceptT ε m) where
bind_mono_left h₁₂ := by
apply MonoBind.bind_mono_left (m := m)
exact h₁₂
@@ -836,6 +842,112 @@ instance [Monad m] [∀ α, PartialOrder (m α)] [∀ α, CCPO (m α)] [MonoBind
· apply PartialOrder.rel_refl
· apply h₁₂
@[partial_fixpoint_monotone]
theorem monotone_exceptTRun [PartialOrder γ]
[Monad m] [∀ α, PartialOrder (m α)]
(f : γ → ExceptT ε m α) (hmono : monotone f) :
monotone (fun (x : γ) => ExceptT.run (f x)) :=
hmono
instance [inst : ∀ α, PartialOrder (m α)] : PartialOrder (OptionT m α) := inst _
instance [inst : ∀ α, CCPO (m α)] : CCPO (OptionT m α) := inst _
instance [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] : MonoBind (OptionT m) where
bind_mono_left h₁₂ := by
apply MonoBind.bind_mono_left (m := m)
exact h₁₂
bind_mono_right h₁₂ := by
apply MonoBind.bind_mono_right (m := m)
intro x
cases x
· apply PartialOrder.rel_refl
· apply h₁₂
@[partial_fixpoint_monotone]
theorem monotone_optionTRun [PartialOrder γ]
[Monad m] [∀ α, PartialOrder (m α)]
(f : γ → OptionT m α) (hmono : monotone f) :
monotone (fun (x : γ) => OptionT.run (f x)) :=
hmono
instance [inst : PartialOrder (m α)] : PartialOrder (ReaderT ρ m α) := instOrderPi
instance [inst : CCPO (m α)] : CCPO (ReaderT ρ m α) := instCCPOPi
instance [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] : MonoBind (ReaderT ρ m) where
bind_mono_left h₁₂ := by
intro x
apply MonoBind.bind_mono_left (m := m)
exact h₁₂ x
bind_mono_right h₁₂ := by
intro x
apply MonoBind.bind_mono_right (m := m)
intro y
apply h₁₂
@[partial_fixpoint_monotone]
theorem monotone_readerTRun [PartialOrder γ]
[Monad m] [PartialOrder (m α)]
(f : γ → ReaderT σ m α) (hmono : monotone f) (s : σ) :
monotone (fun (x : γ) => ReaderT.run (f x) s) :=
monotone_apply s _ hmono
instance [inst : PartialOrder (m α)] : PartialOrder (StateRefT' ω σ m α) := instOrderPi
instance [inst : CCPO (m α)] : CCPO (StateRefT' ω σ m α) := instCCPOPi
instance [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] : MonoBind (StateRefT' ω σ m) :=
inferInstanceAs (MonoBind (ReaderT _ _))
@[partial_fixpoint_monotone]
theorem monotone_stateRefT'Run [PartialOrder γ]
[Monad m] [MonadLiftT (ST ω) m] [∀ α, PartialOrder (m α)] [MonoBind m]
(f : γ → StateRefT' ω σ m α) (hmono : monotone f) (s : σ) :
monotone (fun (x : γ) => StateRefT'.run (f x) s) := by
apply monotone_bind
· apply monotone_const
· refine monotone_of_monotone_apply _ fun ref => ?_
apply monotone_bind
· exact monotone_apply _ _ hmono
· apply monotone_const
instance [inst : ∀ α, PartialOrder (m α)] : PartialOrder (StateT σ m α) := instOrderPi
instance [inst : ∀ α, CCPO (m α)] : CCPO (StateT σ m α) := instCCPOPi
instance [Monad m] [∀ α, PartialOrder (m α)] [MonoBind m] : MonoBind (StateT ρ m) where
bind_mono_left h₁₂ := by
intro x
apply MonoBind.bind_mono_left (m := m)
exact h₁₂ x
bind_mono_right h₁₂ := by
intro x
apply MonoBind.bind_mono_right (m := m)
intro y
apply h₁₂
@[partial_fixpoint_monotone]
theorem monotone_stateTRun [PartialOrder γ]
[Monad m] [∀ α, PartialOrder (m α)]
(f : γ → StateT σ m α) (hmono : monotone f) (s : σ) :
monotone (fun (x : γ) => StateT.run (f x) s) :=
monotone_apply s _ hmono
-- TODO: axiomatize these instances (ideally without `Nonempty ε`) when EIO is opaque
noncomputable instance [Nonempty ε] : CCPO (EIO ε α) :=
inferInstanceAs (CCPO ((s : _) → FlatOrder (.error Classical.ofNonempty (Classical.choice s))))
noncomputable instance [Nonempty ε] : MonoBind (EIO ε) where
bind_mono_left {_ _ a₁ a₂ f} h₁₂ := by
intro s
specialize h₁₂ s
change FlatOrder.rel (a₁.bind f s) (a₂.bind f s)
simp only [EStateM.bind]
generalize a₁ s = a₁ at h₁₂; generalize a₂ s = a₂ at h₁₂
cases h₁₂
· exact .bot
· exact .refl
bind_mono_right {_ _ a f₁ f₂} h₁₂ := by
intro w
change FlatOrder.rel (a.bind f₁ w) (a.bind f₂ w)
simp only [EStateM.bind]
split
· apply h₁₂
· exact .refl
end mono_bind
section implication_order


@@ -4545,7 +4545,7 @@ abbrev scientificLitKind : SyntaxNodeKind := `scientific
/-- `` `name `` is the node kind of name literals like `` `foo ``. -/
abbrev nameLitKind : SyntaxNodeKind := `name
/-- `` `fieldIdx ` is the node kind of projection indices like the `2` in `x.2`. -/
/-- `` `fieldIdx `` is the node kind of projection indices like the `2` in `x.2`. -/
abbrev fieldIdxKind : SyntaxNodeKind := `fieldIdx
/--


@@ -165,8 +165,10 @@ def registerTagAttribute (name : Name) (descr : String)
mkInitial := pure {}
addImportedFn := fun _ _ => pure {}
addEntryFn := fun (s : NameSet) n => s.insert n
exportEntriesFn := fun es =>
exportEntriesFnEx := fun env es _ =>
let r : Array Name := es.foldl (fun a e => a.push e) #[]
-- Do not export info for private defs
let r := r.filter (env.contains (skipRealize := false))
r.qsort Name.quickLt
statsFn := fun s => "tag attribute" ++ Format.line ++ "number of local entries: " ++ format s.size
asyncMode := asyncMode
@@ -232,8 +234,10 @@ def registerParametricAttribute (impl : ParametricAttributeImpl α) : IO (Parame
mkInitial := pure {}
addImportedFn := fun s => impl.afterImport s *> pure {}
addEntryFn := fun (s : NameMap α) (p : Name × α) => s.insert p.1 p.2
exportEntriesFn := fun m =>
exportEntriesFnEx := fun env m _ =>
let r : Array (Name × α) := m.foldl (fun a n p => a.push (n, p)) #[]
-- Do not export info for private defs
let r := r.filter (env.contains (skipRealize := false) ·.1)
r.qsort (fun a b => Name.quickLt a.1 b.1)
statsFn := fun s => "parametric attribute" ++ Format.line ++ "number of local entries: " ++ format s.size
}
@@ -289,8 +293,10 @@ def registerEnumAttributes (attrDescrs : List (Name × String × α))
mkInitial := pure {}
addImportedFn := fun _ _ => pure {}
addEntryFn := fun (s : NameMap α) (p : Name × α) => s.insert p.1 p.2
exportEntriesFn := fun m =>
exportEntriesFnEx := fun env m _ =>
let r : Array (Name × α) := m.foldl (fun a n p => a.push (n, p)) #[]
-- Do not export info for private defs
let r := r.filter (env.contains (skipRealize := false) ·.1)
r.qsort (fun a b => Name.quickLt a.1 b.1)
statsFn := fun s => "enumeration attribute extension" ++ Format.line ++ "number of local entries: " ++ format s.size
-- We assume (and check in `modifyState`) that, if used asynchronously, enum attributes are set


@@ -37,10 +37,11 @@ builtin_initialize ext : SimpleScopedEnvExtension Entry State ←
}
private def isConstantReplacement? (declName : Name) : CoreM (Option Entry) := do
let info ← getConstInfo declName
let info ← getConstVal declName
match info.type.eq? with
| some (_, Expr.const fromDeclName us .., Expr.const toDeclName vs ..) =>
if us == vs then
| some (_, Expr.const fromDeclName us, Expr.const toDeclName vs) =>
let set := Std.HashSet.ofList us
if set.size == us.length && set.all Level.isParam && us == vs then
return some { fromDeclName, toDeclName, thmName := declName }
else
return none


@@ -153,14 +153,13 @@ partial def lowerLet (decl : LCNF.LetDecl) (k : LCNF.Code) : M FnBody := do
lowerCode k
| .const name _ args =>
let irArgs ← args.mapM lowerArg
if let some code ← tryIrDecl? name irArgs then
return code
if let some decl ← findDecl name then
return (← mkApplication name decl.params.size irArgs)
if let some decl ← LCNF.getMonoDecl? name then
return (← mkApplication name decl.params.size irArgs)
let env ← Lean.getEnv
match env.find? name with
| some (.ctorInfo ctorVal) =>
if isExtern env name then
return (← mkFap name irArgs)
let type ← nameToIRType ctorVal.induct
if type.isScalar then
let var ← bindVar decl.fvarId
@@ -244,18 +243,14 @@ where
return .vdecl tmpVar .object (.fap name firstArgs) <|
.vdecl var type (.ap tmpVar restArgs) (← lowerCode k)
tryIrDecl? (name : Name) (args : Array Arg) : M (Option FnBody) := do
if let some decl ← LCNF.getMonoDecl? name then
let numArgs := args.size
let numParams := decl.params.size
if numArgs < numParams then
return some (← mkPap name args)
else if numArgs == numParams then
return some (← mkFap name args)
else
return some (← mkOverApplication name numParams args)
mkApplication (name : Name) (numParams : Nat) (args : Array Arg) : M FnBody := do
let numArgs := args.size
if numArgs < numParams then
mkPap name args
else if numArgs == numParams then
mkFap name args
else
return none
mkOverApplication name numParams args
partial def lowerAlt (discr : VarId) (a : LCNF.Alt) : M Alt := do
match a with


@@ -21,7 +21,7 @@ inductive AssocList (α : Type u) (β : Type v) where
deriving Inhabited
namespace AssocList
variable {α : Type u} {β : Type v} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : Type v} {δ : Type w} {m : Type w → Type w'} [Monad m]
abbrev empty : AssocList α β :=
nil


@@ -48,7 +48,7 @@ variable {_ : BEq α} {_ : Hashable α}
@[inline] def contains (s : PersistentHashSet α) (a : α) : Bool :=
s.set.contains a
@[inline] def foldM {β : Type v} {m : Type v → Type v} [Monad m] (f : β → α → m β) (init : β) (s : PersistentHashSet α) : m β :=
@[inline] def foldM {β : Type v} {m : Type v → Type w} [Monad m] (f : β → α → m β) (init : β) (s : PersistentHashSet α) : m β :=
s.set.foldlM (init := init) fun d a _ => f d a
@[inline] def fold {β : Type v} (f : β → α → β) (init : β) (s : PersistentHashSet α) : β :=


@@ -231,7 +231,7 @@ inductive WellFormed (cmp : α → α → Ordering) : RBNode α β → Prop where
section Map
@[specialize] def mapM {α : Type v} {β γ : α → Type v} {M : Type v → Type v} [Applicative M]
@[specialize] def mapM {α : Type v} {β γ : α → Type v} {M : Type v → Type w} [Applicative M]
(f : (a : α) → β a → M (γ a))
: RBNode α β → M (RBNode α γ)
| leaf => pure leaf


@@ -92,7 +92,7 @@ def switch (m : SMap α β) : SMap α β :=
m.map₂.foldl f s
/-- Monadic fold over a staged map. -/
def foldM {m : Type w → Type w} [Monad m]
def foldM {m : Type w → Type w'} [Monad m]
(f : σ → α → β → m σ) (init : σ) (map : SMap α β) : m σ := do
map.map₂.foldlM f (← map.map₁.foldM f init)


@@ -6,21 +6,32 @@ Authors: Leonardo de Moura
module
prelude
public import Lean.Data.Options
import Lean.Meta.Transform
import Lean.Meta.Inductive
import Lean.Elab.Deriving.Basic
import Lean.Elab.Deriving.Util
import Lean.Meta.NatTable
import Lean.Meta.Constructions.CtorIdx
import Lean.Meta.Constructions.CtorElim
import Lean.Meta.Constructions.CasesOnSameCtor
namespace Lean.Elab.Deriving.DecEq
open Lean.Parser.Term
open Meta
register_builtin_option deriving.decEq.linear_construction_threshold : Nat := {
defValue := 10
descr := "If the inductive data type has this many or more constructors, use a different \
implementation for deciding equality that avoids the quadratic code size produced by the \
default implementation.\n\n\
The alternative construction compiles to less efficient code in some cases, so by default \
it is only used for inductive types with 10 or more constructors." }
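The option registered above can be overridden per declaration. A hedged usage sketch (the inductive type is made up for illustration): lowering the threshold below the constructor count forces the linear construction even for a small type.

```lean
-- Hypothetical example: `Dir` has 4 constructors, so a threshold of 2
-- makes `deriving DecidableEq` use the linear construction.
set_option deriving.decEq.linear_construction_threshold 2 in
inductive Dir | north | south | east | west
  deriving DecidableEq
```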
def mkDecEqHeader (indVal : InductiveVal) : TermElabM Header := do
mkHeader `DecidableEq 2 indVal
def mkMatch (ctx : Context) (header : Header) (indVal : InductiveVal) : TermElabM Term := do
def mkMatchOld (ctx : Context) (header : Header) (indVal : InductiveVal) : TermElabM Term := do
let discrs ← mkDiscrs header indVal
let alts ← mkAlts
`(match $[$discrs],* with $alts:matchAlt*)
@@ -91,6 +102,76 @@ where
alts := alts.push ( `(matchAltExpr| | $[$patterns:term],* => $rhs:term))
return alts
def mkMatchNew (ctx : Context) (header : Header) (indVal : InductiveVal) : TermElabM Term := do
assert! header.targetNames.size == 2
let x1 := mkIdent header.targetNames[0]!
let x2 := mkIdent header.targetNames[1]!
let ctorIdxName := mkCtorIdxName indVal.name
-- NB: the getMatcherInfo? assumes all matchers are called `match_`
let casesOnSameCtorName ← mkFreshUserName (indVal.name ++ `match_on_same_ctor)
mkCasesOnSameCtor casesOnSameCtorName indVal.name
let alts ← Array.ofFnM (n := indVal.numCtors) fun ⟨ctorIdx, _⟩ => do
let ctorName := indVal.ctors[ctorIdx]!
let ctorInfo ← getConstInfoCtor ctorName
forallTelescopeReducing ctorInfo.type fun xs type => do
let type ← Core.betaReduce type -- we 'beta-reduce' to eliminate "artificial" dependencies
let mut ctorArgs1 : Array Term := #[]
let mut ctorArgs2 : Array Term := #[]
let mut todo := #[]
for i in *...ctorInfo.numFields do
let x := xs[indVal.numParams + i]!
if type.containsFVar x.fvarId! then
-- If resulting type depends on this field, we don't need to bring it into
-- scope nor compare it
ctorArgs1 := ctorArgs1.push (← `(_))
else
let a := mkIdent (← mkFreshUserName `a)
let b := mkIdent (← mkFreshUserName `b)
ctorArgs1 := ctorArgs1.push a
ctorArgs2 := ctorArgs2.push b
let xType ← inferType x
let indValNum :=
ctx.typeInfos.findIdx?
(xType.isAppOf ∘ ConstantVal.name ∘ InductiveVal.toConstantVal)
let recField := indValNum.map (ctx.auxFunNames[·]!)
let isProof ← isProp xType
todo := todo.push (a, b, recField, isProof)
let rhs ← mkSameCtorRhs todo.toList
`(@fun $ctorArgs1:term* $ctorArgs2:term* =>$rhs:term)
if indVal.numCtors == 1 then
`( $(mkCIdent casesOnSameCtorName) $x1:term $x2:term rfl $alts:term* )
else
`( if h : $(mkCIdent ctorIdxName) $x1:ident = $(mkCIdent ctorIdxName) $x2:ident then
$(mkCIdent casesOnSameCtorName) $x1:term $x2:term h $alts:term*
else
isFalse (fun h' => h (congrArg $(mkCIdent ctorIdxName) h')))
where
mkSameCtorRhs : List (Ident × Ident × Option Name × Bool) → TermElabM Term
| [] => ``(isTrue rfl)
| (a, b, recField, isProof) :: todo => withFreshMacroScope do
let rhs ← if isProof then
`(have h : @$a = @$b := rfl; by subst h; exact $(← mkSameCtorRhs todo):term)
else
let sameCtor ← mkSameCtorRhs todo
`(if h : @$a = @$b then
by subst h; exact $sameCtor:term
else
isFalse (by intro n; injection n; apply h _; assumption))
if let some auxFunName := recField then
-- add local instance for `a = b` using the function being defined `auxFunName`
`(let inst := $(mkIdent auxFunName) @$a @$b; $rhs)
else
return rhs
def mkMatch (ctx : Context) (header : Header) (indVal : InductiveVal) : TermElabM Term := do
if indVal.numCtors ≥ deriving.decEq.linear_construction_threshold.get (← getOptions) then
mkMatchNew ctx header indVal
else
mkMatchOld ctx header indVal
def mkAuxFunction (ctx : Context) (auxFunName : Name) (indVal : InductiveVal): TermElabM (TSyntax `command) := do
let header ← mkDecEqHeader indVal
let body ← mkMatch ctx header indVal


@@ -29,7 +29,10 @@ structure EqnInfo extends EqnInfoCore where
fixpointType : Array PartialFixpointType
deriving Inhabited
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ← mkMapDeclarationExtension
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ←
mkMapDeclarationExtension (exportEntriesFn := fun env s _ =>
-- Do not export for non-exposed defs
s.filter (fun n _ => env.find? n |>.any (·.hasValue)) |>.toArray)
def registerEqnsInfo (preDefs : Array PreDefinition) (declNameNonRec : Name)
(fixedParamPerms : FixedParamPerms) (fixpointType : Array PartialFixpointType): MetaM Unit := do


@@ -197,6 +197,7 @@ private partial def replaceRecApps (recArgInfos : Array RecArgInfo) (positions :
mkLambdaFVars xs (← loop belowForAlt altBody)
pure { matcherApp with alts := altsNew }.toExpr
else
trace[Elab.definition.structural] "`matcherApp.addArg?` failed"
processApp e
| none => processApp e
| e =>


@@ -104,7 +104,10 @@ where
}
inferDefEqAttr name
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ← mkMapDeclarationExtension
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ←
mkMapDeclarationExtension (exportEntriesFn := fun env s _ =>
-- Do not export for non-exposed defs
s.filter (fun n _ => env.find? n |>.any (·.hasValue)) |>.toArray)
def registerEqnsInfo (preDef : PreDefinition) (declNames : Array Name) (recArgPos : Nat)
(fixedParamPerms : FixedParamPerms) : CoreM Unit := do


@@ -27,7 +27,10 @@ structure EqnInfo extends EqnInfoCore where
fixedParamPerms : FixedParamPerms
deriving Inhabited
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ← mkMapDeclarationExtension
builtin_initialize eqnInfoExt : MapDeclarationExtension EqnInfo ←
mkMapDeclarationExtension (exportEntriesFn := fun env s _ =>
-- Do not export for non-exposed defs
s.filter (fun n _ => env.find? n |>.any (·.hasValue)) |>.toArray)
def registerEqnsInfo (preDefs : Array PreDefinition) (declNameNonRec : Name) (fixedParamPerms : FixedParamPerms)
(argsPacker : ArgsPacker) : MetaM Unit := do


@@ -429,11 +429,11 @@ def elabSyntax (stx : Syntax) : CommandElabM Name := do
discard <| elabSyntax stx
@[builtin_command_elab «syntaxAbbrev»] def elabSyntaxAbbrev : CommandElab := fun stx => do
let `($[$doc?:docComment]? syntax $declName:ident := $[$ps:stx]*) ← pure stx | throwUnsupportedSyntax
let `($[$doc?:docComment]? $[$vis?:visibility]? syntax $declName:ident := $[$ps:stx]*) ← pure stx | throwUnsupportedSyntax
-- TODO: nonatomic names
let (val, _) ← runTermElabM fun _ => Term.toParserDescr (mkNullNode ps) Name.anonymous
let stxNodeKind := (← getCurrNamespace) ++ declName.getId
let stx' ← `($[$doc?:docComment]? meta def $declName:ident : Lean.ParserDescr := ParserDescr.nodeWithAntiquot $(quote (toString declName.getId)) $(quote stxNodeKind) $val)
let stx' ← `($[$doc?:docComment]? $[$vis?:visibility]? meta def $declName:ident : Lean.ParserDescr := ParserDescr.nodeWithAntiquot $(quote (toString declName.getId)) $(quote stxNodeKind) $val)
withMacroExpansion stx stx' <| elabCommand stx'
def checkRuleKind (given expected : SyntaxNodeKind) : Bool :=


@@ -91,7 +91,13 @@ def elabGrindParams (params : Grind.Params) (ps : TSyntaxArray ``Parser.Tactic.
else
params := { params with ematch := (← params.ematch.eraseDecl declName) }
| `(Parser.Tactic.grindParam| $[$mod?:grindMod]? $id:ident) =>
let declName ← realizeGlobalConstNoOverloadWithInfo id
let declName ← try
realizeGlobalConstNoOverloadWithInfo id
catch err =>
if (← resolveLocalName id.getId).isSome then
throwErrorAt id "redundant parameter `{id}`, `grind` uses local hypotheses automatically"
else
throw err
let kind ← if let some mod := mod? then Grind.getAttrKindCore mod else pure .infer
match kind with
| .ematch .user =>


@@ -24,6 +24,9 @@ so we keep it in the `Lean/Elab/Tactic/Omega` directory
-/
open List
namespace Lean.Elab.Tactic.Omega
namespace List
@@ -37,22 +40,22 @@ def nonzeroMinimum (xs : List Nat) : Nat := xs.filter (· ≠ 0) |>.min? |>.getD
open Classical in
@[simp] theorem nonzeroMinimum_eq_zero_iff {xs : List Nat} :
xs.nonzeroMinimum = 0 ↔ ∀ x ∈ xs, x = 0 := by
nonzeroMinimum xs = 0 ↔ ∀ x ∈ xs, x = 0 := by
simp [nonzeroMinimum, Option.getD_eq_iff, min?_eq_none_iff, min?_eq_some_iff,
filter_eq_nil_iff, mem_filter]
theorem nonzeroMinimum_mem {xs : List Nat} (w : xs.nonzeroMinimum ≠ 0) :
xs.nonzeroMinimum ∈ xs := by
theorem nonzeroMinimum_mem {xs : List Nat} (w : nonzeroMinimum xs ≠ 0) :
nonzeroMinimum xs ∈ xs := by
dsimp [nonzeroMinimum] at *
generalize h : (xs.filter (· ≠ 0) |>.min?) = m at *
match m, w with
| some (m+1), _ => simp_all [min?_eq_some_iff, mem_filter]
theorem nonzeroMinimum_pos {xs : List Nat} (m : a ∈ xs) (h : a ≠ 0) : 0 < xs.nonzeroMinimum :=
theorem nonzeroMinimum_pos {xs : List Nat} (m : a ∈ xs) (h : a ≠ 0) : 0 < nonzeroMinimum xs :=
Nat.pos_iff_ne_zero.mpr fun w => h (nonzeroMinimum_eq_zero_iff.mp w _ m)
theorem nonzeroMinimum_le {xs : List Nat} (m : a ∈ xs) (h : a ≠ 0) : xs.nonzeroMinimum ≤ a := by
have : (xs.filter (· ≠ 0) |>.min?) = some xs.nonzeroMinimum := by
theorem nonzeroMinimum_le {xs : List Nat} (m : a ∈ xs) (h : a ≠ 0) : nonzeroMinimum xs ≤ a := by
have : (xs.filter (· ≠ 0) |>.min?) = some (nonzeroMinimum xs) := by
have w := nonzeroMinimum_pos m h
dsimp [nonzeroMinimum] at *
generalize h : (xs.filter (· ≠ 0) |>.min?) = m? at *
@@ -64,7 +67,7 @@ theorem nonzeroMinimum_le {xs : List Nat} (m : a ∈ xs) (h : a ≠ 0) : xs.nonz
exact ⟨m, h⟩
theorem nonzeroMinimum_eq_nonzero_iff {xs : List Nat} {y : Nat} (h : y ≠ 0) :
xs.nonzeroMinimum = y ↔ y ∈ xs ∧ (∀ x ∈ xs, y ≤ x ∨ x = 0) := by
nonzeroMinimum xs = y ↔ y ∈ xs ∧ (∀ x ∈ xs, y ≤ x ∨ x = 0) := by
constructor
· rintro rfl
constructor
@@ -76,7 +79,7 @@ theorem nonzeroMinimum_eq_nonzero_iff {xs : List Nat} {y : Nat} (h : y ≠ 0) :
· rintro ⟨m, w⟩
apply Nat.le_antisymm
· exact nonzeroMinimum_le m h
· have nz : xs.nonzeroMinimum ≠ 0 := by
· have nz : nonzeroMinimum xs ≠ 0 := by
apply Nat.pos_iff_ne_zero.mp
apply nonzeroMinimum_pos m h
specialize w (nonzeroMinimum xs) (nonzeroMinimum_mem nz)
@@ -84,18 +87,18 @@ theorem nonzeroMinimum_eq_nonzero_iff {xs : List Nat} {y : Nat} (h : y ≠ 0) :
| inl h => exact h
| inr h => exact False.elim (nz h)
theorem nonzeroMinimum_eq_of_nonzero {xs : List Nat} (h : xs.nonzeroMinimum ≠ 0) :
∃ x ∈ xs, xs.nonzeroMinimum = x :=
⟨xs.nonzeroMinimum, ((nonzeroMinimum_eq_nonzero_iff h).mp rfl).1, rfl⟩
theorem nonzeroMinimum_eq_of_nonzero {xs : List Nat} (h : nonzeroMinimum xs ≠ 0) :
∃ x ∈ xs, nonzeroMinimum xs = x :=
⟨nonzeroMinimum xs, ((nonzeroMinimum_eq_nonzero_iff h).mp rfl).1, rfl⟩
theorem nonzeroMinimum_le_iff {xs : List Nat} {y : Nat} :
xs.nonzeroMinimum ≤ y ↔ xs.nonzeroMinimum = 0 ∨ ∃ x ∈ xs, x ≤ y ∧ x ≠ 0 := by
nonzeroMinimum xs ≤ y ↔ nonzeroMinimum xs = 0 ∨ ∃ x ∈ xs, x ≤ y ∧ x ≠ 0 := by
refine ⟨fun h => ?_, fun h => ?_⟩
· rw [Classical.or_iff_not_imp_right]
simp only [ne_eq, not_exists, not_and, Classical.not_not, nonzeroMinimum_eq_zero_iff]
intro w
apply nonzeroMinimum_eq_zero_iff.mp
if p : xs.nonzeroMinimum = 0 then
if p : nonzeroMinimum xs = 0 then
exact p
else
exact w _ (nonzeroMinimum_mem p) h
@@ -106,9 +109,9 @@ theorem nonzeroMinimum_le_iff {xs : List Nat} {y : Nat} :
theorem nonzeroMinimum_map_le_nonzeroMinimum (f : α → β) (p : α → Nat) (q : β → Nat) (xs : List α)
(h : ∀ a, a ∈ xs → (p a = 0 ↔ q (f a) = 0))
(w : ∀ a, a ∈ xs → p a ≠ 0 → q (f a) ≤ p a) :
((xs.map f).map q).nonzeroMinimum ≤ (xs.map p).nonzeroMinimum := by
nonzeroMinimum ((xs.map f).map q) ≤ nonzeroMinimum (xs.map p) := by
rw [nonzeroMinimum_le_iff]
if z : (xs.map p).nonzeroMinimum = 0 then
if z : nonzeroMinimum (xs.map p) = 0 then
rw [nonzeroMinimum_eq_zero_iff]
simp_all
else
@@ -126,16 +129,20 @@ The minimum absolute value of a nonzero entry, or zero if all entries are zero.
We completely characterize the function via
`minNatAbs_eq_zero_iff` and `minNatAbs_eq_nonzero_iff` below.
-/
def minNatAbs (xs : List Int) : Nat := xs.map Int.natAbs |>.nonzeroMinimum
def minNatAbs (xs : List Int) : Nat := xs.map Int.natAbs |> nonzeroMinimum
@[simp] theorem minNatAbs_eq_zero_iff {xs : List Int} : xs.minNatAbs = 0 ↔ ∀ y ∈ xs, y = 0 := by
@[simp] theorem minNatAbs_eq_zero_iff {xs : List Int} : minNatAbs xs = 0 ↔ ∀ y ∈ xs, y = 0 := by
simp [minNatAbs]
theorem minNatAbs_eq_nonzero_iff (xs : List Int) (w : z ≠ 0) :
xs.minNatAbs = z ↔ (∃ y ∈ xs, y.natAbs = z) ∧ (∀ y ∈ xs, z ≤ y.natAbs ∨ y = 0) := by
minNatAbs xs = z ↔ (∃ y ∈ xs, y.natAbs = z) ∧ (∀ y ∈ xs, z ≤ y.natAbs ∨ y = 0) := by
simp [minNatAbs, nonzeroMinimum_eq_nonzero_iff w]
@[simp] theorem minNatAbs_nil : ([] : List Int).minNatAbs = 0 := (rfl)
@[simp] theorem minNatAbs_nil : minNatAbs ([] : List Int) = 0 := (rfl)
/-- The maximum absolute value in a list of integers. -/
def maxNatAbs (xs : List Int) : Nat := xs.map Int.natAbs |>.max? |>.getD 0
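A few hypothetical sanity checks for these definitions (illustration only, assuming `minNatAbs` and `maxNatAbs` are in scope); the expected values follow directly from the `map`/`filter`/`min?` pipeline:

```lean
-- minNatAbs: minimum *nonzero* absolute value, or 0 if all entries are zero
#eval minNatAbs [2, -3, 0]        -- 2
#eval minNatAbs ([] : List Int)   -- 0
#eval minNatAbs [0, 0]            -- 0
-- maxNatAbs: maximum absolute value, or 0 for the empty list
#eval maxNatAbs [2, -3, 0]        -- 3
```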
end List
end Lean.Elab.Tactic.Omega


@@ -125,7 +125,8 @@ deriving Inhabited
def mkMapDeclarationExtension (name : Name := by exact decl_name%)
(asyncMode : EnvExtension.AsyncMode := .async .mainEnv)
(exportEntriesFn : Environment → NameMap α → OLeanLevel → Array (Name × α) :=
fun _ s _ => s.toArray) :
-- Do not export info for private defs by default
fun env s _ => s.toArray.filter (fun (n, _) => env.contains (skipRealize := false) n)) :
IO (MapDeclarationExtension α) :=
.mk <$> registerPersistentEnvExtension {
name := name,


@@ -1769,6 +1769,7 @@ private def looksLikeOldCodegenName : Name → Bool
private opaque getIRExtraConstNames : Environment → OLeanLevel → Array Name
def mkModuleData (env : Environment) (level : OLeanLevel := .private) : IO ModuleData := do
let env := env.setExporting (level != .private)
let pExts ← persistentEnvExtensionsRef.get
let entries := pExts.map fun pExt => Id.run do
-- get state from `checked` at the end if `async`; it would otherwise panic
@@ -1778,7 +1779,6 @@ def mkModuleData (env : Environment) (level : OLeanLevel := .private) : IO Modul
let state := pExt.getState (asyncMode := asyncMode) env
(pExt.name, pExt.exportEntriesFn env state level)
let kenv := env.toKernelEnv
let env := env.setExporting (level != .private)
let constNames := kenv.constants.foldStage2 (fun names name _ => names.push name) #[]
-- not all kernel constants may be exported at `level < .private`
let constants := if level == .private then
@@ -2362,6 +2362,9 @@ def replayConsts (dest : Environment) (oldEnv newEnv : Environment) (skipExistin
consts.add c
}
checked := dest.checked.map fun kenv => replayKernel exts newPrivateConsts kenv |>.toOption.getD kenv
allRealizations := dest.allRealizations.map (sync := true) fun allRealizations =>
newPrivateConsts.foldl (init := allRealizations) fun allRealizations c =>
allRealizations.insert c.constInfo.name c
}
where
replayKernel (exts : Array (EnvExtension EnvExtensionState)) (consts : List AsyncConst)


@@ -10,5 +10,6 @@ public import Lean.Meta.Constructions.CasesOn
public import Lean.Meta.Constructions.NoConfusion
public import Lean.Meta.Constructions.RecOn
public import Lean.Meta.Constructions.BRecOn
public import Lean.Meta.Constructions.CasesOnSameCtor
public section


@@ -0,0 +1,248 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joachim Breitner
-/
module
prelude
public import Lean.Meta.Basic
import Lean.AddDecl
import Lean.Meta.AppBuilder
import Lean.Meta.CompletionName
import Lean.Meta.Constructions.CtorIdx
import Lean.Meta.Constructions.CtorElim
import Lean.Elab.App
/-!
See `mkCasesOnSameCtor` below.
-/
namespace Lean
open Meta
/--
Helper for `mkCasesOnSameCtor` that constructs a heterogeneous matcher (indices may differ)
and does not yet include the equality proof in the motive (so it does not yet have the shape of a matcher).
-/
public def mkCasesOnSameCtorHet (declName : Name) (indName : Name) : MetaM Unit := do
let ConstantInfo.inductInfo info ← getConstInfo indName | unreachable!
let casesOnName := mkCasesOnName indName
let casesOnInfo ← getConstVal casesOnName
let v::us := casesOnInfo.levelParams.map mkLevelParam | panic! "unexpected universe levels on `casesOn`"
let e ← forallBoundedTelescope casesOnInfo.type info.numParams fun params t =>
forallBoundedTelescope t (some 1) fun _ t => -- ignore motive
forallBoundedTelescope t (some (info.numIndices + 1)) fun ism1 _ =>
forallBoundedTelescope t (some (info.numIndices + 1)) fun ism2 _ => do
let motiveType ← mkForallFVars (ism1 ++ ism2) (mkSort v)
withLocalDecl `motive .implicit motiveType fun motive => do
let altTypes ← info.ctors.toArray.mapIdxM fun i ctorName => do
let ctor := mkAppN (mkConst ctorName us) params
let ctorType ← inferType ctor
forallTelescope ctorType fun zs1 ctorRet1 => do
let ctorApp1 := mkAppN ctor zs1
let ctorRet1 ← whnf ctorRet1
let is1 : Array Expr := ctorRet1.getAppArgs[info.numParams:]
let ism1 := is1.push ctorApp1
forallTelescope ctorType fun zs2 ctorRet2 => do
let ctorApp2 := mkAppN ctor zs2
let ctorRet2 ← whnf ctorRet2
let is2 : Array Expr := ctorRet2.getAppArgs[info.numParams:]
let ism2 := is2.push ctorApp2
let e := mkAppN motive (ism1 ++ ism2)
let e ← mkForallFVars (zs1 ++ zs2) e
let name := match ctorName with
| Name.str _ s => Name.mkSimple s
| _ => Name.mkSimple s!"alt{i+1}"
return (name, e)
withLocalDeclsDND altTypes fun alts => do
let ctorApp1 := mkAppN (mkConst (mkCtorIdxName indName) us) (params ++ ism1)
let ctorApp2 := mkAppN (mkConst (mkCtorIdxName indName) us) (params ++ ism2)
let heqType ← mkEq ctorApp1 ctorApp2
let heqType' ← mkEq ctorApp2 ctorApp1
withLocalDeclD `h heqType fun heq => do
let motive1 ← mkLambdaFVars ism1 (← mkArrow heqType' (mkAppN motive (ism1 ++ ism2)))
let e := mkConst casesOnInfo.name (v :: us)
let e := mkAppN e params
let e := mkApp e motive1
let e := mkAppN e ism1
let alts1 ← info.ctors.toArray.mapIdxM fun i ctorName => do
let ctor := mkAppN (mkConst ctorName us) params
let ctorType ← inferType ctor
forallTelescope ctorType fun zs1 ctorRet1 => do
let ctorApp1 := mkAppN ctor zs1
let ctorRet1 ← whnf ctorRet1
let is1 : Array Expr := ctorRet1.getAppArgs[info.numParams:]
let ism1 := is1.push ctorApp1
-- Here we let the typechecker reduce the `ctorIdx` application
let heq := mkApp3 (mkConst ``Eq [1]) (mkConst ``Nat) ctorApp2 (mkRawNatLit i)
withLocalDeclD `h heq fun h => do
let motive2 ← mkLambdaFVars ism2 (mkAppN motive (ism1 ++ ism2))
let alt ← forallTelescope ctorType fun zs2 _ => do
mkLambdaFVars zs2 <| mkAppN alts[i]! (zs1 ++ zs2)
let e := if info.numCtors = 1 then
let casesOn := mkConst (mkCasesOnName indName) (v :: us)
mkAppN casesOn (params ++ #[motive2] ++ ism2 ++ #[alt])
else
let casesOn := mkConst (mkConstructorElimName indName ctorName) (v :: us)
mkAppN casesOn (params ++ #[motive2] ++ ism2 ++ #[h, alt])
mkLambdaFVars (zs1.push h) e
let e := mkAppN e alts1
let e := mkApp e (← mkEqSymm heq)
mkLambdaFVars (params ++ #[motive] ++ ism1 ++ ism2 ++ #[heq] ++ alts) e
addAndCompile (.defnDecl (← mkDefinitionValInferringUnsafe
(name := declName)
(levelParams := casesOnInfo.levelParams)
(type := (← inferType e))
(value := e)
(hints := ReducibilityHints.abbrev)
))
modifyEnv fun env => markAuxRecursor env declName
modifyEnv fun env => addToCompletionBlackList env declName
modifyEnv fun env => addProtected env declName
Elab.Term.elabAsElim.setTag declName
setReducibleAttribute declName
def withSharedIndices (ctor : Expr) (k : Array Expr → Expr → Expr → MetaM α) : MetaM α := do
let ctorType ← inferType ctor
forallTelescopeReducing ctorType fun zs ctorRet => do
let ctor1 := mkAppN ctor zs
let rec go ctor2 todo acc := do
match todo with
| [] => k acc ctor1 ctor2
| z::todo' =>
if ctorRet.containsFVar z.fvarId! then
go (mkApp ctor2 z) todo' acc
else
let t ← whnfForall (← inferType ctor2)
assert! t.isForall
withLocalDeclD t.bindingName! t.bindingDomain! fun z' => do
go (mkApp ctor2 z') todo' (acc.push z')
go ctor zs.toList zs
/--
This constructs a matcher for a match statement that matches on the constructors of
a data type in parallel. So if `h : x1.ctorIdx = x2.ctorIdx`, then it implements
```
match x1, x2, h with
| ctor1 .. , ctor1 .. , _ => ...
| ctor2 .. , ctor2 .. , _ => ...
```
The normal matcher supports such matches, but implements them using nested `casesOn`, which
leads to a quadratic blow-up. This function uses the per-constructor eliminators to implement this
more efficiently.
This is useful for implementing or deriving functionality like `BEq`, `DecidableEq`, `Ord` and
proving their lawfulness.
One could imagine a future where `match` compilation is smart enough to do that automatically; then
this module can be dropped.
Note that for some data types where the indices determine the constructor (e.g. `Vec`), this leads
to less efficient code than the normal matcher, as this needs to read the constructor tag on both
arguments, whereas the normal matcher produces code that reads just the first argument's tag, and
then boldly reads the second argument's fields.
-/
public def mkCasesOnSameCtor (declName : Name) (indName : Name) : MetaM Unit := do
let ConstantInfo.inductInfo info ← getConstInfo indName | unreachable!
let casesOnSameCtorHet := declName ++ `het
mkCasesOnSameCtorHet casesOnSameCtorHet indName
let casesOnName := mkCasesOnName indName
let casesOnInfo ← getConstVal casesOnName
let v::us := casesOnInfo.levelParams.map mkLevelParam | panic! "unexpected universe levels on `casesOn`"
forallBoundedTelescope casesOnInfo.type info.numParams fun params t =>
let t0 := t.bindingBody! -- ignore motive
forallBoundedTelescope t0 (some info.numIndices) fun is t =>
forallBoundedTelescope t (some 1) fun x1 _ =>
forallBoundedTelescope t (some 1) fun x2 _ => do
let x1 := x1[0]!
let x2 := x2[0]!
let ctorApp1 := mkAppN (mkConst (mkCtorIdxName indName) us) (params ++ is ++ #[x1])
let ctorApp2 := mkAppN (mkConst (mkCtorIdxName indName) us) (params ++ is ++ #[x2])
let heqType ← mkEq ctorApp1 ctorApp2
withLocalDeclD `h heqType fun heq => do
let motiveType ← mkForallFVars (is ++ #[x1,x2,heq]) (mkSort v)
withLocalDecl `motive .implicit motiveType fun motive => do
let altTypes ← info.ctors.toArray.mapIdxM fun i ctorName => do
let ctor := mkAppN (mkConst ctorName us) params
withSharedIndices ctor fun zs12 ctorApp1 ctorApp2 => do
let ctorRet1 ← whnf (← inferType ctorApp1)
let is : Array Expr := ctorRet1.getAppArgs[info.numParams:]
let e := mkAppN motive (is ++ #[ctorApp1, ctorApp2, (← mkEqRefl (mkNatLit i))])
let e ← mkForallFVars zs12 e
let name := match ctorName with
| Name.str _ s => Name.mkSimple s
| _ => Name.mkSimple s!"alt{i+1}"
return (name, e)
withLocalDeclsDND altTypes fun alts => do
forallBoundedTelescope t0 (some (info.numIndices + 1)) fun ism1' _ =>
forallBoundedTelescope t0 (some (info.numIndices + 1)) fun ism2' _ => do
let (motive', newRefls) ←
withNewEqs (is.push x1) ism1' fun newEqs1 newRefls1 => do
withNewEqs (is.push x2) ism2' fun newEqs2 newRefls2 => do
let motive' := mkAppN motive (is ++ #[x1, x2, heq])
let motive' ← mkForallFVars (newEqs1 ++ newEqs2) motive'
let motive' ← mkLambdaFVars (ism1' ++ ism2') motive'
return (motive', newRefls1 ++ newRefls2)
let casesOn2 := mkConst casesOnSameCtorHet (v :: us)
let casesOn2 := mkAppN casesOn2 params
let casesOn2 := mkApp casesOn2 motive'
let casesOn2 := mkAppN casesOn2 (is ++ #[x1] ++ is ++ #[x2])
let casesOn2 := mkApp casesOn2 heq
let altTypes' ← inferArgumentTypesN info.numCtors casesOn2
let alts' ← info.ctors.toArray.mapIdxM fun i ctorName => do
let ctor := mkAppN (mkConst ctorName us) params
let ctorType ← inferType ctor
forallTelescope ctorType fun zs1 _ctorRet1 => do
forallTelescope ctorType fun zs2 _ctorRet2 => do
let altType ← instantiateForall altTypes'[i]! (zs1 ++ zs2)
let alt ← mkFreshExprSyntheticOpaqueMVar altType
let goal := alt.mvarId!
let some (goal, _) ← Cases.unifyEqs? newRefls.size goal {}
| throwError "unifyEqns? unexpectedly closed goal"
let [] ← goal.apply alts[i]!
| throwError "could not apply {alts[i]!} to close\n{goal}"
mkLambdaFVars (zs1 ++ zs2) (← instantiateMVars alt)
let casesOn2 := mkAppN casesOn2 alts'
let casesOn2 := mkAppN casesOn2 newRefls
let e ← mkLambdaFVars (params ++ #[motive] ++ is ++ #[x1,x2] ++ #[heq] ++ alts) casesOn2
let decl := .defnDecl (← mkDefinitionValInferringUnsafe
(name := declName)
(levelParams := casesOnInfo.levelParams)
(type := (← inferType e))
(value := e)
(hints := ReducibilityHints.abbrev)
)
let matcherInfo : MatcherInfo := {
numParams := info.numParams
numDiscrs := info.numIndices + 3
altNumParams := altTypes.map (·.2.getNumHeadForalls)
uElimPos? := some 0
discrInfos := #[{}, {}, {}]}
-- Compare attributes with `mkMatcherAuxDefinition`
addDecl decl
Elab.Term.elabAsElim.setTag declName
Match.addMatcherInfo declName matcherInfo
setInlineAttribute declName
-- Pragmatic hack:
-- Normally a matcher is not marked as an aux recursor. We still do that here
-- because this makes the elaborator unfold it more eagerly, it seems,
-- and this works around issues with the structural recursion equation generator
-- (see #10195).
modifyEnv fun env => markAuxRecursor env declName
enableRealizationsForConst declName
compileDecl decl
end Lean
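To make the intent of `mkCasesOnSameCtor` concrete, here is a hypothetical sketch (the inductive `Tri` and the schematic match are illustrations, not generated code):

```lean
inductive Tri | a | b | c

-- Given `x y : Tri` and `h : x.ctorIdx = y.ctorIdx`, the generated matcher
-- corresponds to matching only the diagonal cases:
--
--   match x, y, h with
--   | .a, .a, _ => ...
--   | .b, .b, _ => ...
--   | .c, .c, _ => ...
--
-- A naive compilation via nested `casesOn` would produce 3 × 3 = 9 cases
-- (quadratic in the number of constructors); the per-constructor
-- eliminators keep the construction linear.
```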


@@ -87,6 +87,9 @@ builtin_initialize extension : SimplePersistentEnvExtension Entry State ←
addEntryFn := State.addEntry
addImportedFn := fun es => (mkStateFromImportedEntries State.addEntry {} es).switch
asyncMode := .async .mainEnv
exportEntriesFnEx? := some fun env _ entries _ =>
-- Do not export info for private defs
entries.filter (env.contains (skipRealize := false) ·.name) |>.toArray
}
def addMatcherInfo (env : Environment) (matcherName : Name) (info : MatcherInfo) : Environment :=


@@ -27,4 +27,5 @@ builtin_initialize registerTraceClass `grind.debug.ac.simp
builtin_initialize registerTraceClass `grind.debug.ac.check
builtin_initialize registerTraceClass `grind.debug.ac.queue
builtin_initialize registerTraceClass `grind.debug.ac.superpose
builtin_initialize registerTraceClass `grind.debug.ac.eq
end Lean


@@ -254,8 +254,61 @@ private def EqCnstr.superposeWithA (c₁ : EqCnstr) : ACM Unit := do
c₁.superposeA c₂
c₂.superposeA c₁
/--
Similar to `addToQueue`, but checks whether `erase0` can be applied to `c.rhs`.
This function is used to implement the extra superposition steps performed when the
operator is idempotent.
-/
private def EqCnstr.addToQueue' (c : EqCnstr) : ACM Unit := do
if (← hasNeutral) && c.rhs.contains 0 then
(← c.erase0).addToQueue
else
c.addToQueue
/--
If the operator is idempotent, we have to add extra critical pairs.
See section 4.1 of the paper "MODULARITY, COMBINATION, AC CONGRUENCE CLOSURE".
The idea is the following: given `c` of the form `lhs = rhs`, for each variable `x`
in `lhs` such that `x` is not in `rhs`, we add the equation `lhs = rhs.union {x}`.
Note that the paper does not include the condition that `x` is not in `rhs`, but this
extra filter is correct: when `x` is already in `rhs`, the equation
`lhs = rhs.union {x}` would be discarded after normalization and simplification anyway.
-/
private def EqCnstr.superposeACIdempotent (c : EqCnstr) : ACM Unit := do
go c.lhs
where
goVar (x : AC.Var) : ACM Unit := do
unless c.rhs.contains x do
let c ← mkEqCnstr c.lhs (c.rhs.insert x) (.superpose_ac_idempotent x c)
c.addToQueue'
go (s : AC.Seq) : ACM Unit := do
match s with
| .var x => goVar x
| .cons x s => goVar x; go s
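A concrete instance of the extra equation (illustration only, using `Nat.max` as an idempotent AC operator): with `lhs = x ∘ y` and `rhs = z`, the variable `x` yields `x ∘ y = z ∘ x`, which is sound because `x ∘ y = (x ∘ x) ∘ y = x ∘ (x ∘ y) = x ∘ z`:

```lean
-- `max` is associative, commutative, and idempotent on `Nat`.
example (x y z : Nat) (h : max x y = z) : max x y = max z x := by
  omega
```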
/--
Similar to `superposeACIdempotent`, but for the non-commutative case
-/
private def EqCnstr.superposeAIdempotent (c : EqCnstr) : ACM Unit := do
let x := c.lhs.firstVar
unless c.rhs.startsWithVar x do
let c ← mkEqCnstr c.lhs ((AC.Seq.var x) ++ c.rhs) (.superpose_head_idempotent x c)
c.addToQueue'
let x := c.lhs.lastVar
unless c.rhs.endsWithVar x do
let c ← mkEqCnstr c.lhs (c.rhs ++ (AC.Seq.var x)) (.superpose_tail_idempotent x c)
c.addToQueue'
private def EqCnstr.superposeWith (c : EqCnstr) : ACM Unit := do
if (← isCommutative) then c.superposeWithAC else c.superposeWithA
if c.lhs matches .var _ then return ()
if (← isCommutative) then
c.superposeWithAC
if (← isIdempotent) then c.superposeACIdempotent
else
c.superposeWithA
if (← isIdempotent) then c.superposeAIdempotent
private def EqCnstr.simplifyBasis (c : EqCnstr) : ACM Unit := do
let rec go (basis : List EqCnstr) (acc : List EqCnstr) : ACM (List EqCnstr) := do
@@ -347,9 +400,80 @@ private def checkDiseqs : ACM Unit := do
c.assert
if (← isInconsistent) then return
/-- Simplifies the right-hand-side of the given equation. -/
def EqCnstr.simplifyRHS (c : EqCnstr) : ACM EqCnstr := do
let comm ← isCommutative
let idempotent ← isIdempotent
let neutral ← hasNeutral
let mut c := c
let mut progress := true
while progress do
progress := false
incSteps
if (← checkMaxSteps) then return c
if neutral && c.rhs.contains 0 then
let rhs := c.rhs.erase0
if rhs != c.rhs then
c := { c with rhs, h := .erase0_rhs c }
progress := true
if idempotent then
let rhs := c.rhs.eraseDup
if rhs != c.rhs then
c := { c with rhs, h := .erase_dup_rhs c }
progress := true
if comm then
if let some (c', r) ← c.rhs.findSimpAC? then
c := c.simplifyRhsWithAC c' r
progress := true
else
if let some (c', r) ← c.rhs.findSimpA? then
c := c.simplifyRhsWithA c' r
progress := true
return c
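The two RHS rewrites used above can be illustrated on `Nat.max` (illustration only): `erase0` drops the neutral element `0`, and `eraseDup` drops duplicates under idempotence, so a right-hand side of the shape `x ∘ 0 ∘ x` simplifies to `x`:

```lean
example (x : Nat) : max x (max 0 x) = x := by
  omega
```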
/--
Data for equality propagation. We maintain a mapping from sequences to `EqData`
-/
structure EqData where
e : Expr
r : AC.Expr
c : EqCnstr
def mkEqData (e : Expr) (r : AC.Expr) : ACM EqData := do
let s ← norm r
let c ← mkEqCnstr s s (.refl s)
let c ← c.simplifyRHS
return { e, r, c }
abbrev PropagateEqMap := Std.HashMap AC.Seq EqData
private def propagateEqs : ACM Unit := do
-- TODO
return ()
if (← isInconsistent) then return ()
/-
This is a very simple procedure that does not use any indexing data-structure.
We don't even cache the simplified expressions.
TODO: optimize
-/
let mut map : PropagateEqMap := {}
for e in (← getStruct).vars do
if (← checkMaxSteps) then return ()
let r ← asACExpr e
map ← process map e r
for (e, r) in (← getStruct).denoteEntries do
if (← checkMaxSteps) then return ()
map ← process map e r
where
process (map : PropagateEqMap) (e : Expr) (r : AC.Expr) : ACM PropagateEqMap := do
let d ← mkEqData e r
let s := d.c.rhs
trace[grind.debug.ac.eq] "{e}, s: {← s.denoteExpr}"
if let some d' := map[s]? then
trace[grind.debug.ac.eq] "found [{← isEqv d.e d'.e}]: {d.e}, {d'.e}"
unless (← isEqv d.e d'.e) do
propagateEq d.e d'.e d.r d'.r d.c d'.c
return map
else
return map.insert s d
private def checkStruct : ACM Bool := do
unless (← needCheck) do return false
@@ -375,6 +499,8 @@ def check : GoalM Bool := do profileitM Exception "grind ac" (← getOptions) do
let r ← ACM.run opId checkStruct
progress := progress || r
if (← isInconsistent) then return true
if progress then
processNewFacts
return progress
finally
checkInvariants


@@ -32,7 +32,11 @@ def internalizeImpl (e : Expr) (parent? : Option Expr) : GoalM Unit := do
ACM.run id do
if (← getStruct).denote.contains { expr := e } then return ()
let e' ← reify e
modifyStruct fun s => { s with denote := s.denote.insert { expr := e } e' }
modifyStruct fun s => { s with
denote := s.denote.insert { expr := e } e'
denoteEntries := s.denoteEntries.push (e, e')
recheck := true -- new equalities may be found by normalization
}
trace[grind.ac.internalize] "[{id}] {← e'.denoteExpr}"
addTermOpId e
markAsACTerm e


@@ -17,6 +17,7 @@ open Lean.Grind
structure ProofM.State where
cache : Std.HashMap UInt64 Expr := {}
varDecls : Std.HashMap AC.Var Expr := {}
exprDecls : Std.HashMap AC.Expr Expr := {}
seqDecls : Std.HashMap AC.Seq Expr := {}
@@ -45,6 +46,9 @@ local macro "declare! " decls:ident a:ident : term =>
modify fun s => { s with $decls:ident := (s.$decls).insert $a x };
return x)
private def mkVarDecl (x : AC.Var) : ProofM Expr := do
declare! varDecls x
private def mkSeqDecl (s : AC.Seq) : ProofM Expr := do
declare! seqDecls s
@@ -83,10 +87,14 @@ private def mkACIPrefix (declName : Name) : ProofM Expr := do
let s ← getStruct
return mkApp5 (mkConst declName [s.u]) s.type (← getContext) s.assocInst (← getCommInst) (← getNeutralInst)
private def mkDupPrefix (declName : Name) : ProofM Expr := do
private def mkADPrefix (declName : Name) : ProofM Expr := do
let s ← getStruct
return mkApp4 (mkConst declName [s.u]) s.type (← getContext) s.assocInst (← getIdempotentInst)
private def mkACDPrefix (declName : Name) : ProofM Expr := do
let s ← getStruct
return mkApp5 (mkConst declName [s.u]) s.type (← getContext) s.assocInst (← getCommInst) (← getIdempotentInst)
private def toContextExpr (vars : Array Expr) : ACM Expr := do
let s ← getStruct
if h : 0 < vars.size then
@@ -95,13 +103,18 @@ private def toContextExpr (vars : Array Expr) : ACM Expr := do
private def mkContext (h : Expr) : ProofM Expr := do
let s ← getStruct
let varDecls := (← get).varDecls
let mut usedVars :=
collectMapVars varDecls collectVar >>
collectMapVars (← get).seqDecls (·.collectVars) >>
collectMapVars (← get).exprDecls (·.collectVars) >>
(if (← hasNeutral) then (collectVar 0) else id) <| {}
let vars' := usedVars.toArray
let varRename := mkVarRename vars'
let vars := (← getStruct).vars
let varFVars := vars'.map fun x => varDecls[x]?.getD default
let varIdsAsExpr := List.range vars'.size |>.toArray |>.map toExpr
let h := h.replaceFVars varFVars varIdsAsExpr
let up := mkApp (mkConst ``PLift.up [s.u]) s.type
let vars := vars'.map fun x => mkApp up vars[x]!
let h := mkLetOfMap (← get).seqDecls h `s (mkConst ``Grind.AC.Seq) fun p => toExpr <| p.renameVars varRename
@@ -130,12 +143,21 @@ partial def EqCnstr.toExprProof (c : EqCnstr) : ProofM Expr := do caching c do
| true, false => mkACPrefix ``AC.eq_norm_ac
| true, true => mkACIPrefix ``AC.eq_norm_aci
return mkApp6 h (← mkExprDecl lhs) (← mkExprDecl rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← mkEqProof a b)
| .refl s =>
let h ← mkPrefix ``AC.refl
return mkApp h (← mkSeqDecl s)
| .erase_dup c₁ =>
let h ← mkDupPrefix ``AC.eq_erase_dup
let h ← mkADPrefix ``AC.eq_erase_dup
return mkApp6 h (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c₁.toExprProof)
| .erase_dup_rhs c₁ =>
let h ← mkADPrefix ``AC.eq_erase_dup_rhs
return mkApp5 h (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c₁.toExprProof)
| .erase0 c₁ =>
let h ← mkAIPrefix ``AC.eq_erase0
return mkApp6 h (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c₁.toExprProof)
| .erase0_rhs c₁ =>
let h ← mkAIPrefix ``AC.eq_erase0_rhs
return mkApp5 h (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c₁.toExprProof)
| .simp_exact isLhs c₁ c₂ =>
let h ← mkPrefix <| if isLhs then ``AC.eq_simp_lhs_exact else ``AC.eq_simp_rhs_exact
let o := if isLhs then c₂.rhs else c₂.lhs
@@ -173,6 +195,18 @@ partial def EqCnstr.toExprProof (c : EqCnstr) : ProofM Expr := do caching c do
let h := mkApp9 h (← mkSeqDecl p) (← mkSeqDecl common) (← mkSeqDecl s) (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs)
(← mkSeqDecl c₂.lhs) (← mkSeqDecl c₂.rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs)
return mkApp3 h eagerReflBoolTrue (← c₁.toExprProof) (← c₂.toExprProof)
| .superpose_ac_idempotent x c₁ =>
let h ← mkACDPrefix ``AC.superpose_ac_idempotent
let h := mkApp4 h (← mkVarDecl x) (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.rhs)
return mkApp2 h eagerReflBoolTrue (← c₁.toExprProof)
| .superpose_head_idempotent x c₁ =>
let h ← mkADPrefix ``AC.superpose_head_idempotent
let h := mkApp4 h (← mkVarDecl x) (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.rhs)
return mkApp2 h eagerReflBoolTrue (← c₁.toExprProof)
| .superpose_tail_idempotent x c₁ =>
let h ← mkADPrefix ``AC.superpose_tail_idempotent
let h := mkApp4 h (← mkVarDecl x) (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.rhs)
return mkApp2 h eagerReflBoolTrue (← c₁.toExprProof)
partial def DiseqCnstr.toExprProof (c : DiseqCnstr) : ProofM Expr := do caching c do
match c.h with
@@ -184,7 +218,7 @@ partial def DiseqCnstr.toExprProof (c : DiseqCnstr) : ProofM Expr := do caching
| true, true => mkACIPrefix ``AC.diseq_norm_aci
return mkApp6 h (← mkExprDecl lhs) (← mkExprDecl rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← mkDiseqProof a b)
| .erase_dup c₁ =>
let h ← mkDupPrefix ``AC.diseq_erase_dup
let h ← mkADPrefix ``AC.diseq_erase_dup
return mkApp6 h (← mkSeqDecl c₁.lhs) (← mkSeqDecl c₁.rhs) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c₁.toExprProof)
| .erase0 c₁ =>
let h ← mkAIPrefix ``AC.diseq_erase0
@@ -219,4 +253,33 @@ def DiseqCnstr.setUnsat (c : DiseqCnstr) : ACM Unit := do
return mkApp4 (← mkPrefix ``AC.diseq_unsat) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) eagerReflBoolTrue (← c.toExprProof)
closeGoal h
def mkExprEqSeqProof (e : AC.Expr) (c : EqCnstr) : ProofM Expr := do
let h ← match (← isCommutative), (← hasNeutral) with
| false, false => mkAPrefix ``AC.norm_a
| false, true => mkAIPrefix ``AC.norm_ai
| true, false => mkACPrefix ``AC.norm_ac
| true, true => mkACIPrefix ``AC.norm_aci
let h₁ := mkApp3 h (← mkExprDecl e) (← mkSeqDecl c.lhs) eagerReflBoolTrue
let h₂ ← c.toExprProof
let h ← mkPrefix ``AC.eq_expr_seq_seq
return mkApp5 h (← mkExprDecl e) (← mkSeqDecl c.lhs) (← mkSeqDecl c.rhs) h₁ h₂
/--
Asserts `a = b`, where
- `ea` is `a`'s reification
- `eb` is `b`'s reification
- `ca` is a proof for `sa = s` where `norm ea = sa`
- `cb` is a proof for `sb = s` where `norm eb = sb`
-/
def propagateEq (a b : Expr) (ea eb : AC.Expr) (ca cb : EqCnstr) : ACM Unit := do
assert! ca.rhs == cb.rhs
let h ← withProofContext do
let h₁ ← mkExprEqSeqProof ea ca
let h₂ ← mkExprEqSeqProof eb cb
let h ← mkPrefix ``AC.imp_eq
return mkApp5 h (← mkExprDecl ea) (← mkExprDecl eb) (← mkSeqDecl ca.rhs) h₁ h₂
let s ← getStruct
let eq := mkApp3 (mkConst ``Eq [s.u]) s.type a b
pushEq a b <| mkExpectedPropHint h eq
end Lean.Meta.Grind.AC
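The propagation implemented by `propagateEq` fires when two terms normalize to the same sequence. A minimal hand-written instance of the kind of equality it would justify (illustration only, again with `max` as an idempotent AC operator): `x ∘ (y ∘ x)` and `y ∘ x` both normalize to the sequence `x ∘ y`, so the two terms are equal.

```lean
example (x y : Nat) : max x (max y x) = max y x := by
  omega
```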


@@ -309,4 +309,24 @@ where
| .exact => none
| .prefix s => some (p.reverse, s₁, s)
def Seq.firstVar (s : Seq) : Var :=
match s with
| .var x => x
| .cons x _ => x
def Seq.startsWithVar (s : Seq) (x : Var) : Bool :=
match s with
| .var y => x == y
| .cons y _ => x == y
def Seq.lastVar (s : Seq) : Var :=
match s with
| .var x => x
| .cons _ s => s.lastVar
def Seq.endsWithVar (s : Seq) (x : Var) : Bool :=
match s with
| .var y => x == y
| .cons _ s => s.endsWithVar x
end Lean.Grind.AC


@@ -37,6 +37,13 @@ inductive EqCnstrProof where
| simp_middle (lhs : Bool) (s₁ s₂ : AC.Seq) (c₁ : EqCnstr) (c₂ : EqCnstr)
| superpose_ac (r₁ c r₂ : AC.Seq) (c₁ : EqCnstr) (c₂ : EqCnstr)
| superpose (p s c : AC.Seq) (c₁ : EqCnstr) (c₂ : EqCnstr)
| superpose_ac_idempotent (x : AC.Var) (c₁ : EqCnstr)
| superpose_head_idempotent (x : AC.Var) (c₁ : EqCnstr)
| superpose_tail_idempotent (x : AC.Var) (c₁ : EqCnstr)
-- The following constructors are for equality propagation
| refl (s : AC.Seq)
| erase_dup_rhs (c : EqCnstr)
| erase0_rhs (c : EqCnstr)
end
instance : Inhabited EqCnstrProof where
@@ -90,6 +97,8 @@ structure Struct where
varMap : PHashMap ExprPtr AC.Var := {}
/-- Mapping from Lean expressions to their representations as `AC.Expr` -/
denote : PHashMap ExprPtr AC.Expr := {}
/-- `denoteEntries` is `denote` as a `PArray` for deterministic traversal. -/
denoteEntries : PArray (Expr × AC.Expr) := {}
/-- Equations to process. -/
queue : Queue := {}
/-- Processed equations. -/


@@ -233,7 +233,12 @@ where
let arg := args[i]
trace_goal[grind.debug.canon] "[{repr (← shouldCanon pinfos i arg)}]: {arg} : {← inferType arg}"
let arg' ← match (← shouldCanon pinfos i arg) with
| .canonType => canonType e f i arg
| .canonType =>
/-
The type may have nested propositions and terms that may need to be canonicalized too.
So, we must recurse over it. See issue #10232.
-/
canonType e f i (← visit arg)
| .canonImplicit => canonImplicit e f i (← visit arg)
| .visit => visit arg
| .canonInst =>


@@ -6,6 +6,7 @@ Authors: Leonardo de Moura
module
prelude
public import Lean.Meta.Tactic.Grind.Types
import Lean.Util.CollectLevelMVars
import Lean.Meta.Tactic.Grind.Core
import Lean.Meta.Tactic.Grind.MatchDiscrOnly
import Lean.Meta.Tactic.Grind.ProveEq
@@ -332,6 +333,95 @@ private def annotateMatchEqnType (prop : Expr) (initApp : Expr) : M Expr := do
let lhs ← markAsSimpMatchDiscrsOnly lhs
return mkApp4 (mkConst ``Grind.EqMatch f.constLevels!) α lhs rhs initApp
/--
Helper function for collecting constants containing unassigned universe levels.
This function is used by `assignUnassignedLevelMVars`.
-/
private def collectConstsWithLevelMVars (e : Expr) : CoreM (Array Expr) := do
let (_, s) ← go |>.run {}
return s.toArray
where
go : StateRefT (Std.HashSet Expr) CoreM Unit := do
e.forEach fun e => do
if e.isConst && e.hasLevelMVar then
modify (·.insert e)
/--
Helper function for E-match theorems containing universe metavariables that do not occur
in any of their parameters. This is a very rare case, but it does occur in practice.
Example:
```lean
@[simp, grind =] theorem down_pure {φ : Prop} : (⌜φ⌝ : SPred []).down = φ := rfl
```
The parameter `us` contains the unassigned universe metavariables.
Recall that when we collect patterns for a theorem, we check coverage only for regular
parameters, not universe parameters. This function attempts to instantiate the universe
parameters using the following heuristic:
* Collect all constants `C` in `prop` that contain universe metavariables.
* Create a collection `P` of pairs `(c, cs)` where `c ∈ C` and `cs` are instances of `c` in the current goal.
* Sort `P` by the size of `cs`, prioritizing constants with fewer occurrences.
* Perform a backtracking search to unify each `c` with its occurrences. Stop as soon as all universe parameters are instantiated.
We expect this function not to be a performance bottleneck in practice because:
1. Very few theorems contain universe metavariables not covered by any parameters.
2. These theorems usually involve only a small number of universe levels and universe polymorphic constants.
3. Goals rarely contain constants instantiated with many different universe variables.
If this does become a performance issue, we will need to add support for assigning universe levels
during E-matching and check for universe-level coverage when collecting patterns.
The result is a collection of pairs `(proof', prop')` where `proof'` and `prop'` are `proof` and `prop`
with all universe metavariables instantiated.
-/
private def assignUnassignedLevelMVars (us : Array LMVarId) (proof : Expr) (prop : Expr) : GoalM (Array (Expr × Expr)) := do
let us := us.map mkLevelMVar
let cs ← collectConstsWithLevelMVars prop
let mut candidates : Array (Expr × Array Expr) := #[]
for c in cs do
let declName := c.constName!
if let some apps := (← getThe Goal).appMap.find? (.const declName) then
let consts : Std.HashSet Expr := Std.HashSet.ofList <| apps.map Expr.getAppFn
candidates := candidates.push (c, consts.toArray)
candidates := candidates.qsort fun (c₁, cs₁) (c₂, cs₂) =>
if cs₁.size == cs₂.size then c₁.quickLt c₂ else cs₁.size < cs₂.size
let rec search (us : Array Level) (i : Nat) : StateRefT (Array (Expr × Expr)) MetaM Unit := do
checkSystem "grind"
if h : i < candidates.size then
let (c, cs) := candidates[i]
for c' in cs do
let saved getMCtx
try
unless ( isDefEq c c') do return ()
-- update pending universe metavariables
let us us.filterMapM fun u => do
let u instantiateLevelMVars u
if ( hasAssignableLevelMVar u) then
return some u
else
return none
if us.isEmpty then
-- all universe metavariables have been assigned
let prop instantiateMVars prop
let proof instantiateMVars proof
modify (·.push (proof, prop))
return () -- Found instantiation
search us (i+1)
finally
setMCtx saved
else
return ()
let (_ , s) search us 0 |>.run #[]
return s
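The candidate ordering used above (constants with fewer goal occurrences first, ties broken deterministically) can be illustrated on plain data; the strings and counts below are hypothetical stand-ins for constants and their occurrence sets:

```lean
-- Miniature of the `qsort` comparator above: sort (name, #occurrences)
-- pairs so that rarer names come first, with a deterministic tie-break.
#eval #[("f", 3), ("g", 1), ("h", 1)].qsort fun (a, m) (b, n) =>
  if m == n then a < b else m < n
-- #[("g", 1), ("h", 1), ("f", 3)]
```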
private def getUnassignedLevelMVars (e : Expr) : MetaM (Array LMVarId) := do
  unless e.hasLevelMVar do return #[]
  let us := collectLevelMVars {} e |>.result
  us.filterM fun u => isLevelMVarAssignable u
/--
Stores a new theorem instance in the state.
Recall that new instances are internalized later, after a full round of E-matching.
@@ -340,20 +430,33 @@ private def addNewInstance (thm : EMatchTheorem) (proof : Expr) (generation : Na
  let proof ← instantiateMVars proof
  if grind.debug.proofs.get (← getOptions) then
    check proof
  let mut prop ← inferType proof
  let mut proof := proof
  if (← isMatchEqLikeDeclName thm.origin.key) then
    prop ← annotateMatchEqnType prop (← read).initApp
    -- We must add a hint here because `annotateMatchEqnType` introduces `simpMatchDiscrsOnly` and
    -- `Grind.PreMatchCond` which are not reducible.
    proof := mkExpectedPropHint proof prop
  else if (← isEqnThm thm.origin.key) then
    prop ← annotateEqnTypeConds prop
    -- We must add a hint because `annotateEqnTypeConds` introduces `Grind.PreMatchCond`
    -- which is not reducible.
    proof := mkExpectedPropHint proof prop
  trace_goal[grind.ematch.instance] "{thm.origin.pp}: {prop}"
  addTheoremInstance thm proof prop (generation+1)
  let prop ← inferType proof
  let us ← getUnassignedLevelMVars prop
  if !us.isEmpty then
    let pps ← assignUnassignedLevelMVars us proof prop
    if pps.isEmpty then
      reportIssue! "failed to instantiate `{thm.origin.pp}`, proposition contains universe metavariables{indentExpr prop}"
      return ()
    for (proof, prop) in pps do
      go proof prop
  else
    go proof prop
where
  go (proof prop : Expr) : M Unit := do
    let mut proof := proof
    let mut prop := prop
    if (← isMatchEqLikeDeclName thm.origin.key) then
      prop ← annotateMatchEqnType prop (← read).initApp
      -- We must add a hint here because `annotateMatchEqnType` introduces `simpMatchDiscrsOnly` and
      -- `Grind.PreMatchCond` which are not reducible.
      proof := mkExpectedPropHint proof prop
    else if (← isEqnThm thm.origin.key) then
      prop ← annotateEqnTypeConds prop
      -- We must add a hint because `annotateEqnTypeConds` introduces `Grind.PreMatchCond`
      -- which is not reducible.
      proof := mkExpectedPropHint proof prop
    trace_goal[grind.ematch.instance] "{thm.origin.pp}: {prop}"
    addTheoremInstance thm proof prop (generation+1)
private def synthesizeInsts (mvars : Array Expr) (bis : Array BinderInfo) : OptionT M Unit := do
  let thm := (← read).thm


@@ -87,7 +87,7 @@ def notationItem := ppSpace >> withAntiquot (mkAntiquot "notationItem" decl_name
optional docComment >> optional Term.«attributes» >> Term.attrKind >>
"syntax " >> optPrecedence >> optNamedName >> optNamedPrio >> many1 (ppSpace >> syntaxParser argPrec) >> " : " >> ident
@[builtin_command_parser] def syntaxAbbrev := leading_parser
optional docComment >> "syntax " >> ident >> " := " >> many1 syntaxParser
optional docComment >> optional visibility >> "syntax " >> ident >> " := " >> many1 syntaxParser
def catBehaviorBoth := leading_parser nonReservedSymbol "both"
def catBehaviorSymbol := leading_parser nonReservedSymbol "symbol"
def catBehavior := optional (" (" >> nonReservedSymbol "behavior" >> " := " >> (catBehaviorBoth <|> catBehaviorSymbol) >> ")")
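With `optional visibility` added to `syntaxAbbrev` above, a syntax abbreviation can now carry a visibility modifier under the module system. A hypothetical example (the abbreviation name is made up):

```lean
-- The abbreviation itself can now be marked `private`:
private syntax myTermAbbrev := term
```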


@@ -121,15 +121,16 @@ private partial def resolvePrivateName (env : Environment) (declName : Name) : O
return n
/-- Check whether `ns ++ id` is a valid namespace name and/or there are aliases named `ns ++ id`. -/
private def resolveQualifiedName (env : Environment) (ns : Name) (id : Name) : List Name :=
private def resolveQualifiedName (env : Environment) (ns : Name) (id : Name) : List Name := Id.run do
  let resolvedId := ns ++ id
  -- We ignore protected aliases if `id` is atomic.
  let resolvedIds := getAliases env resolvedId (skipProtected := id.isAtomic)
  if (containsDeclOrReserved env resolvedId && (!id.isAtomic || !isProtected env resolvedId)) then
    resolvedId :: resolvedIds
  else
    if let some resolvedIdPrv := resolvePrivateName env resolvedId then resolvedIdPrv :: resolvedIds
    else resolvedIds
  if !id.isAtomic || !isProtected env resolvedId then
    if containsDeclOrReserved env resolvedId then
      return resolvedId :: resolvedIds
    else if let some resolvedIdPrv := resolvePrivateName env resolvedId then
      return resolvedIdPrv :: resolvedIds
  return resolvedIds
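As a concrete illustration of the protected-name rule encoded above (protected declarations are skipped only for atomic references; the names here are hypothetical):

```lean
namespace Foo
protected def bar := 1
def baz := 2
end Foo

open Foo
#check baz      -- found: `Foo.baz` is not protected, so the atomic reference resolves
#check Foo.bar  -- found: a non-atomic (qualified) reference bypasses `protected`
```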
/-- Check surrounding namespaces -/
private def resolveUsingNamespace (env : Environment) (id : Name) : Name → List Name


@@ -31,9 +31,9 @@ For implementation notes, see the docstring of the module `Std.Data.DHashMap.Int
set_option linter.missingDocs true
set_option autoImplicit false
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w'} [Monad m]
variable {_ : BEq α} {_ : Hashable α}


@@ -25,11 +25,11 @@ File contents: Operations on associative lists
set_option linter.missingDocs true
set_option autoImplicit false
universe w v u
universe w v u w'
namespace Std.DHashMap.Internal
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w'} [Monad m]
/--
`AssocList α β` is "the same as" `List (α × β)`, but flattening the structure


@@ -25,9 +25,9 @@ set_option autoImplicit false
open Std.DHashMap.Internal
open List (Perm perm_middle)
universe w v u
universe w v u w'
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w'} [Monad m]
namespace Std.DHashMap.Internal.AssocList


@@ -146,9 +146,9 @@ and clean.
set_option linter.missingDocs true
set_option autoImplicit false
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w'} [Monad m]
namespace Std


@@ -1245,7 +1245,7 @@ end Const
section monadic
-- The types are redefined because fold/for does not need BEq/Hashable
variable {α : Type u} {β : α → Type v} (m : Raw₀ α β) {δ : Type w} {m' : Type w → Type w}
variable {α : Type u} {β : α → Type v} (m : Raw₀ α β) {δ : Type w} {m' : Type w → Type w'}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m']
{f : δ → (a : α) → β a → m' δ} {init : δ} :


@@ -29,7 +29,7 @@ open Std.Internal
set_option linter.missingDocs true
set_option autoImplicit false
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {γ : Type w} {δ : α → Type w}
@@ -150,7 +150,7 @@ theorem fold_push_key {l : Raw α β} {acc : Array α} :
acc ++ (List.keys (toListModel l.buckets)).toArray := by
simp [fold_push_apply, keys_eq_map]
theorem foldM_eq_foldlM_toListModel {δ : Type w} {m : Type w → Type w} [Monad m] [LawfulMonad m]
theorem foldM_eq_foldlM_toListModel {δ : Type w} {m : Type w → Type w'} [Monad m] [LawfulMonad m]
{f : δ → (a : α) → β a → m δ} {init : δ} {b : Raw α β} :
b.foldM f init = (toListModel b.buckets).foldlM (fun a b => f a b.1 b.2) init := by
simp only [Raw.foldM, Array.foldlM_toList, toListModel]
@@ -171,7 +171,7 @@ theorem fold_eq_foldl_toListModel {l : Raw α β} {f : γ → (a : α) → β a
l.fold f init = (toListModel l.buckets).foldl (fun a b => f a b.1 b.2) init := by
simp [Raw.fold, foldM_eq_foldlM_toListModel]
theorem foldRevM_eq_foldrM_toListModel {δ : Type w} {m : Type w → Type w} [Monad m] [LawfulMonad m]
theorem foldRevM_eq_foldrM_toListModel {δ : Type w} {m : Type w → Type w'} [Monad m] [LawfulMonad m]
{f : δ → (a : α) → β a → m δ} {init : δ} {b : Raw α β} :
Raw.Internal.foldRevM f init b =
(toListModel b.buckets).foldrM (fun a b => f b a.1 a.2) init := by
@@ -216,7 +216,7 @@ theorem keysArray_eq_toArray_keys_toListModel {m : Raw α β} :
m.keysArray = (List.keys (toListModel m.buckets)).toArray := by
simp [Raw.keysArray, fold_push_key]
theorem forM_eq_forM_toListModel {l : Raw α β} {m : Type w → Type w} [Monad m] [LawfulMonad m]
theorem forM_eq_forM_toListModel {l : Raw α β} {m : Type w → Type w'} [Monad m] [LawfulMonad m]
{f : (a : α) → β a → m PUnit} :
l.forM f = (toListModel l.buckets).forM (fun a => f a.1 a.2) := by
simp only [Raw.forM, Array.forM, Array.foldlM_toList, toListModel]
@@ -236,7 +236,7 @@ theorem forM_eq_forM_toListModel {l: Raw α β} {m : Type w → Type w} [Monad m
· funext x
simp [ih]
theorem forIn_eq_forIn_toListModel {δ : Type w} {l : Raw α β} {m : Type w → Type w} [Monad m] [LawfulMonad m]
theorem forIn_eq_forIn_toListModel {δ : Type w} {l : Raw α β} {m : Type w → Type w'} [Monad m] [LawfulMonad m]
{f : (a : α) → β a → δ → m (ForInStep δ)} {init : δ} :
l.forIn f init = ForIn.forIn (toListModel l.buckets) init (fun a d => f a.1 a.2 d) := by
rw [Raw.forIn, Array.forIn_toList, toListModel]


@@ -1423,7 +1423,7 @@ end Const
section monadic
variable {m : DHashMap α β} {δ : Type w} {m' : Type w → Type w}
variable {m : DHashMap α β} {δ : Type w} {m' : Type w → Type w'}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m']
{f : δ → (a : α) → β a → m' δ} {init : δ} :


@@ -29,9 +29,9 @@ Lemmas about the operations on `Std.DHashMap.Raw` are available in the module
set_option linter.missingDocs true
set_option autoImplicit false
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w} [Monad m]
variable {α : Type u} {β : α → Type v} {δ : Type w} {m : Type w → Type w'} [Monad m]
namespace Std


@@ -1496,7 +1496,7 @@ end Const
section monadic
variable {m : Raw α β} {δ : Type w} {m' : Type w → Type w}
variable {m : Raw α β} {δ : Type w} {m' : Type w → Type w'}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m'] (h : m.WF)
{f : δ → (a : α) → β a → m' δ} {init : δ} :


@@ -33,9 +33,9 @@ impossible cases need to be checked for.
set_option autoImplicit false
set_option linter.all true
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w}
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w'}
namespace Std.DTreeMap.Internal.Impl


@@ -6519,7 +6519,7 @@ theorem mergeWith! [TransOrd α] [LawfulEqOrd α]
section Const
variable {β : Type v} {t₁ t₂ t₃ t₄ : Impl α β} {δ : Type w} {m : Type w → Type w}
variable {β : Type v} {t₁ t₂ t₃ t₄ : Impl α β} {δ : Type w} {m : Type w → Type w'}
theorem constGet?_eq [TransOrd α] (h₁ : t₁.WF) (h₂ : t₂.WF) (h : t₁ ~m t₂) {k : α} :
Const.get? t₁ k = Const.get? t₂ k := by


@@ -691,7 +691,7 @@ def map [Ord α] (f : (a : α) → β a → γ a) (t : Impl α β) : Impl α γ
Monadic version of `map`.
-/
@[specialize]
def mapM {α : Type v} {β γ : α → Type v} {M : Type v → Type v} [Applicative M]
def mapM {α : Type v} {β γ : α → Type v} {M : Type v → Type w} [Applicative M]
(f : (a : α) → β a → M (γ a)) : Impl α β → M (Impl α γ)
| leaf => pure leaf
| inner sz k v l r => pure (.inner sz k) <*> f k v <*> l.mapM f <*> r.mapM f


@@ -22,9 +22,9 @@ This file contains the basic definition implementing the functionality of the si
set_option autoImplicit false
set_option linter.all true
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w}
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w'}
namespace Std.DTreeMap.Internal.Impl
local instance : Coe (Type v) (α → Type v) where coe γ := fun _ => γ


@@ -22,9 +22,9 @@ A central consequence of well-formedness, balancedness, is shown for all well-fo
set_option autoImplicit false
set_option linter.all true
universe u v w
universe u v w w'
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w}
variable {α : Type u} {β : α → Type v} {γ : α → Type w} {δ : Type w} {m : Type w → Type w'}
namespace Std.DTreeMap.Internal
local instance : Coe (Type v) (α → Type v) where coe γ := fun _ => γ


@@ -4598,7 +4598,7 @@ theorem mergeWith [TransCmp cmp] [LawfulEqCmp cmp] (f : (a : α) → β a → β
section Const
variable {β : Type v} {t₁ t₂ t₃ t₄ : DTreeMap α β cmp} {δ : Type w} {m : Type w → Type w}
variable {β : Type v} {t₁ t₂ t₃ t₄ : DTreeMap α β cmp} {δ : Type w} {m : Type w → Type w'}
theorem constGet?_eq [TransCmp cmp] {k : α} (h : t₁ ~m t₂) : Const.get? t₁ k = Const.get? t₂ k :=
h.1.constGet?_eq t₁.2 t₂.2


@@ -4320,7 +4320,7 @@ theorem mergeWith [TransCmp cmp] [LawfulEqCmp cmp]
section Const
variable {β : Type v} {t₁ t₂ t₃ t₄ : Raw α β cmp} (δ : Type w) (m : Type w → Type w)
variable {β : Type v} {t₁ t₂ t₃ t₄ : Raw α β cmp} (δ : Type w) (m : Type w → Type w')
theorem constGet?_eq [TransCmp cmp] {k : α} (h₁ : t₁.WF) (h₂ : t₂.WF) (h : t₁ ~m t₂) :
Const.get? t₁ k = Const.get? t₂ k :=


@@ -29,7 +29,7 @@ See the module `Std.Data.HashMap.Raw` for a variant of this type which is safe t
nested inductive types.
-/
universe u v w
universe u v w w'
variable {α : Type u} {β : Type v} {_ : BEq α} {_ : Hashable α}
@@ -205,7 +205,7 @@ instance [BEq α] [Hashable α] : GetElem? (HashMap α β) α β (fun m a => a
List (α × β) :=
DHashMap.Const.toList m.inner
@[inline, inherit_doc DHashMap.foldM] def foldM {m : Type w → Type w}
@[inline, inherit_doc DHashMap.foldM] def foldM {m : Type w → Type w'}
[Monad m] {γ : Type w} (f : γ → α → β → m γ) (init : γ) (b : HashMap α β) : m γ :=
b.inner.foldM f init
@@ -213,18 +213,18 @@ instance [BEq α] [Hashable α] : GetElem? (HashMap α β) α β (fun m a => a
(f : γ → α → β → γ) (init : γ) (b : HashMap α β) : γ :=
b.inner.fold f init
@[inline, inherit_doc DHashMap.forM] def forM {m : Type w → Type w} [Monad m]
@[inline, inherit_doc DHashMap.forM] def forM {m : Type w → Type w'} [Monad m]
(f : (a : α) → β → m PUnit) (b : HashMap α β) : m PUnit :=
b.inner.forM f
@[inline, inherit_doc DHashMap.forIn] def forIn {m : Type w → Type w} [Monad m]
@[inline, inherit_doc DHashMap.forIn] def forIn {m : Type w → Type w'} [Monad m]
{γ : Type w} (f : (a : α) → β → γ → m (ForInStep γ)) (init : γ) (b : HashMap α β) : m γ :=
b.inner.forIn f init
instance [BEq α] [Hashable α] {m : Type w → Type w} : ForM m (HashMap α β) (α × β) where
instance [BEq α] [Hashable α] {m : Type w → Type w'} : ForM m (HashMap α β) (α × β) where
forM m f := m.forM (fun a b => f (a, b))
instance [BEq α] [Hashable α] {m : Type w → Type w} : ForIn m (HashMap α β) (α × β) where
instance [BEq α] [Hashable α] {m : Type w → Type w'} : ForIn m (HashMap α β) (α × β) where
forIn m init f := m.forIn (fun a b acc => f (a, b) acc) init
@[inline, inherit_doc DHashMap.filter] def filter (f : α → β → Bool)


@@ -1018,7 +1018,7 @@ theorem mem_toArray_iff_getKey?_eq_some_and_getElem?_eq_some [EquivBEq α] [Lawf
section monadic
variable {m : HashMap α β} {δ : Type w} {m' : Type w → Type w}
variable {m : HashMap α β} {δ : Type w} {m' : Type w → Type w'}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m']
{f : δ → (a : α) → β → m' δ} {init : δ} :


@@ -27,7 +27,7 @@ Lemmas about the operations on `Std.HashMap.Raw` are available in the module
`Std.Data.HashMap.RawLemmas`.
-/
universe u v w
universe u v w w'
variable {α : Type u} {β : Type v}
@@ -206,7 +206,7 @@ instance [BEq α] [Hashable α] : GetElem? (Raw α β) α β (fun m a => a ∈ m
@[inline, inherit_doc DHashMap.Raw.Const.toList] def toList (m : Raw α β) : List (α × β) :=
DHashMap.Raw.Const.toList m.inner
@[inline, inherit_doc DHashMap.Raw.foldM] def foldM {m : Type w → Type w} [Monad m] {γ : Type w}
@[inline, inherit_doc DHashMap.Raw.foldM] def foldM {m : Type w → Type w'} [Monad m] {γ : Type w}
(f : γ → α → β → m γ) (init : γ) (b : Raw α β) : m γ :=
b.inner.foldM f init
@@ -214,18 +214,18 @@ instance [BEq α] [Hashable α] : GetElem? (Raw α β) α β (fun m a => a ∈ m
(b : Raw α β) : γ :=
b.inner.fold f init
@[inline, inherit_doc DHashMap.Raw.forM] def forM {m : Type w → Type w} [Monad m]
@[inline, inherit_doc DHashMap.Raw.forM] def forM {m : Type w → Type w'} [Monad m]
(f : (a : α) → β → m PUnit) (b : Raw α β) : m PUnit :=
b.inner.forM f
@[inline, inherit_doc DHashMap.Raw.forIn] def forIn {m : Type w → Type w} [Monad m] {γ : Type w}
@[inline, inherit_doc DHashMap.Raw.forIn] def forIn {m : Type w → Type w'} [Monad m] {γ : Type w}
(f : (a : α) → β → γ → m (ForInStep γ)) (init : γ) (b : Raw α β) : m γ :=
b.inner.forIn f init
instance {m : Type w → Type w} : ForM m (Raw α β) (α × β) where
instance {m : Type w → Type w'} : ForM m (Raw α β) (α × β) where
forM m f := m.forM (fun a b => f (a, b))
instance {m : Type w → Type w} : ForIn m (Raw α β) (α × β) where
instance {m : Type w → Type w'} : ForIn m (Raw α β) (α × β) where
forIn m init f := m.forIn (fun a b acc => f (a, b) acc) init
section Unverified


@@ -1034,7 +1034,7 @@ theorem mem_toArray_iff_getKey?_eq_some_and_getElem?_eq_some [EquivBEq α] [Lawf
section monadic
variable {m : Raw α β} {δ : Type w} {m' : Type w → Type w}
variable {m : Raw α β} {δ : Type w} {m' : Type w → Type w'}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m'] (h : m.WF)
{f : δ → (a : α) → β → m' δ} {init : δ} :


@@ -25,7 +25,7 @@ nested inductive types.
set_option linter.missingDocs true
set_option autoImplicit false
universe u v
universe u v w
variable {α : Type u} {_ : BEq α} {_ : Hashable α}
@@ -190,7 +190,7 @@ in the collection will be present in the returned hash set.
Monadically computes a value by folding the given function over the elements in the hash set in some
order.
-/
@[inline] def foldM {m : Type v → Type v} [Monad m] {β : Type v}
@[inline] def foldM {m : Type v → Type w} [Monad m] {β : Type v}
(f : β → α → m β) (init : β) (b : HashSet α) : m β :=
b.inner.foldM (fun b a _ => f b a) init
@@ -200,19 +200,19 @@ order.
m.inner.fold (fun b a _ => f b a) init
/-- Carries out a monadic action on each element in the hash set in some order. -/
@[inline] def forM {m : Type v → Type v} [Monad m] (f : α → m PUnit)
@[inline] def forM {m : Type v → Type w} [Monad m] (f : α → m PUnit)
(b : HashSet α) : m PUnit :=
b.inner.forM (fun a _ => f a)
/-- Support for the `for` loop construct in `do` blocks. -/
@[inline] def forIn {m : Type v → Type v} [Monad m] {β : Type v}
@[inline] def forIn {m : Type v → Type w} [Monad m] {β : Type v}
(f : α → β → m (ForInStep β)) (init : β) (b : HashSet α) : m β :=
b.inner.forIn (fun a _ acc => f a acc) init
instance [BEq α] [Hashable α] {m : Type v → Type v} : ForM m (HashSet α) α where
instance [BEq α] [Hashable α] {m : Type v → Type w} : ForM m (HashSet α) α where
forM m f := m.forM f
instance [BEq α] [Hashable α] {m : Type v → Type v} : ForIn m (HashSet α) α where
instance [BEq α] [Hashable α] {m : Type v → Type w} : ForIn m (HashSet α) α where
forIn m init f := m.forIn f init
/-- Removes all elements from the hash set for which the given function returns `false`. -/


@@ -22,7 +22,7 @@ is to provide an instance of `LawfulBEq α`.
set_option linter.missingDocs true
set_option autoImplicit false
universe u v
universe u v w
variable {α : Type u} {_ : BEq α} {_ : Hashable α}
@@ -532,7 +532,7 @@ theorem contains_of_mem_toArray [EquivBEq α] [LawfulHashable α] {k : α}
section monadic
variable {δ : Type v} {m' : Type v → Type v}
variable {δ : Type v} {m' : Type v → Type w}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m']
{f : δ → α → m' δ} {init : δ} :


@@ -27,7 +27,7 @@ Lemmas about the operations on `Std.HashSet.Raw` are available in the module
set_option linter.missingDocs true
set_option autoImplicit false
universe u v
universe u v w
variable {α : Type u}
@@ -193,7 +193,7 @@ in the collection will be present in the returned hash set.
Monadically computes a value by folding the given function over the elements in the hash set in some
order.
-/
@[inline] def foldM {m : Type v → Type v} [Monad m] {β : Type v} (f : β → α → m β) (init : β)
@[inline] def foldM {m : Type v → Type w} [Monad m] {β : Type v} (f : β → α → m β) (init : β)
(b : Raw α) : m β :=
b.inner.foldM (fun b a _ => f b a) init
@@ -202,18 +202,18 @@ order.
m.inner.fold (fun b a _ => f b a) init
/-- Carries out a monadic action on each element in the hash set in some order. -/
@[inline] def forM {m : Type v → Type v} [Monad m] (f : α → m PUnit) (b : Raw α) : m PUnit :=
@[inline] def forM {m : Type v → Type w} [Monad m] (f : α → m PUnit) (b : Raw α) : m PUnit :=
b.inner.forM (fun a _ => f a)
/-- Support for the `for` loop construct in `do` blocks. -/
@[inline] def forIn {m : Type v → Type v} [Monad m] {β : Type v} (f : α → β → m (ForInStep β))
@[inline] def forIn {m : Type v → Type w} [Monad m] {β : Type v} (f : α → β → m (ForInStep β))
(init : β) (b : Raw α) : m β :=
b.inner.forIn (fun a _ acc => f a acc) init
instance {m : Type v → Type v} : ForM m (Raw α) α where
instance {m : Type v → Type w} : ForM m (Raw α) α where
forM m f := m.forM f
instance {m : Type v → Type v} : ForIn m (Raw α) α where
instance {m : Type v → Type w} : ForIn m (Raw α) α where
forIn m init f := m.forIn f init
/-- Removes all elements from the hash set for which the given function returns `false`. -/


@@ -22,7 +22,7 @@ is to provide an instance of `LawfulBEq α`.
set_option linter.missingDocs true
set_option autoImplicit false
universe u v
universe u v w
variable {α : Type u}
@@ -556,7 +556,7 @@ theorem contains_of_mem_toArray [EquivBEq α] [LawfulHashable α] (h : m.WF) {k
section monadic
variable {m : Raw α} {δ : Type v} {m' : Type v → Type v}
variable {m : Raw α} {δ : Type v} {m' : Type v → Type w}
theorem foldM_eq_foldlM_toList [Monad m'] [LawfulMonad m'] (h : m.WF)
{f : δ → α → m' δ} {init : δ} :


@@ -123,6 +123,9 @@ The protocol for this is as follows:
the `Selectable.cont` of the winning `Selector` is executed and returned.
-/
partial def Selectable.one (selectables : Array (Selectable α)) : IO (AsyncTask α) := do
  if selectables.isEmpty then
    throw <| .userError "Selectable.one requires at least one Selectable"
  let seed := UInt64.toNat (ByteArray.toUInt64LE! (← IO.getRandomBytes 8))
  let gen := mkStdGen seed
  let selectables := shuffleIt selectables gen
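The seeding pattern above (OS randomness converted into a `StdGen` seed) can be factored into a standalone helper; `freshGen` is a hypothetical name used only for illustration:

```lean
-- Hypothetical helper mirroring the seeding above: derive a `StdGen`
-- from 8 bytes of OS randomness.
def freshGen : IO StdGen := do
  let seed := UInt64.toNat (ByteArray.toUInt64LE! (← IO.getRandomBytes 8))
  return mkStdGen seed
```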


@@ -10,7 +10,10 @@ public import Lake.Config.Package
import Lake.DSL.Syntax
import Lake.Config.Package
meta import Lean.Parser.Module
meta import Lake.Config.Package
meta import Lake.Config.LeanLibConfig
meta import Lake.Config.LeanExeConfig
meta import Lake.Config.InputFileConfig
meta import Lake.Config.PackageConfig
import all Lake.Build.Key
open System Lean Syntax Parser Module


@@ -8,7 +8,10 @@ module
prelude
public import Lake.Toml.Encode
public import Lake.Config.Package
meta import Lake.Config.Package
meta import Lake.Config.LeanLibConfig
meta import Lake.Config.LeanExeConfig
meta import Lake.Config.InputFileConfig
meta import Lake.Config.PackageConfig
/-! # TOML Translation


@@ -113,7 +113,7 @@ private def mkConfigAuxDecls
let fieldsDef ← `($[$vis?:visibility]? def $fieldsId:ident := $(data.fields))
let instId := mkIdentFrom structId <| structId.getId.modifyBase (·.str "instConfigFields")
let fieldsInst ← `($[$vis?:visibility]? instance $instId:ident : ConfigFields $structTy := $fieldsId)
let instId := mkIdentFrom structId <| structId.getId.modifyBase (·.str "instConfigMeta")
let instId := mkIdentFrom structId <| structId.getId.modifyBase (·.str "instConfigInfo")
let structNameLit : Term := mkNode ``Term.doubleQuotedName #[mkAtom "`", mkAtom "`", structId]
let infoInst ← `($[$vis?:visibility]? instance $instId:ident : ConfigInfo $structNameLit := {fields := $fieldsId})
let instId := mkIdentFrom structId <| structId.getId.modifyBase (·.str "instEmptyCollection")


@@ -6,330 +6,20 @@ Authors: Gabriel Ebner, Sebastian Ullrich, Mac Malone
module
prelude
public import Lake.Config.Defaults
public import Lake.Config.OutFormat
public import Lake.Config.WorkspaceConfig
public import Lake.Config.Dependency
public import Lake.Config.ConfigDecl
public import Lake.Config.Script
public import Lake.Config.Cache
public import Lake.Config.MetaClasses
public import Lake.Config.Script
public import Lake.Config.ConfigDecl
public import Lake.Config.Dependency
public import Lake.Config.PackageConfig
public import Lake.Util.FilePath -- use scoped instance downstream
public import Lake.Util.OrdHashSet
public import Lake.Util.Version
public import Lake.Util.Name
meta import all Lake.Config.Meta
meta import all Lake.Util.OpaqueType
open System Lean
namespace Lake
/-- The default `buildArchive` configuration for a package with `name`. -/
@[inline] public def defaultBuildArchive (name : Name) : String :=
s!"{name.toString false}-{System.Platform.target}.tar.gz"
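For example, for a package named `foo` the default archive name is `foo-<target>.tar.gz`, where `<target>` is the host triple; the exact suffix depends on `System.Platform.target`:

```lean
#eval defaultBuildArchive `foo  -- e.g. "foo-x86_64-unknown-linux-gnu.tar.gz"
```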
--------------------------------------------------------------------------------
/-! # PackageConfig -/
--------------------------------------------------------------------------------
set_option linter.unusedVariables false in
/-- A `Package`'s declarative configuration. -/
public configuration PackageConfig (name : Name) extends WorkspaceConfig, LeanConfig where
/-- **For internal use.** Whether this package is Lean itself. -/
bootstrap : Bool := false
/--
**This field is deprecated.**
The path of a package's manifest file, which stores the exact versions
of its resolved dependencies.
Defaults to `defaultManifestFile` (i.e., `lake-manifest.json`).
-/
manifestFile : Option FilePath := none
/-- An `Array` of target names to build whenever the package is used. -/
extraDepTargets : Array Name := #[]
/--
Whether to compile each of the package's modules into a native shared library
that is loaded whenever the module is imported. This speeds up evaluation of
metaprograms and enables the interpreter to run functions marked `@[extern]`.
Defaults to `false`.
-/
precompileModules : Bool := false
/--
Additional arguments to pass to the Lean language server
(i.e., `lean --server`) launched by `lake serve`, both for this package and
also for any packages browsed from this one in the same session.
-/
moreGlobalServerArgs, moreServerArgs : Array String := #[]
/--
The directory containing the package's Lean source files.
Defaults to the package's directory.
(This will be passed to `lean` as the `-R` option.)
-/
srcDir : FilePath := "."
/--
The directory to which Lake should output the package's build results.
Defaults to `defaultBuildDir` (i.e., `.lake/build`).
-/
buildDir : FilePath := defaultBuildDir
/--
The build subdirectory to which Lake should output the package's
binary Lean libraries (e.g., `.olean`, `.ilean` files).
Defaults to `defaultLeanLibDir` (i.e., `lib`).
-/
leanLibDir : FilePath := defaultLeanLibDir
/--
The build subdirectory to which Lake should output the package's
native libraries (e.g., `.a`, `.so`, `.dll` files).
Defaults to `defaultNativeLibDir` (i.e., `lib`).
-/
nativeLibDir : FilePath := defaultNativeLibDir
/--
The build subdirectory to which Lake should output the package's binary executable.
Defaults to `defaultBinDir` (i.e., `bin`).
-/
binDir : FilePath := defaultBinDir
/--
The build subdirectory to which Lake should output
the package's intermediary results (e.g., `.c` and `.o` files).
Defaults to `defaultIrDir` (i.e., `ir`).
-/
irDir : FilePath := defaultIrDir
/--
The URL of the GitHub repository to upload and download releases of this package.
If `none` (the default), then for downloads Lake uses the URL the package was downloaded
from (if it is a dependency), and for uploads it uses `gh`'s default.
-/
releaseRepo, releaseRepo? : Option String := none
/--
A custom name for the build archive for the GitHub cloud release.
If `none` (the default), Lake defaults to `{(pkg-)name}-{System.Platform.target}.tar.gz`.
-/
buildArchive, buildArchive? : Option String := none
/--
Whether to prefer downloading a prebuilt release (from GitHub) rather than
building this package from the source when this package is used as a dependency.
-/
preferReleaseBuild : Bool := false
/--
The name of the script, executable, or library used by `lake test` when
this package is the workspace root. To point to a definition in another
package, use the syntax `<pkg>/<def>`.
A script driver will be run by `lake test` with the arguments
configured in `testDriverArgs` followed by any specified on the CLI
(e.g., via `lake test -- <args>...`). An executable driver will be built
and then run like a script. A library will just be built.
-/
testDriver, testRunner : String := ""
/--
Arguments to pass to the package's test driver.
These arguments will come before those passed on the command line via
`lake test -- <args>...`.
-/
testDriverArgs : Array String := #[]
/--
The name of the script or executable used by `lake lint` when this package
is the workspace root. To point to a definition in another package, use the
syntax `<pkg>/<def>`.
A script driver will be run by `lake lint` with the arguments
configured in `lintDriverArgs` followed by any specified on the CLI
(e.g., via `lake lint -- <args>...`). An executable driver will be built
and then run like a script.
-/
lintDriver : String := ""
/--
Arguments to pass to the package's linter.
These arguments will come before those passed on the command line via
`lake lint -- <args>...`.
-/
lintDriverArgs : Array String := #[]
/--
The package version. Versions have the form:
```
v!"<major>.<minor>.<patch>[-<specialDescr>]"
```
A version with a `-` suffix is considered a "prerelease".
Lake suggests the following guidelines for incrementing versions:
* **Major version increment** *(e.g., v1.3.0 → v2.0.0)*
Indicates significant breaking changes in the package.
Package users are not expected to update to the new version
without manual intervention.
* **Minor version increment** *(e.g., v1.3.0 → v1.4.0)*
Denotes notable changes that are expected to be
generally backwards compatible.
Package users are expected to update to this version automatically
and should be able to fix any breakages and/or warnings easily.
* **Patch version increment** *(e.g., v1.3.0 → v1.3.1)*
Reserved for bug fixes and small touchups.
Package users are expected to update automatically and should not expect
significant breakage, except in the edge case of users relying on the
behavior of patched bugs.
**Note that backwards-incompatible changes may occur at any version increment.**
This is because the current nature of Lean (e.g., transitive imports,
rich metaprogramming, reducibility in proofs) makes it infeasible to
define a completely stable interface for a package.
Instead, the different version levels indicate a change's intended significance
and how difficult migration is expected to be.
Versions of the form `0.x.x` are considered development versions prior to the
first official release. Like prereleases, they are not expected to closely
follow the above guidelines.
Packages without a defined version default to `0.0.0`.
-/
version : StdVer := {}
/--
Git tags of this package's repository that should be treated as versions.
Package indices (e.g., Reservoir) can make use of this information to determine
the Git revisions corresponding to released versions.
Defaults to tags that are "version-like",
that is, tags that start with a `v` followed by a digit.
-/
versionTags : StrPat := defaultVersionTags
/-- A short description for the package (e.g., for Reservoir). -/
description : String := ""
/--
Custom keywords associated with the package.
Reservoir can make use of a package's keywords to group related packages
together and make it easier for users to discover them.
Good keywords include the domain (e.g., `math`, `software-verification`,
`devtool`), specific subtopics (e.g., `topology`, `cryptology`), and
significant implementation details (e.g., `dsl`, `ffi`, `cli`).
For instance, Lake's keywords could be `devtool`, `cli`, `dsl`,
`package-manager`, and `build-system`.
-/
keywords : Array String := #[]
/--
A URL to information about the package.
Reservoir will already include a link to the package's GitHub repository
(if the package is sourced from there). Thus, users are advised to specify
something else for this (if anything).
-/
homepage : String := ""
/--
The package's license (if one).
Should be a valid [SPDX License Expression][1].
Reservoir requires that packages use an OSI-approved license to be
included in its index, and it currently only supports single-identifier
SPDX expressions. For a list of OSI-approved SPDX license identifiers,
see the [SPDX License List][2].
[1]: https://spdx.github.io/spdx-spec/v3.0/annexes/SPDX-license-expressions/
[2]: https://spdx.org/licenses/
-/
license : String := ""
/--
Files containing licensing information for the package.
These should be the license files that users are expected to include when
distributing package sources, which may be more than one file for some licenses.
For example, the Apache 2.0 license requires the reproduction of a `NOTICE`
file along with the license (if such a file exists).
Defaults to `#["LICENSE"]`.
-/
licenseFiles : Array FilePath := #["LICENSE"]
/--
The path to the package's README.
A README should be a Markdown file containing an overview of the package.
Reservoir displays the rendered HTML of this file on a package's page.
A nonstandard location can be used to provide a different README for Reservoir
and GitHub.
Defaults to `README.md`.
-/
readmeFile : FilePath := "README.md"
/--
Whether Reservoir should include the package in its index.
When set to `false`, Reservoir will not add the package to its index
and will remove it if it was already there (when Reservoir is next updated).
-/
reservoir : Bool := true
/--
Whether to enable Lake's local, offline artifact cache for the package.
Artifacts (i.e., build products) of packages will be shared across
local copies by storing them in a cache associated with the Lean toolchain.
This can significantly reduce initial build times and disk space usage when
working with multiple copies of large projects or large dependencies.
As a caveat, build targets which support the artifact cache will not be stored
in their usual location within the build directory. Thus, projects with custom build
scripts that rely on the specific location of artifacts may wish to disable this feature.
If `none` (the default), the cache will be disabled by default unless
the `LAKE_ARTIFACT_CACHE` environment variable is set to true.
-/
enableArtifactCache?, enableArtifactCache : Option Bool := none
/--
Whether native libraries (of this package) should be prefixed with `lib` on Windows.
Unlike Unix, Windows does not require native libraries to start with `lib` and,
by convention, they usually do not. However, for consistent naming across all platforms,
users may wish to enable this.
Defaults to `false`.
-/
libPrefixOnWindows : Bool := false
deriving Inhabited
/-- The package's name. -/
public abbrev PackageConfig.name (_ : PackageConfig n) := n
/-- A package declaration from a configuration written in Lean. -/
public structure PackageDecl where
name : Name
config : PackageConfig name
deriving TypeName
--------------------------------------------------------------------------------
/-! # Package -/
--------------------------------------------------------------------------------
public nonempty_type OpaquePostUpdateHook (pkg : Name)
/-- A Lake package -- its location plus its configuration. -/

View File

@@ -0,0 +1,316 @@
/-
Copyright (c) 2017 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Gabriel Ebner, Sebastian Ullrich, Mac Malone
-/
module
prelude
public import Init.Dynamic
public import Lake.Util.Version
public import Lake.Config.Pattern
public import Lake.Config.Defaults
public import Lake.Config.LeanConfig
public import Lake.Config.WorkspaceConfig
meta import all Lake.Config.Meta
open System Lean
namespace Lake
/-- The default `buildArchive` configuration for a package with `name`. -/
@[inline] public def defaultBuildArchive (name : Name) : String :=
s!"{name.toString false}-{System.Platform.target}.tar.gz"
set_option linter.unusedVariables false in
/-- A `Package`'s declarative configuration. -/
public configuration PackageConfig (name : Name) extends WorkspaceConfig, LeanConfig where
/-- **For internal use.** Whether this package is Lean itself. -/
bootstrap : Bool := false
/--
**This field is deprecated.**
The path of a package's manifest file, which stores the exact versions
of its resolved dependencies.
Defaults to `defaultManifestFile` (i.e., `lake-manifest.json`).
-/
manifestFile : Option FilePath := none
/-- An `Array` of target names to build whenever the package is used. -/
extraDepTargets : Array Name := #[]
/--
Whether to compile each of the package's modules into a native shared library
that is loaded whenever the module is imported. This speeds up evaluation of
metaprograms and enables the interpreter to run functions marked `@[extern]`.
Defaults to `false`.
-/
precompileModules : Bool := false
/--
Additional arguments to pass to the Lean language server
(i.e., `lean --server`) launched by `lake serve`, both for this package and
also for any packages browsed from this one in the same session.
-/
moreGlobalServerArgs, moreServerArgs : Array String := #[]
/--
The directory containing the package's Lean source files.
Defaults to the package's directory.
(This will be passed to `lean` as the `-R` option.)
-/
srcDir : FilePath := "."
/--
The directory to which Lake should output the package's build results.
Defaults to `defaultBuildDir` (i.e., `.lake/build`).
-/
buildDir : FilePath := defaultBuildDir
/--
The build subdirectory to which Lake should output the package's
binary Lean libraries (e.g., `.olean`, `.ilean` files).
Defaults to `defaultLeanLibDir` (i.e., `lib`).
-/
leanLibDir : FilePath := defaultLeanLibDir
/--
The build subdirectory to which Lake should output the package's
native libraries (e.g., `.a`, `.so`, `.dll` files).
Defaults to `defaultNativeLibDir` (i.e., `lib`).
-/
nativeLibDir : FilePath := defaultNativeLibDir
/--
The build subdirectory to which Lake should output the package's binary executable.
Defaults to `defaultBinDir` (i.e., `bin`).
-/
binDir : FilePath := defaultBinDir
/--
The build subdirectory to which Lake should output
the package's intermediary results (e.g., `.c` and `.o` files).
Defaults to `defaultIrDir` (i.e., `ir`).
-/
irDir : FilePath := defaultIrDir
/--
The URL of the GitHub repository to upload and download releases of this package.
If `none` (the default), Lake uses the URL the package was downloaded
from (if it is a dependency) for downloads, and `gh`'s default for uploads.
-/
releaseRepo, releaseRepo? : Option String := none
/--
A custom name for the build archive for the GitHub cloud release.
If `none` (the default), Lake defaults to `{(pkg-)name}-{System.Platform.target}.tar.gz`.
-/
buildArchive, buildArchive? : Option String := none
/--
Whether to prefer downloading a prebuilt release (from GitHub) rather than
building this package from the source when this package is used as a dependency.
-/
preferReleaseBuild : Bool := false
/--
The name of the script, executable, or library used by `lake test` when
this package is the workspace root. To point to a definition in another
package, use the syntax `<pkg>/<def>`.
A script driver will be run by `lake test` with the arguments
configured in `testDriverArgs` followed by any specified on the CLI
(e.g., via `lake test -- <args>...`). An executable driver will be built
and then run like a script. A library will just be built.
-/
testDriver, testRunner : String := ""
/--
Arguments to pass to the package's test driver.
These arguments will come before those passed on the command line via
`lake test -- <args>...`.
-/
testDriverArgs : Array String := #[]
/--
The name of the script or executable used by `lake lint` when this package
is the workspace root. To point to a definition in another package, use the
syntax `<pkg>/<def>`.
A script driver will be run by `lake lint` with the arguments
configured in `lintDriverArgs` followed by any specified on the CLI
(e.g., via `lake lint -- <args>...`). An executable driver will be built
and then run like a script.
-/
lintDriver : String := ""
/--
Arguments to pass to the package's linter.
These arguments will come before those passed on the command line via
`lake lint -- <args>...`.
-/
lintDriverArgs : Array String := #[]
/--
The package version. Versions have the form:
```
v!"<major>.<minor>.<patch>[-<specialDescr>]"
```
A version with a `-` suffix is considered a "prerelease".
Lake suggests the following guidelines for incrementing versions:
* **Major version increment** *(e.g., v1.3.0 → v2.0.0)*
Indicates significant breaking changes in the package.
Package users are not expected to update to the new version
without manual intervention.
* **Minor version increment** *(e.g., v1.3.0 → v1.4.0)*
Denotes notable changes that are expected to be
generally backwards compatible.
Package users are expected to update to this version automatically
and should be able to fix any breakages and/or warnings easily.
* **Patch version increment** *(e.g., v1.3.0 → v1.3.1)*
Reserved for bug fixes and small touchups.
Package users are expected to update automatically and should not expect
significant breakage, except in the edge case of users relying on the
behavior of patched bugs.
**Note that backwards-incompatible changes may occur at any version increment.**
This is because the current nature of Lean (e.g., transitive imports,
rich metaprogramming, reducibility in proofs) makes it infeasible to
define a completely stable interface for a package.
Instead, the different version levels indicate a change's intended significance
and how difficult migration is expected to be.
Versions of the form `0.x.x` are considered development versions prior to the
first official release. Like prereleases, they are not expected to closely
follow the above guidelines.
Packages without a defined version default to `0.0.0`.
-/
version : StdVer := {}
/--
Git tags of this package's repository that should be treated as versions.
Package indices (e.g., Reservoir) can make use of this information to determine
the Git revisions corresponding to released versions.
Defaults to tags that are "version-like";
that is, tags that start with a `v` followed by a digit.
-/
versionTags : StrPat := defaultVersionTags
/-- A short description for the package (e.g., for Reservoir). -/
description : String := ""
/--
Custom keywords associated with the package.
Reservoir can make use of a package's keywords to group related packages
together and make it easier for users to discover them.
Good keywords include the domain (e.g., `math`, `software-verification`,
`devtool`), specific subtopics (e.g., `topology`, `cryptology`), and
significant implementation details (e.g., `dsl`, `ffi`, `cli`).
For instance, Lake's keywords could be `devtool`, `cli`, `dsl`,
`package-manager`, and `build-system`.
-/
keywords : Array String := #[]
/--
A URL to information about the package.
Reservoir will already include a link to the package's GitHub repository
(if the package is sourced from there). Thus, users are advised to specify
something else for this (if anything).
-/
homepage : String := ""
/--
The package's license (if one).
Should be a valid [SPDX License Expression][1].
Reservoir requires that packages use an OSI-approved license to be
included in its index, and it currently only supports single-identifier
SPDX expressions. For a list of OSI-approved SPDX license identifiers,
see the [SPDX License List][2].
[1]: https://spdx.github.io/spdx-spec/v3.0/annexes/SPDX-license-expressions/
[2]: https://spdx.org/licenses/
-/
license : String := ""
/--
Files containing licensing information for the package.
These should be the license files that users are expected to include when
distributing package sources, which may be more than one file for some licenses.
For example, the Apache 2.0 license requires the reproduction of a `NOTICE`
file along with the license (if such a file exists).
Defaults to `#["LICENSE"]`.
-/
licenseFiles : Array FilePath := #["LICENSE"]
/--
The path to the package's README.
A README should be a Markdown file containing an overview of the package.
Reservoir displays the rendered HTML of this file on a package's page.
A nonstandard location can be used to provide a different README for Reservoir
and GitHub.
Defaults to `README.md`.
-/
readmeFile : FilePath := "README.md"
/--
Whether Reservoir should include the package in its index.
When set to `false`, Reservoir will not add the package to its index
and will remove it if it was already there (when Reservoir is next updated).
-/
reservoir : Bool := true
/--
Whether to enable Lake's local, offline artifact cache for the package.
Artifacts (i.e., build products) of packages will be shared across
local copies by storing them in a cache associated with the Lean toolchain.
This can significantly reduce initial build times and disk space usage when
working with multiple copies of large projects or large dependencies.
As a caveat, build targets which support the artifact cache will not be stored
in their usual location within the build directory. Thus, projects with custom build
scripts that rely on the specific location of artifacts may wish to disable this feature.
If `none` (the default), the cache will be disabled by default unless
the `LAKE_ARTIFACT_CACHE` environment variable is set to true.
-/
enableArtifactCache?, enableArtifactCache : Option Bool := none
/--
Whether native libraries (of this package) should be prefixed with `lib` on Windows.
Unlike Unix, Windows does not require native libraries to start with `lib` and,
by convention, they usually do not. However, for consistent naming across all platforms,
users may wish to enable this.
Defaults to `false`.
-/
libPrefixOnWindows : Bool := false
deriving Inhabited
/-- The package's name. -/
public abbrev PackageConfig.name (_ : PackageConfig n) := n
/-- A package declaration from a configuration written in Lean. -/
public structure PackageDecl where
name : Name
config : PackageConfig name
deriving TypeName
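The `PackageConfig` fields documented above are normally set in a package's `lakefile.lean` via the Lake DSL. As a hypothetical sketch (the package name `mypkg` and all field values below are illustrative, not taken from this diff):

```lean
import Lake
open Lake DSL

-- A hypothetical package declaration exercising a few of the
-- `PackageConfig` fields documented above; all values are illustrative.
package mypkg where
  version := v!"0.1.0"          -- a development (`0.x.x`) version
  description := "An example package"
  keywords := #["devtool", "cli"]
  license := "Apache-2.0"       -- a single-identifier SPDX expression
  precompileModules := true     -- build shared libs for metaprogram speed
  testDriver := "Tests"         -- script/executable/library run by `lake test`
```

Fields left unset fall back to the defaults shown in the structure definition (e.g., `version` defaults to `0.0.0` and `licenseFiles` to `#["LICENSE"]`).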

View File

@@ -11,6 +11,7 @@ Author: Leonardo de Moura
#include <lean/lean.h>
#include "runtime/hash.h"
#include "runtime/compact.h"
#include "runtime/exception.h"
#include "util/alloc.h"
#ifndef LEAN_WINDOWS
@@ -357,7 +358,7 @@ void object_compactor::operator()(object * o) {
g_tag_counters[lean_ptr_tag(curr)]++;
#endif
switch (lean_ptr_tag(curr)) {
case LeanClosure: lean_internal_panic("closures cannot be compacted. One possible cause of this error is trying to store a function in a persistent environment extension.");
case LeanClosure: throw exception("closures cannot be compacted. One possible cause of this error is trying to store a function in a persistent environment extension.");
case LeanArray: r = insert_array(curr); break;
case LeanScalarArray: insert_sarray(curr); break;
case LeanString: insert_string(curr); break;
@@ -366,7 +367,7 @@ void object_compactor::operator()(object * o) {
case LeanTask: r = insert_task(curr); break;
case LeanPromise: r = insert_promise(curr); break;
case LeanRef: r = insert_ref(curr); break;
case LeanExternal: lean_internal_panic("external objects cannot be compacted");
case LeanExternal: throw exception("external objects cannot be compacted");
case LeanReserved: lean_unreachable();
default: r = insert_constructor(curr); break;
}

Binary file not shown.

BIN
stage0/stdlib/Init/Data/Dyadic/Inv.c generated Normal file


Some files were not shown because too many files have changed in this diff.