mirror of
https://github.com/leanprover/lean4.git
synced 2026-03-18 02:44:12 +00:00
Compare commits: `vector` ... `insertionS` (47 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 0cce9ff119 | |
| | 6e41298f30 | |
| | 88e3a2b1ab | |
| | b378fe98a7 | |
| | 5f1ff42a15 | |
| | 30d01f7a9a | |
| | 81b85d8e2f | |
| | 5982a6d230 | |
| | cfc8c6ad8e | |
| | 8457fca519 | |
| | ac1197ff59 | |
| | 609346f5e0 | |
| | e3b05c13e1 | |
| | 04f80a1f9f | |
| | 0fc4ed91d1 | |
| | 7e9dd5668b | |
| | 66ebec97ca | |
| | 79f050b816 | |
| | afd398678b | |
| | 7791ec7844 | |
| | 8f0d0995d6 | |
| | e04e923b82 | |
| | 438a1dc989 | |
| | af4a3f2251 | |
| | 7692343720 | |
| | 597ef8cfee | |
| | 321e148f51 | |
| | ce692436f4 | |
| | 23bec25fce | |
| | 3d511a582a | |
| | 51015bf5c0 | |
| | 3ece36de9d | |
| | 54c48363ca | |
| | 0a22f8fa6f | |
| | f70b7e5722 | |
| | 9a17919ef1 | |
| | 606aeddf06 | |
| | 0eca3bd55d | |
| | 43dfc2a25f | |
| | 935fcfb6ec | |
| | 20acc72a29 | |
| | 9221d9d4db | |
| | c3948cba24 | |
| | 427dc66af3 | |
| | 5a23cefd80 | |
| | 5cfe1ca35b | |
| | 9052d3daef | |
@@ -103,10 +103,21 @@ your PR using rebase merge, bypassing the merge queue.

As written above, changes in meta code in the current stage usually will only
affect later stages. This is an issue in two specific cases.

* For the special case of *quotations*, it is desirable to have changes in builtin parsers affect them immediately: when the changes in the parser become active in the next stage, builtin macros implemented via quotations should generate syntax trees compatible with the new parser, and quotation patterns in builtin macros and elaborators should be able to match syntax created by the new parser and macros.
  Since quotations capture the syntax tree structure during execution of the current stage and turn it into code for the next stage, we need to run the current stage's builtin parsers in quotations via the interpreter for this to work.
  Caveats:
  * We activate this behavior by default when building stage 1 by setting `-Dinternal.parseQuotWithCurrentStage=true`.
    We force-disable it inside `macro/macro_rules/elab/elab_rules` via `suppressInsideQuot` as they are guaranteed not to run in the next stage and may need to be run in the current one, so the stage 0 parser is the correct one to use for them.
    It may be necessary to extend this disabling to functions that contain quotations and are (exclusively) used by one of the mentioned commands. A function using quotations should never be used by both builtin and non-builtin macros/elaborators. Example: https://github.com/leanprover/lean4/blob/f70b7e5722da6101572869d87832494e2f8534b7/src/Lean/Elab/Tactic/Config.lean#L118-L122
  * The parser needs to be reachable via an `import` statement, otherwise the version of the previous stage will silently be used.
  * Only the parser code (`Parser.fn`) is affected; all metadata such as leading tokens is taken from the previous stage.

  For an example, see https://github.com/leanprover/lean4/commit/f9dcbbddc48ccab22c7674ba20c5f409823b4cc1#diff-371387aed38bb02bf7761084fd9460e4168ae16d1ffe5de041b47d3ad2d22422R13

* For *non-builtin* meta code such as `notation`s or `macro`s in
  `Notation.lean`, we expect changes to affect the current file and all later
  files of the same stage immediately, just like outside the stdlib. To ensure
-  this, we need to build the stage using `-Dinterpreter.prefer_native=false` -
+  this, we build stage 1 using `-Dinterpreter.prefer_native=false` -
  otherwise, when executing a macro, the interpreter would notice that there is
  already a native symbol available for this function and run it instead of the
  new IR, but the symbol is from the previous stage!
@@ -124,26 +135,11 @@ affect later stages. This is an issue in two specific cases.

  further stages (e.g. after an `update-stage0`) will then need to be compiled
  with the flag set to `false` again since they will expect the new signature.

  For an example, see https://github.com/leanprover/lean4/commit/da4c46370d85add64ef7ca5e7cc4638b62823fbb.
  When enabling `prefer_native`, we usually want to *disable* `parseQuotWithCurrentStage` as it would otherwise make quotations use the interpreter after all.
  However, there is a specific case where we want to set both options to `true`: when we make changes to a non-builtin parser like `simp` that has a builtin elaborator, we cannot have the new parser be active outside of quotations in stage 1 as the builtin elaborator from stage 0 would not understand them; on the other hand, we need quotations in e.g. the builtin `simp` elaborator to produce the new syntax in the next stage.
  As this issue usually affects only tactics, enabling `debug.byAsSorry` instead of `prefer_native` can be a simpler solution.
* For the special case of *quotations*, it is desirable to have changes in
  built-in parsers affect them immediately: when the changes in the parser
  become active in the next stage, macros implemented via quotations should
  generate syntax trees compatible with the new parser, and quotation patterns
  in macros and elaborators should be able to match syntax created by the new
  parser and macros. Since quotations capture the syntax tree structure during
  execution of the current stage and turn it into code for the next stage, we
  need to run the current stage's built-in parsers in quotations via the
  interpreter for this to work. Caveats:
  * Since interpreting full parsers is not nearly as cheap and we rarely change
    built-in syntax, this needs to be opted in using `-Dinternal.parseQuotWithCurrentStage=true`.
  * The parser needs to be reachable via an `import` statement, otherwise the
    version of the previous stage will silently be used.
  * Only the parser code (`Parser.fn`) is affected; all metadata such as leading
    tokens is taken from the previous stage.

  For an example, see https://github.com/leanprover/lean4/commit/f9dcbbddc48ccab22c7674ba20c5f409823b4cc1#diff-371387aed38bb02bf7761084fd9460e4168ae16d1ffe5de041b47d3ad2d22422
  (from before the flag defaulted to `false`).
  For a `prefer_native` example, see https://github.com/leanprover/lean4/commit/da4c46370d85add64ef7ca5e7cc4638b62823fbb.

To modify either of these flags both for building and editing the stdlib, adjust
the code in `stage0/src/stdlib_flags.h`. The flags will automatically be reset
@@ -51,6 +51,8 @@ option(LLVM "LLVM" OFF)

option(USE_GITHASH "GIT_HASH" ON)
# When ON we install LICENSE files to CMAKE_INSTALL_PREFIX
option(INSTALL_LICENSE "INSTALL_LICENSE" ON)
+# When ON we install a copy of cadical
+option(INSTALL_CADICAL "Install a copy of cadical" ON)
# When ON thread storage is automatically finalized, it assumes platform support pthreads.
# This option is important when using Lean as library that is invoked from a different programming language (e.g., Haskell).
option(AUTO_THREAD_FINALIZATION "AUTO_THREAD_FINALIZATION" ON)

@@ -616,7 +618,7 @@ else()

  OUTPUT_NAME leancpp)
endif()

-if((${STAGE} GREATER 0) AND CADICAL)
+if((${STAGE} GREATER 0) AND CADICAL AND INSTALL_CADICAL)
  add_custom_target(copy-cadical
    COMMAND cmake -E copy_if_different "${CADICAL}" "${CMAKE_BINARY_DIR}/bin/cadical${CMAKE_EXECUTABLE_SUFFIX}")
  add_dependencies(leancpp copy-cadical)

@@ -738,7 +740,7 @@ file(COPY ${LEAN_SOURCE_DIR}/bin/leanmake DESTINATION ${CMAKE_BINARY_DIR}/bin)

install(DIRECTORY "${CMAKE_BINARY_DIR}/bin/" USE_SOURCE_PERMISSIONS DESTINATION bin)

-if (${STAGE} GREATER 0 AND CADICAL)
+if (${STAGE} GREATER 0 AND CADICAL AND INSTALL_CADICAL)
  install(PROGRAMS "${CADICAL}" DESTINATION bin)
endif()
@@ -19,3 +19,4 @@ import Init.Data.Array.GetLit

import Init.Data.Array.MapIdx
import Init.Data.Array.Set
import Init.Data.Array.Monadic
+import Init.Data.Array.FinRange
@@ -23,7 +23,7 @@ theorem foldlM_toList.aux [Monad m]

  · cases Nat.not_le_of_gt ‹_› (Nat.zero_add _ ▸ H)
  · rename_i i; rw [Nat.succ_add] at H
    simp [foldlM_toList.aux f arr i (j+1) H]
-    rw (occs := .pos [2]) [← List.getElem_cons_drop_succ_eq_drop ‹_›]
+    rw (occs := [2]) [← List.getElem_cons_drop_succ_eq_drop ‹_›]
    rfl
  · rw [List.drop_of_length_le (Nat.ge_of_not_lt ‹_›)]; rfl
14 src/Init/Data/Array/FinRange.lean (new file)
@@ -0,0 +1,14 @@

/-
Copyright (c) 2024 François G. Dorais. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: François G. Dorais
-/
prelude
import Init.Data.List.FinRange

namespace Array

/-- `finRange n` is the array of all elements of `Fin n` in order. -/
protected def finRange (n : Nat) : Array (Fin n) := ofFn fun i => i

end Array
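A minimal usage sketch for the new definition (assuming a build of this branch; `Fin.val` is used only to make the output readable):

```lean
#eval (Array.finRange 3).map Fin.val  -- #[0, 1, 2]
```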
@@ -5,24 +5,91 @@ Authors: Leonardo de Moura

-/
prelude
import Init.Data.Array.Basic
import Init.Data.Nat.Fold
import Init.Data.Vector.Lemmas

@[inline] def Array.insertionSort (a : Array α) (lt : α → α → Bool := by exact (· < ·)) : Array α :=
  traverse a 0 a.size
where
  @[specialize] traverse (a : Array α) (i : Nat) (fuel : Nat) : Array α :=
    match fuel with
    | 0 => a
    | fuel+1 =>
      if h : i < a.size then
        traverse (swapLoop a i h) (i+1) fuel
      else
        a
  @[specialize] swapLoop (a : Array α) (j : Nat) (h : j < a.size) : Array α :=
    match (generalizing := false) he : j with -- using `generalizing` because we don't want to refine the type of `h`
    | 0 => a
    | j'+1 =>
      have h' : j' < a.size := by subst j; exact Nat.lt_trans (Nat.lt_succ_self _) h
      if lt a[j] a[j'] then
        swapLoop (a.swap j j') j' (by rw [size_swap]; assumption; done)
      else
        a

namespace Vector

/-- Swap the `i`-th element repeatedly to the left, while the element to its left is not `lt` it. -/
@[specialize, inline] def swapLeftWhileLT {n} (a : Vector α n) (i : Nat) (h : i < n)
    (lt : α → α → Bool := by exact (· < ·)) : Vector α n :=
  match h' : i with
  | 0 => a
  | i'+1 =>
    if lt a[i] a[i'] then
      swapLeftWhileLT (a.swap i' i) i' (by omega) lt
    else
      a

end Vector

open Vector
namespace Array

/-- Sort an array in place using insertion sort. -/
@[inline] def insertionSort (a : Array α) (lt : α → α → Bool := by exact (· < ·)) : Array α :=
  a.size.fold (init := ⟨a, rfl⟩) (fun i h acc => swapLeftWhileLT acc i h lt) |>.toArray

/-- Insert an element into an array, after the last element which is not `lt` the inserted element. -/
def orderedInsert (a : Array α) (x : α) (lt : α → α → Bool := by exact (· < ·)) : Array α :=
  swapLeftWhileLT ⟨a.push x, rfl⟩ a.size (by simp) lt |>.toArray

end Array
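The new definitions can be exercised directly; a small sketch, assuming a build of this branch (outputs checked by hand against the definitions above):

```lean
#eval #[3, 1, 2].insertionSort    -- #[1, 2, 3]
#eval #[1, 3, 4].orderedInsert 2  -- #[1, 2, 3, 4]
```

`orderedInsert` keeps an already-sorted array sorted, which is exactly the relationship that `insertionSort_push` captures.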
/-! ### Verification -/

namespace Vector

theorem swapLeftWhileLT_push {n} (a : Vector α n) (x : α) (j : Nat) (h : j < n) :
    swapLeftWhileLT (a.push x) j (by omega) lt = (swapLeftWhileLT a j h lt).push x := by
  induction j generalizing a with
  | zero => simp [swapLeftWhileLT]
  | succ j ih =>
    simp [swapLeftWhileLT]
    split <;> rename_i h
    · rw [Vector.getElem_push_lt (by omega), Vector.getElem_push_lt (by omega)] at h
      rw [← Vector.push_swap, ih, if_pos h]
    · rw [Vector.getElem_push_lt (by omega), Vector.getElem_push_lt (by omega)] at h
      rw [if_neg h]

theorem swapLeftWhileLT_cast {n m} (a : Vector α n) (j : Nat) (h : j < n) (h' : n = m) :
    swapLeftWhileLT (a.cast h') j (by omega) lt = (swapLeftWhileLT a j h lt).cast h' := by
  subst h'
  simp

end Vector

namespace Array

@[simp] theorem size_insertionSort (a : Array α) : (a.insertionSort lt).size = a.size := by
  simp [insertionSort]

private theorem insertionSort_push' (a : Array α) (x : α) :
    (a.push x).insertionSort lt =
      (swapLeftWhileLT ⟨(a.insertionSort lt).push x, rfl⟩ a.size (by simp) lt).toArray := by
  rw [insertionSort, Nat.fold_congr (size_push a x), Nat.fold]
  have : (a.size.fold (fun i h acc => swapLeftWhileLT acc i (by simp; omega) lt) ⟨a.push x, rfl⟩) =
      ((a.size.fold (fun i h acc => swapLeftWhileLT acc i h lt) ⟨a, rfl⟩).push x).cast (by simp) := by
    rw [Vector.eq_cast_iff]
    simp only [Nat.fold_eq_finRange_foldl]
    rw [← List.foldl_hom (fun a => (Vector.push x a)) _ (fun v ⟨i, h⟩ => swapLeftWhileLT v i (by omega) lt)]
    rw [Vector.push_mk]
    rw [← List.foldl_hom (Vector.cast _) _ (fun v ⟨i, h⟩ => swapLeftWhileLT v i (by omega) lt)]
    · simp
    · intro v i
      simp only
      rw [swapLeftWhileLT_cast]
    · simp [swapLeftWhileLT_push]
  rw [this]
  simp only [Nat.lt_add_one, swapLeftWhileLT_cast, Vector.toArray_cast]
  unfold insertionSort
  simp only [Vector.push]
  congr
  all_goals simp

theorem insertionSort_push (a : Array α) (x : α) :
    (a.push x).insertionSort lt = (a.insertionSort lt).orderedInsert x lt := by
  rw [insertionSort_push', orderedInsert]
  simp

end Array
@@ -496,6 +496,11 @@ where

  simp only [← length_toList]
  simp

@[simp] theorem mapM_empty [Monad m] (f : α → m β) : mapM f #[] = pure #[] := by
  rw [mapM, mapM.map]; rfl

@[simp] theorem map_empty (f : α → β) : map f #[] = #[] := mapM_empty f

@[simp] theorem appendList_nil (arr : Array α) : arr ++ ([] : List α) = arr := Array.ext' (by simp)

@[simp] theorem appendList_cons (arr : Array α) (a : α) (l : List α) :

@@ -590,6 +595,20 @@ theorem getElem_set (a : Array α) (i : Nat) (h' : i < a.size) (v : α) (j : Nat

    (ne : i ≠ j) : (a.set i v)[j]? = a[j]? := by
  by_cases h : j < a.size <;> simp [getElem?_lt, getElem?_ge, Nat.ge_of_not_lt, ne, h]

theorem push_set (a : Array α) (x y : α) {i : Nat} {hi} :
    (a.set i x).push y = (a.push y).set i x (by simp; omega) := by
  ext j h₁ h₂
  · simp
  · if h' : j = a.size then
      rw [getElem_push, getElem_set_ne, dif_neg]
      all_goals simp_all <;> omega
    else
      rw [getElem_push_lt, getElem_set, getElem_set]
      split
      · rfl
      · rw [getElem_push_lt]
        simp_all; omega

/-! # setIfInBounds -/

@[simp] theorem set!_is_setIfInBounds : @set! = @setIfInBounds := rfl

@@ -1093,6 +1112,34 @@ theorem foldr_congr {as bs : Array α} (h₀ : as = bs) {f g : α → β → β}

    as.foldr f a start stop = bs.foldr g b start' stop' := by
  congr

theorem foldl_eq_foldlM (f : β → α → β) (b) (l : Array α) :
    l.foldl f b = l.foldlM (m := Id) f b := by
  cases l
  simp [List.foldl_eq_foldlM]

theorem foldr_eq_foldrM (f : α → β → β) (b) (l : Array α) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  cases l
  simp [List.foldr_eq_foldrM]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : Array α) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : Array α) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

theorem foldl_hom (f : α₁ → α₂) (g₁ : α₁ → β → α₁) (g₂ : α₂ → β → α₂) (l : Array β) (init : α₁)
    (H : ∀ x y, g₂ (f x) y = f (g₁ x y)) : l.foldl g₂ (f init) = f (l.foldl g₁ init) := by
  cases l
  simp
  rw [List.foldl_hom _ _ _ _ _ H]

theorem foldr_hom (f : β₁ → β₂) (g₁ : α → β₁ → β₁) (g₂ : α → β₂ → β₂) (l : Array α) (init : β₁)
    (H : ∀ x y, g₂ x (f y) = f (g₁ x y)) : l.foldr g₂ (f init) = f (l.foldr g₁ init) := by
  cases l
  simp
  rw [List.foldr_hom _ _ _ _ _ H]
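As a concrete (hypothetical) instance of `foldl_hom`, take the homomorphism `f := (· + 10)` with `g₁ = g₂ = (· + ·)`; the side condition `(x + 10) + y = (x + y) + 10` is discharged by `omega`. A sketch, not part of the diff:

```lean
-- Folding addition from a shifted initial value equals
-- shifting the result of the fold.
example : #[1, 2, 3].foldl (· + ·) (0 + 10) = #[1, 2, 3].foldl (· + ·) 0 + 10 :=
  Array.foldl_hom (· + 10) (· + ·) (· + ·) #[1, 2, 3] 0 (fun x y => by omega)
```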
/-! ### map -/

@[simp] theorem mem_map {f : α → β} {l : Array α} : b ∈ l.map f ↔ ∃ a, a ∈ l ∧ f a = b := by

@@ -1685,6 +1732,11 @@ theorem swap_comm (a : Array α) {i j : Nat} {hi hj} : a.swap i j hi hj = a.swap

  · split <;> simp_all
  · split <;> simp_all

theorem push_swap (a : Array α) (x : α) {i j : Nat} {hi hj} :
    (a.swap i j hi hj).push x = (a.push x).swap i j (by simp; omega) (by simp; omega) := by
  rw [swap_def, swap_def]
  simp [push_set, getElem_push_lt, hi, hj]

/-! ### eraseIdx -/

theorem eraseIdx_eq_eraseIdxIfInBounds {a : Array α} {i : Nat} (h : i < a.size) :

@@ -2032,6 +2084,20 @@ theorem foldr_filterMap (f : α → Option β) (g : β → γ → γ) (l : Array

  simp [List.foldr_filterMap]
  rfl

theorem foldl_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : Array α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldl f' (g a) = g (l.foldl f a) := by
  cases l
  simp
  rw [List.foldl_map' _ _ _ _ _ h]

theorem foldr_map' (g : α → β) (f : α → α → α) (f' : β → β → β) (a : α) (l : Array α)
    (h : ∀ x y, f' (g x) (g y) = g (f x y)) :
    (l.map g).foldr f' (g a) = g (l.foldr f a) := by
  cases l
  simp
  rw [List.foldr_map' _ _ _ _ _ h]

/-! ### flatten -/

@[simp] theorem flatten_empty : flatten (#[] : Array (Array α)) = #[] := rfl
@@ -1622,14 +1622,16 @@ theorem signExtend_eq (x : BitVec w) : x.signExtend w = x := by

/-- Sign extending to a larger bitwidth depends on the msb.
If the msb is false, then the result equals the original value.
If the msb is true, then we add a value of `(2^v - 2^w)`, which arises from the sign extension. -/
-theorem toNat_signExtend_of_le (x : BitVec w) {v : Nat} (hv : w ≤ v) :
+private theorem toNat_signExtend_of_le (x : BitVec w) {v : Nat} (hv : w ≤ v) :
    (x.signExtend v).toNat = x.toNat + if x.msb then 2^v - 2^w else 0 := by
  apply Nat.eq_of_testBit_eq
  intro i
  have ⟨k, hk⟩ := Nat.exists_eq_add_of_le hv
  rw [hk, testBit_toNat, getLsbD_signExtend, Nat.pow_add, ← Nat.mul_sub_one, Nat.add_comm (x.toNat)]
  by_cases hx : x.msb
-  · simp [hx, Nat.testBit_mul_pow_two_add _ x.isLt, testBit_toNat]
+  · simp only [hx, Bool.if_true_right, ↓reduceIte,
+      Nat.testBit_mul_pow_two_add _ x.isLt,
+      testBit_toNat, Nat.testBit_two_pow_sub_one]
    -- Case analysis on i being in the intervals [0..w), [w..w + k), [w+k..∞)
    have hi : i < w ∨ (w ≤ i ∧ i < w + k) ∨ w + k ≤ i := by omega
    rcases hi with hi | hi | hi

@@ -1637,7 +1639,8 @@ theorem toNat_signExtend_of_le (x : BitVec w) {v : Nat} (hv : w ≤ v) :

    · simp [hi]; omega
    · simp [hi, show ¬ (i < w + k) by omega, show ¬ (i < w) by omega]
      omega
-  · simp [hx, Nat.testBit_mul_pow_two_add _ x.isLt, testBit_toNat]
+  · simp only [hx, Bool.if_false_right,
+      Bool.false_eq_true, ↓reduceIte, Nat.zero_add, testBit_toNat]
    have hi : i < w ∨ (w ≤ i ∧ i < w + k) ∨ w + k ≤ i := by omega
    rcases hi with hi | hi | hi
    · simp [hi]; omega
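The `2^v - 2^w` offset in the statement of `toNat_signExtend_of_le` can be checked on a concrete value; a small sketch (not part of the diff):

```lean
-- 10#4 = 0b1010 has its msb set; extending to width 8 adds 2^8 - 2^4 = 240.
#eval ((10#4).signExtend 8).toNat  -- 250
-- 5#4 = 0b0101 has msb false, so the value is unchanged.
#eval ((5#4).signExtend 8).toNat   -- 5
```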
@@ -2758,12 +2761,6 @@ theorem getElem_rotateLeft {x : BitVec w} {r i : Nat} (h : i < w) :

    if h' : i < r % w then x[(w - (r % w) + i)] else x[i - (r % w)] := by
  simp [← BitVec.getLsbD_eq_getElem, h]

-/-- If `w ≤ x < 2 * w`, then `x % w = x - w` -/
-theorem mod_eq_sub_of_le_of_lt {x w : Nat} (x_le : w ≤ x) (x_lt : x < 2 * w) :
-    x % w = x - w := by
-  rw [Nat.mod_eq_sub_mod, Nat.mod_eq_of_lt (by omega)]
-  omega

theorem getMsbD_rotateLeftAux_of_lt {x : BitVec w} {r : Nat} {i : Nat} (hi : i < w - r) :
    (x.rotateLeftAux r).getMsbD i = x.getMsbD (r + i) := by
  rw [rotateLeftAux, getMsbD_or]

@@ -2773,6 +2770,20 @@ theorem getMsbD_rotateLeftAux_of_ge {x : BitVec w} {r : Nat} {i : Nat} (hi : i

    (x.rotateLeftAux r).getMsbD i = (decide (i < w) && x.getMsbD (i - (w - r))) := by
  simp [rotateLeftAux, getMsbD_or, show i + r ≥ w by omega, show ¬i < w - r by omega]

/--
If `w * n ≤ i < w * (n + 1)`, then `i - w * n` equals `i % w`.
This is true by subtracting `w * n` from the inequality, giving
`0 ≤ i - w * n < w`, which uniquely identifies `i % w`.
-/
private theorem Nat.sub_mul_eq_mod_of_lt_of_le (hlo : w * n ≤ i) (hhi : i < w * (n + 1)) :
    i - w * n = i % w := by
  rw [Nat.mod_def]
  congr
  symm
  apply Nat.div_eq_of_lt_le
    (by rw [Nat.mul_comm]; omega)
    (by rw [Nat.mul_comm]; omega)

/-- When `r < w`, we give a formula for `(x.rotateLeft r).getMsbD i`. -/
theorem getMsbD_rotateLeft_of_lt {n w : Nat} {x : BitVec w} (hi : r < w) :
    (x.rotateLeft r).getMsbD n = (decide (n < w) && x.getMsbD ((r + n) % w)) := by

@@ -2785,8 +2796,8 @@ theorem getMsbD_rotateLeft_of_lt {n w : Nat} {x : BitVec w} (hi : r < w) :

  by_cases h₁ : n < w + 1
  · simp only [h₁, decide_true, Bool.true_and]
    have h₂ : (r + n) < 2 * (w + 1) := by omega
    rw [mod_eq_sub_of_le_of_lt (by omega) (by omega)]
    congr 1
    rw [← Nat.sub_mul_eq_mod_of_lt_of_le (n := 1) (by omega) (by omega), Nat.mul_one]
    omega
  · simp [h₁]
@@ -3103,20 +3114,6 @@ theorem replicate_succ_eq {x : BitVec w} :

    (x ++ replicate n x).cast (by rw [Nat.mul_succ]; omega) := by
  simp [replicate]

-/--
-If `w * n ≤ i < w * (n + 1)`, then `i - w * n` equals `i % w`.
-This is true by subtracting `w * n` from the inequality, giving
-`0 ≤ i - w * n < w`, which uniquely identifies `i % w`.
--/
-private theorem Nat.sub_mul_eq_mod_of_lt_of_le (hlo : w * n ≤ i) (hhi : i < w * (n + 1)) :
-    i - w * n = i % w := by
-  rw [Nat.mod_def]
-  congr
-  symm
-  apply Nat.div_eq_of_lt_le
-    (by rw [Nat.mul_comm]; omega)
-    (by rw [Nat.mul_comm]; omega)

@[simp]
theorem getLsbD_replicate {n w : Nat} (x : BitVec w) :
    (x.replicate n).getLsbD i =

@@ -3222,6 +3219,11 @@ theorem toInt_neg_of_ne_intMin {x : BitVec w} (rs : x ≠ intMin w) :

  have := @Nat.two_pow_pred_mul_two w (by omega)
  split <;> split <;> omega

theorem toInt_neg_eq_ite {x : BitVec w} :
    (-x).toInt = if x = intMin w then x.toInt else -(x.toInt) := by
  by_cases hx : x = intMin w <;>
    simp [hx, neg_intMin, toInt_neg_of_ne_intMin]

theorem msb_intMin {w : Nat} : (intMin w).msb = decide (0 < w) := by
  simp only [msb_eq_decide, toNat_intMin, decide_eq_decide]
  by_cases h : 0 < w <;> simp_all
@@ -3355,6 +3357,73 @@ theorem getMsbD_abs {i : Nat} {x : BitVec w} :

    getMsbD (x.abs) i = if x.msb then getMsbD (-x) i else getMsbD x i := by
  by_cases h : x.msb <;> simp [BitVec.abs, h]

/-
The absolute value of `x : BitVec w` is naively a case split on the sign of `x`.
However, recall that when `x = intMin w`, `-x = x`.
Thus, the full value of `abs x` is computed by the case split:
- If `x : BitVec w` is `intMin`, then its absolute value is also `intMin w`, and
  thus `toInt` will equal `intMin.toInt`.
- Otherwise, if `x` is negative, then `x.abs.toInt = (-x).toInt`.
- If `x` is positive, then `x.abs.toInt = x.toInt`.
-/
theorem toInt_abs_eq_ite {x : BitVec w} :
    x.abs.toInt =
      if x = intMin w then (intMin w).toInt
      else if x.msb then -x.toInt
      else x.toInt := by
  by_cases hx : x = intMin w
  · simp [hx]
  · simp [hx]
    by_cases hx₂ : x.msb
    · simp [hx₂, abs_eq, toInt_neg_of_ne_intMin hx]
    · simp [hx₂, abs_eq]

/-
The absolute value of `x : BitVec w` is a case split on the sign of `x`, when `x ≠ intMin w`.
This is a variant of `toInt_abs_eq_ite`.
-/
theorem toInt_abs_eq_ite_of_ne_intMin {x : BitVec w} (hx : x ≠ intMin w) :
    x.abs.toInt = if x.msb then -x.toInt else x.toInt := by
  simp [toInt_abs_eq_ite, hx]

/--
The absolute value of `x : BitVec w`, interpreted as an integer, is a case split:
- When `x = intMin w`, then `x.abs = intMin w`
- Otherwise, `x.abs.toInt` equals the absolute value (`x.toInt.natAbs`).

This is a simpler version of `BitVec.toInt_abs_eq_ite`, which hides a case split on `x.msb`.
-/
theorem toInt_abs_eq_natAbs {x : BitVec w} : x.abs.toInt =
    if x = intMin w then (intMin w).toInt else x.toInt.natAbs := by
  rw [toInt_abs_eq_ite]
  by_cases hx : x = intMin w
  · simp [hx]
  · simp [hx]
    by_cases h : x.msb
    · simp only [h, ↓reduceIte]
      have : x.toInt < 0 := by
        rw [toInt_neg_iff]
        have := msb_eq_true_iff_two_mul_ge.mp h
        omega
      omega
    · simp only [h, Bool.false_eq_true, ↓reduceIte]
      have : 0 ≤ x.toInt := by
        rw [toInt_pos_iff]
        exact msb_eq_false_iff_two_mul_lt.mp (by simp [h])
      omega

/-
The absolute value of `x : BitVec w`, when interpreted as an integer,
is the absolute value of `x.toInt` when `x ≠ intMin w`.
-/
theorem toInt_abs_eq_natAbs_of_ne_intMin {x : BitVec w} (hx : x ≠ intMin w) :
    x.abs.toInt = x.toInt.natAbs := by
  simp [toInt_abs_eq_natAbs, hx]
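The `intMin` corner case in these lemmas can be seen concretely; a hedged sketch (not part of the diff):

```lean
-- At width 4, intMin is 8#4, and negating it wraps back to itself,
-- so abs does not make it positive.
#eval (BitVec.intMin 4).abs           -- 8#4
-- For any other value, abs agrees with Int.natAbs on toInt.
#eval (-(3#4)).abs.toInt              -- 3
```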
/-! ### Decidable quantifiers -/

theorem forall_zero_iff {P : BitVec 0 → Prop} :
@@ -13,17 +13,17 @@ namespace Fin

/-- Folds over `Fin n` from the left: `foldl 3 f x = f (f (f x 0) 1) 2`. -/
@[inline] def foldl (n) (f : α → Fin n → α) (init : α) : α := loop init 0 where
  /-- Inner loop for `Fin.foldl`. `Fin.foldl.loop n f x i = f (f (f x i) ...) (n-1)` -/
-  loop (x : α) (i : Nat) : α :=
+  @[semireducible] loop (x : α) (i : Nat) : α :=
    if h : i < n then loop (f x ⟨i, h⟩) (i+1) else x
  termination_by n - i
  decreasing_by decreasing_trivial_pre_omega

/-- Folds over `Fin n` from the right: `foldr 3 f x = f 0 (f 1 (f 2 x))`. -/
-@[inline] def foldr (n) (f : Fin n → α → α) (init : α) : α := loop ⟨n, Nat.le_refl n⟩ init where
+@[inline] def foldr (n) (f : Fin n → α → α) (init : α) : α := loop n (Nat.le_refl n) init where
  /-- Inner loop for `Fin.foldr`. `Fin.foldr.loop n f i x = f 0 (f ... (f (i-1) x))` -/
-  loop : {i // i ≤ n} → α → α
-    | ⟨0, _⟩, x => x
-    | ⟨i+1, h⟩, x => loop ⟨i, Nat.le_of_lt h⟩ (f ⟨i, h⟩ x)
+  loop : (i : _) → i ≤ n → α → α
+    | 0, _, x => x
+    | i+1, h, x => loop i (Nat.le_of_lt h) (f ⟨i, h⟩ x)
+  termination_by structural i => i
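The evaluation orders given in the docstrings can be made visible with a fold that records the order of visited indices; a small sketch (not part of the diff):

```lean
-- foldl visits 0, 1, 2 left to right: ((0*10+0)*10+1)*10+2
#eval Fin.foldl 3 (fun s i => s * 10 + i.val) 0  -- 12
-- foldr computes f 0 (f 1 (f 2 0)), so index 2 is folded first.
#eval Fin.foldr 3 (fun i s => s * 10 + i.val) 0  -- 210
```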
/--
Folds a monadic function over `Fin n` from left to right:

@@ -176,17 +176,19 @@ theorem foldl_eq_foldlM (f : α → Fin n → α) (x) :

/-! ### foldr -/

theorem foldr_loop_zero (f : Fin n → α → α) (x) :
-    foldr.loop n f ⟨0, Nat.zero_le _⟩ x = x := by
+    foldr.loop n f 0 (Nat.zero_le _) x = x := by
  rw [foldr.loop]

theorem foldr_loop_succ (f : Fin n → α → α) (x) (h : i < n) :
-    foldr.loop n f ⟨i+1, h⟩ x = foldr.loop n f ⟨i, Nat.le_of_lt h⟩ (f ⟨i, h⟩ x) := by
+    foldr.loop n f (i+1) h x = foldr.loop n f i (Nat.le_of_lt h) (f ⟨i, h⟩ x) := by
  rw [foldr.loop]

theorem foldr_loop (f : Fin (n+1) → α → α) (x) (h : i+1 ≤ n+1) :
-    foldr.loop (n+1) f ⟨i+1, h⟩ x =
-      f 0 (foldr.loop n (fun j => f j.succ) ⟨i, Nat.le_of_succ_le_succ h⟩ x) := by
-  induction i generalizing x <;> simp [foldr_loop_zero, foldr_loop_succ, *]
+    foldr.loop (n+1) f (i+1) h x =
+      f 0 (foldr.loop n (fun j => f j.succ) i (Nat.le_of_succ_le_succ h) x) := by
+  induction i generalizing x with
+  | zero => simp [foldr_loop_succ, foldr_loop_zero]
+  | succ i ih => rw [foldr_loop_succ, ih]; rfl

@[simp] theorem foldr_zero (f : Fin 0 → α → α) (x) : foldr 0 f x = x :=
  foldr_loop_zero ..
@@ -26,3 +26,4 @@ import Init.Data.List.Sort

import Init.Data.List.ToArray
import Init.Data.List.MapIdx
import Init.Data.List.OfFn
+import Init.Data.List.FinRange
@@ -231,7 +231,7 @@ theorem ext_get? : ∀ {l₁ l₂ : List α}, (∀ n, l₁.get? n = l₂.get? n)

  injection h0 with aa; simp only [aa, ext_get? fun n => h (n+1)]

/-- Deprecated alias for `ext_get?`. The preferred extensionality theorem is now `ext_getElem?`. -/
-@[deprecated (since := "2024-06-07")] abbrev ext := @ext_get?
+@[deprecated ext_get? (since := "2024-06-07")] abbrev ext := @ext_get?

/-! ### getD -/

@@ -682,7 +682,7 @@ theorem elem_cons [BEq α] {a : α} :

    (b::bs).elem a = match a == b with | true => true | false => bs.elem a := rfl

/-- `notElem a l` is `!(elem a l)`. -/
-@[deprecated (since := "2024-06-15")]
+@[deprecated "Use `!(elem a l)` instead." (since := "2024-06-15")]
def notElem [BEq α] (a : α) (as : List α) : Bool :=
  !(as.elem a)
48 src/Init/Data/List/FinRange.lean (new file)
@@ -0,0 +1,48 @@

/-
Copyright (c) 2024 François G. Dorais. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: François G. Dorais
-/
prelude
import Init.Data.List.OfFn

namespace List

/-- `finRange n` lists all elements of `Fin n` in order -/
def finRange (n : Nat) : List (Fin n) := ofFn fun i => i

@[simp] theorem length_finRange (n) : (List.finRange n).length = n := by
  simp [List.finRange]

@[simp] theorem getElem_finRange (i : Nat) (h : i < (List.finRange n).length) :
    (finRange n)[i] = Fin.cast (length_finRange n) ⟨i, h⟩ := by
  simp [List.finRange]

@[simp] theorem finRange_zero : finRange 0 = [] := by simp [finRange, ofFn]

theorem finRange_succ (n) : finRange (n+1) = 0 :: (finRange n).map Fin.succ := by
  apply List.ext_getElem; simp; intro i; cases i <;> simp

theorem finRange_succ_last (n) :
    finRange (n+1) = (finRange n).map Fin.castSucc ++ [Fin.last n] := by
  apply List.ext_getElem
  · simp
  · intros
    simp only [List.finRange, List.getElem_ofFn, getElem_append, length_map, length_ofFn,
      getElem_map, Fin.castSucc_mk, getElem_singleton]
    split
    · rfl
    · next h => exact Fin.eq_last_of_not_lt h

theorem finRange_reverse (n) : (finRange n).reverse = (finRange n).map Fin.rev := by
  induction n with
  | zero => simp
  | succ n ih =>
    conv => lhs; rw [finRange_succ_last]
    conv => rhs; rw [finRange_succ]
    rw [reverse_append, reverse_cons, reverse_nil, nil_append, singleton_append, ← map_reverse,
      map_cons, ih, map_map, map_map]
    congr; funext
    simp [Fin.rev_succ]

end List
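The two decompositions `finRange_succ` and `finRange_succ_last` can be seen on a small instance; a quick sketch (not part of the diff, `Fin.val` only for readable output):

```lean
#eval (List.finRange 3).map Fin.val            -- [0, 1, 2]
-- finRange_reverse: reversing equals mapping Fin.rev.
#eval ((List.finRange 3).reverse).map Fin.val  -- [2, 1, 0]
```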
|
@@ -101,7 +101,7 @@ theorem tail_eq_of_cons_eq (H : h₁ :: t₁ = h₂ :: t₂) : t₁ = t₂ := (c
theorem cons_inj_right (a : α) {l l' : List α} : a :: l = a :: l' ↔ l = l' :=
  ⟨tail_eq_of_cons_eq, congrArg _⟩

@[deprecated (since := "2024-06-15")] abbrev cons_inj := @cons_inj_right
@[deprecated cons_inj_right (since := "2024-06-15")] abbrev cons_inj := @cons_inj_right

theorem cons_eq_cons {a b : α} {l l' : List α} : a :: l = b :: l' ↔ a = b ∧ l = l' :=
  List.cons.injEq .. ▸ .rfl
@@ -171,7 +171,7 @@ theorem get_cons_succ {as : List α} {h : i + 1 < (a :: as).length} :
theorem get_cons_succ' {as : List α} {i : Fin as.length} :
    (a :: as).get i.succ = as.get i := rfl

@[deprecated (since := "2024-07-09")]
@[deprecated "Deprecated without replacement." (since := "2024-07-09")]
theorem get_cons_cons_one : (a₁ :: a₂ :: as).get (1 : Fin (as.length + 2)) = a₂ := rfl

theorem get_mk_zero : ∀ {l : List α} (h : 0 < l.length), l.get ⟨0, h⟩ = l.head (length_pos.mp h)
@@ -791,6 +791,24 @@ theorem mem_or_eq_of_mem_set : ∀ {l : List α} {n : Nat} {a b : α}, a ∈ l.s
  · intro a
    simp

@[simp] theorem beq_nil_iff [BEq α] {l : List α} : (l == []) = l.isEmpty := by
  cases l <;> rfl

@[simp] theorem nil_beq_iff [BEq α] {l : List α} : ([] == l) = l.isEmpty := by
  cases l <;> rfl

@[simp] theorem cons_beq_cons [BEq α] {a b : α} {l₁ l₂ : List α} :
    (a :: l₁ == b :: l₂) = (a == b && l₁ == l₂) := rfl

theorem length_eq_of_beq [BEq α] {l₁ l₂ : List α} (h : l₁ == l₂) : l₁.length = l₂.length :=
  match l₁, l₂ with
  | [], [] => rfl
  | [], _ :: _ => by simp [beq_nil_iff] at h
  | _ :: _, [] => by simp [nil_beq_iff] at h
  | a :: l₁, b :: l₂ => by
    simp at h
    simpa [Nat.add_one_inj] using length_eq_of_beq h.2

/-! ### Lexicographic ordering -/

protected theorem lt_irrefl [LT α] (lt_irrefl : ∀ x : α, ¬x < x) (l : List α) : ¬l < l := by
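A small usage sketch of the new `length_eq_of_beq` (not part of the diff; the list literal is illustrative):

```lean
-- A successful `==` between lists forces their lengths to agree.
example {l : List Nat} (h : [1, 2] == l) : l.length = 2 :=
  (List.length_eq_of_beq h).symm
```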
@@ -856,6 +874,12 @@ theorem foldr_eq_foldrM (f : α → β → β) (b) (l : List α) :
    l.foldr f b = l.foldrM (m := Id) f b := by
  induction l <;> simp [*, foldr]

@[simp] theorem id_run_foldlM (f : β → α → Id β) (b) (l : List α) :
    Id.run (l.foldlM f b) = l.foldl f b := (foldl_eq_foldlM f b l).symm

@[simp] theorem id_run_foldrM (f : α → β → Id β) (b) (l : List α) :
    Id.run (l.foldrM f b) = l.foldr f b := (foldr_eq_foldrM f b l).symm

/-! ### foldl and foldr -/

@[simp] theorem foldr_cons_eq_append (l : List α) : l.foldr cons l' = l ++ l' := by
@@ -1800,7 +1824,7 @@ theorem getElem_append_right' (l₁ : List α) {l₂ : List α} {n : Nat} (hn :
    l₂[n] = (l₁ ++ l₂)[n + l₁.length]'(by simpa [Nat.add_comm] using Nat.add_lt_add_left hn _) := by
  rw [getElem_append_right] <;> simp [*, le_add_left]

@[deprecated (since := "2024-06-12")]
@[deprecated "Deprecated without replacement." (since := "2024-06-12")]
theorem get_append_right_aux {l₁ l₂ : List α} {n : Nat}
    (h₁ : l₁.length ≤ n) (h₂ : n < (l₁ ++ l₂).length) : n - l₁.length < l₂.length := by
  rw [length_append] at h₂
@@ -1817,7 +1841,7 @@ theorem getElem_of_append {l : List α} (eq : l = l₁ ++ a :: l₂) (h : l₁.l
  rw [← getElem?_eq_getElem, eq, getElem?_append_right (h ▸ Nat.le_refl _), h]
  simp

@[deprecated (since := "2024-06-12")]
@[deprecated "Deprecated without replacement." (since := "2024-06-12")]
theorem get_of_append_proof {l : List α}
    (eq : l = l₁ ++ a :: l₂) (h : l₁.length = n) : n < length l := eq ▸ h ▸ by simp_arith

@@ -3333,10 +3357,10 @@ theorem any_eq_not_all_not (l : List α) (p : α → Bool) : l.any p = !l.all (!
theorem all_eq_not_any_not (l : List α) (p : α → Bool) : l.all p = !l.any (!p .) := by
  simp only [not_any_eq_all_not, Bool.not_not]

@[simp] theorem any_map {l : List α} {p : α → Bool} : (l.map f).any p = l.any (p ∘ f) := by
@[simp] theorem any_map {l : List α} {p : β → Bool} : (l.map f).any p = l.any (p ∘ f) := by
  induction l with simp | cons _ _ ih => rw [ih]

@[simp] theorem all_map {l : List α} {p : α → Bool} : (l.map f).all p = l.all (p ∘ f) := by
@[simp] theorem all_map {l : List α} {p : β → Bool} : (l.map f).all p = l.all (p ∘ f) := by
  induction l with simp | cons _ _ ih => rw [ih]

@[simp] theorem any_filter {l : List α} {p q : α → Bool} :

@@ -293,7 +293,7 @@ theorem sorted_mergeSort
  apply sorted_mergeSort trans total
termination_by l => l.length

@[deprecated (since := "2024-09-02")] abbrev mergeSort_sorted := @sorted_mergeSort
@[deprecated sorted_mergeSort (since := "2024-09-02")] abbrev mergeSort_sorted := @sorted_mergeSort

/--
If the input list is already sorted, then `mergeSort` does not change the list.
@@ -429,7 +429,8 @@ theorem sublist_mergeSort
    ((fun w => Sublist.of_sublist_append_right w h') fun b m₁ m₃ =>
      (Bool.eq_not_self true).mp ((rel_of_pairwise_cons hc m₁).symm.trans (h₃ b m₃))))

@[deprecated (since := "2024-09-02")] abbrev mergeSort_stable := @sublist_mergeSort
@[deprecated sublist_mergeSort (since := "2024-09-02")]
abbrev mergeSort_stable := @sublist_mergeSort

/--
Another statement of stability of merge sort.
@@ -442,7 +443,8 @@ theorem pair_sublist_mergeSort
    (hab : le a b) (h : [a, b] <+ l) : [a, b] <+ mergeSort l le :=
  sublist_mergeSort trans total (pairwise_pair.mpr hab) h

@[deprecated (since := "2024-09-02")] abbrev mergeSort_stable_pair := @pair_sublist_mergeSort
@[deprecated pair_sublist_mergeSort (since := "2024-09-02")]
abbrev mergeSort_stable_pair := @pair_sublist_mergeSort

theorem map_merge {f : α → β} {r : α → α → Bool} {s : β → β → Bool} {l l' : List α}
    (hl : ∀ a ∈ l, ∀ b ∈ l', r a b = s (f a) (f b)) :

@@ -835,7 +835,7 @@ theorem isPrefix_iff : l₁ <+: l₂ ↔ ∀ i (h : i < l₁.length), l₂[i]? =
    simpa using ⟨0, by simp⟩
  | cons b l₂ =>
    simp only [cons_append, cons_prefix_cons, ih]
    rw (occs := .pos [2]) [← Nat.and_forall_add_one]
    rw (occs := [2]) [← Nat.and_forall_add_one]
    simp [Nat.succ_lt_succ_iff, eq_comm]

theorem isPrefix_iff_getElem {l₁ l₂ : List α} :

@@ -224,7 +224,7 @@ theorem take_succ {l : List α} {n : Nat} : l.take (n + 1) = l.take n ++ l[n]?.t
  · simp only [take, Option.toList, getElem?_cons_zero, nil_append]
  · simp only [take, hl, getElem?_cons_succ, cons_append]

@[deprecated (since := "2024-07-25")]
@[deprecated "Deprecated without replacement." (since := "2024-07-25")]
theorem drop_sizeOf_le [SizeOf α] (l : List α) (n : Nat) : sizeOf (l.drop n) ≤ sizeOf l := by
  induction l generalizing n with
  | nil => rw [drop_nil]; apply Nat.le_refl

@@ -789,7 +789,7 @@ theorem pred_lt_of_lt {n m : Nat} (h : m < n) : pred n < n :=
  pred_lt (not_eq_zero_of_lt h)

set_option linter.missingDocs false in
@[deprecated (since := "2024-06-01")] abbrev pred_lt' := @pred_lt_of_lt
@[deprecated pred_lt_of_lt (since := "2024-06-01")] abbrev pred_lt' := @pred_lt_of_lt

theorem sub_one_lt_of_lt {n m : Nat} (h : m < n) : n - 1 < n :=
  sub_one_lt (not_eq_zero_of_lt h)
@@ -1075,7 +1075,7 @@ theorem pred_mul (n m : Nat) : pred n * m = n * m - m := by
  | succ n => rw [Nat.pred_succ, succ_mul, Nat.add_sub_cancel]

set_option linter.missingDocs false in
@[deprecated (since := "2024-06-01")] abbrev mul_pred_left := @pred_mul
@[deprecated pred_mul (since := "2024-06-01")] abbrev mul_pred_left := @pred_mul

protected theorem sub_one_mul (n m : Nat) : (n - 1) * m = n * m - m := by
  cases n with
@@ -1087,7 +1087,7 @@ theorem mul_pred (n m : Nat) : n * pred m = n * m - n := by
  rw [Nat.mul_comm, pred_mul, Nat.mul_comm]

set_option linter.missingDocs false in
@[deprecated (since := "2024-06-01")] abbrev mul_pred_right := @mul_pred
@[deprecated mul_pred (since := "2024-06-01")] abbrev mul_pred_right := @mul_pred

theorem mul_sub_one (n m : Nat) : n * (m - 1) = n * m - n := by
  rw [Nat.mul_comm, Nat.sub_one_mul, Nat.mul_comm]

@@ -92,7 +92,7 @@ protected theorem div_mul_cancel {n m : Nat} (H : n ∣ m) : m / n * n = m := by
  rw [Nat.mul_comm, Nat.mul_div_cancel' H]

@[simp] theorem mod_mod_of_dvd (a : Nat) (h : c ∣ b) : a % b % c = a % c := by
  rw (occs := .pos [2]) [← mod_add_div a b]
  rw (occs := [2]) [← mod_add_div a b]
  have ⟨x, h⟩ := h
  subst h
  rw [Nat.mul_assoc, add_mul_mod_self_left]

@@ -5,6 +5,7 @@ Authors: Floris van Doorn, Leonardo de Moura, Kim Morrison
-/
prelude
import Init.Omega
import Init.Data.List.FinRange

set_option linter.missingDocs true -- keep it documented
universe u
@@ -137,6 +138,54 @@ theorem allTR_loop_congr {n m : Nat} (w : n = m) (f : (i : Nat) → i < n → Bo
    omega
  go n 0 f

@[simp] theorem fold_zero {α : Type u} (f : (i : Nat) → i < 0 → α → α) (init : α) :
    fold 0 f init = init := by simp [fold]

@[simp] theorem fold_succ {α : Type u} (n : Nat) (f : (i : Nat) → i < n + 1 → α → α) (init : α) :
    fold (n + 1) f init = f n (by omega) (fold n (fun i h => f i (by omega)) init) := by simp [fold]

theorem fold_eq_finRange_foldl {α : Type u} (n : Nat) (f : (i : Nat) → i < n → α → α) (init : α) :
    fold n f init = (List.finRange n).foldl (fun acc ⟨i, h⟩ => f i h acc) init := by
  induction n with
  | zero => simp
  | succ n ih =>
    simp [ih, List.finRange_succ_last, List.foldl_map]

@[simp] theorem foldRev_zero {α : Type u} (f : (i : Nat) → i < 0 → α → α) (init : α) :
    foldRev 0 f init = init := by simp [foldRev]

@[simp] theorem foldRev_succ {α : Type u} (n : Nat) (f : (i : Nat) → i < n + 1 → α → α) (init : α) :
    foldRev (n + 1) f init = foldRev n (fun i h => f i (by omega)) (f n (by omega) init) := by
  simp [foldRev]

theorem foldRev_eq_finRange_foldr {α : Type u} (n : Nat) (f : (i : Nat) → i < n → α → α) (init : α) :
    foldRev n f init = (List.finRange n).foldr (fun ⟨i, h⟩ acc => f i h acc) init := by
  induction n generalizing init with
  | zero => simp
  | succ n ih => simp [ih, List.finRange_succ_last, List.foldr_map]

@[simp] theorem any_zero {f : (i : Nat) → i < 0 → Bool} : any 0 f = false := by simp [any]

@[simp] theorem any_succ {n : Nat} (f : (i : Nat) → i < n + 1 → Bool) :
    any (n + 1) f = (any n (fun i h => f i (by omega)) || f n (by omega)) := by simp [any]

theorem any_eq_finRange_any {n : Nat} (f : (i : Nat) → i < n → Bool) :
    any n f = (List.finRange n).any (fun ⟨i, h⟩ => f i h) := by
  induction n with
  | zero => simp
  | succ n ih => simp [ih, List.finRange_succ_last, List.any_map, Function.comp_def]

@[simp] theorem all_zero {f : (i : Nat) → i < 0 → Bool} : all 0 f = true := by simp [all]

@[simp] theorem all_succ {n : Nat} (f : (i : Nat) → i < n + 1 → Bool) :
    all (n + 1) f = (all n (fun i h => f i (by omega)) && f n (by omega)) := by simp [all]

theorem all_eq_finRange_all {n : Nat} (f : (i : Nat) → i < n → Bool) :
    all n f = (List.finRange n).all (fun ⟨i, h⟩ => f i h) := by
  induction n with
  | zero => simp
  | succ n ih => simp [ih, List.finRange_succ_last, List.all_map, Function.comp_def]

end Nat

namespace Prod

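A concrete sketch of the bridge lemmas above (not part of the diff; it assumes the argument order `Nat.fold n f init` shown in the hunk, with `f` receiving a bound proof):

```lean
-- fold 3 f 0 unfolds to f 2 (f 1 (f 0 0)), so summing indices gives 0 + 0 + 1 + 2 = 3;
-- any 4 succeeds because the predicate holds at i = 3.
example : Nat.fold 3 (fun i _ acc => acc + i) 0 = 3 := by decide
example : Nat.any 4 (fun i _ => i == 3) = true := by decide
```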
@@ -651,8 +651,8 @@ theorem sub_mul_mod {x k n : Nat} (h₁ : n*k ≤ x) : (x - n*k) % n = x % n :=
  | .inr npos => Nat.mod_eq_of_lt (mod_lt _ npos)

theorem mul_mod (a b n : Nat) : a * b % n = (a % n) * (b % n) % n := by
  rw (occs := .pos [1]) [← mod_add_div a n]
  rw (occs := .pos [1]) [← mod_add_div b n]
  rw (occs := [1]) [← mod_add_div a n]
  rw (occs := [1]) [← mod_add_div b n]
  rw [Nat.add_mul, Nat.mul_add, Nat.mul_add,
    Nat.mul_assoc, Nat.mul_assoc, ← Nat.mul_add n, add_mul_mod_self_left,
    Nat.mul_comm _ (n * (b / n)), Nat.mul_assoc, add_mul_mod_self_left]
@@ -679,6 +679,10 @@ theorem add_mod (a b n : Nat) : (a + b) % n = ((a % n) + (b % n)) % n := by
@[simp] theorem mod_mul_mod {a b c : Nat} : (a % c * b) % c = a * b % c := by
  rw [mul_mod, mod_mod, ← mul_mod]

theorem mod_eq_sub (x w : Nat) : x % w = x - w * (x / w) := by
  conv => rhs; congr; rw [← mod_add_div x w]
  simp

/-! ### pow -/

theorem pow_succ' {m n : Nat} : m ^ n.succ = m * m ^ n := by
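A numeric sketch of the new `mod_eq_sub` (illustrative, not part of the diff): for `x = 17` and `w = 5`, `17 % 5 = 17 - 5 * (17 / 5) = 17 - 15 = 2`.

```lean
-- The general lemma instantiated at concrete numerals.
example : 17 % 5 = 17 - 5 * (17 / 5) := Nat.mod_eq_sub 17 5
```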
@@ -846,6 +850,18 @@ protected theorem pow_lt_pow_iff_pow_mul_le_pow {a n m : Nat} (h : 1 < a) :
  rw [←Nat.pow_add_one, Nat.pow_le_pow_iff_right (by omega), Nat.pow_lt_pow_iff_right (by omega)]
  omega

protected theorem lt_pow_self {n a : Nat} (h : 1 < a) : n < a ^ n := by
  induction n with
  | zero => exact Nat.zero_lt_one
  | succ _ ih => exact Nat.lt_of_lt_of_le (Nat.add_lt_add_right ih 1) (Nat.pow_lt_pow_succ h)

protected theorem lt_two_pow_self : n < 2 ^ n :=
  Nat.lt_pow_self Nat.one_lt_two

@[simp]
protected theorem mod_two_pow_self : n % 2 ^ n = n :=
  Nat.mod_eq_of_lt Nat.lt_two_pow_self

@[simp]
theorem two_pow_pred_mul_two (h : 0 < w) :
    2 ^ (w - 1) * 2 = 2 ^ w := by

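A sketch of how the new lemmas compose (not part of the diff): `Nat.lt_two_pow_self` is the base-2 specialization of the more general `Nat.lt_pow_self`, which works for any base `a > 1`.

```lean
-- Base 2 comes for free; other bases need only a proof that the base exceeds 1.
example : 10 < 2 ^ 10 := Nat.lt_two_pow_self
example : 10 < 3 ^ 10 := Nat.lt_pow_self (by decide)
```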
@@ -246,6 +246,12 @@ instance (a b : UInt64) : Decidable (a ≤ b) := UInt64.decLe a b
instance : Max UInt64 := maxOfLe
instance : Min UInt64 := minOfLe

theorem usize_size_le : USize.size ≤ 18446744073709551616 := by
  cases usize_size_eq <;> next h => rw [h]; decide

theorem le_usize_size : 4294967296 ≤ USize.size := by
  cases usize_size_eq <;> next h => rw [h]; decide

@[extern "lean_usize_mul"]
def USize.mul (a b : USize) : USize := ⟨a.toBitVec * b.toBitVec⟩
@[extern "lean_usize_div"]
@@ -264,10 +270,29 @@ def USize.xor (a b : USize) : USize := ⟨a.toBitVec ^^^ b.toBitVec⟩
def USize.shiftLeft (a b : USize) : USize := ⟨a.toBitVec <<< (mod b (USize.ofNat System.Platform.numBits)).toBitVec⟩
@[extern "lean_usize_shift_right"]
def USize.shiftRight (a b : USize) : USize := ⟨a.toBitVec >>> (mod b (USize.ofNat System.Platform.numBits)).toBitVec⟩
/--
Upcast a `Nat` less than `2^32` to a `USize`.
This is lossless because `USize.size` is either `2^32` or `2^64`.
This function is overridden with a native implementation.
-/
@[extern "lean_usize_of_nat"]
def USize.ofNat32 (n : @& Nat) (h : n < 4294967296) : USize :=
  USize.ofNatCore n (Nat.lt_of_lt_of_le h le_usize_size)
@[extern "lean_uint32_to_usize"]
def UInt32.toUSize (a : UInt32) : USize := USize.ofNat32 a.toBitVec.toNat a.toBitVec.isLt
@[extern "lean_usize_to_uint32"]
def USize.toUInt32 (a : USize) : UInt32 := a.toNat.toUInt32
/-- Converts a `UInt64` to a `USize` by reducing modulo `USize.size`. -/
@[extern "lean_uint64_to_usize"]
def UInt64.toUSize (a : UInt64) : USize := a.toNat.toUSize
/--
Upcast a `USize` to a `UInt64`.
This is lossless because `USize.size` is either `2^32` or `2^64`.
This function is overridden with a native implementation.
-/
@[extern "lean_usize_to_uint64"]
def USize.toUInt64 (a : USize) : UInt64 :=
  UInt64.ofNatCore a.toBitVec.toNat (Nat.lt_of_lt_of_le a.toBitVec.isLt usize_size_le)

instance : Mul USize := ⟨USize.mul⟩
instance : Mod USize := ⟨USize.mod⟩

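A usage sketch of the relocated `USize.ofNat32` (not part of the diff): the caller now discharges the `n < 2^32` side condition directly, and the bound lifts to `USize.size` via `le_usize_size`.

```lean
-- The proof obligation is a closed numeral comparison, so `decide` settles it.
#check (USize.ofNat32 5 (by decide) : USize)
```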
@@ -94,10 +94,8 @@ def UInt32.toUInt64 (a : UInt32) : UInt64 := ⟨⟨a.toNat, Nat.lt_trans a.toBit

instance UInt64.instOfNat : OfNat UInt64 n := ⟨UInt64.ofNat n⟩

theorem usize_size_gt_zero : USize.size > 0 := by
  cases usize_size_eq with
  | inl h => rw [h]; decide
  | inr h => rw [h]; decide
@[deprecated usize_size_pos (since := "2024-11-24")] theorem usize_size_gt_zero : USize.size > 0 :=
  usize_size_pos

def USize.val (x : USize) : Fin USize.size := x.toBitVec.toFin
@[extern "lean_usize_of_nat"]

@@ -133,6 +133,9 @@ declare_uint_theorems UInt32
declare_uint_theorems UInt64
declare_uint_theorems USize

theorem USize.toNat_ofNat_of_lt_32 {n : Nat} (h : n < 4294967296) : toNat (ofNat n) = n :=
  toNat_ofNat_of_lt (Nat.lt_of_lt_of_le h le_usize_size)

theorem UInt32.toNat_lt_of_lt {n : UInt32} {m : Nat} (h : m < size) : n < ofNat m → n.toNat < m := by
  simp [lt_def, BitVec.lt_def, UInt32.toNat, toBitVec_eq_of_lt h]

@@ -145,22 +148,22 @@ theorem UInt32.toNat_le_of_le {n : UInt32} {m : Nat} (h : m < size) : n ≤ ofNa
theorem UInt32.le_toNat_of_le {n : UInt32} {m : Nat} (h : m < size) : ofNat m ≤ n → m ≤ n.toNat := by
  simp [le_def, BitVec.le_def, UInt32.toNat, toBitVec_eq_of_lt h]

@[deprecated (since := "2024-06-23")] protected abbrev UInt8.zero_toNat := @UInt8.toNat_zero
@[deprecated (since := "2024-06-23")] protected abbrev UInt8.div_toNat := @UInt8.toNat_div
@[deprecated (since := "2024-06-23")] protected abbrev UInt8.mod_toNat := @UInt8.toNat_mod
@[deprecated UInt8.toNat_zero (since := "2024-06-23")] protected abbrev UInt8.zero_toNat := @UInt8.toNat_zero
@[deprecated UInt8.toNat_div (since := "2024-06-23")] protected abbrev UInt8.div_toNat := @UInt8.toNat_div
@[deprecated UInt8.toNat_mod (since := "2024-06-23")] protected abbrev UInt8.mod_toNat := @UInt8.toNat_mod

@[deprecated (since := "2024-06-23")] protected abbrev UInt16.zero_toNat := @UInt16.toNat_zero
@[deprecated (since := "2024-06-23")] protected abbrev UInt16.div_toNat := @UInt16.toNat_div
@[deprecated (since := "2024-06-23")] protected abbrev UInt16.mod_toNat := @UInt16.toNat_mod
@[deprecated UInt16.toNat_zero (since := "2024-06-23")] protected abbrev UInt16.zero_toNat := @UInt16.toNat_zero
@[deprecated UInt16.toNat_div (since := "2024-06-23")] protected abbrev UInt16.div_toNat := @UInt16.toNat_div
@[deprecated UInt16.toNat_mod (since := "2024-06-23")] protected abbrev UInt16.mod_toNat := @UInt16.toNat_mod

@[deprecated (since := "2024-06-23")] protected abbrev UInt32.zero_toNat := @UInt32.toNat_zero
@[deprecated (since := "2024-06-23")] protected abbrev UInt32.div_toNat := @UInt32.toNat_div
@[deprecated (since := "2024-06-23")] protected abbrev UInt32.mod_toNat := @UInt32.toNat_mod
@[deprecated UInt32.toNat_zero (since := "2024-06-23")] protected abbrev UInt32.zero_toNat := @UInt32.toNat_zero
@[deprecated UInt32.toNat_div (since := "2024-06-23")] protected abbrev UInt32.div_toNat := @UInt32.toNat_div
@[deprecated UInt32.toNat_mod (since := "2024-06-23")] protected abbrev UInt32.mod_toNat := @UInt32.toNat_mod

@[deprecated (since := "2024-06-23")] protected abbrev UInt64.zero_toNat := @UInt64.toNat_zero
@[deprecated (since := "2024-06-23")] protected abbrev UInt64.div_toNat := @UInt64.toNat_div
@[deprecated (since := "2024-06-23")] protected abbrev UInt64.mod_toNat := @UInt64.toNat_mod
@[deprecated UInt64.toNat_zero (since := "2024-06-23")] protected abbrev UInt64.zero_toNat := @UInt64.toNat_zero
@[deprecated UInt64.toNat_div (since := "2024-06-23")] protected abbrev UInt64.div_toNat := @UInt64.toNat_div
@[deprecated UInt64.toNat_mod (since := "2024-06-23")] protected abbrev UInt64.mod_toNat := @UInt64.toNat_mod

@[deprecated (since := "2024-06-23")] protected abbrev USize.zero_toNat := @USize.toNat_zero
@[deprecated (since := "2024-06-23")] protected abbrev USize.div_toNat := @USize.toNat_div
@[deprecated (since := "2024-06-23")] protected abbrev USize.mod_toNat := @USize.toNat_mod
@[deprecated USize.toNat_zero (since := "2024-06-23")] protected abbrev USize.zero_toNat := @USize.toNat_zero
@[deprecated USize.toNat_div (since := "2024-06-23")] protected abbrev USize.div_toNat := @USize.toNat_div
@[deprecated USize.toNat_mod (since := "2024-06-23")] protected abbrev USize.mod_toNat := @USize.toNat_mod

@@ -44,9 +44,6 @@ def elimAsList {motive : Vector α n → Sort u}
    (v : Vector α n) → motive v
  | ⟨⟨a⟩, ha⟩ => mk a ha

/-- The empty vector. -/
@[inline] def empty : Vector α 0 := ⟨.empty, rfl⟩

/-- Make an empty vector with pre-allocated capacity. -/
@[inline] def mkEmpty (capacity : Nat) : Vector α 0 := ⟨.mkEmpty capacity, rfl⟩

@@ -215,7 +212,7 @@ Compares two vectors of the same size using a given boolean relation `r`. `isEqv
`true` if and only if `r v[i] w[i]` is true for all indices `i`.
-/
@[inline] def isEqv (v w : Vector α n) (r : α → α → Bool) : Bool :=
  Array.isEqvAux v.toArray w.toArray (by simp) r 0 (by simp)
  Array.isEqvAux v.toArray w.toArray (by simp) r n (by simp)

instance [BEq α] : BEq (Vector α n) where
  beq a b := isEqv a b (· == ·)
@@ -249,9 +246,7 @@ Finds the first index of a given value in a vector using `==` for comparison. Re
no element of the vector matches the given value.
-/
@[inline] def indexOf? [BEq α] (v : Vector α n) (x : α) : Option (Fin n) :=
  match v.toArray.indexOf? x with
  | some res => some (res.cast v.size_toArray)
  | none => none
  (v.toArray.indexOf? x).map (Fin.cast v.size_toArray)

/-- Returns `true` when `v` is a prefix of the vector `w`. -/
@[inline] def isPrefixOf [BEq α] (v : Vector α m) (w : Vector α n) : Bool :=
172
src/Init/Data/Vector/Lemmas.lean
Normal file
@@ -0,0 +1,172 @@
/-
Copyright (c) 2024 Shreyas Srinivas. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Shreyas Srinivas, Francois Dorais
-/
prelude
import Init.Data.Vector.Basic

/-!
## Vectors
Lemmas about `Vector α n`
-/

namespace Vector

theorem length_toList {α n} (xs : Vector α n) : xs.toList.length = n := by simp

@[simp] theorem getElem_mk {data : Array α} {size : data.size = n} {i : Nat} (h : i < n) :
    (Vector.mk data size)[i] = data[i] := rfl

@[simp] theorem getElem_toArray {α n} (xs : Vector α n) (i : Nat) (h : i < xs.toArray.size) :
    xs.toArray[i] = xs[i]'(by simpa using h) := by
  cases xs
  simp

theorem getElem_toList {α n} (xs : Vector α n) (i : Nat) (h : i < xs.toList.length) :
    xs.toList[i] = xs[i]'(by simpa using h) := by simp

@[simp] theorem getElem_ofFn {α n} (f : Fin n → α) (i : Nat) (h : i < n) :
    (Vector.ofFn f)[i] = f ⟨i, by simpa using h⟩ := by
  simp [ofFn]

/-- The empty vector maps to the empty vector. -/
@[simp]
theorem map_empty (f : α → β) : map f #v[] = #v[] := by
  rw [map, mk.injEq]
  exact Array.map_empty f

theorem toArray_inj : ∀ {v w : Vector α n}, v.toArray = w.toArray → v = w
  | {..}, {..}, rfl => rfl

/-- A vector of length `0` is the empty vector. -/
protected theorem eq_empty (v : Vector α 0) : v = #v[] := by
  apply Vector.toArray_inj
  apply Array.eq_empty_of_size_eq_zero v.2

/--
`Vector.ext` is an extensionality theorem.
Vectors `a` and `b` are equal to each other if their elements are equal for each valid index.
-/
@[ext]
protected theorem ext {a b : Vector α n} (h : (i : Nat) → (_ : i < n) → a[i] = b[i]) : a = b := by
  apply Vector.toArray_inj
  apply Array.ext
  · rw [a.size_toArray, b.size_toArray]
  · intro i hi _
    rw [a.size_toArray] at hi
    exact h i hi

@[simp] theorem push_mk {data : Array α} {size : data.size = n} {x : α} :
    (Vector.mk data size).push x =
      Vector.mk (data.push x) (by simp [size, Nat.succ_eq_add_one]) := rfl

@[simp] theorem pop_mk {data : Array α} {size : data.size = n} :
    (Vector.mk data size).pop = Vector.mk data.pop (by simp [size]) := rfl

@[simp] theorem swap_mk {data : Array α} {size : data.size = n} {i j : Nat} {hi hj} :
    (Vector.mk data size).swap i j hi hj = Vector.mk (data.swap i j) (by simp_all) := rfl

@[simp] theorem getElem_push_last {v : Vector α n} {x : α} : (v.push x)[n] = x := by
  rcases v with ⟨data, rfl⟩
  simp

@[simp] theorem getElem_push_lt {v : Vector α n} {x : α} {i : Nat} (h : i < n) :
    (v.push x)[i] = v[i] := by
  rcases v with ⟨data, rfl⟩
  simp [Array.getElem_push_lt, h]

@[simp] theorem getElem_pop {v : Vector α n} {i : Nat} (h : i < n - 1) : (v.pop)[i] = v[i] := by
  rcases v with ⟨data, rfl⟩
  simp

/--
Variant of `getElem_pop` that will sometimes fire when `getElem_pop` gets stuck because of
defeq issues in the implicit size argument.
-/
@[simp] theorem getElem_pop' (v : Vector α (n + 1)) (i : Nat) (h : i < n + 1 - 1) :
    @getElem (Vector α n) Nat α (fun _ i => i < n) instGetElemNatLt v.pop i h = v[i] :=
  getElem_pop h

@[simp] theorem push_pop_back (v : Vector α (n + 1)) : v.pop.push v.back = v := by
  ext i
  by_cases h : i < n
  · simp [h]
  · replace h : i = v.size - 1 := by rw [size_toArray]; omega
    subst h
    simp [pop, back, back!, ← Array.eq_push_pop_back!_of_size_ne_zero]

theorem push_swap (a : Vector α n) (x : α) {i j : Nat} {hi hj} :
    (a.swap i j hi hj).push x = (a.push x).swap i j := by
  cases a
  simp [Array.push_swap]

/-! ### cast -/

@[simp] theorem cast_mk {n m} (a : Array α) (w : a.size = n) (h : n = m) :
    (Vector.mk a w).cast h = ⟨a, h ▸ w⟩ := by
  simp [Vector.cast]

@[simp] theorem cast_refl {n} (a : Vector α n) : a.cast rfl = a := by
  cases a
  simp

@[simp] theorem toArray_cast {n m} (a : Vector α n) (h : n = m) :
    (a.cast h).toArray = a.toArray := by
  subst h
  simp

theorem cast_inj {n m} (a : Vector α n) (b : Vector α n) (h : n = m) :
    a.cast h = b.cast h ↔ a = b := by
  cases h
  simp

theorem cast_eq_iff {n m} (a : Vector α n) (b : Vector α m) (h : n = m) :
    a.cast h = b ↔ a = b.cast h.symm := by
  cases h
  simp

theorem eq_cast_iff {n m} (a : Vector α n) (b : Vector α m) (h : m = n) :
    a = b.cast h ↔ a.cast h.symm = b := by
  cases h
  simp

/-! ### Decidable quantifiers. -/

theorem forall_zero_iff {P : Vector α 0 → Prop} :
    (∀ v, P v) ↔ P #v[] := by
  constructor
  · intro h
    apply h
  · intro h v
    obtain (rfl : v = #v[]) := (by ext i h; simp at h)
    apply h

theorem forall_cons_iff {P : Vector α (n + 1) → Prop} :
    (∀ v : Vector α (n + 1), P v) ↔ (∀ (x : α) (v : Vector α n), P (v.push x)) := by
  constructor
  · intro h _ _
    apply h
  · intro h v
    have w : v = v.pop.push v.back := by simp
    rw [w]
    apply h

instance instDecidableForallVectorZero (P : Vector α 0 → Prop) :
    ∀ [Decidable (P #v[])], Decidable (∀ v, P v)
  | .isTrue h => .isTrue fun ⟨v, s⟩ => by
    obtain (rfl : v = .empty) := (by ext i h₁ h₂; exact s; cases h₂)
    exact h
  | .isFalse h => .isFalse (fun w => h (w _))

instance instDecidableForallVectorSucc (P : Vector α (n+1) → Prop)
    [Decidable (∀ (x : α) (v : Vector α n), P (v.push x))] : Decidable (∀ v, P v) :=
  decidable_of_iff' (∀ x (v : Vector α n), P (v.push x)) forall_cons_iff

instance instDecidableExistsVectorZero (P : Vector α 0 → Prop) [Decidable (P #v[])] :
    Decidable (∃ v, P v) :=
  decidable_of_iff (¬ ∀ v, ¬ P v) Classical.not_forall_not

instance instDecidableExistsVectorSucc (P : Vector α (n+1) → Prop)
    [Decidable (∀ (x : α) (v : Vector α n), ¬ P (v.push x))] : Decidable (∃ v, P v) :=
  decidable_of_iff (¬ ∀ v, ¬ P v) Classical.not_forall_not
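The decidable-quantifier instances above let `decide` settle statements quantified over all vectors of a fixed small length by recursing down to the length-0 case. A sketch (not part of the diff; it assumes the instances chain with the usual `Decidable` instance for `∀ b : Bool, …`):

```lean
-- Checked by exhausting all four vectors of two booleans.
example : ∀ v : Vector Bool 2, (v[0] && v[1]) = (v[1] && v[0]) := by decide
```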
@@ -206,12 +206,12 @@ instance : GetElem (List α) Nat α fun as i => i < as.length where
@[simp] theorem getElem_cons_zero (a : α) (as : List α) (h : 0 < (a :: as).length) : getElem (a :: as) 0 h = a := by
  rfl

@[deprecated (since := "2024-06-12")] abbrev cons_getElem_zero := @getElem_cons_zero
@[deprecated getElem_cons_zero (since := "2024-06-12")] abbrev cons_getElem_zero := @getElem_cons_zero

@[simp] theorem getElem_cons_succ (a : α) (as : List α) (i : Nat) (h : i + 1 < (a :: as).length) : getElem (a :: as) (i+1) h = getElem as i (Nat.lt_of_succ_lt_succ h) := by
  rfl

@[deprecated (since := "2024-06-12")] abbrev cons_getElem_succ := @getElem_cons_succ
@[deprecated getElem_cons_succ (since := "2024-06-12")] abbrev cons_getElem_succ := @getElem_cons_succ

@[simp] theorem getElem_mem : ∀ {l : List α} {n} (h : n < l.length), l[n]'h ∈ l
  | _ :: _, 0, _ => .head ..
@@ -223,7 +223,8 @@ theorem getElem_cons_drop_succ_eq_drop {as : List α} {i : Nat} (h : i < as.leng
  | _::_, 0 => rfl
  | _::_, i+1 => getElem_cons_drop_succ_eq_drop (i := i) _

@[deprecated (since := "2024-11-05")] abbrev get_drop_eq_drop := @getElem_cons_drop_succ_eq_drop
@[deprecated getElem_cons_drop_succ_eq_drop (since := "2024-11-05")]
abbrev get_drop_eq_drop := @getElem_cons_drop_succ_eq_drop

end List

@@ -251,10 +251,16 @@ def neutralConfig : Simp.Config := {

end Simp

/-- Configuration for which occurrences that match an expression should be rewritten. -/
inductive Occurrences where
  /-- All occurrences should be rewritten. -/
  | all
  /-- A list of indices for which occurrences should be rewritten. -/
  | pos (idxs : List Nat)
  /-- A list of indices for which occurrences should not be rewritten. -/
  | neg (idxs : List Nat)
  deriving Inhabited, BEq

instance : Coe (List Nat) Occurrences := ⟨.pos⟩

end Lean.Meta

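The new `Coe (List Nat) Occurrences` instance is what allows the `rw (occs := .pos [2])` call sites elsewhere in this diff to shorten to `rw (occs := [2])`. A sketch of the shorter form (not part of the diff; the example itself is hypothetical):

```lean
-- Rewrite only the second occurrence of `a`, turning `p a a` into `p a b`.
example (a b : Nat) (p : Nat → Nat → Prop) (h : a = b) (w : p a a) : p a b := by
  rw (occs := [2]) [h] at w
  exact w
```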
@@ -2116,6 +2116,11 @@ theorem usize_size_eq : Or (Eq USize.size 4294967296) (Eq USize.size 18446744073
  | _, Or.inl rfl => Or.inl (of_decide_eq_true rfl)
  | _, Or.inr rfl => Or.inr (of_decide_eq_true rfl)

theorem usize_size_pos : LT.lt 0 USize.size :=
  match USize.size, usize_size_eq with
  | _, Or.inl rfl => of_decide_eq_true rfl
  | _, Or.inr rfl => of_decide_eq_true rfl

/--
|
||||
A `USize` is an unsigned integer with the size of a word
|
||||
for the platform's architecture.
|
||||
@@ -2155,24 +2160,7 @@ def USize.decEq (a b : USize) : Decidable (Eq a b) :=
instance : DecidableEq USize := USize.decEq

instance : Inhabited USize where
  default := USize.ofNatCore 0 (match USize.size, usize_size_eq with
    | _, Or.inl rfl => of_decide_eq_true rfl
    | _, Or.inr rfl => of_decide_eq_true rfl)

/--
Upcast a `Nat` less than `2^32` to a `USize`.
This is lossless because `USize.size` is either `2^32` or `2^64`.
This function is overridden with a native implementation.
-/
@[extern "lean_usize_of_nat"]
def USize.ofNat32 (n : @& Nat) (h : LT.lt n 4294967296) : USize where
  toBitVec :=
    BitVec.ofNatLt n (
      match System.Platform.numBits, System.Platform.numBits_eq with
      | _, Or.inl rfl => h
      | _, Or.inr rfl => Nat.lt_trans h (of_decide_eq_true rfl)
    )

  default := USize.ofNatCore 0 usize_size_pos
/--
A `Nat` denotes a valid unicode codepoint if it is less than `0x110000`, and
it is also not a "surrogate" character (the range `0xd800` to `0xdfff` inclusive).
@@ -3432,25 +3420,6 @@ class Hashable (α : Sort u) where

export Hashable (hash)

/-- Converts a `UInt64` to a `USize` by reducing modulo `USize.size`. -/
@[extern "lean_uint64_to_usize"]
opaque UInt64.toUSize (u : UInt64) : USize

/--
Upcast a `USize` to a `UInt64`.
This is lossless because `USize.size` is either `2^32` or `2^64`.
This function is overridden with a native implementation.
-/
@[extern "lean_usize_to_uint64"]
def USize.toUInt64 (u : USize) : UInt64 where
  toBitVec := BitVec.ofNatLt u.toBitVec.toNat (
    let ⟨⟨n, h⟩⟩ := u
    show LT.lt n _ from
      match System.Platform.numBits, System.Platform.numBits_eq, h with
      | _, Or.inl rfl, h => Nat.lt_trans h (of_decide_eq_true rfl)
      | _, Or.inr rfl, h => h
  )

/-- An opaque hash mixing operation, used to implement hashing for tuples. -/
@[extern "lean_uint64_mix_hash"]
opaque mixHash (u₁ u₂ : UInt64) : UInt64

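`mixHash` is the combinator that derived and hand-written `Hashable` instances use to fold field hashes together. A minimal hand-rolled instance, for a hypothetical `Point` structure:

```lean
structure Point where
  x : Nat
  y : Nat

instance : Hashable Point where
  hash p := mixHash (hash p.x) (hash p.y)
```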
@@ -5,6 +5,7 @@ Authors: Leonardo de Moura, Mario Carneiro
-/
prelude
import Init.Util
import Init.Data.UInt.Basic

namespace ShareCommon
/-

@@ -72,6 +72,21 @@ theorem let_body_congr {α : Sort u} {β : α → Sort v} {b b' : (a : α) →
    (a : α) (h : ∀ x, b x = b' x) : (let x := a; b x) = (let x := a; b' x) :=
  (funext h : b = b') ▸ rfl

theorem letFun_unused {α : Sort u} {β : Sort v} (a : α) {b b' : β} (h : b = b') : @letFun α (fun _ => β) a (fun _ => b) = b' :=
  h

theorem letFun_congr {α : Sort u} {β : Sort v} {a a' : α} {f f' : α → β} (h₁ : a = a') (h₂ : ∀ x, f x = f' x)
    : @letFun α (fun _ => β) a f = @letFun α (fun _ => β) a' f' := by
  rw [h₁, funext h₂]

theorem letFun_body_congr {α : Sort u} {β : Sort v} (a : α) {f f' : α → β} (h : ∀ x, f x = f' x)
    : @letFun α (fun _ => β) a f = @letFun α (fun _ => β) a f' := by
  rw [funext h]

theorem letFun_val_congr {α : Sort u} {β : Sort v} {a a' : α} {f : α → β} (h : a = a')
    : @letFun α (fun _ => β) a f = @letFun α (fun _ => β) a' f := by
  rw [h]

@[congr]
theorem ite_congr {x y u v : α} {s : Decidable b} [Decidable c]
    (h₁ : b = c) (h₂ : c → x = u) (h₃ : ¬ c → y = v) : ite b x y = ite c u v := by

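These lemmas are stated against the `letFun` encoding of `let_fun`. For instance, `letFun_unused` discharges a `let_fun` whose body ignores the bound value; a small sketch (the binder name `_x` is illustrative):

```lean
example (a : Nat) : (let_fun _x := a; 10) = 10 :=
  letFun_unused a rfl
```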
@@ -90,10 +90,14 @@ def withPtrAddr {α : Type u} {β : Type v} (a : α) (k : USize → β) (h : ∀
def Runtime.markMultiThreaded (a : α) : α := a

/--
Marks given value and its object graph closure as persistent. This will remove
reference counter updates but prevent the closure from being deallocated until
the end of the process! It can still be useful to do eagerly when the value
will be marked persistent later anyway and there is available time budget to
mark it now or it would be unnecessarily marked multi-threaded in between. -/
Marks given value and its object graph closure as persistent. This will remove
reference counter updates but prevent the closure from being deallocated until
the end of the process! It can still be useful to do eagerly when the value
will be marked persistent later anyway and there is available time budget to
mark it now or it would be unnecessarily marked multi-threaded in between.

This function is only safe to use on objects (in the full closure) which are
not used concurrently or which are already persistent.
-/
@[extern "lean_runtime_mark_persistent"]
def Runtime.markPersistent (a : α) : α := a
unsafe def Runtime.markPersistent (a : α) : α := a

@@ -31,6 +31,11 @@ register_builtin_option maxHeartbeats : Nat := {
  descr := "maximum amount of heartbeats per command. A heartbeat is number of (small) memory allocations (in thousands), 0 means no limit"
}

register_builtin_option Elab.async : Bool := {
  defValue := false
  descr := "perform elaboration using multiple threads where possible"
}

/--
If the `diagnostics` option is not already set, gives a message explaining this option.
Begins with a `\n`, so an error message can look like `m!"some error occurred{useDiagnosticMsg}"`.

@@ -1150,48 +1150,33 @@ private def throwLValError (e : Expr) (eType : Expr) (msg : MessageData) : TermE
  throwError "{msg}{indentExpr e}\nhas type{indentExpr eType}"

/--
`findMethod? S fName` tries the following for each namespace `S'` in the resolution order for `S`:
- If `env` contains `S' ++ fName`, returns `(S', S' ++ fName)`
- Otherwise if `env` contains private name `prv` for `S' ++ fName`, returns `(S', prv)`
`findMethod? S fName` tries, for each namespace `S'` in the resolution order for `S`, to resolve the name `S'.fName`.
If it resolves to `name`, returns `(S', name)`.
-/
private partial def findMethod? (structName fieldName : Name) : MetaM (Option (Name × Name)) := do
  let env ← getEnv
  let find? structName' : MetaM (Option (Name × Name)) := do
    let fullName := structName' ++ fieldName
    if env.contains fullName then
      return some (structName', fullName)
    let fullNamePrv := mkPrivateName env fullName
    if env.contains fullNamePrv then
      return some (structName', fullNamePrv)
    return none
    -- We do not want to make use of the current namespace for resolution.
    let candidates := ResolveName.resolveGlobalName (← getEnv) Name.anonymous (← getOpenDecls) fullName
      |>.filter (fun (_, fieldList) => fieldList.isEmpty)
      |>.map Prod.fst
    match candidates with
    | [] => return none
    | [fullName'] => return some (structName', fullName')
    | _ => throwError "\
        invalid field notation '{fieldName}', the name '{fullName}' is ambiguous, possible interpretations: \
        {MessageData.joinSep (candidates.map (m!"'{.ofConstName ·}'")) ", "}"
  -- Optimization: the first element of the resolution order is `structName`,
  -- so we can skip computing the resolution order in the common case
  -- of the name resolving in the `structName` namespace.
  find? structName <||> do
    let resolutionOrder ← if isStructure env structName then getStructureResolutionOrder structName else pure #[structName]
    for h : i in [1:resolutionOrder.size] do
      if let some res ← find? resolutionOrder[i] then
    for ns in resolutionOrder[1:resolutionOrder.size] do
      if let some res ← find? ns then
        return res
    return none

/--
Return `some (structName', fullName)` if `structName ++ fieldName` is an alias for `fullName`, and
`fullName` is of the form `structName' ++ fieldName`.

TODO: if there is more than one applicable alias, it returns `none`. We should consider throwing an error or
warning.
-/
private def findMethodAlias? (env : Environment) (structName fieldName : Name) : Option (Name × Name) :=
  let fullName := structName ++ fieldName
  -- We never skip `protected` aliases when resolving dot-notation.
  let aliasesCandidates := getAliases env fullName (skipProtected := false) |>.filterMap fun alias =>
    match alias.eraseSuffix? fieldName with
    | none => none
    | some structName' => some (structName', alias)
  match aliasesCandidates with
  | [r] => some r
  | _ => none

private def throwInvalidFieldNotation (e eType : Expr) : TermElabM α :=
  throwLValError e eType "invalid field notation, type is not of the form (C ...) where C is a constant"

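Concretely, this resolution-order search is what makes generalized field notation work: `a.double` below resolves to `A.double` because `A` heads the type of `a`. A minimal sketch:

```lean
structure A where
  n : Nat

def A.double (a : A) : Nat := 2 * a.n

example (a : A) : a.double = 2 * a.n := rfl
```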
@@ -1223,30 +1208,22 @@ private def resolveLValAux (e : Expr) (eType : Expr) (lval : LVal) : TermElabM L
      throwLValError e eType m!"invalid projection, structure has only {numFields} field(s)"
  | some structName, LVal.fieldName _ fieldName _ _ =>
    let env ← getEnv
    let searchEnv : Unit → TermElabM LValResolution := fun _ => do
      if let some (baseStructName, fullName) ← findMethod? structName (.mkSimple fieldName) then
        return LValResolution.const baseStructName structName fullName
      else if let some (structName', fullName) := findMethodAlias? env structName (.mkSimple fieldName) then
        return LValResolution.const structName' structName' fullName
      else
        throwLValError e eType
          m!"invalid field '{fieldName}', the environment does not contain '{Name.mkStr structName fieldName}'"
    -- search local context first, then environment
    let searchCtx : Unit → TermElabM LValResolution := fun _ => do
      let fullName := Name.mkStr structName fieldName
      for localDecl in (← getLCtx) do
        if localDecl.isAuxDecl then
          if let some localDeclFullName := (← read).auxDeclToFullName.find? localDecl.fvarId then
            if fullName == (privateToUserName? localDeclFullName).getD localDeclFullName then
              /- LVal notation is being used to make a "local" recursive call. -/
              return LValResolution.localRec structName fullName localDecl.toExpr
      searchEnv ()
    if isStructure env structName then
      match findField? env structName (Name.mkSimple fieldName) with
      | some baseStructName => return LValResolution.projFn baseStructName structName (Name.mkSimple fieldName)
      | none => searchCtx ()
    else
      searchCtx ()
    if let some baseStructName := findField? env structName (Name.mkSimple fieldName) then
      return LValResolution.projFn baseStructName structName (Name.mkSimple fieldName)
    -- Search the local context first
    let fullName := Name.mkStr structName fieldName
    for localDecl in (← getLCtx) do
      if localDecl.isAuxDecl then
        if let some localDeclFullName := (← read).auxDeclToFullName.find? localDecl.fvarId then
          if fullName == (privateToUserName? localDeclFullName).getD localDeclFullName then
            /- LVal notation is being used to make a "local" recursive call. -/
            return LValResolution.localRec structName fullName localDecl.toExpr
    -- Then search the environment
    if let some (baseStructName, fullName) ← findMethod? structName (.mkSimple fieldName) then
      return LValResolution.const baseStructName structName fullName
    throwLValError e eType
      m!"invalid field '{fieldName}', the environment does not contain '{Name.mkStr structName fieldName}'"
  | none, LVal.fieldName _ _ (some suffix) _ =>
    if e.isConst then
      throwUnknownConstant (e.constName! ++ suffix)

@@ -1326,7 +1303,7 @@ Otherwise, if there isn't another parameter with the same name, we add `e` to `n

Remark: `fullName` is the name of the resolved "field" access function. It is used for reporting errors
-/
private partial def addLValArg (baseName : Name) (fullName : Name) (e : Expr) (args : Array Arg) (namedArgs : Array NamedArg) (f : Expr) :
private partial def addLValArg (baseName : Name) (fullName : Name) (e : Expr) (args : Array Arg) (namedArgs : Array NamedArg) (f : Expr) (explicit : Bool) :
    MetaM (Array Arg × Array NamedArg) := do
  withoutModifyingState <| go f (← inferType f) 0 namedArgs (namedArgs.map (·.name)) true
where
@@ -1351,11 +1328,11 @@ where
        /- If there is a named argument with name `xDecl.userName`, then it is accounted for and we can't make use of it. -/
        remainingNamedArgs := remainingNamedArgs.eraseIdx idx
      else
        if (← typeMatchesBaseName xDecl.type baseName) then
          /- We found a type of the form (baseName ...).
             First, we check if the current argument is an explicit one,
        if ← typeMatchesBaseName xDecl.type baseName then
          /- We found a type of the form (baseName ...), or we found the first explicit argument in useFirstExplicit mode.
             First, we check if the current argument is one that can be used positionally,
             and if the current explicit position "fits" at `args` (i.e., it must be ≤ arg.size) -/
          if h : argIdx ≤ args.size ∧ bInfo.isExplicit then
          if h : argIdx ≤ args.size ∧ (explicit || bInfo.isExplicit) then
            /- We can insert `e` as an explicit argument -/
            return (args.insertIdx argIdx (Arg.expr e), namedArgs)
          else
@@ -1363,13 +1340,13 @@ where
             if there isn't an argument with the same name occurring before it. -/
          if !allowNamed || unusableNamedArgs.contains xDecl.userName then
            throwError "\
              invalid field notation, function '{fullName}' has argument with the expected type\
              invalid field notation, function '{.ofConstName fullName}' has argument with the expected type\
              {indentExpr xDecl.type}\n\
              but it cannot be used"
          else
            return (args, namedArgs.push { name := xDecl.userName, val := Arg.expr e })
        /- Advance `argIdx` and update seen named arguments. -/
        if bInfo.isExplicit then
        if explicit || bInfo.isExplicit then
          argIdx := argIdx + 1
        unusableNamedArgs := unusableNamedArgs.push xDecl.userName
    /- If named arguments aren't allowed, then it must still be possible to pass the value as an explicit argument.
@@ -1380,7 +1357,7 @@ where
    if let some f' ← coerceToFunction? (mkAppN f xs) then
      return ← go f' (← inferType f') argIdx remainingNamedArgs unusableNamedArgs false
    throwError "\
      invalid field notation, function '{fullName}' does not have argument with type ({baseName} ...) that can be used, \
      invalid field notation, function '{.ofConstName fullName}' does not have argument with type ({.ofConstName baseName} ...) that can be used, \
      it must be explicit or implicit with a unique name"

/-- Adds the `TermInfo` for the field of a projection. See `Lean.Parser.Term.identProjKind`. -/
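`addLValArg` is what inserts the receiver into the argument list: the value before the dot is passed at the first usable parameter whose type matches the base name, which need not be the first parameter overall. A sketch with a hypothetical helper:

```lean
def Nat.addTwice (k n : Nat) : Nat := n + k + k

-- `5` is inserted at `k`, the first explicit parameter of type `Nat`;
-- the remaining positional argument `1` goes to `n`, giving 1 + 5 + 5.
example : (5 : Nat).addTwice 1 = 11 := rfl
```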
@@ -1426,7 +1403,7 @@ private def elabAppLValsAux (namedArgs : Array NamedArg) (args : Array Arg) (exp
      let projFn ← mkConst constName
      let projFn ← addProjTermInfo lval.getRef projFn
      if lvals.isEmpty then
        let (args, namedArgs) ← addLValArg baseStructName constName f args namedArgs projFn
        let (args, namedArgs) ← addLValArg baseStructName constName f args namedArgs projFn explicit
        elabAppArgs projFn namedArgs args expectedType? explicit ellipsis
      else
        let f ← elabAppArgs projFn #[] #[Arg.expr f] (expectedType? := none) (explicit := false) (ellipsis := false)
@@ -1434,7 +1411,7 @@ private def elabAppLValsAux (namedArgs : Array NamedArg) (args : Array Arg) (exp
    | LValResolution.localRec baseName fullName fvar =>
      let fvar ← addProjTermInfo lval.getRef fvar
      if lvals.isEmpty then
        let (args, namedArgs) ← addLValArg baseName fullName f args namedArgs fvar
        let (args, namedArgs) ← addLValArg baseName fullName f args namedArgs fvar explicit
        elabAppArgs fvar namedArgs args expectedType? explicit ellipsis
      else
        let f ← elabAppArgs fvar #[] #[Arg.expr f] (expectedType? := none) (explicit := false) (ellipsis := false)

@@ -1079,7 +1079,9 @@ def synthesizeInstMVarCore (instMVar : MVarId) (maxResultSize? : Option Nat := n
        let oldValType ← inferType oldVal
        let valType ← inferType val
        unless (← isDefEq oldValType valType) do
          let (oldValType, valType) ← addPPExplicitToExposeDiff oldValType valType
          throwError "synthesized type class instance type is not definitionally equal to expected type, synthesized{indentExpr val}\nhas type{indentExpr valType}\nexpected{indentExpr oldValType}{extraErrorMsg}"
        let (oldVal, val) ← addPPExplicitToExposeDiff oldVal val
        throwError "synthesized type class instance is not definitionally equal to expression inferred by typing rules, synthesized{indentExpr val}\ninferred{indentExpr oldVal}{extraErrorMsg}"
      else
        unless (← isDefEq (mkMVar instMVar) val) do

@@ -897,13 +897,18 @@ def finalizeImport (s : ImportState) (imports : Array Import) (opts : Options) (
       initialized constant. We have seen significant savings in `open Mathlib`
       timings, where we have both a big environment and interpreted environment
       extensions, from this. There is no significant extra cost to calling
       `markPersistent` multiple times like this. -/
    env := Runtime.markPersistent env
       `markPersistent` multiple times like this.

       Safety: There are no concurrent accesses to `env` at this point. -/
    env := unsafe Runtime.markPersistent env
  env ← finalizePersistentExtensions env s.moduleData opts
  if leakEnv then
    /- Ensure the final environment including environment extension states is
       marked persistent as documented. -/
    env := Runtime.markPersistent env
       marked persistent as documented.

       Safety: There are no concurrent accesses to `env` at this point, assuming
       extensions' `addImportFn`s did not spawn any unbound tasks. -/
    env := unsafe Runtime.markPersistent env
  pure env

@[export lean_import_modules]

@@ -46,17 +46,33 @@ delete the space after private, it becomes a syntactically correct structure wit
privateaxiom! So clearly, because of uses of atomic in the grammar, an edit can affect a command
syntax tree even across multiple tokens.

Now, what we do today, and have done since Lean 3, is to always reparse the last command completely
preceding the edit location. If its syntax tree is unchanged, we preserve its data and reprocess all
following commands only, otherwise we reprocess it fully as well. This seems to have worked well so
far but it does seem a bit arbitrary given that even if it works for our current grammar, it can
certainly be extended in ways that break the assumption.
What we did in Lean 3 was to always reparse the last command completely preceding the edit location.
If its syntax tree is unchanged, we preserve its data and reprocess all following commands only,
otherwise we reprocess it fully as well. This worked well but did seem a bit arbitrary given that
even if it works for a grammar at some point, it can certainly be extended in ways that break the
assumption.

With grammar changes in Lean 4, we found that the following example indeed breaks this assumption:
```
structure Signature where
  /-- a docstring -/
  Sort : Type
     --^ insert: "s"
```
As the keyword `Sort` is not a valid start of a structure field and the parser backtracks across the
docstring in that case, this is parsed as the complete command `structure Signature where` followed
by the partial command `/-- a docstring -/ <missing>`. If we insert an `s` after the `t`, the last
command completely preceding the edit location is the partial command containing the docstring. Thus
we need to go up two commands to ensure we reparse the `structure` command as well. This kind of
nested docstring is the only part of the grammar to our knowledge that requires going up at least
two commands; as we never backtrack across more than one docstring, going up two commands should
also be sufficient.

Finally, a more principled and generic solution would be to invalidate a syntax tree when
the parser has reached the edit location during parsing. If it did not, surely the edit cannot have
an effect on the syntax tree in question. Sadly such a "high-water mark" parser position does not
exist currently and likely it could at best be approximated by e.g. "furthest `tokenFn` parse". Thus
we remain at "go two commands up" at this point.
we remain at "go up two commands" at this point.
-/

/-!

@@ -340,11 +356,12 @@ where
    if let some old := old? then
      if let some oldSuccess := old.result? then
        if let some (some processed) ← old.processedResult.get? then
          -- ...and the edit location is after the next command (see note [Incremental Parsing])...
          -- ...and the edit is after the second-next command (see note [Incremental Parsing])...
          if let some nextCom ← processed.firstCmdSnap.get? then
            if (← isBeforeEditPos nextCom.parserState.pos) then
              -- ...go immediately to next snapshot
              return (← unchanged old old.stx oldSuccess.parserState)
          if let some nextNextCom ← processed.firstCmdSnap.get? then
            if (← isBeforeEditPos nextNextCom.parserState.pos) then
              -- ...go immediately to next snapshot
              return (← unchanged old old.stx oldSuccess.parserState)

    withHeaderExceptions ({ · with
      ictx, stx := .missing, result? := none, cancelTk? := none }) do
@@ -437,11 +454,6 @@ where
        traceState
      }
      let prom ← IO.Promise.new
      -- The speedup of these `markPersistent`s is negligible but they help in making unexpected
      -- `inc_ref_cold`s more visible
      let parserState := Runtime.markPersistent parserState
      let cmdState := Runtime.markPersistent cmdState
      let ctx := Runtime.markPersistent ctx
      parseCmd none parserState cmdState prom (sync := true) ctx
      return {
        diagnostics
@@ -473,11 +485,12 @@ where
          prom.resolve <| { old with nextCmdSnap? := some { range? := none, task := newProm.result } }
        else prom.resolve old -- terminal command, we're done!

    -- fast path, do not even start new task for this snapshot
    -- fast path, do not even start new task for this snapshot (see [Incremental Parsing])
    if let some old := old? then
      if let some nextCom ← old.nextCmdSnap?.bindM (·.get?) then
        if (← isBeforeEditPos nextCom.parserState.pos) then
          return (← unchanged old old.parserState)
        if let some nextNextCom ← nextCom.nextCmdSnap?.bindM (·.get?) then
          if (← isBeforeEditPos nextNextCom.parserState.pos) then
            return (← unchanged old old.parserState)

    let beginPos := parserState.pos
    let scope := cmdState.scopes.head!
@@ -630,16 +643,24 @@ where
        -- definitely resolve eventually
        snap.new.resolve <| .ofTyped { diagnostics := .empty : SnapshotLeaf }

      let mut infoTree := cmdState.infoState.trees[0]!
      let mut infoTree : InfoTree := cmdState.infoState.trees[0]!
      let cmdline := internal.cmdlineSnapshots.get scope.opts && !Parser.isTerminalCommand stx
      if cmdline then
        infoTree := Runtime.markPersistent infoTree
      if cmdline && !Elab.async.get scope.opts then
        /-
          Safety: `infoTree` was created by `elabCommandTopLevel`. Thus it
          should not have any concurrent accesses if we are on the cmdline and
          async elaboration is disabled.
        -/
        -- TODO: we should likely remove this call when `Elab.async` is turned on
        -- by default
        infoTree := unsafe Runtime.markPersistent infoTree
      finishedPromise.resolve {
        diagnostics := (← Snapshot.Diagnostics.ofMessageLog cmdState.messages)
        infoTree? := infoTree
        traces := cmdState.traceState
        cmdState := if cmdline then {
          env := Runtime.markPersistent cmdState.env
          /- Safety: as above -/
          env := unsafe Runtime.markPersistent cmdState.env
          maxRecDepth := 0
        } else cmdState
      }

@@ -31,6 +31,10 @@ builtin_initialize deprecatedAttr : ParametricAttribute DeprecationEntry ←
      let newName? ← id?.mapM Elab.realizeGlobalConstNoOverloadWithInfo
      let text? := text?.map TSyntax.getString
      let since? := since?.map TSyntax.getString
      if id?.isNone && text?.isNone then
        logWarning "`[deprecated]` attribute should specify either a new name or a deprecation message"
      if since?.isNone then
        logWarning "`[deprecated]` attribute should specify the date or library version at which the deprecation was introduced, using `(since := \"...\")`"
      return { newName?, text?, since? }
  }

@@ -46,7 +50,9 @@ def getDeprecatedNewName (env : Environment) (declName : Name) : Option Name :=
def checkDeprecated [Monad m] [MonadEnv m] [MonadLog m] [AddMessageContext m] [MonadOptions m] (declName : Name) : m Unit := do
  if getLinterValue linter.deprecated (← getOptions) then
    let some attr := deprecatedAttr.getParam? (← getEnv) declName | pure ()
    logWarning <| .tagged ``deprecatedAttr <| attr.text?.getD <|
      match attr.newName? with
      | none => s!"`{declName}` has been deprecated"
      | some newName => s!"`{declName}` has been deprecated, use `{newName}` instead"
    logWarning <| .tagged ``deprecatedAttr <|
      s!"`{declName}` has been deprecated" ++ match attr.text? with
      | some text => s!": {text}"
      | none => match attr.newName? with
        | some newName => s!": use `{newName}` instead"
        | none => ""

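Usage of the attribute matching the warnings registered above (both a replacement name and a `since` date supplied); the declarations are hypothetical:

```lean
def List.newLength (xs : List α) : Nat := xs.length

@[deprecated List.newLength (since := "2024-11-05")]
def List.oldLength (xs : List α) : Nat := xs.length
```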
@@ -5,14 +5,40 @@ Authors: Mac Malone
-/
prelude
import Init.System.IO

namespace Lean

/--
Dynamically loads a shared library so that its symbols can be used by
the Lean interpreter (e.g., for interpreting `@[extern]` declarations).
Equivalent to passing `--load-dynlib=lib` to `lean`.
Equivalent to passing `--load-dynlib=path` to `lean`.

Note that Lean never unloads libraries.
**Lean never unloads libraries.** Attempting to load a library that defines
symbols shared with a previously loaded library (including itself) will error.
If multiple libraries share common symbols, those symbols should be linked
and loaded as separate libraries.
-/
@[extern "lean_load_dynlib"]
opaque loadDynlib (path : @& System.FilePath) : IO Unit

/--
Loads a Lean plugin and runs its initializers.

A Lean plugin is a shared library built from a Lean module.
This means it has an `initialize_<module-name>` symbol that runs the
module's initializers (including its imports' initializers). Initializers
are declared with the `initialize` or `builtin_initialize` commands.

This is similar to passing `--plugin=path` to `lean`.
Lean environment initializers, such as definitions calling
`registerEnvExtension`, also require `Lean.initializing` to be `true`.
To enable them, use `loadPlugin` within a `withImporting` block. This will
set `Lean.initializing` (but not `IO.initializing`).

**Lean never unloads plugins.** Attempting to load a plugin that defines
symbols shared with a previously loaded plugin (including itself) will error.
If multiple plugins share common symbols (e.g., imports), those symbols
should be linked and loaded separately.
-/
@[extern "lean_load_plugin"]
opaque loadPlugin (path : @& System.FilePath) : IO Unit

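A sketch of how these entry points are called; the library path is hypothetical, and `loadDynlib` raises an `IO` error if the file is missing or re-exports symbols that are already loaded:

```lean
def loadFFI : IO Unit := do
  -- hypothetical path to a shared library providing `@[extern]` symbols
  Lean.loadDynlib "build/lib/libMyFFI.so"
```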
@@ -571,10 +571,127 @@ def congr (e : Expr) : SimpM Result := do
|
||||
else
|
||||
congrDefault e
|
||||
|
||||
/--
|
||||
Returns `true` if `e` is of the form `@letFun _ (fun _ => β) _ _`
|
||||
|
||||
`β` must not contain loose bound variables. Recall that `simp` does not have support for `let_fun`s
|
||||
where resulting type depends on the `let`-value. Example:
|
||||
```
|
||||
let_fun n := 10;
|
||||
BitVec.zero n
|
||||
```
|
||||
-/
|
||||
def isNonDepLetFun (e : Expr) : Bool :=
|
||||
let_expr letFun _ beta _ body := e | false
|
||||
beta.isLambda && !beta.bindingBody!.hasLooseBVars && body.isLambda
|
||||
|
||||
/--
|
||||
Auxiliary structure used to represent the return value of `simpNonDepLetFun.go`.
|
||||
-/
|
||||
private structure SimpLetFunResult where
|
||||
/--
|
||||
The simplified expression. Note that is may contain loose bound variables.
|
||||
`simpNonDepLetFun.go` attempts to minimize the quadratic overhead imposed
|
||||
by the locally nameless discipline in a sequence of `let_fun` declarations.
|
||||
-/
|
||||
expr : Expr
|
||||
/--
|
||||
The proof that the simplified expression is equal to the input one.
|
||||
It may containt loose bound variables. See `expr` field.
|
||||
-/
|
||||
proof : Expr
|
||||
/--
|
||||
`modified := false` iff `expr` is structurally equal to the input expression.
|
||||
-/
|
||||
modified : Bool
|
||||
|
||||
/--
|
||||
Simplifies a sequence of `let_fun` declarations.
|
||||
It attempts to minimize the quadratic overhead imposed by
|
||||
the locally nameless discipline.
|
||||
-/
|
||||
partial def simpNonDepLetFun (e : Expr) : SimpM Result := do
|
||||
let rec go (xs : Array Expr) (e : Expr) : SimpM SimpLetFunResult := do
|
||||
/-
|
||||
Helper function applied when `e` is not a `let_fun` or
|
||||
is a non supported `let_fun` (e.g., the resulting type depends on the value).
|
||||
-/
|
||||
let stop : SimpM SimpLetFunResult := do
|
||||
let e := e.instantiateRev xs
|
||||
let r ← simp e
|
||||
      return { expr := r.expr.abstract xs, proof := (← r.getProof).abstract xs, modified := r.expr != e }
    let_expr f@letFun alpha betaFun val body := e | stop
    let us := f.constLevels!
    let [_, v] := us | stop
    /-
    Recall that `let_fun x : α := val; e[x]` is encoded at
    ```
    @letFun α (fun x : α => β[x]) val (fun x : α => e[x])
    ```
    `betaFun` is `(fun x : α => β[x])`. If `β[x]` does not have loose bound variables then the resulting type
    does not depend on the value since it does not depend on `x`.

    We also check whether `alpha` does not depend on the previous `let_fun`s in the sequence.
    This check is just to make the code simpler. It is not common to have a type depending on the value of a previous `let_fun`.
    -/
    if alpha.hasLooseBVars || !betaFun.isLambda || !body.isLambda || betaFun.bindingBody!.hasLooseBVars then
      stop
    else if !body.bindingBody!.hasLooseBVar 0 then
      /-
      Redundant `let_fun`. The simplifier will remove it.
      Remark: the `hasLooseBVar` check here may introduce a quadratic overhead in the worst case.
      If this becomes an issue in practice, we may use a separate step for removing unused variables.

      Remark: note that we do **not** simplify the value in this case.
      -/
      let x := mkConst `__no_used_dummy__ -- dummy value
      let { expr, proof, .. } ← go (xs.push x) body.bindingBody!
      let proof := mkApp6 (mkConst ``letFun_unused us) alpha betaFun.bindingBody! val body.bindingBody! expr proof
      return { expr, proof, modified := true }
    else
      let beta := betaFun.bindingBody!
      let valInst := val.instantiateRev xs
      let valResult ← simp valInst
      withLocalDecl body.bindingName! body.bindingInfo! alpha fun x => do
        let valIsNew := valResult.expr != valInst
        let { expr, proof, modified := bodyIsNew } ← go (xs.push x) body.bindingBody!
        if !valIsNew && !bodyIsNew then
          /-
          Value and body were not simplified. We just return `e` and a new refl proof.
          We must use the low-level `Expr` APIs because `e` may contain loose bound variables.
          -/
          let proof := mkApp2 (mkConst ``Eq.refl [v]) beta e
          return { expr := e, proof, modified := false }
        else
          let body' := mkLambda body.bindingName! body.bindingInfo! alpha expr
          let val' := valResult.expr.abstract xs
          let e' := mkApp4 f alpha betaFun val' body'
          if valIsNew && bodyIsNew then
            -- Value and body were simplified
            let valProof := (← valResult.getProof).abstract xs
            let proof := mkApp8 (mkConst ``letFun_congr us) alpha beta val val' body body' valProof (mkLambda body.bindingName! body.bindingInfo! alpha proof)
            return { expr := e', proof, modified := true }
          else if valIsNew then
            -- Only the value was simplified.
            let valProof := (← valResult.getProof).abstract xs
            let proof := mkApp6 (mkConst ``letFun_val_congr us) alpha beta val val' body valProof
            return { expr := e', proof, modified := true }
          else
            -- Only the body was simplified.
            let proof := mkApp6 (mkConst ``letFun_body_congr us) alpha beta val body body' (mkLambda body.bindingName! body.bindingInfo! alpha proof)
            return { expr := e', proof, modified := true }
  let { expr, proof, modified } ← go #[] e
  if !modified then
    return { expr := e }
  else
    return { expr, proof? := proof }

def simpApp (e : Expr) : SimpM Result := do
  if isOfNatNatLit e || isOfScientificLit e || isCharLit e then
    -- Recall that we fold "orphan" kernel Nat literals `n` into `OfNat.ofNat n`
    return { expr := e }
  else if isNonDepLetFun e then
    simpNonDepLetFun e
  else
    congr e

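The `letFun` encoding described in the comment above can also be observed at the term level. A minimal sketch (assuming the `let_fun` term syntax and the `letFun` definition from core Lean; `rfl` closes the goal because `letFun` unfolds definitionally):

```lean
-- `let_fun x : Nat := 1; x + 1` elaborates to
-- `@letFun Nat (fun _ => Nat) 1 (fun x => x + 1)`.
-- This is the non-dependent case handled by `simpNonDepLetFun`:
-- the body's type (`Nat`) does not mention `x`.
example : (let_fun x : Nat := 1; x + 1) = 2 := rfl
```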
@@ -291,11 +291,11 @@ but will later become a primitive operation.
   ⟨(Raw₀.Const.insertMany ⟨m.1, m.2.size_buckets_pos⟩ l).1,
    (Raw₀.Const.insertMany ⟨m.1, m.2.size_buckets_pos⟩ l).2 _ Raw.WF.insert₀ m.2⟩
 
-@[inline, inherit_doc Raw.Const.insertManyUnit] def Const.insertManyUnit
+@[inline, inherit_doc Raw.Const.insertManyIfNewUnit] def Const.insertManyIfNewUnit
     {ρ : Type w} [ForIn Id ρ α] (m : DHashMap α (fun _ => Unit)) (l : ρ) :
     DHashMap α (fun _ => Unit) :=
-  ⟨(Raw₀.Const.insertManyUnit ⟨m.1, m.2.size_buckets_pos⟩ l).1,
-   (Raw₀.Const.insertManyUnit ⟨m.1, m.2.size_buckets_pos⟩ l).2 _ Raw.WF.insert₀ m.2⟩
+  ⟨(Raw₀.Const.insertManyIfNewUnit ⟨m.1, m.2.size_buckets_pos⟩ l).1,
+   (Raw₀.Const.insertManyIfNewUnit ⟨m.1, m.2.size_buckets_pos⟩ l).2 _ Raw.WF.insertIfNew₀ m.2⟩
 
 @[inline, inherit_doc Raw.ofList] def ofList [BEq α] [Hashable α] (l : List ((a : α) × β a)) :
     DHashMap α β :=
@@ -313,11 +313,11 @@ instance [BEq α] [Hashable α] : Union (DHashMap α β) := ⟨union⟩
 
 @[inline, inherit_doc Raw.Const.unitOfList] def Const.unitOfList [BEq α] [Hashable α] (l : List α) :
     DHashMap α (fun _ => Unit) :=
-  Const.insertManyUnit ∅ l
+  Const.insertManyIfNewUnit ∅ l
 
 @[inline, inherit_doc Raw.Const.unitOfArray] def Const.unitOfArray [BEq α] [Hashable α] (l : Array α) :
     DHashMap α (fun _ => Unit) :=
-  Const.insertManyUnit ∅ l
+  Const.insertManyIfNewUnit ∅ l
 
 @[inherit_doc Raw.Internal.numBuckets] def Internal.numBuckets
     (m : DHashMap α β) : Nat :=
 
@@ -407,14 +407,14 @@ variable {β : Type v}
   return r
 
 /-- Internal implementation detail of the hash map -/
-@[inline] def Const.insertManyUnit {ρ : Type w} [ForIn Id ρ α] [BEq α] [Hashable α]
+@[inline] def Const.insertManyIfNewUnit {ρ : Type w} [ForIn Id ρ α] [BEq α] [Hashable α]
     (m : Raw₀ α (fun _ => Unit)) (l : ρ) :
     { m' : Raw₀ α (fun _ => Unit) // ∀ (P : Raw₀ α (fun _ => Unit) → Prop),
-      (∀ {m'' a b}, P m'' → P (m''.insert a b)) → P m → P m' } := Id.run do
+      (∀ {m'' a b}, P m'' → P (m''.insertIfNew a b)) → P m → P m' } := Id.run do
   let mut r : { m' : Raw₀ α (fun _ => Unit) // ∀ (P : Raw₀ α (fun _ => Unit) → Prop),
-      (∀ {m'' a b}, P m'' → P (m''.insert a b)) → P m → P m' } := ⟨m, fun _ _ => id⟩
+      (∀ {m'' a b}, P m'' → P (m''.insertIfNew a b)) → P m → P m' } := ⟨m, fun _ _ => id⟩
   for a in l do
-    r := ⟨r.1.insert a (), fun _ h hm => h (r.2 _ h hm)⟩
+    r := ⟨r.1.insertIfNew a (), fun _ h hm => h (r.2 _ h hm)⟩
   return r
 
 end
 
@@ -738,9 +738,9 @@ theorem Const.wfImp_insertMany {β : Type v} [BEq α] [Hashable α] [EquivBEq α
     {l : ρ} (h : Raw.WFImp m.1) : Raw.WFImp (Const.insertMany m l).1.1 :=
   Raw.WF.out ((Const.insertMany m l).2 _ Raw.WF.insert₀ (.wf m.2 h))
 
-theorem Const.wfImp_insertManyUnit [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable α] {ρ : Type w}
+theorem Const.wfImp_insertManyIfNewUnit [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable α] {ρ : Type w}
     [ForIn Id ρ α] {m : Raw₀ α (fun _ => Unit)} {l : ρ} (h : Raw.WFImp m.1) :
-    Raw.WFImp (Const.insertManyUnit m l).1.1 :=
-  Raw.WF.out ((Const.insertManyUnit m l).2 _ Raw.WF.insert₀ (.wf m.2 h))
+    Raw.WFImp (Const.insertManyIfNewUnit m l).1.1 :=
+  Raw.WF.out ((Const.insertManyIfNewUnit m l).2 _ Raw.WF.insertIfNew₀ (.wf m.2 h))
 
 end Raw₀
 
@@ -58,7 +58,12 @@ instance : Inhabited (Raw α β) where
   default := ∅
 
 /--
-Inserts the given mapping into the map, replacing an existing mapping for the key if there is one.
+Inserts the given mapping into the map. If there is already a mapping for the given key, then both
+key and value will be replaced.
+
+Note: this replacement behavior is true for `HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw`.
+The `insert` function on `HashSet` and `HashSet.Raw` behaves differently: it will return the set
+unchanged if a matching key is already present.
 -/
 @[inline] def insert [BEq α] [Hashable α] (m : Raw α β) (a : α) (b : β a) : Raw α β :=
   if h : 0 < m.buckets.size then
@@ -373,6 +378,10 @@ instance : ForIn m (Raw α β) ((a : α) × β a) where
 /--
 Inserts multiple mappings into the hash map by iterating over the given collection and calling
 `insert`. If the same key appears multiple times, the last occurrence takes precedence.
+
+Note: this precedence behavior is true for `HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw`.
+The `insertMany` function on `HashSet` and `HashSet.Raw` behaves differently: it will prefer the first
+appearance.
 -/
 @[inline] def insertMany [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ ((a : α) × β a)]
     (m : Raw α β) (l : ρ) : Raw α β :=
@@ -388,15 +397,16 @@ Inserts multiple mappings into the hash map by iterating over the given collecti
 
 /--
 Inserts multiple keys with the value `()` into the hash map by iterating over the given collection
-and calling `insert`. If the same key appears multiple times, the last occurrence takes precedence.
+and calling `insertIfNew`. If the same key appears multiple times, the first occurrence takes
+precedence.
 
 This is mainly useful to implement `HashSet.insertMany`, so if you are considering using this,
 `HashSet` or `HashSet.Raw` might be a better fit for you.
 -/
-@[inline] def Const.insertManyUnit [BEq α] [Hashable α] {ρ : Type w}
+@[inline] def Const.insertManyIfNewUnit [BEq α] [Hashable α] {ρ : Type w}
     [ForIn Id ρ α] (m : Raw α (fun _ => Unit)) (l : ρ) : Raw α (fun _ => Unit) :=
   if h : 0 < m.buckets.size then
-    (Raw₀.Const.insertManyUnit ⟨m, h⟩ l).1
+    (Raw₀.Const.insertManyIfNewUnit ⟨m, h⟩ l).1
   else m -- will never happen for well-formed inputs
 
 /-- Creates a hash map from a list of mappings. If the same key appears multiple times, the last
@@ -420,7 +430,7 @@ This is mainly useful to implement `HashSet.ofList`, so if you are considering u
 `HashSet` or `HashSet.Raw` might be a better fit for you. -/
 @[inline] def Const.unitOfList [BEq α] [Hashable α] (l : List α) :
     Raw α (fun _ => Unit) :=
-  Const.insertManyUnit ∅ l
+  Const.insertManyIfNewUnit ∅ l
 
 /-- Creates a hash map from an array of keys, associating the value `()` with each key.
 
@@ -428,7 +438,7 @@ This is mainly useful to implement `HashSet.ofArray`, so if you are considering
 `HashSet` or `HashSet.Raw` might be a better fit for you. -/
 @[inline] def Const.unitOfArray [BEq α] [Hashable α] (l : Array α) :
     Raw α (fun _ => Unit) :=
-  Const.insertManyUnit ∅ l
+  Const.insertManyIfNewUnit ∅ l
 
 /--
 Returns the number of buckets in the internal representation of the hash map. This function may be
@@ -547,10 +557,10 @@ theorem WF.Const.insertMany {β : Type v} [BEq α] [Hashable α] {ρ : Type w} [
   simpa [Raw.Const.insertMany, h.size_buckets_pos] using
     (Raw₀.Const.insertMany ⟨m, h.size_buckets_pos⟩ l).2 _ WF.insert₀ h
 
-theorem WF.Const.insertManyUnit [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ α]
-    {m : Raw α (fun _ => Unit)} {l : ρ} (h : m.WF) : (Const.insertManyUnit m l).WF := by
-  simpa [Raw.Const.insertManyUnit, h.size_buckets_pos] using
-    (Raw₀.Const.insertManyUnit ⟨m, h.size_buckets_pos⟩ l).2 _ WF.insert₀ h
+theorem WF.Const.insertManyIfNewUnit [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ α]
+    {m : Raw α (fun _ => Unit)} {l : ρ} (h : m.WF) : (Const.insertManyIfNewUnit m l).WF := by
+  simpa [Raw.Const.insertManyIfNewUnit, h.size_buckets_pos] using
+    (Raw₀.Const.insertManyIfNewUnit ⟨m, h.size_buckets_pos⟩ l).2 _ WF.insertIfNew₀ h
 
 theorem WF.ofList [BEq α] [Hashable α] {l : List ((a : α) × β a)} :
     (ofList l : Raw α β).WF :=
@@ -562,7 +572,7 @@ theorem WF.Const.ofList {β : Type v} [BEq α] [Hashable α] {l : List (α × β
 
 theorem WF.Const.unitOfList [BEq α] [Hashable α] {l : List α} :
     (Const.unitOfList l : Raw α (fun _ => Unit)).WF :=
-  Const.insertManyUnit WF.empty
+  Const.insertManyIfNewUnit WF.empty
 
 end WF
 
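The contrast drawn in the doc-strings above can be sketched against the non-`Raw` API. A hedged example (it assumes `Std.HashMap` with `get?` and `insertIfNew`, as in the `Std.Data` hierarchy these files belong to):

```lean
import Std.Data.HashMap
open Std

def m : HashMap String Nat := (∅ : HashMap String Nat).insert "k" 1

-- `insert` replaces the existing mapping for the key:
#eval (m.insert "k" 2).get? "k"       -- some 2
-- `insertIfNew` keeps the existing mapping:
#eval (m.insertIfNew "k" 2).get? "k"  -- some 1
```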
@@ -265,9 +265,9 @@ instance [BEq α] [Hashable α] {m : Type w → Type w} : ForIn m (HashMap α β
     [ForIn Id ρ (α × β)] (m : HashMap α β) (l : ρ) : HashMap α β :=
   ⟨DHashMap.Const.insertMany m.inner l⟩
 
-@[inline, inherit_doc DHashMap.Const.insertManyUnit] def insertManyUnit
+@[inline, inherit_doc DHashMap.Const.insertManyIfNewUnit] def insertManyIfNewUnit
     {ρ : Type w} [ForIn Id ρ α] (m : HashMap α Unit) (l : ρ) : HashMap α Unit :=
-  ⟨DHashMap.Const.insertManyUnit m.inner l⟩
+  ⟨DHashMap.Const.insertManyIfNewUnit m.inner l⟩
 
 @[inline, inherit_doc DHashMap.Const.ofList] def ofList [BEq α] [Hashable α] (l : List (α × β)) :
     HashMap α β :=
 
@@ -694,11 +694,11 @@ theorem contains_keys [EquivBEq α] [LawfulHashable α] {k : α} :
 
 @[simp]
 theorem mem_keys [LawfulBEq α] [LawfulHashable α] {k : α} :
-    k ∈ m.keys ↔ k ∈ m :=
+    k ∈ m.keys ↔ k ∈ m :=
   DHashMap.mem_keys
 
 theorem distinct_keys [EquivBEq α] [LawfulHashable α] :
-    m.keys.Pairwise (fun a b => (a == b) = false) :=
+    m.keys.Pairwise (fun a b => (a == b) = false) :=
   DHashMap.distinct_keys
 
 end
 
@@ -229,9 +229,9 @@ m.inner.values
     {ρ : Type w} [ForIn Id ρ (α × β)] (m : Raw α β) (l : ρ) : Raw α β :=
   ⟨DHashMap.Raw.Const.insertMany m.inner l⟩
 
-@[inline, inherit_doc DHashMap.Raw.Const.insertManyUnit] def insertManyUnit [BEq α] [Hashable α]
-    {ρ : Type w} [ForIn Id ρ α] (m : Raw α Unit) (l : ρ) : Raw α Unit :=
-  ⟨DHashMap.Raw.Const.insertManyUnit m.inner l⟩
+@[inline, inherit_doc DHashMap.Raw.Const.insertManyIfNewUnit] def insertManyIfNewUnit [BEq α]
+    [Hashable α] {ρ : Type w} [ForIn Id ρ α] (m : Raw α Unit) (l : ρ) : Raw α Unit :=
+  ⟨DHashMap.Raw.Const.insertManyIfNewUnit m.inner l⟩
 
 @[inline, inherit_doc DHashMap.Raw.Const.ofList] def ofList [BEq α] [Hashable α]
     (l : List (α × β)) : Raw α β :=
@@ -306,9 +306,9 @@ theorem WF.insertMany [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ (α ×
     (h : m.WF) : (m.insertMany l).WF :=
   ⟨DHashMap.Raw.WF.Const.insertMany h.out⟩
 
-theorem WF.insertManyUnit [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ α] {m : Raw α Unit} {l : ρ}
-    (h : m.WF) : (m.insertManyUnit l).WF :=
-  ⟨DHashMap.Raw.WF.Const.insertManyUnit h.out⟩
+theorem WF.insertManyIfNewUnit [BEq α] [Hashable α] {ρ : Type w} [ForIn Id ρ α] {m : Raw α Unit}
+    {l : ρ} (h : m.WF) : (m.insertManyIfNewUnit l).WF :=
+  ⟨DHashMap.Raw.WF.Const.insertManyIfNewUnit h.out⟩
 
 theorem WF.ofList [BEq α] [Hashable α] {l : List (α × β)} : (ofList l).WF :=
   ⟨DHashMap.Raw.WF.Const.ofList⟩
 
@@ -73,6 +73,10 @@ instance [BEq α] [Hashable α] : Inhabited (HashSet α) where
 /--
 Inserts the given element into the set. If the hash set already contains an element that is
 equal (with regard to `==`) to the given element, then the hash set is returned unchanged.
+
+Note: this non-replacement behavior is true for `HashSet` and `HashSet.Raw`.
+The `insert` function on `HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw` behaves
+differently: it will overwrite an existing mapping.
 -/
 @[inline] def insert (m : HashSet α) (a : α) : HashSet α :=
   ⟨m.inner.insertIfNew a ()⟩
@@ -218,13 +222,16 @@ instance [BEq α] [Hashable α] {m : Type v → Type v} : ForIn m (HashSet α)
   m.inner.keysArray
 
 /--
-Inserts multiple elements into the hash set. Note that unlike repeatedly calling `insert`, if the
-collection contains multiple elements that are equal (with regard to `==`), then the last element
-in the collection will be present in the returned hash set.
+Inserts multiple mappings into the hash set by iterating over the given collection and calling
+`insert`. If the same key appears multiple times, the first occurrence takes precedence.
+
+Note: this precedence behavior is true for `HashSet` and `HashSet.Raw`. The `insertMany` function on
+`HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw` behaves differently: it will prefer the last
+appearance.
 -/
 @[inline] def insertMany {ρ : Type v} [ForIn Id ρ α] (m : HashSet α) (l : ρ) :
     HashSet α :=
-  ⟨m.inner.insertManyUnit l⟩
+  ⟨m.inner.insertManyIfNewUnit l⟩
 
 /--
 Creates a hash set from a list of elements. Note that unlike repeatedly calling `insert`, if the
 
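The first-occurrence behavior documented above is only observable when `==` identifies elements that are otherwise distinguishable. A hedged sketch (assuming `Std.HashSet` with `insertMany` and `toList`, and the post-change keep-first semantics):

```lean
import Std.Data.HashSet
open Std

structure P where
  key : Nat
  tag : String

-- `==` and `hash` look only at `key`, so ⟨1, "first"⟩ and ⟨1, "second"⟩ are "equal".
instance : BEq P := ⟨fun a b => a.key == b.key⟩
instance : Hashable P := ⟨fun a => hash a.key⟩

def xs : List P := [⟨1, "first"⟩, ⟨1, "second"⟩]

-- `HashSet.insertMany` keeps the first of two `==`-equal elements,
-- so only the "first"-tagged element survives:
#eval ((∅ : HashSet P).insertMany xs).toList.map (·.tag)
```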
@@ -74,6 +74,10 @@ instance : Inhabited (Raw α) where
 /--
 Inserts the given element into the set. If the hash set already contains an element that is
 equal (with regard to `==`) to the given element, then the hash set is returned unchanged.
+
+Note: this non-replacement behavior is true for `HashSet` and `HashSet.Raw`.
+The `insert` function on `HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw` behaves
+differently: it will overwrite an existing mapping.
 -/
 @[inline] def insert [BEq α] [Hashable α] (m : Raw α) (a : α) : Raw α :=
   ⟨m.inner.insertIfNew a ()⟩
@@ -216,13 +220,16 @@ instance {m : Type v → Type v} : ForIn m (Raw α) α where
   m.inner.keysArray
 
 /--
-Inserts multiple elements into the hash set. Note that unlike repeatedly calling `insert`, if the
-collection contains multiple elements that are equal (with regard to `==`), then the last element
-in the collection will be present in the returned hash set.
+Inserts multiple mappings into the hash set by iterating over the given collection and calling
+`insert`. If the same key appears multiple times, the first occurrence takes precedence.
+
+Note: this precedence behavior is true for `HashSet` and `HashSet.Raw`. The `insertMany` function on
+`HashMap`, `DHashMap`, `HashMap.Raw` and `DHashMap.Raw` behaves differently: it will prefer the last
+appearance.
 -/
 @[inline] def insertMany [BEq α] [Hashable α] {ρ : Type v} [ForIn Id ρ α] (m : Raw α) (l : ρ) :
     Raw α :=
-  ⟨m.inner.insertManyUnit l⟩
+  ⟨m.inner.insertManyIfNewUnit l⟩
 
 /--
 Creates a hash set from a list of elements. Note that unlike repeatedly calling `insert`, if the
@@ -290,7 +297,7 @@ theorem WF.filter [BEq α] [Hashable α] {m : Raw α} {f : α → Bool} (h : m.W
 
 theorem WF.insertMany [BEq α] [Hashable α] {ρ : Type v} [ForIn Id ρ α] {m : Raw α} {l : ρ}
     (h : m.WF) : (m.insertMany l).WF :=
-  ⟨HashMap.Raw.WF.insertManyUnit h.out⟩
+  ⟨HashMap.Raw.WF.insertManyIfNewUnit h.out⟩
 
 theorem WF.ofList [BEq α] [Hashable α] {l : List α} :
     (ofList l : Raw α).WF :=
 
@@ -27,10 +27,10 @@ structure TZdb where
   localPath : System.FilePath := "/etc/localtime"
 
   /--
-  The path to the directory containing all available time zone files. These files define various
+  All the possible paths to the directories containing all available time zone files. These files define various
   time zones and their rules.
   -/
-  zonesPath : System.FilePath := "/usr/share/zoneinfo/"
+  zonesPaths : Array System.FilePath := #["/usr/share/zoneinfo", "/share/zoneinfo", "/etc/zoneinfo", "/usr/share/lib/zoneinfo"]
 
 namespace TZdb
 open TimeZone
@@ -52,7 +52,7 @@ def parseTZif (bin : ByteArray) (id : String) : Except String ZoneRules := do
 Reads a TZif file from disk and retrieves the zone rules for the specified timezone ID.
 -/
 def parseTZIfFromDisk (path : System.FilePath) (id : String) : IO ZoneRules := do
-  let binary ← try IO.FS.readBinFile path catch _ => throw <| IO.userError s!"cannot find {id} in the local timezone database"
+  let binary ← try IO.FS.readBinFile path catch _ => throw <| IO.userError s!"unable to locate {id} in the local timezone database at '{path}'"
  IO.ofExcept (parseTZif binary id)
 
 /--
@@ -64,8 +64,8 @@ def idFromPath (path : System.FilePath) : Option String := do
   let last₁ ← res.get? (res.size - 2)
 
   if last₁ = some "zoneinfo"
-    then last
-    else last₁ ++ "/" ++ last
+    then last.trim
+    else last₁.trim ++ "/" ++ last.trim
 
 /--
 Retrieves the timezone rules from the local timezone data file.
@@ -89,4 +89,17 @@ def readRulesFromDisk (path : System.FilePath) (id : String) : IO ZoneRules := d
 
 instance : Std.Time.Database TZdb where
   getLocalZoneRules db := localRules db.localPath
-  getZoneRules db id := readRulesFromDisk db.zonesPath id
+
+  getZoneRules db id := do
+    let env ← IO.getEnv "TZDIR"
+
+    if let some path := env then
+      let result ← readRulesFromDisk path id
+      return result
+
+    for path in db.zonesPaths do
+      if ← System.FilePath.pathExists path then
+        let result ← readRulesFromDisk path id
+        return result
+
+    throw <| IO.userError s!"cannot find {id} in the local timezone database"
 
@@ -63,7 +63,7 @@ def compileLeanModule
     unless txt.isEmpty do
       logInfo s!"stdout:\n{txt}"
     unless out.stderr.isEmpty do
-      logInfo s!"stderr:\n{out.stderr}"
+      logInfo s!"stderr:\n{out.stderr.trim}"
     if out.exitCode ≠ 0 then
       error s!"Lean exited with code {out.exitCode}"
 
@@ -120,5 +120,6 @@ Logs a build step with `message`.
 As a result, this no longer functions the way it used to. It now just logs the
 `message` via `logVerbose`.
 -/
-@[deprecated (since := "2024-05-25"), inline] def logStep [Monad m] [MonadLog m] (message : String) : m Unit := do
+@[deprecated "See doc-string for deprecation information." (since := "2024-05-25"), inline]
+def logStep [Monad m] [MonadLog m] (message : String) : m Unit := do
   logVerbose message
 
@@ -152,7 +152,7 @@ Will NOT cause the whole build to fail if the release cannot be fetched.
 abbrev Package.optGitHubReleaseFacet := `optRelease
 package_data optRelease : BuildJob Bool
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated optGitHubReleaseFacet (since := "2024-09-27")]
 abbrev Package.optReleaseFacet := optGitHubReleaseFacet
 
 /--
@@ -162,7 +162,7 @@ Will cause the whole build to fail if the release cannot be fetched.
 abbrev Package.gitHubReleaseFacet := `release
 package_data release : BuildJob Unit
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated gitHubReleaseFacet (since := "2024-09-27")]
 abbrev Package.releaseFacet := gitHubReleaseFacet
 
 /-- A package's `extraDepTargets` mixed with its transitive dependencies'. -/
 
@@ -241,10 +241,10 @@ abbrev Package.gitHubRelease (self : Package) : BuildInfo :=
 abbrev Package.optGitHubRelease (self : Package) : BuildInfo :=
   self.facet optGitHubReleaseFacet
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated gitHubRelease (since := "2024-09-27")]
 abbrev Package.release := @gitHubRelease
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated optGitHubRelease (since := "2024-09-27")]
 abbrev Package.optRelease := @optGitHubRelease
 
 @[inherit_doc extraDepFacet]
 
@@ -47,10 +47,10 @@ def Package.optBuildCacheFacetConfig : PackageFacetConfig optBuildCacheFacet :=
 def Package.maybeFetchBuildCache (self : Package) : FetchM (BuildJob Bool) := do
   let shouldFetch :=
     (← getTryCache) &&
+    !(← self.buildDir.pathExists) && -- do not automatically clobber prebuilt artifacts
     (self.preferReleaseBuild || -- GitHub release
     ((self.scope == "leanprover" || self.scope == "leanprover-community")
-      && !(← getElanToolchain).isEmpty
-      && !(← self.buildDir.pathExists))) -- Reservoir
+      && !(← getElanToolchain).isEmpty)) -- Reservoir
   if shouldFetch then
     self.optBuildCache.fetch
   else
@@ -185,14 +185,14 @@ def Package.barrelFacetConfig : PackageFacetConfig reservoirBarrelFacet :=
 def Package.optGitHubReleaseFacetConfig : PackageFacetConfig optGitHubReleaseFacet :=
   mkOptBuildArchiveFacetConfig buildArchiveFile getReleaseUrl
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated optGitHubReleaseFacetConfig (since := "2024-09-27")]
 abbrev Package.optReleaseFacetConfig := optGitHubReleaseFacetConfig
 
 /-- The `PackageFacetConfig` for the builtin `gitHubReleaseFacet`. -/
 def Package.gitHubReleaseFacetConfig : PackageFacetConfig gitHubReleaseFacet :=
   mkBuildArchiveFacetConfig optGitHubReleaseFacet "GitHub release"
 
-@[deprecated (since := "2024-09-27")]
+@[deprecated gitHubReleaseFacetConfig (since := "2024-09-27")]
 abbrev Package.releaseFacetConfig := gitHubReleaseFacetConfig
 
 /--
 
@@ -305,7 +305,9 @@ If not, check if the info is newer than this trace's modification time.
 **Deprecated:** Should not be done manually,
 but as part of `buildUnlessUpToDate`.
 -/
-@[deprecated (since := "2024-06-14"), specialize] def checkAgainstFile
+@[deprecated "Should not be done manually, but as part of `buildUnlessUpToDate`."
+  (since := "2024-06-14"), specialize]
+def checkAgainstFile
     [CheckExists i] [GetMTime i]
     (info : i) (traceFile : FilePath) (self : BuildTrace)
     : BaseIO Bool := do
@@ -320,7 +322,8 @@ Write trace to a file.
 **Deprecated:** Should not be done manually,
 but as part of `buildUnlessUpToDate`.
 -/
-@[deprecated (since := "2024-06-14")] def writeToFile (traceFile : FilePath) (self : BuildTrace) : IO PUnit := do
+@[deprecated "Should not be done manually, but as part of `buildUnlessUpToDate`." (since := "2024-06-14")]
+def writeToFile (traceFile : FilePath) (self : BuildTrace) : IO PUnit := do
   createParentDirs traceFile
   IO.FS.writeFile traceFile self.hash.toString
 
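The pattern applied throughout these hunks (giving a bare `@[deprecated]` either a replacement identifier or a free-form message) follows Lean's attribute syntax. A small illustration:

```lean
def newFn (x : Nat) : Nat := x + 1

-- Deprecation pointing at a replacement: the linter warning suggests `newFn`.
@[deprecated newFn (since := "2024-09-27")]
def oldFn (x : Nat) : Nat := newFn x

-- Deprecation with a free-form message instead of a replacement.
@[deprecated "Should not be called manually." (since := "2024-06-14")]
def manualStep : IO Unit := pure ()
```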
@@ -104,6 +104,25 @@ structure MaterializedDep where
 @[inline] def MaterializedDep.configFile (self : MaterializedDep) :=
   self.manifestEntry.configFile
 
+def pkgNotIndexed (scope name : String) (rev? : Option String := none) : String :=
+  let (leanRev, tomlRev) :=
+    if let some rev := rev? then
+      (s!" @ {repr rev}", s!"\n    rev = {repr rev}")
+    else ("", "")
+  s!"{scope}/{name}: package not found on Reservoir.
+
+If the package is on GitHub, you can add a Git source. For example:
+
+    require ...
+      from git \"https://github.com/{scope}/{name}\"{leanRev}
+
+or, if using TOML:
+
+    [[require]]
+    git = \"https://github.com/{scope}/{name}\"{tomlRev}
+    ...
+"
+
 /--
 Materializes a configuration dependency.
 For Git dependencies, updates it to the latest input revision.
@@ -131,9 +150,14 @@ def Dependency.materialize
   else
     error s!"{dep.name}: unsupported dependency version format '{ver}' (should be \"git#<rev>\")"
   let depName := dep.name.toString (escape := false)
-  let some pkg ← Reservoir.fetchPkg? lakeEnv dep.scope depName
-    | error s!"{dep.scope}/{depName}: could not materialize package: \
-        dependency has no explicit source and was not found on Reservoir"
+  let pkg ←
+    match (← Reservoir.fetchPkg? lakeEnv dep.scope depName |>.toLogT) with
+    | .ok (some pkg) => pure pkg
+    | .ok none => error <| pkgNotIndexed dep.scope depName verRev?
+    | .error e =>
+      logError s!"{dep.scope}/{depName}: could not materialize package: \
+        this may be a transient error or a bug in Lake or Reservoir"
+      throw e
   let relPkgDir := relPkgsDir / pkg.name
   match pkg.gitSrc? with
   | some (.git _ url githubUrl? defaultBranch? subDir?) =>
 
@@ -30,7 +30,6 @@ in their Lake configuration file with
 
   require {newName} from
     git \"https://github.com/leanprover-community/{newName}\"{rev}
-
 "
 
 /--
@@ -146,7 +145,7 @@ Also, move the packages directory if its location has changed.
 -/
 private def reuseManifest
     (ws : Workspace) (toUpdate : NameSet)
-    : UpdateT LogIO PUnit := do
+    : UpdateT LoggerIO PUnit := do
   let rootName := ws.root.name.toString (escape := false)
   match (← Manifest.load ws.manifestFile |>.toBaseIO) with
   | .ok manifest =>
@@ -174,7 +173,7 @@ private def reuseManifest
     logWarning s!"{rootName}: ignoring previous manifest because it failed to load: {e}"
 
 /-- Add a package dependency's manifest entries to the update state. -/
-private def addDependencyEntries (pkg : Package) : UpdateT LogIO PUnit := do
+private def addDependencyEntries (pkg : Package) : UpdateT LoggerIO PUnit := do
   match (← Manifest.load pkg.manifestFile |>.toBaseIO) with
   | .ok manifest =>
     manifest.packages.forM fun entry => do
@@ -189,7 +188,7 @@ private def addDependencyEntries (pkg : Package) : UpdateT LogIO PUnit := do
 /-- Materialize a single dependency, updating it if desired. -/
 private def updateAndMaterializeDep
     (ws : Workspace) (pkg : Package) (dep : Dependency)
-    : UpdateT LogIO MaterializedDep := do
+    : UpdateT LoggerIO MaterializedDep := do
   if let some entry ← fetch? dep.name then
     entry.materialize ws.lakeEnv ws.dir ws.relPkgsDir
   else
@@ -210,7 +209,7 @@ private def updateAndMaterializeDep
 /-- Verify that a dependency was loaded with the correct name. -/
 private def validateDep
     (pkg : Package) (dep : Dependency) (matDep : MaterializedDep) (depPkg : Package)
-    : LogIO PUnit := do
+    : LoggerIO PUnit := do
   if depPkg.name ≠ dep.name then
     if dep.name = .mkSimple "std" then
       let rev :=
@@ -352,7 +351,7 @@ where
   @[inline] updateAndLoadDep pkg dep := do
     let matDep ← updateAndMaterializeDep (← getWorkspace) pkg dep
     loadUpdatedDep pkg dep matDep
-  @[inline] loadUpdatedDep pkg dep matDep : StateT Workspace (UpdateT LogIO) Package := do
+  @[inline] loadUpdatedDep pkg dep matDep : StateT Workspace (UpdateT LoggerIO) Package := do
    let depPkg ← loadDepPackage matDep dep.opts leanOpts true
    validateDep pkg dep matDep depPkg
    addDependencyEntries depPkg
@@ -361,7 +360,7 @@ where
 /-- Write package entries to the workspace manifest. -/
 def Workspace.writeManifest
     (ws : Workspace) (entries : NameMap PackageEntry)
-    : LogIO PUnit := do
+    : IO PUnit := do
   let manifestEntries := ws.packages.foldl (init := #[]) fun arr pkg =>
     match entries.find? pkg.name with
     | some entry => arr.push <|
@@ -376,7 +375,7 @@ def Workspace.writeManifest
   manifest.saveToFile ws.manifestFile
 
 /-- Run a package's `post_update` hooks. -/
-def Package.runPostUpdateHooks (pkg : Package) : LakeT LogIO PUnit := do
+def Package.runPostUpdateHooks (pkg : Package) : LakeT LoggerIO PUnit := do
   unless pkg.postUpdateHooks.isEmpty do
     logInfo s!"{pkg.name}: running post-update hooks"
     pkg.postUpdateHooks.forM fun hook => hook.get.fn pkg
@@ -405,7 +404,7 @@ reporting warnings and/or errors as appropriate.
 -/
 def validateManifest
     (pkgEntries : NameMap PackageEntry) (deps : Array Dependency)
-    : LogIO PUnit := do
+    : LoggerIO PUnit := do
   if pkgEntries.isEmpty && !deps.isEmpty then
     error "missing manifest; use `lake update` to generate one"
   deps.forM fun dep => do
@@ -429,7 +428,7 @@ downloading and/or updating them as necessary.
 def Workspace.materializeDeps
     (ws : Workspace) (manifest : Manifest)
     (leanOpts : Options := {}) (reconfigure := false)
-    : LogIO Workspace := do
+    : LoggerIO Workspace := do
   if !manifest.packages.isEmpty && manifest.packagesDir? != some (mkRelPathString ws.relPkgsDir) then
     logWarning <|
       "manifest out of date: packages directory changed; \
 
@@ -172,8 +172,9 @@ def uriEncodeChar (c : Char) (s := "") : String :=
 def uriEncode (s : String) : String :=
   s.foldl (init := "") fun s c => uriEncodeChar c s
 
 /-- Perform an HTTP `GET` request of a URL (using `curl`) and return the body. -/
 def getUrl (url : String) (headers : Array String := #[]) : LogIO String := do
-  let args := #["-s", "-L"]
+  let args := #["-s", "-L", "--retry", "3"] -- intermittent network errors can occur
   let args := headers.foldl (init := args) (· ++ #["-H", ·])
   captureProc {cmd := "curl", args := args.push url}
 
@@ -206,9 +207,9 @@ def Reservoir.fetchPkg? (lakeEnv : Lake.Env) (owner pkg : String) : LogIO (Optio
   let out ←
     try
       getUrl url Reservoir.lakeHeaders
-    catch _ =>
+    catch e =>
       logError s!"{owner}/{pkg}: Reservoir lookup failed"
-      return none
+      throw e
   match Json.parse out >>= fromJson? with
   | .ok json =>
     match fromJson? json with
@@ -220,11 +221,14 @@ def Reservoir.fetchPkg? (lakeEnv : Lake.Env) (owner pkg : String) : LogIO (Optio
       if status == 404 then
         return none
       else
-        logError s!"{owner}/{pkg}: Reservoir lookup failed: {msg}"
-        return none
+        error s!"{owner}/{pkg}: Reservoir lookup failed: {msg}"
     | .error e =>
       errorWithLog do
         logError s!"{owner}/{pkg}: Reservoir lookup failed; server returned unsupported JSON: {e}"
-        return none
+        logVerbose s!"{owner}/{pkg}: Reservoir responded with:\n{out.trim}"
+        failure
   | .error e =>
     errorWithLog do
       logError s!"{owner}/{pkg}: Reservoir lookup failed; server returned invalid JSON: {e}"
-      return none
+      logVerbose s!"{owner}/{pkg}: Reservoir responded with:\n{out.trim}"
+      failure
 
@@ -130,9 +130,9 @@ protected def LogEntry.toString (self : LogEntry) (useAnsi := false) : String :=
   if useAnsi then
     let {level := lv, message := msg} := self
     let pre := Ansi.chalk lv.ansiColor s!"{lv.toString}:"
-    s!"{pre} {msg.trim}"
+    s!"{pre} {msg}"
   else
-    s!"{self.level}: {self.message.trim}"
+    s!"{self.level}: {self.message}"

 instance : ToString LogEntry := ⟨LogEntry.toString⟩

@@ -161,7 +161,8 @@ export MonadLog (logEntry)
     message := mkErrorStringWithPos msg.fileName msg.pos str none
   }

-@[deprecated (since := "2024-05-18")] def logToIO (e : LogEntry) (minLv : LogLevel) : BaseIO PUnit := do
+@[deprecated "No deprecation message available." (since := "2024-05-18")]
+def logToIO (e : LogEntry) (minLv : LogLevel) : BaseIO PUnit := do
   match e.level with
   | .trace => if minLv ≥ .trace then
     IO.println e.message.trim |>.catchExceptions fun _ => pure ()
@@ -189,7 +190,8 @@ abbrev lift [MonadLiftT m n] (self : MonadLog m) : MonadLog n where
 instance [MonadLift m n] [methods : MonadLog m] : MonadLog n := methods.lift

 set_option linter.deprecated false in
-@[deprecated (since := "2024-05-18")] abbrev io [MonadLiftT BaseIO m] (minLv := LogLevel.info) : MonadLog m where
+@[deprecated "Deprecated without replacement." (since := "2024-05-18")]
+abbrev io [MonadLiftT BaseIO m] (minLv := LogLevel.info) : MonadLog m where
   logEntry e := logToIO e minLv

 abbrev stream [MonadLiftT BaseIO m]
@@ -399,7 +401,7 @@ from an `ELogT` (e.g., `LogIO`).
   [Monad m] [MonadLiftT BaseIO m] [MonadLog m] [MonadFinally m] (x : m α)
   : m α := do
   let (out, a) ← IO.FS.withIsolatedStreams x
-  unless out.isEmpty do logInfo s!"stdout/stderr:\n{out}"
+  unless out.isEmpty do logInfo s!"stdout/stderr:\n{out.trim}"
   return a

 /-- Throw with the logged error `message`. -/

@@ -18,9 +18,9 @@ def mkCmdLog (args : IO.Process.SpawnArgs) : String :=
   [Monad m] (out : IO.Process.Output) (log : String → m PUnit)
   : m Unit := do
   unless out.stdout.isEmpty do
-    log s!"stdout:\n{out.stdout}"
+    log s!"stdout:\n{out.stdout.trim}"
   unless out.stderr.isEmpty do
-    log s!"stderr:\n{out.stderr}"
+    log s!"stderr:\n{out.stderr.trim}"

 @[inline] def rawProc (args : IO.Process.SpawnArgs) (quiet := false) : LogIO IO.Process.Output := do
   withLogErrorPos do

@@ -8,7 +8,12 @@ export ELAN_TOOLCHAIN=test
 ./clean.sh
 # Tests requiring a package not in the index
 ($LAKE -f bogus-dep.toml update 2>&1 && exit 1 || true) |
-  grep --color "error: bogus/bogus: could not materialize package: dependency has no explicit source and was not found on Reservoir"
+  grep --color "package not found on Reservoir"
+# Tests a request error
+(RESERVOIR_API_URL=example.com $LAKE -f bogus-dep.toml update 2>&1 && exit 1 || true) |
+  grep --color "server returned invalid JSON"
+(RESERVOIR_API_URL=example.com $LAKE -f bogus-dep.toml update -v 2>&1 && exit 1 || true) |
+  grep --color "Reservoir responded with"

 ./clean.sh
 $LAKE -f git.toml update --keep-toolchain

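The `(cmd … && exit 1 || true) | grep …` pattern used in the test script above asserts both that a command fails and that its combined output contains an expected message. A standalone sketch of the idiom with a stand-in command:

```shell
# The inner command is expected to fail; `&& exit 1` turns an unexpected
# success into a test failure, `|| true` swallows the expected non-zero
# status so the pipeline continues, and grep then checks the captured
# stdout+stderr for the expected error text.
(sh -c 'echo "error: boom" >&2; exit 1' 2>&1 && exit 1 || true) |
  grep "error: boom"
```

Because grep itself exits non-zero when the message is missing, the assertion fails under `set -e` if either the command unexpectedly succeeds or the message changes.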
@@ -4,6 +4,7 @@ Released under Apache 2.0 license as described in the file LICENSE.

 Author: Leonardo de Moura, Mac Malone
 */
 #include "util/io.h"
+#include "runtime/io.h"
 #include "runtime/object.h"
 #include "runtime/sstream.h"
@@ -41,4 +42,78 @@ extern "C" LEAN_EXPORT obj_res lean_load_dynlib(b_obj_arg path, obj_arg) {
         return io_result_mk_error(ex.what());
     }
 }
+
+/* loadPlugin : System.FilePath -> IO Unit */
+extern "C" LEAN_EXPORT obj_res lean_load_plugin(b_obj_arg path, obj_arg) {
+    // we never want to look up plugins using the system library search
+    std::string rpath;
+#if defined(LEAN_EMSCRIPTEN)
+    rpath = string_to_std(path);
+    auto sep = rpath.rfind('/');
+#elif defined(LEAN_WINDOWS)
+    constexpr unsigned BufferSize = 8192;
+    char buffer[BufferSize];
+    DWORD retval = GetFullPathName(string_cstr(path), BufferSize, buffer, nullptr);
+    if (retval == 0 || retval > BufferSize) {
+        rpath = string_to_std(path);
+    } else {
+        rpath = std::string(buffer);
+    }
+    auto sep = rpath.rfind('\\');
+#else
+    char buffer[PATH_MAX];
+    char * tmp = realpath(string_cstr(path), buffer);
+    if (tmp) {
+        rpath = std::string(tmp);
+    } else {
+        inc(path);
+        return io_result_mk_error(lean_mk_io_error_no_file_or_directory(path, ENOENT, mk_string("")));
+    }
+    auto sep = rpath.rfind('/');
+#endif
+    if (sep == std::string::npos) {
+        sep = 0;
+    } else {
+        sep++;
+    }
+    auto dot = rpath.rfind(".");
+    if (dot == std::string::npos) {
+        dot = rpath.size();
+    }
+    std::string pkg = rpath.substr(sep, dot - sep);
+    std::string sym = "initialize_" + pkg;
+    void * init;
+#ifdef LEAN_WINDOWS
+    HMODULE h = LoadLibrary(rpath.c_str());
+    if (!h) {
+        return io_result_mk_error((sstream()
+            << "error loading plugin " << rpath << ": " << GetLastError()).str());
+    }
+    init = reinterpret_cast<void *>(GetProcAddress(h, sym.c_str()));
+#else
+    // Like lean_load_dynlib, the library is loaded with RTLD_GLOBAL.
+    // This ensures the interpreter has access to plugin definitions that are also
+    // imported (e.g., an environment extension defined with builtin_initialize).
+    // In either case, loading the same symbol twice (and thus e.g. running initializers
+    // manipulating global `IO.Ref`s twice) should be avoided; the common module
+    // should instead be factored out into a separate shared library
+    void *handle = dlopen(rpath.c_str(), RTLD_LAZY | RTLD_GLOBAL);
+    if (!handle) {
+        return io_result_mk_error((sstream()
+            << "error loading plugin, " << dlerror()).str());
+    }
+    init = dlsym(handle, sym.c_str());
+#endif
+    if (!init) {
+        return io_result_mk_error((sstream()
+            << "error, plugin " << rpath << " does not seem to contain a module '" << pkg << "'").str());
+    }
+    auto init_fn = reinterpret_cast<object *(*)(uint8_t, object *)>(init);
+    return init_fn(1 /* builtin */, io_mk_world());
+    // NOTE: we never unload plugins
+}
+
+void load_plugin(std::string path) {
+    consume_io_result(lean_load_plugin(mk_string(path), io_mk_world()));
+}
 }

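The new `lean_load_plugin` derives the initializer symbol from the plugin's file name: everything after the last path separator and before the last dot, prefixed with `initialize_`. A sketch of that derivation in shell; the path is a hypothetical example:

```shell
path="/opt/plugins/Foo.so"   # hypothetical plugin path
base="${path##*/}"           # strip the directory part -> "Foo.so"
pkg="${base%.*}"             # strip the extension      -> "Foo"
sym="initialize_$pkg"
echo "$sym"                  # -> initialize_Foo
```

Note one subtlety: the C++ code searches for the last dot in the whole resolved path, so this matches extension-stripping only when the file name itself contains the last dot, which holds for typical `Foo.so`/`Foo.dll` plugin paths.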
@@ -9,4 +9,5 @@ Author: Mac Malone

 namespace lean {
 LEAN_EXPORT void load_dynlib(std::string path);
+LEAN_EXPORT void load_plugin(std::string path);
 }

@@ -5,11 +5,18 @@ options get_default_options() {
     options opts;
     // see https://lean-lang.org/lean4/doc/dev/bootstrap.html#further-bootstrapping-complications
 #if LEAN_IS_STAGE0 == 1
-    // switch to `true` for ABI-breaking changes affecting meta code
+    // set to true to generally avoid bootstrapping issues limited to tactic
+    // blocks in stage 1
+    opts = opts.update({"debug", "byAsSorry"}, false);
+    // switch to `true` for ABI-breaking changes affecting meta code;
+    // see also next option!
     opts = opts.update({"interpreter", "prefer_native"}, false);
-    // switch to `true` for changing built-in parsers used in quotations
-    opts = opts.update({"internal", "parseQuotWithCurrentStage"}, false);
-    // toggling `parseQuotWithCurrentStage` may also require toggling the following option if macros/syntax
+    // switch to `false` when enabling `prefer_native` should also affect use
+    // of built-in parsers in quotations; this is usually the case, but setting
+    // both to `true` may be necessary for handling non-builtin parsers with
+    // builtin elaborators
+    opts = opts.update({"internal", "parseQuotWithCurrentStage"}, true);
+    // changes to builtin parsers may also require toggling the following option if macros/syntax
     // with custom precheck hooks were affected
     opts = opts.update({"quotPrecheck"}, true);

@@ -54,8 +54,6 @@ Author: Leonardo de Moura

#ifdef LEAN_WINDOWS
#include <windows.h>
#else
#include <dlfcn.h>
#endif

#ifdef _MSC_VER
@@ -323,34 +321,6 @@ options set_config_option(options const & opts, char const * in) {
     }
 }

-void load_plugin(std::string path) {
-    void * init;
-    // we never want to look up plugins using the system library search
-    path = lrealpath(path);
-    std::string pkg = stem(path);
-    std::string sym = "initialize_" + pkg;
-#ifdef LEAN_WINDOWS
-    HMODULE h = LoadLibrary(path.c_str());
-    if (!h) {
-        throw exception(sstream() << "error loading plugin " << path << ": " << GetLastError());
-    }
-    init = reinterpret_cast<void *>(GetProcAddress(h, sym.c_str()));
-#else
-    void *handle = dlopen(path.c_str(), RTLD_LAZY);
-    if (!handle) {
-        throw exception(sstream() << "error loading plugin, " << dlerror());
-    }
-    init = dlsym(handle, sym.c_str());
-#endif
-    if (!init) {
-        throw exception(sstream() << "error, plugin " << path << " does not seem to contain a module '" << pkg << "'");
-    }
-    auto init_fn = reinterpret_cast<object *(*)(uint8_t, object *)>(init);
-    object *r = init_fn(1 /* builtin */, io_mk_world());
-    consume_io_result(r);
-    // NOTE: we never unload plugins
-}
-
 namespace lean {
 extern "C" object * lean_run_frontend(
     object * input,
@@ -619,7 +589,7 @@ extern "C" LEAN_EXPORT int lean_main(int argc, char ** argv) {
 #endif
         case 'p':
             check_optarg("p");
-            load_plugin(optarg);
+            lean::load_plugin(optarg);
             forwarded_args.push_back(string_ref("--plugin=" + std::string(optarg)));
             break;
         case 'l':

Binary files changed (generated; contents not shown):
  stage0/src/CMakeLists.txt
  stage0/src/kernel/type_checker.cpp
  stage0/src/runtime/load_dynlib.cpp
  stage0/src/runtime/load_dynlib.h
@@ -5,11 +5,18 @@ options get_default_options() {
     options opts;
     // see https://lean-lang.org/lean4/doc/dev/bootstrap.html#further-bootstrapping-complications
 #if LEAN_IS_STAGE0 == 1
-    // switch to `true` for ABI-breaking changes affecting meta code
+    // set to true to generally avoid bootstrapping issues limited to tactic
+    // blocks in stage 1
+    opts = opts.update({"debug", "byAsSorry"}, false);
+    // switch to `true` for ABI-breaking changes affecting meta code;
+    // see also next option!
     opts = opts.update({"interpreter", "prefer_native"}, false);
-    // switch to `true` for changing built-in parsers used in quotations
-    opts = opts.update({"internal", "parseQuotWithCurrentStage"}, false);
-    // toggling `parseQuotWithCurrentStage` may also require toggling the following option if macros/syntax
+    // switch to `false` when enabling `prefer_native` should also affect use
+    // of built-in parsers in quotations; this is usually the case, but setting
+    // both to `true` may be necessary for handling non-builtin parsers with
+    // builtin elaborators
+    opts = opts.update({"internal", "parseQuotWithCurrentStage"}, true);
+    // changes to builtin parsers may also require toggling the following option if macros/syntax
     // with custom precheck hooks were affected
     opts = opts.update({"quotPrecheck"}, true);

Binary files changed (generated; contents not shown):
  stage0/src/util/shell.cpp
  stage0/stdlib/Init/Data.c
  stage0/stdlib/Init/Data/AC.c
  stage0/stdlib/Init/Data/Array.c
  stage0/stdlib/Init/Data/Array/Basic.c
  stage0/stdlib/Init/Data/Array/BinSearch.c
  stage0/stdlib/Init/Data/Array/DecidableEq.c
  stage0/stdlib/Init/Data/Array/FinRange.c (new file)
  stage0/stdlib/Init/Data/Array/InsertionSort.c
  stage0/stdlib/Init/Data/Array/QSort.c
  stage0/stdlib/Init/Data/Array/Set.c
  stage0/stdlib/Init/Data/ByteArray/Basic.c
  stage0/stdlib/Init/Data/Fin/Fold.c
  stage0/stdlib/Init/Data/Format/Basic.c
  stage0/stdlib/Init/Data/List.c
  stage0/stdlib/Init/Data/List/FinRange.c (new file)
  stage0/stdlib/Init/Data/List/OfFn.c
  stage0/stdlib/Init/Data/Nat/Fold.c
  stage0/stdlib/Init/Data/Nat/Linear.c
  stage0/stdlib/Init/Data/Ord.c
  stage0/stdlib/Init/Data/UInt/Basic.c
  stage0/stdlib/Init/Data/Vector.c (new file)
  stage0/stdlib/Init/Data/Vector/Basic.c (new file)
  stage0/stdlib/Init/Data/Vector/Lemmas.c (new file)
  stage0/stdlib/Init/Meta.c
  stage0/stdlib/Init/MetaTypes.c
  stage0/stdlib/Init/Prelude.c
  stage0/stdlib/Init/ShareCommon.c
  stage0/stdlib/Init/System/IO.c
  stage0/stdlib/Lake/Build/Actions.c
Some files were not shown because too many files have changed in this diff.