This PR adds a heterogeneous version of the constructor injectivity theorems.
These theorems are useful for indexed families, and will be used in `grind`.
This PR fixes a bug in `grind?`. The suggestion using the `grind`
interactive mode was dropping the configuration options provided by the
user. In the following example, the third suggestion was dropping the
`-reducible` option.
```lean
/--
info: Try these:
[apply] grind -reducible only [Equiv.congr_fun, #5103]
[apply] grind -reducible only [Equiv.congr_fun]
[apply] grind -reducible => cases #5103 <;> instantiate only [Equiv.congr_fun]
-/
example :
    (Equiv.sigmaCongrRight e).trans (Equiv.sigmaEquivProd α₁ β₂)
      = (Equiv.sigmaEquivProd α₁ β₁).trans (prodCongrRight e) := by
  grind? -reducible [Equiv.congr_fun]
```
This PR adds the `grind` option `reducible` (default: `true`). When
enabled, definitional equality tests expand only declarations marked as
`@[reducible]`.
Use `grind -reducible` to allow expansion of non-reducible declarations
during definitional equality tests.
This option affects only definitional equality; the canonicalizer and
theorem pattern internalization always unfold reducible declarations
regardless of this setting.
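A minimal usage sketch (the goals are placeholders for illustration; only the option flag matters here):
```lean
example (p : Prop) (h : p) : p := by
  -- default: only `@[reducible]` declarations are unfolded in definitional
  -- equality tests
  grind +reducible

example (p : Prop) (h : p) : p := by
  -- allow unfolding non-reducible declarations during definitional equality
  -- tests
  grind -reducible
```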
This PR refines several error messages, mostly involving invalid use of
field notation, generalized field notation, and numeric projection. It
also provides a new error explanation for field notation.
## Error message changes
In general:
- Uses a slightly different convention for expression-type pairs, where
the expression is always given `indentExpr` and the type is given
`inlineExpr` treatment. This is something of a workaround for the fact
that the `Format` type is awkward for embedding possibly-linebreaking
expressions in non-linebreaking text, which may be a separate issue
worth addressing.
- Tries to give slightly more "why" reasoning — the environment does not
contain `String.parse`, and _therefore you can't project `.parse` from a
`String`_.
Some specific examples:
### No such projection function
```lean4
#check "".parse
```
before:
```
error: Invalid field `parse`: The environment does not contain `String.parse`
""
has type
String
```
after:
```
error: Invalid field `parse`: The environment does not contain `String.parse`, so it is not possible to project the field `parse` from an expression
""
of type `String`
```
### Type does not have the correct form
```lean4
example (x : α) := (foo x).foo
```
before:
```
error: Invalid field notation: Type is not of the form `C ...` where C is a constant
foo x
has type
α
```
after:
```
error: Invalid field notation: Field projection operates on types of the form `C ...` where C is a constant. The expression
foo x
has type `α` which does not have the necessary form.
```
## Refactoring
Includes some refactoring changes as well:
- factors out multiple uses of number-to-ordinal conversion (1, 2, 3,
212, 222 → "first", "second", "third", "212th", "222nd") into
`Lean.Elab.ErrorUtils`
- significant refactoring of `resolveLValAux` in `Lean.Elab.App` — in
place of five helper functions, a special-case function, and a case
analysis on the projection type and structure, there is now a single
case analysis on the projection type and structure. This allows several
error messages to be more explicit (there were a number of cases where
index projection was being described as field projection in an error
message) and gave the opportunity to slightly improve positioning for
several errors: field *notation* errors should appear on `foo.bar`, but
field *projection* errors should appear only on the `bar` part of
`foo.bar`.
This PR moves the processing of options passed to the CLI from
`shell.cpp` to `Shell.lean`.
As with previous ports, this attempts to mirror as much of the original
behavior as possible; benefits to be gained from the ported code can
come in later PRs. There should be no significant behavioral changes
from this port. Nonetheless, error reporting has changed somewhat,
hopefully for the better. For instance, errors for improper argument
configurations have been made more consistent (e.g., Lean will now error
if numeric arguments fall outside the expected range for an option).
(Redo of #11345 to fix Windows issue.)
This PR generalizes the `noConfusion` constructions to heterogeneous
equalities (assuming propositional equalities between the indices). This
lays the groundwork for better support for applying `injection` to
heterogeneous equalities in `grind`.
The `Meta.mkNoConfusion` app builder shields most of the code from these
changes.
Since the per-constructor noConfusion principles are now more
expressive, `Meta.mkNoConfusion` no longer uses the general one.
In `Init.Prelude` some proofs are more pedestrian because `injection`
now needs a bit more machinery.
This is a breaking change for whoever uses the `noConfusion` principle
manually and explicitly for a type with indices.
Fixes #11450.
This PR adds lemmas stating that if a get operation returns a value,
then the queried key must be contained in the collection. These lemmas
are added for HashMap and TreeMap-based collections, with a similar
lemma also added for `Init.getElem`.
This PR adapts the lambda lifter in LCNF to eta contract instead of
lambda lift if possible. This prevents the creation of a few hundred
unnecessary lambdas across the code base.
This PR moves the `Inhabited` instance arguments in constant `DTreeMap`
(and related) queries, such as `Const.get!`, so that the `Inhabited`
instance can be provided before providing a key.
This PR fixes a panic in `getEqnsFor?` when called on matchers generated
from match expressions in theorem types.
When a theorem's type contains a match expression (e.g., `theorem bar :
(match ... with ...) = 0`), the compiler generates a matcher like
`bar.match_1`. Calling `getEqnsFor?` on this matcher would panic with:
```
PANIC: duplicate normalized declaration name bar.match_1.eq_1 vs. _private...bar.match_1.eq_1
```
This also affected the `try?` tactic, which internally uses
`getEqnsFor?`.
We make `shouldGenerateEqnThms` return `false` for matchers, since their
equations are already generated separately by
`Lean.Meta.Match.MatchEqs`. This prevents the equation generation
machinery from attempting to create duplicate equation theorems.
Closes #11461. Closes #10390.
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes various typos across the codebase in documentation and
comments.
- `infered` → `inferred` (ParserCompiler.lean)
- `declartation` → `declaration` (Cleanup.lean)
- `certian` → `certain` (CasesInfo.lean)
- `wil` → `will` (Cache.lean)
- `the the` → `the` (multiple files - PrefixTree.lean, Sum/Basic.lean,
List/Nat/Perm.lean, Time.lean, Bounded.lean, Lake files)
- `to to` → `to` (MutualInductive.lean, simp_bubblesort_256.lean)
- Grammar improvements in Bounded.lean and Time.lean
All changes are to comments and documentation only - no functional
changes.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds `+suggestions` support to `solve_by_elim`, following the
pattern established by `grind +suggestions` and `simp_all +suggestions`.
Gracefully handles invalid/nonexistent suggestions by filtering them out.
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds `solve_by_elim` as a fallback in the `try?` tactic's simple
tactics. When `rfl` and `assumption` both fail but `solve_by_elim`
succeeds (e.g., for goals requiring hypothesis chaining or
backtracking), `try?` will now suggest `solve_by_elim`.
The structure is `first | (attempt_all | rfl | assumption) |
solve_by_elim`, so `solve_by_elim` only runs when the faster tactics
fail.
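For illustration, a goal of the kind this targets, where `rfl` and `assumption` fail but `solve_by_elim` closes it by chaining hypotheses:
```lean
example (p q r : Prop) (hp : p) (hpq : p → q) (hqr : q → r) : r := by
  -- `assumption` fails (no hypothesis of type `r`) and `rfl` does not apply,
  -- but `solve_by_elim` finds `hqr (hpq hp)`
  solve_by_elim
```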
This is a prerequisite for removing the "first pass" `solve_by_elim`
from `apply?`. Currently `apply?` calls `solve_by_elim` twice: once
before library search, and once after each lemma application. The first
pass can be removed once `try?` includes `solve_by_elim`.
🤖 Prepared with Claude Code
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR removes the "first pass" behavior where `exact?` and `apply?`
would try `solve_by_elim` on the original goal before doing library
search. This simplifies the `librarySearch` API and focuses these
tactics on their primary purpose: finding library lemmas.
Users who want to find proofs using local hypotheses should use `try?`
instead, which now includes `solve_by_elim` in its pipeline (see
https://github.com/leanprover/lean4/pull/11462).
Changes:
- Removed first pass from `librarySearch`
- Simplified `tactic` parameter from `Bool → List MVarId → MetaM (List
MVarId)` to `List MVarId → MetaM (List MVarId)`
- Updated test expectations
🤖 Prepared with Claude Code
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR improves the error message when no library suggestions engine is
registered to recommend importing `Lean.LibrarySuggestions.Default` for
the built-in engine.
**Before:**
```
No library suggestions engine registered. (Note that Lean does not provide a default library suggestions engine, these must be provided by a downstream library, and configured using `set_library_suggestions`.)
```
**After:**
```
No library suggestions engine registered. (Add `import Lean.LibrarySuggestions.Default` to use Lean's built-in engine, or use `set_library_suggestions` to configure a custom one.)
```
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds recording functionality such that `shake` can more
precisely track whether an import should be preserved solely for its
`attribute` commands.
This PR adds the difference operation on `DTreeMap`/`TreeMap`/`TreeSet`
and proves several lemmas about it.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
The bench script expected no output on stdout from `compile.sh`, which
was not always the case. Now, it separates the compilation and size
measurement steps.
This PR slightly improves the types involved in creating boxed
declarations. Previously the type of the vdecl used for the return was
always `tobj` when returning a boxed scalar. This is not the most
precise annotation we can give.
This PR modifies the error message for type class instance synthesis
failure in the case where the type class in question is potentially
derivable using a `deriving` command. It also changes the error
explanation for type class instance synthesis failure to include an
illustration of this pattern.
## Example
```lean4
inductive MyColor where
  | chartreuse | sienna | thistle

def forceColor (oc : Option MyColor) :=
  oc.get!
```
Before this PR, this gives the potentially confusing impression that
Lean may have decided that `MyColor` is _not_ inhabited — people used to
Rust may be especially inclined towards this confusion.
```
failed to synthesize instance of type class
Inhabited MyColor
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
After this PR, a targeted hint suggests precisely the command that will
fix the issue:
```
error: failed to synthesize instance of type class
Inhabited MyColor
Hint: Adding the command `deriving instance Inhabited for MyColor` may allow Lean to derive the missing instance.
```
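For completeness, the fix the hint points at: adding the suggested `deriving instance` command lets the original definition elaborate.
```lean
inductive MyColor where
  | chartreuse | sienna | thistle

-- the command suggested by the new hint
deriving instance Inhabited for MyColor

def forceColor (oc : Option MyColor) :=
  oc.get!
```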
This PR lets recursive functions defined by well-founded recursion use a
different `fix` function when the termination measure is of type `Nat`.
This fixpoint operator uses structural recursion on “fuel”, initialized
by the given measure, and is thus reasonable to reduce, e.g. in `by
decide` proofs.
Extra provisions are in place so that the fixpoint operator only starts
reducing when the fuel is fully known, to prevent “accidental” defeqs
when the remaining fuel for the recursive calls matches the initial fuel
for that recursive argument.
To opt out, the idiom `termination_by (n, 0)` can be used.
We still use `@[irreducible]` as the default for such recursive
definitions, to avoid unexpected `defeq` lemmas. Making these functions
`@[semireducible]` by default showed performance regressions in Lean.
When the measure is of type `Nat`, the system will accept an explicit
`@[semireducible]` without the usual warning.
Fixes #5234. Fixes #11181.
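A rough sketch of the intended effect, assuming the behavior described above (the `fib` definitions are only illustrative and not part of this PR):
```lean
-- Well-founded recursion over a `Nat` measure now uses the fuel-based fixpoint
-- operator; with an explicit `@[semireducible]`, it can reduce in `decide` proofs.
@[semireducible] def fib (n : Nat) : Nat :=
  if n < 2 then n else fib (n - 1) + fib (n - 2)

example : fib 10 = 55 := by decide

-- Opting out of the fuel-based operator via a lexicographic measure:
def fib' (n : Nat) : Nat :=
  if n < 2 then n else fib' (n - 1) + fib' (n - 2)
termination_by (n, 0)
```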
This PR performs minor maintenance on the String API
- Rename `String.Pos.toCopy` to `String.Pos.copy` to adhere to the
naming convention
- Rename `String.Pos.extract` to `String.extract` to get sane dot
notation again
- Add `String.Slice.Pos.extract`
This PR reduces the memory consumption of the language server (the
watchdog process in particular). In Mathlib, it reduces memory
consumption by about 1GB.
It also fixes two bugs in the call hierarchy:
- When an open file had import errors (e.g. from a transitive build
failure), the call hierarchy would not display any usages in that file.
Now we use the reference information from the .ilean instead.
- When a command would not set a parent declaration (e.g. `#check`), the
result was filtered from the call hierarchy. Now we display it as
`[anonymous]` instead.
This PR documents the `grind_pattern` command for manually selecting
theorem instantiation patterns, including multi-patterns and the
constraint system (`=/=`, `=?=`, `size`, `depth`, `is_ground`,
`is_value`, `is_strict_value`, `gen`, `max_insts`, `guard`, `check`).
This PR implements support for **guards** in `grind_pattern`. The new
feature provides additional control over theorem instantiation. For
example, consider the following monotonicity theorem:
```lean
opaque f : Nat → Nat
theorem fMono : x ≤ y → f x ≤ f y := ...
```
We can use `grind_pattern` to instruct `grind` to instantiate the
theorem for every pair `f x` and `f y` occurring in the goal:
```lean
grind_pattern fMono => f x, f y
```
Then we can automatically prove the following simple example using
`grind`:
```lean
/--
trace: [grind.ematch.instance] fMono: f a ≤ b → f (f a) ≤ f b
[grind.ematch.instance] fMono: f a ≤ c → f (f a) ≤ f c
[grind.ematch.instance] fMono: f a ≤ a → f (f a) ≤ f a
[grind.ematch.instance] fMono: f a ≤ f (f a) → f (f a) ≤ f (f (f a))
[grind.ematch.instance] fMono: f a ≤ f a → f (f a) ≤ f (f a)
[grind.ematch.instance] fMono: f (f a) ≤ b → f (f (f a)) ≤ f b
[grind.ematch.instance] fMono: f (f a) ≤ c → f (f (f a)) ≤ f c
[grind.ematch.instance] fMono: f (f a) ≤ a → f (f (f a)) ≤ f a
[grind.ematch.instance] fMono: f (f a) ≤ f (f a) → f (f (f a)) ≤ f (f (f a))
[grind.ematch.instance] fMono: f (f a) ≤ f a → f (f (f a)) ≤ f (f a)
[grind.ematch.instance] fMono: a ≤ b → f a ≤ f b
[grind.ematch.instance] fMono: a ≤ c → f a ≤ f c
[grind.ematch.instance] fMono: a ≤ a → f a ≤ f a
[grind.ematch.instance] fMono: a ≤ f (f a) → f a ≤ f (f (f a))
[grind.ematch.instance] fMono: a ≤ f a → f a ≤ f (f a)
[grind.ematch.instance] fMono: c ≤ b → f c ≤ f b
[grind.ematch.instance] fMono: c ≤ c → f c ≤ f c
[grind.ematch.instance] fMono: c ≤ a → f c ≤ f a
[grind.ematch.instance] fMono: c ≤ f (f a) → f c ≤ f (f (f a))
[grind.ematch.instance] fMono: c ≤ f a → f c ≤ f (f a)
[grind.ematch.instance] fMono: b ≤ b → f b ≤ f b
[grind.ematch.instance] fMono: b ≤ c → f b ≤ f c
[grind.ematch.instance] fMono: b ≤ a → f b ≤ f a
[grind.ematch.instance] fMono: b ≤ f (f a) → f b ≤ f (f (f a))
[grind.ematch.instance] fMono: b ≤ f a → f b ≤ f (f a)
-/
#guard_msgs in
example : f b = f c → a ≤ f a → f (f a) ≤ f (f (f a)) := by
  set_option trace.grind.ematch.instance true in
  grind
```
However, many unnecessary theorem instantiations are generated.
With the new `guard` feature, we can instruct `grind` to instantiate the
theorem **only if** `x ≤ y` is already known to be true in the current
`grind` state:
```lean
grind_pattern fMono => f x, f y where
  guard x ≤ y
  x =/= y
```
If we run the example again, only three instances are generated:
```lean
/--
trace: [grind.ematch.instance] fMono: a ≤ f a → f a ≤ f (f a)
[grind.ematch.instance] fMono: f a ≤ f (f a) → f (f a) ≤ f (f (f a))
[grind.ematch.instance] fMono: a ≤ f (f a) → f a ≤ f (f (f a))
-/
#guard_msgs in
example : f b = f c → a ≤ f a → f (f a) ≤ f (f (f a)) := by
  set_option trace.grind.ematch.instance true in
  grind
```
Note that `guard` does **not** check whether the expression is
*implied*. It only checks whether the expression is *already known* to
be true in the current `grind` state. If this fact is eventually
learned, the theorem will be instantiated.
If you want `grind` to check whether the expression is implied, you
should use:
```lean
grind_pattern fMono => f x, f y where
  check x ≤ y
  x =/= y
```
Remark: we can use multiple `guard`/`check`s in a `grind_pattern`
command.
This PR adds per-module `.ilean` and `.olean` file size metrics, global
and per-module cycle counting, and adds back `lean --stat`-based
metrics. It also renames some `size/*` metrics to get rid of the name
`stdlib`.
This PR clarifies the bootstrap documentation to explain that to trigger
the automatic stage0 update mechanism, you should modify
`stage0/src/stdlib_flags.h` (not `src/stdlib_flags.h`). The existing
text was ambiguous about which file to modify.
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR reverts https://github.com/leanprover/lean4/pull/11396, which
changed `set_library_suggestions` to create an auxiliary definition
marked with `@[library_suggestions]`, rather than storing `Syntax`
directly in the environment extension.
It wasn't tested properly.
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes a kernel type mismatch error in grind's denominator
cleanup feature. When generating proofs involving inverse numerals (like
`2⁻¹`), the proof context is compacted to only include variables
actually used. This involves renaming variable indices - e.g., if
original indices were `{0: r, 1: 2⁻¹}` and only `2⁻¹` is used, it gets
renamed to index 0.
The bug was that polynomials were correctly renamed via `varRename`, but
the variable index `x` stored in `cancelDen` constraints was passed
directly to the proof without renaming, causing a mismatch between the
polynomial's variable references and the theorem's variable argument.
Added `ringVarDecls` to track ring variable indices that need renaming,
similar to how `ringPolyDecls` tracks polynomials. The `mkRingContext`
function now also renames these variable indices.
See zulip discussion at [#nightly-testing > Mathlib status updates @
💬](https://leanprover.zulipchat.com/#narrow/channel/428973-nightly-testing/topic/Mathlib.20status.20updates/near/560575295).
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR changes `set_library_suggestions` to create an auxiliary
definition marked with `@[library_suggestions]`, rather than storing
`Syntax` directly in the environment extension. This enables better
persistence and consistency of library suggestions across modules.
The change requires a stage0 update before tests can be restored. After
CI updates stage0, a follow-up PR will restore the test cases.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
This PR fixes an issue where `grind` would fail after multiple
`norm_cast`
calls with the error "unexpected metadata found during internalization".
The `norm_cast` tactic adds mdata nodes to expressions, and when called
multiple times it creates nested mdata. The `eraseIrrelevantMData`
preprocessing function was using `.continue e` when stripping mdata,
which causes `Core.transform` to reconstruct the mdata node around the
visited children. By changing to `.visit e`, the inner expression is
passed back to `pre` for another round of processing, allowing all
nested mdata layers to be stripped.
Closes #11411.
🤖 Prepared with Claude Code
Co-authored-by: Claude <noreply@anthropic.com>
This PR sorts the declarations fed into ElimDeadBranches in increasing
order of size. This can improve performance when we are dealing with a
lot of iterations.
The motivation for this change is as follows. Currently the algorithm
for doing one step of abstract interpretation is:
```
for decl in scc do
  interpDecl
  if summaryChanged decl then
    return true
return false
```
whenever we return true we run another step. Now suppose we are in a
situation where we have an SCC with one big decl in the front and then
`n` small ones afterwards. For each time that the small ones change
their summary, we will re-run analysis of the big one in the front.
Currently the ordering is basically at "random" based on how other
compilers inject things into the SCC. This change ensures the behavior
is consistent and at least somewhat intelligent. By putting the small
declarations first, whenever we trigger a rerun of the loop we bias
analyzing the small declarations first, thus decreasing run time.
Note that this change does not have much effect on the current pipeline,
because we usually construct the SCCs in a way such that small ones
happen to be in front anyway. However, with upcoming changes to
specialization this is about to change.
This PR implements the following `grind_pattern` constraints:
```lean
grind_pattern fax => f x where
  depth x < 2

grind_pattern fax => f x where
  is_ground x

grind_pattern fax => f x where
  size x < 5

grind_pattern fax => f x where
  gen < 2

grind_pattern fax => f x where
  max_insts < 4

grind_pattern gax => g as where
  as =?= _ :: _
```
This PR adds support for difference operation for
`DHashMap`/`HashMap`/`HashSet` and proves several lemmas about it.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR implements new kinds of constraints for the `grind_pattern`
command. These constraints allow users to control theorem instantiation
in `grind`.
It requires a manual `update-stage0` because the change affects the
`.olean` format, and the PR fails without it.
Not tested carefully: I will shake out any problems during the next
release. This script would have detected the mistakes I made in recent
releases of `v4.24.1` / `v4.25.1` and `v4.25.2`. (And #11374 would have
prevented these mistakes.)
This PR fixes the compilation of structure projections with unboxed
arguments marked `extern`, adding missing `dec` instructions. The
missing instructions led to leaking single allocations when such
functions were used as closures or in the interpreter.
This is the minimal working fix; `extern` should not replicate parts of
the compilation pipeline, which will be possible via #10291.
This PR is a follow-up to #11381 and enforces the invariants on ordering
of closed terms and constants required by the EmitC pass properly by
toposorting before saving the declarations into the Environment.
This PR adds `ofArray` to `DHashMap`/`HashMap`/`HashSet` and proves a
simp lemma that allows rewriting `ofArray` to `ofList`.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR adds a release note draft for the next major release, where the
module system will cease being experimental.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR adds missing docstrings for constants that occur in the
reference manual.
---------
Co-authored-by: Johannes Tantow <44068763+jt0202@users.noreply.github.com>
This PR fixes a bug where the closed term extraction does not respect
the implicit invariant of the C emitter that closed term decls come
first and other decls second within an SCC. This bug has not yet been
triggered in the wild but was unearthed during work on upcoming
modifications of the specializer.
This PR renames `String.Slice.Pos.ofSlice` to `String.Pos.ofToSlice` to
adhere to the (yet-to-be documented) naming convention for mapping
positions to positions. It then adds several new functions so that for
every way to construct a slice from a string and slice, there are now
functions for mapping positions forwards and backwards along this
construction.
This PR sets `@[macro_inline]` on the (trivial) `.ctorIdx` for inductive
types with one constructor, to reduce the number of symbols generated by
the compiler.
This PR aims to improve the performance of `String.contains`,
`String.find`, etc. when using patterns of type `Char` or `Char -> Bool`
by moving the needle out of the iterator state and thus working around
missing unboxing in the compiler.
This PR makes the library suggestions extension state available when
importing from `module` files.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds support for cleaning up denominators in `grind linarith`
when the type is a `Field`.
Examples:
```lean
open Std Lean.Grind
section
variable {α : Type} [Field α] [LE α] [LT α] [LawfulOrderLT α] [IsLinearOrder α] [OrderedRing α]
example (a b : α) (h : a < b / 2) : 2 * a < b := by grind
example (a b : α) (_ : 0 ≤ a) (h : a ≤ b) : a / 7 ≤ b / 2 := by grind
example (a b : α) (_ : b < 0) (h : a < b) : (3/2) * a < (5/4) * b := by grind
example (a b : α) (h : a = b * (3⁻¹)^2) : 9 * a ≤ b := by grind
example (a b : α) (h : a / 2 ≠ b / 9) : 9 * a < 2 * b ∨ 9 * a > 2 * b := by grind
example (a b : α) (h : a < b / (2^2 - 3/2 + -1 + 1/2)) : 2 * a < b := by grind
end
example (a b : Rat) (h : a < b / 2) : a + a < b := by grind
example (a b : Rat) (h : a < b / 2) : a + a ≤ b := by grind
example (a b : Rat) (h : a ≠ b * (3⁻¹)^2) : 9 * a < b ∨ 9 * a > b := by grind
example (a b : Rat) (h : a / 2 ≠ b / 9) : 9 * a < 2 * b ∨ 9 * a > 2 * b := by grind
```
This PR makes the `Std.Time.Format` import in
`Lean.Elab.Tactic.Grind.Annotated` private rather than public,
preventing the entire `Std.Time` infrastructure (including timezone
databases) from being re-exported through `import Lean`.
The `grindAnnotatedExt` extension is kept private, with a new public
accessor function `isGrindAnnotatedModule` exposed for use by
`LibrarySuggestions.Basic`.
This should address the +2.5% instruction increase on `import Lean`
observed after merging #11332.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR enables parallelism in `try?`. Currently, we replace the
`attempt_all` stages (there are two, one for builtin tactics including
`grind` and `simp_all`, and a second one for all user extensions) with
parallel versions. We do not (yet?) change the behaviour of `first`-based
stages.
This PR moves the processing of options passed to the CLI from
`shell.cpp` to `Shell.lean`.
As with previous ports, this attempts to mirror as much of the original
behavior as possible; benefits to be gained from the ported code can
come in later PRs. There should be no significant behavioral changes
from this port. Nonetheless, error reporting has changed somewhat,
hopefully for the better. For instance, errors for improper argument
configurations have been made more consistent (e.g., Lean will now error
if numeric arguments fall outside the expected range for an option).
This PR accelerates termination of the ElimDeadBranches compiler pass.
The implementation addresses situations such as `choice [none, some top]`,
which can be summarized to `top` because `Option` has only two
constructors and all constructor arguments are `top`.
This PR implements a helper simproc for `grind`. It is part of the
infrastructure used to cleanup denominators in `grind linarith`.
---------
Co-authored-by: Kim Morrison <kim@tqft.net>
This PR adds a focused error explanation aimed at the case where someone
tries to use Natural-Numbers-Game-style `induction` proofs directly in
Lean, where such proofs are not syntactically valid.
## Discussion
The natural numbers game uses a syntax that overlaps with Lean's
`induction` syntax despite having more structural similarity to
`induction'`. This means that fully correct proofs in the natural
numbers game, like this...
```lean4
import Mathlib
theorem zero_mul (m : ℕ) : 0 * m = 0 := by
  induction m with n n_ih
  rw [mul_zero]
  rfl
  rw [mul_succ]
  rw [add_zero]
  rw [n_ih]
  rfl
```
...have completely baffling error messages from a newcomers'
perspective:
```
notNaturalNumbersGame.lean:3:20: error: unknown tactic
notNaturalNumbersGame.lean:3:2: error: Alternative `zero` has not been provided
notNaturalNumbersGame.lean:3:2: error: Alternative `succ` has not been provided
```
(the Mathlib import only provides the `ℕ` syntax here; equivalently
`ℕ` could be renamed to `Nat` and the import could be removed, [like
this](https://live.lean-lang.org/#codez=C4Cwpg9gTmC2AEAvMUIH1YFcA28AUCAXPAHICGwAlPMQAzwBU8CAvPPYWwEYCeAUPHgBLAHYATTAGNgQiCObwA7kNDx5ItEJAD4URfADaWbGmSoAujqgAzbFf1GcaAM5TJlwXsNkxY0yggPXQcNLSCbbCA))
There are many problems with this proof from the perspective of "stock"
Lean, but the error messages in the `induction` case are particularly
unfriendly and provide no guidance from an NNG learner's perspective.
This PR provides more information about what is wrong:
```
notNaturalNumbersGame.lean:3:20: error: unknown tactic
notNaturalNumbersGame.lean:3:14: error(lean.inductionWithNoAlts): Invalid syntax for induction tactic: The `with` keyword must be followed by a tactic or by an alternative (e.g. `| zero =>`), but here it is followed by the identifier `n`.
```
The error explanation it links to explicitly flags the transition of
NNG-style proofs to Lean as the likely culprit, and gives an example of
an effective translation.
This PR updates the `foldr`, `all`, `any` and `contains` functions on
`String` to be defined in terms of their `String.Slice` counterparts.
This is the last one in a long series of PRs. After this, all `String`
operations are polymorphic in the pattern, and no `String` operation
falls back to `String.Pos.Raw` internally (except those in the
`String.Pos.Raw` and `String.Substring.Raw` namespaces of course, which
still play a role in metaprogramming and will stay for the foreseeable
future).
This PR adds a new [radar]-based [temci]-less bench suite that replaces
the `stdlib` benchmarks from the old suite and also measures per-module
instruction counts. All other benchmarks from the old suite are
unaffected.
The readme at `tests/bench-radar/README.md` explains in more detail how
the bench suite is structured and how it works. The readmes in the
benchmark subdirectories explain what each benchmark does and which
metrics it collects.
All metrics except `stdlib//max dynamic symbols` were ported to the new
suite, though most have been renamed.
[radar]: https://github.com/leanprover/radar
[temci]: https://github.com/parttimenerd/temci
This PR changes the interface of the `ForIn`, `ForIn'`, and `ForM`
typeclasses to not take a `Monad m` parameter. This is a breaking change
for most downstream `instance`s, which will will now need to assume
`[Monad m]`.
The rationale is that if the provider of an instance requires `m` to be
a Monad, they should assume this up front. This makes it possible for
the instance to assume `LawfulMonad m` or some other stronger
requirement, and also to provide a concrete instance for a particular
`m` without assuming a non-canonical `Monad` structure on it.
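A sketch of the migration for a downstream instance (`MyRange` is a hypothetical example type):
```lean
structure MyRange where
  lo : Nat
  hi : Nat

-- Previously `Monad m` was supplied by the class interface itself; after this
-- change the instance has to assume it explicitly.
instance [Monad m] : ForIn m MyRange Nat where
  forIn r init f := do
    let mut acc := init
    for i in [r.lo : r.hi] do
      match ← f i acc with
      | .yield a => acc := a
      | .done a  => return a
    return acc
```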
Zulip: [#lean4 > Monad assumptions in fields of other typeclasses @
💬](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Monad.20assumptions.20in.20fields.20of.20other.20typeclasses/near/537102158)
This PR activates the `grind_annotated` command in
`Init.Data.List.Lemmas` by removing the TODO comment and uncommenting
the command.
This PR depends on #11346 (implement `grind_annotated` command) and
should be merged after that PR (and after CI has done an
`update-stage0`).
This PR enables the syntax `use [ns Foo]` and `instantiate only [ns
Foo]` inside a `grind` tactic block, which has the effect of activating
all grind patterns scoped to that namespace. We can use this to
implement specialized tactics using `grind` with only controlled subsets
of theorems.
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR upstreams the `with_weak_namespace` command from Mathlib:
`with_weak_namespace <id> <cmd>` changes the current namespace to `<id>`
for the duration of executing command `<cmd>`, without causing scoped
things to go out of scope. This is in preparation for upstreaming the
`scoped[Foo.Bar]` syntax from Mathlib, which will be useful now that we
are adding `grind` annotations in scopes.
This PR adds a `grind_annotated "YYYY-MM-DD"` command that marks files
as manually annotated for grind.
When LibrarySuggestions is called with `caller := "grind"` (as happens
with `grind +suggestions`), theorems from grind-annotated files are
filtered out from premise selection. The date argument validates using
Std.Time and is informational only for now, but could be used later to
detect files that need re-review.
There's no need for the library suggestions tools to suggest `grind`
theorems from files that have already been carefully annotated by hand.
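Usage sketch (the date is illustrative):
```lean
-- Marks this file as manually annotated for `grind`; its theorems are then
-- excluded from `grind +suggestions` premise selection.
grind_annotated "2025-11-01"
```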
This PR adds infrastructure for parallel execution across Lean's tactic
monads.
- Add IO.waitAny' to Init/System/IO.lean for waiting on task completion
- Add `Lean.Elab.Task` with `asTask` utilities for `CoreM`, `MetaM`,
`TermElabM`, `TacticM`
- Add `Lean.Elab.Parallel` with parallel execution strategies:
* `par`/`par'` - collect results in original order
* `parIter`/`parIterGreedy` - iterate over results (original or
completion order) (also variants with a cancellation token)
* `parFirst` - return first successful result
This does *not* attempt to be a monad-polymorphic framework for
parallelism. It's intentionally hard-coded to the Lean tactic monads
which I need to work with. If there's desire to make this polymorphic,
hopefully that can be done separately.
This PR renames `String.bytes` to `String.toByteArray`.
This is for two reasons: first, `toByteArray` is a better name, and
second, we have something else that wants to use the name `bytes`,
namely the function that returns an iterator over the string's bytes.
This PR documents that `backward.*` options are only temporary
migration aids and may disappear without further notice six months
after their introduction. Users are kindly asked to report if they rely
on these options.
This PR adds a coercion from `String` to `String.Slice`.
In our envisioned future, most functions operating on strings will
accept `String.Slice` parameters by default (like `str` in Rust), and
this enables calling such functions with arguments of type `String`.
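A small sketch of what this enables (`takesSlice` is a hypothetical function):
```lean
-- A function written against `String.Slice`...
def takesSlice (s : String.Slice) : Bool :=
  s.isEmpty

-- ...can now be called directly with a `String`, thanks to the coercion.
#eval takesSlice "hello"  -- false
```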
Closes #11298.
This PR renames `String.ValidPos` to `String.Pos`, `String.endValidPos`
to `String.endPos` and `String.startValidPos` to `String.startPos`.
Accordingly, the deprecations of `String.Pos` to `String.Pos.Raw` and
`String.endPos` to `String.rawEndPos` are removed early, after an
abbreviated deprecation cycle of two releases.
This PR removes the `group` field from option descriptions. It is
unused, does not have a clear meaning and often matches the first
component of the option name.
Given its run time of >2hrs, the job is added as a secondary job for
nightly releases and a primary job for full releases. A new check level
for differentiating between nightlies and full releases is added for
this.
Reactivating LSan will happen in a follow-up PR.
This PR fixes memory being accidentally retained for each document
version in the language server on certain elaboration workloads. The
issue must have existed since 4.18.0.
This PR adds an explicit normalization layer for ring constraints in the
`grind linarith` module. For example, it will be used to clean up
denominators when the ring is a field.
This PR renames the `cutsat` tactic to `lia` for better alignment with
standard terminology in the theorem proving community.
`cutsat` still works but now emits a deprecation warning and suggests
using `lia` instead via "Try this:". Both tactics have identical
behavior.
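For example (illustrative goal; `2 * x + 4 * y` is always even, so it cannot equal `5`):
```lean
example (x y : Int) (h : 2 * x + 4 * y = 5) : False := by
  lia  -- previously spelled `cutsat`, which now warns and suggests `lia`
```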
Co-authored-by: Claude <noreply@anthropic.com>
This PR cleans up the API around `String.find` and moves it uniformly to
the new position types `String.ValidPos` and `String.Slice.Pos`
Overview:
- To search for a character, character predicate, string or slice in a
string or slice `s`, use `s.find?` or `s.find`.
- To do the same, but starting at a position `p` of a string or slice,
use `p.find?` or `p.find`.
- To do the same but between two positions `p` and `q`, construct the
slice from `p` to `q` and then use `find?` or `find` on that.
- To search backwards, all of the above applies, except that the
function is called `revFind?`, there is no non-question-mark version
(use `getD` if there is a sane default return value in your specific
application), and that you can only search for characters and character
predicates, not strings or slices.
This PR ensures that users can provide `grind` proof parameters whose
types are not `forall`-quantified. Examples:
```lean
opaque f : Nat → Nat
axiom le_f (a : Nat) : a ≤ f a
example (a : Nat) : a ≤ f a := by
  grind [le_f a]

example (a b : α) (h : ∀ x y : α, x = y) : a = b := by
  grind [h a b]
```
This PR redefines `front` and `back` on `String` to go through
`String.Slice` and adds the new `String` functions `front?`, `back?`,
`positions`, `chars`, `revPositions`, `revChars`, `byteIterator`,
`revBytes`, `lines`.
This PR adds `CoreM.toIO'`, the analogue of `CoreM.toIO` dropping the
state from the return type, and similarly for `TermElabM.toIO'` and
`MetaM.toIO'`.
This PR introduces a new `grind` option, `funCC` (enabled by default),
which extends congruence closure to *function-valued* equalities. When
`funCC` is enabled, `grind` tracks equalities of **partially applied
functions**, allowing reasoning steps such as:
```lean
a : Nat → Nat
f : (Nat → Nat) → (Nat → Nat)
h : f a = a
⊢ (f a) m = a m

g : Nat → Nat
f : Nat → Nat → Nat
h : f a = g
⊢ f a b = g b
```
Given an application `f a₁ a₂ … aₙ`, when `funCC := true` and function
equality is enabled for `f`, `grind` generates and tracks equalities for
all partial applications:
* `f a₁`
* `f a₁ a₂`
* …
* `f a₁ a₂ … aₙ`
This allows equalities such as `f a₁ = g` to propagate through further
applications.
**When is function equality enabled for a symbol?**
Function equality is enabled for `f` in the following cases:
1. `f` is **not a constant** (e.g., a lambda, a local function, or a
function parameter).
2. `f` is a **structure field projection**, provided the structure is
**not a `class`**.
3. `f` is a constant marked with `@[grind funCC]`
Users can also enable function equality for specific constants in a
single call using:
```lean
grind [funCC f, funCC g]
```
**Examples:**
```lean
example (m : Nat) (a : Nat → Nat) (f : (Nat → Nat) → (Nat → Nat)) (h : f a = a) :
    f a m = a m := by
  grind

example (m : Nat) (a : Nat → Nat) (f : (Nat → Nat) → (Nat → Nat)) (h : f a = a) :
    f a m = a m := by
  fail_if_success grind -funCC -- fails if `funCC` is disabled
  grind
```
```lean
example (a b : Nat) (g : Nat → Nat) (f : Nat → Nat → Nat) (h : f a = g) :
    f a b = g b := by
  grind

example (a b : Nat) (g : Nat → Nat) (f : Nat → Nat → Nat) (h : f a = g) :
    f a b = g b := by
  fail_if_success grind -funCC
  grind
```
**Enabling per-symbol with parameters or attributes**
```lean
opaque f : Nat → Nat → Nat
opaque g : Nat → Nat
example (a b c : Nat) : f a = g → b = c → f a b = g c := by
  grind [funCC f, funCC g]

attribute [grind funCC] f g

example (a b c : Nat) : f a = g → b = c → f a b = g c := by
  grind
```
This feature substantially improves `grind`’s support for higher-order
and partially-applied function equalities, while preserving
compatibility with first-order SMT behavior when `funCC` is disabled.
Closes #11309.
This PR significantly changes the signature of the `ToIterator` type
class. The obtained iterators' state is no longer dependently typed and
is an `outParam` instead of being bundled inside the class. Among other
benefits, `simp` can now rewrite inside of `Slice.toList` and
`Slice.toArray`. The downside is that we lose flexibility. For example,
the former combinator-based implementation of `Subarray`'s iterators is
no longer feasible because the states are dependently typed. Therefore,
this PR provides a hand-written iterator for `Subarray`, which does not
require a dependently typed state and is faster than the previous one.
Converting a family of dependently typed iterators into a simply typed
one using a `Sigma`-state iterator generates prohibitively bad code, so
we do not provide such a combinator. This PR adds a benchmark for this
problem.
This PR improves the support for `Fin n` in `grind` when `n` is not a
numeral.
- `toInt (0 : Fin n) = 0` in `grind lia`.
- `Fin.mk`-applications are treated as interpreted terms in `grind lia`.
- `Fin.val` applications are suppressed from `grind lia`
counterexamples.
This PR fixes a breakage in Lake's TOML test caused by String API
changes. It also removes a JSON parser workaround that has since been
fixed, and it more generally polishes up the code.
This PR fixes an issue affecting `grind -revert`. In this mode, assigned
metavariables in hypotheses were not being instantiated. This issue was
affecting two files in Mathlib.
This PR fixes a bug in local declaration internalization in `grind` that was
exposed when using `grind -revert`. This bug was affecting a `grind`
proof in Mathlib.
This PR makes the specializer (correctly) share more cache keys across
invocations, causing us to produce less code bloat.
We observed that in functions with lots of specialization, sometimes
cache keys are defeq but not BEq because one has unused let decls
(introduced by specialization) that the other doesn't. This PR resolves
this conflict by erasing unused let decls from specializer cache keys.
This PR improves the error message encountered in the case of a type
class instance resolution failure, and adds an error explanation that
discusses the common new-user case of binary operation overloading and
points to the `trace.Meta.synthInstance` option for advanced debugging.
## Example
```lean4
def f (x : String) := x + x
```
Before:
```
failed to synthesize
HAdd String String ?m.5
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
After:
```
failed to synthesize instance of type class
HAdd String String ?m.5
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
Error code: lean.failedToSynthesizeTypeclassInstance
[View explanation](https://lean-lang.org/doc/reference/latest/find/?domain=Manual.errorExplanation&name=lean.failedToSynthesizeTypeclassInstance)
```
The error message is changed in three important ways:
* Explains *what* failed to synthesize, using the "type class"
terminology that's more likely to be recognized than the "instance"
terminology
* Points to the `trace.Meta.synthInstance` option which is otherwise
nearly undiscoverable but is quite powerful (see also
leanprover/reference-manual#663 which is adding commentary on this
option)
* Gives an error explanation link (which won't actually work until the
next release after this is merged) which prioritizes the common-case
explanation of using the wrong binary operation
This PR removes all code that sets the `Option.Decl.group` field, which
is unused and has no clearly documented meaning.
The actual removal of the field would be #11305.
This PR renames the CTests tests to use filenames as test names. So
instead of
```
2080 - leanruntest_issue5767.lean (Failed)
```
we get
```
2080 - tests/lean/run/issue5767.lean (Failed)
```
which allows Ctrl-Click’ing on them in the VSCode terminal.
This PR renames congruence lemmas for union on
`DHashMap`/`HashMap`/`HashSet`/`DTreeMap`/`TreeMap`/`TreeSet` to fit the
convention of being in the `Equiv` namespace.
This PR fixes a bug in the propagation rules for `ite` and `dite` used
in `grind`. The bug prevented equalities from being propagated to the
satellite solvers. Here is an example affected by this issue.
```lean
example
    [LE α] [LT α] [Std.IsLinearOrder α] [Std.LawfulOrderLT α]
    [Lean.Grind.CommRing α] [DecidableLE α] [Lean.Grind.OrderedRing α]
    (a b c : α) :
    (if a - b ≤ -(a - b) then -(a - b) else a - b) ≤
      ((if a - c ≤ -(a - c) then -(a - c) else a - c) + if c - d ≤ -(c - d) then -(c - d) else c - d) +
        if b - d ≤ -(b - d) then -(b - d) else b - d := by
  grind
```
This PR adds support for decidable equality of empty lists and empty
arrays. Decidable equality for lists and arrays is suitably modified so
that all diamonds are definitionally equal.
Following #9302, the strong condition of being definitionally equal under
`with_reducible_and_instances` is tested. This also moves some of the
comments added in #9302 out of docstrings.
---------
Co-authored-by: Aaron Liu <aaronliu2008@outlook.com>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
This PR renames `String.replaceStartEnd` to `String.slice`,
`String.replaceStart` to `String.sliceFrom`, and `String.replaceEnd` to
`String.sliceTo`, and similar for the corresponding functions on
`String.Slice`.
This PR adds several lemmas that relate
`getMin`/`getMin?`/`getMin!`/`getMinD` and insertion to the empty
(D)TreeMap/TreeSet and their extensional variants.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
Global `attribute` commands on non-local declarations are impossible to
track granularly a priori and so should be preserved by `shake` by
default. A new `shake` option could be added to ignore these
dependencies for evaluation.
This PR adds `Std.Slice.Pattern` instances for `p : Char -> Prop` as
long as `DecidablePred p`, to allow things like `"hello".dropWhile (· =
'h')`.
To achieve this, we refactor `ForwardPattern` and friends to be
"non-uniform", i.e., the class is now `ForwardPattern pat`, not
`ForwardPattern ρ` (where `pat : ρ`).
This PR provides an intersection operation for
`ExtDHashMap`/`ExtHashMap`/`ExtHashSet` and proves several lemmas about
it.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR provides intersection on `DTreeMap`/`TreeMap`/`TreeSet` and
proves several lemmas about it.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR adds a function `String.Slice.length` with the following
deprecation message: "There is no constant-time length function on
slices. Use `s.positions.count` instead, or `isEmpty` if you only need
to know whether the slice is empty."
This PR adds the alias `String.Slice.any` for `String.Slice.contains`.
It would probably be even better to only have one, but we don't have a
good mechanism for pointing people looking for one towards the other, so
an alias it is for now.
This PR splits the single grind_lint.lean test (50+ seconds) into 7
separate files that each run in under 7 seconds:
- grind_lint_list.lean (5.7s): List namespace with exceptions
- grind_lint_array.lean (4.6s): Array namespace
- grind_lint_bitvec.lean (3.9s): BitVec namespace with exceptions
- grind_lint_std_hashmap.lean (6.8s): Std hash map/set namespaces
- grind_lint_std_treemap.lean (~6s): Std tree map/set namespaces
- grind_lint_std_misc.lean (~5s): Std.Do, Std.Range, Std.Tactic
- grind_lint_misc.lean (5.5s): All other non-Lean namespaces
Each file maintains complete namespace coverage and preserves all
existing exceptions. The split enables better CI parallelization and
faster feedback.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
This PR implements support for arbitrary `grind` parameters. The feature
is similar to the one available in `simp`, where a proof term is treated
as a local universe-polymorphic lemma. This feature relies on `grind
-revert` (see #11248). For example, users can now write:
```lean
def snd (p : α × β) : β := p.2
theorem snd_eq (a : α) (b : β) : snd (a, b) = b := rfl
/--
trace: [grind.ematch.instance] snd_eq (a + 1): snd (a + 1, Type) = Type
[grind.ematch.instance] snd_eq (a + 1): snd (a + 1, true) = true
-/
#guard_msgs (trace) in
set_option trace.grind.ematch.instance true in
example (a : Nat) : (snd (a + 1, true), snd (a + 1, Type), snd (2, 2)) = (true, Type, snd (2, 2)) := by
  grind [snd_eq (a + 1)]
```
Note that in the example above, `snd_eq` is instantiated only twice, but
with different universe parameters.
As described in #11248, the new feature cannot be used with `grind
+revert`.
This PR marks the automatically generated `sizeOf` theorems as `grind`
theorems.
Closes #11259.
Note: a stage0 update was requested; we need it to be able to solve the
example in the issue above.
```lean
example (a: Nat) (b: Nat): sizeOf a < sizeOf (a, b) := by
  grind
```
This PR fixes several memory leaks in the new `String` API.
These leaks are mostly situations where we forgot to put borrowing
annotations. The single exception is the new `String` constructor
`ofByteArray`. It cannot take the `ByteArray` as a borrowed argument
anymore and must thus free it on its own.
This PR continues the homogenization between matchers and splitters,
following up on #11256. In particular it removes the ambiguity whether
`numParams` includes the `discrEqns` or not.
This PR replaces `MatcherInfo.numAltParams` with a more detailed data
structure that allows us, in particular, to distinguish between an
alternative for a constructor with a `Unit` field and the alternative
for a nullary constructor, where an artificial `Unit` argument is
introduced.
This PR introduces a function `String.split` which is based on
`String.Slice.split` and therefore supports all pattern types and
returns a `Std.Iter String.Slice`.
This supersedes the functions `String.splitOn` and `String.splitToList`,
and we remove all uses of these functions from core. They will be
deprecated in a future PR.
Migrating from `String.splitOn` and `String.splitToList` is easy: we
introduce functions `Iter.toStringList` and `Iter.toStringArray` that
can be used to conveniently go from `Std.Iter String.Slice` to `List
String` and `Array String`, so for example `s.splitOn "foo"` can be
replaced by `s.split "foo" |>.toStringList`.
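A migration sketch based on the description above (using `toStringList` as described in this PR):
```lean
-- before: `String.splitOn` returns `List String`
#eval "a,b,c".splitOn ","
-- after: `String.split` yields an iterator of slices; collect it back into
-- `List String` with `toStringList`
#eval "a,b,c".split "," |>.toStringList
```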
This PR adds a `Unit` assumption to alternatives of the splitter that
would otherwise not have arguments. This fixes #11211.
In practice these argument-less alternatives did not cause wrong
behavior, as the motive when used with `split` is always a function
type. But it is better to be safe here (maybe someone uses splitters in
other ways); it may increase the effectiveness of #10184, and it simplifies
#11220.
The perf impact is insignificant in the grand scheme of things on
stdlib, but the change is effective:
```
~/lean4 $ build/release/stage1/bin/lean tests/lean/run/matchSplitStats.lean
969 splitters found
455 splitters are const defs
~/lean4 $ build/release/stage2/bin/lean tests/lean/run/matchSplitStats.lean
969 splitters found
829 splitters are const defs
```
This PR implements the option `revert`, which is set to `false` by
default. To recover the old `grind` behavior, you should use `grind
+revert`. Previously, `grind` used the `RevSimpIntro` idiom, i.e., it
would revert all hypotheses and then re-introduce them while simplifying
and applying eager `cases`. This idiom created several problems:
* Users reported that `grind` would include unnecessary parameters. See
[here](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Grind.20aggressively.20includes.20local.20hypotheses.2E/near/554887715).
* Unnecessary section variables were also being introduced. See the new
test contributed by Sebastian Graf.
* Finally, it prevented us from supporting arbitrary parameters as we do
in `simp`. In `simp`, I implemented a mechanism that simulates local
universe-polymorphic theorems, but this approach could not be used in
`grind` because there is no mechanism for reverting (and re-introducing)
local universe-polymorphic theorems. Adding such a mechanism would
require substantial work: I would need to modify the local context
object. I considered maintaining a substitution from the original
variables to the new ones, but this is also tricky, because the mapping
would have to be stored in the `grind` goal objects, and it is not just
a simple mapping. After reverting everything, I would need to keep a
sequence of original variables that must be added to the mapping as we
re-introduce them, but eager case splits complicate this quite a bit.
The whole approach felt overly messy.
The new behavior `grind -revert` addresses all these issues. None of the
`grind` proofs in our test suite broke after we fixed the bugs exposed
by the new feature. That said, the traces and counterexamples produced
by `grind` are different. The new proof terms are also different.
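Usage-wise the change is just the flag (trivial goals for illustration):
```lean
example (a b : Nat) (h : a = b) : b = a := by
  grind          -- new default behavior (`-revert`)

example (a b : Nat) (h : a = b) : b = a := by
  grind +revert  -- recover the previous revert-all behavior
```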
This PR introduces a clarifying note to "undefined identifier" error
messages when the undefined identifier is in a syntactic position where
autobinding might generally apply, but where and autobinding is
disabled. A corresponding note is made in the `lean.unknownIdentifier`
error explanation.
The core intended audience for this error message change is "newcomer
who would otherwise be baffled why the thing that works in this Mathlib
project gets 'unknown identifier' errors in this non-Mathlib project."
## Modified behavior
### Example 1
```lean4
set_option autoImplicit true in
set_option relaxedAutoImplicit false in
def thisBreaks (x : α₂) (y : size₂) := ()
```
Before:
```
Unknown identifier `size₂`
```
After:
```
Unknown identifier `size₂`
Note: It is not possible to treat `size₂` as an implicitly bound variable here because it has multiple characters while the `relaxedAutoImplicit` option is set to `false`.
```
### Example 2
```lean4
set_option autoImplicit false in
def thisAlsoBreaks (x : α₃) (y : size₃) := ()
```
Before:
```
Unknown identifier `α₃`
Unknown identifier `size₃`
```
After:
```
Unknown identifier `α₃`
Note: It is not possible to treat `α₃` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
Unknown identifier `size₃`
Note: It is not possible to treat `size₃` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```
## How this works
The elaboration process knows whether it is considering syntax where we
might be able to auto-bind implicits thanks to information in the
`Lean.Elab.Term.Context`.
Before this PR, this contains:
* `autoBoundImplicit`, a boolean that is true when we are considering
syntax that might be able to auto-bind implicits AND when the
`autoImplicit` flag is set to true
* `autoBoundImplicits`, an array of `Expr` variables that we've
autobound
After this PR, this contains:
* `autoBoundImplicitCtx`, an option which is `some` **whenever** we are
considering syntax that might be able to auto-bind implicits, and carries
the array of exprs as well as a copy of the `autoImplicit` flag's value.
(The latter lets us re-implement the `autoBoundImplicit` flag for
backward compatibility.)
Therefore, rather than having access to "elaboration is in an
autobinding context && flag is enabled", it's possible to recover both
of those individual values, and give different information to the user
in cases where we didn't attempt autobinding but would have if different
options had been set.
## Rationale
The revised error message avoids offering much guidance — it doesn't
actively suggest setting the option to a different value or suggest
adding an implicit binding. Care needs to be taken here to make sure
advice is not misleading; as the accepted RFC in #6462 points out, a
substantial portion of autobinding failures are just going to be
misspellings.
I considered and then rejected a code action here that would add a
local `set_option autoImplicit true`. This seems undesirable or
counterproductive — if a project like Mathlib has proactively disabled
`autoImplicit`, it's odd to be pushing local exceptions.
A hint prompting the user to add an implicit binding would be more
proper, but only in certain circumstances — we want to be conservative
in suggesting specific code actions! In a situation like this one, we'd
want to _avoid_ giving the suggestion of adding a `{HasArr}` binding,
which I think either requires tricky heuristics or means we'd want the
elaboration to play through the consequences of auto-binding and make
sure it doesn't cause any follow-on errors before suggesting adding an
implicit binding.
```
set_option autoImplicit true
set_option relaxedAutoImplicit false
instance has_arr : HasArr Preorder := { Arr := Function }
```
Additionally, it seems like it would make the most sense to offer to
auto-bind _all_ the relevant unknown identifiers at once. To avoid being
misleading, this too would seem to require playing through the
consequences of autobinding before being able to safely suggest the
change. This is enough additional complexity that I'm leaving it for
future work.
---------
Co-authored-by: David Thrane Christiansen <david@davidchristiansen.dk>
This PR prevents symbol clashes between (non-`@[export]`) definitions
from different Lean packages.
Previously, if two modules define a function with the same name and were
transitively imported (even privately) by some downstream module,
linking would fail due to a symbol clash. Similarly, if a user defined a
symbol with the same name as one in the `Lean` library, Lean would use
the core symbol even if one did not import `Lean`.
This is solved by changing Lean's name mangling algorithm to include an
optional package identifier. This identifier is provided by Lake via
`--setup` when building a module. This information is woven through the
elaborator, interpreter, and compiler via a persistent environment
extension that associates modules with their package identifier.
With a package identifier, standard symbols have the form
`lp_<pkg-id>_<mangled-def>`. Without one, the old scheme is used (i.e.,
`l_<mangled-def>`). Module initializers are also prefixed with the package
identifier (if any). For example, the initializer for a module `Foo` in
a package `test` is now `initialize_test_Foo` (instead of
`initialize_Foo`). Lake's default for native library names has also been
adjusted accordingly, so that libraries can still, by default, be used
as plugins. Thus, the default library name of the `lean_lib Foo` in
`package test` will now be `libtest_Foo`.
When using Lake to build the Lean core (i.e., `bootstrap = true`), no
package identifier will be used. Thus, definitions in user packages can
never have symbol clashes with core.
Closes #222.
This PR adds a test that covers importing modules defined in multiple
packages.
Currently, the module will be resolved to its first occurrence in its
search order. However, this will soon change, so this test is designed
to capture that behavior.
This PR redefines `String.take` and variants to operate on
`String.Slice`. While previously functions returning a substring of the
input sometimes returned `String` and sometimes returned
`Substring.Raw`, they now uniformly return `String.Slice`.
This is a BREAKING change, because many functions now have a different
return type. So for example, if `s` is a string and `f` is a function
accepting a string, `f (s.drop 1)` will no longer compile because
`s.drop 1` is a `String.Slice`. To fix this, insert a call to `copy` to
restore the old behavior: `f (s.drop 1).copy`.
Of course, in many cases, there will be more efficient options. For
example, don't write `f <| s.drop 1 |>.copy |>.dropEnd 1 |>.copy`, write
`f <| s.drop 1 |>.dropEnd 1 |>.copy` instead. Also, instead of
`(s.drop 1).copy = "Hello"`, write `s.drop 1 == "Hello".toSlice`.
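For instance, a small illustration of the migration (with a hypothetical `f`; the `copy`, `toSlice`, and `==` calls are the ones named above):
```lean
def f (s : String) : Nat := s.length

-- `"hello".drop 1` now yields a `String.Slice`, so copy it back when a `String` is needed:
#eval f ("hello".drop 1).copy          -- 4
-- Or compare on slices directly instead of copying:
#eval "hello".drop 1 == "ello".toSlice -- true
```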
This PR adds `Std.Tricho r`, a typeclass identifying a relation `r` as
trichotomous. It is equivalent to `Std.Antisymm (¬ r · ·)` but is preferred
to it in all cases.
This PR is split from a future PR and adds the function
`String.Pos.next`, an alias (and soon to be correct name) of
`String.ValidPos.next`.
This is for boring bootstrapping reasons.
This PR extracts two modules from `Match.MatchEqs`, in preparation for
#11220
and to use the module system to draw clear boundaries between concerns
here.
This PR registers a node kind for `Lean.Parser.Term.elabToSyntax` in
order to support the `Lean.Elab.Term.elabToSyntax` functionality without
registering a dedicated parser for user-accessible syntax.
This PR adds an intersection operation on `DHashMap`/`HashMap`/`HashSet`
and provides several lemmas about its behaviour.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR removes duplicated instance parameters in the standard library
and flips lemmas of the form `toList_eq_toListIter` into a form that is
suitable for `simp`.
This PR implements `elabToSyntax` for creating scoped syntax `s :
Syntax` for an arbitrary elaborator `el : Option Expr -> TermElabM Expr`
such that `elabTerm s = el`.
Roundtripping example implementing an elaborator imitating `let`:
```lean
elab "lett " decl:letDecl ";" e:term : term <= ty? => do
  let elabE (ty? : Option Expr) : TermElabM Expr := do elabTerm e ty?
  elabToSyntax elabE fun body => do
    elabTerm (← `(let $decl:letDecl; $body)) ty?

#guard lett x := 42; (x + 1) = 43
```
This PR ensures that the `text` argument of `computeArtifact` is always
provided in Lake code, fixing a hashing bug with
`buildArtifactUnlessUpToDate` in the process.
Closes #11209
This PR prevents the match splitter calculation from testing all quadratically
many pairs of alternatives for overlaps, by keeping track of possible
overlaps during matcher calculation, storing that information in the
`MatcherInfo`, and using it during splitter calculation.
This PR fixes fallout of the closure allocator changes in #10982. As far
as we know, this bug only meaningfully manifests in non-default build
configurations without mimalloc, such as:
`cmake --preset release -DUSE_MIMALLOC=OFF`
The issue is that I forgot to update the deallocation functions for
closures. However, this only seems to matter if we disable mimalloc,
which is why this slipped through testing.
This PR provides a polymorphic `ForIn` instance for slices and an MPL
`spec` lemma for the iteration over slices using `for ... in`. It also
provides a version specialized to `Subarray`.
This PR fixes an error message in Lake which suggested incorrect
lakefile syntax.
The error message (which was very helpful by the way) looked like this:
```
error: TwoFX/batteries: package not found on Reservoir.
If the package is on GitHub, you can add a Git source. For example:
require ...
from git "https://github.com/TwoFX/batteries" @ git "main"
or, if using TOML:
[[require]]
git = "https://github.com/TwoFX/batteries"
rev = "main"
...
```
The suggested Lakefile syntax does not work. The correct syntax,
according to the reference manual and according to my tests, is
```
require ...
from git "https://github.com/TwoFX/batteries" @ "main"
```
without the second `git`.
This PR fixes a bug in the LCNF simplifier unearthed while working on
#11078. In some situations caused by `unsafeCast`, the simplifier would
record incorrect information about `cases`, leading to further bugs down
the line.
Suppose we have `v : NonScalar` due to an `unsafeCast` and we run
`cases` on it, expecting `Prod.mk fst snd`. The current code attempts to
record both the arguments from the constructor application in the case
arm (`fst`, `snd`) and the parameters for the type by inspecting the
discriminant `v`. However, `NonScalar` of course does not have any parameters,
causing the simplifier to record wrong information. This patch makes the
`cases` infrastructure more cautious when extracting information from
the type of `v`.
This PR changes how sparse case expressions represent the
none-of-the-above information. Instead of many `x.ctorIdx ≠ i`
hypotheses, it introduces a single `Nat.hasNotBit mask x.ctorIdx`
hypothesis which compresses that information into a bitmask. This avoids
a quadratic overhead during splitter generation, where, for each of the n
assumptions of the splitter alternative, all n assumptions would be refined
through `.subst` and `.cases` constructions.
The definition of `Nat.hasNotBit` uses `Nat.rightShift` which is fiddly
to get to reduce well, especially on open terms and with `Meta.whnf`.
Some experimentation was needed to find proof terms that work, these are
all put together in the `Lean.Meta.HasNotBit` module.
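To make the encoding concrete, here is a minimal sketch of such a predicate (hypothetical; the actual definition in `Lean.Meta.HasNotBit` may be phrased differently, e.g. via `Nat.rightShift`):
```lean
def hasNotBit (mask i : Nat) : Prop :=
  (mask >>> i) % 2 = 0

-- For a sparse split with explicit arms for constructors 0 and 3, the catch-all
-- arm would carry the single hypothesis `hasNotBit 9 x.ctorIdx` (mask 9 = 2^0 + 2^3)
-- instead of the separate hypotheses `x.ctorIdx ≠ 0` and `x.ctorIdx ≠ 3`.
example : hasNotBit 9 2 := rfl
```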
Fixes #11183
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR fixes the `reduceArity` compiler pass to consider
over-applications to functions that have their arity reduced.
Previously, this pass assumed that the number of arguments in
applications was always the same as the number of parameters in the
signature. This is usually true, since the compiler eagerly introduces
parameters as long as the return type is a function type, resulting in a
function whose return type isn't a function type. However, for
dependent types that are sometimes function types and sometimes not,
this assumption is broken, resulting in the additional arguments being
dropped.
Closes #11131
This ensures that no `grind`-annotated theorem, simply by being
instantiated, causes a chain of >20 further instantiations, with a small
list of documented exceptions.
This PR modifies the `try?` framework, so each subsidiary tactic runs
with a separate `maxHeartbeats` budget.
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR has `#grind_lint check` produce a "Try this:" suggestion with
`#grind_lint inspect` commands, as this is usually the next step in
dealing with problematic cases. We also fix the grind pattern for one
theorem, as part of testing the workflow. More to follow.
This PR fixes a few minor issues in the new `Action` framework used in
`grind`. The goal is to eventually delete the old `SearchM`
infrastructure. The main `solve` function used by `grind` is now based
on the `Action` framework. The PR also deletes dead code in `SearchM`.
Creates an inductive data type with 100 constructors, and a function
that matches on half of its constructors, with a catch-all for the other
half, and generates the splitter.
Related to #11183.
This PR renames `Substring` to `Substring.Raw`.
This is to signify its status as a second-class citizen (not deprecated,
but no real plans for verification, like `String.Pos.Raw`) and to free
up the name `Substring` for a possible future type `String.Substring :
String -> Type` so that `s.Substring` is the type of substrings of `s`.
The functions `String.toSubstring` and `String.toSubstring'` will remain
for now for bootstrapping reasons.
This PR implements `try?` using the new `finish?` infrastructure. It
also removes the old tracing infrastructure, which is now obsolete.
Example:
```lean
/--
info: Try these:
[apply] grind
[apply] grind only [findIdx, insert, = mem_indices_of_mem, = getElem?_neg, = getElem?_pos, = HashMap.mem_insert,
= HashMap.getElem_insert, #1bba]
[apply] grind only [findIdx, insert, = mem_indices_of_mem, = getElem?_neg, = getElem?_pos, = HashMap.mem_insert,
= HashMap.getElem_insert]
[apply] grind =>
instantiate only [findIdx, insert, = mem_indices_of_mem]
instantiate only [= getElem?_neg, = getElem?_pos]
cases #1bba
· instantiate only [findIdx]
· instantiate only
instantiate only [= HashMap.mem_insert, = HashMap.getElem_insert]
-/
#guard_msgs in
example (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
  try?
```
This PR modifies the error message that is returned when more than one
synthetic metavariable can't be resolved.
The two heuristics used for prioritization are:
- prefer typeclass problems associated with small ranges over typeclass
problems associated with large ranges (I'm pretty confident in this
heuristic)
- do not prefer typeclass problems over other kinds of errors (not as
confident in this heuristic)
This PR uses the new `grind_pattern` constraints to fix cases where an
unbounded number of theorem instantiations would be generated for
certain theorems in the standard library.
This PR implements `grind_pattern` constraints. They are useful for
controlling theorem instantiation in `grind`. As an example, consider
the following two theorems:
```lean
theorem extract_empty {start stop : Nat} :
    (#[] : Array α).extract start stop = #[] := …

theorem extract_extract {as : Array α} {i j k l : Nat} :
    (as.extract i j).extract k l = as.extract (i + k) (min (i + l) j) := …
```
If both are used for theorem instantiation, an unbounded number of
instances is generated as soon as we add the term `#[].extract i j` to
the `grind` context.
We can now prevent this by adding a `grind_pattern` constraint to
`extract_extract`:
```lean
grind_pattern extract_extract => (as.extract i j).extract k l where
  as =/= #[]
```
With this constraint, only one instance is generated, as expected:
```lean
/-- trace: [grind.ematch.instance] extract_empty: #[].extract i j = #[] -/
#guard_msgs (drop error, trace) in
set_option trace.grind.ematch.instance true in
example (as : Array Nat) (h : #[].extract i j = as) : False := by
  grind only [= extract_empty, usr extract_extract]
```
This PR changes all module build keys in Lake to be scoped by their
package. This enables building modules with the same name in different
packages (something previously only well-supported for executable
roots).
API-wise, the `BuildKey` definitions `module` and `moduleFacet` have
been deprecated and replaced with `packageModule` and
`packageModuleFacet`. The `moduleTargetIndicator` has also been removed
(with its purpose subsumed by `packageModule`).
This PR adds syntax for specifying `grind_pattern` constraints and
extends the `EMatchTheorem` object.
---
Note: We need a manual stage0 update because it affects the .olean
files.
This PR removes most cases where an error message explained that it was
"probably due to metavariables," giving more explanation and a hint.
## Example
```lean
def square x := x * x
```
before:
```
typeclass instance problem is stuck, it is often due to metavariables
HMul ?m.9 ?m.9 (?m.3 x)
```
After:
```
typeclass instance problem is stuck
HMul ?m.9 ?m.9 (?m.3 x)
Note: Lean will not try to resolve this typeclass instance problem because the
first and second type arguments to `HMul` are metavariables. These arguments
must be fully determined before Lean will try to resolve the typeclass.
Hint: Adding type annotations and supplying implicit arguments to functions
can give Lean more information for typeclass resolution. For example, if you
have a variable `x` that you intend to be a `Nat`, but Lean reports it as
having an unresolved type like `?m`, replacing `x` with `(x : Nat)` can get
typeclass resolution un-stuck.
```
In addition to providing a beginner-and-intermediate-friendly explanation
about **why** typeclass instance problems are treated as "stuck" when
metavariables appear in output positions, this PR provides a
potentially valuable improvement even for expert users: it explains
**which of the typeclass arguments are inputs** and therefore need to be
fully specified before typeclass resolution will be attempted. This
information can be tricky to find otherwise.
## Next steps, but probably after this PR
* error explanation
* detecting when the syntactic source is a binop and giving a
special-cased explanation on the binary operators and their associated
typeclasses
* detecting when the syntactic source is a function call, inspecting the
function call's type somewhat, and replacing the generic "replace `x`
with `(x : Nat)`" hint with a specialized "replace `foo` with `foo (tyArg
:= Nat)`" hint
This PR introduces slices of lists that are available via slice notation
(e.g., `xs[1...5]`).
* Moved the `take` combinator and the `List` iterator producer to
`Init`.
* Introduced a `toTake` combinator: `it.toTake` behaves like `it`, but
it has the same type as `it.take n`. There is a constant cost per
iteration compared to `it` itself.
* Introduced `List` slices. Their iterators are defined as
`suffixList.iter.take n` for upper-bounded slices and
`suffixList.iter.toTake` for unbounded ones.
Performance characteristics of using the slice `list[a...b]`:
* when creating it: `O(a)`
* every iterator step: `O(1)`
* `toList`: `O(b - a + 1)` (given that `a <= b`)
Because the slice only stores a suffix of `xs` internally, two slices
can be equal even though the underlying lists differ in an irrelevant
prefix. Because the `stop` field is allowed to be beyond the list's
upper bound, the slices `[1][0...1]` and `[1][0...2]` are not equal,
even though they effectively cover the same range of the same list.
Improving this would require us to call `List.length` when building the
slice, which would iterate through the whole list.
This PR replaces #11138, which just added a `@[csimp]` lemma for
`Int.pow`, this time actually replacing the definition. This means we
not only get fast runtime behaviour, but also take advantage of the special
kernel support for `Nat.pow`.
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR adds tactic and term mode macros for `∎` (typed `\qed`) which
expand to `try?`. The term mode version captures any produced
suggestions and prepends `by`.
Co-authored-by: Claude <noreply@anthropic.com>
This PR removes `simp_all? +suggestions` from `try?` for now. It's
really slow out in Mathlib; too often the suggestions cause `simp` to
loop. Until we have the ability for `try?` to move past a timing-out
tactic (or maybe even until we have parallelism), it needs to be
removed.
Alternatively, we could try modifying `simp` so that e.g. it won't use a
premise more than once. This might help avoid loops, but it would
produce less-reproducible proofs.
Co-authored-by: Claude <noreply@anthropic.com>
This PR ensures that tactics using library suggestions set the caller
field, so the premise selection engine has access to this. We'll later
use this to filter out some modules for grind, which we know have
already been fully annotated.
Co-authored-by: Claude <noreply@anthropic.com>
This PR implements support for `#grind_lint check in module <module>`.
Mathlib does not use namespaces, so we need to restrict the
`#grind_lint` search space using module (prefix) names. Example:
```lean
/--
info: instantiating `Array.filterMap_some` triggers more than 100 additional `grind` theorem instantiations
---
info: Array.filterMap_some
[thm] instances
[thm] Array.filterMap_filterMap ↦ 94
[thm] Array.size_filterMap_le ↦ 5
[thm] Array.filterMap_some ↦ 1
---
info: instantiating `Array.range_succ` triggers 22 additional `grind` theorem instantiations
-/
#guard_msgs in
#grind_lint check (min := 20) in module Init.Data.Array
```
This PR changes the default library suggestions (e.g. for `grind
+suggestions` or `simp_all? +suggestions`) to include the theorems from
the current file in addition to the output of Sine Qua Non.
This PR implements the following improvements to the `#grind_lint`
command:
1. More informative messages when the number of instances exceeds the
minimum threshold.
2. A code action for `#grind_lint inspect` that inserts
`set_option trace.grind.ematch.instance true` whenever the number of
instances exceeds
the minimum threshold.
3. Displaying doc strings for `grind` configuration options in
`#grind_lint`.
4. Improved doc strings for `#grind_lint inspect` and `#grind_lint check`.
Example:
```lean
/--
info: instantiating `Array.filterMap_some` triggers more than 100 additional `grind` theorem instantiations
---
info: Array.filterMap_some
[thm] instances
[thm] Array.filterMap_filterMap ↦ 94
[thm] Array.size_filterMap_le ↦ 5
[thm] Array.filterMap_some ↦ 1
---
info: Try this to display the actual theorem instances:
[apply] set_option trace.grind.ematch.instance true in
#grind_lint inspect Array.filterMap_some
-/
#guard_msgs in
#grind_lint inspect Array.filterMap_some
```
This PR renames `String.Iterator` to `String.Legacy.Iterator`.
From the docstring of `String.Legacy.Iterator`:
> This is a no-longer-supported legacy API that will be removed in a future
> release. You should use `String.ValidPos` instead, which is similar, but
> safer. To iterate over a string `s`, start with `p := s.startValidPos`,
> advance it using `p.next`, access the current character using `p.get`, and
> check if the position is at the end using `p = s.endValidPos` or `p.IsAtEnd`.
This PR adds a test replicating Kim's diamond dependency example.
The top-level package, `D`, depends on two intermediate packages, `B`
and `C`, which each require semantically different versions of another
package, `A`. The portion of `A` that `B` and `C` publicly use is
unchanged across the versions, but they both privately make use of
changed API. Currently, this causes a version clash. This will be made
to work without error later this quarter.
This PR fixes some details in the Markdown renderings of Verso
docstrings, and adds tests to keep them correct. Also adds tests for
Verso docstring metadata.
This PR implements the `#grind_lint` command, a diagnostic tool for
analyzing the behavior of theorems annotated for theorem instantiation.
The command helps identify problematic theorems that produce excessive
or unbounded instance generation during E-matching, which can lead to
performance issues.
The main entry point is:
```
#grind_lint check
```
which analyzes all theorems marked with the `@[grind]` attribute.
For each theorem, it creates an artificial goal and runs `grind`,
collecting statistics about the number of instances produced.
Results are summarized using info messages, and detailed breakdowns are
shown for lemmas exceeding a configurable threshold.
Additional subcommands are provided for targeted inspection and control:
* `#grind_lint inspect thm`: analyzes one or more specific theorems in
detail
* `#grind_lint mute thm`: excludes a theorem from instantiation during
analysis
* `#grind_lint skip thm`: omits a theorem from being analyzed by
`#grind_lint check`
This PR adds a user-extension mechanism for the `try?` tactic. You can
either use the `@[try_suggestion]` attribute on a declaration with
signature ``MVarId -> Try.Info -> MetaM (Array (TSyntax `tactic))`` to
produce suggestions, or the `register_try?_tactic <stx>` command with a
fixed piece of syntax. User-extensions are only tried *after* the
built-in try strategies have been tried and failed.
I wanted to ensure that if the user provides a tactic that produces a
"Try this:" suggestion, we both emit the original tactic and the
suggested replacement (this is what we already do with `grind` and
`simp`). I have this working, but it is quite hacky: we grab the message
log and parse it. I fear this will break when the "Try this:" format is
inevitably changed in the future.
> [!NOTE]
> Adds user-defined suggestion generators for `try?` via
`@[try_suggestion]` and `register_try?_tactic`, executed after built-ins
with priority and double-suggestion handling.
>
> - **Parser/Command**:
> - Add command syntax `register_try?_tactic (priority := n)?
<tacticSeq>` in `Lean.Parser.Command`.
> - **Suggestion registry**:
> - Introduce `@[try_suggestion (prio)]` attribute with a scoped env
extension to register generators (`MVarId → Try.Info → MetaM (Array
(TSyntax `tactic))`).
> - Priority ordering (higher first); supports local/global scope.
> - **Tactic engine (`try?`)**:
> - New unsafe pipeline to collect and run user generators after
built-in tactics; expands nested "Try this" outputs from user tactics.
> - `mkTryEvalSuggestStx` now takes `(goal, info)`; integrates user
tactics as fallback via `attempt_all`.
> - Suppress intermediate "Try this" messages during `evalAndSuggest` by
restoring the message log.
> - **Imports**:
> - Add `meta import Lean.Elab.Command` for command elaboration.
> - **Tests**:
> - `try_register_builtin.lean`: command availability and warning
without import.
> - `try_user_suggestions.lean`: basic, priority, built-in fallback,
double-suggestion, and command registration cases.
> - Update `versoDocMissing.lean.expected.out` to include
`register_try?_tactic` in expected commands.
This PR replaces `Iter(M).size` with `Iter(M).count`. While the
former used a special `IteratorSize` type class, the latter relies on
`IteratorLoop`. The `IteratorSize` class is deprecated. The PR also
renames lemmas about ranges by replacing `_Rcc` with `_rcc`, `_Rco` with
`_rco` (and so on) in names, in order to be more consistent with the
naming convention.
This PR adds a new, inactive and unused `doElem_elab` attribute that
will allow users to register custom elaborators for `doElem`s in the
form of the new type `DoElab`. The old `do` elaborator is active by
default but can be switched off by disabling the new option
`backward.do.legacy`.
This PR fixes a bug in #11125. Added a test this time ...
> [!NOTE]
> Exclude deprecated declarations from library suggestions and add a
test verifying they are filtered out.
>
> - **Library Suggestions**:
> - Update `isDeniedPremise` in `src/Lean/LibrarySuggestions/Basic.lean`
to treat `Lean.Linter.isDeprecated` as denied (`true`), filtering
deprecated constants from suggestions.
> - **Tests**:
> - Add `tests/lean/run/library_suggestions_deprecated.lean` to verify
deprecated theorems (e.g., `deprecatedTheorem`) are not suggested by
`currentFile`, while non-deprecated ones are.
This PR adds iterators and slices for `DTreeMap`/`TreeMap`/`TreeSet`
based on zippers and provides basic lemmas about them.
---------
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR aims to bring the performance of `String.ValidPos` closer to
that of `String.Pos.Raw` by adding/correcting `extern` annotations as
needed.
This is in response to a regression observed after #11127. The changes
to the `String` `Parsec` module led to different compiler behavior for
functions like `strCore` and `natCore`. The new IR *looks* better than
the old IR, but the
[numbers](1e438647ba)
are a bit mixed.
This PR removes all uses of `String.Iterator` from core, preferring
`String.ValidPos` instead.
In an upcoming PR, `String.Iterator` will be renamed to
`String.Legacy.Iterator`.
This PR adds support for `try?` to use induction; it will only perform
induction on inductive types defined in the current namespace and/or
module; so in particular for now it will not induct on built-in
inductives such as `Nat` or `List`.
This is stacked on top of #11132, and there are overlapping changes.
> [!NOTE]
> Adds vanilla induction suggestions to `try?`, updates collection of
inductive candidates, and tests the new behavior on custom inductive
types.
>
> - **Try tactic pipeline**:
> - Add vanilla induction generators (`mkIndStx`, `mkAllIndStx`) that
try `induction <var> <;> …`, with fallback via `expose_names` when
needed.
> - Integrate induction into `mkTryEvalSuggestStx`, alongside existing
atomic, suggestions, and function-induction options.
> - **Collector updates (`Try/Collect.lean`)**:
> - Enhance `checkInductive` to `whnf` the type and use `getAppFn` to
detect inductive heads, populating `indCandidates`.
> - **Tests**:
> - New `tests/lean/run/try_induction.lean` covering suggestions for
`induction` on custom inductives, interaction with `grind`, and
coexistence with `fun_induction`.
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds a `csimp` lemma for faster runtime evaluation of `Int.pow`
in terms of `Nat.pow`.
> [!NOTE]
> Replaces `Int.pow` evaluation with a `@[csimp]` lemma using `Nat.pow`
and adds supporting lemmas (`pow_mul`, `neg_pow`, nonneg results).
>
> - **Performance/runtime**:
> - Introduce `powImp` and `@[csimp]` theorem `pow_eq_powImp` to
evaluate `Int.pow` via `Nat.pow` with sign handling.
> - **Math lemmas (supporting)**:
> - `Int.pow_mul`: `a ^ (n * m) = (a ^ n) ^ m`.
> - `Int.sq_nonnneg`: nonnegativity of `m ^ 2`.
> - `Int.pow_nonneg_of_even`: nonnegativity for even exponents.
> - `Int.neg_pow`: `(-m)^n = (-1)^(n % 2) * m^n`.
This PR adds support for `grind +suggestions` and `simp_all?
+suggestions` in `try?`. It outputs `grind only [X, Y, Z]` or `simp_all
only [X, Y, Z]` suggestions (rather than just `+suggestions`).
Co-authored-by: Claude <noreply@anthropic.com>
This PR fixes disequality propagation for constructor applications in
`grind`. The equivalence class representatives may be distinct
constructor applications, but we must ensure they have the same type.
Examples that were panicking before this PR:
```lean
example (a b : List Nat)
    : a ≍ ([] : List Int) → b ≍ ([1] : List Int) → a = b ∨ p → p := by
  grind
example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → a = b ∨ p → p := by
  grind
example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → b ≍ [(1 : Int)] → a = b ∨ p → p := by
  grind
example (a b : List Nat)
    : a = [] → b = [1] → a = b ∨ p → p := by
  grind
example (a b : List Nat)
    : a = [] → a ≍ ([] : List Int) → b = [1] → a = b ∨ p → p := by
  grind
```
Closes #11124
This PR fixes a typo in the doc string of `List.finIdxOf?`. The first
line of the doc string previously said the function returns the size of
the list if no element is equal to `a`, but both the examples in the doc
string and the actual run-time behavior indicate that it returns `none`
in this case.
Closes #11110
This PR fixes a problem for structures with diamond inheritance: rather
than copying doc-strings (which are not available unless `.server.olean`
is loaded), we link to them. Adds tests.
This PR adds `Job.sync` as a standard way of declaring a synchronous
job.
It makes some non-behavior changes to related Job APIs to improve
compilation.
This PR lets the match compilation procedure use sparse case analysis
when the patterns only match on some but not all constructors of an
inductive type. This way, less code is produced. Before, code handling
each of the other cases was optimized and commoned-up by the later
compilation pipeline, but that was wasteful to do.
In some cases this will prevent Lean from noticing that a match
statement is complete, because it performs less case-splitting for the
unreachable case. In this case, give explicit patterns to perform the
deeper split, with `by contradiction` as the right-hand side.
At least temporarily, there is also the option to disable this behaviour
with
```
set_option backwards.match.sparseCases false
```
This PR removes the `verifyEnum` functions from the bv_decide frontend.
These functions looked at the implementation of matchers to see if they
really do the matching that they claim to do. This breaks that
abstraction barrier, and should not be necessary, as only functions with
a `MatcherInfo` env entry are considered here, which should all play
nicely.
This PR adds `theorem Int.ediv_pow {a b : Int} {n : Nat} (hab : b ∣ a) :
(a / b) ^ n = a ^ n / b ^ n` and related lemmas.
---------
Co-authored-by: Bhavik Mehta <bhavikmehta8@gmail.com>
This PR adds “sparse casesOn” constructions. They are similar to
`.casesOn`, but have arms only for some constructors and a catch-all
(providing `t.ctorIdx ≠ 42` assumptions). The compiler has native
support for these constructions and now (because of the similarity) also
for the per-constructor elimination principles.
This PR ensures that the `denote` functions used to implement
proof-by-reflection terms in `grind` are abbreviations. This change
eliminates the need for the `withAbstractAtoms` gadget.
This PR fixes a panic during equality propagation in the `grind ring`
module. If the maximum number of steps has been reached, the polynomials
may not be fully simplified.
Closes #11073
This PR adds union operations on `DTreeMap`/`TreeMap`/`TreeSet` and their raw
variants and provides lemmas about union operations.
---------
Co-authored-by: Paul Reichert <6992158+datokrat@users.noreply.github.com>
This PR enables Lake users to require Reservoir dependencies by a
semantic version range. On a `lake update`, Lake will fetch the
package's version information from Reservoir and select the newest
version of the package that satisfies the range.
### Using Version Ranges
Version ranges can be specified through the `version` field of a TOML
`require` or the `@` clause of a Lean `require`. They are only
meaningful on Reservoir dependencies.
**lakefile.lean**
```lean
require "Seasawher" / "mdgen" @ "2.*"
```
**lakefile.toml**
```toml
[[require]]
name = "mdgen"
scope = "Seasawher"
version = "2.*"
```
The syntax for these version ranges is a mix of
[Rust's](https://doc.rust-lang.org/stable/cargo/reference/specifying-dependencies.html?highlight=caret#version-requirement-syntax)
and
[Node's](https://github.com/npm/node-semver/tree/v7.7.3?tab=readme-ov-file#ranges)
with some Lean-friendly deviations.
### Comparators
The basic units of semantic version ranges are version comparators. They
take a base version and a comparison operator and match versions which
compare positively with their base. Lake supports the following
comparison operators.
* `<`, `<=` / `≤`, `>`, `>=` / `≥`, `=`, `!=` / `≠`
Unlike Rust and Node, Lake supports Unicode alternatives for the
operators. It also adds the not-equal operator (`!=` / `≠`) to make
excluding broken versions easier.
Comparators can be combined into clauses via conjunction or disjunction:
* **AND clauses**: Rust-style `≥1.2.3, <1.8.0` or Node-style `1.2.3
<1.8.0`
* **OR clauses**: Node-style `1.2.7 || >=1.2.9, <2.0.0`
When the base version of a comparator has a `-` suffix (e.g.,
`>1.2.3-alpha.3`) it will match versions of the same core (`1.2.3`) with
suffixes that lexicographically compare (e.g., `1.2.3-alpha.7` or
`1.2.3-beta.2`) but will not match suffixed versions of different cores
(e.g., `3.4.5-rc5`). An empty `-` suffix can be used to disable this
behavior. For example, `<2.0.0-` will match `1.2.3-beta.2` and
`2.0.0-alpha.1`.
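As an illustration (a hypothetical `lakefile.lean` excerpt reusing the Reservoir `require` syntax shown above; the scope and package name are placeholders), such a clause can be passed directly in the `@` clause:
```lean
-- Hypothetical lakefile.lean excerpt: an OR clause as the version range.
require "Seasawher" / "mdgen" @ "1.2.7 || >=1.2.9, <2.0.0"
```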
### Range Macros
In addition to the basic comparators, Lake also supports standard
shorthand for specifying more complex ranges. Namely, it supports the
caret (`^`) and tilde (`~`) operator along with wildcard ranges.
**Caret Ranges**
* `^1` => `≥1.0.0, <2.0.0-`
* `^1.2` => `≥1.2.0, <2.0.0-`
* `^1.2.3` => `≥1.2.3, <2.0.0-`
* `^1.2.3-beta.2` => `≥1.2.3-beta.2, <2.0.0-`
* `^0.2` => `≥0.2.0, <0.3.0-`
* `^0.2.3` => `≥0.2.3, <0.3.0-`
* `^0.0.3` => `≥0.0.3, <0.0.4-`
* `^0` => `≥0.0.0, <1.0.0-`
* `^0.0` => `≥0.0.0, <0.1.0-`
**Tilde Ranges**
* `~1` => `≥1.0.0, <2.0.0-`
* `~1.2` => `≥1.2.0, <1.3.0-`
* `~1.2.3` => `≥1.2.3, <1.3.0-`
* `~1.2.3-beta.2` => `≥1.2.3-beta.2, <1.3.0-`
* `~0` => `≥0.0.0, <1.0.0-`
* `~0.0` => `≥0.0.0, <0.1.0-`
* `~0.0.0` => `≥0.0.0, <0.1.0-`
**Wildcard Ranges**
* `*` => `≥0.0.0`
* `1.x` => `≥1.0.0, <2.0.0-`
* `1.*.x` => `≥1.0.0, <2.0.0-`
* `1.2.*` => `≥1.2.0, <1.3.0-`
These ranges closely follow Rust's and Node's syntax. Like Node but
unlike Rust, wildcard ranges support `x` and `X` as alternative syntax
for wildcards.
This PR implements `simp? +suggestions`, which uses the configured
library suggestion engine to add relevant theorems to the `simp` call.
`simp +suggestions` without the `?` prints a message asking the user to
add the `?`.
This PR changes Lake's debug build type to use `-O0` instead of `-Og`
when compiling C code. `-Og` was found to be insufficient for debugging
compiled Lean code -- relevant code was still optimized out.
This PR changes `Nat.ble` by joining the two `Nat.ble Nat.zero _` cases
into one, allowing `decide (0 <= x) = true` and `decide (0 < succ x) =
true` to be solvable by `rfl`.
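Restated as code, these are the goals the description says now close by `rfl` (a minimal illustration of the claim):
```lean
example (x : Nat) : decide (0 ≤ x) = true := rfl
example (x : Nat) : decide (0 < Nat.succ x) = true := rfl
```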
This PR fixes `ST.Ref.ptrEq` to act as described in the docs. This fixes
two bugs:
1. The recent `IO.RealWorld` elimination PR overlooked this function
(afaik this is the only one), causing its return value to be generally wrong.
2. The implementation of `ptrEq` would previously always consider two
different cells holding pointer-equal values to be pointer equal. However,
the function is supposed to check whether two `Ref`s are the same cell,
not whether the contained elements are.
This PR implements equality propagation for `Nat` in `grind order`.
`grind order` supports offset equalities for rings, but it has an
adapter for `Nat`. Example:
```lean
example (a b : Nat) (f : Nat → Int) : a ≤ b + 1 → b + 1 ≤ a → f (1 + a) = f (1 + b + 1) := by
  grind -offset -mbtc -lia -linarith (splits := 0)
```
This PR implements (nested term) equality propagation in `grind order`.
That is, it propagates implied equalities from `grind order` back to the
`grind` core. Examples:
```lean
open Lean Grind Std
example [LE α] [IsPartialOrder α] (a b : α) (f : α → Nat) : a ≤ b → b ≤ c → c ≤ a → f a = f b := by
  grind (splits := 0)

example [CommRing α] [LE α] [LT α] [LawfulOrderLT α] [IsPartialOrder α] [OrderedRing α]
    (a b : α) (f : α → Int) : a ≤ b + 1 → b ≤ a - 1 → f a = f (2 + b - 1) := by
  grind -mbtc -lia -linarith (splits := 0)

example (a b : Int) (f : Int → Int) : a ≤ b + 1 → b ≤ a - 1 → f a = f (2 + b - 1) := by
  grind -mbtc -lia -linarith (splits := 0)
```
`prelude-injectivity.lean` was testing injectivity theorem generation for all
inductives in core, including private ones, which could lead to name
clashes that should not be able to occur in actual files. Put it under
the module system so as not to load private decls in the first place.
This PR requires users of the constant folder API to provide proofs of
their algebraic properties, thus hopefully avoiding bugs such as #11042
and #11043 in the future.
This PR fixes a case of overeager constant folding on Nat where the
compiler would mistakenly assume `0 - x = x` (see also #11042 for the
same bug on UInts).
This PR adds a new suggestion to `finish?`. It now generates the `grind`
tactic script as before, and a `finish only` tactic. Example:
```lean
/--
info: Try these:
[apply] ⏎
instantiate only [findIdx, insert, = mem_indices_of_mem]
instantiate only [= getElem?_neg, = getElem?_pos]
cases #1bba
· instantiate only [findIdx]
· instantiate only
instantiate only [= HashMap.mem_insert, = HashMap.getElem_insert]
[apply] finish only [findIdx, insert, = mem_indices_of_mem, = getElem?_neg, = getElem?_pos, = HashMap.mem_insert,
= HashMap.getElem_insert, #1bba]
-/
example (m : IndexMap α β) (a : α) (b : β) :
    (m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
  grind => finish?
```
This PR establishes `String.ofList` and `String.toList` as the preferred
method for converting between strings and lists of characters and
deprecates the alternatives `String.mk`, `List.asString` and
`String.data`.
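For illustration, the preferred conversions now read as follows (expected outputs shown as comments):
```lean
#eval String.ofList ['a', 'b', 'c']  -- "abc"
#eval "abc".toList                   -- ['a', 'b', 'c']
```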
This PR updates the `pr-title` CI check to enforce that the commit
message does not start with a capital letter followed by a non-capital
letter.
This should ensure that messages do not start with a capitalized word,
but allow messages that start with an acronym.
This PR adds a library suggestion engine for local theorems. To be
useful, I still need to write more combinators to re-rank and combine
suggestions from multiple engines.
This is stacked on top of #11029.
This PR changes the terminology used from "premise selection" to
"library suggestions". This will be more understandable to users (we
don't assume anyone is familiar with the premise selection literature),
and avoids a conflict with the existing use of "premise" in Lean
terminology (e.g. "major premise" in induction, as well as generally the
synonym for "hypothesis"/"argument").
This PR improves match compilation: Branch on variables in the order
suggested by the first remaining alternative, and do not branch when the
first remaining alternative does not require it. This fixes
https://github.com/leanprover/lean4/issues/10749. With `set_option
backwards.match.rowMajor false` the old behavior can be turned on.
(For now this is an experiment to get familiar with the code and the
whole
problem domain. It is likely overly naive.)
This PR renames theorems that use `sorted` in their name to instead use
`pairwise`.
Closes #10742.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
This PR improves the detection of situations where we branch multiple
times on the same value in the
code generator. Previously this would only consider repeated branching
on function arguments, now on
arbitrary values.
Closes: #11018
When documenting `smul : M → α → α`, it is unintuitive and confusing to
use variables `a : M` and `b : α`. It is better to use `m : M` and `a : α`.
I also specified the types of all involved terms in the documentation text.
This PR improves join point finding in the compiler through two means:
1. We now correctly handle situations where a function `f` can only become
a join point when a function `g` becomes a join point as well.
2. We introduce a second join point finding pass after specialisation
and before the following
simplification pass, as the specialiser might have introduced new join
point opportunities for
the simplifier to exploit.
Notably, in the code from #10995 we now correctly detect the missing join
point, which required both of these changes to be made.
Closes: #10995
This PR extracts some refactorings from #10763, including dropping dead
code and not failing in `inaccessibleAsCtor`, which leads to (slightly)
better error messages, and also on the grounds that the failing
alternative may actually be unreachable.
This PR makes the eager lambda lifting heuristic more predictable by
blocking it from lifting from
any kind of inlineable function, not just `@[inline]`. It also adapts
the doc-string to describe
what is actually going on.
This PR inlines several Decidable instances for performance reasons.
Unlike the previous #10934, it does not attempt to also simplify the
`Decidable` instance system, as that has proven to have a non-trivial
performance impact.
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR fixes a memleak caused by the Lean-based `IO.waitAny`
implementation by reverting it.
This is the faulty Lean implementation:
```lean
def IO.waitAny (tasks : @& List (Task α)) (h : tasks.length > 0 := by exact Nat.zero_lt_succ _) :
    BaseIO α := do
  have : Nonempty α := ⟨tasks[0].get⟩
  let promise : IO.Promise α ← IO.Promise.new
  tasks.forM <| fun t => BaseIO.chainTask (sync := true) t promise.resolve
  return promise.result!.get
```
In a situation where we call this function repeatedly in a loop with a
pair of tasks `[t1, t2]`, where `t2` is a long-lived task that we pass
every time and `t1` is a fresh, short-lived task, `t2` will accumulate
more and more children from `BaseIO.chainTask` that fill memory over
time. The old C++ implementation did not have this issue, so we are
reverting.
This PR defines `String.Slice.replace` and redefines `String.replace` to
use the `Slice` version.
The new implementation is generic in the pattern, so it supports things
like `"education".replace isVowel "☃!" = "☃!d☃!c☃!t☃!☃!n"`. Since it
uses the `ForwardSearcher` infrastructure, `String` patterns are
searched using KMP, unlike the previous implementation which had
quadratic runtime. As a side effect, the behavior when replacing an
empty string now matches that of most other programming languages,
namely `"abc".replace "" "k" = "kakbkck"`.
This PR fixes the KMP implementation, which did incorrect bookkeeping of
the backtracking process, leading to incorrect starting ranges of
matches.
The new implementation does not require `partial` anywhere.
This PR adds support for specifying anchors to restrict the search space
in `grind` when using `grind only`. Anchors can limit which case splits
are performed and which local lemmas are instantiated.
This PR adds the `set_config` tactic for setting `grind` configuration
options. It uses the same syntax used for setting configuration options
in the `grind` main tactic.
This PR tries to preserve names of pattern variables in match
alternatives in `decreasing_by`, by telescoping into the concrete
alternative rather than the type of the matcher's alt. Fixes #10976.
This PR ensures that searching for an empty string returns the expected
pattern of alternating size-zero matches and size-one rejects.
In particular, splitting by an empty string returns an array formed of
the empty string, all of the string's characters as singleton strings,
followed by another empty string. This matches the [Rust
behavior](https://doc.rust-lang.org/std/primitive.str.html#method.split),
for example.
This PR adds inline annotations to several `Decidable` instances.
Additionally, it removes the `Decidable` instance for `p → q` which is
made redundant by `forall_prop_decidable`.
This PR changes the closure allocator to use the general allocator
instead of the small object one.
This is because users may create closures with a gigantic number of
closed variables, which in turn boosts the size of the closure beyond
the small object threshold.
This issue was uncovered by #10979. Detecting that the small object
threshold is at fault requires
building mimalloc in debug mode at which point it yields:
```
mimalloc: assertion failed: at "/home/henrik/lean4/build/debug/mimalloc/src/mimalloc/src/alloc.c":132, mi_heap_malloc_small_zero
assertion: "size <= MI_SMALL_SIZE_MAX"
```
The generated code at fault here looks as follows:
```c
LEAN_EXPORT lean_object* l_initExec___at___00res_spec__0(lean_object* x_1) {
_start:
{
lean_object* x_2; lean_object* x_3; lean_object* x_4; lean_object* x_5; lean_object* x_6; lean_object* x_7; lean_object* x_8; lean_object* x_9; lean_object* x_10; lean_object* x_11; lean_object* x_12; lean_object* x_13; lean_object* x_14;
x_2 = lean_alloc_closure((void*)(l_initializer_ext___at___00initExec___at___00res_spec__0_spec__0___lam__0___boxed), 3, 0);
x_3 = l_initExec___redArg___closed__0;
x_4 = l_initExec___redArg___closed__1;
x_5 = l_instMonadLiftNonDetT___closed__0;
x_6 = l_initExec___redArg___closed__2;
x_7 = l_initExec___at___00res_spec__0___closed__0;
lean_inc_ref(x_2);
x_8 = lean_alloc_closure((void*)(l_initExec___at___00res_spec__0___lam__29___boxed), 213, 212);
lean_closure_set(x_8, 0, x_3);
lean_closure_set(x_8, 1, x_2);
lean_closure_set(x_8, 2, x_4);
lean_closure_set(x_8, 3, x_3);
lean_closure_set(x_8, 4, x_4);
lean_closure_set(x_8, 5, x_3);
lean_closure_set(x_8, 6, x_4);
lean_closure_set(x_8, 7, x_3);
lean_closure_set(x_8, 8, x_4);
lean_closure_set(x_8, 9, x_3);
lean_closure_set(x_8, 10, x_4);
lean_closure_set(x_8, 11, x_3);
lean_closure_set(x_8, 12, x_4);
lean_closure_set(x_8, 13, x_3);
lean_closure_set(x_8, 14, x_4);
lean_closure_set(x_8, 15, x_5);
lean_closure_set(x_8, 16, x_6);
lean_closure_set(x_8, 17, x_5);
lean_closure_set(x_8, 18, x_5);
lean_closure_set(x_8, 19, x_5);
lean_closure_set(x_8, 20, x_5);
lean_closure_set(x_8, 21, x_5);
lean_closure_set(x_8, 22, x_5);
...
```
The crash happens in `lean_alloc_closure`, where we unconditionally
invoke the small allocator, which cannot cope with closures this large.
Hopefully changing this to the general purpose allocator doesn't have too
much of an impact on performance.
Closes: #10979
This PR adds the basic infrastructure to perform termination proofs
about `String.ValidPos` and `String.Slice.Pos`.
We choose an approach where the intended way to do termination arguments is
to argue about the position itself rather than some projection of it
like `remainingBytes`.
The types `String.ValidPos` and `String.Slice.Pos` are equipped with a
`WellFoundedRelation` instance given by the greater-than relation. This
means that if a function takes a position `p` and performs a recursive
call on `q`, then the decreasing obligation will be `p < q`. This works
well in the common case where `q` is `p.next h`, in which case the goal
`p < p.next h` is solved by the simplifier.
For stepping through a string backwards, we introduce a type synonym
with a `WellFoundedRelation` instance given by the less-than relation.
This means that if a function takes a position `p` and performs a
recursive call on `q` and specifies `termination_by p.down`, then the
decreasing obligation will be `q < p`. This works well in the case where
`q` is `p.prev h`, in which case the goal `p.prev h < p` is solved by
the simplifier.
For termination arguments involving multiple strings, the lower-level
primitive `p.remainingBytes` (landing in `Nat`) is also available.
In a future PR, we will additionally provide the necessary typeclass
instances to register `String.ValidPos` and `String.Slice.Pos` with
`grind` to make complex termination arguments more convenient in user
code.
This PR performs more widening in `ElimDeadBranches` in an attempt to
improve performance in situations with a lot of local precision.
While this is not enough to make compilation instant, it pushes
compilation time from 12s to 3s for the example in #10857 and barely
introduces regressions, so it seems like a good first step in this
direction.
Closes: #10857
This PR implements the following `grind` improvements:
1. `set_option` can now be used to set `grind` configuration options in
the interactive mode.
2. Fixes a bug in the repeated theorem instantiation detection.
3. Adds the macro `use [...]` as a shorthand for `instantiate only
[...]`.
This PR adds the combinator ` · t_1 ... t_n` to the `grind` interactive
mode. The `finish?` tactic now generates scripts using this combinator
to conform to Mathlib coding standards. The new format is also more
compact. Example:
```lean
/--
info: Try this:
[apply] ⏎
instantiate only [= mem_indices_of_mem, insert, = getElem_def]
instantiate only [= getElem?_neg, = getElem?_pos]
cases #f590
· cases #ffdf
· instantiate only
instantiate only [= Array.getElem_set]
· instantiate only
instantiate only [size, = HashMap.mem_insert, = HashMap.getElem_insert, = Array.getElem_push]
· instantiate only [= mem_indices_of_mem, = getElem_def]
instantiate only [usr getElem_indices_lt]
instantiate only [size]
cases #ffdf
· instantiate only [=_ WF]
instantiate only [= getElem?_neg, = getElem?_pos, = Array.getElem_set]
instantiate only [WF']
· instantiate only
instantiate only [= HashMap.mem_insert, = HashMap.getElem_insert, = Array.getElem_push]
-/
#guard_msgs in
example (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
    (m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
  grind => finish?
```
This PR ensures that model-based theory combination in `grind cutsat`
considers nonlinear terms. Nonlinear multiplications such as `x * y` are
treated as uninterpreted symbols in `cutsat`.
Closes #10885
This PR adds support for scientific literals for `Rat` in `grind`.
`grind` does not yet support this kind of literal in arbitrary fields.
Closes #10489
This PR fixes a bug in the equality propagation procedure in
`grind.order`. Specifically, it affects the procedure that asserts
equalities in the `grind` core state that are implied by (ring)
inequalities in the `grind.order` module.
Closes #10622
This is a guard against #10705; if a kernel error is raised when the
return value of this function is eventually checked, it is often
silenced downstream, making it hard to spot the failure.
If we panic here via `assert!`, then the diagnostic cannot be missed.
This PR adds the `mbtc` tactic to the `grind` interactive mode. It
implements model-based theory combination. It also ensures `finish?` is
capable of generating it.
This PR ensures that only solver propagation steps that are necessary to
close the goal appear in the generated tactic script.
This produces more compact proof scripts, but it is not just an
optimization: if we include an unnecessary step, we may fail to replay
the generated script when `cases` steps are pruned using
non-chronological backtracking (NCB). For example, when executing
`finish?`, we may have performed a `cases #<anchor>` step that enabled
`ring` to propagate a new fact. If this fact is not used in the final
proof, and the corresponding `cases #<anchor>` step is pruned by NCB,
the `ring` step will fail during replay.
This PR ensures that `finish?` produces partial tactic scripts
containing `sorry`s.
We may add an option to disable this feature in the future.
It is enabled by default because it provides a useful way to debug
`grind` failures.
This PR topologically sorts abstracted vars in
`Meta.Closure.mkValueTypeClosure` if MVars are being abstracted.
Fixes #10705
---------
Co-authored-by: Eric Wieser <efw@google.com>
This PR fixes a regression introduced by the `ST` redefinition by making
the definition of `ST` as reducible as it was previously. The key issue
is that in the previous state it would reduce to a function, at which
point the monad lifting mechanisms don't kick in in the same fashion
anymore.
This PR fixes another instance of the “default parameter value in
constructor” footgun, which was affecting the `cases` tactic in the
`grind` interactive mode.
This PR adds a warning to `wf_preprocess` that these lemmas can be used
to introduce hidden partiality.
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR uses hashmaps for the symbol lookups in the IR interpreter
instead of the existing rbmaps, thus reducing the constant overhead per
function call.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR fixes name mangling to be unambiguous / injective by adding `00`
for disambiguation where necessary. Additionally, the inverse function,
`Lean.Name.unmangle`, has been added, which can be used to unmangle a
mangled identifier. This unmangler has been added to demonstrate the
injectivity, but also to allow unmangling identifiers, e.g. for debugging
purposes.
Closes #10724
This PR adds support for `grind +premises`, calling the currently
configured premise selection algorithm and including the results as
parameters to `grind`. (Recall that there is not currently a default
premise selector provided by Lean4: you need a downstream premise
selector to make use of this.)
This PR adds a test for depending on two packages which privately import
modules that define the same Lean definition. It verifies the current
behavior of a symbol clash. This behavior will be fixed later this
quarter.
This PR adds a `+lax` configuration option for `grind`, causing it to
ignore parameters referring to non-existent theorems, or to theorems for
which we can't generate a pattern. This allows throwing large sets of
theorems (e.g. from a premise selection engine) into `grind` to see
what happens.
This PR implements the `have <ident>? : <prop>` tactic for the `grind`
interactive mode. The proposition is proved using the default `grind`
search strategy. This tactic is also useful for inspecting or querying
the current `grind` state.
This PR implements parameter optimization for the generated
`instantiate` tactics produced by `finish?`.
We use a simple parameter optimizer that takes two sets as input: the
lower and upper bounds.
The lower bound consists of the theorems actually used in the proof
term, while the upper bound includes all the theorems instantiated in a
particular theorem instantiation step.
The lower bound is often sufficient to replay the proof, but in some
cases, additional theorems must be included because a theorem
instantiation may contribute to the proof by providing terms and may
not be present in the final proof term.
This PR makes minor changes to the MePo premise selection algorithm.
I'm increasingly believing that MePo will not work well in Lean; I've
tried a few things without success. Alistair Geesing's thesis from 2023
had similar conclusions.
My intention is to reach the point where we can properly benchmark premise
selection algorithms before doing any more work here.
This PR optimizes two `String` proofs and makes sure that
`MkIffOfInductiveProp` does not import `Lean.Elab.Tactic`, which
previously pushed it to the very end of the import graph.
This PR splits some low-hanging fruit out of `Init.Data.String.Basic`:
basic material about `String.Pos.Raw`, `String.Substring`, and
`String.Iterator`.
More splitting is required and the remaining material is quite
unorganized, but it's a start.
This PR implements zero cost `BaseIO` by erasing the `IO.RealWorld`
parameter from argument lists and structures. This is a **major breaking
change for FFI**.
Concretely:
- `BaseIO` is defined in terms of `ST IO.RealWorld`
- `EIO` (and thus `IO`) is defined in terms of `EST IO.RealWorld`
- The opaque `Void` type is introduced and the trivial structure
optimization updated to account for it. Furthermore, arguments of type
`Void s` are removed from the argument lists of the C functions.
- `ST` is redefined as `Void s -> ST.Out s a` where `ST.Out` is a pair
of `Void s` and `a`
This together has the following major effects on our generated code:
- Functions that return `BaseIO`/`ST`/`EIO`/`IO`/`EST` now do not take
the dummy world parameter anymore. To account for this FFI code needs to
delete the dummy world parameter from the argument lists.
- Functions that return `BaseIO`/`ST` now return their wrapped value
directly. In particular `BaseIO UInt32` now returns a `uint32_t` instead
of a `lean_object*`. To account for this FFI code might have to change
the return type and does not need to call `lean_io_result_mk_ok` anymore
but can instead just `return` values right away (same with extracting
values from `BaseIO` computations).
- Functions that return `EIO`/`IO`/`EST` now only return the equivalent
of an `Except` node which reduces the allocation size. The
`lean_io_result_mk_ok`/`lean_io_result_mk_error` functions were updated
to account for this already so no change is required.
Besides improving performance by dropping allocations (or reducing their
sizes), we can now also do fun new things such as:
```lean
@[extern "malloc"]
opaque malloc (size : USize) : BaseIO USize
```
This PR renames the cast functions on `String.ValidPos` for `set` and
`modify` to adhere to the established naming convention.
It also fixes two typos and very slightly tweaks the import graph,
shortening the critical path by a negligible amount.
This PR fixes a bug with Lake's cache where revisions were stored at the
incorrect path. Revisions were stored at `<rev>/<pkg>.jsonl` rather than
the correct `<pkg>/<rev>.jsonl`.
This PR fixes a proof instability source in `grind`.
We say a proof is *unstable* if minor changes in the `.lean` file
containing the proof **affect** it.
This PR adds a missing move assignment operator, and deletes the copy
assignment operator.
C++ types should not implement move constructors without also
implementing move assignment. This also ensures that `m_fn` is correctly
emptied after a move, which is not guaranteed by the standard.
This change is also needed to allow `lean::optional` to be eventually
replaced by `std::optional`.
This PR improves the performance of `mvcgen` by an optimized
implementation for `try (mpure_intro; trivial)`. This tactic sequence is
used to eagerly discharge VCs and in the process instantiates schematic
variables.
This PR renames `String.endPos` to `String.rawEndPos`, as in a future
release the name `String.endPos` will be taken by the function that is
currently called `String.endValidPos`.
This PR fixes a bug in `String.Slice.takeWhile` which caused it to get
its bookkeeping wrong and panic. The new version only uses safe
operations on `String.Slice.Pos`.
This PR reduces the number of symbols in our DLLs by cutting open a
linking cycle of the shape
`Environment -> Compiler -> Meta -> Environment`.
This is achieved by introducing a dynamic call to the compiler hidden
behind a `Ref`, as previously done in the pretty printer.
This PR adds a field `isDisplayableTerm` to `TermInfo`, together with a
corresponding parameter in all utility functions that create `TermInfo`;
it can be set to force the language server to render the term in hover
popups.
This PR fixes `input_dir` tracking to also recurse through
subdirectories. The `filter` of an `input_dir` will be applied to each
file in the directory tree (the path names of directories will not be
checked).
Closes #10827.
This PR improves the release automation. We link to CI output for
building the release tag, don't give instructions for bumping downstream
repositories until the release is ready, and improve documentation and
prompts.
This PR lets match compilation use exfalso as soon as no alternatives
are left. This way, the compiler does not have to look at subsequent
case splits.
This PR shows that the iterators returned by `String.Slice.split` and
`String.Slice.splitInclusive` are finite as long as the forward matcher
iterator for the pattern is finite (which we already know for all of our
patterns).
It actually also completely redefines the iterators to avoid the inner
loop in `Internal.nextMatch`, which generates inefficient code. Instead,
when encountering a mismatch from the matcher, we `skip` the split
iterator.
This PR improves the `grind` tactic generated by the `instantiate`
action in tracing mode. It also updates the syntax for the `instantiate`
tactic, making it similar to `simp`. For example:
* `instantiate only [thm1, thm2]` instantiates only theorems `thm1` and
`thm2`.
* `instantiate [thm1, thm2]` instantiates theorems marked with the
`@[grind]` attribute **and** theorems `thm1` and `thm2`.
The action produces `instantiate only [...]` tactics. Example:
```lean
/--
info: Try this:
[apply] ⏎
instantiate only [= Array.getElem_set]
instantiate only [= Array.getElem_set]
-/
#guard_msgs in
example (as bs cs : Array α) (v₁ v₂ : α)
    (i₁ i₂ j : Nat)
    (h₁ : i₁ < as.size)
    (h₂ : bs = as.set i₁ v₁)
    (h₃ : i₂ < bs.size)
    (h₄ : cs = bs.set i₂ v₂)
    (h₅ : i₁ ≠ j ∧ i₂ ≠ j)
    (h₆ : j < cs.size)
    (h₇ : j < as.size) :
    cs[j] = as[j] := by
  grind => finish?
```
Recall that `finish?` replays generated tactics before suggesting them.
The `instantiate` action inspects the generated proof term to decide
which theorems to include as parameters in the `instantiate only [...]`
tactic. However, in some cases, a theorem contributes only by adding a
term to the state. In such cases, the generated tactic cannot be fully
replayed, and the action uses
`instantiate approx [<thms instantiated>]` to indicate which parts of
the tactic script are approximate. The `approx` is just a hint for
users.
This PR implements the `finish?` tactic for the `grind` interactive
mode. When it successfully closes the goal, it produces a code action
that allows the user to close the goal using explicit grind tactic
steps, i.e., without any search. It also makes explicit which solvers
have been used.
This is just the first version; we will add many "bells and whistles"
later. For example, `instantiate` steps will clearly show which theorems
have been instantiated.
Example:
```lean
/--
info: Try this:
[apply] ⏎
cases #b0f4
next => cases #50fc
next => cases #50fc <;> lia
-/
#guard_msgs in
example (p : Nat → Prop) (x y z w : Int) :
    (x = 1 ∨ x = 2) →
    (w = 1 ∨ w = 4) →
    (y = 1 ∨ (∃ x : Nat, y = 3 - x ∧ p x)) →
    (z = 1 ∨ z = 0) → x + y ≤ 6 := by
  grind => finish?
```
The anchors in the generated script are based on stable hash codes.
Moreover, users can hover over them to see the exact term used in the
case split. `grind?` will also be implemented using the new framework.
This PR implements support for `Action` in the `grind` solver extensions
(`SolverExtension`). It also provides the `Solvers.mkAction` function
that constructs an `Action` using all registered solvers. The generated
action is "fair," that is, a solver cannot prevent other solvers from
making progress.
This PR implements infrastructure for evaluating `grind` tactics in the
`GrindM` monad. We are going to use it to check whether auto-generated
tactics can effectively close the original goal.
This PR moves many operations involving `String.Pos.Raw` to the
`String.Pos.Raw` namespace with the eventual aim of freeing up the
`String` namespace to contain operations using `String.ValidPos` (to be
renamed to `String.Pos`) instead.
This PR adds the `String.ValidPos.set` and `String.ValidPos.modify`
functions.
After this PR, `String.pos_lt_eq` is no longer a `simp` lemma. Add
`String.Pos.Raw.lt_iff` as a `simp` lemma if your proofs break.
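If proofs do break, the suggested fix can be applied either per call site or per scope, along these lines:
```lean
-- Either cite the lemma explicitly where needed:
--   simp [String.Pos.Raw.lt_iff]
-- or re-register it as a simp lemma for the current scope:
attribute [local simp] String.Pos.Raw.lt_iff
```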
This PR renames `String.split` to `String.splitToList`, because the name
`String.split` will soon be used by a new implementation that is superior
in that it is polymorphic over the pattern kind and returns an iterator
of slices instead of a list of strings.
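A migration sketch (assuming the renamed function keeps the existing `Char → Bool` signature of `String.split`):
```lean
-- before: "a,b,c".split (· == ',')
#eval "a,b,c".splitToList (· == ',')   -- ["a", "b", "c"]
```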
This PR implements a compact notation for inspecting the `grind` state
in interactive mode. Within a `grind` tactic block, each tactic may
optionally have a suffix of the form `| filter?`.
Examples:
```lean
instantiate | gen > 0 -- Displays terms in the `grind` state after executing `instantiate` with generation greater than zero
```
```lean
instantiate | -- Displays the `grind` state after executing `instantiate`
```
Remark: If the user places the cursor one space before `|`, the state
*before* executing `instantiate` is displayed.
This PR removes the code that was silently displaying the `grind` state
after each tactic step, as it was too noisy.
It also updates the notation for the `first` combinator in the `grind`
tactic mode to avoid conflicts with the new syntax.
This PR changes match compilation to reject some pattern matches that
were previously accepted because inaccessible patterns were sometimes
treated like accessible ones. Fixes #10794.
This PR adds more selectors for TCP and Signals.
It also fixes a problem with `Selector`s: they cannot be closures over a
promise, since that would cause the waiter promise to never be dropped.
This PR introduces the `backward.privateInPublic` option to aid in
porting projects to the module system by temporarily allowing access to
private declarations from the public scope, even across modules. A
warning will be generated by such accesses unless
`backward.privateInPublic.warn` is disabled.
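For example, a project in the middle of porting might enable the option (and, if desired, silence the associated warning) like this:
```lean
set_option backward.privateInPublic true
set_option backward.privateInPublic.warn false  -- suppress the migration warning
```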
This PR fixes `getHexNumSize` to take underscores into account.
Previously, only the number of bytes was counted, making it output 9 for
`1234_abcd` instead of the actual number of digits, which is 8.
The tested situation (the kernel runs into deep recursion but the
elaborator is happy) is not very stable and depends on, for example,
stack size. This test is not worth the hassle.
This PR adds union operations on `DHashMap`/`HashMap`/`HashSet` and
their raw variants and provides lemmas about these union operations.
---------
Co-authored-by: Paul-Lez <paul.lezeau@gmail.com>
Co-authored-by: Markus Himmel <markus@lean-fro.org>
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR adds `.vscode/settings.json` to our `.gitignore`, which allows
Lean 4 developers to set local workspace settings. We already use the
workspace file for settings in core, so this shouldn't cause any
problems.
This PR fixes a bug in the unknown identifier code actions where the
identifiers wouldn't be correctly minimized in nested namespaces. It
also fixes a bug where identifiers would sometimes be minimized to
`[anonymous]`.
The first bug was introduced in #10619.
This PR adds a silent info message with the `grind` state in its
interactive mode. The message is shown only when there is exactly one
goal in the grind interactive mode. The condition is a workaround for
current limitations of our `InfoTree`.
This PR lets match compilation look only at the first remaining
alternative in `processLeaf`. At this point we have no further variables
we can split on, so if the first one isn’t applicable, match compilation
should fail.
This PR fixes a regression introduced by #10307, where hovering the name
of an inductive type or constructor in its own declaration didn't show
the docstring. In the process, a bug in docstring handling for
coinductive types was discovered and also fixed. Tests are added to
prevent the regression from repeating in the future.
This PR ensures that `grind` interactive mode is hygienic. It also adds
tactics for renaming inaccessible names: `rename_i h_1 ... h_n` and
`next h_1 ... h_n => ..`, and `expose_names` for automatically generated
tactic scripts. The PR also adds helper functions for implementing
case-split actions.
This PR improves the scripts assisting with cutting Lean releases (by
reporting CI status of open PRs, and adding documentation), and adds a
`.claude/commands/release.md` prompt file so Claude can assist.
This PR introduces a no-op version of `Shrink`, a type that should allow
shrinking small types into smaller universes given a proof that the type
is small enough, and uses it in the iterator library. Because this type
would require special compiler support, the current version is just a
wrapper around the inner type so that the wrapper is equivalent, but not
definitionally equivalent.
While `Shrink` is unable to shrink universes right now, introducing
it now will allow us to generalize the universes in the iterator library
with fewer breaking changes once an actual `Shrink` is possible.
This PR fixes a bug in combination with VS Code where Lean code that
looks like CSS color codes would display a color picker decoration.
VS Code displays this decoration by default for all languages, not just
CSS. Due to https://github.com/microsoft/vscode/issues/91533, this
setting cannot be disabled in the client on a per-language basis.
However, we can override the default behavior by providing a color
provider of our own. This PR implements an empty color provider to
override the VS Code one.
This PR adds support for interactivity to the combined "try this"
messages that were introduced in #9966. In doing so, it moves the link
to apply a suggestion to a separate `[apply]` button in front of the
suggestion. Hints with diffs remain unchanged, as they did not
previously support interacting with terms in the diff, either.
<img width="379" height="256" alt="Suggestion with interactive message"
src="https://github.com/user-attachments/assets/7838ebf6-0613-46e7-bc88-468a05acbf51"
/>
This PR restores the change in #8656, which removed `autoImplicit =
false` from the default lake template (per previous discussions linked
there). This was accidentally reverted in #8866.
This PR fixes a performance regression introduced in #10518. More
generally, it ensures both message log and info state are per-command,
which has been the case in practice ever since the asynchronous language
driver was introduced.
This PR fixes a bug where partially up-to-date files built with `--old`
could be stored in the cache as fully up-to-date. Such files are no
longer cached. In addition, builds without traces now only perform a
modification time check with `--old`. Otherwise, they are considered
out-of-date.
This PR changes Lake's remote cache interface to scope cache outputs
by toolchain and/or platform where useful.
Packages that set `platformIndependent = true` will not be scoped by
platform and the core build (i.e., `bootstrap = true`) will not be
scoped by toolchain. Lake's detected platform and toolchain can be
overridden with the new `--platform` and `--toolchain` options to `cache
get` and `cache put`.
Lake no longer accepts the `--scope` option when using `cache get` with
Reservoir. The `--repo` option must be used instead.
This PR follows up on #10606 and creates equational theorems uniformly
from the unfold theorem; there is now only one handler registered in
`registerGetEqnsFn`.
For now we keep `registerGetEqnsFn`, because it’s used by mathlib’s
`irreducible_def`, but I’d like to get rid of it in the long term,
relying only on `registerGetUnfoldEqnFn` for constructions that should
unfold differently.
This PR improves the `ac`, `linarith`, `lia`, and `ring` tactics in the
`grind` interactive mode. They now fail if no progress has been made.
They also generate an info message with counterexample/basis if the goal
was not closed.
This PR provides range support for the signed finite number types
`Int{8,16,32,64}` and `ISize`. The proof obligations are handled by
reducing all of them to proofs about an internal `UpwardEnumerable`
instance for `BitVec` interpreted as signed numbers.
This PR changes where errors are displayed when trying to use the
`coinductive` keyword when targeting things that do not live in `Prop`.
Instead of displaying the error above the first element of the mutual
block, it is displayed above the erroneous definition.
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR removes support for reducible well-founded recursion, a breaking
change. Using `@[semireducible]` on a definition by well-founded
recursion now prints a warning that this is no longer effective.
With the upcoming module system, proofs are often not available. With
this change, we remove a fringe use case that may require proofs and
that would not be supported under the module system anyway.
At least for now, direct use of `WellFounded.fix` is not affected.
This fixes #5192.
This PR re-enables semantic tokens for Verso docstrings, after a prior
change accidentally disabled them. It also adds a test to prevent this
from happening again.
In the process, it became clear that there was a bug. The highlighting
strategy led to overlapping but not identical tokens, but the code had
previously assumed that this couldn't happen at the delta-encoding step.
So this PR additionally replaces the removal of duplicate tokens with
priority-based handling of overlapping tokens.
---------
Co-authored-by: Marc Huisinga <mhuisi@protonmail.com>
This PR adds the following tactics to the `grind` interactive mode:
- `focus <grind_tac_seq>`
- `next => <grind_tac_seq>`
- `any_goals <grind_tac_seq>`
- `all_goals <grind_tac_seq>`
- `grind_tac <;> grind_tac`
- `cases <anchor>`
- `tactic => <tac_seq>`
Example:
```lean
def g (as : List Nat) :=
  match as with
  | [] => 1
  | [_] => 2
  | _::_::_ => 3

example : g bs = 1 → g as ≠ 0 := by
  grind [g.eq_def] =>
    instantiate
    cases #ec88
    next => instantiate
    next => finish
    tactic =>
      rw [h_2] at h_1
      simp [g] at h_1
```
This PR enforces rules around arithmetic of `String.Pos.Raw`.
Specifically, it adopts the following conventions:
- Byte indices ("ordinals") in strings should be represented using
`String.Pos.Raw`
- Amounts of bytes ("cardinals") in strings should be represented using
`Nat`.
For example, `String.Slice.utf8ByteSize` now returns `Nat` instead of
`String.Pos.Raw`, and there is a new function `String.Slice.rawEndPos`.
Finally, the `HAdd` and `HSub` instances for `String.Pos.Raw` are
reorganized. This is a **breaking change**.
The `HAdd/HSub String.Pos.Raw String.Pos.Raw String.Pos.Raw` instances
have been removed. For the use case of tracking positions relative to
some other position, we instead provide `offsetBy` and `unoffsetBy`
functions. For the use case of advancing/unadvancing a position by an
arbitrary number of bytes, we instead provide `increaseBy` and
`decreaseBy` functions. For
offsetting/unoffsetting/advancing/unadvancing a position `p` by the size
of a string `s` (resp. character `c`), use `s + p`/`p - s`/`p + s`/`p -
s` (resp. `c + p`/`p - c`/`p + c`/`p - c`).
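As a migration sketch (the exact signature is an assumption based on the description above; `increaseBy` is taken to accept the byte count as a `Nat`):
```lean
example (p : String.Pos.Raw) : String.Pos.Raw :=
  -- previously something like `p + ⟨3⟩`; now advance by a byte count instead
  p.increaseBy 3
```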
This PR re-enables the "experimental" warning for `mvcgen` by changing
its default. The official release has been postponed to justify small
breaking changes in the semantic foundations in the near future.
This PR adds a new helper parser for implementing parsers that contain
hexadecimal numbers. We are going to use it to implement anchors in the
`grind` interactive mode.
This PR fixes the Timer API selector so that it finishes as soon as
possible when unregistered. The change makes the `Selectable.one`
function drop the `selectables` array as early as possible, so that,
when combined with finalizers that have side effects like the TCP socket
finalizer, those finalizers run promptly.
This PR fixes several causes of test flakiness and re-enables the tests
that were disabled in #10665, #10669 and #10673.
Specifically, it fixes:
- A race condition in the file worker that caused it to report an
incomplete snapshot prefix in the inlay hint request (confirmed to be
the cause of #10665)
- A bug in the test runner where it didn't correctly account for
non-deterministic message ordering inducing different RPC pointer
numbering (confirmed to be the cause of #10673)
- A race condition in the watchdog that would sometimes cause the module
hierarchy to be empty (likely the cause of #10669, but not confirmed as
this issue only reproduced again once in tens of thousands of test runs
on various machines, including CI)
- An unrelated bug in the module hierarchy implementation that would
cause it to report an empty module hierarchy when the file was changed
It also replaces some calls to `Task.get` in the language server with
`IO.wait` to protect the code against unfortunate compiler re-ordering.
This PR adds auto-completion for identifiers after `end`. It also fixes
a bug where completion in the whitespace after `set_option` would not
yield the full option list.
Closes #3885.
### Breaking changes
The `«end»` syntax is adjusted to take an `identWithPartialTrailingDot`
instead of an `ident`.
This PR adds the `USE_LAKE_CACHE` option to the core CMake build
(defaults to `OFF`). When enabled, the Lake artifact cache will be
enabled (via `enableArtifactCache`) for stage 1 builds (which includes
interactive use).
This PR implements *anchors* (also known as stable hash codes) for
referencing terms occurring in a `grind` goal. It also introduces the
commands `show_splits` and `show_state`. The former displays the anchors
for candidate case splits in the current `grind` goal.
This PR adds a new `allowImportAll` configuration option for packages
and libraries. When enabled by an upstream package or library,
downstream packages will be able to `import all` modules of that package
or library. This enables package authors to selectively choose which
`private` elements, if any, downstream packages may have access to.
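A sketch of how an upstream package might opt in (the exact placement of the field in the Lake DSL is an assumption):
```lean
-- lakefile.lean of the upstream package
package mypkg where
  allowImportAll := true
```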
This PR adds the `have` tactic for the `grind` interactive mode.
Example:
```lean
example {a b c d e : Nat}
    : a > 0 → b > 0 → 2*c + e <= 2 → e = d + 1 → a*b + 2 > 2*c + d := by
  grind =>
    have : a*b > 0 := Nat.mul_pos h h_1
    lia
```
This PR adds lemmas `forall_fin_zero` and `exists_fin_zero`. It also
marks lemmas `forall_fin_zero`, `forall_fin_one`, `forall_fin_two`,
`exists_fin_zero`, `exists_fin_one`, `exists_fin_two` with the `simp`
attribute.
Closes #10629
This PR introduces a `coinductive` keyword that can be used to define
coinductive predicates via a syntax identical to that of the `inductive`
keyword. The machinery relies on the implementation of elaboration of
inductive types: it extracts an endomap on the appropriate space of
predicates from the definition, which is then fed to `PartialFixpoint`.
Upon elaborating definitions, all the constructors are declared through
automatically generated lemmas.
For example, an infinite sequence of transitions in a relation can be
given by the following:
```lean4
section
variable (α : Type)
coinductive infSeq (r : α → α → Prop) : α → Prop where
| step : r a b → infSeq r b → infSeq r a
/--
info: infSeq.coinduct (α : Type) (r : α → α → Prop) (pred : α → Prop) (hyp : ∀ (x : α), pred x → ∃ b, r x b ∧ pred b)
(x✝ : α) : pred x✝ → infSeq α r x✝
-/
#guard_msgs in
#check infSeq.coinduct
/--
info: infSeq.step (α : Type) (r : α → α → Prop) {a b : α} : r a b → infSeq α r b → infSeq α r a
-/
#guard_msgs in
#check infSeq.step
end
```
The machinery also supports `mutual` blocks, as well as mixing inductive
and coinductive predicate definitions:
```lean4
mutual
coinductive tick : Prop where
| mk : ¬tock → tick
inductive tock : Prop where
| mk : ¬tick → tock
end
/--
info: tick.mutual_induct (pred_1 pred_2 : Prop) (hyp_1 : pred_1 → pred_2 → False) (hyp_2 : (pred_1 → False) → pred_2) :
(pred_1 → tick) ∧ (tock → pred_2)
-/
#guard_msgs in
#check tick.mutual_induct
```
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR renames `Nat.and_distrib_right` to `Nat.and_or_distrib_right`.
This is to make the name consistent with other theorems in the same file
(e.g. `Nat.and_or_distrib_left`).
This PR changes how Lean proves the equational theorems for structural
recursion. The core idea is to let-bind the `f` argument to `brecOn` and
to rewrite `.brecOn` with an unfolding theorem. This means no extra case
split for the `.rec` in `.brecOn` is needed, and `simp` doesn't change
the `f` argument which can break the definitional equality with the
defined function. With this, we can prove the unfolding theorem first,
and derive the equational theorems from that, like for all other ways of
defining recursive functions.
This backs out the changes from #10415; the old strategy works well with
the new goals.
Fixes #5667, #10431, #10195, and #2962.
This PR fixes a broken link to the Firefox profile definitions in one of
the comments.
The `profile.js` file was renamed to `profile.ts` while the rest of the
URL remained the same.
This PR adds the `StreamMap` type, which enables multiplexing of
asynchronous streams.
This PR depends on: #10366, #10367 and #10370.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
This PR fixes an oversight in the RC insertion phase in the code
generator.
If the code generator encounters a `let` that is unused (which is
perfectly reasonable, as at this phase we are in an impure IR and as such
allow side effects to happen, so we cannot remove all unused `let`s), it
didn't insert a `dec` instruction for this variable. This has previously
gone unnoticed because at this point in the compiler essentially all
unused lets have already been removed. However, with the upcoming
`IO`/`ST` token erasure, they will be very frequent.
This PR renames some declarations in the range API for better
consistency and readability. For example,
`UpwardEnumerable.succMany?_succ?` is now called `succMany?_add_one`, in
order to (a) correct the erroneous use of `succ?` instead of `succ`
(=`Nat.succ`) and (b) distinguish the successor of natural numbers
(`add_one`) from the successor of the upward-enumerable type (`succ?` or
`succ`).
This PR adds the `IO.FS.hardLink` function, which can be used to create
hard links.
This is implemented via libuv's `uv_fs_link` function.
Lake hopes to make use of this function to decrease the storage cost of
restoring artifacts.
This PR also fixes some C implementation issues found in nearby similar
functions.
This PR adds a multi-consumer, multi-producer channel to Std.Sync.
This PR depends on: #10366, #10367 and #10370.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
This PR implements the basic tactics for the new `grind` interactive
mode. While many additional `grind` tactics will be added later, the
foundational framework is already operational. The following `grind`
tactics are currently implemented: `skip`, `done`, `finish`, `lia`, and
`ring`.
This PR also removes the notion of `grind` fallback procedure since it
is subsumed by the new framework. Examples:
```lean
example (x y : Nat) : x ≥ y + 1 → x > 0 := by
  grind => skip; lia; done

open Lean Grind
example [CommRing α] (a b c : α)
    : a + b + c = 3 →
      a^2 + b^2 + c^2 = 5 →
      a^3 + b^3 + c^3 = 7 →
      a^4 + b^4 + c^4 = 9 := by
  grind => ring
```
This PR records extra mod uses that previously caused incorrect
"unnecessary import" reports from shake.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR fixes one potential source of inlay hint flakiness.
In the old `IO.waitAny` implementation, we could rely on the fact that
if all tasks in the list were finished, `IO.waitAny` would pick the
first finished one. In the new implementation (#9732), this isn't the
case anymore for fairness reasons, but this also means that in
`IO.AsyncList.getFinishedPrefixWithTimeout`, it can happen that we don't
scan the full finished command snapshot prefix because we pick the
timeout task before the finished snapshot task. This is likely the cause
of a flaky test failure
[here](https://github.com/leanprover/lean4/actions/runs/18215430028/job/51863870111),
where the inlay hint test yielded no result (the timeout task has an
edit delay of 0ms in the first inlay hint request that is emitted,
finishes immediately and can thus immediately cause the finished prefix
to be skipped with the new `waitAny` implementation).
This PR fixes this issue by adding a `hasFinished` check before the
`waitAny` to ensure that we always scan the finished prefix and don't
need to rely on a brittle invariant that doesn't hold anymore. It also
converts some `Task.get`s to `IO.wait` for safety so that the compiler
can't re-order them.
This PR disables `{name}` suggestions for `.anonymous` and adds syntax
suggestions.
When the provided name can't be resolved, the `{name}` role suggests
fully-qualified variants. But if the name was a syntax error, it
attempted to suggest names that had `.anonymous` as a suffix; the
resulting list of suggestions, containing all names in Lean's
environment, overloaded the language server.
This PR "monomorphizes" the structure `Std.PRange shape α`, replacing it
with nine distinct structures `Std.Rcc`, `Std.Rco`, `Std.Rci` etc., one
for each possible shape of a range's bounds. This change was necessary
because the shape polymorphism is detrimental to automation.
**BREAKING CHANGE:** While range/slice notation itself is unchanged,
this essentially breaks the entire remaining (polymorphic) range and
slice API except for the dot notation (`toList`, `iter`, ...). It is not
possible to deprecate old declarations that were formulated in a
shape-polymorphic way that is no longer available.
This PR explicitly tries to synthesize synthetic MVars in `mspec`. Doing
so resolves a bug triggered by use of the loop invariant lemma for
`Std.PRange`.
This PR exposes the definitions about `Int*`. The main reason is that
the `SInt` simprocs require many of them to be exposed. Furthermore,
`decide` now works with `Int*` operations. This fixes #10631.
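For example, goals over the fixed-width signed integer types can now be closed by `decide`:
```lean
example : (2 : Int8) * 3 = 6 := by decide
example : (-1 : Int16) < 0 := by decide
```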
This PR defines `ByteArray.validateUTF8`, uses it to show that
`ByteArray.IsValidUtf8` is decidable and redefines `String.fromUTF8` and
friends to use it.
The functions `String.validateUTF8` and `String.utf8DecodeChar?` are
deprecated in favor of the identically named functions in the
`ByteArray` namespace.
This PR reduces the aggressiveness of the dead let eliminator from
lambda RC.
The motivation for this is that all other passes in lambda RC respect
impurity, but the dead let eliminator still operates under the assumption
of purity. There are a couple of motivations for keeping the dead let
eliminator:
- unused projections introduced by the ToIR translation
- the elim dead branch pass introducing new opportunities
- closed term extraction introducing new opportunities
This PR significantly improves the test coverage of the language server,
providing at least a single basic test for every request that is used by
the client. It also implements infrastructure for testing all of these
requests, e.g. the ability to run interactive tests in a project context,
and refactors the interactive test runner to be more maintainable.
Finally, it also fixes a small bug with the recently implemented unknown
identifier code actions for auto-implicits (#10442) that was discovered
in testing, where the "import all unambiguous unknown identifiers" code
action didn't work correctly on auto-implicit identifiers.
This PR alters the Lake directory detection so that the core build
(i.e., `bootstrap = true`) is stored in the user cache directory (if
available) and never in a toolchain-specific directory.
It also fixes some issues with cache environment configuration
discovered along the way.
This PR switches the core build Lake configuration file to use
`libPrefixOnWindows` rather than a CMake hack.
It also removes some dead TOML variables from the CMake configuration.
This PR cuts some edges from the import graph.
Specifically:
- `TreeMap` and `HashMap` no longer depend on `String`, so now the
expensive things are all in parallel instead of partially in sequence
- `Omega` no longer relies on `List` lemmas
- The section of the import graph between `Init.Omega` and
`Init.Data.Bitvec.Lemmas` is cleaned up a bit
This PR fixes a bug in the unknown identifier code actions where it
would yield nonsensical suggestions for nested `open` declarations like
`open Foo.Bar`.
This PR removes superfluous `Monad` instances from the spec lemmas of
the `MonadExceptOf` lifting framework.
It also adds a bit of documentation and more tracing to `mvcgen`.
Fixes #10564.
This PR ensures that even if a type is marked as `irreducible`, the
compiler can see through it in order to discover functions hidden behind
type aliases.
This PR fixes a bad error message due to elaborating partial syntax with
Verso docstrings.
When elaborating partial syntax, the elaborator sometimes attempts to
add a docstring for a declaration that it didn't parse a name for. The
name defaults to anonymous, but inserting the docs for the anonymous
name throws a panic about being on the wrong async branch.
With this change, the reported error is the expected parser error
instead, which is much friendlier.
This PR adds infrastructure for the upcoming `grind` tactic mode, which
will be similar to the `conv` mode. The goal is to extend `grind` from a
terminal tactic into an interactive mode: `grind => …`.
It will serve as the foundation for `ungrind`, the process of converting
an expensive (and potentially fragile) `grind` proof into a robust
script. This mode will include tactics for expensive reasoning steps
such as cutsat model-based search, Gröbner basis computation,
E-matching, case splits, and more.
It will also provide robust, succinct references to facts and terms:
labels, structural matches, and anchors (e.g., `#abcd`).
This PR adds the necessary infrastructure for recording elaboration
dependencies that may not be apparent from the resulting environment,
such as notations and other metaprograms. An adapted version of `shake`
from Mathlib is added to `script/` but may be moved to another location
or repo in the future.
This PR implements support for negative constraints in `grind order`.
Examples:
```lean
open Lean Grind
example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsLinearPreorder α]
    (a b c d : α) : a ≤ b → ¬ (c ≤ b) → ¬ (d ≤ c) → d < a → False := by
  grind -linarith (splits := 0)

example [LE α] [Std.IsLinearPreorder α]
    (a b c d : α) : a ≤ b → ¬ (c ≤ b) → ¬ (d ≤ c) → ¬ (a ≤ d) → False := by
  grind -linarith (splits := 0)

example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsLinearPreorder α] [CommRing α] [OrderedRing α]
    (a b c d : α) : a - b ≤ 5 → ¬ (c ≤ b) → ¬ (d ≤ c + 2) → d ≤ a - 8 → False := by
  grind -linarith (splits := 0)
```
This PR implements support for positive constraints in `grind order`.
The new module can already solve problems such as:
```lean
example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α]
    (a b c : α) : a ≤ b → b ≤ c → c < a → False := by
  grind

example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α]
    (a b c d : α) : a ≤ b → b ≤ c → c < d → d ≤ a → False := by
  grind

example [LE α] [Std.IsPreorder α]
    (a b c : α) : a ≤ b → b ≤ c → a ≤ c := by
  grind

example [LE α] [Std.IsPreorder α]
    (a b c d : α) : a ≤ b → b ≤ c → c ≤ d → a ≤ d := by
  grind
```
It also generalizes support for offset constraints in `grind` to rings.
The new module implements theory propagation and reduces the number of
case splits required to solve problems:
```lean
example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [Ring α] [OrderedRing α]
    (a b : α) : a ≤ 5 → b ≤ 8 → a > 6 ∨ b > 10 → False := by
  grind -linarith (splits := 0)

example [LE α] [LT α] [Std.LawfulOrderLT α] [Std.IsPreorder α] [CommRing α] [OrderedRing α]
    (a b c : α) : a + b*c + 2*c ≤ 5 → a + c > 5 - c - c*b → False := by
  grind -linarith (splits := 0)

example (a b : Int) (h : a + b > 5) : (if a + b ≤ 0 then b else a) = a := by
  grind -linarith -cutsat (splits := 0)
```
We still need to implement support for negated constraints.
This PR implements the function for adding new edges to the graph used
by `grind order`. The graph maintains the transitive closure of all
asserted constraints.
This PR ensures that Lake only receives recognized CMake build types
from CMake. This fixes an issue with #10581 which broke the
`RelWithAssert` build.
This PR makes Lake no longer error if build outputs found in a trace
file (or in the artifact cache) are ill-formed.
This was causing a problem with the CI cache and is just generally too
strict.
This PR alters `libPrefixOnWindows` behavior to add the `lib` prefix to
the library's `libName` rather than just the file path. This means that
Lake's `-l` will now have the prefix on Windows. While this should not
matter to a MSYS2 build (which accepts both `lib`-prefixed and
unprefixed variants), it should ensure compatibility with MSVC (if that
is ever an issue).
This PR invalidates the CI cache for the Linux Lake build job by bumping
the version of the CI cache key.
The CI cache is broken due to a change in the output format in build
traces. This will be fixed in #10586, but this should prevent further
breakages of PRs in the meantime.
This PR adds a new package configuration option: `restoreAllArtifacts`.
When set to `true` and the Lake local artifact cache is enabled, Lake
will copy all cached artifacts into the build directory. This ensures
they are available for external consumers who expect build results to be
in the build directory.
This PR enables Reservoir packages to be required as dependencies at a
specific package version (i.e., the `version` specified in the package's
configuration file).
This file is essentially just for me and can cause problems with the
language server, so I have removed it from the committed code (and left
an ignored version on my own setup).
* Wrap proof subterms in `by exact` so dependencies can be demoted to
private `import`s
* Remove trivial instance re-definitions that may cause name collisions
on import changes
* Remove unused `open`s that may fail on import removals
This PR ensures that `SPred` proof mode tactics such as `mspec`,
`mintro`, etc. immediately replace the main goal when entering the proof
mode. This prevents `No goals to be solved` errors.
This PR ensures private declarations are accessible from the private
scope iff they are local or imported through an `import all` chain,
including for anonymous notation and structure instance notation.
This PR adds support for case-label-like syntax in `mvcgen invariants`
in order to refer to inaccessible names. Example:
```lean
def copy (l : List Nat) : Id (Array Nat) := do
  let mut acc := #[]
  for x in l do
    acc := acc.push x
  return acc

theorem copy_labelled_invariants (l : List Nat) : ⦃⌜True⌝⦄ copy l ⦃⇓ r => ⌜r = l.toArray⌝⦄ := by
  mvcgen [copy] invariants
  | inv1 acc => ⇓ ⟨xs, letMuts⟩ => ⌜acc = l.toArray⌝
  with admit
```
This PR improves `mvcgen invariants?` to suggest concrete invariants
based on how invariants are used in VCs.
These suggestions are intentionally simplistic and boil down to "this
holds at the start of the loop and this must hold at the end of the
loop":
```lean
def mySum (l : List Nat) : Nat := Id.run do
  let mut acc := 0
  for x in l do
    acc := acc + x
  return acc

/--
info: Try this:
invariants
· ⇓⟨xs, letMuts⟩ => ⌜xs.prefix = [] ∧ letMuts = 0 ∨ xs.suffix = [] ∧ letMuts = l.sum⌝
-/
#guard_msgs (info) in
theorem mySum_suggest_invariant (l : List Nat) : mySum l = l.sum := by
  generalize h : mySum l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit
```
It still is the user's job to weaken this invariant such that it
interpolates over all loop iterations, but it *is* a good starting point
for iterating. It is also useful because the user does not need to
remember the exact syntax.
This PR simplifies the `grind order` module, and internalizes the order
constraints. It removes the `Offset` type class because it introduced
too much complexity. We now cover the same use cases with a simpler
approach:
- Any type that implements at least `Std.IsPreorder`
- Arbitrary ordered rings.
- `Nat` by the `Nat.ToInt` adapter.
This PR refactors the Lake log monads to take a `LogConfig` structure
when run (rather than multiple arguments). This breaking change should
help minimize future breakages due to changes in configuration options.
In addition, the CLI logging monad stack has been polished up and
`LogIO` now supports the `failLv` configuration option.
This PR adds support for remote artifact caches (e.g., Reservoir) to
Lake. As part of this support, a new suite of `lake cache` CLI commands
has been introduced to help manage Lake's cache. Also, the existing
local cache support has been overhauled for better interplay with the
new remote support.
**Cache CLI**
Artifacts are uploaded to a remote cache via `lake cache put`. This
command takes a JSON Lines input-to-outputs file which describes the
output artifacts for a build (indexed by its input hash). This file can
be produced by a run of `lake build` with the new `-o` option. Lake will
write the input-to-outputs mappings of the root package artifacts
traversed by the build to the file specified via `-o`. This file can
then be passed to `lake cache put` to upload both it and the built
artifacts from the local cache to the remote cache.
The remote cache service can be customized using the following
environment variables:
* `LAKE_CACHE_KEY`: This is the authorization key for the remote cache.
Lake uploads artifacts via `curl` using the AWS Signature Version 4
protocol, so this should be the S3 `<key>:<secret>` pair expected by
`curl`.
* `LAKE_CACHE_ARTIFACT_ENDPOINT`: This is the base URL to upload (or
download) artifacts to a given remote cache. Artifacts will be stored at
`<endpoint>/<scope>/<content-hash>.art`.
* `LAKE_CACHE_REVISION_ENDPOINT`: This is the base URL to upload (or
download) input-to-output mappings to a given remote cache. Mappings are
indexed by the Git revision of the package, and are stored at
`<endpoint>/<scope>/<rev>.jsonl`.
The `<scope>` is provided through the `--scope` option to `lake cache
put`. This option is used to prevent one package from overwriting the
artifacts/mappings of another. Lake artifact hashes and Git revision
hashes are not cryptographically secure, so it is not safe for a service
to store untrusted files across packages in a single flat store.
Once artifacts are available in a remote cache, the `lake cache get`
command can be used to retrieve them. By default, it will fetch
artifacts for the root package's dependencies from Reservoir using its
API. But, like `cache put`, it can be configured to use a custom
endpoint with the above environment variables and an explicit `--scope`.
When so configured, `cache get` will instead download artifacts for the
root package. Lake only downloads artifacts for a single package in this
case, because it cannot deduce the necessary package scopes without
Reservoir.
**Significant local cache changes**
* Lake now always has a cache directory. If Lake cannot find a good
candidate directory on the system for the cache, it will instead store
the cache at `.lake/cache` within the workspace.
* If the local cache is disabled, Lake will not save built artifacts to
the cache. However, Lake will, nonetheless, always attempt to lookup
build artifacts in the cache. If found, the cached artifact will be
copied to the build location ("restored").
* Input-to-outputs mappings in the local cache are no longer stored in a
single file for a package, but rather in individual files per input (in
the `outputs` subdirectory of the cache).
* Outputs in a trace file, outputs file, or mappings file are now an
`ArtifactDescr`, which is currently composed of both the content hash
and the file extension.
* Trace files now contain a date-based `schemaVersion` to help make
version-to-version migration easier. Hashes in JSON and in artifact
names now use a 16-digit hexadecimal encoding (instead of a variable
decimal encoding).
* `buildArtifactUnlessUpToDate` now returns an `Artifact` instead of a
`FilePath`.
**NOTE:** The Lake local cache is still disabled by default. This means
that built artifacts, by default, will not be placed in the cache
directory, and thus will not be available for `lake cache put` to
upload. Users must first explicitly enable the cache by either setting
the `LAKE_ARTIFACT_CACHE` environment variable to a truthy value or by
setting the `enableArtifactCache` package configuration option to
`true`.
This PR changes the way that scientific numerals are parsed in order to
give better error messages for (invalid) syntax like `32.succ`.
Example:
```lean4
#check 32.succ
```
Before, the error message is:
```
unexpected identifier; expected command
```
This is because `32.` parses as a complete float, and `#check 32.`
parses as a complete command, so `succ` is being read as the start of a
new command.
With this change, the error message will move from the `succ` token to
the `32` token (which isn't totally ideal from my perspective) but gives
a less misleading error message and corresponding suggestion:
```
unexpected identifier after decimal point; consider parenthesizing the number
```
This PR refactors Lake's package naming procedure to allow packages to
be renamed by the consumer. With this, users can now require a package
using a different name than the one it was defined with.
This support will be used in the future to enable seamlessly
including the same package at multiple different versions within the
same workspace.
In a Lake package configuration file written in Lean, the current
package's assigned name is now accessed through `__name__` instead of
the previous `_package.name`. A deprecation warning has been added to
`_package.name` to assist in migration.
This PR adds a `.` in front of `pass` in the `#guard_msgs`
implementation.
Previously, the match arm read `| pass => ...`. Presumably, `pass` was
intended to mean `SpecResult.pass`, but this isn't in scope, so instead
`pass` here is a catch-all variable. By adding a dot, we ensure we
actually refer to the constant. Note that this was the last case in the
pattern-match, and since all other constructors were correctly
referenced, the only case that went to the fallback was
`SpecResult.pass`, so the code did the right thing. Still, by fixing
this, we prevent a surprise in the event that a new `SpecResult`
constructor is added.
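A self-contained illustration of the pitfall (using a hypothetical stand-in for `SpecResult`): without the leading dot, the last arm binds a fresh variable rather than matching the constructor.
```lean
inductive Result where
  | pass | fail

def describe : Result → String
  | .fail => "fail"
  | pass  => "pass"   -- ⚠ `pass` is a fresh pattern variable here, not `Result.pass`;
                      -- writing `.pass` matches the constructor as intended
```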
This PR ensures that `Substring.beq` is reflexive, and in particular
satisfies the equivalence `ss1 == ss2 <-> ss1.toString = ss2.toString`.
Closes #10511.
Note: I also fixed a strange line in the `String.extract` documentation
which looks like it may have been a copypasta, and added another example
to show how invalid UTF8 positions work, but the doc also makes a point
of saying that it is unspecified so maybe it would be better not to have
the example? 🤷
This PR fixes deadlocking `exit` calls in the language server.
We have previously observed deadlocking calls to `exit` inside of the
language server and deemed them irrelevant. However, child processes of
these deadlocking exiting processes can continue to consume a large
amount of CPU as they try to compile a library, etc. Hence, this PR
switches to the MT-safe `_Exit` inside of the language server,
in order to ensure the server finishes when it is told to.
This PR introduces safe alternatives to `String.Pos` and `Substring`
that can only represent valid positions/slices.
Specifically, the PR
- introduces the predicate `String.Pos.IsValid`;
- proves several nontrivial equivalent conditions for
`String.Pos.IsValid`;
- introduces `String.ValidPos`, which is a `String.Pos` with an
`IsValid` proof;
- introduces `String.Slice`, which is like `Substring` but made from
`String.ValidPos` instead of `Pos`;
- introduces `String.Pos.IsValidForSlice`, which is like
`String.Pos.IsValid` but for slices;
- introduces `String.Slice.Pos`, which is like `String.ValidPos` but for
slices;
- introduces various functions for converting between the two types of
positions.
The API added in this PR is not complete. It will be expanded in future
PRs with additional operations and verification.
This PR prevents some nonsensical code from crashing the server.
Specifically, the kernel is changed to
- properly check that passed expressions do not contain loose bvars,
which could lead to a segmentation fault on a well-crafted input
(discovered through fuzzing), and
- check that constants generated when creating a new inductive type do
not overwrite each other, which could lead to the kernel taking
something out of the environment and then casting it to something it
isn't.
Partially addresses #8258, but let's keep that one open until the error
message is a little better.
Fixes #10492.
This PR disables `trace.profiler` in `bench/riskv-ast.lean`. We don't
want to optimize the trace profiler, but normal code.
While at it, I removed the `#exit` to cover more of the file.
While at it, also import the latest from upstream.
This PR allows `.congr_simp` theorems to be created not just for
definitions, but for any constant. This is important to make the
machinery work across module boundaries.
It also moves the `enableRealizationsForConst` call for constructors to
a more sensible place, and enables it for axioms.
This PR adds some helper functions for the premise selection API, to
assist implementers.
---------
Co-authored-by: Thomas Zhu <thomas.zhu.sh@hotmail.com>
This PR introduces a simple script that adjusts module headers in a
package for use of the module system, without further minimizing import
or annotation use.
---------
Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
This PR fixes `simp` in `-zeta -zetaUnused` mode producing
incorrect proofs when, in a `have` telescope, a variable occurs in the
type of the body only transitively. Fixes #10353.
This PR adds a docstring role for module names, called `module`. It also
improves the suggestions provided for code elements, making them more
relevant and proposing `lit`.
This PR modifies the "issues" output of the `grind` diagnostics.
Previously we would just describe synthesis failures. These messages were
confusing to users, as in fact the linarith module continues to work, but
less capably. For most of the issues, we now explain the resulting change
in behaviour. There is still a TODO to explain the change when
`IsOrderedRing` is not available.
This PR adds `Notify`, a structure similar to `Condvar` but intended
for concurrency. The main difference between `Std.Sync.Notify` and
`Std.Condvar` is that the latter depends on a `Std.Mutex` and
blocks the entire thread that the `Task` is using while waiting. If I
try to use it with async and a lot of `Task`s like this:
```lean
def condvar : Async Unit := do
  let condvar ← Std.Condvar.new
  let mutex ← Std.Mutex.new false
  for i in [0:threads] do
    background do
      IO.println s!"start {i + 1}"
      await =<< (show IO (ETask _ _) from IO.asTask (mutex.atomically (condvar.wait mutex)))
      IO.println s!"end {i + 1}"
  IO.sleep 2000
  condvar.notifyAll
```
It causes some weird behavior because some tasks start running and get
notified, while others don't, because `condvar.wait` blocks the entire
`Task`, and right now, as far as I know, it blocks an entire thread and
cannot be paused while doing blocking operations like that.
`Notify` uses `Promise`s so it’s better suited for concurrency. The
`Task` is not blocked while waiting for a notification which makes it
simpler for use cases that just involve notifying:
```lean
def notify : Async Unit := do
  let notify ← Std.Notify.new
  for i in [0:threads] do
    background do
      IO.println s!"start {i}"
      notify.wait
      IO.println s!"end {i}"
  IO.sleep 2000
  notify.notify
```
This PR depends on: #10366, #10367 and #10370.
This PR removes some `grind` annotations for `Array.attach` and related
functions. These lemmas introduce a lambda on the right-hand side, which
`grind` can't do much with. I've added a test file that verifies that
the theorems with removed annotations can already be proved by
`grind`. Removing the annotations will help with excessive instantiation.
The radar bench scripts at
https://github.com/leanprover/radar-bench-lean4/ split up the benchmarks
between the two runners based on the tags: One runner filters by the tag
`stdlib` while the other filters by the tag `other`. Only benchmarks
using one of these tags will be run, and any benchmark tagged with both
will waste electricity.
As far as I know, the tags are unused otherwise, so I just replaced all
the old tags.
This also exposed an issue with `#guard_msgs` in Verso mode where the
docstring would log parse errors as if it contained Verso, even though
it actually worked. This has been fixed, and error messages improved as
well.
Hi, the doc of `String.fromUTF8` previously said invalid characters are
replaced with 'A'. But the parameter `h : validateUTF8 a` guarantees
there are no invalid characters, so that explanation doesn't make sense
to me. This PR deletes that explanation (and fixes some unrelated
typos).
I also have a patch that uses `h` to prove each of the characters is
valid, eliminating the need for a default character
([pr/chore-String-fromUTF8-prove-valid](27f1ff36b2)),
would you be interested in merging that?
<details>
<summary>Notes on invalid characters from unchecked C++</summary>
I don't know if this function may be called from unchecked C++ with
invalid characters. If it may, I'm not sure what would happen with my
patched function... I'm not familiar with Lean's safety model, but it
seems like a bad idea to have a Lean function that takes a proof of a
proposition but is expected to operate in a certain way even if the
proposition is false. I think the safe approach is to have two functions
-- one that takes a proof and is only called from Lean, and another that
doesn't take a proof and replaces invalid chars (for use from C++, not
sure whether it's useful from Lean); I'd prefer to go even further and
report an error instead of silently replacing invalid characters (I'm
not sure if there is any easy way to report errors/panic in Lean code
called from C++).
</details>
This PR resolves a potential bad interaction between the compiler and
the module system where references to declarations not imported are
brought into scope by inlining or specializing. We now proactively check
that declarations to be inlined/specialized only reference public
imports. The intention is to later resolve this limitation by moving
compilation out into a separate build step with its own
import/incremental system.
This PR annotates the shadowing main definitions of `bv_decide`,
`mvcgen` and similar tactics in `Std` with the semantically richer
`tactic_alt` attribute so that `verso` will not warn about overloads.
This fixes leanprover/verso#535.
This PR adds a simple implementation of MePo, from "Lightweight
relevance filtering for machine-generated resolution problems" by Meng
and Paulson.
This needs tuning, but is already useful as a baseline or test case.
---------
Co-authored-by: Thomas Zhu <thomas.zhu.sh@hotmail.com>
This PR fixes constant folding for UIntX in the code generator. This
optimization was previously simply dead code due to the way that uint
literals are encoded.
This PR implements module docstrings in Verso syntax, as well as adding
a number of improvements and fixes to Verso docstrings in general. In
particular, they now have language server support and are parsed at
parse time rather than elaboration time, so the snapshot's syntax tree
includes the parsed documentation.
This PR adds vectored writes for TCP and UDP (which helps a lot with not
copying the arrays over and over) and fixes an RC issue in the TCP and
UDP cancel functions with the line `lean_dec((lean_object*)udp_socket);`
and a similar one that tries to decrement the object inside of the
`socket`.
This PR adds a code action for `grind` parameters. We need to use
`set_option grind.param.codeAction true` to enable the option. The PR
also adds a modifier to instruct `grind` to use the "default" pattern
inference strategy.
This PR reduces noise in the 'Equivalence classes' section of the
`grind` diagnostics. It now uses a notion of *support expressions*.
Right now, it is hard-coded, but we will probably make it extensible in
the future. The current definition is
- `match`, `ite` and `dite`-applications. They have builtin support in
`grind`.
- Cast-like applications used by `grind`: `toQ`, `toInt`, `Nat.cast`,
`Int.cast`, and `cast`
- `grind` gadget applications (e.g., `Grind.nestedDecidable`)
- Projections of constructors (e.g., `{ x := 1, y := 2}.x`)
- Auxiliary arithmetic terms constructed by solvers such as `cutsat` and
`ring`.
If an equivalence class contains at most one non-support term, it goes
into the “others” bucket. Otherwise, we display the non-support elements
and place the support terms in a child node.
**BEFORE**:
<img width="1397" height="1558" alt="image"
src="https://github.com/user-attachments/assets/4fd4de31-7300-4158-908b-247024381243"
/>
**AFTER**:
<img width="840" height="340" alt="image"
src="https://github.com/user-attachments/assets/05020f34-4ade-49bf-8ccc-9eb0ba53c861"
/>
**Remark**: No information is lost; it is just grouped differently.
This PR adds an alternative implementation of `Deriving Ord` based on
comparing `.ctorIdx` and using a dedicated matcher for comparing same
constructors (added in #10152). The new option
`deriving.ord.linear_construction_threshold` sets the constructor count
threshold (10 by default) for using the new construction.
It also (unconditionally) changes the implementation for enumeration
types to simply compare the `ctorIdx`.
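A sketch of exercising the new construction on a small (hypothetical) type by lowering the threshold:
```lean
set_option deriving.ord.linear_construction_threshold 2 in
inductive Color where
  | red | green | blue
deriving Ord

#eval compare Color.red Color.blue   -- Ordering.lt
```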
This PR implements `mvcgen invariants?` for providing initial invariant
skeletons for the user to flesh out. When the loop body has an early
return, it will helpfully suggest `Invariant.withEarlyReturn ...` as a
skeleton.
```lean
def mySum (l : List Nat) : Nat := Id.run do
  let mut acc := 0
  for x in l do
    acc := acc + x
  return acc

/--
info: Try this:
invariants
· ⇓⟨xs, acc⟩ => _
-/
#guard_msgs (info) in
theorem mySum_suggest_invariant (l : List Nat) : mySum l = l.sum := by
  generalize h : mySum l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit

def nodup (l : List Int) : Bool := Id.run do
  let mut seen : HashSet Int := ∅
  for x in l do
    if x ∈ seen then
      return false
    seen := seen.insert x
  return true

/--
info: Try this:
invariants
· Invariant.withEarlyReturn (onReturn := fun r acc => _) (onContinue := fun xs acc => _)
-/
#guard_msgs (info) in
theorem nodup_suggest_invariant (l : List Int) : nodup l ↔ l.Nodup := by
  generalize h : nodup l = r
  apply Id.of_wp_run_eq h
  mvcgen invariants?
  all_goals admit
```
This PR fixes a potential miscompilation when using non-exposed type
definitions using the module system by turning it into a static error. A
future revision may lift the restriction by making the compiler metadata
independent of the current module.
This PR makes `mvcgen` reduce through `let`s, so that it progresses over
`(have t := 42; fun _ => foo t) 23` by reduction to `have t := 42; foo
t` and then introducing `t`.
This PR ensures that issues reported by the E-matching module are
displayed only when `set_option grind.debug true` is enabled. Users
reported that these messages are too distracting and not very useful.
They are more valuable for library developers when annotating their
libraries.
This PR adds an alternative implementation of `deriving BEq` based on
comparing `.ctorIdx` and using a dedicated matcher for comparing same
constructors (added in #10152), to avoid the quadratic overhead of the
default match implementation. The new option
`deriving.beq.linear_construction_threshold` sets the constructor count
threshold (10 by default) for using the new construction. Such instances
also allow `deriving ReflBEq, LawfulBEq`, although the proofs of these
properties are still quadratic.
This PR adds the `reduceCtorIdx` simproc which recognizes and reduces
`ctorIdx` applications. This is not on by default yet because it does
not use the discrimination tree (yet).
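A hedged sketch of the kind of goal the simproc targets; since it is not on by default, the sketch assumes it can be enabled by name in the `simp` call, and it assumes `Option.some` has constructor index 1.
```lean
-- Hypothetical usage sketch (names and availability assumed).
example : (Option.some 42).ctorIdx = 1 := by
  simp [reduceCtorIdx]
```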
This PR redefines `String` to be the type of byte arrays `b` for which
`b.IsValidUtf8`.
This moves the data model of strings much closer to the actual data
representation at runtime.
In the near future, we will
- provide variants of `String.Pos` and `Substring` that only allow for
valid positions
- redefine all `String` functions to be much closer to their C++
implementations
In the near-to-medium future we will then provide comprehensive
verification of `String` based on these refactors.
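For orientation, a minimal sketch of the stated data model, using the existing boolean UTF-8 validator as a stand-in for the `IsValidUtf8` predicate mentioned above (the actual definition may differ):
```lean
-- Sketch only: "byte arrays that are valid UTF-8".
def StringAsBytes : Type := { b : ByteArray // String.validateUTF8 b = true }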
This PR adds support for the Count Trailing Zeros operation `BitVec.ctz` to
the bitvector library and to `bv_decide`, relying on the existing `clz`
circuit. We also build some theory around `BitVec.ctz` (analogous to the
theory existing for `BitVec.clz`) and introduce lemmas
`BitVec.[ctz_eq_reverse_clz, clz_eq_reverse_ctz, ctz_lt_iff_ne_zero,
getLsbD_false_of_lt_ctz, getLsbD_true_ctz_of_ne_zero,
two_pow_ctz_le_toNat_of_ne_zero, reverse_reverse_eq,
reverse_eq_zero_iff]`.
The `ctz` operation is common in numerous compiler intrinsics (see
[here](https://clang.llvm.org/docs/LanguageExtensions.html#intrinsics-support-within-constant-expressions))
and architectures (see
[here](https://en.wikipedia.org/wiki/Find_first_set)).
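A small sanity check of the new operation (a sketch; the exact display of the result may vary):
```lean
-- 0b01011000 has three trailing zero bits, so `ctz` should report 3.
#eval (0b01011000#8 : BitVec 8).ctz
```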
---------
Co-authored-by: Siddharth <siddu.druid@gmail.com>
This PR adds `reprove N by T`, which effectively elaborates `example
type_of% N := by T`. It supports multiple identifiers. This is useful
for testing tactics.
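A hedged usage sketch (the target theorem and tactic are chosen purely for illustration):
```lean
-- Re-prove an existing library theorem with a different tactic.
reprove Nat.add_comm by
  intro n m
  omega
```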
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR enables the new E-matching pattern inference heuristic for
`grind`, implemented in PR #10422.
**Important**: Users can still use the old pattern inference heuristic
by setting:
```lean
set_option backward.grind.inferPattern true
```
In PR #10422, we introduced the new modifier `@[grind!]` for enabling
the minimal indexable subexpression condition. This option can now also
be set in `grind` parameters. Example:
```lean
opaque f : Nat → Nat
opaque fInv : Nat → Nat
axiom fInv_f : fInv (f x) = x
/-- trace: [grind.ematch.pattern] fInv_f: [f #0] -/
#guard_msgs in
set_option trace.grind.ematch.pattern true in
example {x y} : f x = f y → x = y := by
  /-
  The modifier `!` instructs `grind` to use the minimal indexable subexpression
  (i.e., `f x` in this case).
  -/
  grind [!fInv_f]
```
This PR refines and clarifies the `meta` phase distinction in the module
system.
* `meta import A` without `public` now has the clarified meaning of
"enable compile-time evaluation of declarations in or above `A` in the
current module, but not downstream". This is now checked statically by
enforcing that public meta defs, which therefore may be referenced from
outside, can only use public meta imports, and that global evaluating
attributes such as `@[term_parser]` can only be applied to public meta
defs.
* `meta def`s may no longer reference non-meta defs even when in the
same module. This clarifies the meta distinction as well as improves
locality of (new) error messages.
* parser references in `syntax` are now also properly tracked as meta
references.
* A `meta import` of an `import` now properly loads only the `.ir` of
the nested module for the purposes of execution instead of also making
its declarations available for general elaboration.
* `initialize` is now no longer being run on import under the module
system, which is now covered by `meta initialize`.
This PR ensures users can select the "minimal indexable subexpression"
condition in `grind` parameters. For example, they can now write `grind [!
-> thmName]`. `grind?` will include the `!` modifier whenever users had
used `@[grind!]`. This PR also fixes a missing case in the new pattern
inference procedure.
It also adjusts some `grind` annotations and tests in preparation for
setting the new pattern inference heuristic as the new default.
This PR changes the order of steps tried when proving equational
theorems for structural recursion. In order to avoid goals that `split`
cannot handle, avoid unfolding the LHS of the equation to `.brecOn` and
`.rec` until after the RHS has been split into its final cases.
Fixes: #10195
This PR lets the `split` tactic generalize discriminants that are not
free variables and proofs using `generalize`. If the only
non-fvar-discriminants are proofs, then this avoids the more elaborate
generalization strategy of `split`, which can fail with dependent
motives, thus mitigating issue #10424.
This PR changes the defeq algorithm to perform `whnf` on the `String.mk`
expression it creates for string literals.
This is currently a no-op, but will no longer be one once `String` is
redefined so that `String.mk` is a regular function instead of a
constructor.
This PR implements the new E-matching pattern inference heuristic for
`grind`. It is not enabled yet. You can activate the new behavior using
`set_option backward.grind.inferPattern false`. Here is a summary of the
new behavior.
* `[grind =]`, `[grind =_]`, `[grind _=_]`, `[grind <-=]`: no changes;
we keep the current behavior.
* `[grind ->]`, `[grind <-]`, `[grind =>]`, `[grind <=]`: we stop using
the *minimal indexable subexpression* and instead use the first
indexable one.
* `[grind! <mod>]`: behaves like `[grind <mod>]` but uses the minimal
indexable subexpression restriction. We generate an error if the user
writes `[grind! =]`, `[grind! =_]`, `[grind! _=_]`, or `[grind! <-=]`,
since there is no pattern search in these cases.
* `[grind]`: it tries `=`, `=_`, `<-`, `->`, `<=`, `=>` with and without
the minimal indexable subexpression restriction. For the ones that work,
we generate a code action to encourage users to select the one they
prefer.
* `[grind!]`: it tries `<-`, `->`, `<=`, `=>` using the minimal
indexable subexpression restriction. For the ones that work, we generate
a code action to encourage users to select the one they prefer.
* `[grind? <mod>]`: where `<mod>` is one of the modifiers above, it
behaves like `[grind <mod>]` but also displays the pattern.
Example:
```lean
/--
info: Try these:
• [grind =] for pattern: [f (g #0)]
• [grind =_] for pattern: [r #0#0]
• [grind! ←] for pattern: [g #0]
-/
#guard_msgs in
@[grind] axiom fg₇ : f (g x) = r x x
```
This PR adds a normalizer for non-commutative semirings to `grind`.
Examples:
```lean
open Lean.Grind
variable (R : Type u) [Semiring R]
example (a b c : R) : a * (b + c) = a * c + a * b := by grind
example (a b : R) : (a + 2 * b)^2 = a^2 + 2 * a * b + 2 * b * a + 4 * b^2 := by grind
example (a b : R) : b^2 + (a + 2 * b)^2 = a^2 + 2 * a * b + b * (1+1) * a * 1 + 5 * b^2 := by grind
example (a b : R) : a^3 + a^2*b + a*b*a + b*a^2 + a*b^2 + b*a*b + b^2*a + b^3 = (a+b)^3 := by grind
```
This PR adds the helper theorem `eq_normS_nc` for normalizing
non-commutative semirings. We will use this theorem to justify
normalization steps in the `grind ring` module.
This PR changes the automation in `deriving_LawfulEq_tactic_step` to use
`with_reducible` when asserting the shape of the goal using `change`, so
that we do not accidentally unfold `x == x'` calls here. Fixes #10416.
This PR adds the ability to do `deriving ReflBEq, LawfulBEq`. Both
classes have to be listed in the `deriving` clause. For `ReflBEq`, a simple
`simp`-based proof is used. For `LawfulBEq`, a dedicated,
syntax-directed tactic is used that should work for derived `BEq`
instances. This is meant to work with `deriving BEq` (but you can try to
use it on hand-rolled `@[method_specs] instance : BEq…` instances).
Does not support mutual or nested inductives.
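A minimal sketch of the new clause (the inductive type here is illustrative); as noted above, both classes must appear in the `deriving` clause and the `BEq` instance is expected to be a derived one.
```lean
inductive Tree where
  | leaf
  | node (left right : Tree)
  deriving BEq, ReflBEq, LawfulBEq
```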
This PR fixes a bug where definitions with nested proofs that contain
`sorry` might not report "warning: declaration uses 'sorry'" if the
proof has the same type as another nested proof from a previous
declaration. The bug only affected log messages; `#print axioms` would
still correctly report uses of `sorryAx`.
The fix is that now the abstract nested proofs procedure does not
consult the aux lemma cache if the proof contains a `sorry`.
Closes #10196
This PR gives anonymous constructor notation (`⟨x,y⟩`) an error recovery
mechanism where if there are not enough arguments then synthetic sorries
are inserted for the missing arguments and an error is logged, rather
than outright failing.
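A sketch of the new behaviour (an error is still logged, but elaboration recovers):
```lean
-- Previously elaboration failed outright here; now an error is reported and a
-- synthetic sorry stands in for the missing second component.
example : Nat × Nat := ⟨1⟩
```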
Closes #9591.
This PR fixes an issue with the `if` tactic where errors were not placed
at the correct source ranges. It also adds some error recovery to avoid
additional errors about unsolved goals on the `if` token when the tactic
has incomplete syntax.
Closes #7972
This PR adds the `reduceBEq` and `reduceOrd` simprocs. They rewrite
occurrences of `_ == _` resp. `Ord.compare _ _` if both arguments are
constructors and the corresponding instance has been marked with
`@[method_specs]` (introduced in #10302), which now by default is the
case for derived instances.
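A hedged sketch of the kind of goals the new simprocs can close for a derived instance (the type is illustrative):
```lean
inductive Fruit where
  | apple | banana
  deriving BEq, Ord

-- Both arguments are constructors, so the simprocs can rewrite these applications.
example : (Fruit.apple == Fruit.banana) = false := by simp
example : compare Fruit.apple Fruit.banana = .lt := by simp
```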
This PR introduces the `@[specs]` attribute. It can be applied to
(certain) type class instances to define “specification theorems” for
the class’ operations, by taking the equational theorems of the
implementation function mentioned in the type class instance and
rephrasing them in terms of the overloaded operations. Fixes #5295.
Example:
```
inductive L α where
  | nil : L α
  | cons : α → L α → L α

def L.beqImpl [BEq α] : L α → L α → Bool
  | nil, nil => true
  | cons x xs, cons y ys => x == y && L.beqImpl xs ys
  | _, _ => false

@[method_specs] instance [BEq α] : BEq (L α) := ⟨L.beqImpl⟩

/--
info: theorem instBEqL.beq_spec_2.{u_1} : ∀ {α : Type u_1} [inst : BEq α] (x_2 : α) (xs : L α) (y : α) (ys : L α),
(L.cons x_2 xs == L.cons y ys) = (x_2 == y && xs == ys)
-/
#guard_msgs(pass trace, all) in
#print sig instBEqL.beq_spec_2
```
It also introduces the `method_specs_norm` simpset to allow registering
further normalization of the theorems. The intended use of this is to
rewrite, say, `Append.append` to the `HAppend.hAppend` (i.e. `++`) that
the user wants to see. Library annotations to follow in a separate PR.
This PR makes the builtin Verso docstring elaborators bootstrap
correctly, adds the ability to postpone checks (which is necessary for
resolving forward references and bootstrapping issues), and fixes a
minor parser bug.
This PR includes some improvements to the release process, making the
updating of `stable` branches more robust, and including `cslib` in the
release checklist.
This PR implements sanity checks in the `grind ring` module to ensure
the instances synthesized by type class resolution are definitionally
equal to the corresponding ones in the `grind` core classes. The
definitional equality test is performed with reduction restricted to
reducible definitions and instances.
This PR fixes an issue where the "eta feature" in the app elaborator,
which is invoked when positional arguments are skipped due to named
arguments, results in variables that can be captured by those named
arguments. Now the temporary local variables that implement this feature
get fresh names. The names used for the closed lambda expression still
use the original parameter names.
Closes#6373
This PR enables using `notation` items in
`infix`/`infixl`/`infixr`/`prefix`/`postfix`. The motivation for this is
to enable the use of `pp.unicode`-aware parsers. A follow-up PR
can combine core parsers as such:
```lean
infixr:30 unicode(" ∨ ", " \\/ ") => Or
```
Continuation of #10373.
This PR modifies the syntax for tactic configurations. Previously just
`(ident` would commit to tactic configuration item parsing, but now it
needs to be `(ident :=`. This enables reliably using tactic
configurations before the `term` category. For example, given `syntax
"my_tac" optConfig term : tactic`, it used to be that `my_tac (x + y)`
would have an error on `+` with "expected `:=`", but now it parses the
term.
An additional rationale is that these are like named arguments; (1)
terms can't begin with named arguments so now there is no parsing
ambiguity and (2) `Parser.Term.namedArgument` indeed already includes
`:=` in the atomic part.
This PR modifies pretty printing of `fun` binders, suppressing the safe
shadowing feature among the binders in the same `fun`. For example,
rather than pretty printing as `fun x x => 0`, we now see `fun x x_1 =>
0`. The calculation is done per `fun`, so for example `fun x => id fun x
=> 0` pretty prints as-is, taking advantage of safe shadowing.
The motivation for this change is that many users have reported that
safe shadowing within the same `fun` is confusing.
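A sketch of how the change shows up (the displayed output is assumed):
```lean
-- With this change the inner binder is renamed when displayed:
-- `fun x x_1 => 0` rather than `fun x x => 0`.
#check fun (x : Nat) (x : Nat) => (0 : Nat)
```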
This PR adds support for non-commutative ring normalization in `grind`.
The new normalizer also accounts for the `IsCharP` type class. Examples:
```lean
open Lean Grind
variable (R : Type u) [Ring R]
example (a b : R) : (a + 2 * b)^2 = a^2 + 2 * a * b + 2 * b * a + 4 * b^2 := by grind
example (a b : R) : (a + 2 * b)^2 = a^2 + 2 * a * b + -b * (-4) * a - 2*b*a + 4 * b^2 := by grind
variable [IsCharP R 4]
example (a b : R) : (a - b)^2 = a^2 - a * b - b * 5 * a + b^2 := by grind
example (a b : R) : (a - b)^2 = 13*a^2 - a * b - b * 5 * a + b*3*b*3 := by grind
```
This PR adds the options `pp.piBinderNames` and
`pp.piBinderNames.hygienic`. Enabling `pp.piBinderNames` causes
non-dependent pi binder names to be pretty printed rather than omitted.
When `pp.piBinderNames.hygienic` is false (the default), only
non-hygienic binder names of this kind are pretty printed. Setting `pp.all`
enables `pp.piBinderNames` if it is not otherwise explicitly set.
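A hedged sketch of the expected effect (the exact rendered form is assumed):
```lean
-- Without the option this displays as `Nat → Nat`; with it, the named
-- non-dependent binder should be kept, e.g. as `(n : Nat) → Nat`.
set_option pp.piBinderNames true in
#check ∀ (n : Nat), Nat
```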
Implementation note: this is exposing the secret pretty printer option
`pp.piBinderNames` that was being used within the signature pretty
printer.
Closes #1134.
This PR fixes a few bugs in the `rw` tactic: it could "steal" goals
because they appear in the type of the rewrite, it did not do an occurs
check, and new proof goals would not be synthetic opaque. This PR also
lets the `rfl` tactic assign synthetic opaque metavariables so that it
is equivalent to `exact rfl`.
Implementation note: filtering old vs new is not sufficient. This PR
partially addresses the bug where the rw tactic creates natural
metavariables for each of the goals; now new proof goals are synthetic
opaque.
Metaprogramming API: Instead of `Lean.MVarId.rewrite` prefer
`Lean.Elab.Tactic.elabRewrite` for elaborating rewrite theorems and
applying rewrites to expressions.
Closes #10172
This PR adds a `pp.unicode` option and a `unicode("→", "->")` syntax
description alias for the lower-level `unicodeSymbol "→" "->"` parser.
The syntax is added to the `notation` command as well. When `pp.unicode`
is true (the default) then the first form is used when pretty printing,
and otherwise the second ASCII form is used. A variant, `unicode("→",
"->", preserveForPP)` causes the `->` form to be preferred; delaborators
can insert `→` directly into the syntax, which will be pretty printed
as-is; this allows notations like `fun` to use custom options such as
`pp.unicode.fun` to opt into the unicode form when pretty printing.
Additionally:
- Adds more documentation for the `symbol` and `nonReservedSymbol`
parser descriptions.
- Adds documentation for the
`infix`/`infixr`/`infixl`/`prefix`/`postfix` commands.
- The parenthesizers for symbols are improved to backtrack if the atom
doesn't match.
- Fixes a bug where `&"..."` symbols aren't validated.
This is partial progress for issue #1056. What remains is enabling
`unicode(...)` for mixfix commands and then making use of it for core
notation.
This PR explicitly imports `Lake.Util` submodules in `Lake`, ensuring
Lake utilities are consistently available by default in configuration
files.
It also simplifies the Lake globs for the core build to ensure all Lake
submodules are built (even if they are not imported).
This PR ensures that the infotree recognizes `Classical.propDecidable`
as an instance, when below a `classical` tactic.
The `classical` tactic modifies the environment that the subsequent
sequence of tactics runs in (by making `Classical.propDecidable` an
instance). However, it does not add a corresponding `InfoTree.context`
node, so its effects are not visible when we want to replay a tactic
sequence (for example when running a tactic in the tactic analysis
framework). This PR adds a call to `Lean.Elab.withSaveInfoContext` to
remedy this issue.
There are two potential places to add this call: in the meta-level
`Lean.Elab.Tactic.classical` wrapper, or in the tactic-level
`evalClassical` tactic elaborator. I chose the latter since meta-level
does not have access to info tree operations (unless we add many
parameters to `Lean.Elab.Tactic.classical`: `[MonadNameGenerator m]
[MonadOptions m] [MonadMCtx m] [MonadResolveName m] [MonadFileMap m]`).
A testcase that uses the tactic analysis framework is available here:
https://github.com/leanprover-community/mathlib4/pull/29501
This PR adds syntax for defining compile-time initializers under the
module system, with other initializers to be restricted from running at
compile time in a follow-up PR.
This PR uses the per-constructor `noConfusion` principles (from #10315)
in the `mkNoConfusion` app builder, if possible. This means they are
used by `injection`, `grind`, `simp` and other places. This brings
notable performance improvements when dealing with inductives with a
large number of constructors.
This PR adds `T.ctor.noConfusion` declarations, which are
specializations of `T.noConfusion` to equalities between `T.ctor`. The
point is to avoid reducing the `T.noConfusionType` construction every
time we use `injection` or a similar tactic.
```lean
Vec.cons.noConfusion.{u_1, u} {α : Type u} (P : Sort u_1) {n : Nat}
(x : α) (xs : Vec α n) (x' : α) (xs' : Vec α n)
(h : Vec.cons x xs = Vec.cons x' xs')
(k : n = n → x = x' → xs ≍ xs' → P) : P
```
The constructions are not as powerful as `T.noConfusion` when the
indices of the inductive type are not just constructor parameters (or
constructor applications of these parameters), so the full
`T.noConfusion` construction is still needed as a fallback.
It may seem costly to generate these eagerly, but given that we eagerly
generate injectivity theorems already, and we will use them there, it
seems reasonable for now.
To further reduce the cost, we only generate them for constructors with
fields (for others, the `T.noConfusion` theorem doesn't provide any
information), and we use `macro_inline` to prevent the compiler from
creating code for these, given that the compiler has special support for
`T.noConfusion` that we want it to use.
An earlier version of this PR also removed trivial equations and
un-HEq-ed others, leading to
```
(k : x = x' → xs = xs' → P)
```
in the example above. I backed out of that change, as it makes it harder
for tactics like `injection` to know how often to `intro`, so it is better
to keep things uniform.
This PR adds range support to `BitVec` and the `UInt*` types. This means
that it is now possible to write, for example, `for i in (1 : UInt8)...5
do`, in order to loop over the values 1, 2, 3 and 4 of type `UInt8`.
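A sketch using the loop from the text:
```lean
def sumUpTo5 : UInt8 := Id.run do
  let mut acc : UInt8 := 0
  for i in (1 : UInt8)...5 do
    acc := acc + i
  return acc

#eval sumUpTo5  -- expected: 10 (= 1 + 2 + 3 + 4)
```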
This PR adds more lemmas about the `toList` and `toArray` functions on
ranges and iterators. It also renames `Array.mem_toArray` to
`List.mem_toArray`.
This PR adds the type `Std.Internal.Parsec.Error`, which contains the
constructors `.eof` (useful for checking whether parsing failed due to
not having enough input and then retrying once more input arrives, which
is useful in the HTTP server) and `.other`, which describes other errors.
It also adds documentation to many functions, along with some new
functions to the `ByteArray` Parsec, such as `peekWhen?`, `octDigit`,
`takeWhile`, `takeUntil`, `skipWhile`, and `skipUntil`.
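A hedged sketch of distinguishing the two cases (constructor payloads other than `.eof` are not assumed):
```lean
-- Returns `true` if a parse failure was due to running out of input.
def failedAtEof : Std.Internal.Parsec.Error → Bool
  | .eof => true
  | _    => false
```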
This PR is a follow-up to the change in `grind` pattern heuristics from
#10342, typically resolving the discrepancy by writing out an explicit
`grind_pattern` for the intended pattern. The new behaviour is more
aggressive, because it selects smaller patterns.
This PR allows the interpreter to jump to native code of `[export]`
declarations, which can increase performance as well as the
effectiveness of `interpreter.prefer_native=true` during bootstrapping.
This PR reimplements `mkNoConfusionType` in Lean, thus removing the
remaining C code related to this construction.
It also uses the constructor elimination principles only when there are
more than three constructors.
This PR completes the review of `@[grind]` annotations without a sigil
(e.g. `=` or `←`), replacing most of them with more specific annotations
or patterns.
---------
Co-authored-by: Leonardo de Moura <leomoura@amazon.com>
This PR implements a new E-matching pattern inference procedure that is
faithful to the behavior documented in the reference manual regarding
minimal indexable subexpressions. The old inference procedure was
failing to enforce this condition. For example, the manual documents
`[grind ->]` as follows
`[@grind →]` selects a multi-pattern from the hypotheses of the theorem.
In other words, `grind` will use the theorem for forwards reasoning.
To generate a pattern, it traverses the hypotheses of the theorem from
left to right. Each time it encounters a **minimal indexable
subexpression** which covers an argument which was not previously
covered, it adds that subexpression as a pattern, until all arguments
have been covered.
That said, the new procedure is currently disabled, and the following
option must be used to enable it.
```
set_option backward.grind.inferPattern false
```
Users can inspect differences between the old and new procedures using the
option
```
set_option backward.grind.checkInferPatternDiscrepancy true
```
Example:
```lean
/--
warning: found discrepancy between old and new `grind` pattern inference procedures, old:
[@List.length #2 (@toList _ #1#0)]
new:
[@toList #2#1#0]
use `set_option backward.grind.inferPattern true` to force old procedure
-/
#guard_msgs in
set_option backward.grind.checkInferPatternDiscrepancy true in
@[grind] theorem Vector.length_toList' (xs : Vector α n) : xs.toList.length = n := by sorry
```
This PR moves the definitions and basic facts about `Function.Injective`
and `Function.Surjective` up from Mathlib. We can do a better job of
arguing via injectivity in `grind` if these are available.
This PR updates `@[grind]` annotations which should be `@[grind =]`, for
robustness (and, presumably, in some fraction of cases the existing
heuristic for `@[grind]` is already too liberal).
This PR shares common functionality related to equalities between identical
constructors, and to when these are type-correct. In particular it uses the
more complete logic from `mkInjectivityThm` also in other places, such
as `CasesOnSameCtor` and the deriving code for `BEq`, `DecidableEq`,
`Ord`, for more consistency and better error messages.
This PR implements `mkNoConfusionImp` in Lean rather than in C. This
reduces our reliance on C, and may bring performance benefits from not
reducing `noConfusionType` during elaboration time (it still gets
reduced by the kernel when type-checking).
This PR makes it possible to write custom interpolation notation which
treats interpolated `String`s specially.
Sometimes it is desirable for `let w := "world"; foo!"hello {w}"` and
`foo!"hello world"` to mean different things; for instance, if debugging
and wanting to show all interpolands with `repr`. The current approach
forces `hello` to also be rendered with `repr`, which is not desirable.
This doesn't modify any existing formatters.
Requested in [#lean4 > ✔ dbg_trace should use `Repr` instance @
💬](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/.E2.9C.94.20dbg_trace.20should.20use.20.60Repr.60.20instance/near/495082575)
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR upstreams the Verso parser and adds preliminary support for
Verso in docstrings. This will allow the compiler to check examples and
cross-references in documentation.
After a `stage0` update, a follow-up PR will add the appropriate
attributes that allow the feature to be used. The parser tests from
Verso also remain to be upstreamed, and user-facing documentation will
be added once the feature has been used on more internals.
This PR fixes a performance issue in `grind linarith`. It was creating
unnecessary `NatModule`/`IntModule` structures for commutative rings
without an order. This kind of type should be handled by `grind ring`
only.
This PR implements model-based theory combination for types `A` which
implement the `ToInt` interface. Examples:
```lean
example {C : Type} (h : Fin 4 → C) (x : Fin 4)
    : 3 ≤ x → x ≤ 3 → h x = h (-1) := by
  grind

example {C : Type} (h : UInt8 → C) (x y z w : UInt8)
    : y + 1 + w ≤ x + w → x + w ≤ z → z ≤ y + w + 1 → h (x + w) = h (y + w + 1) := by
  grind

example {C : Type} (h : Fin 8 → C) (x y w r : Fin 8)
    : y + 1 + w ≤ r → r ≤ y + w + x → x = 1 → h r = h (y + w + 1) := by
  grind
```
This PR removes `grind →` annotations that fire too often, unhelpfully.
It would be nice for `grind` to instantiate these lemmas, but only if
it already sees `xs ++ ys` and `#[]` in the same equivalence class, not
just as soon as it sees `xs ++ ys`.
In the meantime, let's see what is using these.
This PR introduces limited functionality frontends `cutsat` and
`grobner` for `grind`. We disable theorem instantiation (and case
splitting for `grobner`), and turn off all other solvers. Both still
allow `grind` configuration options, so for example one can use `cutsat
+ring` (or `grobner +cutsat`) to solve problems that require both.
For `cutsat`, it is helpful to instantiate a limited set of theorems
(e.g. `Nat.max_def`). Currently this isn't supported, but we intend to
add this later.
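A hedged sketch of the two frontends on illustrative goals:
```lean
-- Pure linear integer arithmetic: an even number can never equal an odd one.
example (x y : Int) : 2 * x + 4 * y ≠ 1 := by cutsat

-- Pure polynomial reasoning from a hypothesis.
example (a b : Int) (h : a = b + 1) : a * a = b * b + 2 * b + 1 := by grobner
```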
This PR fixes the `grind` canonicalizer for `OfNat.ofNat` applications.
Example:
```lean
example {C : Type} (h : Fin 2 → C) :
    -- `0` in the first `OfNat.ofNat` is not a raw literal
    h (@OfNat.ofNat (Fin (1 + 1)) 0 Fin.instOfNat) = h 0 := by
  grind
```
This PR ensures that the auxiliary temporary metavariable IDs created by
the E-matching module used in `grind` are not affected by what has been
executed before invoking `grind`. The goal is to increase `grind`’s
robustness.
For example, in the E-matching module we use `Expr.quickLt` to sort
candidates. `Expr.quickLt` depends on the `Expr` hash code, which in
turn depends on metavariable IDs. Thus, before this change, the initial
next metavariable ID at the time of `grind` invocation could affect the
order in which instances were generated, and consequently the `grind`
search.
This PR changes the string interpolation procedure to omit redundant
empty parts. For example `s!"{1}{2}"` previously elaborated to `toString
"" ++ toString 1 ++ toString "" ++ toString 2 ++ toString ""` and now
elaborates to `toString 1 ++ toString 2`.
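The observable behaviour is unchanged; only the elaborated term shrinks:
```lean
#eval s!"{1}{2}"  -- "12", now elaborated as `toString 1 ++ toString 2`
```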
- [x] Updated docstrings for `simp!`, `simp_all!`, `dsimp!` to use
user-friendly language
- [x] Updated docstrings for `autoUnfold` fields to use user-friendly
language
- [x] Fixed broken test by updating expected output for simp! hover
documentation
- [x] Replaced technical terms with clear language: "will unfold
applications of functions defined by pattern matching, when one of the
patterns applies"
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: nomeata <148037+nomeata@users.noreply.github.com>
This PR adds the missing lemmas `ofList_eq_insertMany_empty`,
`get?_eq_some_iff`, `getElem?_eq_some_iff` and `getKey?_eq_some_iff` to
all container types.
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4 to 5.
This PR adds missing `grind` normalization rules for `natCast` and
`intCast`. Examples:
```
open Lean.Grind
variable (R : Type) (a b : R)
section CommSemiring
variable [CommSemiring R]
example (m n : Nat) : (m + n) • a = m • a + n • a := by grind
example (m n : Nat) : (m * n) • a = m • (n • a) := by grind
end CommSemiring
section CommRing
variable [CommRing R]
example (m n : Nat) : (m + n) • a = m • a + n • a := by grind
example (m n : Nat) : (m * n) • a = m • (n • a) := by grind
example (m n : Int) : (m * n) • (a * b) = (m • a) * (n • b) := by grind
end CommRing
```
This PR makes `IO.RealWorld` opaque. It also adds a new compiler-only
`lcRealWorld` constant to represent this type within the compiler. By
default, an opaque type definition is treated like `lcAny`, whereas we
want a more efficient representation. At the moment, this isn't a big
difference, but in the future we would like to completely erase
`IO.RealWorld` at runtime.
This PR offers an alternative `noConfusion` construction for the
off-diagonal use (i.e. for different constructors), based on comparing
the `.ctorIdx`. This should lead to faster type checking, as the kernel
only has to reduce `.ctorIdx` twice, instead of the complicated
`noConfusionType` construction.
This PR changes the "declaration uses 'sorry'" error to pretty print an
actual `sorry` expression in the message. The effect is that the `sorry`
is hoverable and, if it's labeled, you can "go to definition" to see
where it came from.
The implementation prefers reporting synthetic sorries. These can appear
even if there are no error messages if a declaration refers to a
declaration that has elaboration errors. Users should focus on
elaboration errors before worrying about user-written `sorry`s.
In the future we could have some more precise logic for sorry reporting.
All the sorries in a declaration should be considered to be reported,
and we should not re-report sorries in later declarations. Some
elaborators use `warn.sorry` to avoid re-reporting sorries in auxiliary
declarations.
This PR changes the implementation of the function `unfoldPredRel` used in
the (co)inductive predicate machinery, which unfolds the pointwise order on
predicates to quantifications and implications. The previous
implementation relied on `withDeclsDND`, which could not deal with types
that depend on each other. This caused the following example to fail:
```lean4
inductive infSeq_functor1.{u} {α : Type u} (r : α → α → Prop)
    (call : {α : Type u} → (r : α → α → Prop) → α → Prop) : α → Prop where
  | step : r a b → infSeq_functor1 r call b → infSeq_functor1 r call a

def infSeq1 (r : α → α → Prop) : α → Prop := infSeq_functor1 r (infSeq1)
coinductive_fixpoint monotonicity by sorry

#check infSeq1.coinduct
```
Closes #10234.
This test involves re-running the compiler on decls that have already
been compiled, which can cause all sorts of issues. I just hit these
issues on a PR, so it's time to retire this test like others that hit
the same issues.
This PR completes the `grind` solver extension design and ports the
`grind ac` solver to the new framework. Future PRs will document the API
and port the remaining solvers. An additional benefit of the new design
is faster build times.
The proof of the `instWPMonad` instance relies on the equality of any two
terms of type `IO.RealWorld`, which is only a side effect of the current
transparent definition. Ignoring the questions around the utility of
proving things about programs in `IO`, the semantic validity of this
instance in the intended model of the IO monad is also unclear.
I tried a few things to axiomatize this instance so it could be put into
the test file to preserve the one test section that relies on it, but I
was unsuccessful; everything I attempted caused errors.
This PR adds infrastructure for registering new `grind` solvers. `grind`
already includes many solvers, and this PR is the first step toward
modularizing the design and supporting user-defined solvers.
This PR prepares for a future reorganization of the import hierarchy so
that `Init.Data.String.Basic` can import `Init.Data.UInt.Bitwise` and
`Init.Data.Array.Lemmas`.
This PR moves `String.utf8EncodeChar` to the prelude to prepare for the
imminent redefinition of `String`.
The definition in the prelude uses modulo and division operations on
natural numbers. In `String.Extra`, a `csimp` lemma is provided, showing
that the new definition is equal to the previous one (which is now
called `utf8EncodeCharFast`) which uses bitwise operations on `UInt8`.
This PR implements diagnostic information for the `grind ac` module. It
now displays the basis, normalized disequalities, and additional
properties detected for each associative operator.
This PR improves the counterexamples produced by `grind linarith` for
`NatModule`s. `grind` now hides occurrences of the auxiliary function
`Grind.IntModule.OfNatModule.toQ`.
This PR implements `NatModule` normalization when the `AddRightCancel`
instance is not available. Note that in this case, the embedding into
`IntModule` is not injective. Therefore, we use a custom normalizer,
similar to the `CommSemiring` normalizer used in the `grind ring`
module. Example:
```lean
open Lean Grind
example [NatModule α] (a b c : α)
    : 2•a + 2•(b + 2•c) + 3•a = 4•a + c + 2•b + 3•c + a := by
  grind
```
This PR adds the auxiliary theorem `Lean.Grind.Linarith.eq_normN` for
normalizing `NatModule` equations when the instance `AddRightCancel` is
not available.
This PR changes the implementation of the linear `DecidableEq`
implementation to use `match decEq` rather than `if h : ` to compare the
constructor tags. Otherwise, the “smart unfolding” machinery will not
let `rfl` decide that different constructors are different.
This PR adds support for `NatModule` equalities and inequalities in
`grind linarith`. Examples:
```lean
open Lean Grind Std
example [NatModule α] [LE α] [LT α]
    [LawfulOrderLT α] [IsLinearOrder α] [OrderedAdd α]
    (x y : α) : x ≤ y → 2 • x + y ≤ 3 • y := by
  grind

example [NatModule α] [AddRightCancel α] [LE α] [LT α]
    [LawfulOrderLT α] [IsLinearOrder α] [OrderedAdd α]
    (a b c d : α) : a ≤ b → a ≥ c + d → d ≤ 0 → d ≥ 0 → b = c → a = b := by
  grind
```
This PR changes the naming of the internal functions in deriving
instances like BEq to use accessible names. This is necessary to
reasonably easily prove things about these functions. For example after
`deriving BEq` for a type `T`, the implementation of `instBEqT` is in
`instBEqT.beq`.
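A sketch following the naming convention described above (the type is illustrative):
```lean
inductive T where
  | a | b
  deriving BEq

-- The implementation backing `instBEqT` is now reachable under a stable name.
#check @instBEqT.beq
```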
This PR adds a new option `maxErrors` that limits the number of errors
printed from a single `lean` run, defaulting to 100. Processing is
aborted when the limit is reached, but this is tracked only on a
per-command level.
Smaller values can be useful when making changes that break a lot of
files and would otherwise scroll the actual root failures out of the
terminal view.
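A hedged sketch; like other options it can presumably be set per file or via `-D` on the command line:
```lean
set_option maxErrors 20
```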
This PR implements the infrastructure for supporting `NatModule` in
`grind linarith` and uses it to handle disequalities. Another PR will
add support for equalities and inequalities. Example:
```lean
open Lean Grind
variable (M : Type) [NatModule M] [AddRightCancel M]
example (x y : M) : 2 • x + 3 • y + x = 3 • (x + y) := by
  grind
```
This PR fixes a panic in `grind ring` exposed by #10242. `grind ring`
should not assume that all normalizations have been applied, because
some subterms cannot be rewritten by `simp` due to typing constraints.
Moreover, `grind` uses `preprocessLight` in a few places, and it skips
the simplifier/normalizer.
Closes #10242
This PR improves the names of definitions and lemmas in the polymorphic
range API. It also introduces a recommended spelling. For example, a
left-closed, right-open range is spelled `Rco` in analogy with Mathlib's
`Ico` intervals.
This PR speeds up auto-completion by a factor of ~3.5x through various
performance improvements in the language server. On one machine, with
`import Mathlib`, completing `i` used to take 3200ms and now instead
yields a result in 920ms.
Specifically, the following improvements are made:
- The watchdog process no longer de-serializes and re-serializes most
messages from the file worker before passing them on to the user - a
fast partial de-serialization procedure is now used to determine whether
the message needs to be de-serialized in full or not.
- `escapePart` is optimized to perform better on ASCII strings that do
not need escaping.
- `Json.compress` is optimized to allocate fewer objects.
- A faster JSON compression specifically for completion responses is
implemented that skips allocating `Json` altogether.
- The JSON compression has been moved to the task where we convert a
request response to `Json` so that converting to a string won't block
the output task of the FileWorker and so the `Json` value is not marked
as multi-threaded when we compress it, which drastically increases the
cost of reference-counting.
- The JSON representation of the `data?` field of each completion item
is optimized.
- Both the completion kind and the set of completion tags for each
imported completion item are now cached.
- The filtering of duplicate completion items is optimized.
Other adjustments:
- `LT UInt8` and `LE UInt8` are moved to Prelude so that they can be
used in `Init.Meta` for the name part escaping fast path.
- `Array.usize` is exposed since it was marked as `@[simp]`.
This PR moves the `PackageConfig` definition from `Lake.Config.Package`
into its own module. This enables a significant reduction in the `meta
import` tree of the `Lake.CLI.Translate` modules.
This PR fixes a bug in the `LinearOrderPackage.ofOrd` factory. If there
is a `LawfulEqOrd` instance available, it should automatically use it
instead of requiring the user to provide the `eq_of_compare` argument to
the factory. The PR also solves a hygiene-related problem that made the
factories fail when `Std` is not open.
This PR adds some test cases for `grind` working with `Fin`. There are
still many failing tests in `tests/lean/grind/grind_fin.lean`, which I'm
intending to triage and work on.
This PR fixes the E-matching procedure for theorems that contain
universe parameters not referenced by any regular parameter. This kind
of theorem seldom happens in practice, but we do have instances in the
standard library. Example:
```
@[simp, grind =] theorem Std.Do.SPred.down_pure {φ : Prop} : (⌜φ⌝ : SPred []).down = φ := rfl
```
Closes #10233
This PR fixes a missing case in the `grind` canonicalizer. Some types
may include terms or propositions that are internalized later in the
`grind` state.
Closes #10232
This PR adds `MonoBind` for more monad transformers. This allows using
`partial_fixpoint` for more complicated monads based on `Option` and
`EIO`. Example:
```lean
abbrev M := ReaderT String (StateT String.Pos Option)

def parseAll (x : M α) : M (List α) := do
  if (← read).atEnd (← get) then
    return []
  let val ← x
  let list ← parseAll x
  return val :: list
partial_fixpoint
```
This PR is the result of analyzing the elaborator performance regression
introduced by #10005. It makes the `workspaceSymboldNewRanges` and
`iterators` benchmarks less noisy. It also replaces some range-related
instances for `Nat` with shortcuts to the general-purpose instances.
This is a trade-off between the ergonomics and the synthesis cost of
having general-purpose instances.
This PR generalizes the monadic operations for `HashMap`, `TreeMap`, and
`HashSet` to work for `m : Type u → Type v`.
This upstreams [a workaround from
Aesop](66a992130e/Aesop/Util/Basic.lean (L57-L66)),
and seems to continue a pattern already established in other files, such
as:
```lean
Array.forM.{u, v, w} {α : Type u} {m : Type v → Type w} [Monad m] (f : α → m PUnit) (as : Array α) (start : Nat := 0)
(stop : Nat := as.size) : m PUnit
```
This PR introduces an alternative construction for `DecidableEq`
instances that avoids the quadratic overhead of the default
construction.
The usual construction uses a `match` statement that looks at each pair
of constructors, and thus is necessarily quadratic in size. For
inductive data type with dozens of constructors or more, this quickly
becomes slow to process.
The new construction first compares the constructor tags (using the
`.ctorIdx` introduced in #9951), and handles the case of a differing
constructor tag quickly. If the constructor tags match, it uses the
per-constructor-eliminators (#9952) to create a linear-size instance. It
does so by creating a custom “matcher” for a parallel match on the data
types and the `h : x1.ctorIdx = x2.ctorIdx` assumption; this behaves
(and delaborates) like a normal `match` statement, but is implemented in
a bespoke way. This same-constructor-matcher will be useful for
implementing other instances as well.
The new construction produces less efficient code at the moment, so we
use it only for inductive types with 10 or more constructors by default.
The option `deriving.decEq.linear_construction_threshold` can be used to
adjust the threshold; set it to 0 to always use the new construction.
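A sketch of forcing the new construction regardless of constructor count, using the option named above (the inductive type is illustrative):
```lean
set_option deriving.decEq.linear_construction_threshold 0 in
inductive Color where
  | red | green | blue
  deriving DecidableEq
```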
This PR implements equality propagation from the new AC module into the
`grind` core. Examples:
```lean
example {α β : Sort u} (f : α → β) (op : α → α → α) [Std.Associative op] [Std.Commutative op]
    (a b c d : α) : op a (op b b) = op d c → f (op (op b a) (op b c)) = f (op c (op d c)) := by
  grind only

example (a b c : Nat) : min a (max b (max c 0)) = min (max c b) a := by
  grind -cutsat only

example {α β : Sort u} (bar : α → β) (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op]
    (a b c d e f x y w : α) :
    op d (op x c) = op a b →
    op e (op f (op y w)) = op (op d a) (op b c) →
    bar (op d (op x c)) = bar (op e (op f (op y w))) := by
  grind only
```
This PR adds the extra critical pairs to ensure the `grind ac` procedure
is complete when the operator is associative and idempotent, but not
commutative. Example:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op] (a b c d e f x y w : α)
    : op d (op x c) = op a b →
      op e (op f (op y w)) = op a (op b c) →
      op d (op x c) = op e (op f (op y w)) := by
  grind only

example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.IdempotentOp op] (a b c d e f x y w : α)
    : op a (op d x) = op b c →
      op e (op f (op y w)) = op a (op b c) →
      op a (op d x) = op e (op f (op y w)) := by
  grind only
```
This PR makes `saveModuleData` throw an `IO.Error` instead of panicking
if given something that cannot be serialized. This doesn't really matter
for saving modules, but is handy when writing tools to save auxiliary
data in olean files via Batteries' `pickle`.
The caller of this C++ function is already guarded in a `try`/`catch`
that promotes a `lean::exception` to an `IO.userError`.
A simple test of this in the web editor is
```
import Batteries
#eval pickle "/tmp/foo.txt" fun x : Nat => x
```
which crashes before this change.
---------
Co-authored-by: Laurent Sartran <lsartran@google.com>
This PR adds the extra critical pairs to ensure the `grind ac` procedure
is complete when the operator is AC and idempotent. Example:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.Commutative op] [Std.IdempotentOp op]
    (a b c d : α) : op a (op b b) = op d c → op (op b a) (op b c) = op c (op d c) := by
  grind only
```
This PR adds the inverse of a dyadic rational, at a given precision, and
characterising lemmas. Also cleans up various parts of the `Int.DivMod`
and `Rat` APIs, and proves some characterising lemmas about
`Rat.toDyadic`.
---------
Co-authored-by: Rob23oba <152706811+Rob23oba@users.noreply.github.com>
This PR adds superposition for associative (but non-commutative)
operators in `grind ac`. Examples:
```lean
example {α} (op : α → α → α) [Std.Associative op] (a b c d : α)
    : op a b = c →
      op b a = d →
      op (op c a) (op b c) = op (op a d) (op d b) := by
  grind

example {α} (a b c d : List α)
    : a ++ b = c →
      b ++ a = d →
      c ++ a ++ b ++ c = a ++ d ++ d ++ b := by
  grind only
```
This PR adds superposition for associative and commutative operators in
`grind ac`. Examples:
```lean
example (a b c d e f g h : Nat) :
    max a b = max c d → max b e = max d f → max b g = max d h →
    max (max f d) (max c g) = max (max e (max d (max b (max c e)))) h := by
  grind -cutsat only

example {α} (op : α → α → α) [Std.Associative op] [Std.Commutative op] (a b c d : α)
    : op a b = op b c → op c c = op d c →
      op (op d a) (op b d) = op (op a a) (op b d) := by
  grind only
```
This PR almost completely rewrites the inductive predicate recursion
algorithm, in particular `IndPredBelow`, to make it function more consistently.
Historically, the `brecOn` generation through `IndPredBelow` has been
very error-prone -- this should be fixed now since the new algorithm is
very direct and doesn't rely on tactics or meta-variables at all.
Additionally, the new structural recursion procedure for inductive
predicates shares more code with regular structural recursion and thus
allows for mutual and nested recursion in the same way it was possible
with regular structural recursion. For example, the following works now:
```lean
mutual
  inductive Even : Nat → Prop where
    | zero : Even 0
    | succ (h : Odd n) : Even n.succ
  inductive Odd : Nat → Prop where
    | succ (h : Even n) : Odd n.succ
end

mutual
  theorem Even.exists (h : Even n) : ∃ a, n = 2 * a :=
    match h with
    | .zero => ⟨0, rfl⟩
    | .succ h =>
      have ⟨a, ha⟩ := h.exists
      ⟨a + 1, congrArg Nat.succ ha⟩
  termination_by structural h
  theorem Odd.exists (h : Odd n) : ∃ a, n = 2 * a + 1 :=
    match h with
    | .succ h =>
      have ⟨a, ha⟩ := h.exists
      ⟨a, congrArg Nat.succ ha⟩
  termination_by structural h
end
```
Closes #1672. Closes #10004.
This PR implements the proof terms for the new `grind ac` module.
Examples:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] (a b c d : α)
    : op a (op b b) = op c d → op c (op d c) = op (op a b) (op b c) := by
  grind only

example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.Commutative op] (a b c d : α)
    : op a (op b b) = op d c → op (op b a) (op b c) = op c (op d c) := by
  grind only

example {α : Sort u} (op : α → α → α) [Std.Associative op] [Std.Commutative op]
    (one : α) [Std.LawfulIdentity op one] (a b c d : α)
    : op a (op (op b one) b) = op d c → op (op b a) (op (op b one) c) = op (op c one) (op d c) := by
  grind only
```
The `grind ac` module is not complete yet; we still need to implement
critical pair computation and fix the support for idempotent operators.
This PR fixes the `grind` instance normalization procedure.
Some modules in grind use builtin instances defined directly in core
(e.g., `cutsat`), while others synthesize them using `synthInstance`
(e.g., `ring`). This inconsistency is problematic, as it may introduce
mismatches and result in two different representations for the same
term. This PR fixes the issue.
This PR modifies the macros that implement non-atomic definitions and
the `$cmd1 in $cmd2` syntax. These macros involve implicit scopes,
introduced through `section` and `namespace` commands. Since sections
and namespaces are designed to delimit local attributes, this has led to
unintuitive behaviour when applying local attributes to definitions
appearing in the above-mentioned contexts. This has been causing the
following examples to fail:
```lean4
axiom A : Prop
namespace ex1
open Nat in
@[local simp] axiom a : A ↔ True
example : A := by simp
end ex1
namespace ex2
@[local simp] axiom Foo.a : A ↔ True
example : A := by simp
end ex2
```
This PR adds an internal-only piece of syntax,
`InternalSyntax.end_local_scope`, that influences
`ScopedEnvExtension.addLocalEntry` (used in implementing local
attributes) to avoid delimiting local entries in the current scope. This
command is used in the above-mentioned macros.
Closes [#9445](https://github.com/leanprover/lean4/issues/9445).
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR changes the construction of a `CompleteLattice` instance on
predicates (maps into `Prop`) inside the
`coinductive_fixpoint`/`inductive_fixpoint` machinery.
Consider the following endomap on predicates of type `α → Prop`:
```lean4
def DefFunctor (r : α → α → Prop) (infSeq : α → Prop) : α → Prop :=
  λ x : α => ∃ y, r x y ∧ infSeq y
```
The following eta-reduced expression failed to elaborate:
```lean4
def def1 (r : α → α → Prop) : α → Prop := DefFunctor r (def1 r)
coinductive_fixpoint monotonicity sorry
```
At the same time, the eta-expanded variant would elaborate correctly:
```lean4
def def2 (r : α → α → Prop) : α → Prop := fun x => DefFunctor r (def2 r) x
coinductive_fixpoint monotonicity sorry
```
This PR fixes the above issue by changing how the `CompleteLattice`
instance on the space of predicates is constructed, to allow for the
eta-reduced case outlined above.
This PR reviews the expected-to-fail-right-now tests for `grind`, moving
some (now passing) tests to the main test suite, updating some tests,
and adding some tests about normalisation of exponents.
Re-enables `Suggestion.messageData?` after it was deprecated in #9966
since it is needed for the workaround described in #10150. We will
hopefully be able to clean up this API once #10150 is properly fixed.
This PR adds benchmarks for deriving `DecidableEq` on inductives with
many constructors. (Although at the moment “many” is relative, as we
time out for more than 30 or 40 constructors.)
This PR ensures `where finally` tactics can access private data under
the module system even when the corresponding holes are in the public
scope as long as all of them are of proposition types.
This PR adds “non-branching case statements”: For each inductive
constructor `T.con` this adds a function `T.con.with` that is similar to
`T.casesOn`, but has only one arm (the one for `con`), and an additional
`t.toCtorIdx = 12` assumption.
For example:
```lean
inductive Vec (α : Type) : Nat → Type where
  | nil : Vec α 0
  | cons {n} : α → Vec α n → Vec α (n + 1)
/--
info: @[reducible] protected def Vec.cons.elim.{u} : {α : Type} →
{motive : (a : Nat) → Vec α a → Sort u} →
{a : Nat} →
(t : Vec α a) →
t.ctorIdx = 1 → ({n : Nat} → (a : α) → (a_1 : Vec α n) → motive (n + 1) (Vec.cons a a_1)) → motive a t
-/
#guard_msgs in
#print sig Vec.cons.elim
```
This is a building block for non-quadratic implementations of `BEq` and
`DecidableEq` etc.
Builds on top of #9951.
The compiled code for these functions could presumably, without
branching on the inductive value, directly access the fields. Achieving
this optimization (and achieving it without a quadratic compilation
cost) is not in scope for this PR.
Visibility is now handled implicitly for all deriving handlers by
adjusting section visibility according to the presence of private types.
Removing exposure in the presence of private constructors can be opted
into on a per-handler level via the new combinator
`withoutExposeFromCtors`.
Fixes #10062, #10063, #10064, #10065.
This PR adds support for pretty printing using generalized field
notation (dot notation) for private definitions on public types. It also
modifies dot notation elaboration to resolve names after removing the
private prefix, which enables using dot notation for private definitions
on private imported types.
It won't pretty print with dot notation for definitions on inaccessible
private types from other modules.
Closes #7297
This PR implements the basic infrastructure for the new procedure
handling AC operators in grind. It already supports normalizing
disequalities. Future PRs will add support for simplification using
equalities, and computing critical pairs. Examples:
```lean
example {α : Sort u} (op : α → α → α) [Std.Associative op] (a b c : α)
: op a (op b c) = op (op a b) c := by
grind only
example {α : Sort u} (op : α → α → α) (u : α) [Std.Associative op] [Std.LawfulIdentity op u] (a b c : α)
: op a (op b c) = op (op a b) (op c u) := by
grind only
example {α : Type u} (op : α → α → α) (u : α) [Std.Associative op] [Std.Commutative op]
[Std.IdempotentOp op] [Std.LawfulIdentity op u] (a b c : α)
: op (op a a) (op b c) = op (op (op b a) (op (op u b) b)) c := by
grind only
example {α} (as bs cs : List α) : as ++ (bs ++ cs) = ((as ++ []) ++ bs) ++ (cs ++ []) := by
grind only
example (a b c : Nat) : max a (max b c) = max (max b 0) (max a c) ∧ min a b = min b a := by
grind only [cases Or]
```
This PR ports more of the post-initialization C++ shell code to Lean.
All that remains is the initialization of the profiler and task manager.
As initialization tasks rather than main shell code, they were left in
C++ (where the rest of the initialization code currently is).
The `max_memory` and `timeout` Lean options used by the `--memory`
and `--timeout` command-line options are now properly registered. The
server defaults for max memory and max heartbeats (timeout) were removed,
as they were not actually used (the `server` option that was checked was
never set and does not exist).
This PR also makes better use of the module system in `Shell.lean` and
fixes a minor bug in a previous port where the file name check was
dependent on building the `.ilean` rather than the `.c` file (as was
originally the case).
Fixes #9879.
This PR adds a private `Lean.Name.getUtf8Byte'` to `Init.Meta` for a
future PR that optimizes `Lean.Name.escapePart`.
`Lean.Name.getUtf8Byte'` should be replaced with `String.getUtf8Byte`
once the string refactor is through.
This PR implements the fast circuit for overflow detection in unsigned
multiplication used by Bitwuzla and proposed in:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=987767
The theorem is based on three definitions:
* `uppcRec`: the unsigned parallel prefix circuit for the bits until a
certain `i`
* `aandRec`: the conjunction between the parallel prefix circuit of the
first operand up to a certain `i` and the `i`-th bit of the second
operand
* `resRec`: the preliminary overflow flag computed with these two
definitions
To establish the correspondence between these definitions and their
meaning in `Nat`, we rely on the `clz` and `clzAuxRec` definitions.
Therefore, this PR contains the `clz`- and `clzAuxRec`-related
infrastructure that was necessary to get the proofs through.
An additional change this PR contains is moving the `### Count
leading zeros` section in `BitVec.Lemmas` downwards. Some of
the proofs I wrote required introducing `BitVec.toNat_lt_iff` and
`BitVec.le_toNat_iff`, which I believe should live in the `Inequalities`
section. Therefore, to put these in the appropriate section, I decided
to move the whole `clz` section downwards (while it's small and
relatively self-contained). Specifically, the theorems I moved are:
`clzAuxRec_zero`, `clzAuxRec_succ`, `clzAuxRec_eq_clzAuxRec_of_le`,
`clzAuxRec_eq_clzAuxRec_of_getLsbD_false`.
The fast circuit is not yet the default one in the bitblaster, as its
performance is not yet competitive due to some missing rewrites that
Bitwuzla supports but are not in Lean yet.
co-authored-by: @bollu
---------
Co-authored-by: Tobias Grosser <tobias@grosser.es>
This PR lets the `ctorIdx` definition for single constructor inductives
avoid the pointless `.casesOn`, and uses `macro_inline` to avoid
compiling the function and wasting symbols.
This PR adjusts the "try this" widget to be rendered as a widget message
under 'Messages', not a separate widget under a 'Suggestions' section.
The main benefit of this is that the message of the widget is not
duplicated between 'Messages' and 'Suggestions'.
Since widget message suggestions were already implemented by @jrr6 for
the new hint infrastructure, this PR replaces the old "try this"
implementation with the new hint infrastructure. In doing so, the
`style?` field of suggestions is deprecated, since the hint
infrastructure highlights hints using diff colors, and `style?` also
never saw much use downstream. Additionally, since the message and the
suggestion are now the same component, the `messageData?` field of
suggestions is deprecated as well. Notably, the "Try this:" message
string now also contains a newline and indentation to separate the
suggestion from the rest of the message more clearly, and the `postInfo?`
field of the suggestion is now part of the message.
Finally, this PR changes the diff colors used by the hint infrastructure
to be more color-blindness-friendly (insertions are now blue, not green,
and text that remains unchanged is now using the editor foreground color
instead of blue).
### Breaking changes
Tests that use `#guard_msgs` to test the "Try this:" message may need to
be adjusted for the new formatting of the message.
This PR makes the generation of functional induction principles more
robust when the user `let`-binds a variable that is then `match`'ed on.
Fixes #10132.
This PR creates the deprecated `.toCtorIdx` alias only for enumeration
types, which are the types that used to have this function. There is no
need to generate an alias for types that never had it. This should reduce
the number of symbols in the standard library.
This PR prevents `rcases` and `obtain` from creating absurdly long case
tag names when taking single constructor types (like `Exists`) apart.
Fixes #6550
The change does not affect `cases` and `induction`, it seems (where the
user might be surprised to not address the single goal with a name),
because I make the change in `Lean/Meta/Tactic/Induction.lean`, not
`Lean/Elab/Tactic/Induction.lean`. Yes, that's confusing.
This PR defines the dyadic rationals, showing they are an ordered ring
embedding into the rationals. We will use this for future interval
arithmetic tactics.
Many thanks to @Rob23oba, who did most of the implementation work here.
---------
Co-authored-by: Rob23oba <robin.arnez@web.de>
This PR fixes an issue where private definitions recursively invoked
using generalized field notation (dot notation) would give "invalid
field" errors. It also fixes an issue where "invalid field notation"
errors would pretty print the name of the declaration with a `_private`
prefix.
Closes #10044
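A minimal sketch of the previously failing pattern (the definition itself is illustrative): a private definition that calls itself recursively via dot notation.
```lean
-- Before this fix, the recursive call `n.countDown` could be rejected with an
-- "invalid field" error because the private name was not resolved.
private def Nat.countDown : Nat → List Nat
  | 0 => [0]
  | n + 1 => (n + 1) :: n.countDown
```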
This PR reorders the `DiscrTree.Key` constructors to match the order
given in the manually written `DiscrTree.Key.ctorIdx`. This allows us to
use the auto-generated one, and moreover lets this code benefit from
special compiler support for `.ctorIdx`, once that lands.
This PR normalizes the published diagnostics in the test runner so that
messages published out of order (due to parallelism) cannot cause test
failures. Clients can handle out-of-order messages just fine.
This PR generates `.ctorIdx` functions for all inductive types, not just
enumeration types. This can be a building block for other constructions
(`BEq`, `noConfusion`) that are size-efficient even for large
inductives.
It also renames it from `.toCtorIdx` to `.ctorIdx`, which is the more
idiomatic naming.
The old name exists as an alias, with a deprecation attribute to be
added after the next
stage0 update.
These functions can arguably be compiled down to a rather efficient tag
lookup, rather than a `case` statement. This is future work (but
hopefully the near future).
For a fair number of basic types the compiler is not able to compile a
function using `casesOn` until further definitions have been defined.
This therefore (ab)uses the `genInjectivity` flag and
`gen_injective_theorems%` command to also control the generation of this
construct.
For (slightly) more efficient kernel reduction one could use `.rec`
rather than `.casesOn`. I did not do that yet, also because it
complicates compilation.
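As a small sketch of what the generated function computes (assuming, as in the `Vec` example earlier, that it maps each constructor to its zero-based index):
```lean
inductive Tree where
  | leaf
  | node (l r : Tree)

-- `Tree.ctorIdx` should send `leaf` to 0 and `node ..` to 1.
example (l r : Tree) : (Tree.node l r).ctorIdx = 1 := rfl
```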
This PR fixes a bug that caused the Lean server process tree to survive
the closing of VS Code.
The cause of this issue was that the file worker main task was blocked
waiting for the result of `lake setup-file`, because the compiler lifted
the blocking call outside of the dedicated server task that was supposed
to contain it.
This PR upstreams lemmas about `Rat` from `Mathlib.Data.Rat.Defs` and
`Mathlib.Algebra.Order.Ring.Unbundled.Rat`, specifically enough to get
`Lean.Grind.Field Rat` and `Lean.Grind.OrderedRing Rat`. In addition to
the lemmas, instances for `Inv Rat`, `Pow Rat Nat` and `Pow Rat Int`
have been upstreamed.
---------
Co-authored-by: Kim Morrison <kim@tqft.net>
This PR adds support for detecting associative operators in `grind`. The
new AC module also detects whether the operator is commutative,
idempotent, and whether it has a neutral element. The information is
cached.
This PR allows for more fine-grained control over what derived instances
have exposed definitions under the module system: handlers should not
expose their implementation unless either the deriving item or a
surrounding section is marked with `@[expose]`. Built-in handlers to be
updated after a stage 0 update.
This PR adds the modules in `Lake.Util` to core's Lake configuration to
ensure all utilities are built. With the module system port, they were
no longer all transitively imported.
Specifically, `Lake.Util.Lock` is unused because Lake does not currently
use a lock file for the build.
This PR allows Lean's parser to run with a final position prior to the
end of the string, so it can be invoked on a sub-region of the input.
This has applications in Verso proper, which parses Lean syntax in
contexts such as code blocks and docstrings, and it is a prerequisite to
parsing the contents of Lean docstrings.
This PR replaces `Std.Internal.Rat` with the new public `Rat` upstreamed
from Batteries.
The time library was depending on some defeqs which are no longer true,
so I have inserted some casts.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
Co-authored-by: Sofia Rodrigues <sofia@algebraic.dev>
This PR contains lemmas about `Int` (minor amendments for BitVec and
Nat) that are being used in preparing the dyadics. This is all work of
@Rob23oba, which I'm pulling out of #9993 early to keep that one
manageable.
This PR fixes the compilation of `noConfusion` by repairing an oversight
made when porting this code from the old compiler. The old compiler only
repeatedly expanded the major for each non-`Prop` field of the inductive
under consideration, mirroring the construction of `noConfusion` itself,
whereas the new compiler erroneously counted all fields.
Fixes #9971.
This PR improves support for `a^n` in `grind cutsat`. For example, if
`cutsat` discovers that `a` and `b` are equal to numerals, it now
propagates the equality. This PR is similar to #9996, but for `a^b`.
Example:
```lean
example (n : Nat) : n = 2 → 2 ^ (n+1) = 8 := by
grind
```
With #10022, it also improves the support for `BitVec n` when `n` is not
a numeral. Example:
```lean
example {n m : Nat} (x : BitVec n)
: 2 ≤ n → n ≤ m → m = 2 → x = 0 ∨ x = 1 ∨ x = 2 ∨ x = 3 := by
grind
```
This PR refactors the Lake codebase to use the new module system
throughout. Every module in `Lake` is now a `module`.
As this was already a large-scale refactor, a general cleanup of the
code has also been bundled in.
This PR also uses workarounds for currently outstanding module system
issues: #10061, #10062, #10063, #10064, #10065, #10067, and #10068.
**Breaking change:** Since the module system encourages a
`private`-by-default design, the Lake API has switched from its previous
`public`-by-default approach. As such, many definitions that were
previously public are now private. The newly private definitions are not
expected to have had significant user use. Nonetheless, important use
cases could have been missed. If a key API is now inaccessible but seems like
it should be public, users are encouraged to report this as an issue on
GitHub.
This PR reverts parts of #10005 that surprisingly turned out to cause a
performance regression in the benchmarks. The slowdown seems to be
related to elaboration, not inefficiencies in the generated code. This
is just a quick fix. I will take a closer look in a week.
This PR adds a stop position field to parser input contexts, allowing
the parser to be instructed to stop parsing prior to the end of a file.
This is step 1, prior to a stage0 update, to make run-time data
structures sufficiently compatible to avoid segfaults. After the update,
the actual code to stop parsing can be merged.
This PR implements the necessary typeclasses so that range notation
works for integers. For example, `((-2)...3).toList = [-2, -1, 0, 1, 2]
: List Int`.
This PR changes macro scope numbering from per-module to per-command,
ensuring that unrelated changes to other commands do not affect macro
scopes generated by a command, which improves `prefer_native` hit rates
on bootstrapping as well as avoids further rebuilds under the module
system.
In detail, instead of always using the current module name as a macro
scope prefix, each command now introduces a new macro scope prefix
(called "context") of the shape `<main module>._hygCtx_<uniq>` where
`uniq` is a `UInt32` derived from the command but automatically
incremented in case of conflicts (which must be local to the current
module). In the current implementation, `uniq` is the hash of the
declaration name, if any, or else the hash of the full command's syntax.
Thus, it is always independent of syntactic changes to other commands
(except in case of hash conflicts, which should only happen in practice
for syntactically identical commands) and, in the case of declarations,
also independent of syntactic changes to any private parts of the
declaration.
This PR replaces the implementation of `Nat.log2` with a version that
reduces faster.
The new version can handle:
```lean4
example : Nat.log2 (1 <<< 500) = 500 := rfl
```
This PR enables core's `LakeMain` to be a `module` when core is built
without `USE_LAKE`.
This was a problem when porting Lake to the module system (#9749).
This PR shortens the work necessary to make a type compatible with the
polymorphic range notation. In the concrete case of `Nat`, it reduces
the required lines of code from 150 to 70.
This PR adds useful declarations to the `LawfulOrderMin/Max` and
`LawfulOrderLeftLeaningMin/Max` API. In particular, it introduces
`.leftLeaningOfLE` factories for `Min` and `Max`. It also renames
`LawfulOrderMin/Max.of_le` to `.of_le_min_iff` and `.of_max_le_iff` and
introduces a second variant with different arguments.
This PR changes the `toMono` pass to replace decls with their `_redArg`
equivalent, which has the consequence of not considering arguments
deemed useless by the `reduceArity` pass for the purposes of the
`noncomputable` check.
This PR adds support for correctly handling computations on fields in
`casesOn` for inductive predicates that support large elimination. In
any such predicate, the only relevant fields allowed are those that are
also used as an index, in which case we can find the supplied index and
use that term instead.
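A sketch of the kind of predicate involved (the names are illustrative): the constructor's only field also appears as the index, so the predicate supports large elimination, and the value can be recovered from the index rather than from the erased proof.
```lean
inductive IsNat : Nat → Prop where
  | mk (n : Nat) : IsNat n

-- `casesOn` computes with the field `n`, which coincides with the index.
def extract (n : Nat) (h : IsNat n) : Nat :=
  h.casesOn (fun m => m)

example : extract 5 (.mk 5) = 5 := rfl
```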
This PR improves support for `Fin n` in `grind cutsat` when `n` is not a
numeral. For example, the following goals can now be solved
automatically:
```lean
example (p d : Nat) (n : Fin (p + 1))
: 2 ≤ p → p ≤ d + 1 → d = 1 → n = 0 ∨ n = 1 ∨ n = 2 := by
grind
example (s : Nat) (i j : Fin (s + 1)) (hn : i ≠ j) (hl : ¬i < j) : j < i := by
grind
example {n : Nat} (j : Fin (n + 1)) : j ≤ j := by
grind
example {n : Nat} (x y : Fin ((n + 1) + 1)) (h₂ : ¬x = y) (h : ¬x < y) : y < x := by
grind
```
This PR adds `@[expose]` to `Lean.ParserState.setPos`. This makes it
possible to prove in-boundedness for a state produced by `setPos` for
functions like `next'` and `get'` without needing to `import all`.
This came up while porting Lake to the module system (#9749).
This PR changes the handling of overapplied constructors when lowering
LCNF to IR from a (slightly implicit) assertion failure to producing
`unreachable`. Transformations on inlined unreachable code can produce
constructor applications with additional arguments.
In the old compiler, these additional arguments were silently ignored,
but it seems more sensible to replace them with `unreachable`, just in
case they arise due to a compiler error.
Fixes #9937.
This PR derives `BEq` and `Hashable` for `Lean.Import`. Lake already did
this later, but it is now done when defining `Import`.
Doing this in Lake became problematic when porting it to the module
system (#9749).
This PR exposes the bodies of `Name.append`, `Name.appendCore`, and
`Name.hasMacroScopes`. This enables proof by reflection of the
concatenation of name literals when using the module system.
```lean
example : `foo ++ `bar = `foo.bar := rfl
```
This is necessary for Lake as part of the port to using `module`
(#9749).
This PR makes some minor changes to the grind annotation analysis script,
including sorting results and handling errors. We still need to add an
external UI.
This PR changes Lake to not set `LEAN_GITHASH` when in core (i.e.
`bootstrap = true`). This avoids Lake rebuilding modules when the Lake
watchdog is on one build of Lean/Lake and the command line is on a
different one.
The current reuse analysis is greedy in that every function attempts to
reuse a value. However, this means that if the last use is an owned
argument, it will be `inc`'d prior to this last use, in order to prevent
reuse from happening in the callee. In many cases, it makes more sense
to give the callee the chance to reuse it instead. The benchmark results
indicate that this is a much better default.
This PR improves support for nonlinear `/` and `%` in `grind cutsat`.
For example, given `a / b`, if `cutsat` discovers that `b = 2`, it now
propagates that `a / b = a / 2`. This PR is similar to #9996, but for
`/` and `%`. Example:
```lean
example (a b c d : Nat)
: b > 1 → d = 1 → b ≤ d + 1 → a % b = 1 → a = 2 * c → False := by
grind
```
This PR fixes a bug in `#eval` where clicking on the evaluated
expression could show errors in the Infoview. This was caused by `#eval`
not saving the temporary environment that is used when elaborating the
expression.
This PR provides factories that derive order typeclasses in bulk, given
an `Ord` instance. If present, existing instances are preferred over
those derived from `Ord`. It is possible to specify any instance
manually if desired.
This PR reduces the number of `Nat.Bitwise` grind annotations we have
that deal with distributivity. The new smaller set encourages `grind` to
rewrite into DNF. The old behaviour just resulted in saturating up to
the instantiation limits.
This PR improves support for nonlinear monomials in `grind cutsat`. For
example, given a monomial `a * b`, if `cutsat` discovers that `a = 2`,
it now propagates that `a * b = 2 * b`.
Recall that nonlinear monomials like `a * b` are treated as variables in
`cutsat`, a procedure designed for linear integer arithmetic.
Example:
```lean
example (a : Nat) (ha : a < 8) (b c : Nat) : 2 ≤ b → c = 1 → b ≤ c + 1 → a * b < 8 * b := by
grind
example (x y z w : Int) : z * x * y = 4 → x = z + w → z = 1 → w = 2 → False := by
grind
```
This PR registers a parser alias for `Lean.Parser.Command.visibility`.
This avoids having to import `Lean.Parser.Command` in simple command
macros that use visibilities.
This PR provides the means to quickly provide all the order instances
associated with some high-level order structure (preorder, partial
order, linear preorder, linear order). This can be done via the factory
functions `PreorderPackage.ofLE`, `PartialOrderPackage.ofLE`,
`LinearPreorderPackage.ofLE` and `LinearOrderPackage.ofLE`.
This PR makes `IsPreorder`, `IsPartialOrder`, `IsLinearPreorder` and
`IsLinearOrder` extend `BEq` and `Ord` as appropriate, adds the
`LawfulOrderBEq` and `LawfulOrderOrd` typeclasses relating `BEq` and
`Ord` to `LE`, and adds many lemmas and instances.
Note: This PR contains a refactoring where `Init.Data.Ord` is moved to
`Init.Data.Ord.Basic`. If I added `Init.Data.Ord` simply importing all
submodules, git would not be able to determine that `Init.Data.Ord` was
renamed to `Init.Data.Ord.Basic`. This could lead to unnecessary merge
conflicts in the future. Hence, I chose the name `Init.Data.OrdRoot`
instead of `Init.Data.Ord` temporarily. After this PR, I will rename
this module back to `Init.Data.Ord` in a separate PR.
(This is a copy of #9430: I will not touch that PR because it is
currently being used to debug a CI problem, and pushing commits might
break reproducibility.)
This PR eliminates uses of `intros x y z` (with arguments) and updates
the `intros` docstring to suggest that `intro x y z` should be used
instead. The `intros` tactic is historical, and can be traced all the
way back to Lean 2, when `intro` could only introduce a single
hypothesis. Since 2020, the `intro` tactic has superseded it. The
`intros` tactic (without arguments) is currently still useful.
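For example, the preferred style looks like this (a trivial sketch):
```lean
example : ∀ n m : Nat, n + m = n + m := by
  intro n m   -- preferred over `intros n m`
  rfl
```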
This PR upstreams the definition of Rat from Batteries, for use in our
planned interval arithmetic tactic.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR adds two test cases, extracted from Mathlib, that `grind` cannot
solve but `omega` can. Originally the multiplication instance came from
`Nat.instSemiring` and `Int.instSemiring`; while minimizing, I found that
`Distrib` is already enough.
---------
Co-authored-by: Kim Morrison <kim@tqft.net>
This PR fixes an issue when running Mathlib's `FintypeCat` as code,
where an erased type former is passed to a polymorphic function. We were
lowering the arrow type to `object`, which conflicts with the runtime
representation of an erased value as a tagged scalar.
This PR modifies the generation of induction and partial correctness
lemmas for `mutual` blocks defined via `partial_fixpoint`. Additionally,
the generation of lattice-theoretic induction principles of functions
via `mutual` blocks is modified for consistency with `partial_fixpoint`.
The lemmas now come in two variants:
1. A conjunction variant that combines conclusions for all elements of
the mutual block. This is generated only for the first function inside
of the mutual block.
2. Projected variants for each function separately
## Example 1
```lean4
axiom A : Type
axiom B : Type
axiom A.toB : A → B
axiom B.toA : B → A
mutual
noncomputable def f : A := g.toA
partial_fixpoint
noncomputable def g : B := f.toB
partial_fixpoint
end
```
Generated `fixpoint_induct` lemmas:
```lean4
f.fixpoint_induct (motive_1 : A → Prop) (motive_2 : B → Prop) (adm_1 : admissible motive_1)
(adm_2 : admissible motive_2) (h_1 : ∀ (g : B), motive_2 g → motive_1 g.toA)
(h_2 : ∀ (f : A), motive_1 f → motive_2 f.toB) : motive_1 f
g.fixpoint_induct (motive_1 : A → Prop) (motive_2 : B → Prop) (adm_1 : admissible motive_1)
(adm_2 : admissible motive_2) (h_1 : ∀ (g : B), motive_2 g → motive_1 g.toA)
(h_2 : ∀ (f : A), motive_1 f → motive_2 f.toB) : motive_2 g
```
Mutual (conjunction) variant:
```lean4
f.mutual_fixpoint_induct (motive_1 : A → Prop) (motive_2 : B → Prop) (adm_1 : admissible motive_1) (adm_2 : admissible motive_2)
(h_1 : ∀ (g : B), motive_2 g → motive_1 g.toA) (h_2 : ∀ (f : A), motive_1 f → motive_2 f.toB) :
motive_1 f ∧ motive_2 g
```
## Example 2
```lean4
mutual
def f (n : Nat) : Option Nat :=
g (n + 1)
partial_fixpoint
def g (n : Nat) : Option Nat :=
if n = 0 then .none else f (n + 1)
partial_fixpoint
end
```
Generated `partial_correctness` lemmas (in a projected variant):
```lean4
f.partial_correctness (motive_1 motive_2 : Nat → Nat → Prop)
(h_1 :
∀ (g : Nat → Option Nat),
(∀ (n r : Nat), g n = some r → motive_2 n r) → ∀ (n r : Nat), g (n + 1) = some r → motive_1 n r)
(h_2 :
∀ (f : Nat → Option Nat),
(∀ (n r : Nat), f n = some r → motive_1 n r) →
∀ (n r : Nat), (if n = 0 then none else f (n + 1)) = some r → motive_2 n r)
(n r✝ : Nat) : f n = some r✝ → motive_1 n r✝
g.partial_correctness (motive_1 motive_2 : Nat → Nat → Prop)
(h_1 :
∀ (g : Nat → Option Nat),
(∀ (n r : Nat), g n = some r → motive_2 n r) → ∀ (n r : Nat), g (n + 1) = some r → motive_1 n r)
(h_2 :
∀ (f : Nat → Option Nat),
(∀ (n r : Nat), f n = some r → motive_1 n r) →
∀ (n r : Nat), (if n = 0 then none else f (n + 1)) = some r → motive_2 n r)
(n r✝ : Nat) : g n = some r✝ → motive_2 n r✝
```
Mutual (conjunction) variant:
```
f.mutual_partial_correctness (motive_1 motive_2 : Nat → Nat → Prop)
(h_1 :
∀ (g : Nat → Option Nat),
(∀ (n r : Nat), g n = some r → motive_2 n r) → ∀ (n r : Nat), g (n + 1) = some r → motive_1 n r)
(h_2 :
∀ (f : Nat → Option Nat),
(∀ (n r : Nat), f n = some r → motive_1 n r) →
∀ (n r : Nat), (if n = 0 then none else f (n + 1)) = some r → motive_2 n r) :
(∀ (n r : Nat), f n = some r → motive_1 n r) ∧ ∀ (n r : Nat), g n = some r → motive_2 n r
```
This PR modifies `intro` to create tactic info localized to each
hypothesis, making it possible to see how `intro` works
variable-by-variable. Additionally:
- The tactic supports `intro rfl` to introduce an equality and
immediately substitute it, like `rintro rfl` (recall: the `rfl` pattern
is like doing `intro h; subst h`). The `rintro` tactic can also now
support `HEq` in `rfl` patterns if `eq_of_heq` applies.
- In `intro (h : t)`, elaboration of `t` is interleaved with unification
with the type of `h`, which prevents default instances from causing
unification to fail.
- Tactics that change types of hypotheses (including `intro (h : t)`,
`delta`, `dsimp`) now update the local instance cache.
In `intro x y z`, tactic info ranges are `intro x`, `y`, and `z`. The
reason for including `intro` with `x` is to make sure the info range is
"monotonic" while adding the first argument to `intro`.
This PR adds lemmas for the `TreeMap` operations `filter`, `map` and
`filterMap`. These lemmas existed already for hash maps and are simply
ported over from there.
This PR allows most of the `List.lookup` lemmas to be used when
`LawfulBEq α` is not available.
`LawfulBEq` is very strong. Most of the lemmas don't actually require it
-- some only require `ReflBEq`, and only `List.lookup_eq_some_iff`
actually requires `LawfulBEq`.
This PR moves arithmetic of `String.Pos` out of the prelude.
Other `String` declarations are part of the prelude because they are
generated by macros, but this does not seem to be the case for these.
This PR cleans up `optParam`/`autoParam`/etc. annotations before
elaborating definition bodies, theorem bodies, `fun` bodies, and `let`
function bodies. Both `variable`s and binders in declaration headers are
supported.
There are no changes to `inductive`/`structure`/`axiom`/etc. processing,
just `def`/`theorem`/`example`/`instance`.
This PR ensures that equations in the `grind cutsat` module are
maintained in solved form. That is, given an equation `a*x + p = 0` used
to eliminate `x`, the linear polynomial `p` must not contain other
eliminated variables. Before this PR, equations were maintained in
triangular form. We are going to use the solved form to linearize
nonlinear terms.
This PR removes the option `grind +ringNull`. It provided an alternative
proof term construction for the `grind ring` module, but it was less
effective than the default proof construction mode and had effectively
become dead code.
This PR also optimizes semiring normalization proof terms using the
infrastructure added in #9946.
**Remark:** After updating stage0, we can remove several background
theorems from the `Init/Grind` folder.
This PR optimizes the proof terms produced by `grind linarith`. It is
similar to #9945, but for the `linarith` module in `grind`.
It removes unused entries from the context objects when generating the
final proof, significantly reducing the amount of junk in the resulting
terms.
This PR optimizes the proof terms produced by `grind cutsat`. It removes
unused entries from the context objects when generating the final proof,
significantly reducing the amount of junk in the resulting terms.
Example:
```lean
/--
trace: [grind.debug.proof] fun h h_1 h_2 h_3 h_4 h_5 h_6 h_7 h_8 =>
let ctx := RArray.leaf (f 2);
let p_1 := Poly.add 1 0 (Poly.num 0);
let p_2 := Poly.add (-1) 0 (Poly.num 1);
let p_3 := Poly.num 1;
le_unsat ctx p_3 (eagerReduce (Eq.refl true)) (le_combine ctx p_2 p_1 p_3 (eagerReduce (Eq.refl true)) h_8 h_1)
-/
#guard_msgs in -- Context should contain only `f 2`
open Lean Int Linear in
set_option trace.grind.debug.proof true in
example (f : Nat → Int) :
f 1 <= 0 → f 2 <= 0 → f 3 <= 0 → f 4 <= 0 → f 5 <= 0 →
f 6 <= 0 → f 7 <= 0 → f 8 <= 0 → -1 * f 2 + 1 <= 0 → False := by
grind
```
This PR expands `mvcgen using invariants | $n => $t` to `mvcgen; case
inv<$n> => exact $t` to avoid MVar instantiation mishaps observable in
the test case for #9581.
Closes #9581.
This PR implements extended `induction`-inspired syntax for `mvcgen`,
allowing optional `using invariants` and `with` sections.
```lean
mvcgen
using invariants
| 1 => Invariant.withEarlyReturn
(onReturn := fun ret seen => ⌜ret = false ∧ ¬l.Nodup⌝)
(onContinue := fun traversalState seen =>
⌜(∀ x, x ∈ seen ↔ x ∈ traversalState.prefix) ∧ traversalState.prefix.Nodup⌝)
with mleave -- mleave is a no-op here, but we are just testing the grammar
| vc1 => grind
| vc2 => grind
| vc3 => grind
| vc4 => grind
| vc5 => grind
```
This PR guards the `Std.Tactic.Do.MGoalEntails` delaborator by a check
ensuring that there are at least 3 arguments present, preventing
potential panics.
This PR fixes the `forIn` function, which previously caused the resulting
Promise to be dropped without a value when an exception was thrown
inside of it. It also corrects the parameter order of the `background`
function.
This PR changes `internalizeCode` to replace all substitutions with
non-param-bound fvars in `Expr`s (which are all types) with `lcAny`,
preserving the invariant that there are no such dependencies. The
violation of this invariant across files caused test failures in a
pending PR, but it is difficult to write a direct test for it. In the
future, we should probably change the LCNF checker to detect this.
This change also speeds up some compilation-heavy benchmarks much more
than I would've expected, which is a pleasant surprise. This indicates
we might get more speedups from reducing the amount of type information
we preserve in LCNF.
This PR fixes a panic in the delaborator for `Std.PRange`. It also
modifies the delaborators for both `Std.Range` and `Std.PRange` to not
use `let_expr`, which cleans up annotations and metadata, since
delaborators must follow the structures of expressions. It adds support
for `pp.notation` and `pp.explicit` options. It also adds tests for
these delaborators.
---------
Co-authored-by: Kim Morrison <kim@tqft.net>
Co-authored-by: Kyle Miller <kmill31415@gmail.com>
This PR adds a guard for a delaborator that is causing panics in
doc-gen4. This is a band-aid solution for now, and @sgraf812 will take a
look when they're back from leave.
This PR ensures `grind cutsat` does not rely on div/mod terms to have
been normalized. The `grind` preprocessor has normalizers for them, but
sometimes they cannot be applied because of type dependencies.
Closes #9907
This PR addresses a missing check in the module system where private
names that remain in the public environment map for technical reasons
(e.g. inductive constructors generated by the kernel and relied on by
the code generator) were accidentally accessible in the public scope.
This PR reviews `grind` annotations for `Option`, preferring to use
`@[grind =]` instead of `@[grind]` (and fixing a few problems revealed
by this), and making sure `@[grind =]` theorems are "fully applied".
This PR lets the unused simp argument linter explain that the given hint
of removing `←` arguments may be too strong, and that replacing them
with `-` arguments may be needed instead. Fixes #9909.
This PR adds a JSON schema for `lakefile.toml`. Importantly, this schema
is *not* intended for validating `lakefile.toml`, but is instead
optimized for auto-completion and hovers using the [Even Better
TOML](https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml)
VS Code extension.
Once merged, I will attempt to contribute a link to this schema to the
[JSON Schema store](https://github.com/SchemaStore/schemastore). When
that is done, we can integrate the Lean 4 VS Code extension with Even
Better TOML, providing us with language server support in
`lakefile.toml`.
The schema contributed by this PR has the following known deficiencies:
- Superfluous properties do not produce an error.
- The structure of complicated values (e.g. path or version
patterns) is deliberately not accurately reflected in the schema. Even
Better TOML doesn't seem to handle such structures well in
auto-completion.
- Due to the lack of an accurate declarative spec of the lakefile.toml
format and several deviations from the format to provide better
auto-completions, this schema will have to be kept in sync manually with
the code in Lake, at least for now.
This PR ensures that `Nat.cast` and `Int.cast` of numerals are
normalized by `grind`.
It also adds a `simp` flag for controlling how bitvector literals are
represented. By default, the bitvector simprocs use `BitVec.ofNat`. This
representation is problematic for the `grind ring` and `grind cutsat`
modules. The new flag allows the use of `OfNat.ofNat` and `Neg.neg` to
represent literals, consistent with how they are represented for other
commutative rings.
Closes #9321
This PR improves the heuristic used to select patterns for local
`forall` expressions occurring in the goal being solved by `grind`. It
now considers all singleton patterns in addition to the selected
multi-patterns. Example:
```lean
example (p : Nat → Prop) (h₁ : x < n) (h₂ : ¬ p x) : ∃ i, i < n ∧ ¬ p i := by
grind
```
This PR makes `mvcgen` aggressively eta-expand before trying to apply a
spec. This ensures that `mspec` will be able to frame hypotheses
involving uninstantiated loop invariants in goals for the inductive step
of a loop instead of losing them in a destructive world update.
This PR moves `List.range'_elim` to `List.eq_of_range'_eq_append_cons`
and adds a couple of `grind` annotations for `List.range'`. This will
make it more convenient to work with proof obligations produced by
`mvcgen`.
This PR is initially motivated by noticing `Lean.Grind.Preorder.toLE`
appearing in long Mathlib typeclass searches; this change will prevent
these searches. These changes are also helpful preparation for
potentially dropping the custom `Lean.Grind.*` typeclasses, and unifying
with the new typeclasses introduced in #9729.
This PR adds improved support for proof-by-reflection to the kernel type
checker. It addresses the performance issue exposed by #9854. With this
PR, whenever the kernel type-checks an argument of the form `eagerReduce
_`, it enters "eager-reduction" mode. In this mode, the kernel is more
eager to reduce terms. The new `eagerReduce _` hint is often used to
wrap `Eq.refl true`. The new hint should not negatively impact any
existing Lean package.
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR adds new variants of `Array.getInternal` and
`Array.get!Internal` that return their argument borrowed, i.e. without a
reference count increment. These are intended for use by the compiler in
cases where it can determine that the array will continue to hold a
valid reference to the element for the returned value's lifetime.
In the future, this will likely be replaced by a return value borrow
annotation, in which case the special variant of the functions could be
removed, with the compiler inserting an extra `inc` in the non-borrow
cases.
This PR ensures `grind` can E-match patterns containing universe
polymorphic ground sub-patterns. For example, given
```
set_option pp.universes true in
attribute [grind?] Id.run_pure
```
the pattern
```
Id.run_pure.{u_1}: [@Id.run.{u_1} #1 (@pure.{u_1, u_1} `[Id.{u_1}] `[Applicative.toPure.{u_1, u_1}] _ #0)]
```
contains two nested universe polymorphic ground patterns
- `Id.{u_1}`
- `Applicative.toPure.{u_1, u_1}`
This kind of pattern is not common, but it occurs in core.
This PR removes the `inShareCommon` quick filter used in `grind`
preprocessing steps. `shareCommon` is no longer used only for fully
preprocessed terms.
Closes #9830
This PR makes `mvcgen` produce deterministic case labels for the
generated VCs. Invariants will be named `inv<n>` and every other VC will
be named `vc<n>.*`, where the `*` part serves as a loose indication of
provenance.
This PR introduces a canonical way to endow a type with an order
structure. The basic operations (`LE`, `LT`, `Min`, `Max`, and in later
PRs `BEq`, `Ord`, ...) and any higher-level property (a preorder, a
partial order, a linear order etc.) are then put in relation to `LE` as
necessary. The PR provides `IsLinearOrder` instances for many core types
and updates the signatures of some lemmas.
**BREAKING CHANGES:**
* The requirements of the `lt_of_le_of_lt`/`le_trans` lemmas for
`Vector`, `List` and `Array` are simplified. They now require an
`IsLinearOrder` instance. The new requirements are logically equivalent
to the old ones, but the `IsLinearOrder` instance is not automatically
inferred from the smaller typeclasses.
* Hypotheses of type `Std.Total (¬ · < · : α → α → Prop)` are replaced
with the equivalent class `Std.Asymm (· < · : α → α → Prop)`. Breakage
should be limited because there is now an instance that derives the
latter from the former.
* In `Init.Data.List.MinMax`, multiple theorem signatures are modified,
replacing explicit parameters for antisymmetry, totality, `min_ex_or`
etc. with corresponding instance parameters.
This PR migrates the `⌜p⌝` notation for embedding a pure `p : Prop` into
`SPred σs` to expand into a simple, first-order expression `SPred.pure p`
that can be supported by E-matching in `grind`.
Doing so deprives the `⌜p⌝` notation of its idiom-bracket-like support
for `#selector` and `‹Nat›ₛ` syntax, which is thus removed.
This PR replaces some `HashSet Expr`-typed collections of facts in
`omega`'s implementation with plain lists. This change makes some
`omega` calls faster, some slower, but the advantage is that `omega`'s
performance is more independent of the state of the name generator that
produces fvar IDs.
I've created this PR for discussion and am happy to hear opinions on
whether this should be merged or not. A good reason *not* to merge is
that it causes regressions in some places and `grind` is expected to
supersede `omega` either way. A good reason to merge is that `omega` is
used all over the place and its flaky performance increases the noise in
future benchmarks.
This PR fixes a bug in `mvcgen` triggered by excess state arguments to
the `wp` application, a situation which arises when working with
`StateT` primitives.
This PR improves the delta deriving handler, giving it the ability to
process definitions with binders, as well as the ability to recursively
unfold definitions. Furthermore, delta deriving now tries all explicit
non-out-param arguments to a class, and it can handle "mixin" instance
arguments. The `deriving` syntax has been changed to accept general
terms, which makes it possible to derive specific instances with for
example `deriving OfNat _ 1` or `deriving Module R`. The class is
allowed to be a pi type, to add additional hypotheses; here is a Mathlib
example:
```lean
def Sym (α : Type*) (n : ℕ) :=
{ s : Multiset α // Multiset.card s = n }
deriving [DecidableEq α] → DecidableEq _
```
The underscore marks where `Sym α n` may be inserted, which is
necessary when `→` is used. The `deriving instance` command can refer to
scoped variables when delta deriving as well. Breaking change: the
derived instance's name uses the `instance` command's name generator,
and the new instance is added to the current namespace.
This closes
[mathlib4#380](https://github.com/leanprover-community/mathlib4/issues/380).
This PR reorganizes the directory structure of Lake's module test and
renames some of the files to be more descriptive.
Originally, this was meant to be combined with a fix, but that fix
appears to be incorrect, so this is just a refactor.
This PR fixes a bug where the `DecidableEq` deriving handler did not
take universe levels into account for enumerations (inductive types
whose constructors all have no fields). Closes #9541.
This PR adds a script for analyzing `grind` E-matching annotations. The
script is useful for detecting matching loops. We plan to add
user-facing commands for running the script in the future.
This PR allows a trailing comma in the argument list of `simp?`, `dsimp?`,
`simpa`, etc. Previously, it was only allowed in the non-`?` variants
`simp`, `dsimp`, and `simp_all`.
Closes #7383.
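A sketch of the newly accepted syntax (the particular goal is incidental):
```lean
example (n : Nat) (h : n = 0) : n + 1 = 1 := by
  simp? [h,]   -- the trailing comma is now also accepted in the `?` variants
```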
This PR improves the API for invariants and postconditions and as such
introduces a few breaking changes to the existing pre-release API around
`Std.Do`. It also adds Markus Himmel's `pairsSumToZero` example as a
test case.
This PR changes the IR RC pass to take "implied borrows" from
projections into account. If a projected value's lifetime is contained
in that of its parent (or any projection ancestor), then it does not
need its reference count incremented (or later decremented).
I believe that this same technique should generalize to both the
reset/reuse and borrow signature inference passes.
This PR splits out an implementation detail of
`MVarId.getMVarDependencies` into a top-level function. Aesop was relying
on the function defined in the `where` clause, which is no longer possible
after #9759.
This PR modifies the pretty printing of anonymous metavariables to use
the index rather than the internal name. This leads to smaller numerical
suffixes in `?m.123` since the indices are numbered within a given
metavariable context rather than across an entire file, hence each
command gets its own numbering. This does not yet affect pretty printing
of universe level metavariables.
For debugging purposes, metavariables that are not defined now pretty
print as `?_mvar.123` rather than cause pretty printing to fail.
This PR combines the simplification and unfold-reducible-constants steps
in `grind` to ensure that no potential normalization steps are missed.
Closes #9610
This PR fixes a bug in the projection over constructor propagator used
in `grind`. It could construct type-incorrect terms when an equivalence
class contains heterogeneous equalities.
Closes #9769
We compute the liveness information for the join point body, so the only
thing that `updateJPLiveVarMap` should be adding is the binding of the
params, which we can easily do ourselves.
If we supported recursive join points, I believe this would actually be
a correctness issue, but as-is it doesn't affect the output.
This PR fixes the documentation of the pipe operator |>, which is
currently (emphasis mine):
> Haskell-like pipe operator `|>`. `x |> f` means **the same as the same
as** `f x`,
> and it chains such that `x |> f |> g` is interpreted as `g (f x)`.
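For reference, a quick check of the corrected statement that `x |> f` means `f x` and chains as `g (f x)`:
```lean
-- `|>` applies the function on the right to the value on the left.
example : (2 |> Nat.succ |> Nat.succ) = Nat.succ (Nat.succ 2) := rfl

#eval [3, 1, 2] |> List.reverse |> List.head!   -- 2
```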
This PR implements the option `mvcgen +jp` to employ a slightly lossy VC
encoding for join points that prevents exponential VC blowup incurred by
naïve splitting on control flow.
```lean
def ifs_pure (n : Nat) : Id Nat := do
let mut x := 0
if n > 0 then x := x + 1 else x := x + 2
if n > 1 then x := x + 3 else x := x + 4
if n > 2 then x := x + 1 else x := x + 2
if n > 3 then x := x + 1 else x := x + 2
if n > 4 then x := x + 1 else x := x + 2
if n > 5 then x := x + 1 else x := x + 2
return x
theorem ifs_pure_triple : ⦃⌜True⌝⦄ ifs_pure n ⦃⇓ r => ⌜r > 0⌝⦄ := by
unfold ifs_pure
mvcgen +jp
/-
...
h✝⁵ : if n > 0 then x✝⁵ = 0 + 1 else x✝⁵ = 0 + 2
h✝⁴ : if n > 1 then x✝⁴ = x✝⁵ + 3 else x✝⁴ = x✝⁵ + 4
h✝³ : if n > 2 then x✝³ = x✝⁴ + 1 else x✝³ = x✝⁴ + 2
h✝² : if n > 3 then x✝² = x✝³ + 1 else x✝² = x✝³ + 2
h✝¹ : if n > 4 then x✝¹ = x✝² + 1 else x✝¹ = x✝² + 2
h✝ : if n > 5 then x✝ = x✝¹ + 1 else x✝ = x✝¹ + 2
⊢ x✝ > 0
-/
grind
```
This PR does what #9234 regrettably failed to do: actually reintroduce
the signatures of some `Subarray` functions that are now implemented via
slices (see #9017) in order to ensure backward compatibility and
consistency. With this PR, the old interface is restored. As an added
benefit, `Subarray.forIn` is no longer opaque.
This PR re-implements `IO.waitAny` using Lean instead of C++. This is to
reduce the size and
complexity of `task_manager` in order to ease future refactorings.
There is an important behavioral change to `IO.waitAny` in this PR.
Consider a situation where we have
two promises `p1`, `p2` and call `IO.waitAny [p1.result!, p2.result!]`
and `p1` resolves instantly.
Previously this would just return the result of `p1` and require nothing
else. With the new implementation, if `p2` is released before being
resolved, this can cause a panic, even if `IO.waitAny` has already
finished. I argue that this is reasonable
behavior, given that an
invocation of `result!` promises that the promise will eventually be
resolved.
This PR moves the validation of cross-package `import all` to Lake and
the syntax validation of import keywords (`public`, `meta`, and `all`)
to the two import parsers.
It also fixes the error reporting of the fast import parser
(`Lean.parseImports`) and adds positions to its errors.
This PR addresses an outstanding feature in the module system to
automatically mark `let rec` and `where` helper declarations as private
unless they are defined in a public context such as under `@[expose]`.
This PR removes an error which implicitly assumes that the kind of type
dependency between erased types present in the newly added test cannot
occur. It would be difficult to refine the error using only the
information present in LCNF types, and it is of very little ongoing
value (I don't recall it ever finding an actual problem), so it makes
more sense to delete it.
Fixes #9692.
This PR changes the LCNF `elimDeadBranches` pass so that it considers
all non-`Nat` literal types to be `⊤`. It turns out that fixing this to
correctly handle all of these types with the current abstract value
representation is surprisingly nontrivial, and it's better to just land
the fix first.
This PR adds a version of `CommRing.Expr.toPoly` optimized for kernel
reduction. We use this function not only to implement `grind ring`, but
also to interface the ring module with `grind cutsat`.
This PR updates `release_repos.yml` to reflect that `import-graph` no
longer depends on `batteries`, and reorders the repositories to better
reflect dependencies.
This PR adds propagation rules for functions that take singleton types.
This feature is useful for discharging verification conditions produced
by `mvcgen`. For example:
```lean
example (h : (fun (_ : Unit) => x + 1) = (fun _ => 1 + y)) : x = y := by
grind
```
This removes the early call to `contradiction` from `simpH`, and
replaces it with a quick check of whether the patterns start with
different constructors.
We already call `simpH` quadratically often (unavoidable), so we want it
to be quick. Most common contradictions are found later on, so we may not
want the expensive `contradiction` tactic to run early.
May help with #9598.
This PR adjusts the formatting type classes for `lake query` to no
longer require both a text and JSON form and instead work with any
combination of the two. The classes have also been renamed. In addition,
the query formatting of a text module header has been improved to only
produce valid headers.
This PR restricts Lake's production of thin archives to only the Windows
core build (i.e., `bootstrap = true`). The unbundled `ar` usually used
for core builds on macOS does not support `--thin`, so we avoid using it
unless necessary.
* Have asynchronous environment extensions specify whether they
manipulate data for declarations from the "outside"/main branch (e.g.
attributes) or from the "inside"/async branch (e.g. data collected from
body elaboration) in order to avoid unnecessary waiting.
* Merge `findStateAsync?` into `getState` via a new, optional
`asyncDecl` parameter.
* Make `mayContainAsync` check an automatic part of `modifyState`.
This PR uses a more simple approach to proving the unfolding theorem for
a function defined by well-founded recursion. Instead of looping a bunch
of tactics, it uses simp in single-pass mode to (try to) exactly undo
the changes done in `WF.Fix`, using a dedicated theorem that pushes the
extra argument in for each matcher (or `casesOn`).
Improves performance for recursive functions with large `match`
statements, as in #9598.
This PR fixes a regression introduced by an optimization in the
`unfoldReducible` step used by the `grind` normalizer. It also ensures
that projection functions are not reduced, as they are folded in a later
step.
This PR adds build times to each build step of the build monitor (under
`-v` or in CI) and delays exiting on a `--no-build` until after the
build monitor finishes. Thus, a `--no-build` failure will now report
which targets blocked Lake by needing a rebuild.
This PR adds normalizers for nonstandard arithmetic instances. The types
`Nat` and `Int` have built-in support in `grind`, which uses the
standard instances for these types and assumes they are the ones in use.
However, users may define their own alternative instances that are
definitionally equal to the standard ones. This PR normalizes such
instances using simprocs. This situation actually occurs in Mathlib.
Example:
```lean
class Distrib (R : Type _) extends Mul R where
namespace Nat
instance instDistrib : Distrib Nat where
mul := (· * ·)
theorem odd_iff.extracted_1_4 {n : Nat} (m : Nat)
(hm : n =
@HMul.hMul _ _ _ (@instHMul Nat instDistrib.toMul)
2 m + 1) :
n % 2 = 1 := by
grind
end Nat
```
This PR adds support for `Fin.val` in `grind cutsat`. Examples:
```lean
example (a b : Fin 2) (n : Nat) : n = 1 → ↑(a + b) ≠ n → a ≠ 0 → b = 0 → False := by
grind
example (m n : Nat) (i : Fin (m + n)) (hi : m ≤ ↑i) : ↑i - m < n := by
grind
example {n : Nat} (m : Nat) (i : Fin n) ⦃j : Fin (n + m)⦄
(this : ↑i + m ≤ ↑j) : ↑j - m < n := by
grind
example {n : Nat} (i : Fin n) (j : Nat) (hj : j < ↑i) : j < n := by
grind
```
This PR ensures that `grind cutsat` processes `ToInt.toInt` applications
provided by the user. Example:
```lean
open Lean Grind
example (x : Fin 3) : ToInt.toInt x ≠ 0 → ToInt.toInt x ≠ 1 → ToInt.toInt x ≠ 2 → False := by
grind -ring
example (x y z : Fin 5) : ToInt.toInt (x + z) = ToInt.toInt y → z = 0 → x = y := by
grind -ring
```
This PR fixes support for `SMul.smul` in `grind ring`. `SMul.smul`
applications are now normalized. Example:
```lean
example (x : BitVec 2) : x - 2 • x + x = 0 := by
grind
```
This PR adds constructors `.intCast k` and `.natCast k` to
`CommRing.Expr`. We need them because terms such as `Nat.cast (R := α)
1` and `(1 : α)` are not definitionally equal. This is pervasive in
Mathlib for the numerals `0` and `1`.
```lean
import Mathlib
example {α : Type} [AddMonoidWithOne α] : Nat.cast (R := α) 0 = (0 : α) := rfl -- not defeq
example {α : Type} [AddMonoidWithOne α] : Nat.cast (R := α) 1 = (1 : α) := rfl -- not defeq
example {α : Type} [AddMonoidWithOne α] : Nat.cast (R := α) 2 = (2 : α) := rfl -- defeq from here
-- Similarly for everything past `AddMonoidWithOne` in the Mathlib hierarchy, e.g. `Ring`.
```
This PR adjusts the import graph, primarily of `Lean`, such that the
worst case rebuild time of core (`lean` only) is below 3 minutes on the
speedcenter machine (not captured by benchmark yet).
This PR performs some micro optimizations on fuzzy matching for a `~20%`
instructions win.
The three key changes are:
- try to remove some unnecessary allocations of things such as tuples
- change `containsInOrderLower` to use the efficient `get'` and `next'`
primitives. I hope that we can replace these with iterators on strings
in the second half of this quarter
- Do the same thing as clangd and use `Int16` with `minValue` standing
for the "worst score". While this does have the potential to
over/underflow, if the user is working with a score in the 10000s
something weird is certainly going on already (the score usually seems
to be in the two-digit range).
As an additional bonus, once we finally have unboxed arrays we will get
some additional cache wins on the 16 bit arrays!
This PR introduces checks to make sure that the IO functions produce
errors when inputs contain NUL bytes (instead of ignoring everything
after the first NUL byte).
This PR continues #9644 , fixing the core build when using an older
system libuv.
This only affected users building Lean from scratch, since the lean
binaries we ship as part of toolchains statically link their own copy of
libuv 1.50+.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
This PR changes `lake setup-file` to use the server-provided header for
workspace modules.
This also reverts #9163 as the underlying issue is now fixed.
This PR modifies dot identifier notation so that `(.a : T)` resolves
`T.a` with respect to the root namespace, like for generalized field
notation. This lets the notation refer to private names, follow aliases,
and also use open namespaces. The LSP completions are improved to follow
how dot ident notation is resolved, but it doesn't yet take into account
aliases or open namespaces.
Closes #9629
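A minimal illustration of the dot identifier notation whose resolution this PR refines (this sketch does not exercise the new private-name or alias cases):
```lean
structure Point where
  x : Nat
  y : Nat

def Point.origin : Point := ⟨0, 0⟩

-- `.origin` is resolved to `Point.origin` from the expected type.
#check (.origin : Point)
```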
This PR adds error explanations for two common errors caused by large
elimination from `Prop`. To support this functionality, "nested" named
errors thrown by sub-tactics are now able to display their error code
and explanation.
This PR fixes the core build when using an older system libuv.
This only affected users building Lean from scratch, since the `lean`
binaries we ship as part of toolchains statically link their own copy of
libuv 1.50+.
This PR introduces a `mutual_induct` variant of the generated
(co)induction proof principle for mutually defined (co)inductive
predicates. Unlike the standard (co)induction principle (which projects
conclusions separately for each predicate), `mutual_induct` produces a
conjunction of all conclusions.
## Example
Given the following mutual definition:
```lean4
mutual
def f : Prop := g
coinductive_fixpoint
def g : Prop := f
coinductive_fixpoint
end
```
Standard coinduction principles:
```lean4
f.coind : ∀ (pred_1 pred_2 : Prop), (pred_1 → pred_2) → (pred_2 → pred_1) → pred_1 → f
g.coind : ∀ (pred_1 pred_2 : Prop), (pred_1 → pred_2) → (pred_2 → pred_1) → pred_2 → g
```
New `mutual_induct` principle:
```lean4
f.mutual_induct: ∀ (pred_1 pred_2 : Prop), (pred_1 → pred_2) → (pred_2 → pred_1) → (pred_1 → f) ∧ (pred_2 → g)
```
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR adds support for generating lattice-theoretic (co)induction
proof principles for predicates defined via `mutual` blocks using
`inductive_fixpoint`/`coinductive_fixpoint` constructs.
### Key Changes
- The order on product lattices (used to define fixpoints of mutual
blocks) is unfolded.
- Hypotheses in generated principles are curried.
- Conclusions are projected to focus only on the predicate of interest
(rather than being a conjunction of conclusions for all functions
defined in the `mutual` block).
### Example
Given:
```lean4
mutual
def f : Prop :=
g
coinductive_fixpoint
def g : Prop :=
f
coinductive_fixpoint
end
```
The system now generates these coinduction principles:
```lean4
f.coinduct (pred_1 pred_2 : Prop) (hyp_1 : pred_1 → pred_2) (hyp_2 : pred_2 → pred_1) : pred_1 → f
```
and
```lean4
g.coinduct (pred_1 pred_2 : Prop) (hyp_1 : pred_1 → pred_2) (hyp_2 : pred_2 → pred_1) : pred_2 → g
```
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR adds the separate directions of
`List.pairwise_iff_forall_sublist` as named lemmas.
I want to explore how they could/should be used by `grind` in Mathlib.
This PR brings the Mac OSX build instructions slightly up to date. (They
currently refer to facts "...as of November 2014...")
- Remove specific OS version number from the title as it is out of date
with respect to filename.
- Nonetheless don't change filename for the sake of not breaking
incoming links.
- Update C++ language version to C++14, which I believe is what is
currently required, based on other platform documentation.
- Bump the C++ compiler versions to ones that seem to be current. The exact
version numbers aren't crucial, but they help the reader gauge whether their
compiler is in the right ballpark.
- Add `lld` to the homebrew clang instructions, because homebrew changed
the way they package llvm tools, spinning the linker off into its own
package.
This PR updates the styling and wording of error messages produced in
inductive type declarations and anonymous constructor notation,
including hints for inferable constructor visibility updates.
This PR optimizes `Lean.Name.toString`, giving a 10% instruction-count
benefit.
Crucially, this is a breaking change: the old `Lean.Name.toString` accepted
a function for identifying tokens. That variant is now available as
`Lean.Name.toStringWithToken`, which allows specializing the (highly common)
`toString` code path in which this function simply returns `false`.
This PR simplifies the docstring for `propext` significantly.
The old docstring explained general concepts of axioms that are now
covered in the reference manual, and had a large example that was out of
date and has been subsumed by reference manual content.
This PR adds `@[grind =]` to `Prod.lex_def`. Note that `omega` has
special handling for `Prod.Lex`, and this is needed for `grind`'s cutsat
module to achieve parity.
The `isUnderBinder` check is intended to avoid inlining repeated
computations into specializations, but this doesn’t apply to local
function decls whose bodies are already delayed.
This PR adds lemmas about `UIntX.toBitVec` and `UIntX.ofBitVec` and `^`.
These match the existing lemmas for `*`.
After #7887 these can be made true by `rfl`.
This PR ensures `ite` and `dite` are not selected as E-matching patterns.
They are bad patterns because the then/else branches are only
internalized after `grind` has decided whether the condition is
`True`/`False`.
The issue reported by #9572 has been fixed, but the fix exposed another
issue. The patterns for `List.Pairwise` produce an unbounded number of
E-matching instances.
```lean
example (l : List α) : l.Pairwise R := by
grind
```
This PR fixes an issue in `grind`'s disequality proof construction. The
issue occurs when an equality is merged with the `False` equivalence
class, but it is not the root of its congruence class, and its
congruence root has not yet been merged into the `False` equivalence
class.
closes #9562
This PR optimizes the proof terms generated by `grind ring`. For
example, before this PR, the kernel took 2.22 seconds (on an M4 Max) to
type-check the proof in the benchmark `grind_ring_5.lean`; it now takes
only 0.63 seconds.
This PR generalizes `Process.output` and `Process.run` with an optional
`String` argument that can be piped to `stdin`.
To date we have been using the shim `Process.runCmdWithInput` in Batteries.
This PR fixes a bug introduced in #7830 where if the cursor is at the
indicated position
```lean
example (as bs : List Nat) : (as.append bs).length = as.length + bs.length := by
induction as with
| nil => -- cursor
| cons b bs ih =>
```
then the Infoview would show "no goals" rather than the `nil` goal. The
PR also fixes a separate bug where placing the cursor on the next line
after the `induction`/`cases` tactics like in
```lean
induction as with
| nil => sorry
| cons b bs ih => sorry
I -- < cursor
```
would report the original goal in the goal list. Furthermore, there are
numerous improvements to error recovery (including `allGoals`-type logic
for pre-tactics) and the visible tactic states when there are errors.
Adds `Tactic.throwOrLogErrorAt`/`Tactic.throwOrLogError` for throwing or
logging errors depending on the recovery state.
This PR restores the feature where in `induction`/`cases` for `Nat`, the
`zero` and `succ` labels are hoverable. This was added in #1660, but
broken in #3629 and #3655 when custom eliminators were added. In
general, if a custom eliminator `T.elim` for an inductive type `T` has
an alternative `foo`, and `T.foo` is a constant, then the `foo` label
will have `T.foo` hover information.
This PR consolidates common attribute-related error messages into
reusable functions and updates the wording and formatting of relevant
error messages.
This extends the specialization behavior of functions taking instance
implicits to ordinary implicit arguments that are of instance type. The
choice between the two is often made for subtle inference-related
reasons. It also affects visibility of these functions, because the
module system makes template-like decls visible to the compiler in other
modules.
This PR makes the second instance of the `inferVisibility` pass run
after the `saveMono` pass. As the comment above the first instance of
the pass indicates, this needs to be after `saveMono` in order to see
all decls with their updated bodies.
This PR upstreams some helper instances for `NameSet` from Batteries.
(These could be generalized to an arbitrary TreeSet, but I'll leave that
for someone else.)
This PR changes `Lean.Grind.NoNatZeroDivisors` so that it is
parametrised by a `NatModule` instance rather than just a `HMul`
instance. This is sufficiently general for our purposes, and is a
band-aid (~40% improvement) for the performance problems we've been
seeing coming from inference here. The problems observed in Mathlib may
not see much improvement, however.
This PR lets the equation compiler unfold abstracted proofs again if
they would otherwise hide recursive calls.
This fixes #8939.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR fixes Lake's handling of a module system `import all`.
Previously, Lake treated `import all` the same as a non-module `import`,
importing all private data in the transitive import tree. Lake now
distinguishes the two, with `import all M` just importing the private
data of `M`. The direct private imports of `M` are followed, but they
are not promoted.
This also fixes some other Lake bugs with module system imports that
were discovered in the process.
In the early days of the new compiler, it was common to make tests that
manually compiled a definition with the new compiler. The arity
reduction pass in LCNF deliberately does not compute a fixed point to
find a minimal set of used parameters for performance reasons, but
running it a second time can lead to different decisions being made and
a decl arity mismatch. This has been an issue for multiple people during
development. Removing the tests fixes the problem.
Fixes #9186.
This PR uses `withAbstractAtoms` to prevent the kernel from accidentally
reducing the atoms in the arith normalizer while typechecking. This PR
also sets `implicitDefEqProofs := false` in the `grind` normalizer.
This PR corrects the changes to `Lean.Grind.Field` made in #9500.
(The lack of examples of fields in the core repository is a problem! I
guess it is likely that for interval arithmetic we will at least need
`Rat` soon.)
(Almost) only typos in constant names and doc-strings were considered;
grammar was not considered. Also, among others,
`mkDefinitionValInferrringUnsafe` has been fixed :-)
This PR adds a benchmark for the persistent hashmap, in particular also
covering the non-linear insert case, which is often hit in practical use.
Furthermore, the same test case is also added to the treemap benchmark.
This PR makes `mframe`, `mspec` and `mvcgen` respect hygiene.
Inaccessible stateful hypotheses can now be named with a new tactic
`mrename_i` that works analogously to `rename_i`.
This PR surfaces kernel diagnostics even in `example`.
The problem was that the kernel checking happens asynchronously. We
cannot use `reportDiag` in `addDecl`, which spawns that task, due to the
module hierarchy. For non-`example` declarations, `reportDiag` is called
somewhere else later, but for `example`, the `withoutModifyingEnv` in
`elabMutualDef` hid the kernel diagnostics. (But only the kernel
diagnostics; they are in the `Environment`, while the others are in the
`State`).
I also observed that the `reportDiag` in `elabAsync` (but not in
`elabSync`) duplicated the reporting, so without `elab.Async true` you
get the message twice. To fix this, `reportDiag` now resets the
diagnostics. This should avoid reporting counts twice in general (at
least within a linear use of the state).
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR removes vestigial syntax definitions in
`Lean.Elab.Tactic.Do.VCGen` that, when imported, undefine the `mvcgen`
tactic. Now it should be possible to import Mathlib and still use
`mvcgen`.
This PR adds a few more `*.by_wp` "adequacy theorems" that allow proving
facts about programs in `ReaderM` and `ExceptM` using the `Std.Do`
framework.
This PR adds a `HPow α Int α` field to `Lean.Grind.Field`, and
sufficient axioms to connect it to the operations, so that in future we
can reason about exponents in `grind`. To avoid collisions, we also move
the `HPow α Nat α` field in `Semiring` from the extends clause to a
field. Finally, we add some failing tests about normalizing exponents.
This PR makes cdot function expansion take hygiene information into
account, fixing "parenthesis capturing" errors that can make erroneous
cdots trigger cdot expansion in conjunction with macros. For example,
given
```lean
macro "baz% " t:term : term => `(1 + ($t))
```
it used to be that `baz% ·` would expand to `1 + fun x => x`, but now
the parentheses in `($t)` do not capture the cdot. We also fix an
oversight where cdot function expansion ignored the fact that type
ascriptions and tuples were supposed to delimit expansion, and also now
the quotation prechecker ignores the identifier in `hygieneInfo`. (#9491
added the hygiene information to the parenthesis and cdot syntaxes.)
This fixes a bug discovered by [Google
DeepMind](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/imo-2024-solutions/P1/index.html),
which made use of `useλy . x=>y.rec λS p=>?_`. The `use` tactic from
Mathlib wrapped the provided term in a type ascription, and so this was
equivalent to `use fun x => λy x x=>y.rec λS p=>?_`. (Note that cdot
function expansion is not able to take into account *where* the cdots
are located, and it is syntactically valid to insert an identifier into
the binder list like this. If we ever want to address this in the
future, we could have cdots expand into a special term that wraps an
identifier that evaluates to a local, but which would cause errors in
other contexts.)
Design note: we put the `hygieneInfo` on the open parenthesis rather
than at the end, since that way the hygiene information is available
even when there are parsing errors. This is important since we rely on
being able to elaborate partial syntax to get elab info (e.g. in `(a.`
to get completion info). Note that syntax matchers check that the
`hygieneInfo` is actually present, so such partial syntax would not be
matched.
This PR adds a feature where `structure` constructors can override the
inferred binder kinds of the type's parameters. In the following, the
`(p)` binder on `toLp` causes `p` to be an explicit parameter to
`WithLp.toLp`:
```lean
structure WithLp (p : Nat) (V : Type) where toLp (p) ::
  ofLp : V
```
This reflects the syntax of the feature added in #7742 for overriding
binder kinds of structure projections. Similarly, only those parameters
in the header of the `structure` may be updated; it is an error to try
to update binder kinds of parameters included via `variable`.
Closes #9072.
Fixes a possible bug from stale caches when creating the type of the
constructor.
This PR resolves an issue where the `Meta.Context.configKey` field is
private but we still want to use the constructor of the structure for
setting other fields, which would be prevented by the module system
checks:
```lean
structure Context where
  private config : Config := {}
  private configKey : UInt64 := config.toKey
  ...

def ContextInfo.runMetaM (info : ContextInfo) (lctx : LocalContext) (x : MetaM α) : IO α := do
  -- cannot call private constructor of `Meta.Context`!
  (·.1) <$> info.runCoreM (x.run { lctx := lctx } { mctx := info.mctx })
```
Instead, the private field is extracted into an (existing) structure
that applies its default value:
```lean
/-- Configuration with key produced by `Config.toKey`. -/
structure ConfigWithKey where
  private mk ::
  config : Config := {}
  key : UInt64 := config.toKey

structure Context where
  keyedConfig : ConfigWithKey := default
```
Thus `Context`'s constructor remains public without exposing a way to
set `key` directly.
This PR fixes a kernel type mismatch that occurs when using `grind` on
goals containing non-standard `OfNat.ofNat` terms. For example, in issue
#9477, the `0` in the theorem `range_lower` has the form:
```lean
(@OfNat.ofNat
(Std.PRange.Bound (Std.PRange.RangeShape.lower (Std.PRange.RangeShape.mk Std.PRange.BoundShape.closed Std.PRange.BoundShape.open)) Nat)
(nat_lit 0)
(instOfNatNat (nat_lit 0)))
```
instead of the more standard form:
```lean
(@OfNat.ofNat
Nat
(nat_lit 0)
(instOfNatNat (nat_lit 0)))
```
Closes #9477
This PR improves the `evalInt?` function, which is used to evaluate
configuration parameters from the `ToInt` type class. This PR also adds
a new `evalNat?` function for handling the `IsCharP` type class, and
introduces a configuration option:
```
grind (exp := <num>)
```
This option controls the maximum exponent size considered during
expression evaluation. Previously, `evalInt?` used `whnf`, which could
run out of stack space when reducing terms such as `2^1024`.
closes #9427
This PR adds `binrel%` macros for `!=` and `≠` notation defined in
`Init.Core`. This allows the elaborator to insert coercions on both
sides of the relation, instead of committing to the type on the left
hand side.
I first discovered this bug while working on Brouwer's fixed point
theorem. See the discussion on Zulip at [#lean4 > Elaboration of
`≠` @
💬](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Elaboration.20of.20.60.E2.89.A0.60/near/526236907).
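For illustration, a minimal sketch of the improved elaboration (assuming the standard `Nat → Int` coercion; not taken from the PR itself):
```lean
-- With the `binrel%` macro, the `Nat` on the left can be coerced to `Int`,
-- instead of committing to `Nat` and failing to unify with `z : Int`.
example (n : Nat) (z : Int) : Prop := n ≠ z
```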
This PR replaces the proof of the simplification lemma `Nat.zero_mod` with
`rfl`, since it is, by design, a definitional equality. This solves an issue
whereby the lemma could not be used by the simplifier in `dsimp` mode.
Closes #9389
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR introduces tactic `mleave` that leaves the `SPred` proof mode by
eta expanding through its abstractions and applying some mild
simplifications. This is useful to apply automation such as `grind`
afterwards.
Relates to #9363.
This PR adds support in the `mintro` tactic for introducing `let`/`have`
binders in stateful targets, akin to `intro`. This is useful when
specifications introduce such let bindings.
Closes #9365.
This PR makes `PProdN.reduceProjs` also look for projection functions.
Previously, all redexes were created by the functions in `PProdN`, which
used primitive projections. But with `mkAdmProj` the projection
functions creep in via the types of the `admissible_pprod_fst` theorem.
So let's just reduce both of them.
Fixes #9462.
The `isRef` check being removed here used to be an optimization, because
this structure only tracked whether ref counting operations need to be
inserted at all. Now the structure also tracks whether the value needs
to be checked for being a scalar or not, which is something that can be
refined by a `cases` arm, since inductive types can have a mix of scalar
and non-scalar constructors.
This PR fixes an issue that caused some `deriving` handlers to fail when
the name of the type being declared matched that of a declaration in an
open namespace.
Closes #9366
This PR improves the 'Go to Definition' UX, specifically:
- Using 'Go to Definition' on a type class projection will now extract
the specific instances that were involved and provide them as locations
to jump to. For example, using 'Go to Definition' on the `toString` of
`toString 0` will yield results for `ToString.toString` and `ToString
Nat`.
- Using 'Go to Definition' on a macro that produces syntax with type
class projections will now also extract the specific instances that were
involved and provide them as locations to jump to. For example, using
'Go to Definition' on the `+` of `1 + 1` will yield results for
`HAdd.hAdd`, `HAdd α α α` and `Add Nat`.
- Using 'Go to Declaration' will now provide all the results of 'Go to
Definition' in addition to the elaborator and the parser that were
involved. For example, using 'Go to Declaration' on the `+` of `1 + 1`
will yield results for `HAdd.hAdd`, `HAdd α α α`, `Add Nat`,
``macro_rules | `($x + $y) => ...`` and `infixl:65 " + " => HAdd.hAdd`.
- Using 'Go to Type Definition' on a value with a type that contains
multiple constants will now provide 'Go to Definition' results for each
constant. For example, using 'Go to Type Definition' on `x` for `x :
Array Nat` will yield results for `Array` and `Nat`.
### Details
'Go to Definition' for type class projections was first implemented by
#1767, but there were still a couple of shortcomings with the
implementation. E.g. in order to jump to the instance in `toString 0`,
one had to add another space within the application and then use 'Go to
Definition' on that, or macros would block instances from being
displayed. Then, when the .ilean format was added, most 'Go to
Definition' requests were already handled using the .ileans in the
watchdog process, and so the file worker never received them to handle
them with the semantic information that it has available.
This PR resolves most of the issues with the previous implementation and
refactors the 'Go to Definition' control flow so that 'Go to Definition'
requests are always handled by the file worker, with the watchdog merely
using its .ilean position information to update the positions in the
response to a more up-to-date state. This is necessary because the file
worker obtains its position information from the .oleans, which need to
be rebuilt in order to be up-to-date, while the watchdog always receives
.ilean update notifications from each active file worker with the
current position information in the editor.
Finally, all of the 'Go to Definition' code is refactored to be easier
to maintain.
### Breaking changes
`InfoTree.hoverableInfoAt?` has been generalized to
`InfoTree.hoverableInfoAtM?` and now takes a general `filter` argument
instead of several boolean flags, as was the case before.
This PR removes uses of `Lean.RBMap` in Lean itself.
Furthermore some massaging of the import graph is done in order to avoid
having `Std.Data.TreeMap.AdditionalOperations` (which is quite
expensive) be the critical path for a large chunk of Lean. In particular
we can build `Lean.Meta.Simp` and `Lean.Meta.Grind` without it thanks to
these changes.
We did not previously make this change because `Std.TreeMap` did not yet
outperform `Lean.RBMap`; this has changed with the new code generator.
This PR addresses the lean crash (stack overflow) with nested induction
and the generation of the `SizeOf` spec lemmas, reported at #9018.
It does not address the underlying issue that in these cases, the
generated SizeOf code does not match the SizeOf function found by
instance search, and thus the generation fails.
This seems hard to fix: `mkSizeOfMinors` would have to recognize that
given, say, `List (Id Tree)`, the derived instance (assuming there was
one for `Tree`) is not the same as for `List Tree`.
The problem seems just about as hard as getting derived SizeOf right in
the presence of nested induction and non-canonical SizeOf instances.
This PR fixes the behavior of `String.prev`, aligning the runtime
implementation with the reference implementation. In particular, the
following statements hold now:
- `(s.prev p).byteIdx` is at least `p.byteIdx - 4` and at most
`p.byteIdx - 1`
- `s.prev 0 = 0`
- `s.prev` is monotone
Closes #9439
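For illustration, the expected behavior on a small example (positions given by byte index; illustrative, not taken from the PR):
```lean
#eval ("abc".prev ⟨0⟩).byteIdx  -- 0, since `s.prev 0 = 0`
#eval ("abc".prev ⟨2⟩).byteIdx  -- 1, one byte back inside an ASCII string
```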
This PR adds the number of jobs run to the final message Lake produces
on a successful run of `lake build`.
**Examples**
```
Build completed successfully (1 job).
Build completed successfully (6 jobs).
```
This PR adds the `libPrefixOnWindows` package and library configuration
option. When enabled, Lake will prefix static and shared libraries with
`lib` on Windows (i.e., the same way it does on Unix).
This PR changes the Lake local cache infrastructure to restore
executables and shared and static libraries from the cache. This means
they keep their expected names, which some use cases still rely on.
This PR adds a hint to the "invalid projection" message suggesting the
correct nested projection for expressions of the form `t.n` where `t` is
a tuple and `n > 2`.
This feature was originally proposed by @nomeata in #8986.
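For illustration, a sketch of the situation (the exact hint wording may differ):
```lean
-- `t.3` is an invalid projection on `Nat × Nat × Nat`, since `Prod` has only
-- two fields; the hint suggests the corresponding nested projection instead:
example (t : Nat × Nat × Nat) : Nat := t.2.2
```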
This PR improves the error messages produced by the `split` tactic,
including suggesting syntax fixes and related tactics with which it
might be confused.
Note that, to avoid clashing with the new error message styling
conventions used in these messages, this PR also updates the formatting
of the message produced by `throwTacticEx`.
Closes #6224
This PR changes `IRType.boxed` to map `erased` to `tobject` rather than
`object`, since `erased` has a representation of a boxed scalar 0 when
we are forced to represent it at runtime. This case does not occur at
all in the Lean codebase.
This PR updates the formatting of, and adds explanations for, "unknown
identifier" errors as well as "failed to infer type" errors for binders
and definitions.
It attempts to ameliorate some of the confusion encountered in #1592 by
modifying the wording of the "header is elaborated before body is
processed" note and adding further discussion and examples of this
behavior in the corresponding error explanation.
An earlier PR (#9017) replaced certain subarray functions such as
`Subarray.foldl` with generic slice functions `Slice.foldl`. For
backward-compatibility reasons, this PR reintroduces `Subarray.foldl`
etc. as aliases for the `Slice` versions.
This PR adds Lake tests for builds involving the Lean module system and
fixes some bugs encountered in the process. In particular, it fixes the
parsing of private imports and how Lake handles the `import all` of a
private import.
This PR fixes some issues with the Lake tests on Windows and macOS. It
also avoids downloading Mathlib in the `init` test, which it had been
doing since changes to the `math-lax` template in #8866.
To skip the Mathlib download in `init`, an undocumented `--offline`
option was added. This option is currently meant for internal use only.
This PR increases the number of cases where `isArrowProposition` returns
a result other than `.undef`. This function is used to implement the
`isProof` predicate, which is invoked on every subterm visited by
`simp`.
This PR improves the "invalid named argument" error message in
function applications and match patterns by providing clickable hints
with valid argument names. In so doing, it also fixes an issue where
this error message would erroneously flag valid match-pattern argument
names.
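A hypothetical example of the scenario (the function and argument names are made up):
```lean
def greet (name : String) (excited : Bool := false) : String :=
  if excited then s!"Hello, {name}!" else s!"Hello, {name}."

-- `greet (nme := "Lean")` now reports an invalid named argument together with
-- a clickable hint listing the valid names (`name`, `excited`).
#eval greet (name := "Lean")
```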
This PR adds support for compilation of `casesOn` for subsingletons. We
rely on the elaborator's type checking to restrict this to inductives in
`Prop` that can actually eliminate into `Type n`. This does not yet
cover other recursors of these types (or of inductives not in `Prop` for
that matter).
This PR implements a simple optimization: dependent implications are no
longer treated as E-matching theorems in `grind`. In
`grind_bitvec2.lean`, this change saves around 3 seconds, as many
dependent implications are generated. Example:
```lean
∀ (h : i + 1 ≤ w), x.abs.getLsbD i = x.abs[i]
```
This PR adds a benchmark to our suite, specifically targeting the fact
that local hypotheses
are currently not indexed in simp and can thus cause significant
slowdowns compared to having them
as external declarations.
This PR splits up `Module.recBuildLean` into smaller functions and
optimizes the implementation of `Module.cacheOutputArtifacts` for the
new compiler. Now, all functions within `Lake.Build.Module` take Lean
<1s to compile.
This PR updates Lake to resolve the `.olean` files for transitive
imports for Lean through the `modules` field of `lean --setup`. This
means that Lean can now directly use the `.olean` files from the
Lake cache without needing to locate them at a specific hierarchical
path.
Resolving transitive imports still has a performance penalty, but it is
now much less.
This is mostly a refactoring that replaces other analyses with type
information, but due to the introduction of `tagged` it also has the
side effect of eliminating ref counting ops entirely for types that
always have a tagged scalar representation, e.g. `Unit`.
This PR fixes a bug at `mkCongrSimpCore?`. It fixes the issue reported
by @joehendrix at #9388.
The fix is just commit: afc4ba617f. The
rest of the PR is just cleaning up the file.
closes #9388
This PR fixes an unsafe trick where a sentinel for a hash table of Exprs
(keyed by pointer) is created by constructing a value whose runtime
representation can never be a valid Expr. The value chosen for this
purpose was Unit.unit, which violates the inference that Expr has no
scalar constructors. Instead, we change this to a freshly allocated Unit
× Unit value.
A micro-benchmark for plain, mostly first-order rewriting in simp:
it uses axioms to make it independent of specific optimizations (e.g.
for `Nat`).
It generates a “list” of 128 `b`s followed by 128 `a`s and uses
bubble sort to sort it and compares the result against the expected output.
This PR changes the dependency cloning mechanism in Lake so that the log
message saying Lake is cloning a dependency is emitted before cloning starts
rather than after it finishes. The old behavior has been a huge source of
confusion for users who don't understand why Lake seems to be just stuck for
no reason when setting up a new project. The output now is:
```
λ lake +lean4 new math math
info: downloading mathlib `lean-toolchain` file
info: math: no previous manifest, creating one from scratch
info: leanprover-community/mathlib: cloning https://github.com/leanprover-community/mathlib4
<hang>
info: leanprover-community/mathlib: checking out revision 'cd11c28c6a0d514a41dd7be9a862a9c8815f8599'
```
This PR fixes a performance issue that occurs when generating equation
lemmas for functions that use match-expressions containing several
literals. This issue was exposed by #9322 and arises from a combination
of factors:
1. Literal values are compiled into a chain of dependent if-then-else
expressions.
2. Dependent if-then-else expressions are significantly more expensive
to simplify than regular ones.
3. The `split` tactic selects a target, splits it, and then invokes
`simp` on the resulting subgoals. Moreover, `simp` traverses the entire
goal bottom-up and does not stop after reaching the target.
This PR addresses the issue by introducing a custom simproc that avoids
recursively simplifying nested if-then-else expressions. It does **not**
alter the user-facing behavior of the `split` tactic because such a
change would be highly disruptive. Instead, the PR adds a new flag,
`backward.split` to control the behavior of the user-facing `split`
tactic. It is currently set to `true`, i.e., the old behavior is still
the default one. In a future PR, we should set this flag to `false` by
default and begin repairing all affected proofs.
closes #9322
This PR modifies the encoding from `Nat` to `Int` used in `grind
cutsat`. It is simpler, more extensible, and similar to the generic
`ToInt`. After updating stage0, we will be able to delete the leftovers.
This PR correctly populates the `xType` field of the `IR.FnBody.case`
constructor. It turns out that there is no obvious consequence for this
being incorrect, because it is conservatively recomputed by the `Boxing`
pass.
This PR changes the implementation of `trace.Compiler.result` to use the
decls as they are provided rather than looking them up in the LCNF mono
environment extension, which was seemingly done to save the trouble of
re-normalizing fvar IDs before printing the decl. This means that the
`._closed` decls created by the `extractClosed` pass will now be
included in the output, which was definitely confusing before if you
didn't know what was happening.
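For illustration, a small way to observe this (the declaration is arbitrary):
```lean
-- Prints the final LCNF decls for `greeting`, which now include any
-- `._closed` decls produced by the `extractClosed` pass.
set_option trace.Compiler.result true in
def greeting : String := "Hello, " ++ "world"
```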
This currently relies on the encoding pun of Nat.zero as the first
tagged constructor of Nat. Since Nat.succ is lowered to addition, it
makes sense to also lower Nat.zero to a zero literal. This might also
expose more optimization opportunities in the future.
This PR fixes IR constructor argument lowering to correctly handle an
irrelevant argument being passed for a relevant parameter in all cases.
This happened because constructor argument lowering (incompletely)
reimplemented general LCNF-to-IR argument lowering, and the fix is to
just adopt the generic helper functions. This is probably due to an
incomplete refactoring when the new compiler was still on a branch.
This PR uses the `mkCongrSimpForConst?` API in `simp` to reduce the
number of times the same congruence lemma is generated. Before this PR,
`grind` would spend `1.5`s creating congruence theorems during
normalization in the `grind_bitvec2.lean` benchmark. It now spends
`0.6`s. This PR should make an even bigger difference after we merge
#9300.
This PR removes the unnecessary requirement of `BEq α` for
`Array.any_push`, `Array.any_push'`, `Array.all_push`, `Array.all_push'`
as well as `Vector.any_push` and `Vector.all_push`.
This PR replaces the `reduceCtorEq` simproc used in `grind` by a much
more efficient one. The default one used in `simp` is just overhead
because the `grind` normalizer is already normalizing arithmetic.
In a separate PR, we will push performance improvements to the default
`reduceCtorEq`.
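For context, a hedged sketch of the kind of fact `reduceCtorEq` establishes during normalization (illustrative example, not from the PR):
```lean
-- `some 1 = none` equates distinct constructors, so it normalizes to `False`.
example (h : (some 1 : Option Nat) = none) : False := by grind
```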
This PR fixes `toISO8601String` to produce a string that conforms to the
ISO 8601 format specification. The previous implementation separated the
minutes and seconds fragments with a `.` instead of a `:` and included
timezone offsets without the hour and minute fragments separated by a
`:`.
Closes #9235
This PR moves the implementation of `lean_add_extern`/`addExtern` from
C++ into Lean. I believe this is the last C++ helper function from the
library/compiler directory being relied upon by the new compiler. I put
it into its own file and duplicated some code because this function
needs to execute in CoreM, whereas the other IR functions live in their
own monad stack. After the C++ compiler is removed, we can move the IR
functions into CoreM.
This PR optimizes support for `Decidable` instances in `grind`. Because
`Decidable` is a subsingleton, the canonicalizer no longer wastes time
normalizing such instances, a significant performance bottleneck in
benchmarks like `grind_bitvec2.lean`. In addition, the
congruence-closure module now handles `Decidable` instances, and can
solve examples such as:
```lean
example (p q : Prop) (h₁ : Decidable p) (h₂ : Decidable (p ∧ q)) : (p ↔ q) → h₁ ≍ h₂ := by
grind
```
This PR adds support for `.mdata` in LCNF mono types (and then drops it
at the IR type level instead). This better matches the behavior of
extern decls in the C++ code of the old compiler, which is still being
used to create extern decls at the moment and will soon be replaced.
This is covered by existing tests.
This PR fixes the bug that `collectAxioms` didn't collect axioms
referenced by other axioms. One of the results of this bug is that
axioms collected from a theorem proved by `native_decide` may not
include `Lean.trustCompiler`.
Closes #8840.
This PR demotes the builtin elaborators for `Std.Do.PostCond.total` and
`Std.Do.Triple` into macros, following the DefEq improvements of #9015.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR makes `isDefEq` detect more stuck definitional equalities
involving smart unfoldings. Specifically, if `t =?= defn ?m` and `defn`
matches on its argument, then this equality is stuck on `?m`. Prior to
this change, we would not see this dependency and simply return `false`.
Fixes #8766.
Co-authored-by: Kyle Miller <kmill31415@gmail.com>
This PR improves the `congr` tactic so that it can handle function
applications with fewer arguments than the arity of the head function.
This also fixes a bug where `congr` could not make progress with
`Set`-valued functions in Mathlib, since `Set` was being unfolded and
making such functions have an apparently higher arity.
This addresses issue #2128 for the `congr` tactic, but not `simp` and
others.
This PR adds theorem `BitVec.clzAuxRec_eq_clzAuxRec_of_getLsbD_false` as
a more general statement than `BitVec.clzAuxRec_eq_clzAuxRec_of_le`,
replacing the latter in the bitblaster too.
This PR makes the logic and tactics of `Std.Do` universe polymorphic, at
the cost of a few definitional properties arising from the switch from
`Prop` to `ULift Prop` in the base case `SPred []`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR migrates usages of `Std.Range` to the new polymorphic ranges.
This PR unfortunately increases the transitive imports for
frequently-used parts of `Init` because the ranges now rely on iterators
in order to provide their functionality for types other than `Nat`.
However, iteration over ranges in compiled code is as efficient as
before in the examples I checked. This is because of a special
`IteratorLoop` implementation provided in the PR for this purpose.
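For orientation, a minimal sketch of the kind of code affected (assuming the new `a...b` range notation supports `for` loops the way `Std.Range` did):
```lean
def sumBelow (n : Nat) : Nat := Id.run do
  let mut s := 0
  -- `0...n` is the half-open range 0, 1, ..., n - 1
  for i in 0...n do
    s := s + i
  return s
```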
There were two issues that were uncovered during migration:
* In `IndPredBelow.lean`, migrating the last remaining range causes
`compilerTest1.lean` to break. I have minimized the issue and concluded
that it's a compiler bug. Therefore, I have not replaced said
old range usage yet (see #9186).
* In `BRecOn.lean`, we are publicly importing the ranges. Making this
import private should theoretically work, but there seems to be a
problem with the module system, causing the build to panic later in
`Init.Data.Grind.Poly` (see #9185).
* In `FuzzyMatching.lean`, inlining fails with the new ranges, which
would have led to significant slowdown. Therefore, I have not migrated
this file either.
This PR adds a benchmark for the rewriting engine of bv_decide, based on
a problem extracted from
SMT-LIB. Note that this problem has significant elaboration time itself
due to its sheer size though
the overall execution time is split approximately 50:50 between
elaboration and rewriting.
This PR improves the startup time for `grind ring` by generating the
required type classes on demand. This optimization is particularly
relevant for files that make hundreds of calls to `grind`, such as
`tests/lean/run/grind_bitvec2.lean`. For example, before this change,
`grind` spent 6.87 seconds synthesizing type classes, compared to 3.92
seconds after this PR.
We can probably remove `lcUnreachable` once we delete the old compiler,
but for now it makes more sense to move it earlier, since LCNF already
has `Code.unreachable`.
This PR changes the `toMono` pass to consider the type of an application
and erase all arguments corresponding to erased params. This enables a
lightweight form of relevance analysis by changing the mono type of a
decl. I would have liked to unify this with the behavior for
constructors, but my attempt to give constructors the same behavior in
#9222 (which was in preparation for this PR) had a minor performance
regression that is really incidental to the change. Still, I decided to
hold off on it for the time being. In the future, we can hopefully
extend this to constructors, extern decls, etc.
This PR removes code that has the false assumption that LCNF local vars
can occur in types. There are other comments in `ElimDead.lean`
asserting that this is not possible, so this must have been a change
early in the development of the new compiler.
This PR makes the LCNF `elimDeadBranches` pass handle unsafe decls a bit
more carefully. Now the result of an unsafe decl will only become ⊤ if
there is value flow from a recursive call.
These are used by the checker for `.ctor`, but I don't think that
unboxed types will reuse `.ctor`, whose implementation details are
intimately connected to our runtime representation of objects.
This PR changes the `getLiteral` helper function of `elimDeadBranches`
to correctly handle inductives with constructors. This function is not
used as often as it could be, which makes this issue rare to hit outside
of targeted test cases.
This PR extends the `Eq` simproc used in `grind`. It covers more cases
now. It also adds 3 reducible declarations to the list of declarations
to unfold.
This PR implements `exists` normalization using a simproc instead of
rewriting rules in `grind`. This is the first part of the change; after
updating stage0, we must remove the normalization theorems.
This PR changes the compiler's specialization analysis to consider
higher-order params that are rebundled in a way that only changes their
`Prop` arguments to be fixed. This means that they get specialized with
a mere `@[specialize]`, rather than the compiler having to opt-in to
more aggressive parameter-specific specialization.
This PR implements `forall` normalization using a simproc instead of
rewriting rules in `grind`. This is the first part of the change; after
updating stage0, we must remove the normalization theorems.
This PR fixes stealing of `⇓` syntax by the new notation for total
postconditions by demoting it to non-builtin syntax and scoping it to
`Std.Do`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR tries to improve the E-matching pattern inference for `grind`.
That said, we still need better tools for annotating and maintaining
`grind` annotations in libraries.
closes #9125
This PR removes the `Subarray`-specific `toArray`, `foldlM` and `foldl`
methods and instead provides these operations on `Std.Slice`, which are
implemented with the `ToIterator` instance of the slice. Calling
`subarray.toArray` etc. still works, since `Subarray` is an abbreviation
for `Slice _`.
Because the benchmarks are not so clear, to be safe, I will merge this
only after the release. In contrast to the ranges, the iteration over
slices is not quite as efficient as the old `Subarray`-specific
implementation, which would require either more optimizations in the
iterator library (special `IteratorLoop` and `IteratorCollect`
implementations) or better unboxing support by the compiler.
This PR makes the `pullInstances` pass avoid pulling any instance
expressions containing erased propositions, because we don't correctly
represent the dependencies that remain after erasure.
This PR disables the use of the header produced by `lake setup-file` in
the server for now. It will be re-enabled once Lake takes into account
the header given by the server when processing workspace modules.
Without that, the `setup-file` header can produce odd behavior when the file
on disk and in an editor disagree on whether the file participates in
the module system.
This PR fixes `undefined symbol: lean::mpz::divexact(lean::mpz const&,
lean::mpz const&)` when building without `LEAN_USE_GMP`.
This fixes a regression in #8089.
This PR makes `mvcgen` split ifs rather than applying specifications.
Doing so fixes a bug reported by Rish.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR resolves a defeq diamond, which caused a problem in Mathlib:
```
import Mathlib
example (R : Type) [I : Ring R] :
@AddCommGroup.toGrindIntModule R (@Ring.toAddCommGroup R I) =
@Lean.Grind.Ring.instIntModule R (@Ring.toGrindRing R I) := rfl -- fails
```
This PR fixes two issues with Lake's process of creating static
archives.
First, Lake now always rebuilds static archives from scratch, deleting any
existing archive beforehand. `ar rcs` does not remove deleted files, so
running it when the archive already exists can leave behind "ghost"
symbols of removed object files.
Second, Lake now uses `T` rather than `--thin` to create thin archives.
While `--thin` is the recommended spelling, older versions of LLVM `ar`
do not support it. Thus, either choice produces tradeoffs. `T` is chosen
to make Lake consistent with the Lean core's own (Make) build scripts.
This PR enforces the non-inlining of `_override` impls in the base phase
of LCNF compilation. The current situation allows for constructor/cases
mismatches to be exposed to the simplifier, which triggers an assertion
failure. The reason this didn't show up sooner for Expr is that Expr has
a custom extern implementation of its computed field getter.
Fixes #9156.
This test was originally checked in for a panic in the pretty printer,
but at some point the output of every LCNF simp pass was added to
#guard_msgs output. Since this is printing LCNF built by the stage0
compiler, this causes a lot of unnecessary churn.
This PR changes the key Lake uses for the `,ir` artifact in the content
hash data structure to `r`, maintaining the convention of single
character key names.
This PR fixes the syntax of `grind` modifiers to use `patternIgnore` for
cases where both unicode and ascii variants are matched. This fixes an
issue where several variants of grind syntax weren't accepted (e.g.
`@[grind ← gen]`). Additionally, this reduces the chance that we get
another syntax matching bootstrap hell.
This PR wraps `simpLemma` and `grindLemma` in `ppGroup` to make sure
that the modifiers aren't printed separately from the term / identifier.
Example:
```
simp only [very_long_lemma_oh_no_can_you_please_stop_we're_getting_to_the_limit, ←
wait_this_is_rewritten_backwards_oh_uhh_where's_the_arrow_you_ask?_oh_wait_it's_up_there!]
==>
simp only [very_long_lemma_oh_no_can_you_please_stop_we're_getting_to_the_limit,
← wait_this_is_rewritten_backwards_and_wow_it's_very_clear_and_obvious]
```
This PR tightens the IR typing rules around applications of closures.
When re-reading some code, I realized that the code in `mkPartialApp`
has a clear typo—`.object` and `type` should be swapped. However, it
doesn't matter, because later IR passes smooth out the mismatch here. It
makes more sense to be strict up-front and require applications of
closures to always return an `.object`.
This PR removes a rather ugly hack in the module system, exposing the
bodies of theorems whose types mention `WellFounded`.
The original motivation was that reducing well-founded definitions (e.g.
in `by rfl`) requires reducing proofs, so they need to be available.
But reducing proofs is generally fraught with peril, and we have been
nudging our users away from using it for a while, e.g. in #5182. Since
the module system is opt-in and users will gradually migrate to it, it
may be reasonable to expect them to avoid reducing well-founded
recursion in the process.
This way we don't need hacks like this (which, without evidence, I
believe would be incomplete anyways) and we get the nice guarantee that
within the module system, theorem bodies are always private.
This PR removes some unnecessary `Decidable*` instance arguments by
using lemmas in the `Classical` namespace instead of the `Decidable`
namespace.
This might lead to some additional dependency on classical axioms, but
large parts of the standard library are relying on them either way.
This PR ensures that the state is reverted when compilation using the
new compiler fails. This is especially important for noncomputable
sections where the compiler might generate half-compiled functions which
may then be erroneously used while compiling other functions.
This PR generalizes the `a^(m+n)` grind normalizer to any semirings.
Example:
```
variable [Field R]
example (M : R) (h₀ : M ≠ 0) {n : Nat} (hn : n > 0) : M ^ n / M = M ^ (n - 1) := by
cases n <;> grind
```
This PR adds support for representing more inductives as enums; in short,
it extends support to those that fail to be enums only
because of parameters or irrelevant fields. While this is nice to have,
it is actually motivated by correctness of a future desired
optimization. The existing type representation is unsound if we
implement `object`/`tobject` distinction between values guaranteed to be
an object pointer and those that may also be a tagged scalar. In
particular, types like the ones added in this PR's tests would have all
of their constructors encoded via tagged values, but under the natural
extension of the existing rules of type representation they would be
considered `object` rather than `tobject`.
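A hypothetical example of the newly supported shapes (the type is made up): a parameter and an irrelevant proof field previously prevented the enum representation, even though every constructor is scalar-representable after erasure.
```lean
inductive Tri (α : Type) where
  | lo
  | mid (h : True)  -- irrelevant field, erased at runtime
  | hi
```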
This PR converts the `lowerEnumToScalarType?` cache to a cache of IR
types of named types. This is more sensible than just focusing on the
enum optimization, and due to uniform representation of polymorphism we
have to compile `Constant T1` and `Constant T2` to the same
representation.
Bumps
[dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact)
from 10 to 11.
Full changelog: https://github.com/dawidd6/action-download-artifact/compare/v10...v11
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This PR changes ToIR to call `lowerEnumToScalarType?` with
`ConstructorVal.induct` rather than the name of the constructor itself.
This was an oversight in some refactoring of code in the new compiler
before landing it. It should not affect runtime of compiled code (due to
the extra tagging/untagging being optimized by LLVM), but it does make
IR for the interpreter slightly more efficient.
This PR fixes spacing in the `grind` attribute and tactic syntax.
Previously `@[grind]` was incorrectly pretty-printed as `@[grind ]`, and
`grind [...] on_failure ...` was pretty-printed as `grind [...]on_failure
...`. It also fixes that `on_failure` was reserved as a keyword.
This PR moves the construction of the `Option.SomeLtNone.lt` (and `le`)
relation, in which `some` is less than `none`, to
`Init.Data.Option.Basic` and moves well-foundedness proofs for
`Option.lt` and `Option.SomeLtNone.lt` into `Init.Data.Option.Lemmas`.
This PR improves the “expected type mismatch” error message by omitting
the types' types when they are defeq, and putting them on separate
lines when not.
I found it rather tedious to parse the error message when the expected
type is long, because I had to find the `:` in the middle of a large
expression somewhere. Also, when both are of sort `Prop` or `Type` it
doesn't add much value to print the sort (and it’s only one hover away
anyways).
This PR further improves release automation, automatically incorporating
material from `nightly-testing` and `bump/v4.X.0` branches in the bump
PRs to downstream repositories.
This PR adjusts the experimental module system to not import the IR of
non-`meta` declarations. It does this by replacing such IR with opaque
foreign declarations on export and adjusting the new compiler
accordingly.
This PR should not be merged before the new compiler.
Based on #8664.
This PR fixes a bug introduced by #9081 where the source file was dropped
from the module input trace and some entries were dropped from the
module job log.
This PR moves the constructor layout code from C++ to Lean. When
writing the new compiler, we just reused the existing C++ code,
even though it was a bit inconvenient, because we wanted to
ensure that constructor layout always matched the existing
compiler.
This fixes #2589 by handling struct field types just like any
other type being lowered, and thus applying the trivial structure
optimization in the process. Originally, I wanted to port the
code to Lean without any functional changes, but I found that
it took less code to just implement it "correctly" and get this
fix as a consequence than to emulate the bugs of the existing
C++ implementation.
This PR ensures that `mspec` uses the configured transparency setting
and makes `mvcgen` use default transparency when calling `mspec`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR further updates release automation. The per-repository update
script `script/release_steps.py` now actually performs the tests,
rather than outputting a script for the release manager to run line by
line. It's been tested on `v4.21.0` (i.e. the easy case of a stable
release), and we'll debug its behaviour on `v4.22.0-rc1` tonight.
This PR fixes some bugs with the local Lake artifact cache and cleans up
the surrounding API. It also adds the ability to opt-in to the cache on
packages without `enableArtifactCache` set using the
`LAKE_ARTIFACT_CACHE` environment variable.
Bug-wise, this fixes an issue where cached executables did not have the right
permissions to run on Unix systems and a bug where cached artifacts
would not be invalidated on changes. Lake also now writes a trace file
to the local build directory if there is none when fetching an artifact from
the cache. This trace has a new `synthetic` field set to `true` to
distinguish it from traces produced by full builds.
This PR removes the `irreducible` attribute from `letFun`, which is one
step toward removing special `letFun` support; part of #9086.
Removing the attribute seems to break some `module` tests in stage2.
This PR improves `pp.oneline` so that it now preserves tags when
truncating formatted syntax to a single line. Note that the `[...]`
continuation does not yet have any functionality to enable seeing the
untruncated syntax. Closes #3681.
This PR enables transforming nondependent `let`s into `have`s in a
number of contexts: the bodies of nonrecursive definitions, equation
lemmas, smart unfolding definitions, and types of theorems. A motivation
for this change is that when zeta reduction is disabled, `simp` can only
effectively rewrite `have` expressions (e.g. `split` uses `simp` with
zeta reduction disabled), and so we cache the nondependence calculations
by transforming `let`s to `have`s. The transformation can be disabled
using `set_option cleanup.letToHave false`.
Uses `Meta.letToHave`, introduced in #8954.
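For illustration, a minimal sketch (the declarations are made up):
```lean
-- `m` does not appear in the type of the body, so this nondependent `let`
-- can be turned into a `have` during cleanup.
def double (n : Nat) : Nat :=
  let m := n + 1
  m + m

-- Disabling the transformation for a single declaration:
set_option cleanup.letToHave false in
def double' (n : Nat) : Nat :=
  let m := n + 1
  m + m
```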
This PR adds a `warn.sorry` option (default true) that logs the
"declaration uses 'sorry'" warning when declarations contain `sorryAx`.
When false, the warning is not logged.
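For illustration, a minimal sketch of using the option:
```lean
-- Suppresses the "declaration uses 'sorry'" warning for this declaration only.
set_option warn.sorry false in
theorem pending : True := sorry
```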
Closes #8611 (assuming that one would set `warn.sorry` as an extra flag
when building).
Other change: Uses `warn.sorry` when creating auxiliary declarations in
`structure` elaborator, to suppress irrelevant 'sorry' warnings.
We could include the sorries themselves in the message if they are
labeled, letting users "go to definition" to see where the sorries are
coming from.
An earlier version added additional information to the warning when
it is a synthetic sorry, since these can be caused by elaboration bugs
or by elaboration failures in previous
declarations. This idea needs some more work, so it's not included.
This PR fixes a bug with Lake where the job monitor would sit on a
top-level build (e.g., `mathlib/Mathlib:default`) instead of reporting
module build progress.
The issue was actually simpler than it initially appeared. The wrong
portion of the module build was being registered with the job monitor. Moving
it to the right place fixes it; no job priorities are necessary.
This PR adds an unexpander for `OfSemiring.toQ`. This is an auxiliary
function used by the `ring` module in `grind`, but we want to reduce the
clutter in the diagnostic information produced by `grind`. Example:
```
example [CommSemiring α] [AddRightCancel α] [IsCharP α 0] (x y : α)
: x^2*y = 1 → x*y^2 = y → x + y = 2 → False := by
grind
```
produces
```
[ring] Ring `Ring.OfSemiring.Q α` ▼
  [basis] Basis ▼
    [_] ↑x + ↑y + -2 = 0
    [_] ↑y + -1 = 0
```
This PR uses the commutative ring module to normalize nonlinear
polynomials in `grind cutsat`. Examples:
```lean
example (a b : Nat) (h₁ : a + 1 ≠ a * b * a) (h₂ : a * a * b ≤ a + 1) : b * a^2 < a + 1 := by
grind
example (a b c : Int) (h₁ : a + 1 + c = b * a) (h₂ : c + 2*b*a = 0) : 6 * a * b - 2 * a ≤ 2 := by
grind
```
This PR implements support for the type class `LawfulEqCmp`. Examples:
```lean
example (a b c : Vector (List Nat) n)
: b = c → a.compareLex (List.compareLex compare) b = o → o = .eq → a = c := by
grind
example [Ord α] [Std.LawfulEqCmp (compare : α → α → Ordering)] (a b c : Array (Vector (List α) n))
: b = c → o = .eq → a.compareLex (Vector.compareLex (List.compareLex compare)) b = o → a = c := by
grind
```
This PR adjusts the experimental module system to make `private` the
default visibility modifier in `module`s, introducing `public` as a new
modifier instead. `public section` can be used to revert the default for
an entire section, though this is more intended to ease gradual adoption
of the new semantics such as in `Init` (and soon `Std`) where they
should be replaced by a future decl-by-decl re-review of visibilities.
This PR implements support for equations `<num> = 0` in rings and fields
of unknown characteristic. Examples:
```lean
example [Field α] (a : α) : (2 * a)⁻¹ = a⁻¹ / 2 := by grind
example [Field α] (a : α) : (2 : α) ≠ 0 → 1 / a + 1 / (2 * a) = 3 / (2 * a) := by grind
example [CommRing α] (a b : α) (h₁ : a + 2 = a) (h₂ : 2*b + a = 0) : a = 0 := by
grind
example [CommRing α] (a b : α) (h₁ : a + 6 = a) (h₂ : b + 9 = b) (h₂ : 3*b + a = 0) : a = 0 := by
grind
example [CommRing α] (a b : α) (h₁ : a + 6 = a) (h₂ : b + 9 = b) (h₂ : 3*b + a = 0) : a = 0 := by
grind
example [CommRing α] (a b : α) (h₁ : a + 2 = a) (h₂ : b = 0) : 4*a + b = 0 := by
grind
example [CommRing α] (a b c : α) (h₁ : a + 6 = a) (h₂ : c = c + 9) (h : b + 3*c = 0) : 27*a + b = 0 := by
grind
```
This PR proves that the default `toList`, `toListRev` and `toArray`
functions on slices can be described in terms of the slice iterator.
Relying on new lemmas for the `uLift` and `attachWith` iterator
combinators, a more concrete description of said functions is given for
`Subarray`.
This PR introduces a simple variable-reordering heuristic for `cutsat`.
It is needed by the `ToInt` adapter to support finite types such as
`UInt64`. The current encoding into `Int` produces large coefficients,
which can enlarge the search space when an unfavorable variable order is
used. Example:
```lean
example (a b c : UInt64) : a ≤ 2 → b ≤ 3 → c - a - b = 0 → c ≤ 5 := by
grind
```
This PR implements support for equalities and disequalities in `grind
cutsat`. We still have to improve the encoding. Examples:
```lean
example (a b c : Fin 11) : a ≤ 2 → b ≤ 3 → c = a + b → c ≤ 5 := by
grind
example (a : Fin 2) : a ≠ 0 → a ≠ 1 → False := by
grind
```
This PR enables the error explanation widget in named error messages.
Note that the displayed links won't work until the new manual version is
released (unless `LEAN_MANUAL_ROOT` is overridden with a suitably recent
manual build).
This PR replaces all usages of `[:]` slice notation in `src` with the
new `[...]` notation in production code, tests and comments. The
underlying implementation of the `Subarray` functions stays the same.
Notation cheat sheet:
* `*...*` is the doubly-unbounded range.
* `*...a` or `*...<a` contains all elements that are less than `a`.
* `*...=a` contains all elements that are less than or equal to `a`.
* `a...*` contains all elements that are greater than or equal to `a`.
* `a...b` or `a...<b` contains all elements that are greater than or
equal to `a` and less than `b`.
* `a...=b` contains all elements that are greater than or equal to `a`
and less than or equal to `b`.
* `a<...*` contains all elements that are greater than `a`.
* `a<...b` or `a<...<b` contains all elements that are greater than `a`
and less than `b`.
* `a<...=b` contains all elements that are greater than `a` and less
than or equal to `b`.
Benchmarks have shown that importing the iterator-backed parts of the
polymorphic slice library in `Init` impacts build performance. This PR
avoids this problem by separating those parts of the library that do not
rely on iterators from those that do. Wherever the new slice
notation is used, only the iterator-independent files are imported.
This PR implements support for strict inequalities in the `ToInt`
adapter used in `grind cutsat`. Example:
```lean
example (a b c : Fin 11) : c ≤ 9 → a ≤ b → b < c → a < c + 1 := by
grind
```
This PR fixes a type error in `mvcgen` and makes it turn fewer natural
goals into synthetic opaque ones, so that tactics such as `trivial` may
instantiate them more easily.
---------
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR makes `mspec` detect more viable assignments by `rfl` instead of
generating a VC.
---------
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
Co-authored-by: Rishikesh Vaishnav <rishhvaishnav@gmail.com>
This PR adds a Mathlib-like testing and feedback system for the
reference manual. Lean PRs will receive comments that reflect the status
of the language reference with respect to the PR.
This PR adds test cases for the VC generator and implements a few small
and tedious fixes to ensure they pass.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR extends the list of acceptable characters to all the French ones,
as well as some others, by adding characters from the Latin-1 Supplement
and Latin Extended-A Unicode blocks.
This PR adds DNS functions to the standard library.
---------
Co-authored-by: Henrik Böving <hargonix@gmail.com>
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR adds a function called `lean_setup_libuv` that initializes
required libuv components. It needs to be outside of
`lean_initialize_runtime_module` because it requires `argv` and `argc`
to work correctly.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
This PR fixes a couple of bootstrapping-related hiccups in the newly
added `Std.Do` module. More precisely,
* The `spec` attribute syntax was registered under the wrong name and
its implementation needed to use a parser with a different priority
* Elaborators and delaborators for `MGoal`, `Triple`, `PostCond` and
`PostCond.total` were broken and are now properly builtin
* `Std.Do` should not transitively import `Std.Tactic.Do.Syntax`
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR fixes a bug where semantic highlighting would only highlight
keywords that started with an alphanumeric character. Now, it uses
`Lean.isIdFirst`.
This PR provides an iterator combinator that lifts the emitted values
into a higher universe level via `ULift`. This combinator is then used
to make the subarray iterators universe-polymorphic. Previously, they
were only available for `Subarray α` if `α : Type`.
This PR implements support for (non-strict) `ToInt` inequalities in
`grind cutsat`. `grind cutsat` can solve simple problems such as:
```lean
example (a b c : Fin 11) : a ≤ b → b ≤ c → a ≤ c := by
grind
example (a b c : Fin 11) : c ≤ 9 → a ≤ b → b ≤ c → a ≤ c + 1 := by
grind
example (a b c : UInt8) : a ≤ b → b ≤ c → a ≤ c := by
grind
example (a b c d : UInt32) : a ≤ b → b ≤ c → c ≤ d → a ≤ d := by
grind
```
Next step: strict inequalities, and equalities.
This PR introduces a local artifact cache for Lake. When enabled, Lake
will share build artifacts (built files) across different instances of
the same package using an input- and content-addressed cache.
To enable support for the local cache, packages must set
`enableArtifactCache := true` in their package configuration. The reason
for this is twofold. This feature is new and experimental, so it should
be opt-in. Also, some packages may need to disable it as the cache
entails that artifacts are no longer necessarily available within the
build directory, which can break custom build scripts.
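For example, opting in might look like this in a `lakefile.lean` (a sketch; the package name is illustrative):
```lean
import Lake
open Lake DSL

package demoPkg where
  -- opt into the experimental local artifact cache
  enableArtifactCache := true
```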
The cache location is determined by the system configuration. Lake's
first preference is to store it under the Lean toolchain in a
`lake/cache` directory. If Elan is not available, Lake will store it in
a common system location (e.g., `$XDG_CACHE_HOME/lake`, or
`~/.cache/lake`). On an exotic system where neither of these exist, the
cache will be disabled. Users can override this location through the
`LAKE_CACHE_DIR` environment variable. If set to empty, caching will be
disabled.
The cache is both input and content-addressed. Mappings from input hash
to output content hash(es) are stored in a per-package JSON Lines file
(e.g., `<cache-dir>/inputs/<pkg-name>.jsonl`). Thus, mappings are shared
across different instances of a package, but not between packages. The
output content hashes are also now stored in trace files in a new
`outputs` field. The value of this field can be either a single hash or
an object of multiple content hashes for targets which produce multiple
artifacts (e.g., Lean module builds). Separately, artifacts are stored
in a single flat content-addressed cache (e.g.,
`<cache-dir>/artifacts/<hash>.art`). Artifacts are therefore shared
across all cache-enabled packages.
Module `*.olean` and `*.ilean` artifacts are cached. However, each
package will still copy the files to their build directory, as Lean and
the server currently expect them to be at a specific path. This will be
changed for `*.olean` files when the performance issues with
pre-resolving modules in Lake for `lean --setup` are solved.
This PR adds the following features to `simp`:
- A routine for simplifying `have` telescopes in a way that avoids
quadratic complexity arising from locally nameless expression
representations, like what #6220 did for `letFun` telescopes.
Furthermore, simp converts `letFun`s into `have`s (nondependent lets),
and we remove the #6220 routine since we are moving away from `letFun`
encodings of nondependent lets.
- A `+letToHave` configuration option (enabled by default) that converts
lets into haves when possible, when `-zeta` is set. Previously Lean
would need to do a full typecheck of the bodies of `let`s, but the
`letToHave` procedure can skip checking some subexpressions, and it
modifies the `let`s in an entire expression at once rather than one at a
time.
- A `+zetaHave` configuration option, to turn off zeta reduction of
`have`s specifically. The motivation is that dependent `let`s can only
be dsimped by zeta reduction, so zeta reducing just the dependent lets is a
reasonable way to make progress. The `+zetaHave` option is also added to
the meta configuration.
- When `simp` is zeta reducing, it now uses an algorithm that avoids
complexity quadratic in the depth of the let telescope.
- Additionally, the zeta reduction routines in `simp`, `whnf`, and
`isDefEq` now all are consistent with how they apply the `zeta`,
`zetaHave`, and `zetaUnused` configurations.
The `letToHave` option addresses a TODO in `getSimpLetCase` ("handle
a block of nested let decls in a single pass if this becomes a
performance problem").
Performance should be compared to before #8804, which temporarily
disabled the #6220 optimizations for `letFun` telescopes.
Good kernel performance depends on carefully handling the `have`
encoding. Due to the way the kernel instantiates bvars (it does *not*
beta reduce when instantiating), we cannot use congruence theorems of
the form `(have x := v; f x) = (have x := v'; f' x)`, since the bodies
of the `have`s will not be syntactically equal, which triggers zeta
reduction in the kernel in `is_def_eq`. Instead, we work with `f v = f'
v'`, where `f` and `f'` are lambda expressions. There is still zeta
reduction, but only when converting between these two forms at the
outset of the generated proof.
This PR adds `BitVec.toFin_(sdiv, smod, srem)` and `BitVec.toNat_srem`.
The strategy for the `rhs` of the `toFin_*` lemmas is to consider what
the corresponding `toNat_*` theorems do and push the `toFin` closer to
the operands. For the `rhs` of `BitVec.toNat_srem` I used the same
strategy as `BitVec.toNat_smod`.
This PR closes #3791, making sure that the `Syntax` formatter inserts
whitespace before and after comments in the leading and trailing text of
`Syntax`, to avoid having comments comment out any following syntax and to
prevent comments' lexical syntax from being interpreted as part of
another syntax. If the text contains newlines before or after any
comments, they are formatted as hard newlines rather than soft newlines.
For example, `--` comments will have a hard newline after them. Note:
metaprograms generating Syntax with comments should be sure to include
newlines at the ends of `--` comments.
This PR improves the error messages produced by invalid projections and
field notation. It also adds a hint to the "function expected" error
message noting the argument to which the term is being applied, which
can be helpful for debugging spurious "function expected" messages
actually caused by syntax errors.
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR introduces a Hoare logic for monadic programs in
`Std.Do.Triple`, and assorted tactics:
* `mspec` for applying Hoare triple specifications
* `mvcgen` to turn a Hoare triple proof obligation `⦃P⦄ prog ⦃Q⦄` into
pure verification conditions (i.e., without any traces of Hoare triples
or weakest preconditions reminiscent of `prog`). The resulting
verification conditions in the stateful logic of `Std.Do.SPred` can be
discharged manually with the tactics coming with its custom proof mode
or with automation such as `simp` and `grind`.
This is a pre-release of a planned feature and not yet intended for
production use. We are grateful for feedback from early adopters, though.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR introduces polymorphic slices in their most basic form. They
come with a notation similar to the new range notation. `Subarray` is
now also a slice and can produce an iterator. It is intended to
migrate more operations of `Subarray` to the `Slice` wrapper type to
make them available for slices of other types, too.
The PR also moves the `filterMap` combinators into `Init` because they
are used internally to implement iterators on array slices.
This PR adds the types `Std.ExtDTreeMap`, `Std.ExtTreeMap` and
`Std.ExtTreeSet` of extensional tree maps and sets. These are very
similar in construction to the existing extensional hash maps with one
exception: extensional tree maps and sets provide all functions from
regular tree maps and sets. This is possible because in contrast to hash
maps, tree maps are always ordered.
This PR adds a logic of stateful predicates, `SPred`, to `Std.Do` in order to
support reasoning about monadic programs. It comes with a dedicated
proof mode whose tactics are accessible by importing
`Std.Tactic.Do`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR introduces ranges that are polymorphic, in contrast to the
existing `Std.Range` which only supports natural numbers.
Breakdown of core changes:
* `Lean.Parser.Basic`: Modified the number parser (`Lean.Parser.Basic`)
so that it will only consider a *single* dot to be part of a decimal
number. `1..` will no longer be parsed as `1.` followed by `.`, but as
`1` followed by `..`.
* The test `ellipsisProjIssue` ensures that `#check Nat.add ...succ`
produces a syntax error. After introducing the new range notation (see
below), it returns a different (less nice) error message. I updated the
test to reflect the new error message. (The error message will become
nicer as soon as a delaborator for the ranges is implemented. This is
out of scope for this PR.)
Breakdown of standard library changes:
Modified modules: `Init.Data.Range.Polymorphic` (added),
`Init.Data.Iterators`, `Std.Data.Iterators`
* Introduced the type `Std.PRange` that is parameterized over the type
in which the range operates and the shapes of the lower and upper bound.
* Introduced a new notation for ranges. Examples for this notation are:
`1...*`, `1...=3`, `1...<3`, `1<...=2`, `*...=3`.
* Defined lots of typeclasses for different capabilities of ranges,
depending on their shape and underlying type.
* Introduced `Iter(M).size`.
* Introduced the `Iter(M).stepSize n` combinator, which iterates over an
iterator with the given step size `n`. It will drop `n - 1` values
between every value it emits.
* Replaced `LawfulPureIterator` with a new and better typeclass
`LawfulDeterministicIterator`.
* Simplified some lemma statements in the iterator library such as
`IterM.toList_eq_match`, which unnecessarily matched over a `Subtype`,
hindering rewrites due to type dependencies.
Reasons for the concrete choice of notation:
* `lean4-cli` uses `...`-based notation for the `Cmd` notation and it
clashes with `...a` range notation.
* test `2461` fails when using two-dot-based notation because of the
existing `{ a.. }` notation.
This PR adds a generic `MonadLiftT Id m` instance. We do not implement a
`MonadLift Id m` instance because it would slow down instance resolution
and because it would create more non-canonical instances. This change
makes it possible to iterate over a pure iterator, such as `[1, 2,
3].iter`, in arbitrary monads.
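A sketch of what this enables, assuming the iterator library's `ForIn` support for `Iter` (the definition name is illustrative):
```lean
-- Consume a pure iterator inside `IO`, relying on the generic `MonadLiftT Id m` instance.
def printElems : IO Unit := do
  for x in [1, 2, 3].iter do
    IO.println x
```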
This PR adds a new monadic interface for `Async` operations.
This is the design for the `Async` monad that I liked the most. The idea
was refined with the help of @tydeu. Before that, I had some
prerequisites in mind:
1. Good performance
2. Explicit `yield` points, so we could avoid using `bindTask` for every
lifted IO operation
3. A way to avoid creating an infinite chain of `Task`s during recursion
Points 2 and 3 are not covered in this PR; I wish I had a good
solution, but right now I only have a few sketches.
### Explicit `yield` points
I thought this would be easy at first, but it actually turned out kinda
tricky. I ended up creating the `suspend` syntax, which is just a small
modification of the lift method (`<- ...`) syntax. It desugars to
`Suspend.suspend task fun _ => ...`. So something like:
```lean
do
IO.println "a"
IO.println "b"
let result := suspend (client.recv? 1024)
IO.println "c"
IO.println "d"
```
Would become:
```lean
Bind.bind (IO.println "a") fun _ =>
Bind.bind (IO.println "b") fun _ =>
Suspend.suspend (client.recv? 1024) fun message =>
Bind.bind (IO.println "c") fun _ =>
IO.println "d"
```
This makes things a bit more efficient. When using `bind`, we would try
to avoid creating a `Task` chain, and the `suspend` would be the only
place we use `Task.bind`. But there's a problem: if we use `bind` with
something that needs `suspend`, it’ll block the whole task. Blocking is
the only way to prevent task accumulation when using plain `bind` inside
a structure like that:
```
inductive AsyncResult (ε σ α : Type u) where
| ok : α → σ → AsyncResult ε σ α
| error : ε → σ → AsyncResult ε σ α
| ofTask : Task (EStateM.Result ε σ α) → σ → AsyncResult ε σ α
```
Because we simply need to remove the `ofTask` and transform it into an
`ok`.
### Infinite chain of Tasks
If you create an infinite recursive function using `Task` (which is
super common in servers like HTTP ones), it can lead to a lot of memory
usage, because those tasks get chained forever and won't be freed until
the function returns.
To get around that, I used CPS and instead of just calling `Task.bind`,
I’d spawn a new task and return an "empty" one like:
```lean
fun k => Task.bind (...) fun value => do k value; pure emptyTask
```
This works great with a CPS-style monad, but it generates a huge IR by
itself.
Just doing CPS alone was too much, though, because every lifted
operation created a new continuation and a `Task.bind`. So, I used it
with `suspend` and got better performance, but the usage is not good
with `suspend`.
### The current monad
Right now, the monad I’m using is super simple. It doesn't solve the
earlier problems, but the API is clean, and the generated IR is small
enough. An example of how we should use it is:
```lean
-- A loop that repeatedly sends a message and waits for a reply.
partial def writeLoop (client : Socket.Client) (message : String) : Async (AsyncTask Unit) := async do
IO.println s!"sending: {message}"
await (← client.send (String.toUTF8 message))
if let some mes ← await (← client.recv? 1024) then
IO.println s!"received: {String.fromUTF8! mes}"
-- use parallel to avoid building up an infinite task chain
parallel (writeLoop client message)
else
IO.println "client disconnected from receiving"
-- Server’s main accept loop, keeps accepting and echoing for new clients.
partial def acceptLoop (server : Socket.Server) (promise : IO.Promise Unit) : Async (AsyncTask Unit) := async do
let client ← await (← server.accept)
await (← client.send (String.toUTF8 "tutturu "))
-- allow multiple clients to connect at the same time
parallel (writeLoop client "hi!!")
-- and keep accepting more clients, parallel again to avoid building up an infinite task chain
parallel (acceptLoop server promise)
-- A simple client that connects and sends a message.
def echoClient (addr : SocketAddress) (message : String) : Async (AsyncTask Unit) := async do
let socket ← Client.mk
await (← socket.connect addr)
parallel (writeLoop socket message)
-- TCP setup: bind, listen, serve, and run a sample client.
partial def mainTCP : Async Unit := do
let addr := SocketAddressV4.mk (.ofParts 127 0 0 1) 8080
let server ← Server.mk
server.bind addr
server.listen 128
-- promise exists since the server is (probably) never going to stop
let promise ← IO.Promise.new
let acceptAction ← acceptLoop server promise
await (← echoClient addr "hi!")
await acceptAction
await promise
-- Entry point
def main : IO Unit := mainTCP.wait
```
---------
Co-authored-by: Henrik Böving <hargonix@gmail.com>
Co-authored-by: Mac Malone <tydeu@hatpress.net>
This PR adjusts the `lean.redundantMatchAlt` error explanation to remove
the word "unprefixed," which the reference manual's style linter does
not recognize.
This PR refactors the juggling of universes in the linear
`noConfusionType` construction: Instead of using `PUnit.{…} → ` to get
the branches of `withCtorType` to the same universe level, we use
`PULift`.
This fixes https://github.com/leanprover/lean4/issues/8962, although
probably doesn’t solve all issues of that kind while level equality
checking is incomplete.
This PR updates the `solveMonoStep` function used in the `monotonicity`
tactic to check for definitional equality between the current goal and
the monotonicity proof obtained from a recursive call. This ensures
soundness by preventing incorrect applications when
`Lean.Order.PartialOrder` instances differ—an issue that can arise with
`mutual` blocks defined using the `partial_fixpoint` keyword, where
different `Lean.Order.CCPO` structures may be involved.
Closes https://github.com/leanprover/lean4/issues/8894.
This PR adds some missing `ToInt.X` typeclass instances for `grind`.
There are still several more to add (in particular, for `ToInt.Pow`),
but I am going to perform an intermediate refactor first.
This PR removes Lake's usage of `lean -R` and `moduleNameOfFileName` to
pass module names to Lean. For workspace modules, it now relies on
directly passing the module name through `lean --setup`. For
non-workspace modules passed to `lake lean` or `lake setup-file`, it
uses a fixed module name of `_unknown`.
This means that `lake lean` and `lake setup-file` can be successfully
and consistently used on modules that do not lie under the working
directory or the workspace root.
This PR passes Lean options configured via CMake variables onto the Lake
build. For example, this will ensure CI's setting of `warningAsError` via
`LEAN_EXTRA_MAKE_OPTS` reaches Lake.
This PR both adds initial `@[grind]` annotations for `BitVec`, and uses
`grind` to remove many proofs from `BitVec/Lemmas`.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR adds `BitVec.(getElem, getLsbD, getMsbD)_(smod, sdiv, srem)`
theorems to complete the API for `sdiv`, `srem`, `smod`. Even though the
rhs is not particularly succinct (it's hard to find a meaning for what it
means to have "the n-th bit of the result of a signed division/modulo
operation"), these lemmas prevent the need to `unfold` the operations.
---------
Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
This PR embeds a NatModule into its IntModule completion, which is
injective when we have AddLeftCancel, and monotone when the modules are
ordered. Also adds some (failing) grind test cases that can be verified
once `grind` uses this embedding.
This PR changes the CI setup to generate `lean-pr-testing-NNNN` branches
for Mathlib on the `leanprover-community/mathlib4-nightly-testing` fork,
rather than on the main repo.
This PR adds instances showing that the Grothendieck (i.e. additive)
envelope of a semiring is an ordered ring if the original semiring is
ordered (and satisfies ExistsAddOfLE), and in this case the embedding is
monotone.
This PR improves the case splitting strategy used in `grind`, and
ensures `grind` also considers simple `match`-conditions for
case-splitting. Example:
```lean
example (x y : Nat)
: 0 < match x, y with
| 0, 0 => 1
| _, _ => x + y := by -- x or y must be greater than 0
grind
```
This PR adds configuration options to the `let`/`have` tactic syntaxes.
For example, `let (eq := h) x := v` adds `h : x = v` to the local
context. The configuration options are the same as those for the
`let`/`have` term syntaxes.
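A minimal sketch of the tactic-level `eq` option (the names `n` and `h` are illustrative):
```lean
example (a b : Nat) : True := by
  let (eq := h) n := a + b
  -- the local context now contains `n : Nat := a + b` and `h : n = a + b`
  trivial
```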
This PR changes `toLCNF` to stop caching translations of expressions
upon seeing an expression marked `never_extract`. This is more
coarse-grained than it needs to be, but it is difficult to do any
better, as the new compiler's `Expr` cache is based on structural
identity (rather than the pointer identity of the old compiler).
The newly added `tests/compiler/never_extract.lean` is also converted
into a `run` test, because during development I found the order of the
output to `stderr` to be a bit finicky. The reason for making it a
`compiler` test in the first place is that closed term decls work
slightly differently between native code and the interpreter, and it
would be good to test both, but we already have separate tests for
`never_extract` and closed term extraction.
Fixes #8944.
This PR adds a procedure that efficiently transforms `let` expressions
into `have` expressions (`Meta.letToHave`). This is exposed as the
`let_to_have` tactic.
It uses the `withTrackingZetaDelta` technique: the expression is
typechecked, and any `let` variables that don't enter the zeta delta set
are nondependent. The procedure uses a number of heuristics to limit the
amount of typechecking performed. For example, it is ok to skip
subexpressions that do not contain fvars, mvars, or `let`s.
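A small sketch of the new tactic, assuming it rewrites the goal by default:
```lean
example : (let x := 1; x + 1) = 2 := by
  let_to_have
  -- the nondependent `let` in the target has been turned into a `have`
  rfl
```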
This PR implements support for normalization in commutative semirings
that do not implement `AddRightCancel`. Examples:
```lean
variable (R : Type u) [CommSemiring R]
example (a b c : R) : a * (b + c) = a * c + b * a := by grind
example (a b : R) : (a + b)^2 = a^2 + 2 * a * b + b^2 := by grind
example (a b : R) : (a + 2 * b)^2 = a^2 + 4 * a * b + 4 * b^2 := by grind
example (a b : R) : (a + 2 * b)^2 = 4 * b^2 + b * 4 * a + a^2 := by grind
```
This PR fixes the handling of the `never_extract` attribute in the
compiler's CSE pass. There is an interesting debate to be had about
exactly how hard the compiler should try to avoid duplicating anything
that transitively uses `never_extract`, but this is the simplest form
and roughly matches the check in the old compiler (although due to
different handling of local function decls in the two compilers, the
consequences might be slightly different).
This gets half of the way to #8944.
This PR adds support to the server for the new module setup process by
changing how `lake setup-file` is used.
In the new server setup, `lake setup-file` is invoked with the file name
of the edited module passed as a CLI argument and with the parsed header
passed to standard input in JSON form. Standard input is used to avoid
potentially exceeding the CLI length limits on Windows. Lake will build
the module's imports along with any other dependencies and then return
the module's workspace configuration via JSON (now in the form of
`ModuleSetup`). The server then post-processes this configuration a bit
and returns it back to the Lean language processor.
The server's header is currently only fully respected by Lake for
external modules (files that are not part of any workspace library). For
workspace modules, the saved module header is currently used to build
imports (as has been done since #7909). A follow-up Lake PR will align
both cases to follow the server's header.
Lean search paths (e.g., `LEAN_PATH`, `LEAN_SRC_PATH`) are no longer
negotiated between the server and Lake. These environment variables are
already configured during server setup by `lake serve` and do not change
on a per-file basis. Lake can also pre-resolve the `.olean` files of
imports via the `importArts` field of `ModuleSetup`, limiting the
potential utility of communicating `LEAN_PATH`.
This PR skips attempting to compute a module name from the file name and
root directory (i.e., `lean -R`) if a name is already provided via `lean
--setup`.
This is accomplished by porting the rest of the frontend code in the
`try` block to Lean.
This PR adds explanations for a few errors concerning noncomputability,
redundant match alternatives, and invalid inductive declarations.
These adopt a lower-case error naming style, which is also applied to
existing error explanation tests.
This PR upgrades the `math` template for `lake init` and `lake new` to
configure the new project to meet rigorous Mathlib maintenance
standards. In comparison with the previous version (now available as
`lake new ... math-lax`), this automatically provides:
* Strict linting options matching Mathlib.
* GitHub workflow for automatic upgrades to newer Lean and Mathlib
releases.
* Automatic release tagging for toolchain upgrades.
* API documentation generated by
[doc-gen4](https://github.com/leanprover/doc-gen4) and hosted on
`github.io`.
* README with some GitHub-specific instructions.
The previous edition of the template is still available, renamed to
`math-lax`.
---------
Co-authored-by: Mac Malone <tydeu@hatpress.net>
This PR introduces antitonicity lemmas that support the elaboration of
mixed inductive-coinductive predicates defined using the
`least_fixpoint` / `greatest_fixpoint` constructs.
For instance, the following definition elaborates correctly because all
occurrences of the inductively defined predicate `tock` within the
coinductive definition of `tick` appear in negative positions. The dual
situation applies to the definition of `tock`:
```
mutual
def tick : Prop :=
tock → tick
greatest_fixpoint
def tock : Prop :=
tick → tock
least_fixpoint
end
```
This PR allows `simp` to recognize and warn about simp lemmas that are
likely looping in the current simp set. It does so automatically
whenever simplification fails with the dreaded “max recursion depth”
error, but it can be made to do it always with `set_option
linter.loopingSimpArgs true`. This check is not on by default because it
is somewhat costly, and can warn about simp calls that still happen to
work.
This closes #5111. In the end, this implemented much simpler logic than
described there (and tried in the abandoned #8688; see that PR
description for more background information), but it didn’t work as well
as I thought. The current logic is:
“Simplify the RHS of the simp theorem, complain if that fails”.
It is a reasonable policy for a Lean project to require that all simp
invocations are such that this linter does not complain. Often it is
just a matter of explicitly disabling some simp theorems from the
default simp set, to make it clear and robust that in this call, we do
not want them to trigger. But given that such simp calls often happen to
work, it’s too pedantic to impose this on everyone.
This PR adds the `+generalize` option to the `let` and `have` syntaxes.
For example, `have +generalize n := a + b; body` replaces all instances
of `a + b` in the expected type with `n` when elaborating `body`. This
can be likened to a term version of the `generalize` tactic. One can
combine this with `eq` in `have +generalize (eq := h) n := a + b; body`
as an analogue of `generalize h : n = a + b`.
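A minimal sketch mirroring the description (names are illustrative):
```lean
example (a b : Nat) : a + b = a + b :=
  have +generalize n := a + b
  -- every `a + b` in the expected type has been replaced by `n`, so the goal is `n = n`
  rfl
```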
This PR finishes post-stage0-cleanup after #8914 and #8929. Also:
- adds configuration options for `haveI` and `letI` terms.
- adds `letConfig` parser alias
This PR implements first-class support for nondependent let expressions
in the elaborator; recall that a let expression `let x : t := v; b` is
called *nondependent* if `fun x : t => b` typechecks, and the notation
for a nondependent let expression is `have x := v; b`. Previously we
encoded `have` using the `letFun` function, but now we make use of the
`nondep` flag in the `Expr.letE` constructor for the encoding. This has
been given full support throughout the metaprogramming interface and the
elaborator. Key changes to the metaprogramming interface:
- Local context `ldecl`s with `nondep := true` are generally treated as
`cdecl`s. This is because in the body of a `have` expression the
variable is opaque. Functions like `LocalDecl.isLet` by default return
`false` for nondependent `ldecl`s. In the rare case where it is needed,
they take an additional optional `allowNondep : Bool` flag (defaults to
`false`) if the variable is being processed in a context where the value
is relevant.
- Functions such as `mkLetFVars` by default generalize nondependent let
variables and create lambda expressions for them. The
`generalizeNondepLet` flag (default true) can be set to false if `have`
expressions should be produced instead. **Breaking change:** Uses of
`letLambdaTelescope`/`mkLetFVars` need to use `generalizeNondepLet :=
false`. See the next item.
- There are now some mapping functions to make telescoping operations
more convenient. See `mapLetTelescope` and `mapLambdaLetTelescope`.
There is also `mapLetDecl` as a counterpart to `withLetDecl` for
creating `let`/`have` expressions.
- Important note about the `generalizeNondepLet` flag: it should only be
used for variables in a local context that the metaprogram "owns". Since
nondependent let variables are treated as constants in most cases, the
`value` field might refer to variables that do not exist, if for example
those variables were cleared or reverted. Using `mapLetDecl` is always
fine.
- The simplifier will cache its let dependence calculations in the
nondep field of let expressions.
- The `intro` tactic still produces *dependent* local variables. Given
that the simplifier will transform lets into haves, it would be
surprising if that would prevent `intro` from creating a local variable
whose value cannot be used.
Note that nondependence of lets is not checked by the kernel. To
external checker authors: If the elaborator gets the nondep flag wrong,
we consider this to be an elaborator error. Feel free to typecheck `letE
n t v b true` as if it were `app (lam n t b default) v` and please
report issues.
This PR follows up from #8751, which made sure the nondep flag was
preserved in the C++ interface.
This PR is a followup to #8914, fixing an oversight where
`letIdDeclBinders` was not updated to the new format. This relies
on some bootstrapping code staying in place, but we do the bootstrap
cleanup that is currently possible.
This PR adds `IO.FS.Stream.readToEnd` which parallels
`IO.FS.Handle.readToEnd` along with its upstream definitions (i.e.,
`readBinToEndInto` and `readBinToEnd`). It also removes an unnecessary
`partial` from `IO.FS.Handle.readBinToEnd`.
This function is useful for reading, for example, all of standard input.
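For example, reading all of standard input might look like this (a sketch; the definition name is illustrative):
```lean
def readAllStdin : IO String := do
  let stdin ← IO.getStdin
  stdin.readToEnd
```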
This PR generalizes `IO.FS.lines` with `IO.FS.Handle.lines` and adds the
parallel `IO.FS.Stream.lines` for streams.
The stream version is useful for reading, for example, the lines of
standard input.
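A sketch, assuming the stream version mirrors `IO.FS.lines` and returns an `Array String` (the definition name is illustrative):
```lean
def countStdinLines : IO Nat := do
  let stdin ← IO.getStdin
  return (← stdin.lines).size
```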
This PR adds a linter (`linter.unusedSimpArgs`) that complains when a
simp argument (`simp [foo]`) is unused. It should do the right thing if
the `simp` invocation is run multiple times, e.g. inside `all_goals`. It
does not trigger when the `simp` call is inside a macro. The linter
message contains a clickable hint to remove the simp argument.
I chose to display a separate warning for each unused argument. This
means that the user has to click multiple times to remove all of them
(and wait for re-elaboration in between). But this just means multiple
endorphin kicks, and the main benefit over a single warning that would
have to span the whole argument list is that the squigglies already tell
the users about unused arguments.
This closes #4483.
Making Init and Std clean with respect to this linter revealed close to 1000
unused simp args, a pleasant experience for anyone enjoying tidying
things: #8905
The `.impure` LCNF `Phase` is not currently used, but was intended for a
potential future where the current `IR` passes (which operate on a
highly impure representation) were rewritten to operate on LCNF instead.
For several reasons, I don't think this is very likely to happen, and
instead we are more likely to remove some of the unnecessary differences
between LCNF and IR while keeping them distinct.
This PR modifies `let` and `have` term syntaxes to be consistent with
each other. Adds configuration options; for example, `have` is
equivalent to `let +nondep`, for *nondependent* lets. Other options
include `+usedOnly` (for `let_tmp`), `+zeta` (for `letI`/`haveI`), and
`+postponeValue` (for `let_delayed`). There is also `let (eq := h) x :=
v; b` for introducing `h : x = v` when elaborating `b`. The `eq` option
works for pattern matching as well, for example `let (eq := h) (x, y) :=
p; b`.
Future PRs will add these options to tactic syntax, once a stage0 update
has been done.
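A minimal sketch of the `eq` option in term position (names are illustrative):
```lean
example (p : Nat × Nat) : Nat :=
  let (eq := h) x := p.1 + p.2
  -- `h : x = p.1 + p.2` is available while elaborating the body
  x
```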
This PR implements support for (commutative) semirings in `grind`. It
uses the Grothendieck completion to construct a (commutative) ring
`Lean.Grind.Ring.OfSemiring.Q α` from a (commutative) semiring `α`. This
construction is mostly useful for semirings that implement
`AddRightCancel α`. Otherwise, the function `toQ` is not injective.
Examples:
```lean
example (x y : Nat) : x^2*y = 1 → x*y^2 = y → y*x = 1 := by
grind
example [CommSemiring α] [AddRightCancel α] (x y : α) : x^2*y = 1 → x*y^2 = y → y*x = 1 := by
grind
example (a b : Nat) : 3 * a * b = a * b * 3 := by grind
example (k z : Nat) : k * (z * 2 * (z * 2 + 1)) = z * (k * (2 * (z * 2 + 1))) := by grind
example [CommSemiring α] [AddRightCancel α] [IsCharP α 0] (x y : α)
: x^2*y = 1 → x*y^2 = y → x + y = 1 → False := by
grind
```
This PR makes `simp` consult its own cache more often, to avoid
replicating work.
Before, the simp cache was checked upon entry of `simpImpl` only, which
then calls `simpLoop`, which recursively iterates the `pre`-lemmas,
without checking the cache again.
Now, `simpLoop` itself checks the cache. This seems more principled,
given that `simpLoop` is actually putting entries into the cache for
each of its calls, so it’s more uniform if it checks the cache itself.
This avoids repeated rewrites. For example given
```
theorem ab : a = b := testSorry
theorem bc : b = c := testSorry
example (h : P c) : P b ∧ P a := by simp [ab, bc, h]
```
simp would rewrite `b ==> c` twice (once as part of `b ==> c` and then
again as part of `a ==> b ==> c`). And it’d be order dependent: With
```
example (h : P c) : P a ∧ P b := by simp [ab, bc, h]
```
the `a ==> b ==> c` chain would insert `b ==> c` into the cache, and it
would be picked up by `simpImpl` when rewriting `P b`.
With this change, `b ==> c` is performed only once in both examples.
Instruction counts on stdlib and mathlib both show a mild improvement
across the board (0.5%), with individual modules improving by up to 4%
in stdlib and even more in mathlib.
(This does not check the cache before applying `post`, which explains
why there are still some repeated rewrites in the trace logs. But I’m
less sure about inserting a cache check there, so I am treading
carefully. It’s also going to be at most one `post` application
that’s duplicated, because if `post` returns `.visit`, we go back to
`pre` and thus a cache check.)
This PR refactors the way simp arguments are elaborated: Instead of
changing the `SimpTheorems` structure as we go, this elaborates each
argument to a more declarative description of what it does, and then
apply those. This enables more interesting checks of simp arguments that
need to happen in the context of the eventually constructed simp context
(the checks in #8688), or after simp has run (unused argument linter
#8901).
The new data structure describing an elaborated simp argument isn’t the
most elegant, but follows from the code.
While I am at it, I moved handling of `[*]` into `elabSimpArgs`. Downstream
adaptation branches exist (but may not be fully up to date because of the
permission changes).
While I am at it, I cleaned up `SimpTheorems.lean` file a bit (sorting
declarations, mild renaming) and added documentation.
This PR makes sure that the local instance cache calculation applies more
reductions. In #2199 there was an issue where metavariables could
prevent local variables from being considered as local instances. We use
a slightly different approach that ensures that, for example, `let`s at
the ends of telescopes do not cause similar problems. These reductions
were already being calculated, so this does not require any additional
work to be done.
Metaprogramming interface addition: the various forall telescope
functions that do reduction now have a `whnfType` flag (default false).
If it's true, then the callback `k` is given the WHNF of the type. This
is a free operation, since the telescope function already computes it.
This PR refactors `Lean.Grind.NatModule/IntModule/Ring.IsOrdered`.
We ensure that the diamond from `Ring` to `NatModule` via either
`Semiring` or `IntModule` is defeq, which was not previously the case.
---------
Co-authored-by: Leonardo de Moura <leomoura@amazon.com>
To build Lean you should use `make -j -C build/release`.
To run a test you should use `cd tests/lean/run && ./test_single.sh example_test.lean`.
## New features
When asked to implement new features:
* begin by reviewing existing relevant code and tests
* write comprehensive tests first (expecting that these will initially fail)
* and then iterate on the implementation until the tests pass.
All new tests should go in `tests/lean/run/`. These tests don't have expected output; we just check there are no errors. You should use `#guard_msgs` to check for specific messages.
## Success Criteria
*Never* report success on a task unless you have verified both a clean build without errors, and that the relevant tests pass.
## Build System Safety
**NEVER manually delete build directories** (build/, stage0/, stage1/, etc.) even when builds fail.
- ONLY use the project's documented build command: `make -j -C build/release`
- If a build is broken, ask the user before attempting any manual cleanup
## LSP and IDE Diagnostics
After rebuilding, LSP diagnostics may be stale until the user interacts with files. Trust command-line test results over IDE diagnostics.
## Update prompting when the user is frustrated
If the user expresses frustration with you, stop and ask them to help update this `.claude/CLAUDE.md` file with missing guidance.
## Creating pull requests
All PRs must have a first paragraph starting with "This PR". This paragraph is automatically incorporated into release notes. Read `lean4/doc/dev/commit_convention.md` when making PRs.
echo "... and Batteries has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
@@ -183,7 +183,6 @@ jobs:
echo "... but Batteries does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Batteries CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-mathlib\`, Batteries CI should run now."
fi
else
echo "The most recently nightly tag on this branch has SHA: $NIGHTLY_SHA"
echo "... and the reference manual has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE=""
else
echo "... but the reference manual does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Reference manual CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-manual\`, reference manual CI should run now."
fi
else
echo "The most recently nightly tag on this branch has SHA: $NIGHTLY_SHA"
MESSAGE="- ❗ Reference manual CI will not be attempted unless your PR branches off the \`nightly-with-manual\` branch. Try \`git rebase $MERGE_BASE_SHA --onto $NIGHTLY_WITH_MANUAL_SHA\`."
# The reference manual's `nightly-testing` branch or `nightly-testing-YYYY-MM-DD` tag may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
Lean is a bootstrapped program: the
frontend and compiler are written in Lean itself and thus need to be built before
building Lean itself - which is needed to again build those parts. This cycle is
broken by using pre-built C files checked into the repository (which ultimately
@@ -72,6 +72,14 @@ update the archived C source code of the stage 0 compiler in `stage0/src`.
The github repository will automatically update stage0 on `master` once
`src/stdlib_flags.h` and `stage0/src/stdlib_flags.h` are out of sync.
To trigger this, modify `stage0/src/stdlib_flags.h` (e.g., by adding or changing
a comment). When `update-stage0` runs, it will overwrite `stage0/src/stdlib_flags.h`
with the contents of `src/stdlib_flags.h`, bringing them back in sync.
NOTE: A full rebuild of stage 1 will only be triggered when the *committed* contents of `stage0/` are changed.
Thus if you change files in it manually instead of through `update-stage0-commit` (see below) or fetching updates from git, you either need to commit those changes first or run `make -C build/release clean-stdlib`.
The same is true for further stages except that a rebuild of them is retriggered on any committed change, not just to a specific directory.
Thus when debugging e.g. stage 2 failures, you can resume the build from these failures on but may want to explicitly call `clean-stdlib` to either observe changes from `.olean` files of modules that built successfully or to check that you did not break modules that built successfully at some prior point.
If you have write access to the lean4 repository, you can also manually
trigger that process, for example to be able to use new features in the compiler itself.
@@ -82,13 +90,13 @@ gh workflow run update-stage0.yml
```
Leaving stage0 updates to the CI automation is preferable, but should you need
to do it locally, you can use `make -C build/release update-stage0-commit` to
update `stage0` from `stage1` or `make -C build/release/stageN update-stage0-commit` to
update from another stage. This command will automatically stage the updated files
and introduce a commit, so make sure to commit your work before that.
If you rebased the branch (either onto a newer version of `master`, or fixing
up some commits prior to the stage0 update), recreate the stage0 update commits.
The script `script/rebase-stage0.sh` can be used for that.
The CI should prevent PRs with changes to stage0 (besides `stdlib_flags.h`)
@@ -52,7 +52,7 @@ In the case of `@[extern]` all *irrelevant* types are removed first; see next se
Similarly, the signed integer types `Int8`, ..., `Int64`, `ISize` are also represented by the unsigned C types `uint8_t`, ..., `uint64_t`, `size_t`, respectively, because they have a trivial structure.
* `Nat` and `Int` are represented by `lean_object *`.
Their runtime value is either a pointer to an opaque bignum object or, if the lowest bit of the "pointer" is 1 (`lean_is_scalar`), an encoded unboxed natural number or integer (`lean_box`/`lean_unbox`).
* A universe `Sort u`, type constructor `... → Sort u`, `Void α` or proposition `p : Prop` is *irrelevant* and is either statically erased (see above) or represented as a `lean_object *` with the runtime value `lean_box(0)`
* Any other type is represented by `lean_object *`.
Its runtime value is a pointer to an object of a subtype of `lean_object` (see the "Inductive types" section below) or the unboxed value `lean_box(cidx)` for the `cidx`th constructor of an inductive type if this constructor does not have any relevant parameters.
@@ -68,7 +68,7 @@ The memory order of the fields is derived from the types and order of the fields
* Fields of type `USize`
* Other scalar fields, in decreasing order by size
Within each group the fields are ordered in declaration order. Trivial wrapper types count as their underlying wrapped type for this purpose.
* To access fields of the first kind, use `lean_ctor_get(val, i)` to get the `i`th non-scalar field.
* To access `USize` fields, use `lean_ctor_get_usize(val, n+i)` to get the `i`th usize field and `n` is the total number of fields of the first kind.
@@ -80,32 +80,32 @@ structure S where
ptr_1 : Array Nat
usize_1 : USize
sc64_1 : UInt64
sc64_2 : { x : UInt64 // x > 0 } -- wrappers of scalars count as scalars
sc64_3 : Float -- `Float` is 64 bit
sc8_1 : Bool
sc16_1 : UInt16
sc8_2 : UInt8
sc64_4 : UInt64
usize_2 : USize
sc32_1 : Char -- trivial wrapper around `UInt32`
sc32_2 : UInt32
sc16_2 : UInt16
```
would get re-sorted into the following memory order:
@@ -129,23 +129,29 @@ For all other modules imported by `lean`, the initializer is run without `builti
Thus `[init]` functions are run iff their module is imported, regardless of whether they have native code available or not, while `[builtin_init]` functions are only run for native executable or plugins, regardless of whether their module is imported or not.
`lean` uses built-in initializers for e.g. registering basic parsers that should be available even without importing their module (which is necessary for bootstrapping).
The initializer for module `A.B` in a package `foo` is called `initialize_foo_A_B`. For modules in the Lean core (e.g., `Init.Prelude`), the initializer is called `initialize_Init_Prelude`. Module initializers will automatically initialize any imported modules. They are also idempotent (when run with the same `builtin` flag), but not thread-safe.
**Important for process-related functionality**: If your application needs to use process-related functions from libuv, such as `Std.Internal.IO.Process.getProcessTitle` and `Std.Internal.IO.Process.setProcessTitle`, you must call `lean_setup_args(argc, argv)` (which returns a potentially modified `argv` that must be used in place of the original) **before** calling `lean_initialize()` or `lean_initialize_runtime_module()`. This sets up process handling capabilities correctly, which is essential for certain system-level operations that Lean's runtime may depend on.
Together with initialization of the Lean runtime, you should execute code like the following exactly once before accessing any Lean declarations:
@@ -8,8 +8,8 @@ You should not edit the `stage0` directory except using the commands described i
## Development Setup
You can use any of the [supported editors](https://lean-lang.org/install/manual/) for editing the Lean source code.
If you set up `elan` as below, opening `src/` as a *workspace folder* should ensure that stage 0 (i.e. the stage that first compiles `src/`) will be used for files in that directory.
Please see below for specific instructions for VS Code.
### Dev setup using elan
@@ -68,6 +68,10 @@ code lean.code-workspace
```
on the command line.
You can use the `Refresh File Dependencies` command as in other projects to rebuild modules from inside VS Code but be aware that this does not trigger any non-Lake build targets.
In particular, after updating `stage0/` (or fetching an update to it), you will want to invoke `make` directly to rebuild `stage0/bin/lean` as described in [building Lean](../make/index.md).
You should then run the `Restart Server` command to update all open files and the server watchdog process as well.
### `ccache`
Lean's build process uses [`ccache`](https://ccache.dev/) if it is
@@ -85,5 +89,29 @@ such that changing files in `Init` doesn't force a full rebuild of `Lean`.
You can test a Lean PR against Mathlib and Batteries by rebasing your PR
on to `nightly-with-mathlib` branch. (It is fine to force push after rebasing.)
CI will generate a branch of Mathlib and Batteries called `lean-pr-testing-NNNN`
on the `leanprover-community/mathlib4-nightly-testing` fork of Mathlib.
This branch uses the toolchain for your PR, and will report back to the Lean PR with results from Mathlib CI.
See https://leanprover-community.github.io/contribute/tags_and_branches.html for more details.
### Testing against the Lean Language Reference
You can test a Lean PR against the reference manual by rebasing your PR
on to `nightly-with-manual` branch. (It is fine to force push after rebasing.)
CI will generate a branch of the reference manual called `lean-pr-testing-NNNN`
in `leanprover/reference-manual`. This branch uses the toolchain for your PR,
and will report back to the Lean PR with results from the reference manual CI.
### Avoiding rebuilds for downstream projects
If you want to test changes to Lean on downstream projects and would like to avoid rebuilding modules you have already built/fetched using the project's configured Lean toolchain, you can often do so as long as your build of Lean is close enough to that Lean toolchain (compatible .olean format including structure of all relevant environment extensions).
To override the toolchain without rebuilding for a single command, for example `lake build` or `lake lean`, you can use the prefix
```
LEAN_GITHASH=$(lean --githash) lake +lean4 ...
```
Alternatively, use
```
export LEAN_GITHASH=$(lean --githash)
export ELAN_TOOLCHAIN=lean4
```
to persist these changes for the lifetime of the current shell, which will affect any processes spawned from it such as VS Code started via `code .`.
If you use a setup where you cannot directly start your editor from the command line, such as VS Code Remote, you might want to consider using [direnv](https://direnv.net/) together with an editor extension for it instead so that you can put the lines above into `.envrc`.
@@ -59,7 +59,7 @@ All these tests are included by [src/shell/CMakeLists.txt](https://github.com/le
open Foo in
theorem tst2 (h : a ≤ b) : a + 2 ≤ b + 2 :=
Bla.
--^ textDocument/completion
--^ completion
```
In this example, the test driver [`test_single.sh`](https://github.com/leanprover/lean4/tree/master/tests/lean/interactive/test_single.sh) will simulate an
auto-completion request at `Bla.`. The expected output is stored in
These are instructions to set up a working development environment for those who wish to make changes to Lean itself. It is part of the [Development Guide](../dev/index.md).
We strongly suggest that new users instead follow the [Installation Instructions](https://lean-lang.org/install/) to get started using Lean, since this sets up an environment that can automatically manage multiple Lean toolchain versions, which is necessary when working within the Lean ecosystem.
renderedStatement := "Std.DHashMap.Raw.Const.get.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α]\n (m : Std.DHashMap.Raw α fun x => β) (a : α) (h : a ∈ m) : β",
description := "Operations that take as input an associative container and return a 'single' piece of information (e.g., `GetElem` or `isEmpty`, but not `toList`)."
This release introduces the Lean module system, which allows files to
control the visibility of their contents for other files. In previous
releases, this feature was available as a preview when the option
`experimental.module` was set to `true`; it is now a fully supported
feature of Lean.
# Benefits
Because modules reduce the amount of information exposed to other
code, they speed up rebuilds because irrelevant changes can be
ignored, they make it possible to be deliberate about API evolution by
hiding details that may change from clients, they help proofs be
checked faster by avoiding accidentally unfolding definitions, and
they lead to smaller executable files through improved dead code
elimination.
# Visibility
A source file is a module if it begins with the `module` keyword. By
default, declarations in a module are private; the `public` modifier
exports them. Proofs of theorems and bodies of definitions are private
by default even when their signatures are public; the bodies of
definitions can be made public by adding the `@[expose]`
attribute. Theorems and opaque constants never expose their bodies.
`public section` and `@[expose] section` change the default visibility
of declarations in the section.
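A sketch of these rules (the declarations below are illustrative, not taken from the release notes):
```lean
module

public def mean (xs : List Nat) : Nat := xs.sum / xs.length
-- the signature of `mean` is exported, but its body stays private

@[expose] public def double (n : Nat) : Nat := 2 * n
-- `@[expose]` additionally makes the body visible to clients
```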
# Imports
Modules may only import other modules. By default, `import` adds the
public information of the imported module to the private scope of the
current module. Adding the `public` modifier to an import places the
imported module's public information in the public scope of the
current module, exposing it in turn to the current module's clients.
Within a package, `import all` can be used to import another module's
private scope into the current module; this can be used to separate
lemmas or tests from definition modules without exposing details to
downstream clients.
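A sketch of the import forms (module names are illustrative):
```lean
module

public import MyLib.Api    -- re-exported: clients of this module also see `MyLib.Api`'s public scope
import MyLib.Internal      -- private: only visible inside this module
import all MyLib.Defs      -- same package only: also imports the private scope of `MyLib.Defs`
```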
# Meta Code
Code used in metaprograms must be marked `meta`. This ensures that the
code is compiled and available for execution when it is needed during
elaboration. Meta code may only reference other meta code. A whole
module can be made available in the meta phase using `meta import`;
this allows code to be shared across phases by importing the module in
each phase. Code that is reachable from public metaprograms must be
imported via `public meta import`, while local metaprograms can use
plain `meta import` for their dependencies.
The module system is described in detail in [the Lean language reference](https://lean-reference-manual-review.netlify.app/find/?domain=Verso.Genre.Manual.section&name=files).
parser = argparse.ArgumentParser(description="Check release status of Lean4 repositories")
parser.add_argument("toolchain", help="The toolchain version to check (e.g., v4.6.0)")
@@ -312,30 +602,57 @@ def main():
print(f" ❌ Short commit hash {commit_hash[:SHORT_HASH_LENGTH]} is numeric and starts with 0, causing issues for version parsing. Try regenerating the last commit to get a new hash.")
print(f" ❌ Bump branch {bump_branch} does not exist. Run `gh api -X POST /repos/{org_repo}/git/refs -f ref=refs/heads/{bump_branch} -f sha=$(gh api /repos/{org_repo}/git/refs/heads/{branch} --jq .object.sha)` to create it.")
nightly_note = f" (will set lean-toolchain to {latest_nightly})" if name in ["batteries", "mathlib4"] and latest_nightly else ""
print(f" ❌ Bump branch {bump_branch} does not exist. Run `gh api -X POST /repos/{bump_org_repo}/git/refs -f ref=refs/heads/{bump_branch} -f sha=$(gh api /repos/{org_repo}/git/refs/heads/{branch} --jq .object.sha)` to create it{nightly_note}.")
repo_status[name] = False
continue
print(f"… Bump branch {bump_branch} does not exist. Creating it...")
result = run_command(f"gh api -X POST /repos/{org_repo}/git/refs -f ref=refs/heads/{bump_branch} -f sha=$(gh api /repos/{org_repo}/git/refs/heads/{branch} --jq .object.sha)", check=False)
print(f"⮕ Bump branch {bump_branch} does not exist. Creating it...")
result = run_command(f"gh api -X POST /repos/{bump_org_repo}/git/refs -f ref=refs/heads/{bump_branch} -f sha=$(gh api /repos/{org_repo}/git/refs/heads/{branch} --jq .object.sha)", check=False)
if result.returncode != 0:
print(f" ❌ Failed to create bump branch {bump_branch}")
set(LEAN_EXTRA_LINKER_FLAGS ${LEAN_EXTRA_LINKER_FLAGS_DEFAULT} CACHE STRING "Additional flags used by the linker")
set(LEAN_EXTRA_CXX_FLAGS "" CACHE STRING "Additional flags used by the C++ compiler. Unlike `CMAKE_CXX_FLAGS`, these will not be used to build e.g. cadical.")
set(LEAN_TEST_VARS "LEAN_CC=${CMAKE_C_COMPILER}" CACHE STRING "Additional environment variables used when running tests")