This PR changes `Lean.Grind.NoNatZeroDivisors` so that it is
parametrised by a `NatModule` instance rather than just a `HMul`
instance. This is sufficiently general for our purposes, and is a
band-aid (~40% improvement) for the performance problems we've been
seeing coming from inference here. The problems observed in Mathlib may
not see much improvement, however.
This PR lets the equation compiler unfold abstracted proofs again if
they would otherwise hide recursive calls.
This fixes #8939.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR fixes Lake's handling of a module system `import all`.
Previously, Lake treated `import all` the same as a non-module `import`,
importing all private data in the transitive import tree. Lake now
distinguishes the two, with `import all M` just importing the private
data of `M`. The direct private imports of `M` are followed, but they
are not promoted.
This also fixes some other Lake bugs with module system imports that
were discovered in the process.
In the early days of the new compiler, it was common to make tests that
manually compiled a definition with the new compiler. The arity
reduction pass in LCNF deliberately does not compute a fixed point to
find a minimal set of used parameters for performance reasons, but
running it a second time can lead to different decisions being made and
a decl arity mismatch. This has been an issue for multiple people during
development. Removing the tests fixes the problem.
Fixes #9186.
This PR uses `withAbstractAtoms` to prevent the kernel from accidentally
reducing the atoms in the arith normalizer while typechecking. This PR
also sets `implicitDefEqProofs := false` in the `grind` normalizer.
This PR corrects the changes to `Lean.Grind.Field` made in #9500.
(The lack of examples of fields in the core repository is a problem! I
guess it is likely that for interval arithmetic we will at least need
`Rat` soon.)
(Almost) only typos in constant names and doc-strings were considered;
grammar was not considered. Also, among others,
`mkDefinitionValInferrringUnsafe` has been fixed :-)
This PR adds a benchmark for the persistent hashmap, in particular also
covering the non-linear insert case, which is often hit in practical
uses. Furthermore, the same test case is also added to the treemap
benchmark.
This PR makes `mframe`, `mspec` and `mvcgen` respect hygiene.
Inaccessible stateful hypotheses can now be named with a new tactic
`mrename_i` that works analogously to `rename_i`.
This PR surfaces kernel diagnostics even in `example`.
The problem was that the kernel checking happens asynchronously. We
cannot use `reportDiag` in `addDecl`, which spawns that task, due to the
module hierarchy. For non-`example` declarations, `reportDiag` is called
somewhere else later, but for `example`, the `withoutModifyingEnv` in
`elabMutualDef` hid the kernel diagnostics. (But only the kernel
diagnostics; they are in the `Environment`, while the others are in the
`State`).
I also observed that the `reportDiag` in `elabAsync` (but not in
`elabSync`) duplicated the reporting, so without `Elab.async true` you
get the message twice. To fix this, `reportDiag` now resets the
diagnostics. This should avoid reporting counts twice in general (at
least within a linear use of the state).
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR removes vestigial syntax definitions in
`Lean.Elab.Tactic.Do.VCGen` that, when imported, undefine the `mvcgen`
tactic. Now it should be possible to import Mathlib and still use
`mvcgen`.
This PR adds a few more `*.by_wp` "adequacy theorems" that allow one to
prove facts about programs in `ReaderM` and `ExceptM` using the `Std.Do`
framework.
This PR adds a `HPow α Int α` field to `Lean.Grind.Field`, and
sufficient axioms to connect it to the operations, so that in future we
can reason about exponents in `grind`. To avoid collisions, we also move
the `HPow α Nat α` field in `Semiring` from the extends clause to a
field. Finally, we add some failing tests about normalizing exponents.
This PR makes cdot function expansion take hygiene information into
account, fixing "parenthesis capturing" errors that can make erroneous
cdots trigger cdot expansion in conjunction with macros. For example,
given
```lean
macro "baz% " t:term : term => `(1 + ($t))
```
it used to be that `baz% ·` would expand to `1 + fun x => x`, but now
the parentheses in `($t)` do not capture the cdot. We also fix an
oversight where cdot function expansion ignored the fact that type
ascriptions and tuples were supposed to delimit expansion, and also now
the quotation prechecker ignores the identifier in `hygieneInfo`. (#9491
added the hygiene information to the parenthesis and cdot syntaxes.)
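For illustration, user-written parentheses still capture the cdot even
when a macro adds its own; a minimal hedged sketch (the `#check` output
comment is approximate):
```lean
macro "baz% " t:term : term => `(1 + ($t))

-- The user-written parentheses capture the cdot, not the macro's `($t)`:
#check (baz% ·)  -- fun x => 1 + x : Nat → Nat
```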
This fixes a bug discovered by [Google
DeepMind](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/imo-2024-solutions/P1/index.html),
which made use of `useλy . x=>y.rec λS p=>?_`. The `use` tactic from
Mathlib wrapped the provided term in a type ascription, and so this was
equivalent to `use fun x => λy x x=>y.rec λS p=>?_`. (Note that cdot
function expansion is not able to take into account *where* the cdots
are located, and it is syntactically valid to insert an identifier into
the binder list like this. If we ever want to address this in the
future, we could have cdots expand into a special term that wraps an
identifier that evaluates to a local, but which would cause errors in
other contexts.)
Design note: we put the `hygieneInfo` on the open parenthesis rather
than at the end, since that way the hygiene information is available
even when there are parsing errors. This is important since we rely on
being able to elaborate partial syntax to get elab info (e.g. in `(a.`
to get completion info). Note that syntax matchers check that the
`hygieneInfo` is actually present, so such partial syntax would not be
matched.
This PR adds a feature where `structure` constructors can override the
inferred binder kinds of the type's parameters. In the following, the
`(p)` binder on `toLp` causes `p` to be an explicit parameter to
`WithLp.toLp`:
```lean
structure WithLp (p : Nat) (V : Type) where toLp (p) ::
  ofLp : V
```
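As a hedged illustration, the overridden binder kind should then be
visible in the constructor's signature (output comment approximate):
```lean
structure WithLp (p : Nat) (V : Type) where toLp (p) ::
  ofLp : V

-- `p` is explicit, while `V` remains implicit:
#check @WithLp.toLp  -- (p : Nat) → {V : Type} → V → WithLp p V
```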
This reflects the syntax of the feature added in #7742 for overriding
binder kinds of structure projections. Similarly, only those parameters
in the header of the `structure` may be updated; it is an error to try
to update binder kinds of parameters included via `variable`.
Closes #9072.
Fixes a possible bug from stale caches when creating the type of the
constructor.
This PR resolves an issue where the `Meta.Context.configKey` field is
private but we still want to use the constructor of the structure for
setting other fields, which would be prevented by the module system
checks:
```lean
structure Context where
  private config : Config := {}
  private configKey : UInt64 := config.toKey
  ...

def ContextInfo.runMetaM (info : ContextInfo) (lctx : LocalContext) (x : MetaM α) : IO α := do
  -- cannot call private constructor of `Meta.Context`!
  (·.1) <$> info.runCoreM (x.run { lctx := lctx } { mctx := info.mctx })
```
Instead, the private field is extracted into an (existing) structure
that applies its default value:
```lean
/-- Configuration with key produced by `Config.toKey`. -/
structure ConfigWithKey where
  private mk ::
  config : Config := {}
  key : UInt64 := config.toKey

structure Context where
  keyedConfig : ConfigWithKey := default
```
Thus `Context`'s constructor remains public without exposing a way to
set `key` directly.
This PR fixes a kernel type mismatch that occurs when using `grind` on
goals containing non-standard `OfNat.ofNat` terms. For example, in issue
#9477, the `0` in the theorem `range_lower` has the form:
```lean
(@OfNat.ofNat
  (Std.PRange.Bound (Std.PRange.RangeShape.lower (Std.PRange.RangeShape.mk Std.PRange.BoundShape.closed Std.PRange.BoundShape.open)) Nat)
  (nat_lit 0)
  (instOfNatNat (nat_lit 0)))
```
instead of the more standard form:
```lean
(@OfNat.ofNat
  Nat
  (nat_lit 0)
  (instOfNatNat (nat_lit 0)))
```
Closes #9477
This PR improves the `evalInt?` function, which is used to evaluate
configuration parameters from the `ToInt` type class. This PR also adds
a new `evalNat?` function for handling the `IsCharP` type class, and
introduces a configuration option:
```
grind (exp := <num>)
```
This option controls the maximum exponent size considered during
expression evaluation. Previously, `evalInt?` used `whnf`, which could
run out of stack space when reducing terms such as `2^1024`.
closes #9427
This PR adds `binrel%` macros for `!=` and `≠` notation defined in
`Init.Core`. This allows the elaborator to insert coercions on both
sides of the relation, instead of committing to the type on the left
hand side.
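For example, a hedged sketch of the new behavior (the inserted coercion
is shown in a comment):
```lean
-- With `binrel%`, the `Nat` on the left no longer fixes the type of the
-- relation; instead a coercion is inserted on the left-hand side:
example (n : Nat) (z : Int) : Prop := n ≠ z  -- elaborates as (↑n : Int) ≠ z
```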
I first discovered this bug while working on Brouwer's fixed point
theorem. See the discussion on Zulip at [#lean4 > Elaboration of
`≠` @
💬](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Elaboration.20of.20.60.E2.89.A0.60/near/526236907).
This PR replaces the proof of the simplification lemma `Nat.zero_mod`
with `rfl`, since it is, by design, a definitional equality. This solves
an issue whereby the lemma could not be used by the simplifier in
`dsimp` mode.
Closes #9389
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR introduces tactic `mleave` that leaves the `SPred` proof mode by
eta expanding through its abstractions and applying some mild
simplifications. This is useful to apply automation such as `grind`
afterwards.
Relates to #9363.
This PR adds support in the `mintro` tactic for introducing `let`/`have`
binders in stateful targets, akin to `intro`. This is useful when
specifications introduce such let bindings.
Closes #9365.
This PR makes `PProdN.reduceProjs` also look for projection functions.
Previously, all redexes were created by the functions in `PProdN`, which
used primitive projections. But with `mkAdmProj` the projection
functions creep in via the types of the `admissible_pprod_fst` theorem.
So let's just reduce both of them.
Fixes #9462.
The `isRef` check being removed here used to be an optimization, because
this structure only tracked whether ref counting operations need to be
inserted at all. Now the structure also tracks whether the value needs
to be checked for being a scalar or not, which is something that can be
refined by a `cases` arm, since inductive types can have a mix of scalar
and non-scalar constructors.
This PR fixes an issue that caused some `deriving` handlers to fail when
the name of the type being declared matched that of a declaration in an
open namespace.
Closes #9366
This PR improves the 'Go to Definition' UX, specifically:
- Using 'Go to Definition' on a type class projection will now extract
the specific instances that were involved and provide them as locations
to jump to. For example, using 'Go to Definition' on the `toString` of
`toString 0` will yield results for `ToString.toString` and `ToString
Nat`.
- Using 'Go to Definition' on a macro that produces syntax with type
class projections will now also extract the specific instances that were
involved and provide them as locations to jump to. For example, using
'Go to Definition' on the `+` of `1 + 1` will yield results for
`HAdd.hAdd`, `HAdd α α α` and `Add Nat`.
- Using 'Go to Declaration' will now provide all the results of 'Go to
Definition' in addition to the elaborator and the parser that were
involved. For example, using 'Go to Declaration' on the `+` of `1 + 1`
will yield results for `HAdd.hAdd`, `HAdd α α α`, `Add Nat`,
``macro_rules | `($x + $y) => ...`` and `infixl:65 " + " => HAdd.hAdd`.
- Using 'Go to Type Definition' on a value with a type that contains
multiple constants will now provide 'Go to Definition' results for each
constant. For example, using 'Go to Type Definition' on `x` for `x :
Array Nat` will yield results for `Array` and `Nat`.
### Details
'Go to Definition' for type class projections was first implemented by
#1767, but there were still a couple of shortcomings with the
implementation. E.g. in order to jump to the instance in `toString 0`,
one had to add another space within the application and then use 'Go to
Definition' on that, or macros would block instances from being
displayed. Then, when the .ilean format was added, most 'Go to
Definition' requests were already handled using the .ileans in the
watchdog process, and so the file worker never received them to handle
them with the semantic information that it has available.
This PR resolves most of the issues with the previous implementation and
refactors the 'Go to Definition' control flow so that 'Go to Definition'
requests are always handled by the file worker, with the watchdog merely
using its .ilean position information to update the positions in the
response to a more up-to-date state. This is necessary because the file
worker obtains its position information from the .oleans, which need to
be rebuilt in order to be up-to-date, while the watchdog always receives
.ilean update notifications from each active file worker with the
current position information in the editor.
Finally, all of the 'Go to Definition' code is refactored to be easier
to maintain.
### Breaking changes
`InfoTree.hoverableInfoAt?` has been generalized to
`InfoTree.hoverableInfoAtM?` and now takes a general `filter` argument
instead of several boolean flags, as was the case before.
This PR removes uses of `Lean.RBMap` in Lean itself.
Furthermore some massaging of the import graph is done in order to avoid
having `Std.Data.TreeMap.AdditionalOperations` (which is quite
expensive) be the critical path for a large chunk of Lean. In particular
we can build `Lean.Meta.Simp` and `Lean.Meta.Grind` without it thanks to
these changes.
We did not previously make this change because `Std.TreeMap` was not yet
outperforming `Lean.RBMap`; however, this has changed with the new code
generator.
This PR addresses the Lean crash (stack overflow) with nested induction
and the generation of the `SizeOf` spec lemmas, reported at #9018.
It does not address the underlying issue that in these cases, the
generated `SizeOf` code does not match the `SizeOf` function found by
instance search, and thus the generation fails.
This seems hard to fix: `mkSizeOfMinors` would have to recognize that
given, say, `List (Id Tree)`, the derived instance (assuming there was
one for `Tree`) is not the same as for `List Tree`.
The problem seems just about as hard as getting derived SizeOf right in
the presence of nested induction and non-canonical SizeOf instances.
This PR fixes the behavior of `String.prev`, aligning the runtime
implementation with the reference implementation. In particular, the
following statements hold now:
- `(s.prev p).byteIdx` is at least `p.byteIdx - 4` and at most
`p.byteIdx - 1`
- `s.prev 0 = 0`
- `s.prev` is monotone
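A small hedged illustration of the bullet points above (output comments
approximate):
```lean
#eval ("abc".prev ⟨2⟩).byteIdx  -- 1: one byte back for ASCII
#eval ("abc".prev ⟨0⟩).byteIdx  -- 0: `s.prev 0 = 0`
```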
Closes #9439
This PR adds the number of jobs run to the final message Lake produces
on a successful run of `lake build`.
**Examples**
```
Build completed successfully (1 job).
Build completed successfully (6 jobs).
```
This PR adds the `libPrefixOnWindows` package and library configuration
option. When enabled, Lake will prefix static and shared libraries with
`lib` on Windows (i.e., the same way it does on Unix).
This PR changes the Lake local cache infrastructure to restore
executables and shared and static libraries from the cache. This means
they keep their expected names, which some use cases still rely on.
This PR adds a hint to the "invalid projection" message suggesting the
correct nested projection for expressions of the form `t.n` where `t` is
a tuple and `n > 2`.
This feature was originally proposed by @nomeata in #8986.
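A hedged sketch of the new hint (message wording approximate):
```lean
example : Nat := (1, 2, 3).3
-- error: invalid projection, the structure has only 2 fields
-- hint (approximate): use the nested projection `(1, 2, 3).2.2` instead
```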
This PR improves the error messages produced by the `split` tactic,
including suggesting syntax fixes and related tactics with which it
might be confused.
Note that, to avoid clashing with the new error message styling
conventions used in these messages, this PR also updates the formatting
of the message produced by `throwTacticEx`.
Closes #6224
This PR changes `IRType.boxed` to map `erased` to `tobject` rather than
`object`, since `erased` has a representation of a boxed scalar 0 when
we are forced to represent it at runtime. This case does not occur at
all in the Lean codebase.
This PR updates the formatting of, and adds explanations for, "unknown
identifier" errors as well as "failed to infer type" errors for binders
and definitions.
It attempts to ameliorate some of the confusion encountered in #1592 by
modifying the wording of the "header is elaborated before body is
processed" note and adding further discussion and examples of this
behavior in the corresponding error explanation.
An earlier PR (#9017) replaced certain subarray functions such as
`Subarray.foldl` with generic slice functions `Slice.foldl`. For
backward compatibility reasons, this PR reintroduces `Subarray.foldl`
etc. as aliases for the `Slice` versions.
This PR adds Lake tests for builds involving the Lean module system and
fixes some bugs encountered in the process. In particular, it fixes the
parsing of private imports and how Lake handles the `import all` of a
private import.
This PR fixes some issues with the Lake tests on Windows and macOS. It
also avoids downloading Mathlib in the `init` test, which had been doing
so since changes to the `math-lax` template in #8866.
To skip the Mathlib download in `init`, an undocumented `--offline`
option was added. This option is currently meant for internal use only.
This PR increases the number of cases where `isArrowProposition` returns
a result other than `.undef`. This function is used to implement the
`isProof` predicate, which is invoked on every subterm visited by
`simp`.
This PR improves the "invalid named argument" error message in
function applications and match patterns by providing clickable hints
with valid argument names. In so doing, it also fixes an issue where
this error message would erroneously flag valid match-pattern argument
names.
This PR adds support for compilation of `casesOn` for subsingletons. We
rely on the elaborator's type checking to restrict this to inductives in
`Prop` that can actually eliminate into `Type n`. This does not yet
cover other recursors of these types (or of inductives not in `Prop` for
that matter).
This PR implements a simple optimization: dependent implications are no
longer treated as E-matching theorems in `grind`. In
`grind_bitvec2.lean`, this change saves around 3 seconds, as many
dependent implications are generated. Example:
```lean
∀ (h : i + 1 ≤ w), x.abs.getLsbD i = x.abs[i]
```
This PR adds a benchmark to our suite, specifically targeting the fact
that local hypotheses are currently not indexed in `simp` and can thus
cause significant slowdowns compared to having them as external
declarations.
This PR splits up `Module.recBuildLean` into smaller functions and
optimizes the implementation of `Module.cacheOutputArtifacts` for the
new compiler. Now, all functions within `Lake.Build.Module` take Lean
less than 1s to compile.
This PR updates Lake to resolve the `.olean` files for transitive
imports for Lean through the `modules` field of `lean --setup`. This
means that Lean can now directly use the `.olean` files from the
Lake cache without needing to locate them at a specific hierarchical
path.
Resolving transitive imports still has a performance penalty, but it is
now much less.
This is mostly a refactoring that replaces other analyses with type
information, but due to the introduction of `tagged` it also has the
side effect of eliminating ref counting ops entirely for types that
always have a tagged scalar representation, e.g. `Unit`.
This PR fixes a bug in `mkCongrSimpCore?`, resolving the issue reported
by @joehendrix at #9388.
The fix itself is just commit afc4ba617f; the rest of the PR just cleans
up the file.
closes #9388
This PR fixes an unsafe trick where a sentinel for a hash table of Exprs
(keyed by pointer) is created by constructing a value whose runtime
representation can never be a valid Expr. The value chosen for this
purpose was `Unit.unit`, which violates the inference that `Expr` has no
scalar constructors. Instead, we change this to a freshly allocated
`Unit × Unit` value.
A micro-benchmark for plain, mostly first-order rewriting with `simp`.
It uses axioms to make it independent of optimizations for specific
types (e.g. `Nat`). It generates a “list” of 128 `b`s followed by 128
`a`s, uses bubble sort to sort it, and compares the result against the
expected output.
This PR changes the dependency cloning mechanism in Lake so that the
log message saying Lake is cloning a dependency is emitted before the
clone starts rather than after it finishes. This has been a huge source
of confusion for users who don't understand why Lake seems to be stuck
for no reason when setting up a new project. The output now is:
```
λ lake +lean4 new math math
info: downloading mathlib `lean-toolchain` file
info: math: no previous manifest, creating one from scratch
info: leanprover-community/mathlib: cloning https://github.com/leanprover-community/mathlib4
<hang>
info: leanprover-community/mathlib: checking out revision 'cd11c28c6a0d514a41dd7be9a862a9c8815f8599'
```
This PR fixes a performance issue that occurs when generating equation
lemmas for functions that use match-expressions containing several
literals. This issue was exposed by #9322 and arises from a combination
of factors:
1. Literal values are compiled into a chain of dependent if-then-else
expressions.
2. Dependent if-then-else expressions are significantly more expensive
to simplify than regular ones.
3. The `split` tactic selects a target, splits it, and then invokes
`simp` on the resulting subgoals. Moreover, `simp` traverses the entire
goal bottom-up and does not stop after reaching the target.
This PR addresses the issue by introducing a custom simproc that avoids
recursively simplifying nested if-then-else expressions. It does **not**
alter the user-facing behavior of the `split` tactic because such a
change would be highly disruptive. Instead, the PR adds a new flag,
`backward.split`, to control the behavior of the user-facing `split`
tactic. It is currently set to `true`, i.e., the old behavior is still
the default one. In a future PR, we should set this flag to `false` by
default and begin repairing all affected proofs.
closes #9322
This PR modifies the encoding from `Nat` to `Int` used in `grind
cutsat`. It is simpler, more extensible, and similar to the generic
`ToInt`. After updating stage0, we will be able to delete the leftovers.
This PR correctly populates the `xType` field of the `IR.FnBody.case`
constructor. It turns out that there is no obvious consequence for this
being incorrect, because it is conservatively recomputed by the `Boxing`
pass.
This PR changes the implementation of `trace.Compiler.result` to use the
decls as they are provided rather than looking them up in the LCNF mono
environment extension, which was seemingly done to save the trouble of
re-normalizing fvar IDs before printing the decl. This means that the
`._closed` decls created by the `extractClosed` pass will now be
included in the output, which was definitely confusing before if you
didn't know what was happening.
This currently relies on the encoding pun of `Nat.zero` as the first
tagged constructor of `Nat`. Since `Nat.succ` is lowered to addition, it
makes sense to also lower `Nat.zero` to a zero literal. This might also
expose more optimization opportunities in the future.
This PR fixes IR constructor argument lowering to correctly handle an
irrelevant argument being passed for a relevant parameter in all cases.
This happened because constructor argument lowering (incompletely)
reimplemented general LCNF-to-IR argument lowering, and the fix is to
just adopt the generic helper functions. This is probably due to an
incomplete refactoring when the new compiler was still on a branch.
This PR uses the `mkCongrSimpForConst?` API in `simp` to reduce the
number of times the same congruence lemma is generated. Before this PR,
`grind` would spend `1.5`s creating congruence theorems during
normalization in the `grind_bitvec2.lean` benchmark. It now spends
`0.6`s. This PR should make an even bigger difference after we merge
#9300.
This PR removes the unnecessary requirement of `BEq α` for
`Array.any_push`, `Array.any_push'`, `Array.all_push`, `Array.all_push'`
as well as `Vector.any_push` and `Vector.all_push`.
This PR replaces the `reduceCtorEq` simproc used in `grind` with a much
more efficient one. The default one used in `simp` is just overhead
because the `grind` normalizer is already normalizing arithmetic.
In a separate PR, we will push performance improvements to the default
`reduceCtorEq`.
This PR fixes `toISO8601String` to produce a string that conforms to the
ISO 8601 format specification. The previous implementation separated the
minutes and seconds fragments with a `.` instead of a `:` and included
timezone offsets without the hour and minute fragments separated by a
`:`.
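For illustration (timestamp value hypothetical), a string that
previously rendered like the first line now renders like the second:
```
2025-07-08T12:34.56+0100    -- before: `.` between minutes and seconds, no `:` in offset
2025-07-08T12:34:56+01:00   -- after
```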
Closes #9235
This PR moves the implementation of `lean_add_extern`/`addExtern` from
C++ into Lean. I believe this is the last C++ helper function from the
library/compiler directory being relied upon by the new compiler. I put
it into its own file and duplicated some code because this function
needs to execute in CoreM, whereas the other IR functions live in their
own monad stack. After the C++ compiler is removed, we can move the IR
functions into CoreM.
This PR optimizes support for `Decidable` instances in `grind`. Because
`Decidable` is a subsingleton, the canonicalizer no longer wastes time
normalizing such instances, a significant performance bottleneck in
benchmarks like `grind_bitvec2.lean`. In addition, the
congruence-closure module now handles `Decidable` instances, and can
solve examples such as:
```lean
example (p q : Prop) (h₁ : Decidable p) (h₂ : Decidable (p ∧ q)) : (p ↔ q) → h₁ ≍ h₂ := by
grind
```
This PR adds support for `.mdata` in LCNF mono types (and then drops it
at the IR type level instead). This better matches the behavior of
extern decls in the C++ code of the old compiler, which is still being
used to create extern decls at the moment and will soon be replaced.
This is covered by existing tests.
This PR fixes the bug that `collectAxioms` didn't collect axioms
referenced by other axioms. One of the results of this bug is that
axioms collected from a theorem proved by `native_decide` may not
include `Lean.trustCompiler`.
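A hedged illustration (output comment approximate):
```lean
axiom A : Prop
axiom B : A  -- the type of `B` references axiom `A`

theorem t : A := B
#print axioms t  -- now reports both `B` and `A`, not just `B`
```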
Closes #8840.
This PR demotes the builtin elaborators for `Std.Do.PostCond.total` and
`Std.Do.Triple` into macros, following the DefEq improvements of #9015.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR makes `isDefEq` detect more stuck definitional equalities
involving smart unfoldings. Specifically, if `t =?= defn ?m` and `defn`
matches on its argument, then this equality is stuck on `?m`. Prior to
this change, we would not see this dependency and simply return `false`.
Fixes #8766.
Co-authored-by: Kyle Miller <kmill31415@gmail.com>
This PR improves the `congr` tactic so that it can handle function
applications with fewer arguments than the arity of the head function.
This also fixes a bug where `congr` could not make progress with
`Set`-valued functions in Mathlib, since `Set` was being unfolded and
making such functions have an apparently higher arity.
This addresses issue #2128 for the `congr` tactic, but not `simp` and
others.
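A hedged sketch of the underapplied case (goal comment approximate):
```lean
-- `f 1` and `g 1` apply `f` and `g` to fewer arguments than their arity:
example (f g : Nat → Nat → Nat) (h : f = g) : f 1 = g 1 := by
  congr  -- expected remaining goal: f = g
  exact h
```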
This PR adds theorem `BitVec.clzAuxRec_eq_clzAuxRec_of_getLsbD_false` as
a more general statement than `BitVec.clzAuxRec_eq_clzAuxRec_of_le`,
replacing the latter in the bitblaster too.
This PR makes the logic and tactics of `Std.Do` universe polymorphic, at
the cost of a few definitional properties arising from the switch from
`Prop` to `ULift Prop` in the base case `SPred []`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR migrates usages of `Std.Range` to the new polymorphic ranges.
This PR unfortunately increases the transitive imports for
frequently-used parts of `Init` because the ranges now rely on iterators
in order to provide their functionality for types other than `Nat`.
However, iteration over ranges in compiled code is as efficient as
before in the examples I checked. This is because of a special
`IteratorLoop` implementation provided in the PR for this purpose.
A few issues were uncovered during migration:
* In `IndPredBelow.lean`, migrating the last remaining range causes
`compilerTest1.lean` to break. I have minimized the issue and come to
the conclusion that it's a compiler bug. Therefore, I have not replaced said
old range usage yet (see #9186).
* In `BRecOn.lean`, we are publicly importing the ranges. Making this
import private should theoretically work, but there seems to be a
problem with the module system, causing the build to panic later in
`Init.Data.Grind.Poly` (see #9185).
* In `FuzzyMatching.lean`, inlining fails with the new ranges, which
would have led to significant slowdown. Therefore, I have not migrated
this file either.
This PR adds a benchmark for the rewriting engine of bv_decide, based
on a problem extracted from SMT-LIB. Note that this problem has
significant elaboration time itself due to its sheer size, though the
overall execution time is split approximately 50:50 between elaboration
and rewriting.
This PR improves the startup time for `grind ring` by generating the
required type classes on demand. This optimization is particularly
relevant for files that make hundreds of calls to `grind`, such as
`tests/lean/run/grind_bitvec2.lean`. For example, before this change,
`grind` spent 6.87 seconds synthesizing type classes, compared to 3.92
seconds after this PR.
We can probably remove `lcUnreachable` once we delete the old compiler,
but for now it makes more sense to move it earlier, since LCNF already
has `Code.unreachable`.
This PR changes the `toMono` pass to consider the type of an application
and erase all arguments corresponding to erased params. This enables a
lightweight form of relevance analysis by changing the mono type of a
decl. I would have liked to unify this with the behavior for
constructors, but my attempt to give constructors the same behavior in
#9222 (which was in preparation for this PR) had a minor performance
regression that is really incidental to the change. Still, I decided to
hold off on it for the time being. In the future, we can hopefully
extend this to constructors, extern decls, etc.
This PR removes code that has the false assumption that LCNF local vars
can occur in types. There are other comments in `ElimDead.lean`
asserting that this is not possible, so this must have been a change
early in the development of the new compiler.
This PR makes the LCNF `elimDeadBranches` pass handle unsafe decls a bit
more carefully. Now the result of an unsafe decl will only become ⊤ if
there is value flow from a recursive call.
These are used by the checker for `.ctor`, but I don't think that
unboxed types will reuse `.ctor`, whose implementation details are
intimately connected to our runtime representation of objects.
This PR changes the `getLiteral` helper function of `elimDeadBranches`
to correctly handle inductives with constructors. This function is not
used as often as it could be, which makes this issue rare to hit outside
of targeted test cases.
This PR extends the `Eq` simproc used in `grind`. It covers more cases
now. It also adds 3 reducible declarations to the list of declarations
to unfold.
This PR implements `exists` normalization using a simproc instead of
rewriting rules in `grind`. This is the first part of the change; after
updating stage0, we must remove the normalization theorems.
This PR changes the compiler's specialization analysis to consider
higher-order params that are rebundled in a way that only changes their
`Prop` arguments to be fixed. This means that they get specialized with
a mere `@[specialize]`, rather than the compiler having to opt-in to
more aggressive parameter-specific specialization.
This PR implements `forall` normalization using a simproc instead of
rewriting rules in `grind`. This is the first part of the change; after
updating stage0, we must remove the normalization theorems.
This PR fixes stealing of `⇓` syntax by the new notation for total
postconditions by demoting it to non-builtin syntax and scoping it to
`Std.Do`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR tries to improve the E-matching pattern inference for `grind`.
That said, we still need better tools for annotating and maintaining
`grind` annotations in libraries.
closes #9125
This PR removes the `Subarray`-specific `toArray`, `foldlM` and `foldl`
methods and instead provides these operations on `Std.Slice`, which are
implemented with the `ToIterator` instance of the slice. Calling
`subarray.toArray` etc. still works, since `Subarray` is an abbreviation
for `Slice _`.
Because the benchmarks are not so clear, to be safe, I will merge this
only after the release. In contrast to the ranges, the iteration over
slices is not quite as efficient as the old `Subarray`-specific
implementation, which would require either more optimizations in the
iterator library (special `IteratorLoop` and `IteratorCollect`
implementations) or better unboxing support by the compiler.
This PR makes the `pullInstances` pass avoid pulling any instance
expressions containing erased propositions, because we don't correctly
represent the dependencies that remain after erasure.
This PR disables the use of the header produced by `lake setup-file` in
the server for now. It will be re-enabled once Lake takes into account
the header given by the server when processing workspace modules.
Without that, the `setup-file` header can produce odd behavior when the file
on disk and in an editor disagree on whether the file participates in
the module system.
This PR fixes `undefined symbol: lean::mpz::divexact(lean::mpz const&,
lean::mpz const&)` when building without `LEAN_USE_GMP`.
This fixes a regression introduced in #8089.
This PR makes `mvcgen` split ifs rather than applying specifications.
Doing so fixes a bug reported by Rish.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR resolves a defeq diamond, which caused a problem in Mathlib:
```
import Mathlib
example (R : Type) [I : Ring R] :
    @AddCommGroup.toGrindIntModule R (@Ring.toAddCommGroup R I) =
      @Lean.Grind.Ring.instIntModule R (@Ring.toGrindRing R I) := rfl -- fails
```
This PR fixes two issues with Lake's process of creating static
archives.
First, Lake now always recreates static archives from scratch, deleting
any existing archive beforehand. `ar rcs` does not remove files from an
existing archive, so running it when the archive already exists can
leave behind "ghost" symbols from removed object files.
Second, Lake now uses `T` rather than `--thin` to create thin archives.
While `--thin` is the recommended spelling, older versions of LLVM `ar`
do not support it. Thus, either choice involves tradeoffs. `T` is chosen
to make Lake consistent with Lean core's own (Make) build scripts.
This PR enforces the non-inlining of `_override` impls in the base phase
of LCNF compilation. The current situation allows for constructor/cases
mismatches to be exposed to the simplifier, which triggers an assertion
failure. The reason this didn't show up sooner for `Expr` is that `Expr`
has a custom extern implementation of its computed field getter.
Fixes #9156.
This test was originally checked in for a panic in the pretty printer,
but at some point the output of every LCNF simp pass was added to
#guard_msgs output. Since this is printing LCNF built by the stage0
compiler, this causes a lot of unnecessary churn.
This PR changes the key Lake uses for the `ir` artifact in the content
hash data structure to `r`, maintaining the convention of single
character key names.
This PR fixes the syntax of `grind` modifiers to use `patternIgnore` for
cases where both unicode and ascii variants are matched. This fixes an
issue where several variants of grind syntax weren't accepted (e.g.
`@[grind ← gen]`). Additionally, this reduces the chance that we get
another syntax matching bootstrap hell.
This PR wraps `simpLemma` and `grindLemma` in `ppGroup` to make sure
that the modifiers aren't printed separately from the term / identifier.
Example:
```
simp only [very_long_lemma_oh_no_can_you_please_stop_we're_getting_to_the_limit, ←
wait_this_is_rewritten_backwards_oh_uhh_where's_the_arrow_you_ask?_oh_wait_it's_up_there!]
==>
simp only [very_long_lemma_oh_no_can_you_please_stop_we're_getting_to_the_limit,
← wait_this_is_rewritten_backwards_and_wow_it's_very_clear_and_obvious]
```
This PR tightens the IR typing rules around applications of closures.
When re-reading some code, I realized that the code in `mkPartialApp`
has a clear typo: `.object` and `type` should be swapped. However, it
doesn't matter, because later IR passes smooth out the mismatch here. It
makes more sense to be strict up-front and require applications of
closures to always return an `.object`.
This PR removes a rather ugly hack in the module system that exposed the
bodies of theorems whose types mention `WellFounded`.
The original motivation was that reducing well-founded definitions (e.g.
in `by rfl`) requires reducing proofs, so they need to be available.
But reducing proofs is generally fraught with peril, and we have been
nudging our users away from it for a while, e.g. in #5182. Since
the module system is opt-in and users will gradually migrate to it, it
may be reasonable to expect them to avoid reducing well-founded
recursion in the process.
This way we don't need hacks like this (which, without evidence, I
believe would be incomplete anyway), and we get the nice guarantee that
within the module system, theorem bodies are always private.
This PR removes some unnecessary `Decidable*` instance arguments by
using lemmas in the `Classical` namespace instead of the `Decidable`
namespace.
This might lead to some additional dependency on classical axioms, but
large parts of the standard library are relying on them either way.
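As a hedged illustration of the general pattern (these particular core
lemmas are just an example, not necessarily ones touched by this PR):
```lean
-- The `Decidable` version needs an instance argument:
example (p : Prop) [Decidable p] (h : ¬p → False) : p :=
  Decidable.byContradiction h

-- The `Classical` version does not:
example (p : Prop) (h : ¬p → False) : p :=
  Classical.byContradiction h
```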
This PR ensures that the state is reverted when compilation using the
new compiler fails. This is especially important for noncomputable
sections where the compiler might generate half-compiled functions which
may then be erroneously used while compiling other functions.
This PR generalizes the `a^(m+n)` grind normalizer to arbitrary semirings.
Example:
```
variable [Field R]
example (M : R) (h₀ : M ≠ 0) {n : Nat} (hn : n > 0) : M ^ n / M = M ^ (n - 1) := by
cases n <;> grind
```
This PR adds support for representing more inductives as enums; in
short, it extends support to those that previously failed to be enums
only because of parameters or irrelevant fields. While this is nice to have,
it is actually motivated by correctness of a future desired
optimization. The existing type representation is unsound if we
implement `object`/`tobject` distinction between values guaranteed to be
an object pointer and those that may also be a tagged scalar. In
particular, types like the ones added in this PR's tests would have all
of their constructors encoded via tagged values, but under the natural
extension of the existing rules of type representation they would be
considered `object` rather than `tobject`.
This PR converts the `lowerEnumToScalarType?` cache to a cache of IR
types of named types. This is more sensible than just focusing on the
enum optimization, and due to uniform representation of polymorphism we
have to compile `Constant T1` and `Constant T2` to the same
representation.
Bumps
[dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact)
from 10 to 11.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This PR changes ToIR to call `lowerEnumToScalarType?` with
`ConstructorVal.induct` rather than the name of the constructor itself.
This was an oversight in some refactoring of code in the new compiler
before landing it. It should not affect runtime of compiled code (due to
the extra tagging/untagging being optimized by LLVM), but it does make
IR for the interpreter slightly more efficient.
This PR fixes spacing in the `grind` attribute and tactic syntax.
Previously `@[grind]` was incorrectly pretty-printed as `@[grind ]`, and
`grind [...] on_failure ...` was pretty-printed as `grind [...]on_failure
...`. It also fixes `on_failure` being unintentionally reserved as a keyword.
This PR moves the construction of the `Option.SomeLtNone.lt` (and `le`)
relation, in which `some` is less than `none`, to
`Init.Data.Option.Basic` and moves well-foundedness proofs for
`Option.lt` and `Option.SomeLtNone.lt` into `Init.Data.Option.Lemmas`.
This PR improves the “expected type mismatch” error message by omitting
the types' own types (their sorts) when they are defeq, and putting them
on separate lines when they are not.
I found it rather tedious to parse the error message when the expected
type is long, because I had to find the `:` in the middle of a large
expression somewhere. Also, when both are of sort `Prop` or `Type`, it
doesn't add much value to print the sort (and it's only one hover away
anyway).
This PR further improves release automation, automatically incorporating
material from `nightly-testing` and `bump/v4.X.0` branches in the bump
PRs to downstream repositories.
This PR adjusts the experimental module system to not import the IR of
non-`meta` declarations. It does this by replacing such IR with opaque
foreign declarations on export and adjusting the new compiler
accordingly.
This PR should not be merged before the new compiler.
Based on #8664.
This PR fixes a bug introduced by #9081 where the source file was dropped
from the module input trace and some entries were dropped from the
module job log.
This PR moves the constructor layout code from C++ to Lean. When
writing the new compiler, we just reused the existing C++ code,
even though it was a bit inconvenient, because we wanted to
ensure that constructor layout always matched the existing
compiler.
This fixes #2589 by handling struct field types just like any
other type being lowered, and thus applying the trivial structure
optimization in the process. Originally, I wanted to port the
code to Lean without any functional changes, but I found that
it took less code to just implement it "correctly" and get this
fix as a consequence than to emulate the bugs of the existing
C++ implementation.
This PR ensures that `mspec` uses the configured transparency setting
and makes `mvcgen` use default transparency when calling `mspec`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR further updates release automation. The per-repository update
script `script/release_steps.py` now actually performs the tests,
rather than outputting a script for the release manager to run line by
line. It's been tested on `v4.21.0` (i.e. the easy case of a stable
release), and we'll debug its behaviour on `v4.22.0-rc1` tonight.
This PR fixes some bugs with the local Lake artifact cache and cleans up
the surrounding API. It also adds the ability to opt-in to the cache on
packages without `enableArtifactCache` set using the
`LAKE_ARTIFACT_CACHE` environment variable.
Bug-wise, this fixes an issue where cached executables did not have the
right permissions to run on Unix systems, and a bug where cached
artifacts would not be invalidated on changes. Lake also now writes a
trace file to the local build directory if there is none when fetching
an artifact from
the cache. This trace has a new `synthetic` field set to `true` to
distinguish it from traces produced by full builds.
This PR removes the `irreducible` attribute from `letFun`, which is one
step toward removing special `letFun` support; part of #9086.
Removing the attribute seems to break some `module` tests in stage2.
This PR improves `pp.oneline` so that it now preserves tags when
truncating formatted syntax to a single line. Note that the `[...]`
continuation does not yet have any functionality to enable seeing the
untruncated syntax. Closes #3681.
This PR enables transforming nondependent `let`s into `have`s in a
number of contexts: the bodies of nonrecursive definitions, equation
lemmas, smart unfolding definitions, and types of theorems. A motivation
for this change is that when zeta reduction is disabled, `simp` can only
effectively rewrite `have` expressions (e.g. `split` uses `simp` with
zeta reduction disabled), and so we cache the nondependence calculations
by transforming `let`s to `have`s. The transformation can be disabled
using `set_option cleanup.letToHave false`.
Uses `Meta.letToHave`, introduced in #8954.
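A minimal hedged sketch of the new option:
```lean
-- By default, the nondependent `let` below would be stored as a `have`;
-- the option disables the transformation for this declaration:
set_option cleanup.letToHave false in
def double (n : Nat) : Nat :=
  let m := n + n
  m
```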
This PR adds a `warn.sorry` option (default true) that logs the
"declaration uses 'sorry'" warning when declarations contain `sorryAx`.
When false, the warning is not logged.
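A minimal sketch of the option:
```lean
-- With the option disabled, no "declaration uses 'sorry'" warning is logged:
set_option warn.sorry false in
theorem pending : True := by sorry
```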
Closes #8611 (assuming that one would set `warn.sorry` as an extra flag
when building).
Other change: Uses `warn.sorry` when creating auxiliary declarations in
`structure` elaborator, to suppress irrelevant 'sorry' warnings.
We could include the sorries themselves in the message if they are
labeled, letting users "go to definition" to see where the sorries are
coming from.
An earlier version added additional information to the warning when it
is a synthetic sorry, since these can be caused by elaboration bugs and
also by elaboration failures in previous declarations. This idea needs
some more work, so it's not included.
This PR fixes a bug with Lake where the job monitor would sit on a
top-level build (e.g., `mathlib/Mathlib:default`) instead of reporting
module build progress.
The issue was actually simpler than it initially appeared: the wrong
portion of the module build was being registered with the job monitor.
Moving it to the right place fixes it; no job priorities necessary.
This PR adds an unexpander for `OfSemiring.toQ`. This is an auxiliary
function used by the `ring` module in `grind`, but we want to reduce the
clutter in the diagnostic information produced by `grind`. Example:
```
example [CommSemiring α] [AddRightCancel α] [IsCharP α 0] (x y : α)
: x^2*y = 1 → x*y^2 = y → x + y = 2 → False := by
grind
```
produces
```
[ring] Ring `Ring.OfSemiring.Q α` ▼
[basis] Basis ▼
[_] ↑x + ↑y + -2 = 0
[_] ↑y + -1 = 0
```
This PR uses the commutative ring module to normalize nonlinear
polynomials in `grind cutsat`. Examples:
```lean
example (a b : Nat) (h₁ : a + 1 ≠ a * b * a) (h₂ : a * a * b ≤ a + 1) : b * a^2 < a + 1 := by
grind
example (a b c : Int) (h₁ : a + 1 + c = b * a) (h₂ : c + 2*b*a = 0) : 6 * a * b - 2 * a ≤ 2 := by
grind
```
This PR implements support for the type class `LawfulEqCmp`. Examples:
```lean
example (a b c : Vector (List Nat) n)
: b = c → a.compareLex (List.compareLex compare) b = o → o = .eq → a = c := by
grind
example [Ord α] [Std.LawfulEqCmp (compare : α → α → Ordering)] (a b c : Array (Vector (List α) n))
: b = c → o = .eq → a.compareLex (Vector.compareLex (List.compareLex compare)) b = o → a = c := by
grind
```
This PR adjusts the experimental module system to make `private` the
default visibility modifier in `module`s, introducing `public` as a new
modifier instead. `public section` can be used to revert the default for
an entire section, though this is intended more to ease gradual
adoption of the new semantics, such as in `Init` (and soon `Std`), where
such sections should be replaced by a future decl-by-decl re-review of
visibilities.
This PR implements support for equations `<num> = 0` in rings and fields
of unknown characteristic. Examples:
```lean
example [Field α] (a : α) : (2 * a)⁻¹ = a⁻¹ / 2 := by grind
example [Field α] (a : α) : (2 : α) ≠ 0 → 1 / a + 1 / (2 * a) = 3 / (2 * a) := by grind
example [CommRing α] (a b : α) (h₁ : a + 2 = a) (h₂ : 2*b + a = 0) : a = 0 := by
grind
example [CommRing α] (a b : α) (h₁ : a + 6 = a) (h₂ : b + 9 = b) (h₂ : 3*b + a = 0) : a = 0 := by
grind
example [CommRing α] (a b : α) (h₁ : a + 2 = a) (h₂ : b = 0) : 4*a + b = 0 := by
grind
example [CommRing α] (a b c : α) (h₁ : a + 6 = a) (h₂ : c = c + 9) (h : b + 3*c = 0) : 27*a + b = 0 := by
grind
```
This PR proves that the default `toList`, `toListRev` and `toArray`
functions on slices can be described in terms of the slice iterator.
Relying on new lemmas for the `uLift` and `attachWith` iterator
combinators, a more concrete description of said functions is given for
`Subarray`.
This PR introduces a simple variable-reordering heuristic for `cutsat`.
It is needed by the `ToInt` adapter to support finite types such as
`UInt64`. The current encoding into `Int` produces large coefficients,
which can enlarge the search space when an unfavorable variable order is
used. Example:
```lean
example (a b c : UInt64) : a ≤ 2 → b ≤ 3 → c - a - b = 0 → c ≤ 5 := by
grind
```
This PR implements support for equalities and disequalities in `grind
cutsat`. We still have to improve the encoding. Examples:
```lean
example (a b c : Fin 11) : a ≤ 2 → b ≤ 3 → c = a + b → c ≤ 5 := by
grind
example (a : Fin 2) : a ≠ 0 → a ≠ 1 → False := by
grind
```
This PR enables the error explanation widget in named error messages.
Note that the displayed links won't work until the new manual version is
released (unless overriding `LEAN_MANUAL_ROOT` with a suitably recent
manual build).
This PR replaces all usages of `[:]` slice notation in `src` with the
new `[...]` notation in production code, tests and comments. The
underlying implementation of the `Subarray` functions stays the same.
Notation cheat sheet:
* `*...*` is the doubly-unbounded range.
* `*...a` or `*...<a` contains all elements that are less than `a`.
* `*...=a` contains all elements that are less than or equal to `a`.
* `a...*` contains all elements that are greater than or equal to `a`.
* `a...b` or `a...<b` contains all elements that are greater than or
equal to `a` and less than `b`.
* `a...=b` contains all elements that are greater than or equal to `a`
and less than or equal to `b`.
* `a<...*` contains all elements that are greater than `a`.
* `a<...b` or `a<...<b` contains all elements that are greater than `a`
and less than `b`.
* `a<...=b` contains all elements that are greater than `a` and less
than or equal to `b`.
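A hedged usage sketch of the notation on arrays (API details assumed;
output comments illustrative):
```lean
def xs : Array Nat := #[0, 1, 2, 3, 4]

#eval xs[1...3].toArray   -- #[1, 2]: indices 1 and 2, per `a...b` above
#eval xs[1...=3].toArray  -- #[1, 2, 3]: inclusive upper bound
```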
Benchmarks have shown that importing the iterator-backed parts of the
polymorphic slice library in `Init` impacts build performance. This PR
avoids this problem by separating the parts of the library that do not
rely on iterators from those that do. Wherever the new slice notation is
used, only the iterator-independent files are imported.
This PR implements support for strict inequalities in the `ToInt`
adapter used in `grind cutsat`. Example:
```lean
example (a b c : Fin 11) : c ≤ 9 → a ≤ b → b < c → a < c + 1 := by
grind
```
This PR fixes a type error in `mvcgen` and makes it turn fewer natural
goals into synthetic opaque ones, so that tactics such as `trivial` may
instantiate them more easily.
---------
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR makes `mspec` detect more viable assignments by `rfl` instead of
generating a VC.
---------
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
Co-authored-by: Rishikesh Vaishnav <rishhvaishnav@gmail.com>
This PR adds a Mathlib-like testing and feedback system for the
reference manual. Lean PRs will receive comments that reflect the status
of the language reference with respect to the PR.
This PR adds test cases for the VC generator and implements a few small
and tedious fixes to ensure they pass.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR extends the list of acceptable characters to all the French
ones, as well as some others, by adding characters from the Latin-1
Supplement and Latin Extended-A Unicode blocks.
This PR adds DNS functions to the standard library.
---------
Co-authored-by: Henrik Böving <hargonix@gmail.com>
Co-authored-by: Markus Himmel <markus@himmel-villmar.de>
This PR adds a function called `lean_setup_libuv` that initializes
required libuv components. It needs to be outside of
`lean_initialize_runtime_module` because it requires `argv` and `argc`
to work correctly.
---------
Co-authored-by: Markus Himmel <markus@lean-fro.org>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
This PR fixes a couple of bootstrapping-related hiccups in the newly
added `Std.Do` module. More precisely,
* The `spec` attribute syntax was registered under the wrong name and
its implementation needed to use a different priority parser
* Elaborators and delaborators for `MGoal`, `Triple`, `PostCond` and
`PostCond.total` were broken and are now properly builtin
* `Std.Do` should not transitively import `Std.Tactic.Do.Syntax`
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR fixes a bug where semantic highlighting would only highlight
keywords that started with an alphanumeric character. Now, it uses
`Lean.isIdFirst`.
This PR provides an iterator combinator that lifts the emitted values
into a higher universe level via `ULift`. This combinator is then used
to make the subarray iterators universe-polymorphic. Previously, they
were only available for `Subarray α` if `α : Type`.
This PR implements support for (non-strict) `ToInt` inequalities in
`grind cutsat`. `grind cutsat` can solve simple problems such as:
```lean
example (a b c : Fin 11) : a ≤ b → b ≤ c → a ≤ c := by
grind
example (a b c : Fin 11) : c ≤ 9 → a ≤ b → b ≤ c → a ≤ c + 1 := by
grind
example (a b c : UInt8) : a ≤ b → b ≤ c → a ≤ c := by
grind
example (a b c d : UInt32) : a ≤ b → b ≤ c → c ≤ d → a ≤ d := by
grind
```
Next step: strict inequalities, and equalities.
This PR introduces a local artifact cache for Lake. When enabled, Lake
will share build artifacts (built files) across different instances of
the same package using an input- and content-addressed cache.
To enable support for the local cache, packages must set
`enableArtifactCache := true` in their package configuration. The reason
for this is twofold. This feature is new and experimental, so it should
be opt-in. Also, some packages may need to disable it as the cache
entails that artifacts are no longer necessarily available within the
build directory, which can break custom build scripts.
The cache location is determined by the system configuration. Lake's
first preference is to store it under the Lean toolchain in a
`lake/cache` directory. If Elan is not available, Lake will store it in
a common system location (e.g., `$XDG_CACHE_HOME/lake`, or
`~/.cache/lake`). On an exotic system where neither of these exists, the
cache will be disabled. Users can override this location through the
`LAKE_CACHE_DIR` environment variable. If set to empty, caching will be
disabled.
The cache is both input and content-addressed. Mappings from input hash
to output content hash(es) are stored in a per-package JSON Lines file
(e.g., `<cache-dir>/inputs/<pkg-name>.jsonl`). Thus, mappings are shared
across different instances of a package, but not between packages. The
output content hashes are also now stored in trace files in a new
`outputs` field. The value of this field can be either a single hash or
an object of multiple content hashes for targets which produce multiple
artifacts (e.g., Lean module builds). Separately, artifacts are stored
in a single flat content-addressed cache (e.g.,
`<cache-dir>/artifacts/<hash>.art`). Artifacts are therefore shared
across all cache-enabled packages.
Module `*.olean` and `*.ilean` artifacts are cached. However, each
package will still copy the files to their build directory, as Lean and
the server currently expect them to be at a specific path. This will be
changed for `*.olean` files when the performance issues with
pre-resolving modules in Lake for `lean --setup` are solved.
This PR adds the following features to `simp`:
- A routine for simplifying `have` telescopes in a way that avoids
quadratic complexity arising from locally nameless expression
representations, like what #6220 did for `letFun` telescopes.
Furthermore, simp converts `letFun`s into `have`s (nondependent lets),
and we remove the #6220 routine since we are moving away from `letFun`
encodings of nondependent lets.
- A `+letToHave` configuration option (enabled by default) that converts
lets into haves when possible, when `-zeta` is set. Previously Lean
would need to do a full typecheck of the bodies of `let`s, but the
`letToHave` procedure can skip checking some subexpressions, and it
modifies the `let`s in an entire expression at once rather than one at a
time.
- A `+zetaHave` configuration option, to turn off zeta reduction of
`have`s specifically. The motivation is that dependent `let`s can only
be dsimped, so zeta reducing just the dependent `let`s is a
reasonable way to make progress. The `+zetaHave` option is also added to
the meta configuration.
- When `simp` is zeta reducing, it now uses an algorithm that avoids
complexity quadratic in the depth of the let telescope.
- Additionally, the zeta reduction routines in `simp`, `whnf`, and
`isDefEq` are now all consistent in how they apply the `zeta`,
`zetaHave`, and `zetaUnused` configurations.
The `letToHave` option addresses a TODO in `getSimpLetCase` ("handle
a block of nested let decls in a single pass if this becomes a
performance problem").
Performance should be compared to before #8804, which temporarily
disabled the #6220 optimizations for `letFun` telescopes.
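A sketch of how these options might combine (the option names are from the list above; the behavior is as described there, not verified here):
```lean
example (h : (let x := 1; x + x) = 2) : True := by
  -- with zeta reduction off, `+letToHave` should turn the `let` in `h`
  -- into a `have` instead of unfolding it (assumed behavior)
  simp -zeta +letToHave at h
  trivial
```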
Good kernel performance depends on carefully handling the `have`
encoding. Due to the way the kernel instantiates bvars (it does *not*
beta reduce when instantiating), we cannot use congruence theorems of
the form `(have x := v; f x) = (have x := v'; f' x)`, since the bodies
of the `have`s will not be syntactically equal, which triggers zeta
reduction in the kernel in `is_def_eq`. Instead, we work with `f v = f'
v'`, where `f` and `f'` are lambda expressions. There is still zeta
reduction, but only when converting between these two forms at the
outset of the generated proof.
This PR adds `BitVec.toFin_(sdiv, smod, srem)` and `BitVec.toNat_srem`.
The strategy for the `rhs` of the `toFin_*` lemmas is to consider what
the corresponding `toNat_*` theorems do and push the `toFin` closer to
the operands. For the `rhs` of `BitVec.toNat_srem` I used the same
strategy as `BitVec.toNat_smod`.
This PR closes #3791, making sure that the Syntax formatter inserts
whitespace before and after comments in the leading and trailing text of
Syntax, to avoid having comments comment out any following syntax and to
prevent comments' lexical syntax from being interpreted as part of
another syntax. If the text contains newlines before or after any
comments, they are formatted as hard newlines rather than soft newlines.
For example, `--` comments will have a hard newline after them. Note:
metaprograms generating Syntax with comments should be sure to include
newlines at the ends of `--` comments.
This PR improves the error messages produced by invalid projections and
field notation. It also adds a hint to the "function expected" error
message noting the argument to which the term is being applied, which
can be helpful for debugging spurious "function expected" messages
actually caused by syntax errors.
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
This PR introduces a Hoare logic for monadic programs in
`Std.Do.Triple`, and assorted tactics:
* `mspec` for applying Hoare triple specifications
* `mvcgen` to turn a Hoare triple proof obligation `⦃P⦄ prog ⦃Q⦄` into
pure verification conditions (i.e., without any traces of Hoare triples
or weakest preconditions reminiscent of `prog`). The resulting
verification conditions in the stateful logic of `Std.Do.SPred` can be
discharged manually with the tactics coming with its custom proof mode
or with automation such as `simp` and `grind`.
This is a pre-release of a planned feature and not yet intended for
production use. We are grateful for feedback from early adopters, though.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR introduces polymorphic slices in their most basic form. They
come with a notation similar to the new range notation. `Subarray` is
now also a slice and can produce an iterator. The intent is to
migrate more operations of `Subarray` to the `Slice` wrapper type to
make them available for slices of other types, too.
The PR also moves the `filterMap` combinators into `Init` because they
are used internally to implement iterators on array slices.
This PR adds the types `Std.ExtDTreeMap`, `Std.ExtTreeMap` and
`Std.ExtTreeSet` of extensional tree maps and sets. These are very
similar in construction to the existing extensional hash maps with one
exception: extensional tree maps and sets provide all functions from
regular tree maps and sets. This is possible because in contrast to hash
maps, tree maps are always ordered.
This PR introduces ranges that are polymorphic, in contrast to the
existing `Std.Range` which only supports natural numbers.
Breakdown of core changes:
* `Lean.Parser.Basic`: Modified the number parser (`Lean.Parser.Basic`)
so that it will only consider a *single* dot to be part of a decimal
number. `1..` will no longer be parsed as `1.` followed by `.`, but as
`1` followed by `..`.
* The test `ellipsisProjIssue` ensures that `#check Nat.add ...succ`
produces a syntax error. After introducing the new range notation (see
below), it returns a different (less nice) error message. I updated the
test to reflect the new error message. (The error message will become
nicer as soon as a delaborator for the ranges is implemented. This is
out of scope for this PR.)
Breakdown of standard library changes:
Modified modules: `Init.Data.Range.Polymorphic` (added),
`Init.Data.Iterators`, `Std.Data.Iterators`
* Introduced the type `Std.PRange` that is parameterized over the type
in which the range operates and the shapes of the lower and upper bound.
* Introduced a new notation for ranges, with examples such as
`1...*`, `1...=3`, `1...<3`, `1<...=2`, `*...=3` (see the sketch below).
* Defined lots of typeclasses for different capabilities of ranges,
depending on their shape and underlying type.
* Introduced `Iter(M).size`.
* Introduced the `Iter(M).stepSize n` combinator, which iterates over an
iterator with the given step size `n`. It will drop `n - 1` values
between every value it emits.
* Replaced `LawfulPureIterator` with a new and better typeclass
`LawfulDeterministicIterator`.
* Simplified some lemma statements in the iterator library such as
`IterM.toList_eq_match`, which unnecessarily matched over a `Subtype`,
hindering rewrites due to type dependencies.
Reasons for the concrete choice of notation:
* `lean4-cli` uses `...`-based notation for the `Cmd` notation and it
clashes with `...a` range notation.
* test `2461` fails when using two-dot-based notation because of the
existing `{ a.. }` notation.
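For instance, iterating over a closed range could look like this (a sketch, assuming ranges are iterable via the iterator API; the notation is from the list above and the exact API may differ):
```lean
def sumOneToThree : Nat := Id.run do
  let mut acc := 0
  for i in 1...=3 do  -- iterates over 1, 2, 3
    acc := acc + i
  return acc
```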
This PR adds a generic `MonadLiftT Id m` instance. We do not implement a
`MonadLift Id m` instance because it would slow down instance resolution
and because it would create more non-canonical instances. This change
makes it possible to iterate over a pure iterator, such as `[1, 2,
3].iter`, in arbitrary monads.
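A minimal sketch of what this enables (the iterator call is taken from the description; consuming it in `IO` is exactly what the new lifting instance permits):
```lean
def printAll : IO Unit := do
  -- `[1, 2, 3].iter` is a pure iterator; the new `MonadLiftT Id m`
  -- instance lets it be consumed in `IO`
  for x in [1, 2, 3].iter do
    IO.println x
```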
This PR adds a new monadic interface for `Async` operations.
This is the design for the `Async` monad that I liked the most. The idea
was refined with the help of @tydeu. Before that, I had some
prerequisites in mind:
1. Good performance
2. Explicit `yield` points, so we could avoid using `bindTask` for every
lifted IO operation
3. A way to avoid creating an infinite chain of `Task`s during recursion
Points 2 and 3 are not covered in this PR; I wish I had a good
solution, but right now I only have a few sketches.
### Explicit `yield` points
I thought this would be easy at first, but it actually turned out kinda
tricky. I ended up creating the `suspend` syntax, which is just a small
modification of the lift method (`<- ...`) syntax. It desugars to
`Suspend.suspend task fun _ => ...`. So something like:
```lean
do
  IO.println "a"
  IO.println "b"
  let result := suspend (client.recv? 1024)
  IO.println "c"
  IO.println "d"
```
Would become:
```lean
Bind.bind (IO.println "a") fun _ =>
  Bind.bind (IO.println "b") fun _ =>
    Suspend.suspend (client.recv? 1024) fun result =>
      Bind.bind (IO.println "c") fun _ =>
        IO.println "d"
```
This makes things a bit more efficient. When using `bind`, we would try
to avoid creating a `Task` chain, and `suspend` would be the only
place we use `Task.bind`. But there's a problem: if we use `bind` with
something that needs `suspend`, it’ll block the whole task. Blocking is
the only way to prevent task accumulation when using plain `bind` inside
a structure like this:
```lean
inductive AsyncResult (ε σ α : Type u) where
  | ok : α → σ → AsyncResult ε σ α
  | error : ε → σ → AsyncResult ε σ α
  | ofTask : Task (EStateM.Result ε σ α) → σ → AsyncResult ε σ α
```
This is because we simply need to remove the `ofTask` and transform it
into an `ok`.
### Infinite chain of Tasks
If you create an infinite recursive function using `Task` (which is
super common in servers like HTTP ones), it can lead to a lot of memory
usage, because those tasks get chained forever and won't be freed until
the function returns.
To get around that, I used CPS and instead of just calling `Task.bind`,
I’d spawn a new task and return an "empty" one like:
```lean
fun k => Task.bind (...) fun value => do k value; pure emptyTask
```
This works great with a CPS-style monad, but it generates a huge IR by
itself.
Just doing CPS alone was too much, though, because every lifted
operation created a new continuation and a `Task.bind`. So I combined it
with `suspend` and got better performance, but the resulting usage of
`suspend` is not ergonomic.
### The current monad
Right now, the monad I’m using is super simple. It doesn't solve the
earlier problems, but the API is clean, and the generated IR is small
enough. An example of how we should use it is:
```lean
-- A loop that repeatedly sends a message and waits for a reply.
partial def writeLoop (client : Socket.Client) (message : String) : Async (AsyncTask Unit) := async do
  IO.println s!"sending: {message}"
  await (← client.send (String.toUTF8 message))
  if let some mes ← await (← client.recv? 1024) then
    IO.println s!"received: {String.fromUTF8! mes}"
    -- use parallel to avoid building up an infinite task chain
    parallel (writeLoop client message)
  else
    IO.println "client disconnected from receiving"

-- Server’s main accept loop; keeps accepting and echoing for new clients.
partial def acceptLoop (server : Socket.Server) (promise : IO.Promise Unit) : Async (AsyncTask Unit) := async do
  let client ← await (← server.accept)
  await (← client.send (String.toUTF8 "tutturu "))
  -- allow multiple clients to connect at the same time
  parallel (writeLoop client "hi!!")
  -- and keep accepting more clients; parallel again to avoid building up an infinite task chain
  parallel (acceptLoop server promise)

-- A simple client that connects and sends a message.
def echoClient (addr : SocketAddress) (message : String) : Async (AsyncTask Unit) := async do
  let socket ← Client.mk
  await (← socket.connect addr)
  parallel (writeLoop socket message)

-- TCP setup: bind, listen, serve, and run a sample client.
partial def mainTCP : Async Unit := do
  let addr := SocketAddressV4.mk (.ofParts 127 0 0 1) 8080
  let server ← Server.mk
  server.bind addr
  server.listen 128
  -- promise exists since the server is (probably) never going to stop
  let promise ← IO.Promise.new
  let acceptAction ← acceptLoop server promise
  await (← echoClient addr "hi!")
  await acceptAction
  await promise

-- Entry point
def main : IO Unit := mainTCP.wait
```
---------
Co-authored-by: Henrik Böving <hargonix@gmail.com>
Co-authored-by: Mac Malone <tydeu@hatpress.net>
This PR adjusts the `lean.redundantMatchAlt` error explanation to remove
the word "unprefixed," which the reference manual's style linter does
not recognize.
This PR refactors the juggling of universes in the linear
`noConfusionType` construction: instead of using `PUnit.{…} → ` to get
the branches of `withCtorType` to the same universe level, we use
`PULift`.
This fixes https://github.com/leanprover/lean4/issues/8962, although
probably doesn’t solve all issues of that kind while level equality
checking is incomplete.
This PR updates the `solveMonoStep` function used in the `monotonicity`
tactic to check for definitional equality between the current goal and
the monotonicity proof obtained from a recursive call. This ensures
soundness by preventing incorrect applications when
`Lean.Order.PartialOrder` instances differ—an issue that can arise with
`mutual` blocks defined using the `partial_fixpoint` keyword, where
different `Lean.Order.CCPO` structures may be involved.
Closes https://github.com/leanprover/lean4/issues/8894.
This PR adds some missing `ToInt.X` typeclass instances for `grind`.
There are still several more to add (in particular, for `ToInt.Pow`),
but I am going to perform an intermediate refactor first.
This PR removes Lake's usage of `lean -R` and `moduleNameOfFileName` to
pass module names to Lean. For workspace modules, it now relies on
directly passing the module name through `lean --setup`. For
non-workspace modules passed to `lake lean` or `lake setup-file`, it
uses a fixed module name of `_unknown`.
This means that `lake lean` and `lake setup-file` can be successfully
and consistently used on modules that do not lie under the working
directory or the workspace root.
This PR passes Lean options configured via CMake variables onto the Lake
build. For example, this will ensure CI's setting of `warningAsError` via
`LEAN_EXTRA_MAKE_OPTS` reaches Lake.
This PR both adds initial `@[grind]` annotations for `BitVec`, and uses
`grind` to remove many proofs from `BitVec/Lemmas`.
---------
Co-authored-by: Sebastian Ullrich <sebasti@nullri.ch>
This PR adds `BitVec.(getElem, getLsbD, getMsbD)_(smod, sdiv, srem)`
theorems to complete the API for `sdiv`, `srem`, `smod`. Even though the
rhs is not particularly succinct (it's hard to find a meaning for "the
n-th bit of the result of a signed division/modulo operation"), these
lemmas prevent the need to `unfold` these operations.
---------
Co-authored-by: Kim Morrison <477956+kim-em@users.noreply.github.com>
This PR embeds a `NatModule` into its `IntModule` completion; the
embedding is injective when we have `AddLeftCancel`, and monotone when
the modules are ordered. It also adds some (failing) `grind` test cases
that can be verified
once `grind` uses this embedding.
This PR changes the CI setup to generate `lean-pr-testing-NNNN` branches
for Mathlib on the `leanprover-community/mathlib4-nightly-testing` fork,
rather than on the main repo.
This PR adds instances showing that the Grothendieck (i.e., additive)
envelope of a semiring is an ordered ring if the original semiring is
ordered (and satisfies `ExistsAddOfLE`), and that in this case the
embedding is monotone.
This PR improves the case splitting strategy used in `grind`, and
ensures `grind` also considers simple `match`-conditions for
case-splitting. Example:
```lean
example (x y : Nat)
    : 0 < match x, y with
          | 0, 0 => 1
          | _, _ => x + y := by -- x or y must be greater than 0
  grind
```
This PR adds configuration options to the `let`/`have` tactic syntaxes.
For example, `let (eq := h) x := v` adds `h : x = v` to the local
context. The configuration options are the same as those for the
`let`/`have` term syntaxes.
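A small sketch based on the example above (the `eq := h` behavior is from the description; assuming `trivial` closes the goal):
```lean
example : True := by
  let (eq := h) x := 1 + 1
  -- the local context now contains `x : Nat := 1 + 1` and `h : x = 1 + 1`
  trivial
```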
This PR changes `toLCNF` to stop caching translations of expressions
upon seeing an expression marked `never_extract`. This is more
coarse-grained than it needs to be, but it is difficult to do any
better, as the new compiler's `Expr` cache is based on structural
identity (rather than the pointer identity of the old compiler).
The newly added `tests/compiler/never_extract.lean` is also converted
into a `run` test, because during development I found the order of the
output to `stderr` to be a bit finicky. The reason for making it a
`compiler` test in the first place is that closed term decls work
slightly differently between native code and the interpreter, and it
would be good to test both, but we already have separate tests for
`never_extract` and closed term extraction.
Fixes #8944.
This PR adds a procedure that efficiently transforms `let` expressions
into `have` expressions (`Meta.letToHave`). This is exposed as the
`let_to_have` tactic.
It uses the `withTrackingZetaDelta` technique: the expression is
typechecked, and any `let` variables that don't enter the zeta delta set
are nondependent. The procedure uses a number of heuristics to limit the
amount of typechecking performed. For example, it is ok to skip
subexpressions that do not contain fvars, mvars, or `let`s.
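A minimal sketch of the tactic on a goal (assuming it rewrites the goal in place):
```lean
example : (let x := 2; x + x) = 4 := by
  let_to_have
  -- the goal should now be `(have x := 2; x + x) = 4`,
  -- since this `let` is nondependent
  rfl
```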
This PR implements support for normalization for commutative semirings
that do not implement `AddRightCancel`. Examples:
```lean
variable (R : Type u) [CommSemiring R]
example (a b c : R) : a * (b + c) = a * c + b * a := by grind
example (a b : R) : (a + b)^2 = a^2 + 2 * a * b + b^2 := by grind
example (a b : R) : (a + 2 * b)^2 = a^2 + 4 * a * b + 4 * b^2 := by grind
example (a b : R) : (a + 2 * b)^2 = 4 * b^2 + b * 4 * a + a^2 := by grind
```
This PR fixes the handling of the `never_extract` attribute in the
compiler's CSE pass. There is an interesting debate to be had about
exactly how hard the compiler should try to avoid duplicating anything
that transitively uses `never_extract`, but this is the simplest form
and roughly matches the check in the old compiler (although due to
different handling of local function decls in the two compilers, the
consequences might be slightly different).
This gets half of the way to #8944.
This PR adds support to the server for the new module setup process by
changing how `lake setup-file` is used.
In the new server setup, `lake setup-file` is invoked with the file name
of the edited module passed as a CLI argument and with the parsed header
passed to standard input in JSON form. Standard input is used to avoid
potentially exceeding the CLI length limits on Windows. Lake will build
the module's imports along with any other dependencies and then return
the module's workspace configuration via JSON (now in the form of
`ModuleSetup`). The server then post-processes this configuration a bit
and returns it back to the Lean language processor.
The server's header is currently only fully respected by Lake for
external modules (files that are not part of any workspace library). For
workspace modules, the saved module header is currently used to build
imports (as has been done since #7909). A follow-up Lake PR will align
both cases to follow the server's header.
Lean search paths (e.g., `LEAN_PATH`, `LEAN_SRC_PATH`) are no longer
negotiated between the server and Lake. These environment variables are
already configured during server setup by `lake serve` and do not change
on a per-file basis. Lake can also pre-resolve the `.olean` files of
imports via the `importArts` field of `ModuleSetup`, limiting the
potential utility of communicating `LEAN_PATH`.
This PR skips attempting to compute a module name from the file name and
root directory (i.e., `lean -R`) if a name is already provided via `lean
--setup`.
This is accomplished by porting the rest of the frontend code in the
`try` block to Lean.
This PR adds explanations for a few errors concerning noncomputability,
redundant match alternatives, and invalid inductive declarations.
These adopt a lower-case error naming style, which is also applied to
existing error explanation tests.
This PR upgrades the `math` template for `lake init` and `lake new` to
configure the new project to meet rigorous Mathlib maintenance
standards. In comparison with the previous version (now available as
`lake new ... math-lax`), this automatically provides:
* Strict linting options matching Mathlib.
* GitHub workflow for automatic upgrades to newer Lean and Mathlib
releases.
* Automatic release tagging for toolchain upgrades.
* API documentation generated by
[doc-gen4](https://github.com/leanprover/doc-gen4) and hosted on
`github.io`.
* README with some GitHub-specific instructions.
The previous edition of the template is still available, renamed to
`math-lax`.
---------
Co-authored-by: Mac Malone <tydeu@hatpress.net>
This PR introduces antitonicity lemmas that support the elaboration of
mixed inductive-coinductive predicates defined using the
`least_fixpoint` / `greatest_fixpoint` constructs.
For instance, the following definition elaborates correctly because all
occurrences of the inductively defined predicate `tock` within the
coinductive definition of `tick` appear in negative positions. The dual
situation applies to the definition of `tock`:
```
mutual
def tick : Prop :=
  tock → tick
greatest_fixpoint

def tock : Prop :=
  tick → tock
least_fixpoint
end
```
This PR allows `simp` to recognize and warn about simp lemmas that are
likely looping in the current simp set. It does so automatically
whenever simplification fails with the dreaded “max recursion depth”
error, but it can be made to run always with `set_option
linter.loopingSimpArgs true`. This check is not on by default because it
is somewhat costly and can warn about simp calls that still happen to
work.
This closes #5111. In the end, this implements much simpler logic than
described there (and tried in the abandoned #8688; see that PR
description for more background information), which didn’t work as well
as I had thought. The current logic is:
“Simplify the RHS of the simp theorem, complain if that fails”.
It is a reasonable policy for a Lean project to require that all simp
invocations are such that this linter does not complain. Often it is
just a matter of explicitly disabling some simp theorems from the
default simp set, to make it clear and robust that in this call we do
not want them to trigger. But given that such simp calls often happen to
work, it’s too pedantic to impose this on everyone.
This PR adds the `+generalize` option to the `let` and `have` syntaxes.
For example, `have +generalize n := a + b; body` replaces all instances
of `a + b` in the expected type with `n` when elaborating `body`. This
can be likened to a term version of the `generalize` tactic. One can
combine this with `eq` in `have +generalize (eq := h) n := a + b; body`
as an analogue of `generalize h : n = a + b`.
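In term form, the description suggests something like the following (a sketch, not verified):
```lean
example (a b : Nat) : a + b = a + b :=
  have +generalize n := a + b
  -- occurrences of `a + b` in the expected type are replaced by `n`,
  -- so the body is elaborated against `n = n`
  rfl
```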
This PR finishes the post-stage0 cleanup after #8914 and #8929. It also:
- adds configuration options for `haveI` and `letI` terms.
- adds a `letConfig` parser alias
This PR implements first-class support for nondependent let expressions
in the elaborator; recall that a let expression `let x : t := v; b` is
called *nondependent* if `fun x : t => b` typechecks, and the notation
for a nondependent let expression is `have x := v; b`. Previously we
encoded `have` using the `letFun` function, but now we make use of the
`nondep` flag in the `Expr.letE` constructor for the encoding. This has
been given full support throughout the metaprogramming interface and the
elaborator. Key changes to the metaprogramming interface:
- Local context `ldecl`s with `nondep := true` are generally treated as
`cdecl`s. This is because in the body of a `have` expression the
variable is opaque. Functions like `LocalDecl.isLet` by default return
`false` for nondependent `ldecl`s. In the rare case where it is needed,
they take an additional optional `allowNondep : Bool` flag (defaults to
`false`) if the variable is being processed in a context where the value
is relevant.
- Functions such as `mkLetFVars` by default generalize nondependent let
variables and create lambda expressions for them. The
`generalizeNondepLet` flag (default true) can be set to false if `have`
expressions should be produced instead. **Breaking change:** Uses of
`letLambdaTelescope`/`mkLetFVars` need to use `generalizeNondepLet :=
false`. See the next item.
- There are now some mapping functions to make telescoping operations
more convenient. See `mapLetTelescope` and `mapLambdaLetTelescope`.
There is also `mapLetDecl` as a counterpart to `withLetDecl` for
creating `let`/`have` expressions.
- Important note about the `generalizeNondepLet` flag: it should only be
used for variables in a local context that the metaprogram "owns". Since
nondependent let variables are treated as constants in most cases, the
`value` field might refer to variables that do not exist, if for example
those variables were cleared or reverted. Using `mapLetDecl` is always
fine.
- The simplifier will cache its let dependence calculations in the
nondep field of let expressions.
- The `intro` tactic still produces *dependent* local variables. Given
that the simplifier will transform `let`s into `have`s, it would be
surprising if that prevented `intro` from creating a local variable
whose value can be used.
Note that nondependence of lets is not checked by the kernel. To
external checker authors: If the elaborator gets the nondep flag wrong,
we consider this to be an elaborator error. Feel free to typecheck `letE
n t v b true` as if it were `app (lam n t b default) v` and please
report issues.
This PR follows up from #8751, which made sure the nondep flag was
preserved in the C++ interface.
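As a small self-contained illustration of the nondependent/dependent distinction (plain Lean, nothing PR-specific assumed):
```lean
-- Nondependent: the body typechecks with `x` opaque
-- (`fun x : Nat => x + 1` is well-typed), so `have` suffices.
example : Nat := have x := 2 + 2; x + 1

-- Dependent: `rfl` typechecks only because `x` unfolds to `4`,
-- so this must remain a genuine `let`.
example : True := let x := 4; have _h : x = 4 := rfl; True.intro
```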
This PR is a followup to #8914, fixing an oversight where
`letIdDeclBinders` was not updated to the new format. This relies
on some bootstrapping code staying in place, but we do the bootstrap
cleanup that is currently possible.
This PR adds `IO.FS.Stream.readToEnd` which parallels
`IO.FS.Handle.readToEnd` along with its upstream definitions (i.e.,
`readBinToEndInto` and `readBinToEnd`). It also removes an unnecessary
`partial` from `IO.FS.Handle.readBinToEnd`.
This function is useful for reading, for example, all of standard input.
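For instance (a sketch; `IO.FS.Stream.readToEnd` is the function added here, the rest is standard):
```lean
def main : IO Unit := do
  let stdin ← IO.getStdin
  let input ← stdin.readToEnd
  IO.println s!"read {input.length} characters"
```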
This PR generalizes `IO.FS.lines` with `IO.FS.Handle.lines` and adds the
parallel `IO.FS.Stream.lines` for streams.
The stream version is useful for reading, for example, the lines of
standard input.
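Analogously (a sketch, assuming the stream variant returns the lines as an array, like `IO.FS.lines`):
```lean
def main : IO Unit := do
  let stdin ← IO.getStdin
  let lines ← stdin.lines
  IO.println s!"read {lines.size} lines"
```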
This PR adds a linter (`linter.unusedSimpArgs`) that complains when a
simp argument (`simp [foo]`) is unused. It should do the right thing if
the `simp` invocation is run multiple times, e.g. inside `all_goals`. It
does not trigger when the `simp` call is inside a macro. The linter
message contains a clickable hint to remove the simp argument.
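A sketch of what the linter flags (assuming `simp` closes this goal without the supplied lemma):
```lean
set_option linter.unusedSimpArgs true in
example : 1 + 1 = 2 := by
  -- `Nat.add_comm` is not needed here, so the linter should warn about it
  simp [Nat.add_comm]
```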
I chose to display a separate warning for each unused argument. This
means that the user has to click multiple times to remove all of them
(and wait for re-elaboration in between). But this just means multiple
endorphin kicks, and the main benefit over a single warning spanning
the whole argument list is that the squigglies already tell users about
the unused arguments.
This closes #4483.
Making Init and Std clean with respect to this linter revealed close to
1000 unused simp args, a pleasant experience for anyone who enjoys
tidying things: #8905
The `.impure` LCNF `Phase` is not currently used, but was intended for a
potential future where the current `IR` passes (which operate on a
highly impure representation) were rewritten to operate on LCNF instead.
For several reasons, I don't think this is very likely to happen, and
instead we are more likely to remove some of the unnecessary differences
between LCNF and IR while keeping them distinct.
This PR modifies `let` and `have` term syntaxes to be consistent with
each other and adds configuration options. For example, `have` is
equivalent to `let +nondep`, for *nondependent* lets. Other options
include `+usedOnly` (for `let_tmp`), `+zeta` (for `letI`/`haveI`), and
`+postponeValue` (for `let_delayed`). There is also `let (eq := h) x :=
v; b` for introducing `h : x = v` when elaborating `b`. The `eq` option
works for pattern matching as well, for example `let (eq := h) (x, y) :=
p; b`.
Future PRs will add these options to tactic syntax, once a stage0 update
has been done.
This PR implements support for (commutative) semirings in `grind`. It
uses the Grothendieck completion to construct a (commutative) ring
`Lean.Grind.Ring.OfSemiring.Q α` from a (commutative) semiring `α`. This
construction is mostly useful for semirings that implement
`AddRightCancel α`. Otherwise, the function `toQ` is not injective.
Examples:
```lean
example (x y : Nat) : x^2*y = 1 → x*y^2 = y → y*x = 1 := by
  grind
example [CommSemiring α] [AddRightCancel α] (x y : α) : x^2*y = 1 → x*y^2 = y → y*x = 1 := by
  grind
example (a b : Nat) : 3 * a * b = a * b * 3 := by grind
example (k z : Nat) : k * (z * 2 * (z * 2 + 1)) = z * (k * (2 * (z * 2 + 1))) := by grind
example [CommSemiring α] [AddRightCancel α] [IsCharP α 0] (x y : α)
    : x^2*y = 1 → x*y^2 = y → x + y = 1 → False := by
  grind
```
This PR makes `simp` consult its own cache more often, to avoid
replicating work.
Before, the simp cache was checked upon entry of `simpImpl` only, which
then calls `simpLoop`, which recursively iterates the `pre`-lemmas,
without checking the cache again.
Now, `simpLoop` itself checks the cache. This seems more principled,
given that `simpLoop` is actually putting entries into the cache for
each of its calls, so it’s more uniform if it checks the cache itself.
This avoids repeated rewrites. For example, given
```
theorem ab : a = b := testSorry
theorem bc : b = c := testSorry
example (h : P c) : P b ∧ P a := by simp [ab, bc, h]
```
simp would rewrite `b ==> c` twice (once as part of `b ==> c` and then
again as part of `a ==> b ==> c`). And it’d be order-dependent: with
```
example (h : P c) : P a ∧ P b := by simp [ab, bc, h]
```
the `a ==> b ==> c` chain would insert `b ==> c` into the cache, where
it is then picked up by `simpImpl` when rewriting `P b`.
With this change, `b ==> c` is performed only once in both examples.
Instruction counts on stdlib and mathlib both show a mild improvement
across the board (0.5%), with individual modules improving by up to 4%
in stdlib and even more in mathlib.
(This does not check the cache before applying `post`, which explains
why there are still some repeated rewrites in the trace logs. But I’m
less sure about inserting a cache check there, so I am treading
carefully. It’s also going to be at most one `post` application
that’s duplicated, because if `post` returns `.visit`, we go back to
`pre` and thus to a cache check.)
This PR refactors the way simp arguments are elaborated: instead of
changing the `SimpTheorems` structure as we go, it elaborates each
argument to a more declarative description of what it does, and then
applies those. This enables more interesting checks of simp arguments
that need to happen in the context of the eventually constructed simp
context (the checks in #8688), or after simp has run (the unused
argument linter, #8901).
The new data structure describing an elaborated simp argument isn’t the
most elegant, but follows from the code.
While at it, this moves the handling of `[*]` into `elabSimpArgs`.
Downstream adaptation branches exist (but may not be fully up to date
because of the permission changes).
I also cleaned up the `SimpTheorems.lean` file a bit (sorting
declarations, mild renaming) and added documentation.
This PR makes sure that the local instance cache calculation applies more
reductions. In #2199 there was an issue where metavariables could
prevent local variables from being considered as local instances. We use
a slightly different approach that ensures that, for example, `let`s at
the ends of telescopes do not cause similar problems. These reductions
were already being calculated, so this does not require any additional
work to be done.
Metaprogramming interface addition: the various forall telescope
functions that do reduction now have a `whnfType` flag (default false).
If it's true, then the callback `k` is given the WHNF of the type. This
is a free operation, since the telescope function already computes it.
This PR refactors `Lean.Grind.NatModule/IntModule/Ring.IsOrdered`.
We ensure that the diamond from `Ring` to `NatModule` via either
`Semiring` or `IntModule` is defeq, which was not previously the case.
---------
Co-authored-by: Leonardo de Moura <leomoura@amazon.com>
This PR corrects the pretty printing of `grind` modifiers. Previously
`@[grind →]` was being pretty printed as `@[grind→ ]` (space on the
right of the symbol rather than the left). This fixes the pretty
printing of attributes and preserves the presence of spaces after the
symbol in
the output of `grind?`.
---------
Co-authored-by: Leonardo de Moura <leomoura@amazon.com>
This PR implements a `finally` section following a (potentially empty)
`where` block. `where ... finally` opens a tactic sequence block in
which the goals are the unassigned metavariables from the definition
body and its auxiliary definitions that arise from use of `let rec` and
`where`.
This can be useful for discharging multiple proof obligations in the
definition body by a single invocation of a tactic such as `all_goals`:
```lean
example (i j : Nat) (xs : Array Nat) (hi : i < xs.size) (hj : j < xs.size) :=
  match i with
  | 0 => x
  | _ => xs[i]'?_ + xs[j]'?_
where x := 13
finally all_goals assumption
```
---------
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR adds a logic of stateful predicates `SPred` to `Std.Do` in order
to support reasoning about monadic programs. It comes with a dedicated
proof mode the tactics of which are accessible by importing
`Std.Tactic.Do`.
Co-authored-by: Sebastian Graf <sg@lean-fro.org>
This PR removes an old workaround for unimplemented C++11 features
in the thread finalization code.
This `ifdef` dates back to approximately 2015, as can be seen
[here](https://github.com/leanprover/lean3/blame/master/src/util/thread.cpp#L177);
the comments mention that it was originally implemented because not all
compilers at the time supported the C++11 `thread_local` keyword.
Ten years later this is hopefully no longer an issue, and we can remove
the workaround.
There is an additional motivation for doing this:
`lean::initialize_thread` contains the following allocation:
```cpp
g_thread_finalizers_mgr = new thread_finalizers_manager;
```
This is supposed to be freed at some point, but:
```cpp
// TODO(gabriel): race condition with thread finalizers
void delete_thread_finalizer_manager() {
    // delete g_thread_finalizers_mgr;
    // g_thread_finalizers_mgr = nullptr;
}
```
so `g_thread_finalizers_mgr` leaks upon repeated invocation of
`lean::initialize_thread`.
Note that Windows has already been using this alternative implementation
for a while, so it has (hopefully) not bitrotted in the meantime.
This PR changes the definition of `DHashMap` to a structure. This makes
it more consistent with the other map types, which are generally defined
as structures. It also ensures that the type `DHashMap α β` is already
in weak head normal form, making it easier for `grind` to successfully
generate patterns for `DHashMap` lemmas.
Although `HEq` was abbreviated as `≍` in #8503, many instances of the
form `HEq x y` still remain.
Therefore, I searched for occurrences of `HEq x y` using the regular
expression `(?<![A-Za-z/@]|``)HEq(?![A-Za-z.])` and replaced as many as
possible with the form `x ≍ y`.
This PR adds doc-strings to the `Lean.Grind` algebra typeclasses, as
these will appear in the reference manual explaining how to extend
`grind` algebra solvers to new types. Also removes some redundant
fields.
This PR adds `@[expose]` annotations to terms that appear in `grind`
proof certificates, so `grind` can be used in the module system. It's
possible/likely that I haven't identified all of them yet.
This PR provides a compact formula for the MSB of `sdiv`. Most of the
work in the PR involves handling the corner cases of division
overflowing (e.g., `intMin / -1 = intMin`).
---------
Co-authored-by: Luisa Cicolini <48860705+luisacicolini@users.noreply.github.com>
Co-authored-by: Tobias Grosser <github@grosser.es>
This PR introduces a `ForIn'` instance and a `size` function for
iterators in a minimal fashion. The `ForIn'` instance is not marked as
an instance because it is unclear which `Membership` relation is
sufficiently useful. Because the `ForIn'` instance exists as a `def`
that induces the `ForIn` instance, it is possible to provide more
specialized `ForIn'` instances, with nicer `Membership` relations, for
various types of iterators. The `size` function has no lemmas yet.
This PR adds support for throwing named errors with associated error
explanations. In particular, it adds elaborators for the syntax defined
in #8649, which use the error-explanation infrastructure added in #8651.
This includes completions, hovers, and jump-to-definition for error
names.
Note that another stage0 rebuild will be required to define explanations
using `register_error_explanation`.
---------
Co-authored-by: Joachim Breitner <mail@joachim-breitner.de>
Co-authored-by: Marc Huisinga <mhuisi@protonmail.com>
This PR moves parts of the iterator library from `Std` to `Init`. The
reason is that the polymorphic range API must be in `Init` and it
depends on the iterators.
This PR reintroduces the basics of `lean --setup` integration into Lake
without the module computation which is still undergoing performance
debugging in #8787.
Partially reverts #8736 and partially reimplements #8447.