Compare commits


23 Commits

Author SHA1 Message Date
Joachim Breitner
7234d6477f perf: eliminate checkSystemInterval overhead when disabled
Skip the FFI call to lean_check_system_interval entirely when
LEAN_CHECK_SYSTEM_INTERVAL_MS is unset by checking a builtin_initialize'd
Bool flag inline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 18:14:34 +00:00
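The fast-path pattern this commit describes — compute an "is enabled" flag once at initialization, then branch on it inline so the disabled case never pays for the call — can be sketched in Python (the real code is Lean/C++; all names here are illustrative stand-ins):

```python
import os

# Computed once at startup, mirroring the builtin_initialize'd Bool flag.
INTERVAL_MONITORING_ENABLED = "LEAN_CHECK_SYSTEM_INTERVAL_MS" in os.environ

ffi_calls = 0  # counts how often the (comparatively expensive) call is made

def check_system_interval_impl(component: str) -> None:
    """Stand-in for the lean_check_system_interval FFI call."""
    global ffi_calls
    ffi_calls += 1

def check_system_interval(component: str) -> None:
    # Fast path: a single branch on the cached flag; no call when disabled.
    if INTERVAL_MONITORING_ENABLED:
        check_system_interval_impl(component)
```

When the env var is unset, every check-in reduces to one boolean test.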
Joachim Breitner
f3348db41a Merge master into joachim/checkSystemTime 2026-04-06 14:19:13 +00:00
Joachim Breitner
ca28d0719b chore: remove non-instrumentation changes from branch
Keep only the checkSystemInterval FFI and its call in checkInterrupted.
The withIncRecDepth+checkInterrupted change belongs in a separate PR.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 10:14:43 +00:00
Joachim Breitner
9d90cb957a Merge branch 'master' of https://github.com/leanprover/lean4 into joachim/checkSystemTime 2026-04-06 10:10:56 +00:00
Joachim Breitner
e8836ac7a2 chore: remove perf_event_open, amortize LCNF simp checkSystem to % 128
Remove instruction-count mode (perf_event_open) from check_system
interval monitoring — keep only the simpler CPU-time mode
(LEAN_CHECK_SYSTEM_INTERVAL_MS). The perf_event_open code can be
re-added later if needed.

Also amortize LCNF simp checkSystem to every 128 visits (an unconditional
check was +0.44% on big_do.lean per CI benchmarks).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 09:00:16 +00:00
Joachim Breitner
0a0aca5db4 feat: add instruction-count mode to check_system interval monitoring
Adds `LEAN_CHECK_SYSTEM_INTERVAL_INSN=N` env var (N in millions of
instructions) using perf_event_open on Linux. This is deterministic and
load-independent, unlike the existing CPU-time mode. Instruction count
mode takes priority over `LEAN_CHECK_SYSTEM_INTERVAL_MS` when both are
set.

Also includes LCNF checkSystem calls (simp + per-declaration) for
testing gap coverage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-05 11:47:20 +00:00
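The priority rule between the two environment variables, as stated in the commit message, can be modeled in Python (a sketch; the actual selection happens in the C++ runtime):

```python
def select_interval_mode(env: dict) -> str:
    """Instruction-count mode takes priority over CPU-time mode when both are set."""
    if "LEAN_CHECK_SYSTEM_INTERVAL_INSN" in env:
        return "instructions"  # N, in millions of instructions (perf_event_open)
    if "LEAN_CHECK_SYSTEM_INTERVAL_MS" in env:
        return "cpu-time"      # threshold in milliseconds
    return "disabled"
```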
Joachim Breitner
555e957bbd Merge branch 'master' of https://github.com/leanprover/lean4 into joachim/checkSystemTime 2026-04-04 21:28:48 +00:00
Joachim Breitner
1581f29df7 Merge remote-tracking branch 'origin/master' into joachim/checkSystemTime 2026-04-04 07:12:07 +00:00
Joachim Breitner
bf3743ef12 Merge branch 'master' of https://github.com/leanprover/lean4 into joachim/checkSystemTime 2026-04-03 21:15:00 +00:00
Joachim Breitner
6ba4b30109 fix: improve check_system interval monitoring
Move the timer reset in check_system_interval to after the backtrace,
so that backtrace overhead doesn't inflate the next gap measurement.

Also fix merge: restore withIncRecDepth calling checkInterrupted
(from #13258) which was lost during the merge with master.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 20:33:36 +00:00
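The timer-reset fix above can be illustrated with a small Python model of the interval monitor (the real monitor is per-thread C++ using CPU time; this model uses an injectable clock so the "reset after the slow warning path" behavior is testable):

```python
class IntervalMonitor:
    """Warn when the gap between consecutive check-ins exceeds a threshold.

    The timer is re-read *after* the (potentially slow) warning path, so
    backtrace overhead does not inflate the next measured gap.
    """

    def __init__(self, threshold_ms: float, clock):
        self.threshold_ms = threshold_ms
        self.clock = clock          # injectable clock returning ms, for testing
        self.last = clock()
        self.warnings = []

    def check_in(self, component: str) -> None:
        gap = self.clock() - self.last
        if gap > self.threshold_ms:
            self.warnings.append((component, gap))  # real code dumps a backtrace here
        # Reset with a fresh clock read, not the value measured above.
        self.last = self.clock()
```

Resetting before (or with the stale reading) would count the backtrace printing itself toward the next gap, cascading into spurious warnings.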
Joachim Breitner
d1af3f6b07 Merge remote-tracking branch 'origin/joachim/inferType-checkSystem' into joachim/checkSystemTime 2026-04-03 06:39:10 +00:00
Joachim Breitner
1de717dc1a Merge branch 'master' of https://github.com/leanprover/lean4 into joachim/checkSystemTime 2026-04-03 06:38:42 +00:00
Joachim Breitner
623b027952 perf: add checkInterrupted to inferType cache miss path
This adds a `Core.checkInterrupted` call in `checkInferTypeCache` on
cache miss, allowing cancellation to be detected during large type
inference traversals. Previously, `inferTypeImp` could run for >100ms
without any interruption check when processing large expressions (e.g.
BVDecide proof terms), making IDE cancellation unresponsive.

The check is only on cache miss (actual computation), so cache hits
have zero overhead. `checkInterrupted` is used instead of the heavier
`checkSystem` to minimize performance impact.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 12:51:21 +00:00
Joachim Breitner
92108472e5 revert: remove ineffective check_heartbeat amortization
check_heartbeat is only ever called from check_system, so amortizing
check_system calls from within check_heartbeat has no effect on the
kernel type checker gaps. Remove the amortization to avoid unnecessary
CI regression.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 11:42:08 +00:00
Joachim Breitner
480b2bffc5 perf: increase check_heartbeat amortization interval to 1024
Reduce overhead of the amortized check_system in check_heartbeat
by checking every 1024 heartbeats instead of 256.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 10:55:35 +00:00
Joachim Breitner
908b647be9 perf: amortize check_system via check_heartbeat
Add an amortized check_system call every 256 heartbeats inside
check_heartbeat. This ensures periodic stack, memory, and monitoring
interval checks for any code path that uses heartbeats, including
the kernel type checker.

Also fix backtrace cascade in check_system_interval monitoring by
resetting the timer after printing the warning, and remove the
redundant per-call checkSystem in LCNF simp (keeping only the
amortized one).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 09:59:28 +00:00
Joachim Breitner
9beb4dad84 Merge remote-tracking branch 'origin/master' into joachim/checkSystemTime 2026-04-02 07:57:47 +00:00
Joachim Breitner
29bd37591c fix: add amortized checkSystem to LCNF simp
This PR adds a periodic `checkSystem` call to the LCNF simplifier's
recursive traversal, checking every 512 visited nodes. This breaks up
multi-second gaps (observed up to 2.3s on `big_do.lean`) without
measurable performance regression — the amortized cost is effectively
zero on benchmarks like `big_do` and `simp_local`.

An unconditional `checkSystem` on every node caused a 0.4% instruction
count regression, which is why the amortized approach is used instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 14:14:18 +00:00
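The amortization pattern from this commit — one `checkSystem` per 512 visited nodes rather than one per node — amounts to a counter in the traversal loop, sketched here in Python:

```python
CHECK_PERIOD = 512  # from the commit: check every 512 visited nodes

def visit_all(num_nodes: int, check_system) -> None:
    """Traverse nodes, invoking check_system once per CHECK_PERIOD visits."""
    visited = 0
    for _ in range(num_nodes):
        visited += 1               # per-node simplification work would happen here
        if visited % CHECK_PERIOD == 0:
            check_system()
```

This caps the worst-case gap at 512 nodes of work while keeping the per-node cost to an increment and a comparison.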
Joachim Breitner
e2937ad233 fix: add checkSystem calls to long-running elaboration paths
This PR adds `checkSystem` calls to several code paths that can run for
extended periods without checking for cancellation, heartbeat limits, or
stack overflow. This improves responsiveness of the cancellation mechanism
in the language server.

Affected paths:
- `simp` rewrite candidate loops (`Rewrite.lean`)
- `simpAppUsingCongr` argument traversal (`Types.lean`)
- `synthesizeSyntheticMVarsStep` mvar loop (`SyntheticMVars.lean`)
- `abstractNestedProofs` visitor (`AbstractNestedProofs.lean`)
- `transform`/`transformWithCache` visitors (`Transform.lean`)
- LCNF compiler pass runner loop (`Main.lean`)
- LCNF checker recursive traversal (`Check.lean`)
- LCNF simplifier recursive traversal (`Simp/Main.lean`)
- `whnfCore` recursive reduction loop (`WHNF.lean`)

With these changes, at a 50ms monitoring threshold across the full test suite,
the only remaining Lean-side elaborator gaps are:
- bv_decide pure computation wrapped in `IO.lazyPure` (architectural)
- Single deep `whnf` reductions from the `constructorNameAsVariable` linter (~100-180ms)

Remaining C++ gaps (type checker, interpreter module init) are tracked separately.

Found using `LEAN_CHECK_SYSTEM_INTERVAL_MS` monitoring from #13218.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 12:56:03 +00:00
Joachim Breitner
66924cc4ac fix: add checkSystem calls to long-running elaboration paths
This PR adds `checkSystem` calls to several code paths that can run for
extended periods without checking for cancellation, heartbeat limits, or
stack overflow. This improves responsiveness of the cancellation mechanism
in the language server.

Affected paths:
- `simp` rewrite candidate loops (`Rewrite.lean`)
- `simpAppUsingCongr` argument traversal (`Types.lean`)
- `synthesizeSyntheticMVarsStep` mvar loop (`SyntheticMVars.lean`)
- `abstractNestedProofs` visitor (`AbstractNestedProofs.lean`)
- `transform`/`transformWithCache` visitors (`Transform.lean`)
- LCNF compiler pass runner loop (`Main.lean`)
- LCNF checker recursive traversal (`Check.lean`)
- LCNF simplifier recursive traversal (`Simp/Main.lean`)
- `whnfCore` recursive reduction loop (`WHNF.lean`)

With these changes, at a 50ms monitoring threshold across the full test suite,
the only remaining Lean-side elaborator gaps are:
- bv_decide pure computation wrapped in `IO.lazyPure` (architectural)
- Single deep `whnf` reductions from the `constructorNameAsVariable` linter (~100-180ms)

Remaining C++ gaps (type checker, interpreter module init) are tracked separately.

Found using `LEAN_CHECK_SYSTEM_INTERVAL_MS` monitoring from #13218.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 09:12:33 +00:00
Joachim Breitner
7488a1f712 feat: share check_system interval timer between C++ and Lean
Factor the interval monitoring into a standalone `check_system_interval`
function exposed to Lean via `@[extern "lean_check_system_interval"]`.
Call it from both the C++ `check_system` and the Lean-level
`Core.checkInterrupted`, so that CPU time spent in Lean elaboration
(e.g. simp) is properly tracked and attributed.

Previously the timer only fired from C++ kernel code, so gaps caused by
Lean-level elaboration were misleadingly attributed to the kernel
component that happened to call `check_system` next.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:28:36 +00:00
Joachim Breitner
ffd0f7de85 feat: add backtrace to check_system interval warnings
This adds a stack trace dump when the check_system interval warning
fires, making it easier to identify the code path responsible for
the gap.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 16:17:27 +00:00
Joachim Breitner
4f7de4a1d9 feat: add optional check_system interval monitoring
This adds optional monitoring of how frequently `check_system` is called
per thread. When `LEAN_CHECK_SYSTEM_INTERVAL_MS` is set, a warning is
printed to stderr if the interval between consecutive calls exceeds the
threshold. This helps identify code paths where cancellation checks are
too infrequent for responsive cancellation.

Disabled by default; set e.g. `LEAN_CHECK_SYSTEM_INTERVAL_MS=1000` to
enable with a 1-second threshold.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-31 13:51:13 +00:00
23 changed files with 311 additions and 416 deletions


@@ -28,14 +28,6 @@ repositories:
branch: main
dependencies: []
- name: leansqlite
url: https://github.com/leanprover/leansqlite
toolchain-tag: true
stable-branch: false
branch: main
dependencies:
- plausible
- name: verso
url: https://github.com/leanprover/verso
toolchain-tag: true
@@ -108,7 +100,7 @@ repositories:
toolchain-tag: true
stable-branch: false
branch: main
dependencies: [lean4-cli, BibtexQuery, mathlib4, leansqlite]
dependencies: [lean4-cli, BibtexQuery, mathlib4]
- name: cslib
url: https://github.com/leanprover/cslib


@@ -363,7 +363,7 @@ theorem toBitVec_eq_of_parseFirstByte_eq_threeMore {b : UInt8} (h : parseFirstBy
public def isInvalidContinuationByte (b : UInt8) : Bool :=
b &&& 0xc0 != 0x80
theorem isInvalidContinuationByte_eq_false_iff {b : UInt8} :
theorem isInvalidContinutationByte_eq_false_iff {b : UInt8} :
isInvalidContinuationByte b = false ↔ b &&& 0xc0 = 0x80 := by
simp [isInvalidContinuationByte]


@@ -1880,12 +1880,3 @@ lead to undefined behavior.
-/
@[extern "lean_runtime_forget"]
def Runtime.forget (a : α) : BaseIO Unit := return
set_option linter.unusedVariables false in
/--
Ensures `a` remains alive at least until the call site by holding a reference to `a`. This can be useful
for unsafe code (such as an FFI) that relies on a Lean object not being freed until after some point
in the program. At runtime, this will be a no-op as the C compiler will optimize away this call.
-/
@[extern "lean_runtime_hold"]
def Runtime.hold (a : @& α) : BaseIO Unit := return


@@ -446,6 +446,24 @@ Note that the value of `ctx.initHeartbeats` is ignored and replaced with `IO.get
@[inline] def CoreM.toIO' (x : CoreM α) (ctx : Context) (s : State) : IO α :=
(·.1) <$> x.toIO ctx s
/--
Record a check-in for the `LEAN_CHECK_SYSTEM_INTERVAL_MS` monitoring.
When that env var is set, warns on stderr if too much CPU time has elapsed since
the last check-in from either the C++ `check_system` or this function.
-/
@[extern "lean_check_system_interval"]
private opaque checkSystemIntervalImpl (componentName : @& String) : BaseIO Unit
@[extern "lean_check_system_interval_is_enabled"]
private opaque checkSystemIntervalIsEnabled : Unit → Bool
private builtin_initialize checkSystemIntervalEnabled : Bool ←
pure (checkSystemIntervalIsEnabled ())
@[inline] def checkSystemInterval (componentName : @& String) : BaseIO Unit := do
if checkSystemIntervalEnabled then
checkSystemIntervalImpl componentName
/--
Throws an internal interrupt exception if cancellation has been requested. The exception is not
caught by `try catch` but is intended to be caught by `Command.withLoggingExceptions` at the top
@@ -456,6 +474,7 @@ Like `checkSystem` but without the global heartbeat check, for callers that have
heartbeat tracking (e.g. `SynthInstance`).
-/
@[inline] def checkInterrupted : CoreM Unit := do
checkSystemInterval "Lean elaborator"
if let some tk := (← read).cancelTk? then
if (← tk.isSet) then
throwInterruptException


@@ -3168,10 +3168,6 @@ static inline lean_obj_res lean_manual_get_root(lean_obj_arg _unit) {
return lean_mk_string(LEAN_MANUAL_ROOT);
}
static inline lean_obj_res lean_runtime_hold(b_lean_obj_arg a) {
return lean_box(0);
}
#ifdef LEAN_EMSCRIPTEN
#define LEAN_SCALAR_PTR_LITERAL(b1, b2, b3, b4, b5, b6, b7, b8) (lean_object*)((uint32_t)b1 | ((uint32_t)b2 << 8) | ((uint32_t)b3 << 16) | ((uint32_t)b4 << 24)), (lean_object*)((uint32_t)b5 | ((uint32_t)b6 << 8) | ((uint32_t)b7 << 16) | ((uint32_t)b8 << 24))
#else


@@ -19,7 +19,7 @@ import Lake.Build.InputFile
namespace Lake
/-- The initial set of Lake facets. -/
public def initFacetConfigs : FacetConfigMap :=
public def initFacetConfigs : DNameMap FacetConfig :=
DNameMap.empty
|> insert Module.initFacetConfigs
|> insert Package.initFacetConfigs
@@ -30,4 +30,4 @@ public def initFacetConfigs : FacetConfigMap :=
|> insert InputDir.initFacetConfigs
where
insert {k} (group : DNameMap (KFacetConfig k)) map :=
group.foldl (init := map) fun m _ v => m.insert v.toFacetConfig
group.foldl (init := map) fun m k v => m.insert k v.toFacetConfig


@@ -27,20 +27,6 @@ public structure FacetConfig (name : Name) : Type where
memoize : Bool := true
deriving Inhabited
/-- A mapping of facet names to the configuration for that name. -/
public abbrev FacetConfigMap := DNameMap FacetConfig
/--
Tries to retrieve the facet configuration for the given {lean}`name`,
returning {lean}`none` if no such mapping is present.
-/
public nonrec def FacetConfigMap.get? (name : Name) (self : FacetConfigMap) : Option (FacetConfig name) :=
self.get? name -- specializes `get?`
/-- Inserts the facet configuration {lean}`cfg` into the map (overwriting any existing configuration). -/
public nonrec def FacetConfigMap.insert {name} (cfg : FacetConfig name) (self : FacetConfigMap) : FacetConfigMap :=
self.insert name cfg -- specializes `insert`
public protected abbrev FacetConfig.name (_ : FacetConfig name) := name
public structure KFacetConfig (k : Name) (name : Name) extends FacetConfig name where
@@ -84,9 +70,6 @@ public structure NamedConfigDecl (β : Name → Type u) where
name : Name
config : β name
/-- A facet declaration from a configuration file. -/
public abbrev FacetDecl := NamedConfigDecl FacetConfig
/-- A module facet's declarative configuration. -/
public abbrev ModuleFacetConfig := KFacetConfig Module.facetKind


@@ -1,53 +0,0 @@
/-
Copyright (c) 2025 Mac Malone. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mac Malone
-/
module
prelude
public import Lake.Config.Script
public import Lake.Config.Dependency
public import Lake.Config.PackageConfig
public import Lake.Config.FacetConfig
public import Lake.Config.TargetConfig
set_option doc.verso true
open Lean
namespace Lake
/-- The configuration defined by a Lake configuration file (e.g., {lit}`lakefile.(lean|toml)`). -/
public structure LakefileConfig where
/-- The package declaration of the configuration file. -/
pkgDecl : PackageDecl
/-- Dependency configurations in the order declared by the configuration file. -/
depConfigs : Array Dependency := #[]
/-- Facet configurations in the order declared by the configuration file. -/
facetDecls : Array FacetDecl := {}
/-- Target configurations in the order declared by the configuration file. -/
targetDecls : Array (PConfigDecl pkgDecl.keyName) := #[]
/-- Name-declaration map of target configurations declared in the configuration file. -/
targetDeclMap : DNameMap (NConfigDecl pkgDecl.keyName) :=
targetDecls.foldl (fun m d => m.insert d.name (.mk d rfl)) {}
/-- The names of targets declared via {lit}`@[default_target]`. -/
defaultTargets : Array Name := #[]
/-- Scripts declared in the configuration file. -/
scripts : NameMap Script := {}
/-- The names of the scripts declared via {lit}`@[default_script]`. -/
defaultScripts : Array Script := #[]
/-- {lit}`post_update` hooks in the order declared by the configuration file. -/
postUpdateHooks : Array (OpaquePostUpdateHook pkgDecl.keyName) := #[]
/--
The name of the configuration file's test driver.
It is either declared via {lit}`@[test_driver]` or derived from {lean}`PackageConfig.testDriver`.
-/
testDriver : String := pkgDecl.config.testDriver
/--
The name of the configuration file's lint driver.
It is either declared via {lit}`@[lint_driver]` or derived from {lean}`PackageConfig.lintDriver`.
-/
lintDriver : String := pkgDecl.config.lintDriver


@@ -45,7 +45,7 @@ public structure Package where
/-- The path to the package's configuration file (relative to `dir`). -/
relConfigFile : FilePath
/-- The path to the package's JSON manifest of remote dependencies (relative to `dir`). -/
relManifestFile : FilePath
relManifestFile : FilePath := defaultManifestFile
/-- The package's scope (e.g., in Reservoir). -/
scope : String
/-- The URL to this package's Git remote. -/


@@ -336,16 +336,11 @@ public configuration PackageConfig (p : Name) (n : Name) extends WorkspaceConfig
deriving Inhabited
/-- The package's name as specified by the author. -/
@[deprecated "Deprecated without replacement" (since := "2025-12-10")]
public abbrev PackageConfig.origName (_ : PackageConfig p n) := n
/-- A package declaration from a configuration written in Lean. -/
public structure PackageDecl where
baseName : Name
keyName : Name
name : Name
origName : Name
config : PackageConfig keyName origName
config : PackageConfig name origName
deriving TypeName
@[deprecated PackageDecl.keyName (since := "2025-12-10")]
public abbrev PackageDecl.name := @keyName


@@ -22,17 +22,15 @@ open Lean (Name LeanOptions)
namespace Lake
/--
**For internal use only.**
Computes the cache to use for the package based on the environment.
-/
/-- **For internal use only.** Computes the cache to use for the package based on the environment. -/
public def computeLakeCache (pkg : Package) (lakeEnv : Lake.Env) : Cache :=
if pkg.bootstrap then
lakeEnv.lakeSystemCache?.getD pkg.lakeDir / "cache"
else
lakeEnv.lakeCache?.getD pkg.lakeDir / "cache"
public structure Workspace.Raw : Type where
/-- A Lake workspace -- the top-level package directory. -/
public structure Workspace : Type where
/-- The root package of the workspace. -/
root : Package
/-- The detected {lean}`Lake.Env` of the workspace. -/
@@ -51,25 +49,14 @@ public structure Workspace.Raw : Type where
The packages within the workspace
(in {lit}`require` declaration order with the root coming first).
-/
packages : Array Package := #[]
packages : Array Package := {}
/-- Name-package map of packages within the workspace. -/
packageMap : DNameMap NPackage := {}
/-- Configuration map of facets defined in the workspace. -/
facetConfigs : FacetConfigMap := {}
deriving Nonempty
facetConfigs : DNameMap FacetConfig := {}
public structure Workspace.Raw.WF (ws : Workspace.Raw) : Prop where
packages_wsIdx : ∀ {i} (h : i < ws.packages.size), (ws.packages[i]'h).wsIdx = i
/-- A Lake workspace -- the top-level package directory. -/
public structure Workspace extends raw : Workspace.Raw, wf : raw.WF
public instance : Nonempty Workspace := .intro {
lakeEnv := default
lakeConfig := Classical.ofNonempty
root := Classical.ofNonempty
packages_wsIdx h := by simp at h
}
public instance : Nonempty Workspace :=
by constructor <;> exact Classical.ofNonempty
public hydrate_opaque_type OpaqueWorkspace Workspace
@@ -188,20 +175,9 @@ This is configured through {lit}`cache.service` entries in the system Lake confi
@[inline] public def packageOverridesFile (self : Workspace) : FilePath :=
self.lakeDir / "package-overrides.json"
/-- **For internal use only.** Add a well-formed package to the workspace. -/
@[inline] public def addPackage' (pkg : Package) (self : Workspace) (h : pkg.wsIdx = self.packages.size) : Workspace :=
{self with
packages := self.packages.push pkg
packageMap := self.packageMap.insert pkg.keyName pkg
packages_wsIdx {i} i_lt := by
cases Nat.lt_add_one_iff_lt_or_eq.mp <| Array.size_push .. ▸ i_lt with
| inl i_lt => simpa [Array.getElem_push_lt i_lt] using self.packages_wsIdx i_lt
| inr i_eq => simpa [i_eq] using h
}
/-- Add a package to the workspace. -/
@[inline] public def addPackage (pkg : Package) (self : Workspace) : Workspace :=
self.addPackage' {pkg with wsIdx := self.packages.size} rfl
public def addPackage (pkg : Package) (self : Workspace) : Workspace :=
{self with packages := self.packages.push pkg, packageMap := self.packageMap.insert pkg.keyName pkg}
/-- Returns the unique package in the workspace (if any) that is identified by {lean}`keyName`. -/
@[inline] public protected def findPackageByKey? (keyName : Name) (self : Workspace) : Option (NPackage keyName) :=
@@ -275,15 +251,15 @@ public def findTargetDecl? (name : Name) (self : Workspace) : Option ((pkg : Pac
self.packages.findSome? fun pkg => pkg.findTargetDecl? name <&> (pkg, ·)
/-- Add a facet to the workspace. -/
@[inline] public def addFacetConfig {name} (cfg : FacetConfig name) (self : Workspace) : Workspace :=
{self with facetConfigs := self.facetConfigs.insert cfg}
public def addFacetConfig {name} (cfg : FacetConfig name) (self : Workspace) : Workspace :=
{self with facetConfigs := self.facetConfigs.insert name cfg}
/-- Try to find a facet configuration in the workspace with the given name. -/
@[inline] public def findFacetConfig? (name : Name) (self : Workspace) : Option (FacetConfig name) :=
public def findFacetConfig? (name : Name) (self : Workspace) : Option (FacetConfig name) :=
self.facetConfigs.get? name
/-- Add a module facet to the workspace. -/
@[inline] public def addModuleFacetConfig (cfg : ModuleFacetConfig name) (self : Workspace) : Workspace :=
public def addModuleFacetConfig (cfg : ModuleFacetConfig name) (self : Workspace) : Workspace :=
self.addFacetConfig cfg.toFacetConfig
/-- Try to find a module facet configuration in the workspace with the given name. -/
@@ -291,7 +267,7 @@ public def findModuleFacetConfig? (name : Name) (self : Workspace) : Option (Mod
self.findFacetConfig? name |>.bind (·.toKind? Module.facetKind)
/-- Add a package facet to the workspace. -/
@[inline] public def addPackageFacetConfig (cfg : PackageFacetConfig name) (self : Workspace) : Workspace :=
public def addPackageFacetConfig (cfg : PackageFacetConfig name) (self : Workspace) : Workspace :=
self.addFacetConfig cfg.toFacetConfig
/-- Try to find a package facet configuration in the workspace with the given name. -/
@@ -299,7 +275,7 @@ public def findPackageFacetConfig? (name : Name) (self : Workspace) : Option (Pa
self.findFacetConfig? name |>.bind (·.toKind? Package.facetKind)
/-- Add a library facet to the workspace. -/
@[inline] public def addLibraryFacetConfig (cfg : LibraryFacetConfig name) (self : Workspace) : Workspace :=
public def addLibraryFacetConfig (cfg : LibraryFacetConfig name) (self : Workspace) : Workspace :=
self.addFacetConfig cfg.toFacetConfig
/-- Try to find a library facet configuration in the workspace with the given name. -/


@@ -26,18 +26,17 @@ def elabPackageCommand : CommandElab := fun stx => do
let configId : Ident ← `(pkgConfig)
let id ← mkConfigDeclIdent nameStx?
let origName := Name.quoteFrom id id.getId
let ⟨wsIdx, name⟩ := nameExt.getState (← getEnv)
let baseName := match name with
let ⟨idx, name⟩ := nameExt.getState (← getEnv)
let name := match name with
| .anonymous => origName
| name => Name.quoteFrom id name
let wsIdx := quote wsIdx
let keyName := Syntax.mkCApp ``Name.num #[baseName, wsIdx]
let ty := Syntax.mkCApp ``PackageConfig #[keyName, origName]
let name := Syntax.mkCApp ``Name.num #[name, quote idx]
let ty := Syntax.mkCApp ``PackageConfig #[name, origName]
elabConfig ``PackageConfig configId ty cfg
let attr ← `(Term.attrInstance| «package»)
let attrs := #[attr] ++ expandAttrs attrs?
let id := mkIdentFrom id packageDeclName
let decl ← `({baseName := $baseName, origName := $origName, keyName := $keyName, config := $configId})
let decl ← `({name := $name, origName := $origName, config := $configId})
let cmd ← `($[$doc?]? @[$attrs,*] abbrev $id : PackageDecl := $decl)
withMacroExpansion stx cmd <| elabCommand cmd
let nameId := mkIdentFrom id <| packageDeclName.str "name"

View File

@@ -7,7 +7,6 @@ module
prelude
public import Lake.Config.Env
public import Lake.Config.Lang
public import Lake.Config.LakeConfig
public import Lake.Load.Manifest
@@ -42,16 +41,8 @@ public structure LoadConfig where
pkgDir : FilePath := wsDir / relPkgDir
/-- The package's Lake configuration file (relative to its directory). -/
relConfigFile : FilePath := defaultConfigFile
/-- The full path to the loaded package's Lake configuration file. -/
/-- The full path to the loaded package's Lake configuration file. -/
configFile : FilePath := pkgDir / relConfigFile
/-
The format of the package's configuration file.
If {lean}`none`, the format will be determined by the file's extension.
-/
configLang? : Option ConfigLang := none
/-- The package's Lake manifest file (relative to its directory). -/
relManifestFile : FilePath := defaultManifestFile
/-- Additional package overrides for this workspace load. -/
packageOverrides : Array PackageEntry := #[]
/-- A set of key-value Lake configuration options (i.e., {lit}`-K` settings). -/
@@ -64,7 +55,6 @@ public structure LoadConfig where
updateDeps : Bool := false
/--
Whether to update the workspace's {lit}`lean-toolchain` when dependencies are updated.
If {lean}`true` and a toolchain update occurs, Lake will need to be restarted.
-/
updateToolchain : Bool := true


@@ -7,7 +7,6 @@ module
prelude
public import Lake.Config.Package
public import Lake.Config.LakefileConfig
public import Lake.Load.Config
import Lake.Load.Lean.Elab
import Lake.Load.Lean.Eval
@@ -22,7 +21,22 @@ open Lean
namespace Lake
/-- Elaborate a Lake configuration file written in Lean and extract the Lake configuration. -/
public def loadLeanConfig (cfg : LoadConfig) : LogIO LakefileConfig := do
/--
Elaborate a Lean configuration file into a `Package`.
The resulting package does not yet include any dependencies.
-/
public def loadLeanConfig (cfg : LoadConfig) : LogIO (Package × Environment) := do
let configEnv ← importConfigFile cfg
LakefileConfig.loadFromEnv configEnv cfg.leanOpts
let ⟨keyName, origName, config⟩ ← IO.ofExcept <| PackageDecl.loadFromEnv configEnv cfg.leanOpts
let baseName := if cfg.pkgName.isAnonymous then origName else cfg.pkgName
let pkg : Package := {
wsIdx := cfg.pkgIdx
baseName, keyName, origName, config
dir := cfg.pkgDir
relDir := cfg.relPkgDir
configFile := cfg.configFile
relConfigFile := cfg.relConfigFile
scope := cfg.scope
remoteUrl := cfg.remoteUrl
}
return (← pkg.loadFromEnv configEnv cfg.leanOpts, configEnv)


@@ -7,7 +7,6 @@ module
prelude
public import Lake.Config.Workspace
public import Lake.Config.LakefileConfig
import Lean.DocString
import Lake.DSL.AttributesCore
@@ -76,38 +75,39 @@ private def mkOrdTagMap
return map.insert declName <| f declName
/-- Load a `PackageDecl` from a configuration environment. -/
private def PackageDecl.loadFromEnv
public def PackageDecl.loadFromEnv
(env : Environment) (opts := Options.empty)
: Except String PackageDecl := do
let declName ←
match packageAttr.getAllEntries env |>.toList with
| [] => error s!"configuration file is missing a `package` declaration"
| [keyName] => pure keyName
| [name] => pure name
| _ => error s!"configuration file has multiple `package` declarations"
evalConstCheck env opts _ declName
/-- Load a Lake configuration from a configuration file's environment. -/
public def LakefileConfig.loadFromEnv
(env : Environment) (opts : Options)
: LogIO LakefileConfig := do
-- load package declaration
let pkgDecl ← IO.ofExcept <| PackageDecl.loadFromEnv env opts
let prettyName := pkgDecl.baseName.toString (escape := false)
let keyName := pkgDecl.keyName
/--
Load the optional elements of a `Package` from the Lean environment.
This is done after loading its core configuration but before resolving
its dependencies.
-/
public def Package.loadFromEnv
(self : Package) (env : Environment) (opts : Options)
: LogIO Package := do
-- load target, script, hook, and driver configurations
let constTargetMap ← IO.ofExcept <| mkOrdTagMap env targetAttr fun name => do
evalConstCheck env opts ConfigDecl name
let targetDecls ← constTargetMap.toArray.mapM fun decl => do
if h : decl.pkg = keyName then
if h : decl.pkg = self.keyName then
return .mk decl h
else
error s!"target '{decl.name}' was defined in package '{decl.pkg}', \
but registered under '{keyName}'"
error s!"\
target '{decl.name}' was defined in package '{decl.pkg}', \
but registered under '{self.keyName}'"
let targetDeclMap ← targetDecls.foldlM (init := {}) fun m decl => do
if let some orig := m.get? decl.name then
error s!"{prettyName}: target '{decl.name}' was already defined as a '{orig.kind}', \
error s!"\
{self.prettyName}: target '{decl.name}' was already defined as a '{orig.kind}', \
but then redefined as a '{decl.kind}'"
else
return m.insert decl.name (.mk decl rfl)
@@ -116,7 +116,8 @@ public def LakefileConfig.loadFromEnv
if let some exeConfig := decl.config? LeanExe.configKind then
let root := exeConfig.root
if let some origExe := exeRoots.get? root then
error s!"{prettyName}: executable '{decl.name}' has the same root module '{root}' \
error s!"\
{self.prettyName}: executable '{decl.name}' has the same root module '{root}' \
as executable '{origExe}'"
else
return exeRoots.insert root decl.name
@@ -124,78 +125,80 @@ public def LakefileConfig.loadFromEnv
return exeRoots
let defaultTargets ← defaultTargetAttr.getAllEntries env |>.mapM fun name =>
if let some decl := constTargetMap.find? name then pure decl.name else
error s!"{prettyName}: package is missing target '{name}' marked as a default"
error s!"{self.prettyName}: package is missing target '{name}' marked as a default"
let scripts ← mkTagMap env scriptAttr fun scriptName => do
let name := prettyName ++ "/" ++ scriptName.toString (escape := false)
let name := self.prettyName ++ "/" ++ scriptName.toString (escape := false)
let fn ← IO.ofExcept <| evalConstCheck env opts ScriptFn scriptName
return {name, fn, doc? := findDocString? env scriptName : Script}
let defaultScripts ← defaultScriptAttr.getAllEntries env |>.mapM fun name =>
if let some script := scripts.get? name then pure script else
error s!"{prettyName}: package is missing script '{name}' marked as a default"
error s!"{self.prettyName}: package is missing script '{name}' marked as a default"
let postUpdateHooks ← postUpdateAttr.getAllEntries env |>.mapM fun name =>
match evalConstCheck env opts PostUpdateHookDecl name with
| .ok decl =>
if h : decl.pkg = keyName then
return OpaquePostUpdateHook.mk cast (by rw [h]) decl.fn
if h : decl.pkg = self.keyName then
return OpaquePostUpdateHook.mk cast (by rw [h, keyName]) decl.fn
else
error s!"post-update hook was defined in '{decl.pkg}', but was registered in '{keyName}'"
error s!"post-update hook was defined in '{decl.pkg}', but was registered in '{self.keyName}'"
| .error e => error e
let depConfigs ← IO.ofExcept <| packageDepAttr.getAllEntries env |>.mapM fun name =>
evalConstCheck env opts Dependency name
let testDrivers ← testDriverAttr.getAllEntries env |>.mapM fun name =>
if let some decl := constTargetMap.find? name then
return decl.name
pure decl.name
else if scripts.contains name then
return name
pure name
else
error s!"{prettyName}: package is missing script or target '{name}' marked as a test driver"
let testDriver ← id do
error s!"{self.prettyName}: package is missing script or target '{name}' marked as a test driver"
let testDriver ←
if testDrivers.size > 1 then
error s!"{prettyName}: only one script, executable, or library can be tagged @[test_driver]"
error s!"{self.prettyName}: only one script, executable, or library can be tagged @[test_driver]"
else if h : testDrivers.size > 0 then
if pkgDecl.config.testDriver.isEmpty then
return (testDrivers[0]'h |>.toString)
if self.config.testDriver.isEmpty then
pure (testDrivers[0]'h |>.toString)
else
error s!"{prettyName}: cannot both set testDriver and use @[test_driver]"
error s!"{self.prettyName}: cannot both set testDriver and use @[test_driver]"
else
return pkgDecl.config.testDriver
pure self.config.testDriver
let lintDrivers ← lintDriverAttr.getAllEntries env |>.mapM fun name =>
if let some decl := constTargetMap.find? name then
return decl.name
pure decl.name
else if scripts.contains name then
return name
pure name
else
error s!"{prettyName}: package is missing script or target '{name}' marked as a lint driver"
let lintDriver ← id do
error s!"{self.prettyName}: package is missing script or target '{name}' marked as a lint driver"
let lintDriver ←
if lintDrivers.size > 1 then
error s!"{prettyName}: only one script or executable can be tagged @[lint_driver]"
error s!"{self.prettyName}: only one script or executable can be tagged @[lint_driver]"
else if h : lintDrivers.size > 0 then
if pkgDecl.config.lintDriver.isEmpty then
return (lintDrivers[0]'h |>.toString)
if self.config.lintDriver.isEmpty then
pure (lintDrivers[0]'h |>.toString)
else
error s!"{prettyName}: cannot both set lintDriver and use @[lint_driver]"
error s!"{self.prettyName}: cannot both set lintDriver and use @[lint_driver]"
else
return pkgDecl.config.lintDriver
-- load facets
let facetDecls ← IO.ofExcept do
let mut decls : Array FacetDecl := #[]
for name in moduleFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts ModuleFacetDecl name
decls := decls.push {decl with config := decl.config.toFacetConfig}
for name in packageFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts PackageFacetDecl name
decls := decls.push {decl with config := decl.config.toFacetConfig}
for name in libraryFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts LibraryFacetDecl name
decls := decls.push {decl with config := decl.config.toFacetConfig}
return decls
pure self.config.lintDriver
-- Fill in the Package
return {
pkgDecl, depConfigs, facetDecls,
targetDecls, targetDeclMap, defaultTargets,
scripts, defaultScripts,
testDriver, lintDriver,
postUpdateHooks,
return {self with
depConfigs, targetDecls, targetDeclMap
defaultTargets, scripts, defaultScripts
testDriver, lintDriver, postUpdateHooks
}
/--
Load module/package facets into a `Workspace` from a configuration environment.
-/
public def Workspace.addFacetsFromEnv
(env : Environment) (opts : Options) (self : Workspace)
: Except String Workspace := do
let mut ws := self
for name in moduleFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts ModuleFacetDecl name
ws := ws.addModuleFacetConfig decl.config
for name in packageFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts PackageFacetDecl name
ws := ws.addPackageFacetConfig decl.config
for name in libraryFacetAttr.getAllEntries env do
let decl ← evalConstCheck env opts LibraryFacetDecl name
ws := ws.addLibraryFacetConfig decl.config
return ws

View File

@@ -94,9 +94,6 @@ public structure PackageEntry where
namespace PackageEntry
@[inline] public def prettyName (entry : PackageEntry) : String :=
entry.name.toString (escape := false)
public protected def toJson (entry : PackageEntry) : Json :=
let fields := [
("name", toJson entry.name),

View File

@@ -10,7 +10,6 @@ public import Lake.Config.Env
public import Lake.Load.Manifest
public import Lake.Config.Package
import Lake.Util.Git
import Lake.Util.IO
import Lake.Reservoir
open System Lean
@@ -88,8 +87,6 @@ def materializeGitRepo
cloneGitPkg name repo url rev?
public structure MaterializedDep where
/-- Absolute path to the materialized package. -/
pkgDir : FilePath
/-- Path to the materialized package relative to the workspace's root directory. -/
relPkgDir : FilePath
/--
@@ -108,31 +105,16 @@ namespace MaterializedDep
@[inline] public def name (self : MaterializedDep) : Name :=
self.manifestEntry.name
@[inline] public def prettyName (self : MaterializedDep) : String :=
self.manifestEntry.name.toString (escape := false)
@[inline] public def scope (self : MaterializedDep) : String :=
self.manifestEntry.scope
/-- Path to the dependency's manifest file (relative to `relPkgDir`). -/
@[inline] public def relManifestFile? (self : MaterializedDep) : Option FilePath :=
/-- Path to the dependency's manifest file (relative to `relPkgDir`). -/
@[inline] public def manifestFile? (self : MaterializedDep) : Option FilePath :=
self.manifestEntry.manifestFile?
/-- Path to the dependency's manifest file (relative to `relPkgDir`). -/
@[inline] public def relManifestFile (self : MaterializedDep) : FilePath :=
self.relManifestFile?.getD defaultManifestFile
/-- Absolute path to the dependency's manifest file. -/
@[inline] public def manifestFile (self : MaterializedDep) : FilePath :=
self.pkgDir / self.relManifestFile
/-- Path to the dependency's configuration file (relative to `relPkgDir`). -/
@[inline] public def relConfigFile (self : MaterializedDep) : FilePath :=
self.manifestEntry.configFile
/-- Absolute path to the dependency's configuration file. -/
@[inline] public def configFile (self : MaterializedDep) : FilePath :=
self.pkgDir / self.relConfigFile
self.manifestEntry.configFile
public def fixedToolchain (self : MaterializedDep) : Bool :=
match self.manifest? with
@@ -175,11 +157,10 @@ public def Dependency.materialize
(lakeEnv : Env) (wsDir relPkgsDir relParentDir : FilePath)
: LoggerIO MaterializedDep := do
if let some src := dep.src? then
let sname := dep.name.toString (escape := false)
match src with
| .path dir =>
let relPkgDir := relParentDir / dir
mkDep sname relPkgDir "" (.path relPkgDir)
mkDep (wsDir / relPkgDir) relPkgDir "" (.path relPkgDir)
| .git url inputRev? subDir? => do
let sname := dep.name.toString (escape := false)
let repoUrl := Git.filterUrl? url |>.getD ""
@@ -227,19 +208,16 @@ public def Dependency.materialize
| _ => error s!"{pkg.fullName}: Git source not found on Reservoir"
where
materializeGit name relPkgDir gitUrl remoteUrl inputRev? subDir? : LoggerIO MaterializedDep := do
let gitDir := wsDir / relPkgDir
let repo := GitRepo.mk gitDir
let pkgDir := wsDir / relPkgDir
let repo := GitRepo.mk pkgDir
let gitUrl := lakeEnv.pkgUrlMap.find? dep.name |>.getD gitUrl
materializeGitRepo name repo gitUrl inputRev?
let rev ← repo.getHeadRevision
let relPkgDir := if let some subDir := subDir? then relPkgDir / subDir else relPkgDir
mkDep name relPkgDir remoteUrl <| .git gitUrl rev inputRev? subDir?
@[inline] mkDep name relPkgDir remoteUrl src : LoggerIO MaterializedDep := do
let pkgDir := wsDir / relPkgDir
let some pkgDir ← resolvePath? pkgDir
| error s!"{name}: package directory not found: {pkgDir}"
mkDep pkgDir relPkgDir remoteUrl <| .git gitUrl rev inputRev? subDir?
@[inline] mkDep pkgDir relPkgDir remoteUrl src : LoggerIO MaterializedDep := do
return {
pkgDir, relPkgDir, remoteUrl,
relPkgDir, remoteUrl
manifest? := Manifest.load (pkgDir / defaultManifestFile) |>.toBaseIO
manifestEntry := {name := dep.name, scope := dep.scope, inherited, src}
}
@@ -253,9 +231,9 @@ public def PackageEntry.materialize
: LoggerIO MaterializedDep :=
match manifestEntry.src with
| .path (dir := relPkgDir) .. =>
mkDep relPkgDir ""
mkDep (wsDir / relPkgDir) relPkgDir ""
| .git (url := url) (rev := rev) (subDir? := subDir?) .. => do
let sname := manifestEntry.prettyName
let sname := manifestEntry.name.toString (escape := false)
let relGitDir := relPkgsDir / sname
let gitDir := wsDir / relGitDir
let repo := GitRepo.mk gitDir
@@ -276,15 +254,12 @@ public def PackageEntry.materialize
let url := lakeEnv.pkgUrlMap.find? manifestEntry.name |>.getD url
cloneGitPkg sname repo url rev
let relPkgDir := match subDir? with | .some subDir => relGitDir / subDir | .none => relGitDir
mkDep relPkgDir (Git.filterUrl? url |>.getD "")
mkDep gitDir relPkgDir (Git.filterUrl? url |>.getD "")
where
@[inline] mkDep relPkgDir remoteUrl : LoggerIO MaterializedDep := do
let pkgDir := wsDir / relPkgDir
let some pkgDir ← resolvePath? pkgDir
| error s!"{manifestEntry.prettyName}: package directory not found: {pkgDir}"
@[inline] mkDep pkgDir relPkgDir remoteUrl : LoggerIO MaterializedDep := do
let manifest? ← id do
if let some manifestFile := manifestEntry.manifestFile? then
Manifest.load (pkgDir / manifestFile) |>.toBaseIO
else
return .error (.noFileOrDirectory "" 0 "")
return {pkgDir, relPkgDir, remoteUrl, manifest?, manifestEntry}
return {relPkgDir, remoteUrl, manifest?, manifestEntry}

View File

@@ -8,13 +8,10 @@ module
prelude
public import Lake.Load.Config
public import Lake.Config.Package
public import Lake.Config.LakefileConfig
import Lake.Util.IO
import Lake.Load.Lean
import Lake.Load.Toml
set_option doc.verso true
/-! # Package Loader
This module contains the main definitions to load a package
@@ -26,38 +23,6 @@ open System (FilePath)
namespace Lake
/--
**For internal use only.**
Constructs a package from the configuration defined in its Lake configuration file
and the load configuration.
-/
public def mkPackage
(loadCfg : LoadConfig) (fileCfg : LakefileConfig) (wsIdx := loadCfg.pkgIdx)
: Package :=
let {
pkgDir := dir, relPkgDir := relDir,
configFile, relConfigFile, relManifestFile,
scope, remoteUrl, ..} := loadCfg
let {
depConfigs, defaultTargets, scripts, defaultScripts, testDriver, lintDriver
-- destructuring needed for type-correctness
pkgDecl, targetDecls, targetDeclMap, postUpdateHooks,
..} := fileCfg
let {baseName, keyName, origName, config} := pkgDecl
{
wsIdx, baseName, keyName, origName, config
dir, relDir, configFile, relConfigFile, relManifestFile
scope, remoteUrl
depConfigs
targetDecls, targetDeclMap, defaultTargets
scripts, defaultScripts
testDriver, lintDriver
postUpdateHooks
}
public theorem wsIdx_mkPackage : (mkPackage l f i).wsIdx = i := by rfl
/--
Return whether a configuration file with the given name
and/or a supported extension exists.
@@ -84,25 +49,21 @@ public def realConfigFile (cfgFile : FilePath) : BaseIO FilePath := do
resolvePath (cfgFile.addExtension "toml")
/--
**For internal use only.**
Resolves a relative configuration file path in {lean}`cfg` and
detects its configuration language (if necessary).
Loads a Lake package configuration (either Lean or TOML).
The resulting package does not yet include any dependencies.
-/
public def resolveConfigFile
public def loadPackageCore
(name : String) (cfg : LoadConfig)
: LogIO {cfg : LoadConfig // cfg.configLang?.isSome} := do
if h : cfg.configLang?.isSome then
let some configFile ← resolvePath? cfg.configFile
| error s!"{name}: configuration file not found: {cfg.configFile}"
return ⟨{cfg with configFile}, h⟩
else if let some ext := cfg.relConfigFile.extension then
let some configFile ← resolvePath? cfg.configFile
| error s!"{name}: configuration file not found: {cfg.configFile}"
: LogIO (Package × Option Environment) := do
if let some ext := cfg.relConfigFile.extension then
let cfg ←
if let some configFile ← resolvePath? cfg.configFile then
pure {cfg with configFile}
else error s!"{name}: configuration file not found: {cfg.configFile}"
match ext with
| "lean" => return ⟨{cfg with configFile, configLang? := some .lean}, rfl⟩
| "toml" => return ⟨{cfg with configFile, configLang? := some .toml}, rfl⟩
| _ => error s!"{name}: configuration has unsupported file extension: {configFile}"
| "lean" => (·.map id some) <$> loadLeanConfig cfg
| "toml" => ((·,none)) <$> loadTomlConfig cfg
| _ => error s!"{name}: configuration has unsupported file extension: {cfg.configFile}"
else
let relLeanFile := cfg.relConfigFile.addExtension "lean"
let relTomlFile := cfg.relConfigFile.addExtension "toml"
@@ -111,28 +72,18 @@ public def resolveConfigFile
if let some configFile ← resolvePath? leanFile then
if (← tomlFile.pathExists) then
logInfo s!"{name}: {relLeanFile} and {relTomlFile} are both present; using {relLeanFile}"
return ⟨{cfg with configFile, relConfigFile := relLeanFile, configLang? := some .lean}, rfl⟩
else if let some configFile ← resolvePath? tomlFile then
return ⟨{cfg with configFile, relConfigFile := relTomlFile, configLang? := some .toml}, rfl⟩
let cfg := {cfg with configFile, relConfigFile := relLeanFile}
let (pkg, env) ← loadLeanConfig cfg
return (pkg, some env)
else
error s!"{name}: no configuration file with a supported extension:\n{leanFile}\n{tomlFile}"
/--
**For internal use only.**
Reads the configuration of a Lake configuration file.
The load configuration {lean}`cfg` is assumed to already have an absolute path in
{lean}`cfg.configFile` and that the proper configuration language for it is in
{lean}`cfg.configLang?`. This can be ensured through {lean}`resolveConfigFile`.
-/
public def loadConfigFile (cfg : LoadConfig) (h : cfg.configLang?.isSome) : LogIO LakefileConfig := do
match cfg.configLang?.get h with
| .lean => loadLeanConfig cfg
| .toml => loadTomlConfig cfg
if let some configFile ← resolvePath? tomlFile then
let cfg := {cfg with configFile, relConfigFile := relTomlFile}
let pkg ← loadTomlConfig cfg
return (pkg, none)
else
error s!"{name}: no configuration file with a supported extension:\n{leanFile}\n{tomlFile}"
/-- Loads a Lake package as a single independent object (without dependencies). -/
public def loadPackage (cfg : LoadConfig) : LogIO Package := do
Lean.searchPathRef.set cfg.lakeEnv.leanSearchPath
let ⟨cfg, h⟩ ← resolveConfigFile "[root]" cfg
let fileCfg ← loadConfigFile cfg h
return mkPackage cfg fileCfg
public def loadPackage (config : LoadConfig) : LogIO Package := do
Lean.searchPathRef.set config.lakeEnv.leanSearchPath
(·.1) <$> loadPackageCore "[root]" config

View File

@@ -25,43 +25,37 @@ This module contains definitions for resolving the dependencies of a package.
namespace Lake
/-- Returns the load configuration of a materialized dependency. -/
@[inline] def mkDepLoadConfig
(ws : Workspace) (dep : MaterializedDep)
(lakeOpts : NameMap String) (leanOpts : Options) (reconfigure : Bool)
: LoadConfig where
lakeEnv := ws.lakeEnv
wsDir := ws.dir
pkgIdx := ws.packages.size
pkgName := dep.name
pkgDir := dep.pkgDir
relPkgDir := dep.relPkgDir
relConfigFile := dep.relConfigFile
relManifestFile := dep.relManifestFile
lakeOpts; leanOpts; reconfigure
scope := dep.scope
remoteUrl := dep.remoteUrl
def Workspace.addFacetDecls (decls : Array FacetDecl) (self : Workspace) : Workspace :=
decls.foldl (·.addFacetConfig ·.config) self
/--
Loads the package configuration of a materialized dependency.
Adds the package and the facets defined within it to the `Workspace`.
Adds the facets defined in the package to the `Workspace`.
-/
def addDepPackage
def loadDepPackage
(wsIdx : Nat)
(dep : MaterializedDep)
(lakeOpts : NameMap String)
(leanOpts : Options) (reconfigure : Bool)
: StateT Workspace LogIO Package := fun ws => do
let wsIdx := ws.packages.size
let loadCfg := mkDepLoadConfig ws dep lakeOpts leanOpts reconfigure
let ⟨loadCfg, h⟩ ← resolveConfigFile dep.prettyName loadCfg
let fileCfg ← loadConfigFile loadCfg h
let pkg := mkPackage loadCfg fileCfg wsIdx
let ws := ws.addPackage' pkg wsIdx_mkPackage
let ws := ws.addFacetDecls fileCfg.facetDecls
return (pkg, ws)
let name := dep.name.toString (escape := false)
let pkgDir := ws.dir / dep.relPkgDir
let some pkgDir ← resolvePath? pkgDir
| error s!"{name}: package directory not found: {pkgDir}"
let (pkg, env?) ← loadPackageCore name {
lakeEnv := ws.lakeEnv
wsDir := ws.dir
pkgIdx := wsIdx
pkgName := dep.name
pkgDir
relPkgDir := dep.relPkgDir
relConfigFile := dep.configFile
lakeOpts, leanOpts, reconfigure
scope := dep.scope
remoteUrl := dep.remoteUrl
}
if let some env := env? then
let ws ← IO.ofExcept <| ws.addFacetsFromEnv env leanOpts
return (pkg, ws)
else
return (pkg, ws)
/--
The resolver's call stack of dependencies.
@@ -106,7 +100,7 @@ abbrev ResolveT m := DepStackT <| StateT Workspace m
/-
Recursively visits each node in a package's dependency graph, starting from
the workspace package `root`. Each dependency missing from the workspace is
added to the workspace using the `resolve` function.
resolved using the `load` function and added into the workspace.
Recursion occurs breadth-first. Each direct dependency of a package is
resolved in reverse order before recursing to the dependencies' dependencies.
@@ -114,10 +108,9 @@ resolved in reverse order before recursing to the dependencies' dependencies.
See `Workspace.updateAndMaterializeCore` for more details.
-/
@[inline] private def Workspace.resolveDepsCore
[Monad m] [MonadError m] [MonadLiftT LogIO m] (ws : Workspace)
(resolve : Package → Dependency → Workspace → m MaterializedDep)
[Monad m] [MonadError m] (ws : Workspace)
(load : Package → Dependency → Nat → StateT Workspace m Package)
(root : Package := ws.root) (stack : DepStack := {})
(leanOpts : Options := {}) (reconfigure := true)
: m Workspace := do
ws.runResolveT go root stack
where
@@ -130,8 +123,8 @@ where
return -- already handled in another branch
if pkg.baseName = dep.name then
error s!"{pkg.prettyName}: package requires itself (or a package with the same name)"
let matDep ← resolve pkg dep (← getWorkspace)
discard <| addDepPackage matDep dep.opts leanOpts reconfigure
let depPkg ← load pkg dep ws.packages.size
modifyThe Workspace (·.addPackage depPkg)
-- Recursively load the dependencies' dependencies
(← getWorkspace).packages.forM recurse start
@@ -178,17 +171,17 @@ private def reuseManifest
logWarning s!"{rootName}: ignoring previous manifest because it failed to load: {e}"
/-- Add a package dependency's manifest entries to the update state. -/
private def addDependencyEntries (dep : MaterializedDep) : UpdateT LoggerIO PUnit := do
match (← Manifest.load dep.manifestFile |>.toBaseIO) with
private def addDependencyEntries (pkg : Package) (matDep : MaterializedDep) : UpdateT LoggerIO PUnit := do
match matDep.manifest? with
| .ok manifest =>
manifest.packages.forM fun entry => do
unless (← getThe (NameMap PackageEntry)).contains entry.name do
let entry := entry.setInherited.inDirectory dep.relPkgDir
let entry := entry.setInherited.inDirectory pkg.relDir
store entry.name entry
| .error (.noFileOrDirectory ..) =>
logWarning s!"{dep.prettyName}: ignoring missing manifest:\n {dep.manifestFile}"
logWarning s!"{pkg.prettyName}: ignoring missing manifest '{pkg.manifestFile}'"
| .error e =>
logWarning s!"{dep.prettyName}: ignoring manifest because it failed to load: {e}"
logWarning s!"{pkg.prettyName}: ignoring manifest because it failed to load: {e}"
/-- Materialize a single dependency, updating it if desired. -/
private def updateAndMaterializeDep
@@ -363,6 +356,7 @@ def Workspace.updateAndMaterializeCore
(updateToolchain := true)
: LoggerIO (Workspace × NameMap PackageEntry) := UpdateT.run do
reuseManifest ws toUpdate
let ws := ws.addPackage ws.root
if updateToolchain then
let deps := ws.root.depConfigs.reverse
let matDeps ← deps.mapM fun dep => do
@@ -371,18 +365,21 @@ def Workspace.updateAndMaterializeCore
ws.updateToolchain matDeps
let start := ws.packages.size
let ws ← (deps.zip matDeps).foldlM (init := ws) fun ws (dep, matDep) => do
addDependencyEntries matDep
let (_, ws) ← addDepPackage matDep dep.opts leanOpts true ws
let (depPkg, ws) ← loadUpdatedDep ws.packages.size dep matDep ws
let ws := ws.addPackage depPkg
return ws
ws.packages.foldlM (init := ws) (start := start) fun ws pkg =>
ws.resolveDepsCore updateAndAddDep pkg [ws.root.baseName] leanOpts true
ws.resolveDepsCore (stack := [ws.root.baseName]) updateAndLoadDep pkg
else
ws.resolveDepsCore updateAndAddDep (leanOpts := leanOpts) (reconfigure := true)
ws.resolveDepsCore updateAndLoadDep
where
@[inline] updateAndAddDep pkg dep ws := do
let matDep ← updateAndMaterializeDep ws pkg dep
addDependencyEntries matDep
return matDep
@[inline] updateAndLoadDep pkg dep wsIdx := do
let matDep ← updateAndMaterializeDep (← getWorkspace) pkg dep
loadUpdatedDep wsIdx dep matDep
@[inline] loadUpdatedDep wsIdx dep matDep : StateT Workspace (UpdateT LoggerIO) Package := do
let depPkg ← loadDepPackage wsIdx matDep dep.opts leanOpts true
addDependencyEntries depPkg matDep
return depPkg
/-- Write package entries to the workspace manifest. -/
def Workspace.writeManifest
@@ -474,9 +471,12 @@ public def Workspace.materializeDeps
if pkgEntries.isEmpty && !ws.root.depConfigs.isEmpty then
error "missing manifest; use `lake update` to generate one"
-- Materialize all dependencies
ws.resolveDepsCore (leanOpts := leanOpts) (reconfigure := reconfigure) fun pkg dep ws => do
let ws := ws.addPackage ws.root
ws.resolveDepsCore fun pkg dep wsIdx => do
let ws ← getWorkspace
if let some entry := pkgEntries.find? dep.name then
entry.materialize ws.lakeEnv ws.dir relPkgsDir
let result ← entry.materialize ws.lakeEnv ws.dir relPkgsDir
loadDepPackage wsIdx result dep.opts leanOpts reconfigure
else
if pkg.isRoot then
error <|

View File

@@ -7,12 +7,10 @@ module
prelude
public import Lake.Config.Package
public import Lake.Config.LakefileConfig
public import Lake.Load.Config
public import Lake.Toml.Decode
import Lake.Toml.Load
import Lean.Parser.Extension
import Lake.Build.Infos
import Init.Omega
meta import Lake.Config.LakeConfig
meta import Lake.Config.InputFileConfig
@@ -426,15 +424,14 @@ private def decodeTargetDecls
(pkg : Name) (prettyName : String) (t : Table)
: DecodeM (Array (PConfigDecl pkg) × DNameMap (NConfigDecl pkg)) := do
let r : DecodeTargetState pkg := {}
let r ← go r LeanLib.keyword LeanLib.configKind LeanLibConfig.decodeToml (by simp)
let r ← go r LeanExe.keyword LeanExe.configKind LeanExeConfig.decodeToml (by simp)
let r ← go r InputFile.keyword InputFile.configKind InputFileConfig.decodeToml (by simp)
let r ← go r InputDir.keyword InputDir.configKind InputDirConfig.decodeToml (by simp)
let r ← go r LeanLib.keyword LeanLib.configKind LeanLibConfig.decodeToml
let r ← go r LeanExe.keyword LeanExe.configKind LeanExeConfig.decodeToml
let r ← go r InputFile.keyword InputFile.configKind InputFileConfig.decodeToml
let r ← go r InputDir.keyword InputDir.configKind InputDirConfig.decodeToml
return (r.decls, r.map)
where
go (r : DecodeTargetState pkg) kw kind
(decode : {n : Name} → Table → DecodeM (ConfigType kind pkg n))
(h : DataType kind = OpaqueConfigTarget kind) := do
(decode : {n : Name} → Table → DecodeM (ConfigType kind pkg n)) := do
let some tableArrayVal := t.find? kw | return r
let some vals ← tryDecode? tableArrayVal.decodeValueArray | return r
vals.foldlM (init := r) fun r val => do
@@ -449,10 +446,8 @@ where
else
let config ← @decode name t
let decl : NConfigDecl pkg name :=
-- Safety: By definition, for declarative configurations, the type of a package target
-- is its configuration's data kind (i.e., `CustomData pkg name = DataType kind`).
-- In the equivalent Lean configuration, this would hold by type family axiom.
unsafe {pkg, name, kind, config, wf_data := fun _ => lcProof, h}
-- Safety: By definition, config kind = facet kind for declarative configurations.
unsafe {pkg, name, kind, config, wf_data := lcProof}
-- Check that executables have distinct root module names
let exeRoots ← id do
if h : kind = LeanExe.configKind then
@@ -474,8 +469,8 @@ where
/-! ## Root Loader -/
/-- Load a Lake configuration from a file written in TOML. -/
public def loadTomlConfig (cfg : LoadConfig) : LogIO LakefileConfig := do
/-- Load a `Package` from a Lake configuration file written in TOML. -/
public def loadTomlConfig (cfg : LoadConfig) : LogIO Package := do
let input ← IO.FS.readFile cfg.configFile
let ictx := mkInputContext input cfg.relConfigFile.toString
match (← loadToml ictx |>.toBaseIO) with
@@ -487,12 +482,21 @@ public def loadTomlConfig (cfg : LoadConfig) : LogIO LakefileConfig := do
let keyName := baseName.num wsIdx
let prettyName := baseName.toString (escape := false)
let config ← @PackageConfig.decodeToml keyName origName table
let pkgDecl := {baseName, keyName, origName, config : PackageDecl}
let (targetDecls, targetDeclMap) ← decodeTargetDecls keyName prettyName table
let defaultTargets ← table.tryDecodeD `defaultTargets #[]
let defaultTargets := defaultTargets.map stringToLegalOrSimpleName
let depConfigs ← table.tryDecodeD `require #[]
return {pkgDecl, depConfigs, targetDecls, targetDeclMap, defaultTargets}
return {
wsIdx, baseName, keyName, origName
dir := cfg.pkgDir
relDir := cfg.relPkgDir
configFile := cfg.configFile
relConfigFile := cfg.relConfigFile
scope := cfg.scope
remoteUrl := cfg.remoteUrl
config, depConfigs, targetDecls, targetDeclMap
defaultTargets
}
if errs.isEmpty then
return pkg
else

View File

@@ -31,21 +31,18 @@ Does not resolve dependencies.
public def loadWorkspaceRoot (config : LoadConfig) : LogIO Workspace := do
Lean.searchPathRef.set config.lakeEnv.leanSearchPath
let lakeConfig ← loadLakeConfig config.lakeEnv
let config := {config with pkgIdx := 0}
let ⟨config, h⟩ ← resolveConfigFile "[root]" config
let fileCfg ← loadConfigFile config h
let root := mkPackage config fileCfg 0
let facetConfigs := fileCfg.facetDecls.foldl (·.insert ·.config) initFacetConfigs
let (root, env?) ← loadPackageCore "[root]" {config with pkgIdx := 0}
let ws : Workspace := {
root
lakeEnv := config.lakeEnv
lakeConfig
lakeArgs? := config.lakeArgs?
facetConfigs
packages_wsIdx h := by simp at h
facetConfigs := initFacetConfigs
}
let ws := ws.addPackage' root wsIdx_mkPackage
return ws
if let some env := env? then
IO.ofExcept <| ws.addFacetsFromEnv env config.leanOpts
else
return ws
/--
Load a `Workspace` for a Lake package by

View File

@@ -77,7 +77,7 @@ globs = ["Lake.*"]
defaultFacets = ["static", "static.export"]
# Load the previous stage's lake native code into lake's build process in order to prevent ABI
# breakages from affecting bootstrapping.
weakLeanArgs = ["--plugin", "${PREV_STAGE}/${CMAKE_RELATIVE_LIBRARY_OUTPUT_DIRECTORY}/libLake_shared${CMAKE_SHARED_LIBRARY_SUFFIX}"]
moreLeanArgs = ["--plugin", "${PREV_STAGE}/${CMAKE_RELATIVE_LIBRARY_OUTPUT_DIRECTORY}/libLake_shared${CMAKE_SHARED_LIBRARY_SUFFIX}"]
[[lean_lib]]
name = "LakeMain"

View File

@@ -5,6 +5,9 @@ Released under Apache 2.0 license as described in the file LICENSE.
Author: Leonardo de Moura
*/
#include <limits>
#include <cstdlib>
#include <ctime>
#include <execinfo.h>
#include "runtime/thread.h"
#include "runtime/interrupt.h"
#include "runtime/exception.h"
@@ -16,6 +19,68 @@ namespace lean {
LEAN_THREAD_VALUE(size_t, g_max_heartbeat, 0);
LEAN_THREAD_VALUE(size_t, g_heartbeat, 0);
// --- check_system interval monitoring ---
//
// When LEAN_CHECK_SYSTEM_INTERVAL_MS=N is set, warn on stderr if check_system
// is not called within N milliseconds of CPU time on the current thread.
// Uses CLOCK_THREAD_CPUTIME_ID so that IO waits do not count towards the interval.
// Zero overhead when env var is unset.
// 0 = disabled
static unsigned g_check_system_interval_ms = 0;
static bool g_check_system_interval_initialized = false;
static unsigned get_check_system_interval_ms() {
if (!g_check_system_interval_initialized) {
g_check_system_interval_initialized = true;
if (const char * env = std::getenv("LEAN_CHECK_SYSTEM_INTERVAL_MS")) {
g_check_system_interval_ms = std::atoi(env);
}
}
return g_check_system_interval_ms;
}
static uint64_t thread_cpu_time_ns() {
struct timespec ts;
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
return static_cast<uint64_t>(ts.tv_sec) * 1000000000ULL + static_cast<uint64_t>(ts.tv_nsec);
}
// Thread-local: last CPU time when check_system was called on this thread.
LEAN_THREAD_VALUE(uint64_t, g_last_check_system_ns, 0);
static void check_system_interval(char const * component_name) {
unsigned interval_ms = get_check_system_interval_ms();
if (interval_ms > 0) {
uint64_t now_ns = thread_cpu_time_ns();
uint64_t last_ns = g_last_check_system_ns;
g_last_check_system_ns = now_ns;
if (last_ns != 0) {
uint64_t elapsed_ms = (now_ns - last_ns) / 1000000;
if (elapsed_ms > interval_ms) {
fprintf(stderr,
"[check_system] WARNING: %llu ms CPU time since last check_system call "
"(component: %s)\n",
(unsigned long long)elapsed_ms, component_name);
void * bt_buf[64];
int nptrs = backtrace(bt_buf, 64);
backtrace_symbols_fd(bt_buf, nptrs, 2); // fd 2 = stderr
// Reset timer after printing to avoid backtrace overhead cascading
g_last_check_system_ns = thread_cpu_time_ns();
}
}
}
}
extern "C" LEAN_EXPORT obj_res lean_check_system_interval(b_lean_obj_arg component_name) {
check_system_interval(lean_string_cstr(component_name));
return lean_io_result_mk_ok(lean_box(0));
}
extern "C" LEAN_EXPORT uint8_t lean_check_system_interval_is_enabled(lean_obj_arg /* unit */) {
return get_check_system_interval_ms() > 0;
}
extern "C" LEAN_EXPORT obj_res lean_internal_get_default_max_heartbeat() {
#ifdef LEAN_DEFAULT_MAX_HEARTBEAT
return lean_box(LEAN_DEFAULT_MAX_HEARTBEAT);
@@ -71,6 +136,7 @@ void check_interrupted() {
}
void check_system(char const * component_name, bool do_check_interrupted) {
check_system_interval(component_name);
check_stack(component_name);
check_memory(component_name);
if (do_check_interrupted) {